Using goals to design and verify rule bases


Decision Support Systems 21 (1997) 281–305

Using goals to design and verify rule bases

P.G. Chander, R. Shinghal, T. Radhakrishnan

Department of Computer Science, Concordia University, Montreal, Canada H3G 1M8

Abstract

The design of rule-based systems is often plagued by errors and anomalies. The verification and validation (V and V) processes to detect errors and anomalies in a rule base are complex. Methods that are general enough for comprehensive anomaly detection suffer from heavy computation, while special-purpose V and V methods with reduced computational needs lack scope and applicability. Most existing verification tools perform their checking based on the syntax of the rule base encoding, often ignoring useful meta knowledge of the domain. In this paper, we propose a way to abstract domain knowledge using goals. At the design level, goals are realized in a rule base using one of several design schemes, where a design scheme is a goal-to-hypothesis mapping satisfying certain constraints. At the implementation level, goals are inferred using partially ordered rule sequences called paths. Verification of a rule base can be performed by identifying certain rule aberrations that can be indicative of the rule base anomalies: circularity, ambivalence, redundancy, and deficiency. A case study is presented to highlight that the goal-based approach is useful for preventing rule subsumption (a form of redundancy) and for enhancing the (run time) performance of a rule base. © 1997 Elsevier Science B.V.

Keywords: Rule-based system; Design schemes; CARD anomalies; Verification and validation

1. Introduction and motivation

Conceptually, the design and development of a rule-based system is a combination of comprehension, mapping, and encoding of the knowledge from a domain expert into a set of rules [1]. The comprehension (also called knowledge acquisition) is required in order to acquire the problem solving knowledge from the domain expert. The mapping is a formalization of this knowledge into a task domain identifying the set of tasks associated with problem

Corresponding author. Bell Laboratories, Business Communications Systems, 11900 N. Pecos Street, 24-44, Westminster, CO 80234. Tel.: 303-538-0587; fax: 303-538-5041; e-mail: [email protected]. Tel.: +1-514-848-3004; fax: +1-514-848-2830; e-mail: [email protected]. Tel.: +1-514-848-3019; fax: +1-514-848-2830; e-mail: [email protected].

solving. The rule-base development encodes the task domain representation of the acquired knowledge into a set of rules. Comprehension, mapping, and encoding are done by a knowledge engineer. The design of rule bases, however, is often plagued by anomalies abbreviated in the literature as CARD [2,3]: Circularity, Ambivalence, Redundancy, and Deficiency. Verification of rule bases entails detecting the CARD anomalies.

Definition 1 (CARD Anomalies). A rule base that contains a rule and/or an atom such that removing the rule or removing a part of the rule does not affect the functioning of the system is said to exhibit redundancy. A rule base is ambivalent if it violates domain constraints. Deficiency is the inability of a system to provide an adequate response for a permissible combination of initial evidence. A rule base is circular whenever a set of rules can repeatedly cause one another to fire in an interminable loop.

0167-9236/97/$17.00 © 1997 Elsevier Science B.V. All rights reserved. PII S0167-9236(97)00046-8

It is not practical to detect these anomalies manually in large rule bases; procedures to automate their detection are required. Such detection procedures are collectively referred to as rule base verification procedures [2,3].

1.1. Related research

Early works on verification used simple pairwise rule comparisons; see, for instance, Refs. [4,5]. However, procedures that do not take into account the inference chains in a rule base can miss detecting some anomalies, as pointed out by Ginsberg [6]. It has also been observed that additional evaluation perspectives can be obtained if the V and V processes take into account meta knowledge of the domain [7,8]: for example, the extended structure checker in EVA uses the notion of atom synonyms to detect redundant rules, but does not take into account the inference chains in the rule base. For comprehensive anomaly detection, inference chains need to be considered [9]. Verification that takes into account the inference chains in a rule base, however, has an exponential complexity in the worst case [6,10]. Limiting himself to propositional, not predicate, logic, Ginsberg [6] computed labels (the initial evidence required to infer each final hypothesis) to check for redundancy and inconsistency. The COVER verification tool takes into account the linear inference chains in a rule base for anomaly detection, but does not make use of meta knowledge of the domain [2].

In summary, an effective verification (or, more generally, evaluation) procedure for rule-based systems can be judged using the following three criteria:
1. the extent to which meta knowledge of a domain is utilized for additional perspectives on anomaly detection;
2. accounting for rule interactions (hence, inference chains) to improve the scope of anomaly detection rather than limiting its scope to individual (or pairwise) rules; and
3. provisions to control the computation required for computing the transitive closure of rule inferences.

In this paper, we are motivated towards developing a design framework integrating evaluation for rule-based systems. More specifically, we are interested in the following issues in the development and analysis of rule-based systems: (a) how do we map a design choice to a given implementation? (b) how do we ensure that the implementation, and its associated design restriction (if any), represents part of the acquired knowledge? (c) how does the methodology support restructuring of existing systems (say, to improve performance)? (d) how do we effectively evaluate the developed rule base conforming to the above criteria? Though we have developed several evaluation procedures for rule-based systems, in order to conserve the size of this paper, we restrict ourselves to verification. For additional details, we refer the reader to Refs. [11–16].

The paper is organized as follows. In Section 2, we describe how goal specification can be used to abstract the knowledge of a domain. Goals not only serve to abstract a large body of knowledge, but also set a reference for later verification and validation. Once goals are conceived, the task of realizing them in a rule base entails some restrictions: this results in several design schemes for rule-based systems. Section 2 also explicates how goals are inferred in a rule base using the notion of a rule base path (or, simply, path). Section 3 describes a case study of knowledge restructuring using goal specification [15]. Thus, goal specification is useful not only for development, but for reverse engineering as well. In Section 4, we describe how rule bases developed adhering to our model can be verified for the CARD anomalies by providing our (verification) perspectives using paths and goals. Goal specification can be used to control the computation required for path extraction from a rule base, but those details are described elsewhere to conserve the size of this paper [11].

2. Abstracting domain knowledge by goals

The knowledge acquisition process advocated by us is based on capturing the problem solving in a domain in the form of 'goal to goal' progressions as viewed in traditional AI research [17]. In order to solve a problem, a system will transit through a set of states. It is unrealistic to enumerate every possible state that is traversed by the system without some sort of abstraction over the state space. A domain expert solving problems in the domain knows the typical mileposts that are accomplished as part of solving problems in the domain; thus, they can be specified. Such states are called goals. In addition, the domain expert also specifies constraints associated with the domain called inviolables; an inviolable is a conjunction of hypotheses such that all of them should not be true at the same time. An example of an inviolable is MALE(x) ∧ PREGNANT(x); it is obvious that no goal or part of a goal should contain an inviolable. Goals also serve as meta knowledge of the domain, and are useful for later verification and validation [18,14–16].

Fig. 1. At the functional requirements stage of the system, the acquired knowledge is mapped to identify goals and inviolables. Note, the goals at this stage only abstract the acquired knowledge. The decision of how to represent and realize each goal in the system would be a design decision.
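To make the constraint concrete, the check that no goal contains an inviolable can be sketched in a few lines. This is an illustrative model only, assuming goals and inviolables are represented as sets of ground atoms; the atom names beyond the paper's MALE/PREGNANT example are invented:

```python
# Sketch (not from the paper): goals and inviolables as frozen sets of
# ground atoms; a goal is ill-formed if it contains every atom of some
# inviolable, since all those hypotheses would then hold simultaneously.

def violates(goal, inviolables):
    """Return True if the goal contains all atoms of any inviolable."""
    return any(inv <= goal for inv in inviolables)

inviolables = [frozenset({"MALE(x)", "PREGNANT(x)"})]  # the paper's example

good_goal = frozenset({"FEMALE(x)", "PREGNANT(x)"})            # hypothetical
bad_goal = frozenset({"MALE(x)", "PREGNANT(x)", "ANEMIC(x)"})  # hypothetical

assert not violates(good_goal, inviolables)
assert violates(bad_goal, inviolables)
```

A knowledge engineer could run such a check over every candidate goal as the goal specification is drafted, before any rules are written.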

The process of identifying and mapping the acquired knowledge into goals is not a mechanical process [19]. Often, the extent to which a domain expert communicates the knowledge clearly and unambiguously, the skill of the knowledge engineer, and the rigor of the knowledge acquisition process play a major role. This process is similar to the way problem concepts are identified and mapped to operators and states in SOAR [17]. To illustrate, we show an example from Ref. [20], identifying goals and inviolables (see Fig. 1). Every goal, when translated into a first order logic formula, consists of a conjunction of hypotheses, where each hypothesis is represented as a logic atom (a predicate with its arguments). To illustrate, Fig. 2 shows the translation of some of the goals identified in Fig. 1 into a conjunction of atoms denoting hypotheses.

Fig. 2. Translating some of the goals identified in Fig. 1 into a conjunction of first order logic atoms.

The atoms present in goals are called goal


atoms; the other atoms are non-goal atoms, needed only for rule encoding. While solving problems, two kinds of goals are inferred: goals that facilitate inferring a solution, and goals that represent the solutions to problems in the domain. The former are called intermediate goals and the latter final goals. Typically, the intermediate goals are those that are achieved in order to infer a final goal, and a final goal is part of some solution.

Definition 2 (Intermediate and Final Goals). Goals that are inferred in order to facilitate reaching a solution are called intermediate goals. The goals that are used for indicating domain solutions are called final goals.

In our model, problem solving in a domain is viewed as a succession of goal inferences until a set of final goals is inferred. This progression can be conceptually portrayed by means of an AND/OR graph called the goal graph of the domain, or, simply, the goal graph. Fig. 3 shows a goal graph for the example medical domain used in Fig. 1 [20]. Each node in the goal graph corresponds to a goal, where the unshaded nodes denote goals that are not solutions, and the shaded nodes denote solutions. A connector in this graph is from a set of goals G = {g_i1, g_i2, . . . , g_in} to a goal g, where g ∉ G. A circular mark in the connector near g indicates that goal g is inferable from goal g_i1 and goal g_i2 and . . . and goal g_in. A connector thus indicates the conceptual encoding required to infer a given goal g from a set G of goals. The different connectors to a goal depict

the different alternatives for inferring that goal. Permissible initial evidence are said to be level-0 goals; other inferred goals are at higher levels, the first digit of the subscript indicating the level of a goal.

In the implementation of a rule base, domain knowledge is encoded using a set of if . . . then . . . rules [17]. Problem solving at this level takes place by rule firings to infer hypotheses representing the various domain knowledge concepts [17,21]. The syntax of the rules to encode the acquired knowledge partitions the hypotheses inferred in the rule base into two types: intermediate and final. An intermediate hypothesis is one that occurs in the consequent of at least one rule and in the antecedent of at least one rule. A final hypothesis is one that occurs in the consequent of at least one rule, but never in the antecedent of a rule. A design issue arises in mapping the intermediate and final goals in a goal specification to the intermediate and final hypotheses in a rule base. This mapping is necessary because it explicates the link between the semantic representation of the acquired knowledge (using goals) and its actual syntactic realization (using hypotheses). A design scheme for a rule base is a realizable restriction imposed in mapping goals to hypotheses for satisfaction of a set of domain dependent and independent criteria.

Definition 3 (Design Scheme). A design scheme D is an ordered pair of partial mappings ⟨m1, m2⟩ such that:

m1 : F → Hi ∪ Hf
m2 : I → Hi ∪ Hf

where F is the set of final goals, I is the set of intermediate goals, Hi is the set of intermediate hypotheses, and Hf is the set of final hypotheses. The mapping is partial because not all hypotheses need be goal constituents. An analysis of the mapping restrictions provides, however, different design schemes for structuring rule bases. This is described next.

2.1. Designing rule base structures to realize goals

Fig. 3. A sample goal graph from a medical domain (to diagnose liver and biliary disorders) to abstract important states associated with problem solving.
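The connector semantics of a goal graph can be sketched directly: a connector fires once all of its source goals are known, and problem solving is the closure of this process from the initial evidence. The goal names and connectors below are invented for illustration, not taken from Fig. 3:

```python
# Sketch: an AND/OR goal graph as a list of connectors (G, g), meaning goal g
# is inferable once every goal in the set G has been inferred. Multiple
# connectors to the same goal are the OR alternatives; the set G is the AND.
# Goal names follow the level-subscript convention (g0x = level-0) but are
# illustrative only.

connectors = [
    (frozenset({"g01", "g02"}), "g11"),  # g11 <- g01 AND g02
    (frozenset({"g03"}), "g11"),         # alternative connector for g11
    (frozenset({"g11", "g04"}), "g21"),  # final goal g21 <- g11 AND g04
]

def inferable(level0, connectors):
    """All goals reachable from the level-0 goals (the initial evidence)."""
    known = set(level0)
    changed = True
    while changed:
        changed = False
        for sources, target in connectors:
            if sources <= known and target not in known:
                known.add(target)
                changed = True
    return known

assert "g21" in inferable({"g01", "g02", "g04"}, connectors)
assert "g21" not in inferable({"g01"}, connectors)  # g02 (or g03) missing
```

The second assertion illustrates deficiency avant la lettre: an initial-evidence combination from which no final goal is reachable.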

There are theoretical and pragmatic aspects that need to be considered in designing rule-based systems. A single design scheme may not be suitable for all domains; thus, we need a design schemata (a collection of design schemes). A design schemata requires the following: (1) a pragmatic component describing the qualitative aspects of the design schemes contained, and (2) a theoretical component that outlines the relationship between the design schemes to facilitate choosing a design scheme. More specifically, the pragmatic component dictates the choice between design schemes depending upon the relative importance given to development, evaluation, and maintenance. The theoretical component allows for further compromises between development and maintenance by formalizing the properties of design schemes and their use. However, the mapping restriction that is imposed at the design phase between goals and hypotheses is not unique. This can be stated as follows:

Issue: Let f = A1 ∧ A2 ∧ . . . be a final goal and i = A′1 ∧ A′2 ∧ . . . be an intermediate goal specified, where the A's are hypotheses. The question arises: What properties should A1, A2, . . . and A′1, A′2, . . . satisfy in the rule base?

The possible choices for the constituent hypotheses of final and intermediate goals are summarized in Fig. 4. We will use the notation ⟨Fn, Im⟩ to denote a design scheme in analyzing the relationships between the 25 possible design schemes from Fig. 4. While all the choices in Fig. 4 are realizable restrictions resulting in various design schemes, some of them can be counterintuitive. Consider for example restriction I1 for intermediate goals, which requires them to be composed of only final hypotheses. In this case, the intermediate goals cannot help infer any final goals because final hypotheses cannot be causal to other final hypotheses. A similar observation is applicable to restriction F4 for final goals. Thus, owing to counterintuitive semantics of the


specification, we recommend avoiding all design schemes with either of these mapping restrictions. Further, restriction I5, where intermediate goal composition is unconstrained, is discouraged because specification of intermediate goals can be uncontrolled, and such ad hoc specification may not reflect the intent of the domain expert. We, however, allow restriction F5 for ease in solution specification to facilitate functional, structural and/or empirical validation [22,18,23,11], but care should be exercised in specifying solutions when using this restriction.

In general, the choice of which mapping to choose for a given goal realization entails several factors. We give below two important criteria that play a major role in making this choice. For additional details, refer to Chander et al. [15].
- The extent of analysis and synthesis components in a domain influences the type of the hypothesis constituents in goals, and hence is a factor in determining a goal-to-hypothesis mapping.
- For large rule bases, the mapping should also facilitate ease in refining a given goal specification, because such refinements are quite often needed to control the computation required for path extraction and for subsequent evaluation [11,12].

Unfortunately, there is no procedure to choose a design scheme from the set of schemes for a domain. In other words, for an arbitrary domain and development criteria, there is no known step-by-step method by which we can choose a design scheme from a set of design schemes. Often, the skill and experience of the knowledge engineer, the size of the proposed system, the extent of analysis and synthesis components in problem solving used in the domain, domain constraints expressed through domain dependent criteria, and long term objectives (such as the extent of maintenance expected) expressed through domain independent criteria dictate whether to reject or select a design scheme from its characteristics [24–26].

Fig. 4. The choices for constituent hypotheses in a final goal f and an intermediate goal i, provided neither f nor i contains an inviolable.

As an example, consider a domain dependent criterion where states accepted as solutions can also lead to (perhaps more refined) solutions. For this domain, design schemes with choice F1 for final goals are not the right choice because solution causality cannot be captured by this choice for final goals. On the other hand, if one cannot infer additionally from domain solutions, or if the solutions are mutually exclusive, then design schemes with choice F1 are perhaps more appropriate. Appendix A gives an example of choosing a design scheme for a particular domain.

An examination of the relationship between the various design schemes shows that some scheme mappings are more general than others. This allows us to identify a relationship called inheritance between the design schemes. Inheritance between the design schemes is important because it allows for compromises in development vs. maintenance and also influences certain system qualities (understandability, maintainability, etc.), and thus plays a role in design scheme selection. The details, however, are beyond the scope of this paper, and the reader is referred to Refs. [27,28].

Goal specification and the various design schemes are not constrained to development only; goal specification can be usefully utilized for reverse engineering as well. We describe in Section 3 a case study for optimization of rule bases.

2.2. Modeling goal inference in problem solving

In our model, problem solving is viewed as the inference of a succession of goals until a set of final goals is inferred. In general, a set of rule firings is required before a goal can be inferred, because goals are conjunctions of hypotheses. Thus, to abstract

problem solving that takes place at the rule base level in terms of the goals specified for the domain, we must explicate the relationship between a goal and the set of rules in the rule base inferring it. A rule base constructed based on a given goal specification and a design scheme implements the problem solving by rule sequences that progress from goal(s) to goal from the given goal specification. Every such rule sequence Φ in the rule base is said to have realized a connector in the goal graph; thus, there may be more than one rule sequence to realize a given connector. These rule sequences are called rule base paths (or, simply, paths), the term used in order to be consistent with the (structural) validation terminology [29,8]. If a rule rj occurs immediately after rule ri in such a sequence Φ, then a non-goal atom in the consequent of ri unifies with an atom in the antecedent of rj. Rule rj in the rule sequence Φ is said to be accessible from rule ri; goal atoms inferred within the sequence Φ contribute only to the goal being inferred. A path thus localizes the rule interactions (hence, the inference chains) that occur in a goal-to-goal progression. In addition, the partial ordering captures inference chains that are not only linear, but those that are partially ordered as well.

Just as a goal graph can be used to pictorially depict the relationship between the goals, details of goal realization at the rule base structural level can also be portrayed by a graph. Such a graph depicting paths in a rule base is called the rule graph. A rule graph is a labeled directed graph. Each node in a rule graph corresponds to a rule. The atoms inferred by firing a rule ri are shown by labeling the directed arcs from ri as A_i^1, A_i^2, . . . An arc labeled A_i^n from ri to rj indicates that the atom A_i^n in the consequent

Fig. 5. A rule graph representation of a path. The rule sequence infers goal g once goal g′ is inferred.


Fig. 6. A comparison of the rule and goal graphs.

of ri unifies with an atom in the antecedent of rj. Goal atoms inferred by rules in a path are shown by directed arcs incident on the right vertical bar representing goal g. An example rule graph of a path is shown in Fig. 5. Goal graphs and rule graphs are compared in Fig. 6.

Formally, every path is a poset, denoted by ⟨σ, ≺⟩, where σ is the set of rules in the path, and ≺ is a partial ordering relation between the rules of the path defined as follows:

∀ri, rj ∈ σ . ri ≺ rj → rj is accessible from ri

A path can also be represented by specifying the partial ordering relation ≺ between the rules in the path. Thus, the path of Fig. 5 can be represented as ≺ = {⟨R1, R2⟩, ⟨R1, R3⟩, ⟨R2, R4⟩, ⟨R3, R4⟩}.

The extent to which a given rule base realizes the acquired knowledge of goal inference is reflected by the paths in the rule base; they are collectively said to portray the structure of the rule base.

Definition 4 (Rule Base Structure). The structure of a rule base (or, simply, structure) is defined as ⟨Γ, Π, Δ⟩, where Γ is the goal specification of the domain, Π is a set of rule base paths, and Δ = ⟨m1, m2⟩ is the adhered design scheme, such that:

(∀Φ ∈ Π)(∃G, g)(G ⊂ Γ)(g ∈ Γ): G ∧ Φ ⊢ g    (1)

and

(∀g ∈ Γ) H ⊇ { m1(g) if g is a final goal; m2(g) if g is an intermediate goal }    (2)

where H is the set of all hypotheses in the rule base.

The second condition in the above definition simply asserts that goals are inferred using rule base

hypotheses only. Thus, if any external actions are to be modeled as part of a goal inference, they should still be represented using a hypothesis in the rule base; the action can take place following the inference of this hypothesis. During system evaluation, this can ensure that a given rule fires correctly by examining the inferred goal. This is particularly useful when simulating external actions (that could take place during field operation) as part of a rule can be costly, or cannot be done during development. As an example, consider a life support system that monitors patient breathing, and turns on additional oxygen when oxygen intake falls below a threshold. In the system implementation, a rule should fire when the oxygen intake falls below a threshold, and turn on the appropriate oxygen equipment. In our model, this action should be represented using a hypothesis in the rule consequent in addition to executing the appropriate action. During system development, the rule firing can then be mapped to a path which can be inspected to check if the rule fires under the correct conditions. Note, the development site may or may not have the associated control equipment in this case owing to its cost.

The extraction of paths from a rule base using a given goal specification is a non-trivial problem because procedures that extract inference chains from a rule base have an exponential complexity in the worst case [6,29]. However, goal specification can be used to control the computation required, because it can be refined to cut down the number of rule dependencies to be enumerated during path extraction [11]. We have developed a tool called Path Hunter to extract the paths in a rule base from a given goal specification. Path Hunter has been used successfully to extract the paths from a large rule-based system containing 435 rules [11]. The extraction of paths (and the goal graph realized in the rule


base) is termed structure extraction, and it influences a variety of evaluation processes for rule-based systems [14,8,6,29,15,18]. To model the problem solving that occurs due to rule firings, one must extract the goal graph realized in a rule base. A goal graph extraction process from a rule base should enumerate all the connector chains from permissible initial evidence to a final goal. This progression can be easily captured because every path in the rule base represents a connector. Thus, we only need to enumerate the path sequences required from a given initial evidence until a final goal is inferred [14]. Such a sequence of paths is called a route.

Definition 5 (Relevant and Irrelevant Routes). A route is a sequence of one or more paths. A route from a level-0 goal to a solution is called a relevant route; any other route from a level-n goal to a goal g′, where n ≥ 1 or g′ is not a solution, is called an irrelevant route.

Irrelevant routes indicate that some intermediate goals are not useful for inferring final goals, and/or some final goals are not reachable [14]. Such routes and goals in the goal graph are thus not useful for problem solving and can indicate redundancy and/or deficiency in the rule base [15].

3. Goal supported restructuring: a case study

Goal specification can be used for rule base restructuring to improve rule base performance, and is

also useful for preventing some coding errors. A common problem when encoding rules (or rule groups) whose consequent (action) depends upon the specificity of the input conditions is rule subsumption (a form of rule redundancy): for example, rule R: A → C subsumes rule R′: A ∧ B → D [30]. More generally, rule ri is said to subsume rule rj whenever the antecedent of ri is more general than the antecedent of rj. Rule subsumption in a rule base is undesirable because it can produce unexpected results during problem solving [30]. It can also increase the amount of work an inference engine has to do when checking for rule activations, because it causes more rules to be activated for a given set of input conditions [16]. In addition, a direct encoding of the domain knowledge without structuring it can lead to more work on the part of the inference engine, because the number of combinations it should enumerate for determining which rules are ready to fire can be quite large [21,16]. A systematic identification of goals and their encoding as rule group discriminators during rule base development, however, can be used to prevent rule subsumption and performance degradation (by limiting the number of combinations to be searched by an inference engine for rule activations). The following case study, which restructures an existing rule base using goals, illustrates the above aspects.

Goal specification was applied successfully for optimizing the rule base of an expert system designed to perform library search [16]. The system functions as follows: it is given a set of input search fields associated with

Fig. 7. An example rule from the library reference expert system.


a document, such as title, author, subject, . . . , up to a total of 13 fields. Typically, a subset of these is entered by the user. The system then checks for the location of the document using a data base system, and having obtained the location eventually retrieves the document. The (first) prototype version uses a sample library data base at a central site. This prototype was developed under CLIPS [21] and consists of 205 rules. In this application, the knowledge obtained from the human reference librarians (the domain experts) was translated to handle the various input fields while searching for a relevant document. One such rule is shown in Fig. 7. This rule is


activated when the user enters five input search fields, but they do not match any of the existing document descriptions. When a direct translation is done, as shown in Fig. 7, to check whether each input field is different from the others using the '&' and '|' operators of CLIPS, it results in rule subsumption and an enormous computational overhead. The reason is as follows. In the system, rules that handle i input fields subsume the rules that handle (i + 1) input fields, whereas the handling of i input fields and i + 1 input fields is supposed to be exclusive. This subsumption arises because there are no discriminators coded into the rules to distinguish more focussed

Fig. 8. Run time analysis of a typical (simplified) rule in the rule base of the library reference expert system.


Fig. 9. Restructuring the sample rule shown in Fig. 8.

search specifications from general search specifications. As a result, a huge amount of pattern matching takes place, and a large number of rules are activated all the time (though they do not fire) when the number of input fields is close to 13. Note, the use of salience (a form of rule priority assignment in CLIPS) to force sequentiality, so that rules handling i + 1 input fields get priority over rules that handle i input fields, does not always work. When translating the rule in Fig. 7 and others internally, and during pattern matching, the inference engine has to do an enormous amount of error checking in trying to satisfy the antecedent constraints. Combined with the subsumption existing between the rules, the number of pattern combinations required to check which rules are enabled

would be of factorial complexity with respect to the input fields, because all possible permutations of the input fields have to be considered by the inference engine while instantiating the antecedent variables. This is illustrated in Fig. 8, which portrays the structure of a typical rule in the rule base and its run-time behavior. In fact, the system does not even run when the number of input fields is more than six. In Fig. 8, it is assumed that the initial facts in the working memory are f-1: (phase author), f-2: (phase title), and f-3: (phase subject). The specific values associated with the fields are not shown for simplicity. The abbreviation CE n–m refers to a combination of facts that (partially) satisfy the antecedent. The activations refer to the number of times the rule can fire. Note that the number of activations of the

Fig. 10. The restructured rule based on additional level-0 goals.


rule is 6 (= 3!). In addition, this rule subsumes every rule (group) that handles more than three input fields, thus increasing the pattern matching computation.

An analysis of the encoding of knowledge in rules such as the one in Fig. 7 indicates incorrect and inadequate design of atoms to reflect the captured knowledge. We need to identify predicates and goals (level-0 in this case) to reflect the extent of input handling. In addition, we also need to make use of the fact that rules that handle i fields should be treated exclusively from rules handling a different number of input fields. Thus, we need two level-0 goals: FIELD-INPUT(n), which says how many fields are input, and FIELDS(x, y, . . .), which records the fields that are actually input. For example, if the fields author, title and subject are entered, then these atoms


are, respectively, FIELD-INPUT(3) and FIELDS(author, title, subject), and must be asserted as initial facts before the system is run. The atom FIELD-INPUT(i) groups and discriminates rules handling i input fields from rules that handle j input fields, where i ≠ j. The arguments of the predicate FIELDS structure the antecedent of the rules so that the variables have only one unique instantiation for a given set of input fields (no permutation is necessary). The effect of such restructuring applied to the sample rule in Fig. 8 is shown in Fig. 9a, and its execution trace for the same input (augmented by the atom FIELDS(author, title, subject)) is shown in Fig. 9b. With the added level-0 goals, the rule shown in Fig. 7 is restructured as shown in Fig. 10.
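The difference between the two encodings can be illustrated with a small sketch. This is a hypothetical Python stand-in for the matcher's behavior, not the actual CLIPS system; the function `keyed_instantiations` is our own illustrative name:

```python
from itertools import permutations

fields = ("author", "title", "subject")

# Original encoding: each input field is a separate (phase <field>) fact, and
# each antecedent variable can bind to any of them, so the matcher must try
# every ordering of the fields: n! candidate instantiations for n fields.
naive_instantiations = list(permutations(fields))
print(len(naive_instantiations))  # 3! = 6, matching the 6 activations of the rule in Fig. 8

# Restructured encoding: the single fact FIELDS(author, title, subject) fixes
# the binding order, and FIELD-INPUT(3) routes the match to the rules that
# handle exactly three fields, so exactly one instantiation survives.
def keyed_instantiations(field_input_n, fields_fact):
    return [fields_fact] if len(fields_fact) == field_input_n else []

print(len(keyed_instantiations(3, fields)))  # 1
```

The factorial blow-up on the left is what prevented the original system from running with more than six fields; the keyed encoding on the right is linear in the number of rule groups.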

Fig. 11. The effect of goal-based restructuring in the library reference expert system. Individual, autonomous rule groups are formed, consistent with the way problems would be handled in the domain.



These modifications cut down the subsumption between rules and the inference engine computation required to check for rule activations, as there are no more elaborate error checks forced as part of pattern matching. This enabled the system to handle the full thirteen-field input. In addition, the goals also modularized the system by grouping rules so that rules handling i input fields are treated independently of rules handling a different number of fields, making the analysis of the rule base easier because each group typically contains no more than seven or eight rules. The effect of the goal-based restructuring is shown in Fig. 11.

4. Detecting CARD anomalies using goals and paths

Rule base verification entails the detection of CARD anomalies in a rule base. Redundancy is the result of unwanted or excess rules and atoms; deficiency refers to missing knowledge; ambivalence exists whenever an inviolable becomes true; and circularity exists due to circular rule enablements causing potentially infinite loops, but, more importantly, indicates the presence of circular (and hence, possibly inaccurate) reasoning in the system. In our approach, the detection of CARD anomalies requires the identification of certain rule situations that can be indicative of these anomalies. They are called rule aberrations because they indicate an abnormality in the system [15].

Definition 6 (Rule Aberration). The anomaly set A of a rule base is the set of anomalies that can be present in a rule base (typically the CARD anomalies). A rule aberration in a rule base consists of a set of paths that portray the manifestation of one or more elements from the anomaly set A. If π, a set of paths in a rule base, is an aberration, then we can state that:

π ⇒ a ⊆ A.

Goal specification, paths, and the extracted goal graph allow for a comprehensive detection of the CARD anomalies in a rule base at any stage during its construction: detect paths that adhere to one or more rule aberrations. We give below a list of ten aberrations, and provide comments about the possible anomalies these aberrations could indicate. We, however, do not claim that the list is exhaustive. Studying the various aberrations provides a different perspective on anomaly detection in rule bases. Moreover, by basing our study on rule sequences pertinent to problem solving rather than on individual rules, we capture the rule interactions as well. Prior to the application of the aberration specifications below, it is assumed that the relevant and irrelevant routes from the goal graph (see Definition 5) have been extracted from the rule base [14]. For convenience, an informal description of such a procedure appears in Fig. 12.
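The criterion behind the procedure in Fig. 12 can be paraphrased in a short sketch: a route is relevant only if it starts from a permissible combination of initial evidence and ends in a final goal. The data layout (routes as dicts with assumed `start` and `end` keys) is our own illustration, not the paper's implementation:

```python
def classify_routes(routes, permissible_evidence, final_goals):
    """Split routes into relevant and irrelevant.

    routes: list of dicts with 'start' (a frozenset of level-0 goals) and
            'end' (the goal the route ultimately infers) -- assumed layout.
    permissible_evidence: set of frozensets of level-0 goals.
    final_goals: set of final goal names.
    """
    relevant, irrelevant = [], []
    for route in routes:
        starts_ok = route["start"] in permissible_evidence
        ends_ok = route["end"] in final_goals
        (relevant if starts_ok and ends_ok else irrelevant).append(route)
    return relevant, irrelevant
```

A route failing either test is irrelevant: it either does not begin from initial evidence or does not end in a solution, which is exactly the property the redundancy aberrations below exploit.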

Fig. 12. Determination of relevant and irrelevant routes in a goal graph.


The identification of specific rules and atoms that may be causal to an anomaly in the rule base is called flagging. Our procedures flag rules to make the knowledge engineer aware of them; on further examination, the knowledge engineer may leave a flagged rule unchanged, edit the flagged rule, or add other rules to the rule base so that the flagged rule is no longer causal to the anomaly. Below, we describe a list of aberrations for redundancy [29]. Note, these aberrations can also be caused by deficiency in the system. An example system is presented in Appendix B to illustrate the manifestation of these aberrations.

Aberration 1 (Redundancy detection using route irrelevancy). Motivation: A rule that does not contribute to problem solving is redundant.

begin
For all rules r do
if all routes in which r appears are irrelevant or empty, then flag r as potentially redundant;
end

A rule that does not appear in any path is not useful for inferring any goal, and hence redundant. More generally, if a rule appears only in irrelevant routes, then it does not contribute to solving any problem in the domain, because an irrelevant route either does not begin from initial evidence or does not end in a solution. Hence, such a rule that appears only in irrelevant routes could be redundant. Similarly, a rule does not contribute to problem solving if it does not appear in any route.

Aberration 2 (Detection of redundant rule chains). Motivation: Rule chains that infer common goals/atoms are indicative of redundant work in the system.

A rule chain as used in the literature [31] is a linear sequence of rules. Aberration 2 is used to detect redundant rule chains.

begin /* For redundancy of rule chains; see Nguyen [31]; we also enable detection of redundant rules */
1. Flag paths appearing only in irrelevant routes as redundant.


2. For any two paths Φ1 and Φ2, (i) if the goals required for Φ1 subsume the goals required for Φ2, and the goal inferred by Φ1 subsumes the goal inferred by Φ2, then flag Φ1 and Φ2.
/* For efficiency, we may consider only relevant routes */
3. For any two paths Φ1 and Φ2 where each appears in at least one relevant route, (i) if the goals required for Φ1 are subsumed by the goal(s) required for Φ2, and (ii) if the goal inferred by Φ2 subsumes the goal inferred by Φ1, then flag Φ1 and Φ2;
/* For example, if the goal atoms inferred by Φ2 are a subset of the goal atoms inferred by Φ1, then flag rules in Φ2 not appearing in any other path. */
end.

The method in Nguyen [31] will flag the above rule chains, but their approach to the extraction of rule chains may not be practical for large rule bases. We, however, efficiently extract paths using our Path Hunter tool [11]. Note, because of the partially ordered nature of paths, we flag not only rule chains that are linear, but those that are partially ordered as well.

Aberration 3 (Detection of redundant atoms). Motivation: Multiply inferred atoms in a path may be redundant.

The intuition behind Aberration 3 for detecting redundant atoms is that whenever atoms are inferred by a rule r, if some other rule(s) always fire to infer these additionally, then these atoms in the consequent of r are possibly redundant.

begin /* Based on goal redundancy */
1. For all paths Φ, if more than one rule infers the set of goal atoms for goal g inferred by Φ do
(i) Let X := set of goal atoms that are multiply inferred.
(ii) If any subset Y of X is multiply inferred in all paths in which these goal atoms are inferred, flag this subset of goal atoms.
(iii) If a toe rule r in Φ has goal atoms only from the set Y in (ii) above, flag rule r.
/* a complementary step to step 1; applied to non-goal atoms */



2. For all paths Φ, let X be the set of dangling non-goal atoms.
(i) If non-goal atoms in (a subset of) X are dangling in every path in which they appear, flag this (sub)set of non-goal atoms.
(ii) For all rules r1 that infer some dangling non-goal atoms in (a subset of) X above, and some other consumed non-goal atoms: if some other rule(s) infer the consumed non-goal atoms in every path where r1 appears, flag r1.
end.

Two types of non-goal atoms can be identified in a path: 'dangling' and 'consumed'. A dangling non-goal atom in a path is an atom in the consequent of a rule that does not unify with an atom in the antecedent of any other rule in the path (see atom A13 in Fig. 5). A consumed non-goal atom is one that is in the consequent of a rule in a path and unifies with an atom in the antecedent of some rule in the path (see atom A11 in Fig. 5). In this aberration, if atom A13 is dangling in every path in which it appears, then it is flagged. In addition, step 1 (iii) of this aberration flags redundant toe rules (rules that infer only goal atoms in a path) whenever their consequent is inferred by other rules in every path in which they appear.
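Once routes are available, the first of these checks reduces to a simple scan. A minimal sketch of Aberration 1, in which the route representation (a set of rule names plus a relevance flag) is our assumption, not the paper's data structure:

```python
def aberration_1(rules, routes):
    """Aberration 1: flag rules whose every containing route is irrelevant,
    or that appear in no route at all."""
    flagged = set()
    for r in rules:
        containing = [rt for rt in routes if r in rt["rules"]]
        if not containing or all(not rt["relevant"] for rt in containing):
            flagged.add(r)
    return flagged

routes = [
    {"rules": {"R1", "R2"}, "relevant": True},   # a relevant route
    {"rules": {"R3"}, "relevant": False},        # an irrelevant route
]
print(aberration_1(["R1", "R2", "R3", "R4"], routes))  # flags R3 and R4
```

R3 is flagged because it appears only in an irrelevant route; R4 because it appears in no route at all, and so cannot contribute to inferring any goal.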

Aberration 4 (Subsumed rules). Motivation: Use the meta knowledge of goals to identify rules that are functionally equivalent to each other.

The traditional methods flag duplicate rules and rules of the form a → b and a ∧ c → b, where the latter is subsumed by the former. Using paths, however, a more general form of detection is possible. Let r1: A1 ∧ A3 → H and r2: A1 ∧ A2 → H′ be any two rules such that A3 is a goal atom, the remaining atoms are non-goal, and the H's represent hypotheses, which can be conjunctions of atoms. Further, let H subsume H′. Then, the rule pair ⟨r1, r2⟩ is flagged redundant if the following conditions hold: (i) if every path in which r1 appears has at least one rule that infers A2 (in other words, whenever r1 fires, r2 can also fire), then perhaps r1 or r2 is redundant; (ii) whenever r2 occurs in a path from G to g and A3 is contained in G (in other words, r1 can also be used to traverse between any G and g

whenever r2 can do so), then perhaps r1 or r2 is redundant. We flag both rules because the consequents of rules r1 and r2 should be examined before concluding redundancy. For example, (1) if H is BIRD(x) and H′ is BIRD(Tweety), and condition (i) above holds, then r2 is redundant; and (2) if H is BIRD(x) and H′ is BIRD(x) ∧ SINGS(x), and condition (ii) above holds, then r1 is redundant.

begin
Let r1 and r2 be any two rules in the rule base.
1. Flag ⟨r1, r2⟩, /* r1 can traverse between any goal–goal whenever r2 can */
(i) if the goal atoms in the antecedent of r1 are contained in the goals required for every path where r2 appears, and
(ii) every non-goal atom in the antecedent of r1 not present in the antecedent of r2 is supplied by some rule in every path in which r2 appears, and
(iii) one of the rule consequents subsumes the other.
2. Flag ⟨r1, r2⟩, /* vice versa */
(i) if the goal atoms in the antecedent of r2 are contained in the set of goals required for every path that contains r1, and
(ii) every non-goal atom in the antecedent of r2 not present in the antecedent of r1 is supplied by some rule in every path in which r1 appears, and
(iii) one of the rule consequents subsumes the other.
end

This aberration is a general form for detecting rule subsumption: whenever a rule can replace another rule in all goal-to-goal progressions in which the latter participates, the rule pair is flagged. This aberration can also detect syntactic subsumption between rules, because rule r: a → b can appear in every path where rule r′: a ∧ c → b can appear. The detection of subsumed rules is important because of performance implications (see our case study in Section 3), and from the observation that rule subsumption can interfere with certain 'greedy' inference strategies, producing unexpected results [30].

Aberration 5 (Detecting useless inferences). Motivation: Redundant atoms in a rule consequent serve no useful purpose.


begin
1. Let X := rules flagged redundant in any of the above Aberrations 1–4; consider the atoms in the antecedent of the rules in X.
2. If some of these atoms never appear in the antecedent of rules not in X, then flag these atoms.
3. If a rule r infers any of the atoms flagged in step 2, then flag r. /* useless inference */
end

Inferring an atom used only in the antecedent of flagged rules is possibly redundant. This aberration enables detection of redundant atoms by inspecting the antecedents of the rules already flagged (by any of the above aberrations). If an atom A in the antecedent of one of these rules never appears in the antecedent of rules that are not flagged, then this atom is redundant. Note, this type of flagging requires examination of some rules that have not been flagged, but use these atoms in their consequent.

Aberration 6 (Detecting redundant consumed atoms). Motivation: Redundant atoms in rule antecedents can cause rule unreachability.

Atoms in the antecedent of a rule r inferred only by flagged rules are redundant. The rule r can become unreachable if all the flagged rules are deleted from the rule base. This type of flagging requires examination of some rules that have not been flagged, but use these atoms in their antecedent. This is complementary to Aberration 5.

begin
1. Let X := rules flagged redundant in any of the above Aberrations 1–4; consider the atoms in the consequent of the rules in X.
2. If some of these atoms never appear in the consequent of rules not in X, then flag these atoms.
3. If a rule r uses any of the atoms flagged in step 2, then flag r. /* potential unreachability */
end

For example, consider a rule r: a → b flagged by one (or more) of Aberrations 1–4. Let r′: b → c be a rule that is not flagged by the aberrations. Then Aberration 6 flags this rule r′ because, if the knowledge engineer deletes rule r, then rule r′ becomes (potentially) unreachable: that is, it could never fire.
Thus, caution is advised in deleting flagged rules.
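For the purely syntactic special case of Aberration 4 (a rule a → b subsuming a ∧ c → b), the check is a subset test on antecedents together with consequent coverage. A minimal sketch with ground atoms; the variable unification that the full aberration requires is deliberately left out:

```python
def syntactically_subsumes(r1, r2):
    """True if rule r1 syntactically subsumes r2: r1's antecedent is a subset
    of r2's and r1's consequent covers r2's. A rule is represented here as a
    pair (antecedent_set, consequent_set) of ground atom strings."""
    ant1, con1 = r1
    ant2, con2 = r2
    return ant1 <= ant2 and con2 <= con1

r1 = ({"a"}, {"b"})        # r1: a -> b
r2 = ({"a", "c"}, {"b"})   # r2: a AND c -> b
print(syntactically_subsumes(r1, r2))  # True: flag the pair <r1, r2>
```

The path-based form of Aberration 4 generalizes this: instead of requiring the extra antecedent atoms to be absent, it only requires that they be supplied by some rule in every path in which the subsumed rule appears.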


The following aberrations characterize ambivalence in the system [16].

Aberration 7 (Ambivalence in a path). Motivation: Goal inference should not violate a domain constraint.

begin
1. (Intra-path) The conjunction of non-goal atoms in a path should not be subsumed by an inviolable. This also applies to the conjunction of all the atoms in a path.
2. (Goal ambivalence) No goal should be subsumed by an inviolable. This can indicate inaccuracies in knowledge acquisition.
3. The set of goals required by a path should not contain or be subsumed by an inviolable.
end.

Aberration 7 is based on a path and the goal it infers. Clearly, in trying to infer a goal, the non-goal atoms in a path should not violate a constraint. For example, MALE(x) ∧ FEMALE(x) should not be inferred in a path even though it may infer something useful. Condition 2 prohibits incorrect goal specification. Condition 3 ensures that no path starts at the cost of violating a constraint.

Aberration 8 (Ambivalence over inference chains). Motivation: Solutions should not be inferred from permissible initial evidence at the cost of violating a domain constraint.

begin
1. The conjunction of non-goal atoms in a relevant route should not be subsumed by an inviolable. Otherwise, at least one constraint is violated as part of problem solving.
2. The conjunction of goal atoms in a relevant route should not be subsumed by an inviolable. Otherwise, at least one constraint is violated as part of problem solving.
/* Also check for constraint violation over all atoms inferred in the route */
end.

Aberration 8 considers transitivity of inferences (an inference used for other inferences) to check for ambivalence. This can occur under two scenarios. In the first scenario, while goals are not inviolables



themselves, some of the goal atoms can be part of inviolables. In problem solving, therefore, the system can be ambivalent whenever this set of atoms is collectively inferred over a sequence of paths. In general, a system is potentially ambivalent whenever a set of goals subsumes an inviolable. The ambivalence is only potential because it is problematic iff we have path(s)/routes that involve this set of goals. In the second scenario, we ensure that no atoms involved in a relevant route violate a constraint: this ensures that the complete sequence of paths from initial evidence to final goals is free of ambivalence. More specifically, we may check whether a set of non-goal or goal atoms, or their combination, violates a constraint, in order to focus the fix on the rule base, the goal specification, or both. Note, checking that every path is free from ambivalence (cf. Aberration 7) does not ensure that a route is free from ambivalence.

Aberration 9 (Impact of impermissible initial evidence). Motivation: A rule base should not infer any meaningful result from an impermissible combination of initial evidence.

begin
1. As part of route determination, check if an impermissible set of initial evidence is obtained at level-0.

2. (Reverse of 1) For every set of impermissible initial evidence, check if paths and routes can be enumerated. Flag all these paths.
end.

Conditions 1 and 2 check for all routes that can be caused by an impermissible evidence combination: this is serious because, if at least one of these routes is relevant, then an inviolable is treated as a valid input by the system. This reflects on the inaccuracy and negative adequacy of the system (solving problems that are not intended to be solved). Note, condition 2 requires a modification to the algorithm for route enumeration described in Ref. [14].

There is only one aberration for circularity.

Aberration 10 (Path circularity). Motivation: Goal inference should not entail circular reasoning.

begin
1. Flag any sequence of paths (Φ1, ..., Φn) where the goal inferred by a path Φi−1, 1 < i ≤ n, is used as part of the start state of path Φi, and the goal inferred by Φn is contained in the start state of Φ1.
end.

The start state of a path is the set of goals required by a path before the rules in the path can fire. Since paths are partially ordered, the above detection of circularity can flag more than one circular dependency between the rules in the system. In addition, during path extraction, the tool Path Hunter [11] also flags rules whenever it detects a circular accessibility relationship between the rules in the rule base.

Fig. 13. An example description of the occupations of the various persons in a university domain.

Fig. 14. Typical knowledge elicited from a domain expert about a university environment. Interviews are the most common way of eliciting knowledge from the domain. This knowledge is later translated into a set of goals associated with the domain by the knowledge engineer, in collaboration with the domain expert, forming the goal specification for the domain.

Currently, we are in the process of writing a verification tool based on the algorithms developed for detecting rule aberrations. In general, these algorithms require that the paths be pre-processed into a set of indices before aberrations can be spotted. The typical indices are the rule index (the list of paths and routes in which a rule appears) and the fact index (the list

of paths and routes in which a fact appears). As an example, an algorithm to detect Aberration 6 appears below. It requires only the current set of flagged rules and atoms. We assume the existence of a function Antecedent(r) that returns the list of atoms in the antecedent of rule r. Most of the algorithms to detect aberrations are simple, except a few such as the one to detect Aberration 4.

Procedure aberration-6; /* detect Aberration 6 */
Input: Paths, FlaggedR, FlaggedA.
begin

Fig. 15. The goal specification of the university domain from its description in Fig. 14. An asterisk on a goal indicates that it is a final goal.



Let FlaggedR := current set of flagged rules;
Let FlaggedA := current set of flagged atoms;
For all x such that x ∈ FlaggedA do
  If ∃ r ∉ FlaggedR ∧ x ∈ Antecedent(r) then
    Flag rule r /* r can become potentially unreachable */
end.
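The procedure translates almost line-for-line into executable form. In this sketch we assume a representation of the rule base as a dict mapping rule names to antecedent atom lists (standing in for the Antecedent(r) function); this is our illustration, not the paper's implementation:

```python
def aberration_6(rules, flagged_rules, flagged_atoms):
    """Flag unflagged rules whose antecedent uses a flagged atom; such a
    rule can become unreachable if the flagged rules are deleted.

    rules: dict mapping rule name -> list of antecedent atoms (Antecedent(r)).
    """
    newly_flagged = set()
    for x in flagged_atoms:
        for r, antecedent in rules.items():
            if r not in flagged_rules and x in antecedent:
                newly_flagged.add(r)  # r can become potentially unreachable
    return newly_flagged

# The r: a -> b, r': b -> c example: r is already flagged, atom b is flagged.
rules = {"r": ["a"], "r_prime": ["b"]}
print(aberration_6(rules, flagged_rules={"r"}, flagged_atoms={"b"}))  # {'r_prime'}
```

As in the worked example above, deleting the flagged rule r would leave r′ with no producer for atom b, so r′ is flagged for the knowledge engineer's attention rather than silently broken.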

5. Summary and conclusion

The design and development of rule-based systems often cause anomalies in the rule base. We presented a framework that can facilitate a variety of software engineering processes in a rule-based system life-cycle to deal with such anomalies. The framework is based on abstracting the acquired knowledge in terms of goals (goal specification), and representing knowledge in terms of rule sequences inferring goals (paths), with the design stage providing a goal-to-hypothesis mapping (design scheme) appropriate for the domain. We also described how goals and paths can be a useful tool both for detecting the CARD anomalies in a rule base and for knowledge restructuring to optimize rule bases (using a case study). The design restrictions imposed between goals and the hypotheses realizing them in the rule base result in several design schemes. The inheritance relationship between the different design schemes allows varying amounts of freedom with which knowledge can be encoded. It also provides compromises between development and maintenance, and under certain conditions, inheritance can be used to perform an automated transformation of a rule base in one scheme into another [28]. Thus, one can choose a scheme 'better' suited for development, but transform it into another before field delivery. The analysis of this inheritance relationship and related issues is beyond the scope of this paper. Finally, an implemented rule base must be subjected to various evaluation procedures to compare its actual behavior with the expected behavior. We provided our perspectives on rule base verification based upon paths and goals. More specifically, we described detecting the CARD anomalies in a rule base by specifying rule situations called rule aberrations. The computation required for path extraction (hence, for evaluation) can be controlled by an appropriate choice of a design scheme and prudent goal refinements (if needed). We are currently writing a verification tool that implements the aberration specifications described in this paper. We have also developed tools to extract paths from a given rule base as well as to measure path coverage [11,12], and intend to augment this tool suite with the above path-based verifier. This tool suite is expected to provide reliable support to system developers.

Fig. 16. The inviolables of the university domain. It is assumed that a registered student cannot be both an undergraduate and a graduate. In addition, if a person receives a bursary, he/she cannot be categorized as not receiving any financial aid.

Appendix A. Illustrating design scheme selection Consider constructing a rule base to identify a person’s occupation in a university environment. The

Fig. 17. The rule base encoding the knowledge describing the university domain.


domain description appears in Fig. 13, and a typical analysis by which a knowledge engineer may prune or choose design schemes is described below. The design schemes based on choice I1 or I5 for intermediate goals are ruled out according to the recommendation in Section 2.1. In addition, for the above description, choice I2 for intermediate goals


that forces every intermediate goal to have at least one final hypothesis is not convenient for encoding. The ⟨F1, I3⟩ and ⟨F1, I4⟩ schemes can accommodate the goal specification in Fig. 13, but can have difficulty in accommodating goals specified later as the rule base develops incrementally. For example, adding new goals REGULAR_STUDENT and IRREGULAR_STUDENT, emphasizing their relation to GRAD and UGRAD while still retaining them as solutions, can be cumbersome owing to the restriction F1 that requires all final goals to contain only final hypotheses. Note, changing a final goal to an intermediate goal can require significant rule base modification in these schemes. The ⟨F2, I3⟩ scheme can accommodate the goal specification with ease. But, of course, care should be taken while maintaining causal/temporal relations between final goals. For instance, consider the domain constraint involving DEAN and ASSOCDEAN: we simply cannot realize the final goal ASSOCDEAN as an intermediate hypothesis and use it to infer DEAN, owing to restriction F2. A similar argument applies to the ⟨F2, I4⟩ scheme. However, a reversal of goal types is better accommodated in the ⟨F2, I3⟩ scheme owing to its flexibility. Realizing the solutions as they are specified while maintaining their relationships is easier in schemes with choice F3 for final goals, which requires every solution to have at least one intermediate hypothesis. However, some of the final goals in this scheme can be counter-intuitive if they are realized as only intermediate hypotheses (hence, less understandable). A similar observation applies to schemes with choice F5 for final goals, which imposes no constraints on final goal realization. In addition, care should be exercised if such a scheme is chosen, because the rule base can become obscure and error-prone due to incremental modifications.

Fig. 18. Paths extracted from the rule base of Fig. 17. To avoid cluttering, not all of the inferred atoms in a path are shown.

Fig. 19. The goal graph for the rule base shown in Fig. 17.

A knowledge engineer may thus prefer to choose either the ⟨F2, I3⟩ scheme or the ⟨F2, I4⟩ scheme for this domain.

Appendix B. An example to illustrate rule aberrations

We present a small system describing a university environment to illustrate the manifestation of the various aberrations. The body of knowledge acquired by interviews for this domain is shown in Fig. 14; the goal specification of the domain is shown in Fig. 15; and the inviolables in Fig. 16. Note, permissible combinations of initial evidence (conjunctions of atoms representing the initial evidence that a domain expert would use to start problem solving) should also be specified. In order to show

Fig. 20. Summarizing the results of the redundancy/deficiency aberrations for the example rule base of Fig. 17.



Fig. 21. The path in which rule R15 appears. The path is ambivalent as it infers the inviolable BURSARY ∧ NOFINAID.

the effect of each initial evidence on problem solving, we show each atom that is an initial evidence as a level-0 goal. Thus, permissible combinations of initial evidence are represented as AND edges from level-0 goals in the goal graph. Suppose design scheme ⟨F1, I3⟩ is chosen for this domain; a rule base encoding the goal progression knowledge is shown in Fig. 17. The paths of the rule base are shown in Fig. 18 and the goal graph in Fig. 19. For this rule base and the given goal specification, the intermediate goals g12 and g14 are irrelevant. A summary of the application of aberration specifications 1–6 is shown in Fig. 20. For example, rule R15 was flagged by Aberration 1 because it does not appear in any path. The actual cause, however, is deficiency, as will become apparent later. Similarly, atom COMPLETED(x, Juniorcollege) in the consequent of rule R9 was flagged using Aberration 3. Note, however, that this atom is used in the antecedent of rule R15 (which was also flagged redundant): thus, detection methods based on simple unification may not detect this atom as redundant [32]. Inspection of the rules and atoms flagged by the above aberrations should also reveal a design scheme violation. Of course, whenever rules or atoms are flagged by aberration procedures, refining the goal specification and editing the rule base may remove the anomalies. For example, the subsumption problem between R8 and R13 can be corrected by modifying the permissible combination of initial evidence HARDWORKING(x) ∧ GREENBORDRID(x) to HARDWORKING(x) ∧ GREENBORDRID(x) ∧ GOODRECORD(x, y), and the antecedent of rule R13 to contain the atom GOODRECORD(x, Courses). However, modifications to fix a detected anomaly can also introduce additional anomalies. For illustration, consider the

following modification to the rule base in Fig. 17 to ensure that rule R15 appears in a path. As rule R15 was flagged redundant by Aberration 1, a knowledge engineer trying to fix this redundancy can include GOODRECORD(x, Juniorcollege) as initial evidence. However, this alone would not make R15 appear in a path. It is also required to remove GOODCAREER(x) from the consequent of rule R15. In addition, the conjunction REGISTERED(x) ∧ GOODRECORD(x, y) must be added as a permissible combination of initial evidence. To accommodate the status of a registered person with good grades in

Fig. 22. The modified rule base based upon the evaluation results of the rule base shown in Fig. 17.



junior college, assume that the following new rule is added:

R18: REGISTERED(x) ∧ GOODRECORD(x, Juniorcollege) → COMPLETED(x, Juniorcollege) ∧ NOFINAID(x)

to encode the knowledge describing a person just registered after completing junior college, who does not receive any financial aid. This adds a new path, which is shown in Fig. 21. However, the modification to remove the redundancy of rule R15 has now resulted in another anomaly in the rule base. The path in Fig. 21 infers an inviolable, BURSARY ∧ NOFINAID; thus, the system is now ambivalent. An inspection of the rule base shows that, in this case, atom NOFINAID(x) is not required in the consequent of rule R18, because rule R1 makes that inference for a general registered undergraduate student. The impact of a rule modification can thus be assessed immediately by checking the affected paths for aberrations, if any.
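For ground atoms, the ambivalence check that catches this path (Aberrations 7 and 8) amounts to a containment test of the atoms inferred along a path or route against the inviolables. A sketch under assumed set representations, using the BURSARY/NOFINAID inviolable of Fig. 16:

```python
def violates_inviolable(inferred_atoms, inviolables):
    """Return the inviolables (each a set of atoms whose conjunction is
    forbidden) that are fully contained in the atoms inferred along a
    path or route."""
    inferred = set(inferred_atoms)
    return [inv for inv in inviolables if inv <= inferred]

inviolables = [{"MALE(x)", "FEMALE(x)"}, {"BURSARY(x)", "NOFINAID(x)"}]
path_atoms = {"REGISTERED(x)", "BURSARY(x)", "NOFINAID(x)"}
print(violates_inviolable(path_atoms, inviolables))  # flags BURSARY ∧ NOFINAID
```

Applied to the path of Fig. 21, the check would report the BURSARY ∧ NOFINAID inviolable immediately after the addition of R18, before the system ever runs.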

Fig. 23. Paths of the rule base shown in Fig. 22, which is the rule base modified to fix the errors, anomalies and scheme violations detected after evaluation of the rule base shown in Fig. 17.



Fig. 24. The goal graph for the rule base shown in Fig. 22.

The final version of the rule base after making the required modifications to fix the detected anomalies is shown in Fig. 22. Once all the detected anomalies are fixed, and the system behavior is judged to be acceptable by a knowledge engineer, the system is tested for satisfaction of the user acceptability criteria [33,34]. Often, the user acceptability criteria can impose further constraints on the required user interface, on the response times, on the system adequacy, etc. Let us assume that our user acceptability criteria require that the system be optimal and adequate in its domain [14]: that is, it should utilize all the intermediate goals specified for problem solving and produce a solution for every permissible combination of initial evidence. The final set of paths and the goal graph for the system are shown in Figs. 23 and 24. There are no more unwanted rules or atoms. All the rules in the rule base come into play for problem solving, and the system is optimal and adequate for the given goal specification. As there are no irrelevant final goals, the system satisfies the user acceptability criteria.
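The two acceptability checks just stated can be sketched over the extracted routes. The route representation (relevance flag, starting evidence set, and set of goals used) is our assumption for illustration, not the paper's data structure:

```python
def is_adequate_and_optimal(routes, permissible_evidence, intermediate_goals):
    """Adequate: every permissible combination of initial evidence starts at
    least one relevant route. Optimal: every specified intermediate goal is
    used by some relevant route."""
    relevant = [rt for rt in routes if rt["relevant"]]
    adequate = all(any(rt["start"] == ev for rt in relevant)
                   for ev in permissible_evidence)
    used_goals = set().union(*(rt["goals"] for rt in relevant)) if relevant else set()
    optimal = intermediate_goals <= used_goals
    return adequate and optimal
```

A rule base like the one in Fig. 22 passes this check because no intermediate goal is left irrelevant and every permissible evidence combination leads to a solution.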

w3x

w4x

w5x

w6x

w7x

w8x

w9x

w10x

References w11x w1x G.R. Yost, Acquiring knowledge in SOAR, IEEE Expert 8 Ž3. Ž1993. 26–34. w2x A.D. Preece, R. Shinghal, A. Batarekh, Principles and prac-

tice in verifying rule-based systems, Knowledge Eng. Rev. 7 Ž2. Ž1992. 115–141. R.M. O’Keefe, D.E. O’Leary, Expert system verification and validation: a survey and tutorial, Artificial Intelligence Rev. 7 Ž1. Ž1993. 3–42. T.A. Nguyen, W.A. Perkins, T.J. Laffey, D. Pecora, Checking an expert systems knowledge base for consistency, completeness, in: Proceedings of the 9th International Joint Conference on Artificial Intelligence ŽIJCAI 85., Vol. 1, Boston, MA, 1985, pp. 278–375. B.J. Cragun, H.J. Steudel, A decision-table-based processor for checking completeness and consistency in rule-based expert systems, Int. J. Man–Machine Stud. 26 Ž5. Ž1987. 633–648. Allen Ginsberg, Knowledge-base reduction: a new approach to checking knowledge bases for inconsistency and redundancy, in: Proceedings of the 7th National Conference on Artificial Intelligence ŽAAAI 88., Vol. 2, St. Paul, MN, August 1988, pp. 585–589. Robert T. Plant, The meta knowledge level: a methodology for validation, in: Proceedings of the AAAI Workshop on Validation and Verification of Knowledge-Based Systems, Washington, DC, July 1993, pp. 94–108. C.L. Chang, J.B. Combs, R.A. Stachowitz, A report on the expert systems validation associate ŽEVA., Expert Syst. Appl. 1 Ž3. Ž1990. 217–230. A.D. Preece, R. Shinghal, A. Batarekh, Verifying expert systems: a logical framework and a practical tool, Expert Syst. Appl. 3 Ž2r3. Ž1992. 421–436. Marie-Christine Rousset, On the consistency of knowledge bases: the COVADIS system, Computational Intelligence, 4 Ž2.: 166–170, May 1988; Also in ECAI 88, Proc. European Conference on AI, Munich, August 1–5, 1988, pp. 79–84. C. Grossner, A. Preece, P. Gokulchander, T. Radhakrishnan, C.Y. Suen, Exploring the structure of rule based systems, in: Proceedings of the 11th National Conference on Artificial Intelligence ŽAAAI 93., Washington, DC, 1993, pp. 704–709.


[12] A. Preece, C. Grossner, P. Gokulchander, T. Radhakrishnan, Structural validation of expert systems: experience using a formal model, in: Notes of the Workshop on Validation and Verification of Knowledge-Based Systems, 11th National Conference on Artificial Intelligence, Washington, DC, July 1993, pp. 19–26.
[13] A. Preece, P. Gokulchander, C. Grossner, T. Radhakrishnan, Modeling rule base structure for expert system quality assurance, in: Notes of the Workshop on Validation of Knowledge-Based Systems, 13th International Joint Conference on Artificial Intelligence, Savoie, France, August 1993, pp. 37–50.
[14] P.G. Chander, R. Shinghal, T. Radhakrishnan, Static determination of dynamic functional attributes in rule-based systems, in: Proceedings of the 1994 International Conference on Systems Research, Informatics and Cybernetics, AI Symposium (ICSRIC 94), Baden-Baden, Germany, August 1994, pp. 79–84.
[15] P.G. Chander, T. Radhakrishnan, R. Shinghal, Using paths to detect redundancy in rule bases, in: Proceedings of the 11th IEEE Conference on Artificial Intelligence Applications (CAIA '95), Los Angeles, CA, February 1995, pp. 133–139.
[16] P.G. Chander, R. Shinghal, T. Radhakrishnan, Goal supported knowledge base restructuring for verification of rule bases, in: Notes of the Workshop on Verification and Validation of Knowledge-Based Systems, 14th International Joint Conference on Artificial Intelligence, Montreal, Canada, August 1995, pp. 15–21.
[17] R. Shinghal, Formal Concepts in Artificial Intelligence, Chapman & Hall, London, UK; co-published in the U.S. with Van Nostrand Reinhold, New York, 1992.
[18] A. Preece, C. Grossner, P. Gokulchander, T. Radhakrishnan, Structural validation of expert systems: experience using a formal model, in: J. Liebowitz (Ed.), Second World Congress on Expert Systems, Estoril, Portugal, January 1994, pp. 323–330.
[19] G.R. Yost, A. Newell, A problem space approach to expert system specification, in: Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI '89), San Mateo, CA, 1989, pp. 621–627.
[20] P. Lucas, Refinement of the HEPAR expert system: tools and techniques, Artificial Intelligence Med. 6 (2) (1994) 175–188.
[21] J. Giarratano, G. Riley, Expert Systems: Principles and Programming, 2nd edn., PWS Publ., Boston, MA, 1993.
[22] A. Batarekh, A.D. Preece, A. Bennett, P. Grogono, Specifying an expert system, Expert Syst. Appl. 2 (4) (1991) 285–303.
[23] N. Zlatareva, A.D. Preece, State of the art in automated validation of knowledge-based systems, Expert Syst. Appl. 7 (2) (1994) 151–167.
[24] J. Debenham, Expert systems designed for maintenance, Expert Syst. Appl. 5 (3) (1992) 233–244.
[25] S.A. Wells, The VIVA method: a life-cycle independent approach to KBS validation, in: Proceedings of the AAAI Workshop on Validation and Verification of Knowledge-Based Systems, Washington, DC, July 1993, pp. 109–113.

[26] J.A. Long, I.M. Neale, Using paper models in validation, verification and testing, Int. J. Expert Syst. 6 (3) (1993) 357–382.
[27] P.G. Chander, T. Radhakrishnan, R. Shinghal, Design schemes for rule-based systems, Int. J. Expert Systems: Research and Applications, November 1996, in press.
[28] P.G. Chander, On the design and evaluation of rule-based systems, PhD thesis, Department of Computer Science, Concordia University, Montreal, May 1996.
[29] J.D. Kiper, Structural testing of rule-based expert systems, ACM Trans. Software Eng. Meth. 1 (2) (1992) 168–187.
[30] D.E. O'Leary, Inference engine greediness and subsumption of conditions in rule-based systems, in: Notes of the Workshop on Verification and Validation of Knowledge-Based Systems, 14th International Joint Conference on Artificial Intelligence, Montreal, Canada, August 1995, pp. 42–48.
[31] T.A. Nguyen, Verifying consistency of production systems, in: Proceedings of the 3rd Conference on Artificial Intelligence Applications, Washington, DC, Spring 1987, pp. 4–8.
[32] F. Polat, H.A. Guvenir, UVT: a unification-based tool for knowledge base verification, IEEE Expert 8 (3) (1993) 69–75.
[33] A.D. Preece, Towards a methodology for evaluating expert systems, Expert Syst. 7 (4) (1990) 215–223.
[34] C. Ghezzi, M. Jazayeri, D. Mandrioli, Fundamentals of Software Engineering, Prentice-Hall, New York, 1991.

Dr. P.G. Chander is a postdoctoral fellow in the Department of Computer Science at Concordia University. He completed his Bachelor's degree in Electrical and Electronics Engineering at Regional Engineering College, Tiruchirappalli, his Master's degree in Computer Engineering at Boston University, and his Doctoral degree in Computer Science at Concordia University. His research interests include Artificial Intelligence, Distributed Systems, Multimedia Databases, and the Design of Distributed Multimedia Applications.

Dr. R. Shinghal is a professor of Computer Science at Concordia University, Montreal. He is the author of Formal Concepts in Artificial Intelligence (666 pp.), published in the UK by Chapman & Hall and co-published in the U.S. with Van Nostrand Reinhold.

Dr. T. Radhakrishnan obtained his M.Tech and PhD from the Indian Institute of Technology, Kanpur, India. He has been working at Concordia University since 1975. His areas of research and teaching interest are Cooperating Intelligent Agents, User Interfaces, and the Design of Reliable Knowledge Base Systems. He is also interested in the Social Aspects of Computing and Information Technology, particularly for the benefit of people in developing countries.
