A Petri-Net based approach for verifying the integrity of production systems

Int. J. Man-Machine Studies (1992) 36, 447-468

RITU AGARWAL
Department of MIS and Decision Sciences, University of Dayton, Dayton, OH 45469-2130, USA

MOHAN TANNIRU
School of Management, Syracuse University, Syracuse, NY 13244-2130, USA

(Received 26 April 1990 and accepted in revised form 8 April 1991) The production rule formalization has become a popular method for knowledge representation in expert systems. Current development environments for rule-based systems provide few automated mechanisms for verifying the consistency and completeness of rule bases as they are developed in an evolutionary manner. We describe an approach to verifying the integrity of a rule-based system. The approach models the rule-base as a Petri-Net and uses the structural properties of the net for verification. Procedures for integrity checks at both local and chained inference levels are described.

1. Introduction

Expert systems (ES) are becoming increasingly popular in commercial environments as organizations become aware of the capabilities of these systems to leverage expertise. This popularity is manifest in the form of rule-based applications because, conceptually, this type of knowledge representation is the simplest to program and develop. Knowledge modelled through production rules, represented as condition-action pairs, exhibits many desirable properties that enhance both its semantic clarity and maintainability. These include modularity, uniformity and the ability of the resultant ES to explain and justify its reasoning to the user (Hayes-Roth, 1985). The high degree of modularity inherent in rule-based systems, however, also contributes to the opacity of the production rule representation, which renders the behavior flow (or procedural component) of the system largely invisible to users and developers. The incremental and iterative nature of knowledge transfer between experts and knowledge engineers requires that system development proceeds in a prototyping mode. While modularity and procedural independence facilitate prototyping by allowing knowledge to be added to the system as and when it becomes available, there are also some inherent drawbacks. The addition of new knowledge in a piece-meal manner invariably results in certain types of inconsistencies (Nazareth, 1989), and rule-based development environments do not provide a mechanism to automatically verify integrity each time the knowledge base changes (Mettrey, 1987). The evolutionary construction of knowledge bases resists formal verification, and on-going testing is required to ensure correct system performance. In traditional (procedural) programming environments, adding code to a program


and ensuring that each addition is consistent with the existing code are also issues of concern. However, the sequential nature of code execution in these environments makes the isolation of existing code that will be affected by changes relatively easy. In the context of rule-based systems, because program execution is goal- or data-driven, there is no a priori method of determining which rules will be impacted by the addition, deletion or updating of existing rules. Recently, a few approaches to checking the consistency and completeness of knowledge bases as they are modified over time have been identified (Cragun & Steudel, 1987; Miller et al., 1986; Nguyen et al., 1985; Nguyen, 1987; Nguyen et al., 1987; Ginsberg, 1988). Each of these techniques provides an automated mechanism for verifying rule-base integrity, because manual verification is unfeasible even for a knowledge base of moderate size (Nazareth, 1989), and the problem of ensuring integrity in a rule base has been made increasingly tractable with each technique. The focus of the majority of reported research (excluding Ginsberg, 1988) has been on integrity checking at the local level (a single goal parameter, or parameters relevant in a particular decision making context), providing few mechanisms for ensuring integrity at the global level (inference chains that connect inputs to final system goals via intermediate goals). This is a serious limitation, as global errors can also have an extremely detrimental effect on system correctness. Reported execution speeds of the algorithms described for verification vary from polynomial ranges (O(n^2)) to exponential times (O(2^n)), where n represents the number of production rules in the knowledge base. We extend previous research by describing an approach to maintaining integrity in a rule base at both local and global levels. The proposed approach is constructed around a specific modelling formalization, a Petri-Net, and uses the structural properties of the net for integrity verification. While our procedures are similar in functionality and scope to those described by Ginsberg (1988), the modelling formalization employed provides additional benefits for managing other aspects of a production rule base (Agarwal & Tanniru, 1991; Gulati & Tanniru, 1990). The analogy between Petri-Nets and production systems has been noted previously by Zisman (1977). The procedures we describe are computationally tractable, and it is shown that the execution complexity of verification checks using the Petri-Net compares favorably with existing approaches. On-going verification is possible in our method, so that errors can be identified as soon as they are introduced into the knowledge base. The organization of the subsequent discussion is as follows. Section 2 describes the types of errors that can surface in a rule-based system with the addition of knowledge and formalizes the conditions in an expert system that our methods are designed to identify and help resolve. The following section reviews prior research in rule-base consistency and completeness. Section 4 demonstrates how a Petri-Net representation can be used to verify system integrity at local and global levels. Also included here is a discussion of the tractability of this approach and future extensions to the research. The final section presents our conclusions.

2. A classification of potential errors in a knowledge base

A rule-based expert system contains production rules describing expert knowledge associated with problem solving in the target domain. Each rule consists of a


left-hand side (LHS), containing one or more antecedent parameter-value combinations (literals), combined using conjunction, disjunction or negation, and a right-hand side (RHS) that is similarly constructed. The left-hand side represents facts or "states of the world" from which the right-hand side is logically derivable. Both the LHS and the RHS may contain variables that might be instantiated at run-time or through a database of facts. Rule execution is controlled by the inference engine, which employs a basic match-execute strategy (Hayes-Roth, 1985). Inference may be goal- (backward) or data- (forward) driven. In the former case, the inference engine tries to satisfy a production containing the goal parameter in its RHS. The satisfaction of this production may require additional rules to be fired, and this process continues till the system is able to logically conclude the goal. In the latter case, known facts are matched against the LHSs of rules, and productions are fired until the goal parameter becomes a member of the known facts. Several anomalies may be inadvertently introduced into a knowledge base as new knowledge is added in the form of independent rules. Some of these errors may be purely syntactical and perhaps detectable by the programming environment, while others could be potentially more serious logical errors (Nazareth, 1989). Because the performance of an ES (in terms of correctness of advice) is critically contingent upon the integrity of the knowledge-base, it becomes extremely important to be able to detect and correct logical errors. Nazareth (1989) describes an exhaustive taxonomy of these types of logical errors. Below, we classify errors based upon whether they occur at the local or global level. The local level consists of all rules that have the same consequent goal parameter and represents the context within which that parameter is assigned a value, while the global level relates to the entire set of rules comprising the knowledge base. As might be intuitively obvious, while errors at both levels can have serious ramifications for knowledge-base correctness, errors at the local level are easier to detect while global inconsistencies are far more difficult to isolate. In the taxonomy described below, it is assumed that all rules are in unitized form, i.e. the LHS consists only of conjunctions and the RHS concludes a value for a single parameter. Such knowledge-bases have been identified as "well structured" (Pederson, 1989) and easier to maintain and debug. This assumption does not sacrifice any generality, as the ensuing discussion is easily extended to any type of rule structure. We use the following convention for notation. Lower case symbols annotated with arabic numerals represent parameter-value pairs. Thus, a parameter p with three legal values would be represented as p1, p2 and p3.

2.1. POTENTIAL ERRORS AT THE LOCAL LEVEL

Redundancy

Redundancy in a knowledge base can occur in two forms: absolute redundancy, where an existing rule is replicated, and subsumption, where one rule is a more specialized case of another. Obviously, when the more general rule succeeds, the restrictive rule is also fired. As pointed out by Suwa, Scott and Shortliffe (1984), redundancy may not always be a problem in a deterministic ES, but can cause errors of inference in an ES with uncertainty modelled in it. For example, redundancy is


evident in the following cases:

p1 → r1
p1 → r1        (identical rules)

p1 & q1 → r1
p1 → r1        (subsumption of antecedent)

Subsumption in a rule set can be resolved in one of two ways: either by removing the more specialized rule or by explicitly controlling inference through a conflict resolution strategy which specifies when the more specialized or general rule should fire. While it may not be necessary to resolve every instance of subsumption, its detection must be facilitated by the verification procedure.

Inconsistency

The advice provided by an ES must be consistent in that the same states of the world must lead to the same set of inferences being made by the system. Situations where the same antecedent conditions lead to different conclusions have been variously labelled as logical conflict, ambiguity etc. (Cragun & Steudel, 1987; Nguyen et al., 1985; Nguyen, 1987; Nguyen et al., 1987). Notice that equivalent LHSs and different RHSs do not necessarily imply that rules are inconsistent. In circumstances where the RHS parameter is multi-valued (such as a wine advising system recommending several alternate wine selections for the same meal), it may be possible to have identical antecedents and different consequents. Thus, an instance of inconsistency is a situation when antecedents are identical and outcomes are mutually exclusive. Inconsistency may also manifest itself in the form of subsumed antecedents with mutually exclusive consequents. In some production system implementations, this type of inconsistency can be controlled through explicit conflict resolution strategies such as specificity. In practice, however, a verification procedure must identify all possible cases of potential inconsistency for the knowledge engineer's and expert's perusal. Even though this may result in the validation effort exploring system behaviors that may not occur, the decision to either eliminate inconsistency or to retain it with an appropriate degree of control requires additional domain knowledge and an understanding of semantic constraints. Formally, inconsistency appears in the following manner:

p1 → r1
p1 → r2        (r1 and r2 mutually exclusive)

p1 → r1
p1 & q1 → r2

(r1 and r2 mutually exclusive)

Incompleteness

A rule base is said to be incomplete if there exist either certain legal combinations of parameter-value pairs which are not contained in the LHS of an existing rule in the knowledge base, or there are certain legal consequent values which are not


present in the RHS of any rule. Incompleteness implies that for certain scenarios the ES will be unable to provide a satisfactory response. Clearly this is undesirable from a user's perspective, in that a consultation with the system might result in an uninformative "no conclusion" (Cragun & Steudel, 1987). The identification of incompleteness requires that the expert be able to associate a set of legal values with each antecedent and consequent parameter. While it is possible for certain combinations of legal antecedent parameter values to be unfeasible or unknown in a particular domain, the onus of identifying this must be on the knowledge engineer or the expert, with the verification procedure highlighting the incomplete scenarios. For example, the following knowledge base exhibits incompleteness:

p1 & q1 → ?
? → r3

2.2. POTENTIAL ERRORS AT THE GLOBAL LEVEL

Redundancy in chained inference

The scope of the verification task increases dramatically as we move from the local to the global level. Redundancy is present when there is an unnecessary implication in the knowledge base, i.e. a consequent reachable directly from an antecedent is also implied through an inference chain. The following rules exhibit redundancy in chained inference:

p1 → q1
q1 → r1
p1 → r1

Redundancy in chained inference must be interpreted with caution. Clearly, if q1 is a required inference that is part of another rule antecedent, or q1 is required to provide a causal explanation to the user, it is desirable to retain the first two productions and eliminate the third. If, however, q1 is used only to infer r1, the third production achieves the same functionality much more concisely.

Inconsistency

The definition of inconsistency at a global level needs to be extended considerably from that at the local level. In chained inference, it becomes extremely difficult to isolate the presence of inconsistency mechanically, as was possible at the local level. Conceptually, inconsistency exists when two inference chains are simultaneously fireable and result in mutually exclusive outcomes. Inconsistency is present in each of the following rule sets:

p1 → q1 → r1;  r1 → ¬p1

p1 → q1
p1 → r1
q1 → r2        (r1 and r2 mutually exclusive)

p1 → q1 → r1 → p1        (circular chain of inference)


The first and third cases of inconsistency described above are both variations of circular chains or knowledge-base cycles and have been addressed quite extensively by previous research (Nguyen, 1987; Nguyen et al., 1987). The other, more subtle case of inconsistency is not detectable through existing approaches.

Incompleteness

At the global level, incompleteness has a different connotation than at the local level. Conceptually, incompleteness here implies that there are certain components of the rule base which are either unnecessary or unreachable. Formally, incompleteness is said to exist in the following situations:

an input is never used
a goal is never established
a subgoal is never used/established

An input never being used could be the consequence of one of two situations: either the input is irrelevant (and should be removed from all rule antecedents) or there are one or more missing rules. If the expert identifies the input value as being legal and feasible in the domain, it implies that there are missing rules (which the verification procedure must identify). In either case, additional domain knowledge is required to determine whether the input never being used represents a gap in the knowledge-base or not. Gaps or inconsistencies in the knowledge base of the kind described above can seriously impair system performance. While not all errors will necessarily result in logically incorrect advice (e.g. redundancy), their presence will certainly affect the performance of the system in terms of its efficiency (Nazareth, 1989). Redundancy can also result in update anomalies, when one instance of a redundant rule is removed and another is not. Verification must become an integral and on-going activity in the expert system development cycle to prevent the system from providing erroneous advice. In the following section we describe some approaches suggested by past research for the knowledge-base verification problem.
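The three global incompleteness conditions above can be read directly off a unitized rule set. The following minimal Python sketch (ours, purely illustrative; the function name and example rules are made up) reports each category, given declared legal values:

    # A minimal sketch (not from the paper) of the three global
    # incompleteness checks over unitized rules, each represented as a
    # (list of antecedent literals, consequent literal) pair.
    def incompleteness_report(rules, inputs, goals, subgoals):
        used = {lit for lhs, _ in rules for lit in lhs}       # in some LHS
        established = {rhs for _, rhs in rules}               # in some RHS
        return {
            "inputs never used": set(inputs) - used,
            "goals never established": set(goals) - established,
            "subgoals never used": set(subgoals) - used,
            "subgoals never established": set(subgoals) - established,
        }

    # Hypothetical example: the legal input value q2 is never used.
    example = [(["p1", "q1"], "r1"), (["p2"], "r2")]
    print(incompleteness_report(example, inputs=["p1", "p2", "q1", "q2"],
                                goals=["r1", "r2"], subgoals=[]))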

3. Previous work in knowledge-base verification

Several approaches to the verification of knowledge encapsulated in an ES have been suggested in the past. While some of these approaches focus upon the external validity of the system (i.e. the degree of correspondence between the expert's advice and the system's recommendation) (Geissman & Schultz, 1988; Green & Keyes, 1987; Liebowitz, 1986; O'Keefe, Balci & Smith, 1987), others confine themselves to the internal validity of the knowledge (i.e. the logical correctness of the rules). As Nazareth (1989) highlights, the two problems are closely interrelated, as a lack of validity at one level has implications for validity at the other. We restrict the scope of our discussion to internal validity checks at both the local and global levels. Completeness and consistency checking of a rule base has been addressed by Suwa, Scott and Shortliffe (1984). They describe a rule-checking program associated with the ONCOCIN expert system. The rule checker aids in automatically identifying redundancies, subsumptions, inconsistency at a local level and missing rules. The checks are made among rule sets that have the same consequent and


apply within a specific context. Thus, the knowledge base is partitioned into rule contexts, and each partition is independently verified for rule integrity. Such a strategy enhances the speed of the checking process by clustering logically related rules together, thereby ensuring that potentially unfeasible logical combinations are not considered. Suwa, Scott and Shortliffe's rule checker performs an essentially local verification of the knowledge-base, in that errors in chained inference are not addressed. Knowledge-base completeness is determined through exhaustive enumeration, by identifying all legal combinations of antecedent parameters and determining if they are covered by an existing rule in the knowledge base. Building upon the rule checker, Nguyen (1987) and Nguyen et al. (1985, 1987) describe an automated rule verifier called CHECK, which they have extended and modified over time. In addition to static checks for redundancy, subsumption and inconsistency at the local level, the program is able to detect global errors in the form of circular chains. The definition of completeness used by CHECK is fairly extensive. Completeness checks performed include unreferenced attribute values (certain legal values of antecedents do not appear in any LHS), illegal attribute values (a rule has an unfeasible or illegal attribute value in its antecedent), missing rules, unreachable conclusions (a goal is neither a RHS nor is it in the LHS of a rule), dead-end IF conditions (attributes that are not askable and not in the RHS of any rule) and dead-end goals (goals which have no rules for them). CHECK supports both goal-driven and data-driven inference, and allows rules to contain variables that may be instantiated at run-time. While detection strategies are the same for both types of inference, in data-driven execution the detection of unreachable conclusions does not apply. Similar to the strategy described by Suwa, Scott and Shortliffe (1984), CHECK requires the knowledge engineer to partition rules into sets, with each set being associated with a particular subject category. A decision-table-based method for rule-base verification has been described by Cragun and Steudel (1987). Rules are partitioned into disjoint sets and each set is represented as a decision table, with rows indicating logical conditions and columns representing rules. Partitioning is done based on two criteria: first, rules belonging to a sub-table must have at least one condition in common; second, a rule in a sub-table must not set the value for a condition in another rule in the same table. The Expert System Checker (ESC) program processes the entire knowledge base to first construct these sub-tables and second, examine each decision table to identify inconsistency, redundancy, completeness and missing rules at the local level. The authors suggest that in a table with no inconsistency or redundancy, completeness checking is tantamount to numerical computation, because the maximum number of possible legal combinations of inputs (and hence, rules) is known a priori. All checks performed by ESC are local in nature. Ginsberg (1987, 1988) describes an approach to integrity checking called KB-reduction that can, "in principle, detect all inconsistencies, redundancies and potential contradictions" (Ginsberg, 1987, p. 102) in a knowledge base. Knowledge base reduction draws upon concepts elucidated in the context of truth maintenance systems.
Rules in the knowledge base are ordered into levels using a “depends on” relationship, and the procedure requires the computation of a label for each rule, each hypothesis and default hypothesis asserted by the knowledge base. The label contains all possible circumstances under which the hypothesis will be asserted. The


labels are then used to infer the existence of redundancies, inconsistencies and contradictions in the rule set. In addition to the implemented programs described above, other strategies for knowledge-base verification, including network-based approaches (Miller et al., 1986; Murray & Tanniru, 1987), have been suggested. However, reported evidence on the efficacy of these approaches is scarce, making it difficult to assess their usefulness. For a knowledge base containing n rules, of the programs described here, numerical completeness checks run at speeds of O(n) in ESC. Numerical completeness simply identifies the presence or absence of missing rules, without specifying which rules are absent. All three programs offer a speed of O(n^2) when checking for inconsistencies and redundancies, and O(2^n) when locating missing rules. Ginsberg's (1988) procedures offer exponential complexity. More importantly, apart from the circular chain detection algorithm of CHECK and the KB-reduction procedures, these programs do not provide procedures for determining the integrity of the knowledge base at the global level. Errors are assumed to occur primarily between pairs of rules, clearly an unrealistic assumption (Nazareth, 1989; Ginsberg, 1988). In the following section we describe how a rule-base can be modelled as a Petri-Net. The approach is similar in principle to Ginsberg's (1988) in that dependencies are captured through the network. The difference is that while KB-reduction specifies "depends-on" relations between rules, the Petri-Net formalization is based on finer-grain dependencies among parameters (findings, hypotheses etc., in Ginsberg's terminology), with rules providing the relationships among these parameters in a vector form. Additionally, besides allowing the use of the structural properties of Petri-Nets for rule-base verification, the modelling approach can also facilitate knowledge base modularization (Agarwal & Tanniru, 1991) and simulation for performance evaluation (Gulati & Tanniru, 1990).

4. Petri-Net approach for rule-base verification

Petri-Net models are abstract, formal representations of information flow (Murata, 1984; Peterson, 1977). They describe the input/output relationship between objects using a graphical representation, and have been shown to be powerful modelling formalizations for systems that exhibit concurrent behavior with precedence constraints on concurrency. Input and output objects are called places (depicted as circles) and their functional relationship is called a transition, i.e. the transformation of input places to output places (depicted as a vertical bar). The dynamic behavior of a Petri-Net is controlled by the movement and propagation of tokens. The availability of tokens in input places can automatically trigger transitions that have these places as their inputs, and make their output places available (place tokens in them) for further propagation. The capture of explicit input and output dependencies among objects facilitates a simulation of system behavior (Murata, 1984). A Petri-Net model provides a natural and powerful modelling paradigm for rule-based systems. Parameter-value pairs correspond to places, and rules are analogous to transitions.
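To make these definitions concrete, the following minimal Python sketch (ours, with a made-up two-transition net) encodes places and transitions in an incidence matrix and fires enabled transitions by adding the corresponding matrix row to the marking:

    # A sketch of basic Petri-Net token dynamics. Rows of C are
    # transitions, columns are places; -1 marks an input place and
    # +1 an output place of the transition.
    import numpy as np

    C = np.array([[-1, -1,  1, 0],    # t1: places p1, p2 -> p3
                  [ 0,  0, -1, 1]])   # t2: place p3 -> p4

    def enabled(marking, t):
        # t can fire only if every input place of t holds a token
        return bool(np.all(marking[C[t] < 0] >= 1))

    def fire(marking, t):
        assert enabled(marking, t)
        return marking + C[t]         # consume inputs, produce outputs

    m = np.array([1, 1, 0, 0])        # tokens available in p1 and p2
    m = fire(m, 0)                    # t1 fires: token propagates to p3
    m = fire(m, 1)                    # ...which automatically enables t2
    print(m)                          # -> [0 0 0 1]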


1. If application time is long and staff availability is tight then decision is buy.
2. If application time is short then decision is design.

FIGURE 1. Current knowledge-base.

Because rule-based systems capture the dependency between the antecedent conditions and corresponding consequent values, asserting the satisfaction of the antecedent conditions (having a token in the input places) implies the firing of the rule and asserting the values for the consequent (putting a token in its output places). In this section, we use a software acquisition example (Figure 1) to illustrate how the Petri-Net model of a rule-based system can be used to test its integrity. As specified before, lower case letters followed by arabic numerals denote parameter-value pairs. While we do not include variables in rules explicitly in the following discussion, the approach can be extended to rule bases containing variables with the following restrictions. First, it is assumed that the possible values that variables can assume are known a priori (the reason for this will become apparent as we describe the creation of incidence matrices), allowing for the construction of rules corresponding to each instantiation of the variables. Second, the range of allowable values for these variables is assumed to be a finite set. These assumptions are fairly realistic for a large number of expert systems and, in fact, prior research (Cragun & Steudel, 1987; Suwa, Scott & Shortliffe, 1984; Ginsberg, 1987, 1988) has also focused on such restricted types of knowledge bases. In cases where it is desired to verify a knowledge base containing a large number of variables, a unification algorithm (such as the approach suggested by Nguyen et al., 1985, 1987) may be utilized prior to the matrix manipulation described below.

4.1. INTEGRITY CHECKING AT THE LOCAL LEVEL

Figure 1 depicts a simple knowledge base with two rules. The antecedent conditions are represented as a1, a2, and s1, and the consequents as d1 and d2. Each rule can thus be described as shown in Figure 2 (circles are not explicitly shown here to represent places). Note that there will be as many distinct places as there are possible values for the input and output parameters represented in the knowledge base, and each rule corresponds to a transition. The transition/place relationship modelled in a Petri-Net can be succinctly summarized in the form of an incidence matrix (Figure 3). A "-1" is used to indicate an input place and a "+1" an output place.

    a1 --+
    s1 --+--> t1 (rule 1) --> d1
    a2 -----> t2 (rule 2) --> d2

FIGURE 2. Petri-Net representation of the rule base.

              a1   a2   s1   d1   d2
    rule 1    -1    0   -1   +1    0
    rule 2     0   -1    0    0   +1

FIGURE 3. Incidence matrix.

Structurally, an incidence matrix is very similar to a transposed decision-table, except that it uses limited entries. Note that if an attribute-value pair appears both on the left and right hand side of a rule (an obvious error in the specification of the rule), the entry in the matrix would be a zero. This type of node is referred to as a "conserving node" in a Petri-Net model and is represented by a unique character in the matrix. In the context of rule-bases, such a situation corresponds to a self-referencing rule and is isolated during the process of incidence matrix construction for appropriate action by the knowledge engineer. Note that in several cases, self-reference of the attribute-value pair is superfluous (p1·r1 → p1), while self-reference at the attribute level is one of reassignment (p1·r1 → p2). The second category of self-referencing is not addressed in this paper. The rest of this section describes procedures for manipulating the incidence matrix to test for integrity. Four new rules will be added to the knowledge base (Figure 4). Notice that the rules represented in the Petri-Net model are in their unitized or well-structured form, i.e. the LHS consists only of conjunctions of literals and the RHS has a single literal. By visual inspection it is evident that the third rule subsumes rule 1, the fourth rule is redundant with the second rule, the fifth rule may be inconsistent with the first rule and the last rule is in direct conflict with rule 2. This information is also directly obtainable through the matrix multiplication procedure described below. First, construct a row vector corresponding to the new rule (for rule 3, this vector is [0 0 -1 1 0]). Multiplication of the incidence matrix with the transpose of this vector yields a column vector with each row corresponding to an existing rule in the knowledge base (Figure 5). Let the rule base contain n rules and let the new rule be represented by k (>n). In order to interpret the resulting vector, we define two measures: S(i) and O(i). S(i) embodies the concept of rule size and is computed as the total number of antecedent conditions and consequent actions (one in unitized form) that appear in a rule. O(i) is the value of the ith row in the output vector. Intuitively, O(i) is indicative of "rule overlap" and represents the number of common antecedent and consequent literals among the two rules.
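As an illustration, the following Python sketch (ours, not the authors' implementation) constructs the incidence matrix of Figure 3 and computes the output vector for a test rule; note how a self-referencing rule is caught during construction, as discussed above:

    # A sketch of incidence-matrix construction for a unitized rule
    # base. Places are parameter-value literals such as "a1"; each rule
    # is a (list of antecedent literals, consequent literal) pair.
    import numpy as np

    def incidence_matrix(rules, places):
        col = {p: j for j, p in enumerate(places)}
        M = np.zeros((len(rules), len(places)), dtype=int)
        for i, (lhs, rhs) in enumerate(rules):
            if rhs in lhs:
                # -1 and +1 would cancel to 0: the "conserving node"
                # case, flagged for the knowledge engineer
                raise ValueError(f"rule {i + 1} is self-referencing")
            for lit in lhs:
                M[i, col[lit]] = -1       # input place
            M[i, col[rhs]] = 1            # output place
        return M

    places = ["a1", "a2", "s1", "d1", "d2"]
    rules = [(["a1", "s1"], "d1"),        # rule 1
             (["a2"], "d2")]              # rule 2
    M = incidence_matrix(rules, places)   # the matrix of Figure 3
    print(M @ np.array([0, 0, -1, 1, 0]))   # O for rule 3 -> [2 0]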

3. s1 → d1
4. a2 → d2
5. a1 → d2
6. a2 → d1

FIGURE 4. New rules.


              a1   a2   s1   d1   d2    O(i)   S(i)
    rule 1    -1    0   -1    1    0      2     (3)
    rule 2     0   -1    0    0    1      0     (2)
                                      S(k) = (2)

FIGURE 5. Local level verification: subsumption (shown for rule 3, test vector [0 0 -1 1 0]).

The output vector results are interpreted as follows:

If S(i) > O(i) and O(i) = S(k), then rule i is subsumed in the new rule k.
If S(i) = O(i) and O(i) < S(k), then rule i subsumes the new rule k.
If S(i) = O(i) = S(k), then rule i is redundant with the new rule k.
In all other cases, no conclusions can be drawn regarding redundancy or subsumption.

The interpretation is based on the result that a common consequent with more, fewer or equal antecedent conditions implies subsumption in either direction or redundancy, respectively. The result follows directly from the assumption that the rule base is in unitized form, where each rule is allowed to have only one consequent. This interpretation will not apply when a consequent is multi-valued, i.e. several different values for the consequent are allowed to be asserted simultaneously. In the example, rule 3 subsumes rule 1, and rule 4 is redundant with rule 2. To test for inconsistency and conflict, the test vector constructed must have its consequent replaced by one (or more) mutually exclusive outcomes (in this case, for rule 5 we would use d1 instead of d2). With this change, a similar matrix multiplication is performed (Figure 6). The interpretation described previously applies here as well. If a rule i in the knowledge base is either subsumed by or subsumes the test rule, then a possible inconsistency exists between the new rule and rule i. If rule i is redundant with the test rule, then the new rule added is in direct conflict with rule i. In this case, rule 5 exhibits potential inconsistency with rule 1 (rule 1 is subsumed in the test rule) and rule 6
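The interpretation step can be sketched as follows (illustrative Python, reusing M and np from the previous sketch; the inconsistency test simply swaps the consequent for a mutually exclusive one before calling the same routine):

    # A sketch of the O(i)/S(i) interpretation for a candidate rule.
    def classify(M, S, test_vec):
        Sk = int(sum(1 for x in test_vec if x != 0))    # S(k)
        O = M @ test_vec
        report = []
        for i, (o, s) in enumerate(zip(O, S), start=1):
            if s > o == Sk:
                report.append(f"rule {i} is subsumed in the test rule")
            elif s == o < Sk:
                report.append(f"rule {i} subsumes the test rule")
            elif s == o == Sk:
                report.append(f"rule {i} is redundant with the test rule")
        return report

    S = [3, 2]                                     # rule sizes S(i)
    print(classify(M, S, np.array([0, 0, -1, 1, 0])))   # rule 3
    # Inconsistency check for rule 5 (a1 -> d2): test with the mutually
    # exclusive consequent d1; subsumption or redundancy here signals
    # potential inconsistency or direct conflict, respectively.
    print(classify(M, S, np.array([-1, 0, 0, 1, 0])))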

              a1   a2   s1   d1   d2    O(i)   S(i)
    rule 1    -1    0   -1    1    0      2     (3)
    rule 2     0   -1    0    0    1      0     (2)
                                      S(k) = (2)

FIGURE 6. Local level verification: inconsistency (shown for rule 5, a1 → d2; the test rule replaces d2 with the mutually exclusive outcome d1, giving the test vector [-1 0 0 1 0]).


Condition test matrix (transposed, as used for multiplication):

              c1   c2   c3   c4
    a1        -1   -1    0    0
    a2         0    0   -1   -1
    s1        -1    0   -1    0
    s2         0   -1    0   -1
    d1         1    1    1    1
    d2         1    1    1    1

Output matrix:

              c1   c2   c3   c4
    rule 1     3    2    2    1
    rule 2     1    1    2    2

FIGURE 7. Local level verification: completeness (c1 = a1·s1, c2 = a1·s2, c3 = a2·s1, c4 = a2·s2).

is in conflict with rule 2 (rule 2 is redundant with the test rule). In the case where there are multiple exclusive outcomes (for example: d1, d2, d3, ...), the test vector will contain 1s in all the columns corresponding to these exclusive outcomes, and the same interpretation is applicable.

Completeness of a rule base at the local level is established by multiplying the incidence matrix with the transpose of a "condition test matrix". Each row in this test matrix corresponds to a combination of the antecedent parameter values, with a +1 entry for each consequent value. Multiplication of the incidence matrix with the condition test matrix transposed yields an output matrix whose elements are interpreted in the same way as the subsumption/redundancy conditions. Those conditions that are neither redundant nor subsumed in either direction by the existing rule base are categorized as "incomplete". This procedure is illustrated in Figure 7 for the rule base defined above. Note that the actual rows constructed for the condition test matrix (columns in the matrix used for multiplication) have values for each consequent in order to reduce the number of multiplications, but this does not have any impact on the test vector size. Here the output vector associated with c1 is redundant with rule 1, and the output vectors for c3 and c4 are subsumed in rule 2. However, the output vector for c2 is not covered by any rule, as its elements are less than both the test vector size and the knowledge-base rule sizes. Thus, the combination a1·s2 is incompletely specified by the current rule base. The other part of the completeness test is merely a check on the reachability of all distinct output conditions by at least one set of input conditions. Assuming that the consequent parameter values for the decision are d1, d2 and d3, we only need to examine the columns of the incidence matrix corresponding to these consequent values. If either the column does not exist or it exists with no non-zero elements in it, then the associated consequent is not reachable using any of the rules defined in the current knowledge base. In this case, d3 is not reachable.
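A sketch of this completeness pass (illustrative Python, reusing incidence_matrix, rules and S from the earlier sketches; the place set is extended with s2 so that all four legal antecedent combinations can be expressed):

    # A sketch of the local completeness check via the condition test
    # matrix, followed by the output-reachability check.
    import itertools
    import numpy as np

    places = ["a1", "a2", "s1", "s2", "d1", "d2"]
    col = {p: j for j, p in enumerate(places)}
    M = incidence_matrix(rules, places)        # s2 column is all zeros

    legal = {"a": ["a1", "a2"], "s": ["s1", "s2"]}
    for combo in itertools.product(*legal.values()):
        v = np.zeros(len(places), dtype=int)
        for lit in combo:
            v[col[lit]] = -1
        for d in ["d1", "d2"]:      # +1 for every consequent value; this
            v[col[d]] = 1           # does not change the test size S(k)
        Sk = len(combo) + 1
        O = M @ v
        # covered if some rule is redundant with the condition, or
        # subsumes / is subsumed by it
        if not any(o == s or o == Sk for o, s in zip(O, S)):
            print("incomplete:", ".".join(combo))    # -> incomplete: a1.s2

    # Output reachability: a consequent value with no +1 entry in its
    # column (or no column at all) is unreachable.
    for d in ["d1", "d2", "d3"]:
        if d not in col or not np.any(M[:, col[d]] > 0):
            print(d, "is not reachable")             # -> d3 is not reachable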

4.2. INTEGRITY CHECKING AT THE GLOBAL LEVEL

We extend and modify the knowledge base associated with the software acquisition example to illustrate verification at the global level. Two levels have been added to the network by introducing several new parameters: in business, product line, vendor and technology. The resulting knowledge base, network, and incidence matrices are shown in Figures 8 and 9. Notice that the network looks slightly different from previous ones in that we have collapsed (for brevity) all parameter-value combinations for a particular parameter into a single node in the Petri-Net.


1. If application time is long and staff availability is tight then decision is buy. (a1·s1 → d1)
2. If application time is short and risk is moderate then decision is design. (a2·r2 → d2)
3. If application time is short and risk is high then decision is contract develop. (a2·r1 → d3)
4. If vendor is new and technology is known then risk is moderate. (v1·t2 → r2)
5. If technology is new then risk is high. (t1 → r1)
6. If in business < 2 then vendor is new. (b1 → v1)
7. If in business > 2 and product line is different then vendor is new. (b2·p2 → v1)

FIGURE 8. Expanded knowledge base.

Three new rules are to be added to this knowledge-base (Figure 10). These rules affect the network at a global level, i.e. they include antecedent parameters that are not used exclusively to establish a single consequent. In order to test for global integrity, it becomes necessary to first identify the relevant portion of the global network affected by the new rule. This is done by tracing back through the dependencies implicit in the rule, until the consequent in the new rule is identical to that of the rules in the resulting network. Once this step is complete, integrity checking procedures are similar to those described under local checking. The precise procedure is illustrated with rule 8. Summarized versions of the algorithm's results in the other two cases (rules 9 and 10) are presented in Appendices A and B.

FIGURE 9. Petri-Net and incidence matrices: the network chains (b, p) → v, (v, t) → r and (a, s, r) → d, with one incidence matrix (A1, A2, A3) per consequent sub-net.


8. a2·t1 → d3
9. b1 → r1
10. p2·t2 → d2

FIGURE 10. New rules.

Step 1: find the minimal cover for the new rule

The minimal cover for a rule is defined as the part of the global network that minimally covers all dependencies implied in the new rule. In other words, we are seeking those parameters in the network which show dependencies that are possibly similar to those established by the new rule. Procedurally, this requires the extraction of a subnet of the global network containing all parameters included in the antecedent part of the new rule and sequentially establishing a value for the consequent defined in the new rule. For rule 8 (a, t, d), we trace back through link nodes (such as r) until the sub-network selected includes the path defined by the new rule. Define the relevant node set R associated with this subnet as including the nodes (v, t, r, a, s, d), where (v, t) establish r and r, in turn, will establish d in conjunction with a and s. The "relevant set" is merely a union of all the parameters that recursively, via the link nodes, establish the consequent in the new rule, and minimally cover the parameters in the new rule. The sub-net shown in Figure 11 is termed a two-level network, where the first level is the one with no predecessor nodes and the levels of succeeding ones are incremented by one. Here the first level establishes r and the second level establishes d. Note that this sub-network contains two local networks, A1 and A2.

FIGURE 11. Minimal cover for rule 8.

Step 2: perform local checking at first level of sub-net

Construct test vector(s) for local checking using the following rules. The antecedent parameter set of the test vector(s) is established by intersecting the new rule's antecedent parameters with the antecedent parameters at the current level. The consequent parameter for the test vector(s) is the link node between the current and the next level in the subnet. In the case of rule 8, the intersection yields antecedent parameter "t" and the link node is "r". Because the new rule has t1 as its antecedent, the test vectors contain t1 as the antecedent parameter value and each value of the link node as the consequent parameter. Here these test vectors are: [0 -1 0 1 0] for consequent r1, and [0 -1 0 0 1] for consequent r2. Using the local checking procedures defined in the earlier section, we obtain the result shown in Figure 12.


              v1   t1   t2   r1   r2    TV1   TV2   S(i)
    rule 4    -1    0   -1    0    1      0     1    (3)
    rule 5     0   -1    0    1    0      2     1    (2)
                                            S(k) = (2)

FIGURE 12. Local checking at first level (TV1 = [0 -1 0 1 0] for consequent r1; TV2 = [0 -1 0 0 1] for consequent r2).

The outcome of this multiplication indicates that test vector 1 is redundant with rule 5 at the local level, i.e. t1 causes r to take on value r1. Since the original knowledge base is assumed to be correct (i.e. void of any undetected inconsistencies or redundancies), the values of the link node propagated are those that are observed to exhibit redundancy, subsumption or potential inconsistency at the current level. We cannot conclude any global inconsistency or redundancy, however, until we reach the end of the minimal cover (i.e. the consequent of the new rule). Here only r1 is propagated to the next level.

Step 3: perform integrity checking at the next level

Construct the test vectors at this level using a procedure similar to the one described in Step 2 and qualify them using link node values associated with test vectors that have been propagated from the previous level. In this case, the link node value is r1. Set R ← R - {non-link parameters at the previous level}. To identify additional parameters that appear in the antecedent part of the test vector, intersect the antecedent parameter set of the new rule with the antecedent parameters at the current level and append the link node to this intersection. In this case, the modified relevant set includes (r, a, s, d) as (v, t) are previous-level non-link nodes. The antecedent parameter set contains (r, a, s). The intersection of these two sets with the new rule yields (a) and, upon appending the link node, we have (a, r). Thus, the test vectors will contain a2 (the condition in the new rule) and r1 (the link node value that needs to be propagated to this level) as their antecedent parameter conditions. There will be three consequent parameter conditions (one for each value of d) and, thus, the test vectors are: [0 -1 0 -1 0 1 0 0] for consequent d1, [0 -1 0 -1 0 0 1 0] for consequent d2 and [0 -1 0 -1 0 0 0 1] for consequent d3. If more link node values are generated in the previous sub-net, then test vectors must be constructed for each link node value. Using each of these test vectors, perform matrix multiplication as shown in Figure 13.

              a1   a2   s1   r1   r2   d1   d2   d3    TV1   TV2   TV3   S(i)
    rule 1    -1    0   -1    0    0    1    0    0      1     0     0    (3)
    rule 2     0   -1    0    0   -1    0    1    0      1     2     1    (3)
    rule 3     0   -1    0   -1    0    0    0    1      2     2     3    (3)
                                                              S(k) = (3)

FIGURE 13. Local checking at next (final) level.

Test vector 3 is seen to be redundant with rule 3 in the knowledge base. Since there are no more levels to pursue, we examine the implication of redundancy at each level. Rule 5 and rule 3 together have been shown to be redundant with a part of the new rule. For example, rule 5 says that t1 → r1 and rule 3 says r1·a2 → d3. This implies that if t1 and a2 are the input conditions, the conclusion that can be

drawn is d3, exactly identical to the consequent of rule 8. Thus, the addition of this new rule could, potentially, lead to redundancy in the knowledge base.

Incompleteness at the global level is attributed to an input parameter value never being used, a goal value not being established or a sub-goal value not being linked in either direction. Completeness checking at the global level can be done in two different ways. One approach requires a completeness check at the local level and the propagation of the implications to the next or preceding level. The other approach uses the reachability algorithms associated with Petri-Nets. We describe both of these approaches briefly. Upon performing the completeness check at the local level, if it is determined that an input condition does not have a valid outcome (as in the case a1·s2 → ?), then the implications are as follows: for each parameter-value pair appearing in the input condition (e.g. a1 or s2), if the parameter-value pair is not a sub-goal (it does not have predecessor nodes in the global network, as is the case for s), then it is an incompletely defined input condition. If it does have predecessor nodes (as is the case with r), then it is an incompletely defined path caused by an unused sub-goal. Similarly, an unestablished output condition (consequent parameter-value pair) can imply an unestablished goal (if it has no successor nodes) or an unused sub-goal. In the global network shown here, v2 is neither used nor established. Such broken, unreachable or dead-end paths can also be identified by creating input combination vectors as initial markings and output condition vectors as final markings, and using reachability algorithms (Murata, 1984; Peterson, 1977) to establish the existence of a path. For example, here the input for the global network requires values for parameters b, p, t, a and s, and the output vector has values only for d. An example of the input marking is (b1, p1, t1, a1, s1) and the output marking is d1, d2, or d3. Thus, using reachability algorithms, it can be determined if the input marking defined above can establish d1, d2, or d3. If the reachability algorithm is unsuccessful, then that path is incomplete. However, this only implies that either the input conditions or the intermediate goals leading to a value for d are incompletely specified. The procedure does not identify where on the path the incompleteness exists. Also, for a large network, an application of the algorithm can be fairly complex as it involves matrix inversions (Murata, 1984), unless it is possible to decompose the network into sub-networks.
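The reachability question can also be approximated by simple forward token propagation over the rule dependencies. The sketch below (ours; it reports only whether a goal is reachable, not where a path breaks, and is not a substitute for the matrix-based analysis of Murata, 1984) fires rules forward from an input marking:

    # A sketch of forward token propagation: fire every rule whose
    # antecedent tokens are all present until the goal appears or
    # nothing changes.
    def establishes(kb, marking, goal):
        known = set(marking)                  # tokens in the input places
        changed = True
        while changed:
            changed = False
            for lhs, rhs in kb:
                if rhs not in known and set(lhs) <= known:
                    known.add(rhs)            # token placed in the output
                    changed = True
        return goal in known

    kb = [(["a1", "s1"], "d1"), (["a2", "r2"], "d2"), (["a2", "r1"], "d3"),
          (["v1", "t2"], "r2"), (["t1"], "r1"), (["b1"], "v1"),
          (["b2", "p2"], "v1")]               # the rules of Figure 8
    marking = ["b1", "p1", "t1", "a1", "s1"]  # the text's input marking
    print([establishes(kb, marking, d) for d in ("d1", "d2", "d3")])
    # -> [True, False, False]: this marking establishes only d1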


It appears that a local evaluation of the network for completeness and its propagation backward and forward through the network provides more specific information on the nature of incompleteness and makes the problem mathematically tractable. Further investigation is, however, necessary before we can draw any generalizable conclusions. Thus far, we have addressed the testing of a knowledge base when a new rule is added. The same procedures apply when an existing knowledge-base has to be tested for inconsistencies and redundancies. The procedure requires that we first define the parameter dependency network and the incidence matrices associated with all consequent parameters (i.e. construct the Petri-Net model of the knowledge base), and then test the rules incrementally for potential inconsistency or redundancy. The order in which the rules are selected for testing is not significant as long as all the inconsistencies are identified prior to any decisions on which rules should be altered, removed etc. Note that this initial effort in building the network and incidence matrices, while extensive for a large knowledge base, need be done only once and is simply revised as new rules are added. Another point to note is that as new rules are added, either the existing incidence matrix associated with a consequent will be altered (if the effect is local), or the network itself may be changed if the rule spans many levels of the network (global). In either event, a significant proportion of the existing incidence matrices can be re-used for verification.
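Pulling the global procedure together, the Step 1 backward trace that produces the minimal cover can be sketched as follows (our reading of the procedure: expand one level at a time from the new rule's consequent until every antecedent parameter of the new rule is covered; reuses kb from the previous sketch):

    # A sketch of the Step 1 minimal-cover trace. param_of maps a
    # literal such as "t1" to its parameter "t".
    def param_of(literal):
        return literal.rstrip("0123456789")

    def minimal_cover(kb, new_lhs, new_rhs):
        need = {param_of(a) for a in new_lhs}    # parameters to cover
        relevant = {param_of(new_rhs)}
        frontier = set(relevant)
        while not need <= relevant:
            level = set()
            for lhs, rhs in kb:
                if param_of(rhs) in frontier:    # trace back one level
                    level.update(param_of(a) for a in lhs)
            if not level - relevant:
                break                            # network exhausted
            frontier = level - relevant
            relevant |= level
        return relevant

    # Rule 8 (a2.t1 -> d3) yields the relevant set {v, t, r, a, s, d}:
    print(minimal_cover(kb, ["a2", "t1"], "d3"))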

4.3. COMPUTATIONAL COMPLEXITY

At the local level, each of the n entries in the result vector is computed by m multiplications, where m is the number of columns in the matrix. The number m is the cardinality of the set representing the union of all antecedent literals for a particular consequent, plus one (for the consequent literal). Previous approaches have computed complexity by ignoring the cell-level comparisons made between rules, i.e. the comparison of two rules is treated as a single operation, rather than as the sum of comparisons made at parameter-value levels. In order to provide a uniform basis for comparison with previous approaches, we treat the computation of each entry in the result vector as one algorithmic step. Thus, for inconsistency and redundancy, the matrix multiplication procedure described here has a running time complexity of O(n^2). By including cell-level comparisons, the complexity of the operation is no worse than O(n^2 m). Note that, in practice, efficient algorithms for matrix multiplication (Sedgewick, 1983) exist, and the complexity can be reduced in specific implementations. The actual matrix multiplication procedure used for determining completeness in the Petri-Net approach has a complexity of O(n^2). However, we recognize that the size of the condition-test matrix will still be an exponential function of the number of antecedent conditions, so the procedure suggested here provides efficiency similar to previous approaches. The major computational benefit obtainable from the Petri-Net approach is in integrity checking at the global level. We have described a procedure for this which offers identical complexity to procedures at the local level. The procedure essentially entails performing local checking at as many levels of the network as are affected by the new rule, so the complexity is simply the summation of local complexity across


levels. The implication of this result is that inconsistency, incompleteness and redundancy at the global level can also be isolated through computationally tractable procedures. Such a facility can considerably enhance the process of knowledge-base development and debugging.

4.4. LIMITATIONS AND EXTENSIONS

The approach presented here has not addressed two specific issues that are fairly typical in knowledge-bases, even if we assume that they are in "well structured" form. One of these issues relates to the treatment of negation in a clause. While it is possible to construct equivalent rules not containing negation by selecting complementary conditions (parameter-value pairs for that parameter), this can result in a very cumbersome transformation if a parameter can take many possible values. A possible approach is to create an artificial variable for the "not" condition and have a value for this variable be established in a sub-network. This will reduce the number of rule combinations that may need to be investigated at the network level. The other issue (not addressed here) relates to multi-valued consequents. In this case also, new artificial parameters for each subset of the parameter-value pairs can be created if these subsets can form mutually exclusive sets, and/or we may define multiple rules, one for each consequent. In the latter case, if the system detects these as conflicting or inconsistent, then the knowledge engineer can override these concerns. Research is currently underway to make the approach described here generalizable by addressing these issues in an algorithmic fashion. In this paper, we have used the properties of Petri-Nets (primarily those of representation and reachability) in a minimal way to test the rule base for inconsistencies and redundancies. There are many other static (such as S and T invariance) and dynamic properties (token passing) of Petri-Nets (Murata, 1984) that are not fully explored in this paper. The role of simulation using Petri-Nets for performance evaluation is being explored (Gulati & Tanniru, 1990) and future research will examine the implications of other properties as well. While, as indicated earlier, other approaches offer similar verification benefits, the ability of this approach to provide other types of testing is an additional benefit.

5. Conclusions

The production rule formalization has become a popular knowledge representation paradigm for expert systems. A crucial problem confronting designers of rule-based systems is the lack of tractable procedures for verifying the integrity of a knowledge-base as it expands over time. In this paper we have described an approach to integrity verification that is structured around a Petri-Net model of a rule base. Several types of expert system maladies are detectable by our procedures, including redundancy, subsumption and inconsistency. The approach extends previous research by providing efficient verification procedures at both local and global levels.

References

AGARWAL, R. & TANNIRU, M. (1991). Structured tools for rule-based systems. HICSS-24 Proceedings.


CRAGUN, B. J. & STEUDEL, H. J. (1987). A decision-table-based processor for checking completeness and consistency in rule-based expert systems. International Journal of Man-Machine Studies, 26, 633-648.
GEISSMAN, J. R. & SCHULTZ, R. D. (1988). Verification and validation of expert systems. AI Expert, 3, 26-33.
GINSBERG, A. (1987). A new approach to checking knowledge bases for inconsistency and redundancy. Proceedings of the Third Annual Expert Systems in Government Conference, pp. 102-111. Washington, DC.
GINSBERG, A. (1988). Knowledge-base reduction: a new approach to checking knowledge bases for inconsistency and redundancy. Proceedings of the National Conference on Artificial Intelligence, pp. 585-589.
GREEN, C. J. R. & KEYES, M. M. K. (1987). Verification and validation of expert systems. Proceedings of the Western Conference on Expert Systems, WESTEX-87, pp. 38-43. Anaheim, CA: IEEE.
GULATI, D. & TANNIRU, M. (1990). A model based approach to improve performance in rule-based systems. DSI-1990 Proceedings.
HAYES-ROTH, F. (1985). Rule-based systems. Communications of the ACM, 28, 921-932.
LIEBOWITZ, J. (1986). Useful approach for evaluating expert systems. Expert Systems, 3, 86-96.
METTREY, W. (1987). An assessment of tools for building large knowledge-based systems. AI Magazine, 81-89.
MILLER, P. L., BLUMENFRUCHT, S. J., ROSE, J. R., ROTHSCHILD, M., WELTIN, G., SWETT, H. A. & MARS, N. J. I. (1986). Expert system knowledge acquisition for domains of medical workup: an augmented transition network model. Proceedings of the Tenth Annual Symposium on Computer Applications in Medical Care, pp. 30-35. Washington, DC.
MURATA, T. (1984). Petri-Nets and their application: an introduction. In SHI-KUO CHANG, Ed. Management and Office Information Systems. Plenum.
MURRAY, T. J. & TANNIRU, M. R. (1987). Control of inconsistency and redundancy in PROLOG-type knowledge bases. 20th Hawaii International Conference on System Sciences, Hawaii.
NAZARETH, D. L. (1989). Issues in the verification of knowledge in rule-based systems. International Journal of Man-Machine Studies, 30, 255-271.
NGUYEN, T. A. (1987). Verifying consistency of production systems. Proceedings of the Third Conference on Artificial Intelligence Applications, pp. 4-8. Washington, DC: IEEE Computer Society Press.
NGUYEN, T. A., PERKINS, W. A., LAFFEY, T. J. & PECORA, D. (1987). Knowledge base verification. AI Magazine, 69-75.
NGUYEN, T. A., PERKINS, W. A., LAFFEY, T. J. & PECORA, D. (1985). Checking an expert system's knowledge base for consistency and completeness. Proceedings of the 9th IJCAI, pp. 374-378. Menlo Park, CA.
O'KEEFE, R. M., BALCI, O. & SMITH, E. P. (1987). Validating expert system performance. IEEE Expert, 81-90.
PEDERSON, K. (1989). Well-structured knowledge bases. AI Expert, 4, 44-55.
PETERSON, J. L. (1977). Petri-Nets. Computing Surveys, 9, 223-252.
SEDGEWICK, R. (1983). Algorithms. Reading, MA: Addison-Wesley.
SUWA, M., SCOTT, A. C. & SHORTLIFFE, E. H. (1984). Completeness and consistency in a rule-based system. In BUCHANAN, B. G. & SHORTLIFFE, E. H., Eds. Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project, pp. 159-170. Reading, MA: Addison-Wesley.
ZISMAN, M. D. (1977). Use of production systems for modeling asynchronous, concurrent processes. In WATERMAN, D. A. & HAYES-ROTH, F., Eds. Pattern-Directed Inference Systems, pp. 53-68. New York: Academic Press.

Appendix A: integrity checking when rule 9 is added

Relevant set: {b, p, v, t, r}

Step 1: find the minimal cover for the new rule
The minimal cover for rule 9 (b1 → r1) spans the sub-net establishing v from (b, p) and the sub-net establishing r from (v, t).

Step 2: perform local checking at first level of sub-net
Test vectors:
TV1: b1 → v1  [-1 0 0 0 1 0]
TV2: b1 → v2  [-1 0 0 0 0 1]
Result: redundancy of test rule TV1 with rule 6. Propagate v1 to the next level.

Step 3: perform integrity checking at next level
Relevant set: {v, t, r}
Test vectors:
TV1: v1 → r1  [-1 0 0 0 1 0]
TV2: v1 → r2  [-1 0 0 0 0 1]
Result: TV2 subsumes rule 4. No more levels are pursued.

rule 9: b1 → r1
rule 6: b1 → v1
rule 4: v1·t2 → r2
Implication: b1·t2 → r2

Result: possible inconsistency caused by subsumed antecedents and mutually exclusive consequents.

Appendix B: integrity checking when rule 10 is added

Relevant set: {b, p, v, t, r, a, s, d}

Step 1: find the minimal cover for the new rule
The minimal cover for rule 10 (p2·t2 → d2) spans all three sub-nets, A1, A2 and A3.

Step 2: perform local checking at first level of sub-net
Test vectors:
TV1: p2 → v1  [0 0 0 -1 1 0]
TV2: p2 → v2  [0 0 0 -1 0 1]
Result: subsumption of rule 7 in test rule TV1. Propagate v1 to the next level.

Step 3: perform integrity checking at next level
Test vectors:
TV1: v1·t2 → r1  [-1 0 0 -1 1 0]
TV2: v1·t2 → r2  [-1 0 0 -1 0 1]
Result: redundancy of test rule TV2 with rule 4. Propagate r2 to the next level.

Step 4: perform integrity checking at next level
Relevant set: {r, a, s, d}
Test vectors:
TV1: r2 → d1  [0 0 0 0 -1 1 0 0]
TV2: r2 → d2  [0 0 0 0 -1 0 1 0]
TV3: r2 → d3  [0 0 0 0 -1 0 0 1]
Result: test rule TV2 subsumes rule 2.

Implication:
rule 7: b2·p2 → v1
rule 4: v1·t2 → r2   (b2·p2·t2 → r2)
rule 2: a2·r2 → d2   (a2·b2·p2·t2 → d2)
rule 10: p2·t2 → d2

Result: rule 10 may subsume the path defined by rules 7, 4, and 2.