
Knowledge-Based Systems 20 (2007) 225–237 www.elsevier.com/locate/knosys

Checking the consistency of a hybrid knowledge base system

Jaime Ramírez *, Angélica de Antonio

Universidad Politécnica de Madrid, Computer Science School, Spain

Received 22 July 2005; accepted 3 May 2006. Available online 15 September 2006. doi:10.1016/j.knosys.2006.05.019

* Corresponding author. E-mail addresses: jramirez@fi.upm.es (J. Ramírez), angelica@fi.upm.es (A. de Antonio).

Abstract

This paper concerns a method to verify the consistency of a hybrid knowledge base system. We assume that the knowledge base of the verified system supports the representation of production rules and frame taxonomies. Moreover, the knowledge base can also be used to represent non-monotonic and uncertain reasoning. For the purpose of the verification, an ATMS-like theory is created by simulating the deductive process needed to deduce an inconsistency. Next, the ATMS-like theory is processed to compute a specification of the initial fact base that allows for the execution of a deductive tree that leads to the inconsistency.
© 2006 Elsevier B.V. All rights reserved.

Keywords: Verification; Inconsistency; Knowledge base system; Production rules; Frames; Non-monotonic reasoning

1. Introduction

A priority objective in knowledge-based systems (KBS) engineering is quality assurance. Within knowledge engineering, there is a discipline, usually referred to as Verification & Validation (V&V), whose goal is to ensure the quality of a KBS. One area of research in V&V is concerned with establishing whether the KBS meets a series of formal properties. Indeed, a lot of work has been done on verifying KBS consistency. As a result of these research efforts, some computerized methods for identifying anomalies in the KBS structure that could potentially produce incorrect outputs have been proposed.

This paper takes a step forward in the area of V&V methods. It presents a method, called MECORI, for detecting semantic inconsistencies in hybrid KBSs that use production rules and frame hierarchies. A semantic inconsistency occurs whenever the KBS is able to deduce a set of facts that are incompatible with the KBS application domain. Semantic inconsistencies can be specified by integrity constraints (IC). It will be shown what improvements


MECORI offers over its predecessors and how it overcomes many of their limitations. MECORI is a method for verifying knowledge bases (KBs) expressed in a knowledge representation formalism called GKR, which can be used to represent non-monotonic and uncertain reasoning. The rules expressed in the GKR formalism can include variables that can be instantiated as frames, propositions or relationships, and it can, therefore, be used to represent some classes of formulas expressed in second-order logic. Furthermore, MECORI can handle arithmetic constraints defined in the real domain within the antecedents of rules that express constraints on the values of attributes or certainty factors. The MECORI input is a KB in GKR format and a set of ICs. The MECORI output describes, for each IC that the KBS can violate, the sequence of rules that must be fired to deduce the semantic inconsistency and a specification of the initial fact base (FB) for this sequence of rules to be fired. The specification of the initial FB will contain a specification of all the GKR objects that must be included in this FB. The specification of a GKR object will, in turn, contain a set of constraints on the possible characteristics of the object in question. The method is based on the ATMS designed by de Kleer in the sense that it uses the concept of label as a way to



represent a description of a set of FBs. However, the concept of label has been extended substantially in the design of MECORI, since GKR KBs can contain objects of very different types, apart from simple propositions, which are the only kind of objects that can appear in an ATMS-like label. For the purpose of verification, an ATMS-like theory is built by simulating the deductive process followed by the KBS to infer a semantic inconsistency. Later, a deductive tree and a specification of the initial FB that can be used to deduce the inconsistency can easily be obtained from the ATMS-like theory.

Section 2 of this paper reviews other methods that also verify KBS consistency, together with their limitations. Section 3 explains some points related to the kind of KBSs and ICs that are verified by MECORI. In Section 4, we explain the type of non-monotonic reasoning employed by the KBSs verified by MECORI. Section 5 explains how MECORI specifies how a KBS can violate an IC, if possible. In Section 6, the procedure for detecting an inconsistency is explained. Section 7 analyzes the problem of translating a widely known language for defining rule-based systems, called Jess, into GKR. We end with some conclusions about our work and some future lines of research derived from it.

2. Related work

Several methods have been proposed to date to detect inconsistencies in KBSs based on monotonic reasoning. These methods can be grouped into different families according to the approach they follow:

• Tabular methods [1]. Tabular methods make up the first family of methods for detecting anomalies in KBSs. The approach followed by a tabular method consists of comparing the rules of the RB pairwise, in order to establish certain relationships among their premises and conclusions. As these methods only examine pairs of rules, all the inconsistencies in which chains of rules are involved are beyond their scope.
• Methods based on Petri nets [2,3]. The methods in this family model the KB by means of a Petri net, so that the Petri net can be used to simulate the execution of the KB. These methods need to check the model under all the possible initial states in order to guarantee the absence of inconsistencies. Unfortunately, this test can sometimes be computationally very costly.
• Methods based on graphs [4,5]. Graphs are a very attractive tool for representing conceptual dependencies. Moreover, graph theory provides us with analysis techniques to study properties such as connectivity or reachability in a rigorous and formal way. In addition, the soundness of graph-based methods, as with Petri net-based methods, can be formally proved. On the other hand, they share with Petri net-based methods the disadvantage of having to simulate the execution of the system for every possible initial FB.

• Methods based on the generation of labels [6–10]. These methods are grounded on the ATMS designed by de Kleer [11]. As the generation of every possible initial FB is often computationally intractable, these methods borrow some ideas from the ATMS for the purpose of specifying sets of initial FBs that allow an inconsistency to be deduced. These specifications are built without the need to model each possible execution of the KB.
• Methods based on algebraic interpretations [12,13]. These methods are based on the transformation of the KB into an algebraic structure, such as a Boolean algebra, and the application of concepts and procedures related to that algebraic structure to the verification of the KB. This approach is the most reliable, because these methods are solidly grounded on algebraic concepts. Unfortunately, they can only be applied to propositional KBs.

We will also mention another method that cannot be included in any of the previous families. This method, which was proposed by Grégoire and Mazure [14], can test the consistency of RBs that comprise first-order logic clauses with functions. The approach followed by this method consists of converting a first-order logic KB into a propositional KB incrementally, while checking whether the new propositional KB is consistent. The most noteworthy aspect of this method is that, thanks to its incremental nature, it can detect an inconsistency at a very early stage. Nevertheless, its approach can lead to a combinatorial explosion if the number of different instances of a subset of clauses (included in the initial KB) is very high.

There are not many methods that can deal with the verification of non-monotonic RBs. Among them, we can highlight Wu and Lee's method [15], which verifies RBs with production rules under the closed world assumption, and Antoniou's method [21], which analyzes RBs expressed in default logic. Antoniou's method is inspired by tabular methods for monotonic RBs, whereas Wu and Lee's method translates the RB into a high-level extended Petri net.

Analogously, only a few methods have been proposed that can verify systems with uncertainty management [16] or hybrid systems. The first work that dealt with hybrid systems was that of Lee and O'Keefe [17], which characterized a set of new types of anomalies that appear as a result of considering the subsumption relationship among literals in different rules. The subsumption relationship is a consequence of the subclass-of relationships that appear in a frame hierarchy. In order to detect those anomalies, Lee and O'Keefe proposed a method with a tabular approach. Later on, as an extension of Lee and O'Keefe's work, Mukherjee et al. [18] presented a similar procedure to detect dead-end and unreachable rules, taking into account the subsumption relationship. They also proposed a method based on the generation of labels to detect absent rules. Mukherjee et al. also introduced an algorithm to detect anomalies caused by the interaction of rules and monitors


(methods and daemons) during the execution of the KBS.

Finally, we will cite another method to verify hybrid systems, which was proposed by Levy and Rousset [19]. This is the most advanced method proposed to date to test hybrid systems. It can be applied to hybrid systems expressed in the CARIN language, which combines the flexibility of Horn clauses with the expressiveness of a description logic called ALNCR. The method is grounded on theoretical results related to the query containment problem, studied in the database literature. However, its main drawback is that the same query must be evaluated for each valid input, and this process may imply a very high computational cost.

In short, there is a lack of methods that can verify hybrid KBs, non-monotonic reasoning, uncertain reasoning and constraint-based reasoning (rules with arithmetic constraints) altogether. In addition, many of the formal methods proposed so far require executing the model of the KB for each initial FB, which is computationally very costly. MECORI, the method presented in this paper, is intended to fill this gap.

3. Scope of MECORI

The inputs to MECORI are the GKR RB, a set of ICs, the set of invariant links in the frame taxonomy, and the kind of logic that is used in the KB for the management of negation (closed world assumption or 3-valued logic).

3.1. GKR language

The GKR language was designed to be used as a common language into which many languages based on frames (or simply objects) and rules could be translated. The reason for this translation is the possibility of using several verification methods/tools tailored to verify KBSs expressed in the common language. Hence, one of the objectives of GKR's designers was for GKR to be a very expressive language, so that any other language could be translated into it without loss of information.

GKR [20] supports the representation of production rules and a large number of object types in the FB: frame classes and instances, relationships, propositions, attribute values and attribute identifiers. A rule's antecedent in GKR is a disjunctive normal form (DNF) formula made up of literals. A literal is an atom, a negated atom or a linear arithmetic inequation over attribute values and/or certainty factors. An atom states something about some object in the FB. In GKR, a rule's consequent contains a list of actions that can modify the state of an object, or create or destroy objects while executing the KBS. This last characteristic allows us to represent some types of non-monotonic reasoning.

As it is possible to declare variables as relationships and propositions in the rules, the antecedent of a rule is a second-order logic formula. Nevertheless, the actions of


the rules cannot change the type of a relationship or a proposition, so GKR supports a limited subset of second-order logic. Moreover, uncertain reasoning can be represented in GKR by associating certainty factors with attribute values, with tuples in a relationship or with propositions.

Facts in a GKR KB can be considered to fall into two categories: a deducible fact is a fact that is obtained from the execution of the KBS, and an external fact is a fact which cannot be deduced by the KBS and can only be obtained from an external source. We will suppose that all the different facts that can result from binding the same literal are either external or deducible. Hence, we will also use the terms external literal and deducible literal.

As we mentioned at the beginning of this section, GKR KBSs can use either of two kinds of negation management: closed world assumption (CWA) or 3-valued logic. The kind of negation management determines when a fact can be considered true or false, what the effect of the actions is, how the facts and actions can be chained during the KBS execution, and which pairs of actions are contradictory. For instance, in the 3-valued logic there are three truth values: true, false and unknown; a fact will be false if its negation appears in the FB, and it will be unknown if neither it nor its negation appears in the FB. Moreover, the action Add(¬p) deduces the fact ¬p, and the pair of actions Add(p) and Add(¬p) are contradictory. It must be highlighted that the action Add(¬p) cannot be employed under CWA.

In addition, the set of invariant links in the frame taxonomy is provided as an input to MECORI. Invariant links are links of the type subclass_of or subinstance_of that cannot be deleted from the frame taxonomy. For example, in a frame taxonomy in which the frames HEALTHY-PERSON and SICK-PERSON are subclasses of the frame PERSON, and the frame PETER is a subinstance of HEALTHY-PERSON, the subclass links can be considered invariant links, whereas the subinstance link could be deleted during the KBS execution. The invariant links will be used by MECORI to discard deductive trees that require sets of hypotheses in conflict with these invariant links. In this way, the overall performance of the process is improved.

MECORI does not take into account the schema of the FB (except for the invariant links of the frame taxonomy). We define the schema of the FB as the domain of the problem to solve, not the particular problem to solve. The schema of a GKR FB should define aspects such as the frame taxonomy, the relationships, the propositions, and so on. The reason why MECORI does not require the schema of the FB will be explained in Section 6.3.

3.2. Specification of the integrity constraints

An IC defines a consistency criterion over input data, output data or input and output data. The IC form is:



∃x1 ∈ T1 ∃x2 ∈ T2 ... ∃xn ∈ Tn ∃()xn+1 ∈ Tn+1 ∃()xn+2 ∈ Tn+2 ... ∃()xn+m ∈ Tn+m (A ⇒ ⊥)

where A is a second-order logic formula in DNF that includes conditions over any type of GKR object. Each literal in A has an associated scope, which specifies whether the literal is related to input data (external literal) or to output data (deducible literal). For the variables in A, two kinds of quantifiers can be employed: the existential quantifier (with the classical meaning) and the restricted existential quantifier (denoted as ∃()x).

• An IC ∃x ∈ T (A(x) ⇒ ⊥) is violated if at least one object of the class T included in the FB satisfies the conditions imposed on the variable x in the formula A.
• An IC ∃()x ∈ T (A(x) ⇒ ⊥) is violated if every object of the class T included in the FB satisfies the conditions imposed on the variable x in the formula A, and only those conditions.

Let us see a simple example of an IC: ∃x ∈ MAN ∃y ∈ BABY (GiveBirth(x, y)(I) ⇒ ⊥). This IC expresses that it is inconsistent to assume that a man has ever given birth to a baby.

The semantics given to the restricted existential quantifier permits the detection of knowledge gaps. Let us see an example of an IC with a restricted existential quantifier: ∃()x ∈ PATIENT (Is_Ill(x, FLU)(O), (x.Fever = high)(I) ⇒ ⊥). Clearly, having a high fever is not enough to deduce that a patient has flu. So, if a KBS can violate this IC, it is likely that there is a knowledge gap in the KB, that is, the KBS needs more rules.

The ICs can have one of three scopes: input scope, output scope or input–output scope. The scope of an IC depends on the scope of the literals included within it. A literal with input scope states something about the initial FB, while a literal with output scope states something about the final FB (resulting after the execution of the KBS).
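To make the IC format above concrete, the following minimal Python sketch shows one possible in-memory encoding of the flu IC. All class and field names are our own illustrative choices, not part of GKR or MECORI.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List

class Quantifier(Enum):
    EXISTS = "exists"              # classical existential quantifier
    RESTRICTED_EXISTS = "exists()" # restricted existential quantifier

class Scope(Enum):
    INPUT = "I"    # external literal: talks about the initial FB
    OUTPUT = "O"   # deducible literal: talks about the final FB

@dataclass
class Variable:
    name: str
    frame_class: str               # e.g. "PATIENT"
    quantifier: Quantifier

@dataclass
class Literal:
    predicate: str                 # e.g. "Is_Ill" or an attribute test
    arguments: List[str]
    scope: Scope
    negated: bool = False

@dataclass
class IntegrityConstraint:
    variables: List[Variable]
    # antecedent A in DNF: a list of conjunctions, each a list of literals;
    # the IC reads "A implies false"
    dnf: List[List[Literal]]

# The knowledge-gap IC from the text:
# exists() x in PATIENT (Is_Ill(x, FLU)(O), (x.Fever = high)(I)) => false
flu_ic = IntegrityConstraint(
    variables=[Variable("x", "PATIENT", Quantifier.RESTRICTED_EXISTS)],
    dnf=[[Literal("Is_Ill", ["x", "FLU"], Scope.OUTPUT),
          Literal("Fever = high", ["x"], Scope.INPUT)]],
)
```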

3.3. Dynamic aspects

The RBS is assumed to execute with forward chaining or backward chaining. MECORI does not consider explicit control mechanisms or meta-rules. When a rule is fired, we assume that all the actions belonging to the consequent of the rule are executed sequentially.

4. Non-monotonicity and inconsistency

GKR rules can add new facts to the FB, but they can also delete already existing facts. This allows the knowledge engineer to build KBSs with non-monotonic reasoning. So, we could find production rules of the form p → Del(p) under CWA (p → ¬p). This kind of rule (when p is assumed to be provided) is not admissible in an RB from the point of view of classical logic or default logic [21], since it is a logical inconsistency. However, if we examine these rules from the point of view of temporal logic [22], and we rewrite them as ¬p atnext p (where the intended meaning of the operator atnext is: ¬p holds at the next time point after p holds), then these rules should be perfectly admissible in an RB. From our perspective, production rules should be interpreted as rules of the form ¬p atnext p.

If we admit rules of the form ¬p atnext p, we situate ourselves quite far from the concept of inconsistency as defined in other works, so we are going to clarify the meaning of inconsistency in this work:

A deductive tree T that deduces a pair of facts F and F′ is consistent iff: (a) T does not contain a set of contradictory external facts, and (b) the deductive subtree of T that deduces F does not deduce ¬F′ in the end, and vice versa.

This definition implies that the deductive subtree that deduces a fact F must not deny the other fact F′ that must hold at the same time as F, and vice versa. When the KBS executes a deductive process, a deductive tree is evaluated by firing a sequence of rules. A deductive tree defines a partial order for rule firings, so several sequences may correspond to a given deductive tree. The definition shown above is no more than a structural property to be fulfilled by the deductive trees built by the KBS that we want to verify using MECORI. We will call this property Tree_Consistency(dt), where dt is a deductive tree. A deductive tree is a tree of rule firings defined recursively by means of the constructor tree and the constant NIL_TREE (empty tree). As MECORI simulates the KBS execution, it will discard any deductive process that implies the creation of an invalid deductive tree. Next, we define this property formally:

Tree_Consistency(dt) ≡ Tree_Consistency_Aux1(Boundary(dt)) ∧ Tree_Consistency_Aux2(dt, ∅)

Tree_Consistency_Aux1(B) ≡ ¬(∃is ∈ INCONSISTENT_SETS, is ⊆ ∪r∈B Assumed_Facts(r))

Tree_Consistency_Aux2(dt, scope) ≡ (dt = NIL_TREE) ∨
  ∃r ∃a1 ∃a2 ... ∃an (dt = tree(r, (a1, a2, ..., an)) ∧
    scope_in_rule = scope \ Deduced_Facts(r) ∧
    ¬((∃f ∈ scope_in_rule, ∃f′ ∈ Assumed_Facts(r), f = ¬f′) ∨
      (∃f ∈ Deduced_Facts(r), ∃f′ ∈ scope, f = ¬f′)) ∧
    Tree_Consistency_Aux2(a1, scope_in_rule ∪ Assumed_Facts(r)) ∧
    Tree_Consistency_Aux2(a2, scope_in_rule ∪ Assumed_Facts(r)) ∧
    ... ∧
    Tree_Consistency_Aux2(an, scope_in_rule ∪ Assumed_Facts(r)))


where INCONSISTENT_SETS is the set of the different inconsistencies to be considered, the function Boundary(dt) returns the set of rule firings that are leaves of the tree dt, the function Deduced_Facts(r) returns the facts deduced by the rule firing r, and the function Assumed_Facts(r) returns the facts that must hold to permit the rule firing r. In the definition above, the property Tree_Consistency_Aux1 specifies condition (a) in the definition of a consistent deductive tree, and the property Tree_Consistency_Aux2 specifies condition (b).

Let us see an example of an inconsistent RB. Take the production rules R1: r, s → Del(p); R2: t → Add(p); R3: p → Add(q) under CWA. In Fig. 1 we can see the deductive tree for the conjunction ¬p ∧ q, which is supposed to be the antecedent of another rule. The facts p and q are deducible and all the other facts are external. Obviously (see rule R3), in order to deduce q, p must be deduced beforehand, and after having deduced p it is not possible to deduce ¬p.

This example deserves an additional comment. If we assume that the rules are executed with forward chaining and we fire them in the sequence [R1, R3, R2], then the facts p and q will both be true in the final FB. However, if the rules are fired in the sequence [R2, R1, R3], then the facts ¬p and q will be present in the final FB. With the first sequence, the fact q was deduced first, and then the fact p; with the second sequence the facts were deduced the other way round. Our definition of inconsistency includes situations like this one, in which the truth values of the goal facts depend on the order in which they are deduced.

Let us now see an example of an RB that is consistent according to our definition, but inconsistent according to other definitions. Take the production rules R1: n, u → Add(q); R2: s, ¬q → Add(q); R3: q, m, t → Del(p); R4: v → Del(q) under CWA. In Fig. 2 we can see the deductive tree for the conjunction ¬p ∧ q, which is supposed to be the antecedent of another rule. We assume that p and q are deducible facts, and all the other facts are external. We can see that there are six different sequences of rules that correspond to the deductive tree of Fig. 2. However, among them, only three sequences are feasible ([R4, R2, R1, R3], [R1, R3, R4, R2] and [R1, R4, R2, R3]), and all of these three sequences deduce the same truth values for p and q.
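The structural property can be paraphrased in code. The sketch below is our own simplified rendering: facts are plain strings, negation is marked with a leading '!', and a deductive-tree node directly stores the assumed and deduced facts of its rule firing, which abstracts away most of the GKR machinery.

```python
from dataclasses import dataclass, field
from typing import FrozenSet, List, Set

def negated(fact: str) -> str:
    """Complementary fact: 'p' <-> '!p'."""
    return fact[1:] if fact.startswith("!") else "!" + fact

@dataclass
class DTree:
    """One rule firing: the facts it needs (assumed) and the facts it produces."""
    assumed: Set[str]
    deduced: Set[str]
    children: List["DTree"] = field(default_factory=list)

def boundary(dt: DTree) -> List[DTree]:
    """Leaf firings of the tree, i.e. those whose premises are all external."""
    return [dt] if not dt.children else [n for c in dt.children for n in boundary(c)]

def consistent_aux1(leaves: List[DTree],
                    inconsistent_sets: List[FrozenSet[str]]) -> bool:
    """No known-inconsistent set of external facts is assumed by the leaves."""
    assumed: Set[str] = set()
    for leaf in leaves:
        assumed |= leaf.assumed
    return not any(s <= assumed for s in inconsistent_sets)

def consistent_aux2(dt: DTree, scope: Set[str]) -> bool:
    """No firing assumes or deduces the negation of a fact that is in scope."""
    scope_in_rule = scope - dt.deduced
    if any(negated(f) in dt.assumed for f in scope_in_rule):
        return False
    if any(negated(f) in scope for f in dt.deduced):
        return False
    child_scope = scope_in_rule | dt.assumed
    return all(consistent_aux2(c, child_scope) for c in dt.children)

def tree_consistency(dt: DTree,
                     inconsistent_sets: List[FrozenSet[str]]) -> bool:
    return (consistent_aux1(boundary(dt), inconsistent_sets)
            and consistent_aux2(dt, set()))
```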

Fig. 1. Example of an invalid deductive tree.


Fig. 2. Example of a valid deductive tree.

According to the above definition of inconsistency, it is clear that MECORI will not be able to verify some non-monotonic KBSs, in particular, all the KBSs whose deductive trees do not comply with the consistency definition given above.

5. Requirements for getting an inconsistency

As its output, MECORI must specify the ICs that can be violated during some execution of the KBS. For each IC that can be violated, MECORI must generate a report describing which requirements the initial FB must fulfil so that, starting from this initial FB, a certain deductive tree that causes the inconsistency described by the IC can be executed.

MECORI constructs an object called a subcontext to specify what the initial FB must look like and which deductive tree must be executed in order to cause an inconsistency. There can be different initial FBs and different deductive trees that lead to the same inconsistency. So, an object called a context is used to specify all the different ways to violate a given IC. For that purpose, a context is composed of n subcontexts. In turn, a subcontext is defined as a pair (environment, deductive tree), where an environment is composed of a set of metaobjects, and a deductive tree is a tree of rule firings.

A metaobject describes the characteristics that an object which can be present in a FB should have. For each type of GKR object, MECORI uses a different kind of metaobject to describe its characteristics. All the GKR objects, their counterpart metaobjects and the attributes of each metaobject are outlined below:

• Frame: Metaframe (identifier, is_restricted_exist, instance_of, subclass_of, metaattributes, metarelationships)
• Attribute: Metaattribute (identifier, is_restricted_exist, metaframe, metaidattributes, value_conditions, cf_conditions)
• Id-Attribute: Metaid-attribute (identifier, is_restricted_exist)
• Relationship: Metarelationship (identifier, is_restricted_exist, type, tuples, conditions_for_each_tuple)
• Proposition: Metaproposition (identifier, is_restricted_exist, type, truth_value, conditions)

In order to specify a GKR object, a metaobject must include a set of constraints on the characteristics of the GKR object. Some GKR objects may include references to other GKR objects (for example, a frame instance can have references to attributes, and a relationship can include tuples of references to frame instances), so the counterpart metaobjects will contain references to other metaobjects.

It must be mentioned that all the metaobjects share two attributes, identifier and is_restricted_exist. The identifier attribute takes a certain name as its value if the described GKR object is required to have that name; otherwise, the value of this attribute is null. Moreover, the is_restricted_exist attribute of a metaobject takes the value true if it is required that all the GKR objects in the initial FB that satisfy the constraints imposed in this metaobject satisfy only those constraints. Otherwise, it is only required that at least one GKR object in the initial FB satisfies the constraints imposed in the metaobject.

As certain constraints expressed as arithmetic constraints can restrict the attribute values and the certainty factors associated with GKR objects in the rules, a different kind of object called a condition is used to represent these arithmetic constraints. Conditions also appear in environments, together with metaobjects, and they are referenced from and contain references to the metaobjects that somehow occur in them. Due to the presence of references among metaobjects and conditions, one or more networks of metaobjects and conditions can occur in one environment.
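As a rough illustration of the list above, the following Python sketch shows how an environment could be held as a network of metaobjects and conditions. Only a subset of the metaobject kinds is modelled, and the Constraint and Condition shapes are assumptions of ours rather than MECORI's actual data structures.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Constraint:
    """A single requirement on a GKR object, e.g. ('subclass_of', 'PERSON')."""
    kind: str
    value: str
    negated: bool = False

@dataclass
class Metaobject:
    identifier: Optional[str] = None      # required name, or None
    is_restricted_exist: bool = False     # restricted existential quantification
    constraints: List[Constraint] = field(default_factory=list)

@dataclass
class Metaframe(Metaobject):
    instance_of: List["Metaframe"] = field(default_factory=list)
    subclass_of: List["Metaframe"] = field(default_factory=list)
    metaattributes: List["Metaattribute"] = field(default_factory=list)
    metarelationships: List["Metarelationship"] = field(default_factory=list)

@dataclass
class Metaattribute(Metaobject):
    metaframe: Optional[Metaframe] = None
    value_conditions: List["Condition"] = field(default_factory=list)
    cf_conditions: List["Condition"] = field(default_factory=list)

@dataclass
class Metarelationship(Metaobject):
    type: Optional[str] = None
    tuples: List[List[Metaframe]] = field(default_factory=list)

@dataclass
class Condition:
    """An arithmetic constraint over attribute values / certainty factors."""
    expression: str                       # e.g. "X.temperature >= 36.5"
    metaobjects: List[Metaobject] = field(default_factory=list)

@dataclass
class Environment:
    metaobjects: List[Metaobject] = field(default_factory=list)
    conditions: List[Condition] = field(default_factory=list)
```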

Fig. 3 illustrates an example of an environment describing a FB in which the formula Feels(X, Tiredness) ∧ X.temperature ≥ 36.5 is true, where the variable X is declared as an instance of the frame Person. If, for each metaobject in the environment, there exists a GKR object in the FB that fulfils all the requirements imposed on it, then the given formula will hold in the FB.

Since MECORI does not take into account the schema of the FB, it is possible that some of the environments in the context associated with an IC specify a FB that is unacceptable according to the initial schema. Similarly, it could happen that a deductive tree in one subcontext of the context associated with an IC cannot actually be executed given the control strategy used by the KBS. Consequently, the knowledge engineer should inspect every subcontext looking for environments that are unacceptable according to the FB schema, and deductive trees that are not executable according to the control mechanisms. This inspection could even be automated, if the schema and the control strategy were explicitly declared, and knowledge engineer involvement would then not be necessary.

6. How to compute the requirements for getting an inconsistency

The consistency of a KB is checked by computing the context associated with each IC. If the context associated with an IC turns out to be empty, there is no valid initial FB that leads to the violation of this IC. If the contexts of all the ICs are empty, the KB is consistent with regard to the set of ICs.

Although this division is not so clear from the point of view of the algorithm specification (see Section 6.5), conceptually the method can be divided into two phases. In the first phase, the AND/OR decision tree associated with the IC is expanded by means of a backward chaining simulation of the real rule firing. The leaves of this tree are rules that only contain external goals. At this point, the difference between a deductive tree and an AND/OR decision tree should be explained. While a deductive tree can be viewed as one and only one way of achieving a certain goal (that is, of deducing a bound formula or of firing a rule), an AND/OR decision tree comprises one or more deductive trees, and therefore it specifies one or more ways to achieve a certain goal. During the first phase, the metaobjects are built and propagated from one rule to another. In this propagation, some constraints are added to the metaobjects due to the rule literals and the declaration part of the rule, and some constraints are removed from the metaobjects due to the actions. In the second phase, the decision tree is contracted by means of context operations, which will be explained further on, and the metaobjects associated with external goals and the conditions associated with arithmetic constraints are inserted into the subcontexts.

Next, we will explain these two phases. However, the treatment of the conditions will only be covered superficially, since it is explained in detail in [23].

Fig. 3. Environment.



6.1. Expanding the AND/OR decision tree

For each IC, first of all, MECORI creates a new metaobject for each variable of the IC and for each referenced GKR object. Some constraints are also derived from each IC literal and from the declaration part, and they are added to the metaobjects. As the variables of the IC can have existential or restricted existential quantification, a constraint is also inserted into the metaobjects to represent this aspect. Moreover, some conditions are created to represent the arithmetic constraints over attribute values and certainty factors that appear in the IC. As a result of this process, a set of metaobjects and conditions is built. This set describes the GKR objects that allow the IC literals to hold.

Fig. 5 illustrates the set of metaobjects and conditions created for the IC shown at the bottom of Fig. 4. We need two metaframes (OBJ1 and OBJ4) for the variable X and the GKR frame instance driving-licence, respectively, one metaattribute (OBJ2) for the age GKR attribute, one metarelationship (OBJ3) for the Own GKR relationship, and three conditions (COND1, COND2 and COND3) for X.age ≤ 17, cf(X.age) = 1 and cf(Own(X, driving-licence)) = 1, respectively.

The arrows in Fig. 5 illustrate the presence of references to other metaobjects and conditions.

Each pair (l, A), where l is a literal and A is the set of metaobjects associated with the object names and variables in l, is called a goal. Obtaining the context of an IC involves computing the context associated with each goal included in the IC. If the goal comprises an external literal, its context is created. The creation operation will be defined later on. In order to compute the context of a deducible goal, MECORI has to generate the contexts associated with all the rules that deduce the goal (the conflict set). For the purpose of deciding whether a rule deduces a goal, MECORI checks whether there exists any action in the rule that is unifiable with the goal. If the goal and the action are unifiable, then the metaobjects bound to the variables and GKR object names in the goal are propagated to the action. In this step, the constraints in the propagated metaobjects that are deduced by the action are deleted.

In the example of Fig. 4, the IC comprises an external literal (or input literal) and a deducible literal (or output literal). We assume that MECORI finds only one rule (R1) that deduces the goal associated with the deducible literal. As this rule includes an action that is unifiable with the deducible goal, the metaobjects OBJ1, OBJ3 and OBJ4 are propagated to the action. In Fig. 6, we can see that the reference between OBJ3 and OBJ1 is removed due to the goal-action unification, since the action can only deduce the tuple referenced in the goal.

A GKR rule antecedent contains conjunctions joined by disjunction operators. Hence, to compute the context of a rule, the context of each conjunction must be computed. In order to compute the context of a conjunction, the context of each goal included in the conjunction must be computed. A processing similar to that of an IC is carried out over each conjunction before computing the contexts of the included goals. In this processing, new metaobjects and conditions are created and some constraints are added to the metaobjects. However, as some metaobjects of the current rule may have been propagated from another rule (or from the IC), these metaobjects may already contain some constraints. Hence, it could happen that a new constraint c is added to a metaobject that already contains a constraint c′ such that {c, c′} is contradictory. In this case, the current conjunction (of the current rule) is discarded, since it cannot be used to deduce the current goal.
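A much simplified sketch of the conflict-set selection and of the propagation of metaobjects from a goal to a matching action is given below. The name-and-arity unifiability test and all type names are our own placeholders; real GKR unification is considerably richer (constants, types, negated goals under CWA, etc.).

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Action:
    kind: str                 # "Add" or "Del"
    predicate: str            # relationship, proposition or attribute name
    arguments: List[str]      # variables or GKR object names

@dataclass
class Rule:
    name: str
    consequent: List[Action]

@dataclass
class Goal:
    predicate: str
    arguments: List[str]
    bindings: Dict[str, object]   # variable / object name -> metaobject

def unifiable(goal: Goal, action: Action) -> bool:
    """Rough test: an Add action over the same predicate with the same arity."""
    return (action.kind == "Add"
            and action.predicate == goal.predicate
            and len(action.arguments) == len(goal.arguments))

def conflict_set(goal: Goal, rules: List[Rule], used: List[str]) -> List[Rule]:
    """Rules, not already used on this branch, with an action unifiable with the goal."""
    return [r for r in rules
            if r.name not in used
            and any(unifiable(goal, a) for a in r.consequent)]

def propagate_metaobjects(goal: Goal, action: Action) -> Dict[str, object]:
    """Carry the goal's metaobjects over to the matching action; the constraints
    deduced by the action would be removed from them at this point."""
    return {arg_a: goal.bindings.get(arg_g)
            for arg_g, arg_a in zip(goal.arguments, action.arguments)}
```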

Fig. 4. Example with an IC and a rule.

Fig. 5. Metaobjects and conditions for the IC.




Fig. 6. Metaobjects and conditions for the rule.

So far, the treatment of the metaobjects quantified existentially has been explained. However, nothing has been said yet about the treatment of the metaobjects quantified with the restricted existential quantifier. If a metaobject is associated with a variable quantified with the restricted existential quantifier, no new constraint can be added to the metaobject in any rule. The only constraints in the metaobject will be the constraints coming from the IC. If, in a rule, some new constraint that is not already present in the metaobject must be added to it, this yields an inconsistency, and the current conjunction (of the current rule) is discarded.

6.2. Contracting the AND/OR decision tree

First of all, we will outline the context operations. Next, we will show how these context operations are employed to contract the decision tree.

6.2.1. Context operations

In this section, we define the context operations: creation of a context, concatenation of a pair of contexts and combination of a list of contexts.

(a) Creation. A context with a unique subcontext is created from an external goal h: C(h) = {(E, NIL_TREE)}, where the environment E comprises all the metaobjects included in h. The external literal included in h will hold in any FB that satisfies all the constraints specified in E.
(b) Concatenation of a pair of contexts. Let C1 and C2 be a pair of contexts and Conc(C1, C2) be the context resulting from the concatenation; then Conc(C1, C2) = C1 ∪ C2.
(c) Combination of a list of contexts. Let C1, C2, ..., Cn be the list of contexts, and Comb(C1, C2, ..., Cn) be the context resulting from the combination. The form of this resulting context is:

Comb(C1, C2, ..., Cn) = {(Ek1 ∪ Ek2 ∪ ... ∪ Ekn, DTk1 * DTk2 * ... * DTkn) s.t. (Ei, DTi) ∈ Ci ∧ order(k1, k2, ..., kn)}

where

order(k1, k2, ..., kn) is a predicate that only holds for a unique permutation of (1, 2, ..., n). The aim of this predicate is to ensure that the method yields the same deductive tree after combining the same deductive trees in different orders. Further on, this property of the combination operation helps MECORI to detect redundant subcontexts (see Section 6.6).

The definition of the context combination operation must be completed by defining two additional operations, the union of environments and the combination of deductive trees:

(c.1) Union of environments (Ei ∪ Ej). This operation consists of the union of the sets of metaobjects and conditions Ei and Ej. After the union of both sets, MECORI checks whether any pair of metaobjects can be merged. A pair of metaobjects will be merged if they contain a pair of constraints c1 and c2, respectively, such that c1 and c2 specify the same name. As a result of this fusion, the new metaobject could be invalid if it contains contradictory constraints. In this case, the resulting environment will be invalid, and it will be discarded. Moreover, after the union of two environments, MECORI also checks whether the resulting set of conditions can be satisfied or, in other words, whether the resulting set of conditions is feasible. Finally, if the resulting environment represents an invalid input, that is, if it specifies an initial FB that violates an input IC, then this environment will also be discarded.
(c.2) Combination of deductive trees (DTi * DTj). Let DTi and DTj be deductive trees; then DTi * DTj is the deductive tree that results from constructing a new tree whose root node represents an empty rule firing and whose two subtrees are DTi and DTj. If either of the two subtrees is empty, the resulting deductive tree is the other subtree.

In the combination of deductive trees, it might make sense to check whether there is a pair of contradictory actions A and B, such that A ∈ DTi (A appears in the consequent of some rule in DTi) and B ∈ DTj; and, if there is such a pair, it might seem a good idea to discard the pair of deductive trees to be combined. Nevertheless, MECORI does not work in this way, because, if it did, it could discard sequences of rule firings that are feasible according to the control mechanisms. At this point, it should be recalled that our method does not take control mechanisms into account; however, we explained in Section 5 how this deficiency can be overcome. For instance, in the example of Fig. 2, rules R1 and R4 comprise a pair of contradictory actions. Because of the presence of this pair, three of the possible sequences of rule firings for that deductive tree are invalid for the final goal, but there are still three valid sequences for that same deductive tree. So, it could be said that MECORI follows Murphy's law (anything that can go wrong, will go wrong), in the sense that it chooses the most favorable option for deriving inconsistencies when not enough information is available.
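The three context operations can be summarised in code. The sketch below is ours: environments are reduced to frozen sets of metaobject identifiers, the validity checks of (c.1) are collapsed into a single hook that may return None, and the order predicate is realised by always combining contexts left to right.

```python
from dataclasses import dataclass
from typing import FrozenSet, List, Optional, Tuple

Environment = FrozenSet[str]          # here: just a set of metaobject ids

@dataclass(frozen=True)
class DTree:
    rule: Optional[str]               # None models the empty rule firing
    children: Tuple["DTree", ...] = ()

NIL_TREE: Optional[DTree] = None
Subcontext = Tuple[Environment, Optional[DTree]]
Context = List[Subcontext]

def create(external_goal_metaobjects: Environment) -> Context:
    """(a) Creation: one subcontext, empty deductive tree."""
    return [(external_goal_metaobjects, NIL_TREE)]

def conc(c1: Context, c2: Context) -> Context:
    """(b) Concatenation: plain union of subcontexts."""
    return c1 + [s for s in c2 if s not in c1]

def union_env(e1: Environment, e2: Environment) -> Optional[Environment]:
    """(c.1) Union of environments; return None if the merge is contradictory.
    The real method merges metaobjects with the same name and checks conditions."""
    return e1 | e2

def combine_trees(t1: Optional[DTree], t2: Optional[DTree]) -> Optional[DTree]:
    """(c.2) Hang both trees under a new empty rule firing."""
    if t1 is None:
        return t2
    if t2 is None:
        return t1
    return DTree(rule=None, children=(t1, t2))

def comb(contexts: List[Context]) -> Context:
    """(c) Combination: one subcontext per choice of a subcontext in each context,
    always combined in the same (left-to-right) order."""
    result: Context = [(frozenset(), NIL_TREE)]
    for ctx in contexts:
        new: Context = []
        for env_acc, tree_acc in result:
            for env, tree in ctx:
                merged = union_env(env_acc, env)
                if merged is not None:            # discard invalid environments
                    new.append((merged, combine_trees(tree_acc, tree)))
        result = new
    return result
```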


6.2.2. Applying the context operations

The contraction phase begins when all the leaves of the AND/OR decision tree are external facts. At this moment, MECORI computes the contexts associated with the external goals by creating a context for each external goal. Next, computing the context associated with a conjunction involves combining the contexts associated with each goal included in the conjunction by means of a combination operation. Then, the actions of each deductive tree in the context resulting from the previous combination are applied to the conditions generated from the current conjunction in the expansion phase. The conditions resulting from this step are inserted into each environment included in the context resulting from the combination. These resulting conditions are defined only in terms of external metaobjects, that is, objects that must be provided as inputs in the initial FB. In this way, the context associated with a conjunction is computed.

Later, the context associated with a disjunction (or with a rule) is obtained by concatenating the contexts associated with the conjunctions. Next, the current rule is added to each deductive tree included in the context associated with the rule. Because of the context operations, the root nodes of all the deductive trees included in this context are empty rule firings (see the combination of deductive trees in Section 6.2.1). So, when the current rule is added to one of these deductive trees, the firing of this rule replaces the empty rule firing. The context resulting from the previous step is called the updated context associated with the rule. Since in GKR a rule can be executed one or more times consecutively, due to the presence of recurrent actions over attributes or certainty factors, MECORI will detect and represent this aspect in the deductive trees (see [23]).

If there exists only one rule to deduce a goal, the context associated with the goal is equal to the updated context associated with the rule. Otherwise, to obtain the context associated with the goal, the updated contexts associated with each rule included in the conflict set of the goal must be concatenated.

In short, thanks to the context operations, the constraints generated from the external facts (inside the metaobjects) are propagated forwards from the leaves of the AND/OR decision tree to the IC. Thus, all these constraints are collected in the context associated with the IC. For the example presented in this section, the context associated with the IC is shown in Fig. 7. This context is worked out after combining the contexts associated with the two goals included in the IC. The context associated with the input goal is calculated by creating the context for the goal, and the context associated with the output


goal is equal to the context associated with the rule. In this example, as there is only one literal in the antecedent of the rule, the context of the rule is equal to the context associated with that unique literal. Given that this literal is external, the context associated with it is worked out as a result of the creation operation.

6.3. Checking the consistency of the taxonomy relationships in the simulation

As we mentioned before, MECORI does not know the schema of the FB, that is, it does not know the initial frame taxonomy. The reason is that this information would not be useful for MECORI. As MECORI expands the decision tree backwards, it does not yet know the actions that will be executed forwards from the initial FB to the current state of the simulated reasoning. Thus, MECORI cannot find out the state of the frame taxonomy in the current state of the simulated reasoning, and so it cannot use this information to check the consistency of the assumed taxonomy relationships during the process.

On the other hand, as taxonomic relationships are transitive and some constraints included in the metaobjects are taxonomy links, it is not enough to propagate only these links in the first phase of the method: the transitive closure of the links included in the metaobjects must also be propagated. In this way, the detection of invalid metaobjects due to the presence of contradictory taxonomy relationships is exhaustive. The transitive closure is initially created from the IC, with taxonomy links derived from the declaration part of the IC and the invariant links (provided as input to MECORI). Next, the transitive closure can be updated in two ways during the expansion phase: first, by adding new taxonomy links coming from the conjunctions; and second, by deleting taxonomy links due to the simulated effect of the actions over the metaframes. These updates force MECORI to recalculate the transitive closure incrementally or decrementally.

Let us see an example of this detection of invalid metaobjects: let OBJ be a metaframe that possesses the constraint

Fig. 7. Context associated with the IC.



is_subclass_of MAN. We assume that the constraints MAN is_subclass_of PERSON and WOMAN is_subclass_of PERSON are invariant links. If, later on, a constraint is_not_subclass_of PERSON is added to OBJ, it is clear that, apparently, this constraint does not conflict with the old constraint is_subclass_of MAN. However, if the transitive closure is worked out, the conflict between is_not_subclass_of PERSON and the deduced constraint is_subclass_of PERSON becomes evident.
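A small sketch of the closure computation and of the resulting conflict check is shown below, under our own simplified representation in which taxonomy links are just pairs of class names; MECORI's actual constraints and incremental recomputation are richer than this fixpoint loop.

```python
from itertools import product
from typing import Set, Tuple

Link = Tuple[str, str]   # (subclass, superclass), e.g. ("MAN", "PERSON")

def transitive_closure(links: Set[Link]) -> Set[Link]:
    """Saturate the is_subclass_of links with a simple fixpoint."""
    closure = set(links)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(closure), list(closure)):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure

def taxonomy_conflict(positive: Set[Link], negative: Set[Link]) -> bool:
    """A metaobject is invalid if a deduced is_subclass_of link is also
    asserted as is_not_subclass_of."""
    return bool(transitive_closure(positive) & negative)

# The example from the text: OBJ is_subclass_of MAN, MAN is_subclass_of PERSON
# (invariant); adding OBJ is_not_subclass_of PERSON yields a conflict.
positive = {("OBJ", "MAN"), ("MAN", "PERSON"), ("WOMAN", "PERSON")}
negative = {("OBJ", "PERSON")}
assert taxonomy_conflict(positive, negative)
```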

Fig. 8. Invalid deductive subtree.

6.4. Dealing with non-monotonic reasoning

MECORI simulates the non-monotonic reasoning (see Section 4) by adding and deleting constraints in the metaobjects. However, these operations and the check for contradictions in the metaobjects are not enough to ensure the consistency of the non-monotonic reasoning. Therefore, a set of assumed proposition names and assumed tuples of metarelationships (SAPT) must also be propagated and updated during the first phase of the method. Some of these proposition names and tuples may be negated.

Let us clarify the utility and the treatment of the SAPT by means of an example. In this example, in order to simplify the explanation, we will focus on the treatment of the SAPT, and other aspects of the method, already mentioned, will be left out. In Fig. 8, a deductive subtree with two rules, R1 and R2, and an IC is shown. According to the definition of consistency given in Section 4, the presence of R1 makes the subtree invalid. If the SAPT is not employed, it will not be detected that the subtree is invalid, because the metarelationship associated with the relationship R will not be propagated to the rule R1.

For the IC defined in Fig. 8, MECORI creates the SAPT as SAPT = {S1(o1, o2), R1(o1, o2), p}, where p is the name of a proposition, S1 and R1 are metarelationships associated with the relationship names S and R, respectively, and o1 and o2 are metaframes associated with the frame variables x and y, respectively. Next, two copies of this SAPT are propagated to the rules R1 and R2, respectively. In R1, the SAPT is updated by deleting the tuple S1(o1, o2) because of the presence of the action Add(S(x, y)); and in R2, the SAPT is updated by deleting the proposition p because of the presence of the action Add(p). These deletions are carried out because neither the tuple nor the proposition needs to be assumed in its respective deductive tree any longer, since the rules R1 and R2 will deduce them just before the moment at which they must hold. Moreover, the tuple R2(o1, o2) and the proposition name t are added to the SAPT in R1, and the negated proposition name ¬p and the proposition name t are added to the SAPT in R2. In the SAPT of R1, a conflict arises due to the presence of the tuples R2(o1, o2) and R1(o1, o2), whose metarelationships are associated with the same relationship R. Hence, the deductive subtree is invalid. A similar conflict between p and ¬p could have arisen in R2, but the effect of the action Add(p) avoided it.
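The propagation of the SAPT through a rule firing can be sketched as follows. The encoding is ours and deliberately crude: assumed tuples and proposition names are plain strings with an explicit negation flag, and the conflict over two tuples of the same relationship is modelled simply as a fact and its negation.

```python
from dataclasses import dataclass
from typing import FrozenSet

@dataclass(frozen=True)
class Assumed:
    """An assumed proposition name or metarelationship tuple, possibly negated."""
    name: str                    # e.g. "p" or "R(o1, o2)"
    negated: bool = False

    def complement(self) -> "Assumed":
        return Assumed(self.name, not self.negated)

SAPT = FrozenSet[Assumed]

def apply_rule_to_sapt(sapt: SAPT,
                       deduced: FrozenSet[Assumed],
                       newly_assumed: FrozenSet[Assumed]) -> SAPT:
    """Propagate a copy of the SAPT through one rule firing: drop the elements
    the rule deduces (they need not be assumed any more) and add the elements
    assumed by the rule's conjunction."""
    return (sapt - deduced) | newly_assumed

def sapt_conflict(sapt: SAPT) -> bool:
    """The deductive subtree is invalid if the SAPT assumes a fact and its negation."""
    return any(a.complement() in sapt for a in sapt)

# Rough rendering of the example around Fig. 8 (names are ours): the conflicting
# assumption over R added in R1 is modelled here as a negated R-tuple.
ic_sapt: SAPT = frozenset({Assumed("S(o1, o2)"), Assumed("R(o1, o2)"), Assumed("p")})
r1_sapt = apply_rule_to_sapt(
    ic_sapt,
    deduced=frozenset({Assumed("S(o1, o2)")}),       # effect of Add(S(x, y))
    newly_assumed=frozenset({Assumed("R(o1, o2)", negated=True), Assumed("t")}),
)
assert sapt_conflict(r1_sapt)
```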


6.5. Specification of the algorithm

In this section, the steps of MECORI are outlined. As can be deduced from the algorithm specification below, MECORI does not wait until the whole decision tree has been expanded to begin contracting it. Instead, as soon as a decision subtree is ready to be contracted, MECORI applies the context operations to it.

The method can be specified in terms of three processes. First of all, the process for computing the context associated with an IC is invoked. In turn, this process needs to compute the context associated with each subgoal in the IC. In addition, the process for computing the context associated with a subgoal requires the process for computing the context associated with a conjunction. Finally, the process for computing the context associated with a conjunction recursively calls the process for computing the context associated with a subgoal, for each subgoal in the conjunction.

Process for computing the context associated with an IC
Inputs: IC, RB, type of negation, invariant links
Output: context
1. FOR each conjunction in the IC:
  1.1. Create metaobjects for the variables and the GKR object names ⇒ metaobjects.
  1.2. Fill some metaobjects' attributes according to the literals and the variable declarations.
  1.3. Create the SAPT.
  1.4. Create the transitive closure according to the taxonomy links defined in the conjunction and in the invariant links.
  1.5. Generate the conditions over the certainty factors and the attribute values.
  1.6. FOR each subgoal in the conjunction:
    1.6.1. Compute the context associated with the subgoal.
  1.7. Combine the contexts associated with all the subgoals ⇒ context of conjunction without conditions.
  1.8. FOR each deductive tree in context of conjunction without conditions:
    1.8.1. Apply the actions of the deductive tree to the conditions ⇒ updated conditions.
  1.9. Add the updated conditions to the context of conjunction without conditions ⇒ context of conjunction.
  1.10. IF there was a contradiction in steps 1.7 or 1.9 THEN context of conjunction := ∅
2. Concatenate the contexts associated with all the conjunctions ⇒ context
3. RETURN context

Process for computing the context associated with a goal
Inputs: goal, RB, type of negation, SAPT, used rules, transitive closure
Output: context associated with the goal
1. Obtain the rules that can be chained with the goal according to the type of negation ⇒ conflict set
2. Remove used rules from the conflict set
3. IF the conflict set is not empty THEN
  3.1. FOR each rule in the conflict set:
    3.1.1. Propagate the metaobjects from the goal to the rule.
    3.1.2. FOR each conjunction in the rule:
      3.1.2.1. Compute the context associated with the conjunction ⇒ (context of conjunction, rule firing)
      3.1.2.2. Add the rule firing to each deductive tree of the context of conjunction ⇒ updated context of conjunction
    3.1.3. Concatenate the updated contexts associated with all the conjunctions ⇒ updated context of rule
  3.2. Concatenate the updated contexts associated with all the rules ⇒ context associated with the goal
ELSE
  3.1. Create the context associated with the goal.
4. RETURN context associated with the goal

Process for computing the context associated with a conjunction
Inputs: rule, action, propagated metaobjects, conjunction, RB, type of negation, SAPT, used rules, transitive closure
Outputs: context associated with the conjunction, rule firing
1. Create copies of the propagated metaobjects ⇒ copies of propagated metaobjects
2. Create a copy of the SAPT ⇒ copy of SAPT
3. Create a copy of the transitive closure ⇒ copy of transitive closure
4. Apply the action to the copies of propagated metaobjects
5. Create metaobjects for the free variables and the free object names ⇒ bound conjunction
6. Fill some metaobjects' attributes in bound conjunction according to the literals in the conjunction.
7. Create the rule firing according to the bound conjunction and the rule ⇒ rule firing


8. Remove the constraints deduced by the rule firing from the copy of transitive closure and the copy of SAPT.
9. Add the constraints assumed by the conjunction to the copy of transitive closure and the copy of SAPT.
10. Create the conditions according to the arithmetic constraints in bound conjunction ⇒ conditions
11. FOR each subgoal in bound conjunction:
  11.1. Compute the context associated with the subgoal
12. Combine the contexts associated with all the subgoals ⇒ context of conjunction without conditions
13. FOR each deductive tree in context of conjunction without conditions:
  13.1. Apply the actions of the deductive tree to the conditions ⇒ updated conditions
14. Add the updated conditions to the context of conjunction without conditions ⇒ context of conjunction
15. IF there was a contradiction in steps 6, 9, 12 or 14 THEN RETURN (∅, _) ELSE RETURN (context of conjunction, rule firing)

6.6. Removing redundancy

In this section, we define the concept of redundant information in the contexts. First of all, we define when a metaobject is redundant in an environment:

A metaobject a is redundant in an environment E iff ∃b ∈ E\{a}, a ≠ b, a σ b

where a σ b denotes that the metaobject b is subsumed by the metaobject a; in other words, the metaobject b is more restrictive than the metaobject a. Roughly, we can say that the metaobject b is subsumed by the metaobject a iff ∀c (constraint) ∈ a, c ∈ b. For the sake of conciseness, a more formal definition will not be given in this paper. Logically, if a metaobject is redundant in an environment, it can be removed from the environment without loss of information. The reason for this removal is that the non-redundant metaobject entails at least the same facts as the redundant metaobject.

Redundant subcontexts can also appear in contexts:

A subcontext (E, DT) is redundant in a context C iff ∃(E′, DT′) ∈ C\{(E, DT)}, E′ σe E ∧ RulesTree(DT) = RulesTree(DT′)

where E′ σe E denotes that the environment E is subsumed by the environment E′, and the function RulesTree(DT) returns the tree of rules included in the deductive tree DT. Roughly, we can say that the environment E is subsumed by the environment E′ iff ∀obj′ ∈ E′ ∃obj ∈ E, obj′ σ obj. The reason for removing redundant subcontexts is that the non-redundant subcontext specifies at least the same initial FBs as the redundant subcontext. However, the removal of redundant subcontexts may imply the loss of valuable information regarding the detection of redundant conjunctions in the rules. Hence, MECORI leaves the decision of whether or not to remove the redundant subcontexts to the knowledge engineer.
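The two redundancy checks can be sketched as below, with metaobjects reduced to sets of constraint labels and the deductive tree of a subcontext abstracted to an identifier of its tree of rules; these simplifications are ours.

```python
from typing import FrozenSet, List, Tuple

Metaobject = FrozenSet[str]          # here: just a set of constraint labels
Environment = List[Metaobject]

def subsumes(a: Metaobject, b: Metaobject) -> bool:
    """a subsumes b (b is more restrictive) if every constraint of a is in b."""
    return a <= b

def redundant_metaobjects(env: Environment) -> List[Metaobject]:
    """Metaobjects subsumed by some other, more restrictive metaobject of the
    same environment; they can be dropped without loss of information."""
    return [a for i, a in enumerate(env)
            if any(i != j and a != b and subsumes(a, b)
                   for j, b in enumerate(env))]

def env_subsumed(e_prime: Environment, e: Environment) -> bool:
    """E is subsumed by E' if every metaobject of E' subsumes some metaobject of E."""
    return all(any(subsumes(o_prime, o) for o in e) for o_prime in e_prime)

def redundant_subcontext(sub: Tuple[Environment, str],
                         context: List[Tuple[Environment, str]]) -> bool:
    """(E, DT) is redundant if another subcontext with the same tree of rules
    has an environment that subsumes E."""
    env, rules_tree = sub
    return any(other is not sub
               and other[1] == rules_tree
               and env_subsumed(other[0], env)
               for other in context)
```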



7. Verifying Jess knowledge base systems with MECORI

In order to further demonstrate the utility of MECORI, it is interesting to explain how MECORI may be applied to verify a KBS defined in a widely known language for representing rule-based systems. For that purpose, we have chosen the Jess language. Jess (http://herzberg.ca.sandia.gov/jess/index.shtml) is a programmer's library for Java that allows the programmer to build KBSs based on rules and objects. This library also supplies an interpreter that permits the definition of rules, facts, etc., as well as firing the rules forwards (Rete algorithm) or backwards. This interpreter accepts a language, also called Jess, in which it is possible to define rules, facts, modules, functions, etc.

In order to use MECORI to verify a Jess KBS, the Jess KB should be translated into a GKR KB. How to carry out this translation is analyzed in this section. The main problem is basically related to the difficulty, and even the impossibility, of translating some constructs of the Jess language into the GKR language. The major aspect that can be represented in Jess but not in GKR is the definition of functions. Moreover, it is not possible to include calls to user-defined functions in the GKR rules. However, this limitation of GKR is not relevant from the point of view of MECORI, because the verification of that kind of knowledge is beyond the scope of MECORI. Instead, for the purpose of verifying functions, program verification techniques should be employed.

There are some constructs in the Jess rules that have no counterparts in GKR: the forall conditional element, the exists conditional element, and the logical conditional element. Nevertheless, these elements can be considered syntactic sugar, since they can be simulated with a bit more effort on the designer's side. In this sense, the forall conditional element can be simulated by replacing it with the enumeration of all the different instances. Moreover, the exists conditional element is useful when you want a rule to fire only once, even though there may be many facts that could potentially activate it. Therefore, this construct can be simulated by defining the rule so that its firing adds a fact to the working memory, while the left-hand side of the same rule requires that fact not to hold. Finally, the logical conditional element may be useful to implement a truth maintenance system to support non-monotonic reasoning. However, the automated retraction of the facts that lose their logical support in the inference network can also be simulated by incorporating new rules. These new rules would be in charge of retracting the entailed facts whose supporting facts turn out to be false.


With regard to the available types for representing values, any type available in Jess is also available in GKR. Nevertheless, MECORI can only deal with values defined in the real domain (see [23]) or in discrete domains (with the predicates = and ≠) so far. The Jess language provides a very rich syntax for defining tests or conditions over the values that can be bound to the variables in the left-hand side of the rules. However, no test has been identified that cannot be expressed in the GKR language, except for the tests related to regular expressions. Apart from the issues already discussed in this section, no other limitation of expressiveness has been detected when tackling the problem of translating Jess into GKR. Conversely, GKR provides constructs for which no counterparts exist in Jess, such as elements related to second-order logic, to the dynamic modification of the frame taxonomy, and to multiple inheritance.

8. Conclusions

The most noteworthy and innovative points of this method are: first, it can deal with hybrid KBSs that use some type of non-monotonic reasoning; second, MECORI can deal with production rules that contain second-order logic formulae; third, MECORI can verify KBSs with uncertain reasoning; and fourth, our method can deal with production rules that include arithmetic constraints on attribute values and certainty factors.

For the purpose of verification, an ATMS-like theory is built, as in COVADIS or KB-REDUCER. However, unlike these methods, our method creates, updates and propagates metaobjects backwards while the AND/OR decision tree is being expanded, and later it propagates metaobjects forwards within contexts while the AND/OR decision tree is being contracted.

We think that the proposed approach can be applied to large KBSs with reasonable efficiency. As the proposed method only focuses on the deduction of an inconsistency by using the available rules, in the average case the computational cost will not depend on the size of the whole RB, but on the size of the decision tree expanded for the IC, which will normally involve a small percentage of the total set of rules. In addition, as discussed in Section 2, the proposed method does not need to consider all the different initial FBs of the verified KBS, as other methods actually would do. Instead, MECORI only simulates the execution of the KBS for each IC to be considered. So far, we have also proved the partial soundness of the main algorithms used by MECORI [24].

9. Future work

Currently, we are working on adapting the proposed method so that it can verify a KBS based on an ontology defined in the OWL Lite language [25].

8. Conclusions

The most noteworthy and innovative points of this method are: first, it can deal with hybrid KBSs that use some form of non-monotonic reasoning; second, MECORI can deal with production rules that contain second-order logic formulae; third, MECORI can verify KBSs with uncertain reasoning; and fourth, the method can deal with production rules that include arithmetic constraints on attribute values and certainty factors.

For the purpose of verification, an ATMS-like theory is built, as in COVADIS or KB-REDUCER. Unlike these methods, however, our method creates, updates and propagates metaobjects backwards while the AND/OR decision tree is being expanded, and later propagates metaobjects forwards within contexts while the AND/OR decision tree is being contracted.

We believe that the proposed approach can be applied to large KBSs with reasonable efficiency. Since the method focuses only on the deduction of an inconsistency using the available rules, its computational cost in the average case does not depend on the size of the whole RB, but on the size of the decision tree expanded for the IC, which normally involves a small percentage of the total set of rules. In addition, as discussed in Section 2, the proposed method does not need to consider all the different initial FBs of the verified KBS, as other methods would; instead, MECORI only simulates the execution of the KBS for each IC under consideration. So far, we have also proved the partial soundness of the main algorithms used by MECORI [24].

9. Future work

We are currently adapting the proposed method so that it can verify a KBS based on an ontology defined in the OWL Lite language [25]. This adaptation leads us to consider the existence of taxonomic relations and the possibility of specifying properties of relations, such as transitivity and symmetry. These issues also imply the existence of two interconnected levels of inference in a KBS, inference based on production rules and inference based on ontology axioms, which will have to be simulated somehow by the adapted method.

Another promising line of work we are tackling is an extension of the proposed method to verify the reasoning module of a deliberative agent cohabiting a dynamic environment with other agents [26]. In this dynamic environment, the truth value of some external facts may change during the reasoning process as a result of the reception of new messages or stimuli coming from the environment of the verified agent. This last issue can be viewed as an extension of the notion of non-monotonic reasoning.

References

[1] T.A. Nguyen, W.A. Perkins, T.J. Laffey, D. Pecora, Checking an expert system knowledge base for consistency and completeness, in: Proceedings of the 9th International Joint Conference on AI (IJCAI'85), 1985, pp. 375–379.
[2] P. Meseguer, A new method to checking rule bases for inconsistency: a petri net approach, in: Proceedings of the ECAI-90, 1990, pp. 437–442.
[3] X. He, W.C. Chu, H. Yang, S.J.H. Yang, A new approach to verify rule-based systems using petri nets, in: Proceedings of the Twenty-Third Annual International Computer Software and Applications Conference, IEEE Comput. Soc., Silver Spring, MD, 1999, pp. 462–467.
[4] M. Ramaswamy, S. Sarkar, C.Y. Sho, Using directed hypergraphs to verify rule-based expert systems, IEEE Transactions on Knowledge and Data Engineering 9 (2) (1997) 221–237.
[5] G.S. Gursaran, S. Kanungo, A.K. Sinha, Rule-base content verification using a digraph-based modelling approach, Artificial Intelligence in Engineering 13 (3) (1999) 321–336.
[6] M. Rousset, On the consistency of knowledge bases: the COVADIS system, in: Proceedings of the ECAI-88, Munich, Germany, 1988, pp. 79–84.
[7] A. Ginsberg, Knowledge-base reduction: a new approach to checking knowledge bases for inconsistency and redundancy, in: Proceedings of the AAAI-88, 1988, pp. 585–589.
[8] P. Meseguer, Incremental verification of rule-based expert systems, in: Proceedings of the 10th European Conference on AI (ECAI'92), 1992, pp. 840–844.
[9] M. Dahl, K. Williamson, A verification strategy for long-term maintenance of large rule-based systems, in: Workshop Notes of the AAAI-92 Workshop on Verification and Validation of Expert Systems, 1992, pp. 66–71.
[10] M. Ayel, J.P. Laurent, Validation, Verification and Test of Knowledge-Based Systems: SACCO-SYCOJET: Two Different Ways of Verifying Knowledge-Based Systems, John Wiley, London, 1991.
[11] J. de Kleer, An assumption-based TMS, Artificial Intelligence 28 (2) (1986) 127–162.
[12] A. de Antonio, Una interpretación algebraica de la verificación de sistemas basados en el conocimiento, Ph.D. thesis, Facultad de Informática, Universidad Politécnica de Madrid, 1994.
[13] L.M. Laita, E.R. Lozano, L. de Ledesma, V. Maojo, Computer algebra based verification and knowledge extraction in RBS: application to medical fitness criteria, in: Proceedings of the EUROVAD'99, 1999.
[14] E. Grégoire, B. Mazure, About the incremental validation of first-order stratified knowledge-based decision-support systems, Information Sciences 142 (1–4) (2002) 117–129.
[15] C.H. Wu, S.J. Lee, Knowledge verification with an enhanced high-level petri-net model, IEEE Expert 12 (5) (1997) 73–80.
[16] D.E. O'Leary, Verification of uncertain knowledge based systems: an empirical verification approach, Management Science 42 (12) (1996) 1663–1675.
[17] S. Lee, R.M. O'Keefe, Subsumption anomalies in hybrid knowledge based systems, International Journal of Expert Systems 6 (3) (1993) 299–320.
[18] R. Mukherjee, R.F. Gamble, J. Parkinson, Classifying and detecting anomalies in hybrid knowledge-based systems, Decision Support Systems 21 (4) (1997) 231–251.
[19] A.Y. Levy, M. Rousset, Verification of knowledge bases based on containment checking, Artificial Intelligence 101 (1–2) (1998) 227–250.
[20] A. de Antonio, J. Cardeñosa, L. Martínez, GKR: A generic model of knowledge representation, in: Proceedings of the 12th National Conference on Artificial Intelligence (AAAI 94), Vol. II, Student Abstracts, 1994, p. 1438.
[21] G. Antoniou, Verification and correctness issues for nonmonotonic knowledge bases, International Journal of Intelligent Systems 12 (10) (1997) 725–738.
[22] F. Kröger, Temporal Logic of Programs, Springer-Verlag, Berlin, 1987.
[23] J. Ramírez, A. de Antonio, Semantic verification of rule-based systems with arithmetic constraints, in: Database and Expert Systems Applications, Proceedings of DEXA 2000, Lecture Notes in Computer Science, vol. 1873, Springer-Verlag, Berlin, 2000, pp. 437–446. http://decoroso.ls.fi.upm.es (click publications).
[24] J. Ramírez, Método para la verificación de sistemas híbridos basado en la propagación de etiquetas, Ph.D. thesis, Facultad de Informática, Universidad Politécnica de Madrid, 2002. http://decoroso.ls.fi.upm.es (click publications).
[25] J. Ramírez, A. de Antonio, Consistency verification of a deductive system based on OWL Lite, in: Proceedings of the 3rd International Workshop on Modelling, Simulation, Verification and Validation of Enterprise Information Systems (MSVVEIS-2005), INSTICC Press, 2005.
[26] J. Ramírez, A. de Antonio, Formal consistency verification of deliberative agents with respect to communication protocols, to appear in: Proceedings of Formal Approaches to Agent-Based Systems III (FAABS 04), Lecture Notes in Computer Science, Springer-Verlag, Berlin, 2004. http://decoroso.ls.fi.upm.es (click publications).