Expert Systems with Applications 23 (2002) 137–144
www.elsevier.com/locate/eswa
Diagnostic expert system using non-monotonic reasoning

E.-S. El-Azhary a,*, A. Edrees a, A. Rafea b

a Central Laboratory for Agricultural Expert Systems (CLAES), Agriculture Research Center, Giza, Egypt
b Computer Science Department, American University in Cairo (AUC), Cairo, Egypt

* Corresponding author. E-mail addresses: [email protected] (E.-S. El-Azhary), [email protected] (A. Edrees), [email protected] (A. Rafea).
Abstract

The objective of this work is to develop an expert system for cucumber disorder diagnosis that uses non-monotonic reasoning to handle situations in which the system cannot reach a conclusion. Such situations arise when the input information is incomplete, when the domain knowledge itself is incomplete, or when the information is inconsistent. The method also maintains the truth of the system when a piece of information is changed. The proposed method uses two types of non-monotonic reasoning, namely 'default reasoning' and 'reasoning in the presence of inconsistent information', to achieve its goal. © 2002 Elsevier Science Ltd. All rights reserved.

Keywords: Expert systems; Non-monotonic reasoning; Cucumber disorder diagnosis
1. Introduction

Non-monotonic reasoning means that our intermediate beliefs may change as additional information arrives. Etherington, Kraus, and Perlis (1991) define non-monotonic reasoning as reasoning that can reach conclusions which are not strictly entailed by what is known, and which may therefore need to be retracted as new information is acquired. Non-monotonic reasoning is suitable for reasoning with incomplete information, incomplete knowledge and inconsistent information. Several types of non-monotonic reasoning have been developed, namely: default reasoning (Brewka, 1991; Reiter, 1980; Tan & Pearl, 1995), autoepistemic reasoning (Moore, 1985), reasoning in the presence of inconsistent information (Roos, 1992), counterfactual reasoning (Ginsberg, 1986; Lewis, 1973), priority reasoning (Wang, You, & Yuan, 1996; You, Wang, & Yuan, 2001) and negative reasoning (Padgham, 1989).

Section 2 reviews these types of non-monotonic reasoning. Section 3 presents the motivation of this work. Section 4 discusses the approach used in the proposed diagnostic method. The expertise model of the proposed method is presented in Section 5. Implementation issues, together with two examples, are discussed in Section 6. Section 7 presents the conclusion of this work.

2. Types of non-monotonic reasoning
Default reasoning (Brewka, 1991; Pelletier & Elio, 1997; Reiter, 1980; Tan & Pearl, 1995). Default reasoning is the process of making plausible inferences from incomplete information. These inferences may become invalid at a later point and thus need to be removed from the system. Very little of what we consider everyday belief is rigid in the sense of not being open to revision; in fact, many common or everyday beliefs are not consistent when considered together. In trying to understand the world we inhabit, we adopt rules and draw conclusions on the basis of those rules. Because such rules lack precision, the inferences drawn from them can be very general and may not be correct in every instance; as a result, we may end up adopting rules that are not accurate enough.
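As an illustration (not taken from the paper), the classical 'birds fly by default' example can be written in Prolog using negation as failure; the predicate and constant names below are chosen only for this sketch.

% A default-reasoning sketch: a bird is assumed to fly unless it is
% known to be abnormal (negation as failure stands in for the default).
bird(tweety).
bird(opus).
abnormal(opus).            % e.g. opus is a penguin

flies(X) :-
    bird(X),
    \+ abnormal(X).        % holds as long as abnormality is not provable

% ?- flies(tweety).   succeeds
% ?- flies(opus).     fails
% Asserting abnormal(tweety) later would invalidate the earlier conclusion
% that tweety flies, which is exactly the non-monotonic behavior described above.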
Autoepistemic reasoning (Moore, 1985). Autoepistemic reasoning derives conclusions by taking one's own knowledge into account. The non-monotonicity of autoepistemic reasoning is thus a consequence of the fact that the meaning of statements about one's own knowledge is context sensitive. In other words, the meaning of a proposition may change with respect to new knowledge that has become available. A proposition that led to a wrong conclusion under the former knowledge need not be thrown away; its meaning has simply changed.

Reasoning in the presence of inconsistent information (Roos, 1992). In many situations humans have to reason with inconsistent information. When there is an inconsistency in the information supplied by the user, the system should isolate the inconsistent part, withdraw the previous conclusion and produce a correct one.

Counterfactual reasoning (Ginsberg, 1986; Lewis, 1973). Counterfactual reasoning is reasoning with counterfactual statements, which are of the form 'if P then Q' where P is known or expected to be false. When we are faced with a difficult goal or problem, we often reduce it to a sub-goal or sub-problem by considering a counterfactual of the form 'if only thus-and-so were true, I would be able to solve the original problem'. The original problem then reduces to proving the counterfactual (in some suitable sense) and to arranging for thus-and-so to be true. Counterfactual reasoning is non-monotonic: after drawing a conclusion from a set of axioms, the same conclusion does not necessarily follow from a superset of those axioms. It is possible to derive a conclusion C from a set of axioms S and to derive the negation of C from a set Q that is a superset of S.

Priority reasoning (Wang et al., 1996; You et al., 2001). Priority reasoning is defined as monotonic inference constrained by a simple notion of priority constraint. Such constrained inferences can be specified in a knowledge representation language in which a theory consists of a collection of logic-programming-like rules and a priority constraint among them. Wang et al. (1996) recast default reasoning as priority reasoning, so this type indicates that: non-monotonic reasoning = monotonic inferences + priority constraint. In other words, non-monotonic reasoning is just a process of constructing monotonic derivations that satisfy a priority constraint.

Negative reasoning (Padgham, 1989). Negative reasoning is defined as default reasoning with an inheritance schema that allows default reasoning about negative conclusions in the same way as about positive conclusions.

3. Motivation of the work

By using non-monotonic reasoning, the system becomes more flexible and may change the current solution when more input information is obtained. The following example demonstrates this situation. If the observation on the plant is 'all leaves are yellow', the system concludes that the cause is 'nutrition deficiency'. If more input information is obtained, namely 'the leaves are wilted and there are knots on the root', the system withdraws the previous conclusion and concludes that the cause is 'root knot nematode'.

This work is motivated by three problems. The first problem is inconsistent input information; in this case a traditional expert system fails to reach a solution. This work deals with this situation by retracting the piece of information that causes the inconsistency.
The second problem is incomplete input information: a traditional expert system cannot reach a solution unless the input information is complete. The third problem is incomplete domain knowledge; in this case a traditional expert system may also fail to reach a solution. This work solves the second and third problems by giving the default solution, which is the minimal solution that covers all the available information.

4. Approach used in the proposed diagnostic method

Two types of non-monotonic reasoning, 'default reasoning' and 'reasoning in the presence of inconsistent information', were found to be suitable for the proposed diagnostic method. Default reasoning deals with the situation of incomplete information: it can reach a conclusion with the information in hand, and if more information becomes available this conclusion may be changed. Default reasoning also deals with the situation in which the expert system cannot reach a solution because the domain knowledge is incomplete. In this case, the system gives the default solution that covers the input information.

'Reasoning in the presence of inconsistent information' deals with the situation of inconsistent information. In our work, this situation occurs because of inconsistency among the constituents of the input information itself. One source of such inconsistency is changing a piece of information so that the new information is inconsistent with the remaining information. In this case, the system retracts the piece of information that causes the inconsistency, together with the part of the suspected causes that depends on it, and continues the reasoning. For example, suppose the input information is 'the stem is spotted, the color of the stem spots is white, the leaves are wilted, the leaf color is brown, and the root is rotted', and the piece of information 'the stem is spotted' is then retracted, i.e. the stem is not spotted. The information becomes inconsistent, because the color of the stem spots cannot be white if the stem is not spotted. The system therefore retracts the piece of information 'color of stem spots is white', retracts the suspected causes that depend on this retracted information, and continues reasoning. A sketch of this dependency-based retraction is given below.
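A minimal sketch of this behavior, assuming a simple finding/2 store in which each piece of information records the finding it depends on, follows; it is not the authors' implementation, and the predicate names are illustrative.

% finding(Observation, DependsOn): each stored finding records the finding
% on which it depends (the atom none marks an independent finding).
:- dynamic finding/2.

add_finding(Obs, DependsOn) :-
    assertz(finding(Obs, DependsOn)).

% Withdrawing a finding also withdraws, recursively, everything that
% depends on it, so no dangling dependent information remains.
withdraw(Obs) :-
    retractall(finding(Obs, _)),
    findall(Dep, finding(Dep, Obs), Deps),
    withdraw_all(Deps).

withdraw_all([]).
withdraw_all([D|Ds]) :-
    withdraw(D),
    withdraw_all(Ds).

% The stem-spot case above:
% ?- add_finding(stem_spotted, none),
%    add_finding(stem_spot_color_white, stem_spotted),
%    withdraw(stem_spotted).
% Neither finding remains afterwards, so any suspected causes derived from
% them can be dropped and the reasoning continued.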
Fig. 1. Example of domain ontology.
Fig. 2. Example of caused by model.
5. Expertise model

The description of this model is based on the notation provided by the CommonKADS methodology (Wielinga, 1994). In this methodology three categories of knowledge are distinguished: task knowledge, inference knowledge, and domain knowledge. The task knowledge describes the control of the reasoning process. The inference knowledge describes how the domain knowledge can be applied in the reasoning process. The domain knowledge is the factual knowledge about the application domain. The proposed expertise model is demonstrated using the existing domain knowledge on cucumber disorder diagnosis at the Central Laboratory for Agricultural Expert Systems (CLAES). This knowledge has been restructured to fit the proposed model.

5.1. Domain knowledge

The domain knowledge consists of two parts: domain ontology and domain models. The domain ontology is the most fundamental constituent of the domain knowledge. It defines the language of the domain, i.e. the terms used to describe it. This language consists of a number of constructs such as concepts, expressions and relations (Gruber, 1992). A concept is a central construct in our domain modeling. It may have an internal structure composed of a set of properties, and each property can be assigned one or more values from a value set. Fig. 1 shows a part of the domain ontology of this work.
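For illustration only (the concrete property and value set below are not copied from Fig. 1), the ontology constructs just described could be written as Prolog facts:

:- use_module(library(lists)).      % member/2 (SICStus; built in elsewhere)

% A concept with a property and the legal values of that property.
concept(leaves).
property(leaves, color).
legal_values(leaves, color, [green, yellow, brown, spotted]).

% An expression pairs a concept property with one of its legal values,
% e.g. leaves: color = spotted.
expression(Concept, Property, Value) :-
    property(Concept, Property),
    legal_values(Concept, Property, Values),
    member(Value, Values).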
Domain models, on the other hand, are coherent collections of statements about the domain that represent particular viewpoints on the domain knowledge, chosen so as to be suitable for problem solving. The notion of an expression as a domain modeling construct was introduced in CommonKADS because it occurs often in 'domain rules' (Schreiber, 1994). Expressions provide a suitable way of modeling the structure of domain knowledge in which simple statements such as 'leaves: color = spotted' appear, where leaves is a concept having the property color, and spotted is one of the possible values of this property. Both concepts and expressions are used for defining a set of domain relations. For instance, a causality relation may be defined between a concept representing a certain disorder and an observable expression.

In this work, the domain knowledge contains three static domain models and a dynamic domain model (DDM). The static domain models are named 'caused by', 'communication model (CM)' and 'determination'. The knowledge representation used in the caused-by model is implication rules between each observation and its causes. Fig. 2 shows a rule of the caused-by model. This rule states that an abnormality of the root status is caused by the disorders 'root rot', 'root knot nematode', or 'root lesion nematode'.
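The caused-by model can be pictured as simple Prolog facts mapping each observation to its possible causes; the first fact below corresponds to the rule of Fig. 2, while the second entry is purely illustrative.

% caused_by(Observation, Disorders)
caused_by(root_status-abnormal,
          [root_rot, root_knot_nematode, root_lesion_nematode]).
caused_by(leaves_status-abnormal,                    % illustrative entry
          [root_rot, downy_mildew, nutrition_deficiency]).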
Fig. 3. An example of CM.
Fig. 4. Example of determination model.
Fig. 5. An example of DDM.
The CM is used for obtaining the input information from the user. It has four levels of hierarchy, and each successor level carries more specific information. An example of the CM is shown in Fig. 3. The elliptical shape represents a property of a certain concept, and the rectangular shape contains the legal values of the associated property. Each link between two properties is attached to a condition, which is a value of the upper property; a link holds only if its attached condition is true. The CM shown in Fig. 3 states that if the color status of the fruit is abnormal then the user is asked about the fruit color, and if the fruit color is spotted then the user is asked about the spot color and the spot appearance.

The determination model determines the certainty factor of the disorders according to the current situation. The knowledge in this model is represented as implication rules. Each level of the CM maps to a group of implication rules in the determination model, and the disorders in each group receive a certainty factor according to the level number of their group in the CM: disorders in the group of the first level have a certainty factor of 0.1, those of the second level 0.3, those of the third level 0.7, and those of the fourth level 1.0. Fig. 4 is an example of the determination model. The first rule in Fig. 4 states that "if the leaves status is abnormal and the root status is abnormal then the certainty factor of the 'root rot' disorder is 0.1". The certainty factor of 'root rot' increases to 0.3 in the second rule if the appearance status of both leaves and root is abnormal, and so on for the third and fourth rules. A sketch of how such rules might look is given below.
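This sketch paraphrases the first two rules of Fig. 4 rather than copying them, and the helper predicate names are our own, not the authors'.

:- use_module(library(lists)).      % member/2, max_member/2 (SICStus)

% Certainty factor attached to each CM level.
cf_for_level(1, 0.1).
cf_for_level(2, 0.3).
cf_for_level(3, 0.7).
cf_for_level(4, 1.0).

% determination(Disorder, Level, RequiredObservations): paraphrase of the
% first two rules of Fig. 4 for the 'root rot' disorder.
determination(root_rot, 1, [leaves_status-abnormal, root_status-abnormal]).
determination(root_rot, 2, [leaves_appearance-abnormal, root_appearance-abnormal]).

% The certainty factor of a disorder is the highest one among the
% determination rules whose observations are all present in the findings.
certainty(Disorder, Findings, CF) :-
    findall(C,
            ( determination(Disorder, Level, Required),
              all_present(Required, Findings),
              cf_for_level(Level, C) ),
            CFs),
    CFs \= [],
    max_member(CF, CFs).

all_present([], _).
all_present([Obs|Rest], Findings) :-
    member(Obs, Findings),
    all_present(Rest, Findings).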
The DDM is created at run time; the current dependencies and the plausible causes are cached in it. An example of the DDM is shown in Fig. 5. It consists of two clauses, 'dependencies' and 'causes'. The dependencies clause has four arguments: the first is the level number in the CM, the second is an observation, the third is a list of the successors of that observation, and the fourth is a list of its causes. The causes clause has two arguments: the first is the level number in the CM, and the second is the causes of the observations at this level.
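As an illustration of Fig. 5 (the argument contents below are made up, not copied from the figure), the two DDM clauses might look as follows; in the running system they would be asserted and updated dynamically rather than written as static facts.

:- dynamic dependencies/4, causes/2.

% dependencies(Level, Observation, Successors, CausesOfObservation)
dependencies(1, root_status-abnormal,
             [root_appearance-abnormal],
             [root_rot, root_knot_nematode, root_lesion_nematode]).

% causes(Level, CausesOfAllObservationsAtThisLevel)
causes(1, [root_rot, root_knot_nematode, root_lesion_nematode]).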
Fig. 6. (a) Inference structure of main task. (b) Main task.
5.2. Inference and task knowledge

Fig. 6(a) and (b) shows the inference structure and the task knowledge of the main task, respectively. The task knowledge consists of two main parts: the task definition and the task body. The task definition describes the main system goal, the input roles, and the output roles. The task body describes the task type (composite or primitive), the subtasks (if any), additional roles (those not included in either the input or the output roles), and the control over these subtasks. The main task consists of two subtasks, get solution and change finding. The input role is observables, which indicates the abnormal observations on the plant. The output role is solution, which indicates the solution of the current case.

The function of the get solution subtask is to obtain the causes of the symptoms. It consists of four inference steps, namely get causes, get more specific observables, get related causes, and determine, and three transfer tasks, namely obtain from user, obtain from DDM, and present solution. Fig. 7(a) and (b) shows the inference structure and task knowledge of the get solution subtask, respectively.

The goal of the get causes inference step is to obtain the causes of the current observations using the determination model. The get more specific observables inference step obtains the successor nodes of the current node using the CM; it does not retrieve all the sons of the current node, but only the sons whose attached conditions in the CM are satisfied. The goal of the get related causes inference step is to obtain all the possible causes related to an observation using the caused-by model. The function of the determine inference step is to determine the causes related to an observation while taking the other existing observations into consideration, by intersecting all the possible causes of the observation with the causes of the other existing observations (a sketch is given below). The obtain from user transfer task gets the value of an observable from the user; while entering the value of an observable, the user can return to a previous observable to change its value. The goal of the obtain from DDM transfer task is to return to the previous observable and retrieve its legal values according to the current situation. The present solution transfer task presents the solution to the user.
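A sketch of the determine step under these assumptions (the caused_by/2 facts of Section 5.1 and a plain list intersection; the helper names are ours, not the authors') could look as follows.

:- use_module(library(lists)).      % member/2 (SICStus; built in elsewhere)

% determine(+Observation, +ExistingCauses, -RelatedCauses): keep only those
% possible causes of the observation that are also suspected from the
% other existing observations.
determine(Observation, ExistingCauses, RelatedCauses) :-
    caused_by(Observation, Possible),
    intersect(Possible, ExistingCauses, RelatedCauses).

% Plain list intersection, preserving the order of the first list.
intersect([], _, []).
intersect([X|Xs], Ys, [X|Zs]) :-
    member(X, Ys), !,
    intersect(Xs, Ys, Zs).
intersect([_|Xs], Ys, Zs) :-
    intersect(Xs, Ys, Zs).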
The change finding subtask consists of two transfer tasks, namely update dependencies and update causes. Their functions are to update the dependencies and causes clauses, respectively, in the DDM when the value of an observable is changed. Fig. 8(a) and (b) shows the inference structure and task knowledge of the change finding subtask, respectively.

6. Implementation and example run

The proposed method has been implemented in SICStus Prolog on a Pentium II PC. It has been applied to cucumber using the existing domain knowledge at the Central Laboratory for Agricultural Expert Systems (CLAES). Two examples have been selected to show the capabilities of the proposed method.

The first example demonstrates the situation in which the system cannot reach a solution, so it gives the default solution; this case occurs because of incomplete domain knowledge. The system obtains information from the user in steps: the information in the first step is general, the information in the second step is more specific, and so on. Fig. 9 represents the input data in four steps.
Fig. 7. (a) Get solution inference structure. (b) Get solution task.
Using these input data, the system cannot reach a solution. The system therefore gives the default solution, which is the minimal solution that covers all the information, as shown in Fig. 10.
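The paper does not spell out how the minimal covering solution is computed; one possible reading, sketched below with the caused_by/2 facts of Section 5.1 and helper names of our own, is to try candidate cause sets of increasing size until every abnormal observation is covered.

:- use_module(library(lists)).      % member/2 (SICStus; built in elsewhere)

% covers_all(+Causes, +Observations): every observation has at least one of
% its possible causes included in Causes.
covers_all(_, []).
covers_all(Causes, [Obs|Rest]) :-
    caused_by(Obs, Possible),
    member(C, Possible),
    member(C, Causes), !,
    covers_all(Causes, Rest).

% default_solution(+Observations, +Candidates, -Solution): a smallest subset
% of Candidates that covers all Observations, found by trying sizes 0, 1, 2, ...
default_solution(Observations, Candidates, Solution) :-
    length(Candidates, Max),
    up_to(0, Max, N),
    length(Solution, N),
    subset_of(Solution, Candidates),
    covers_all(Solution, Observations), !.

% up_to(+Low, +High, -N): N = Low, Low+1, ..., High.
up_to(Low, _, Low).
up_to(Low, High, N) :-
    Low < High,
    Next is Low + 1,
    up_to(Next, High, N).

% subset_of(?Sub, +List): Sub is an ordered sublist of List.
subset_of([], _).
subset_of([X|Xs], [X|Ys]) :- subset_of(Xs, Ys).
subset_of(Xs, [_|Ys]) :- subset_of(Xs, Ys).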
The second example demonstrates the situation in which the user withdraws a piece of information. In this case the system deletes both the information and the conclusions that depend on the withdrawn information and continues the reasoning. Fig. 11 represents the input data in four steps, and the solution for this information is shown in Fig. 12. The user then discovered that he/she had entered a piece of false information, namely 'Leaves color spots' (see Fig. 11(c)), and withdrew it. In this case, the system deletes all the information related to the withdrawn information, namely 'Leaves spot color yellow', 'Leaves spot appearance angular', and 'Leaves spot position between veins'. The system also deletes the part of the solution related to the withdrawn information, namely 'thrips', 'white fly', 'downy mildew', 'alternaria leaf spot', and 'spiders', and continues the reasoning.
Fig. 8. (a) Change finding inference structure. (b) Change finding task.
Fig. 9. Input data in four steps of the first example.
Fig. 10. Default solution.
7. Conclusion

This work presents a diagnostic method that uses non-monotonic reasoning to deal with some of the problems that expert systems suffer from, namely incomplete knowledge, incomplete information and inconsistent information. The method uses two types of non-monotonic reasoning, 'default reasoning' and 'reasoning in the presence of inconsistent information'. The proposed method consists of two main parts.
Fig. 11. Information in four steps of the second example.
Fig. 12. Solution of the second example.
The first part generates a solution based on the available information. The second part is responsible for the truth maintenance of the system when a piece of information is altered. A diagnostic expert system for cucumber has been built using this work, and two examples have been presented to demonstrate some of the capabilities of the presented method.

References

Brewka, G. (1991). Nonmonotonic reasoning: Logical foundations of commonsense. Cambridge: Cambridge University Press.
Etherington, D. W., Kraus, S., & Perlis, D. (1991). Nonmonotonicity and the scope of reasoning. Artificial Intelligence, 52(3).
Ginsberg, M. L. (1986). Counterfactuals. Artificial Intelligence, 30, 35–79.
Gruber, T. R. (1992). Ontolingua: A mechanism to support portable ontologies. Technical Report KSL 91-66. Knowledge Systems Laboratory, Stanford University.
Lewis, D. K. (1973). Counterfactuals. Cambridge, MA: Harvard University Press.
Moore, R. C. (1985). Semantical considerations on nonmonotonic logic. Artificial Intelligence, 25, 75–94.
Padgham, L. (1989). Negative reasoning using inheritance. Proceedings of IJCAI-89.
Pelletier, F. J., & Elio, R. (1997). What should default reasoning be, by default? Computational Intelligence, 13(2).
Reiter, R. (1980). A logic for default reasoning. Artificial Intelligence, 13, 81–132.
Roos, N. (1992). A logic for reasoning with inconsistent knowledge. Artificial Intelligence, 57, 69–103.
Schreiber, A. Th. (1994). CML: The CommonKADS conceptual modeling language. Lecture Notes in Artificial Intelligence 867. Berlin: Springer.
Tan, S., & Pearl, J. (1995). Specificity and inheritance in default reasoning. Proceedings of IJCAI-95, Vol. 2, Montreal.
Wang, X., You, J., & Yuan, L. Y. (1996). Nonmonotonic reasoning by monotonic inferences with priority constraints. Proceedings of the ICLP post-conference workshop on nonmonotonic extensions of logic programming.
Wielinga, B. J. (1994). Expertise model definition document. ESPRIT Project P5248 KADS II, Document Id: KADS II/M2/UvA/026/5.0, University of Amsterdam.
You, J., Wang, X., & Yuan, L. Y. (2001). Nonmonotonic reasoning as prioritized argumentation. IEEE Transactions on Knowledge and Data Engineering, 13(6), 968–979.