COGNITIVE SCIENCE 13, 507-549 (1989)
The MEDIATOR: Analysis of an Early Case-Based Problem Solver

JANET L. KOLODNER
Georgia Institute of Technology

ROBERT L. SIMPSON
DARPA/ISTO
Case-based reasoning is a reasoning method that capitalizes on previous experience. In case-based reasoning, a new problem is solved in a way that is analogous to a previous similar problem. Case-based reasoning can improve problem-solving behavior in several ways: by providing reasoning shortcuts, by warning of the potential for error, and by suggesting a focus for reasoning. The MEDIATOR was one of the earliest case-based, problem-solving programs. Its domain is dispute resolution, and it uses case-based reasoning for 10 different tasks involved in its problem solving. While some of the MEDIATOR's processes have been elaborated and improved on in later case-based problem solvers, there remain many lessons that can be learned about case-based reasoning by analyzing the MEDIATOR's behavior. This article provides a short description of the MEDIATOR and its domain, presents its successes and shortcomings, and analyzes the reasons why it behaves the way it does. As part of the analysis, the differences and similarities between the MEDIATOR and later case-based reasoners are also described, as well as the implications of those differences.
This research has been supported in part by NSF Grant Nos. IST-8116892, IST-8317711, and IST-8608362, in part by the Army Research Office under Contract No. DAAG-29-85-K0023, and in part by the Air Force Institute of Technology. The views expressed are solely those of the authors. The MEDIATOR was implemented by Robert Simpson while he was a Ph.D. student at Georgia Tech. The analysis of the MEDIATOR reported in this article was based partially on later research done by Rich Cullingford, Elise Turner, Tom Hinrichs, Juliana Lancaster, Katia Sycara, Roy Turner, and Hong Shinn on the PERSUADER, CAS, JULIA, MEDIC, and MECH projects. Thanks also to each of these people for insightful comments made during discussion and while reading earlier versions of this manuscript. Thanks, too, to the people in Janet Kolodner's case-based reasoning seminar in fall, 1987, especially Reid Simmons, for the insightful comments made concerning the contents of this article. The shortcomings, of course, are all those of the authors. Correspondence and requests for reprints should be sent to Janet L. Kolodner, School of Information and Computer Science, Georgia Institute of Technology, Atlanta, GA 30332.
1. INTRODUCTION

Case-based reasoning is a reasoning method that capitalizes on previous problem-solving experiences when solving new problems. A case-based reasoner uses previous experiences as exemplars and bases new solutions upon those. As a result, case-based reasoning is particularly well-suited to solving problems in poorly understood domains. Case-based reasoning can improve problem-solving performance in many ways: by providing reasoning shortcuts, by warning of the potential for error, by suggesting a focus for reasoning, and by directing the problem solver through pathways that avoid previous mistakes. Consider the following example in which a doctor is able to avoid a previous diagnostic error after he is reminded of the prior experience (Kolodner, 1982):

A psychiatrist sees a patient who exhibits clear signs of major depression. The patient also reports, among other things, that she recently had a stomach problem that doctors could find no organic cause for. While random complaints are not usually given a great deal of attention in psychiatry, here the doctor is reminded of a previous case in which he diagnosed a patient for major depression who also complained of a set of physical problems that could not be explained organically. Only later did he realize that he should also have taken those complaints into account; he then made a diagnosis of somatization disorder with secondary major depression. Because he is reminded of the previous case, the psychiatrist hypothesizes that this patient too might have somatization disorder and follows up that hypothesis with the appropriate diagnostic investigation.

In this case, the doctor uses the previous case to generate a hypothesis. Because the cases are so similar, the doctor uses the diagnosis from the previous case as a hypothesis about the new one. The hypothesis drawn from the previous case provides the doctor with a reasoning shortcut and also allows him to avoid the mistake made previously. In addition, the hypothesis from the previous case causes him to focus on aspects of the case that he would not have considered otherwise: the unexplainable physical symptoms associated with somatization disorder.

In the example above, the reasoner remembered a previous case and then used its result as a plausible answer in the new case. Often, it is not possible to use a previous result directly, since all the features of two cases rarely match exactly. When important features match but some other features do not (e.g., the new case has additional features that were not present in the previous case, or the environment in which the problem is to be solved is different), the previous solution can often be adapted to fit the new case. An example involving medical therapy illustrates adaptation nicely:
A doctor must prescribe treatment for a patient suffering from bipolar (manic) depression. The patient is in a manic state. He remembers a previous case in which he treated such a patient with standard medication in the hospital. He had hospitalized the patient because in previous cases, he found that manic patients would not take their medication unless supervised. This time, however, there are no beds available in the hospital. He finds out more about this patient's home environment to see if the supervision provided in the hospital can be provided at home. He finds out that his patient lives with his wife and children, and that his wife can supervise the treatment. He sends the patient home after instructing both the patient and his wife about the treatment.

The major processes involved in case-based reasoning are remembering and adapting. Also important are control issues associated with remembering and adaptation: How can a good case be chosen? How can memories of several different problems be merged together to form a new solution? How can a reasoner know which parts of a previous case to focus on? How can a case-based reasoner be integrated with more traditional "from-scratch" reasoners?

The MEDIATOR was one of the earliest case-based problem solvers, and its implementation begins to answer many of the questions above. Its domain is resolution of disputes, and it uses case-based reasoning for 10 different reasoning tasks involved in problem solving. While some of the MEDIATOR's processes have been elaborated and improved upon in later case-based problem solvers, for example, CHEF (Hammond, 1986a), CASEY (Koton, 1988), and JULIA (Hinrichs, 1988, 1989; Kolodner, 1987a; Shinn, 1988), there remain many lessons that can be learned about case-based reasoning by analyzing the MEDIATOR's behavior.

This article first provides a short description of the MEDIATOR and its domain, then an annotated example of the MEDIATOR solving a complex problem. It then provides additional detail about the MEDIATOR's case-based reasoning processes. The final section presents the MEDIATOR's problem-solving behavior, including both its successes and shortcomings, analyzes its behavior, explains how it differs from other case-based reasoners, and presents the implications of these differences.

This article aims to accomplish several things. First, the MEDIATOR is explained as a case-based reasoner. It is unique in several significant ways:

1. The MEDIATOR integrates pieces of several previous cases in deriving solutions to problems.
2. The MEDIATOR uses case-based inference for a wide range of problem-solving tasks, including problem elaboration, planning, and recovery from failure.
3. The MEDIATOR shows how a goal-directed problem solver integrated with a case-based inferencer can guide the inferencer to focus on appropriate parts of a previous case.

Second, the MEDIATOR is used to illustrate several important elements of case-based reasoning. The MEDIATOR's control mechanisms, for example, suggest means of interfacing a case-based reasoner with a more traditional problem-reduction problem solver. Other control mechanisms show how to make a case-based reasoner focus on the important features of a large case.

Third, the advantages and disadvantages of approaching case-based reasoning the way the MEDIATOR does are analyzed, and the MEDIATOR's approaches are compared to those of other case-based reasoners. This analysis will not only provide the first analysis of these differences, but will provide a set of dimensions along which to evaluate case-based reasoning processes. The dimensions concentrated on here are case selection, transfer and adaptation methods, and control.

Fourth, some of the lessons learned through analysis of a case-based reasoner in action are discussed. While researchers normally concentrate on what their systems do well, it is equally important to the research community to hear about weaknesses or failings of research projects along with an analysis of what caused those weaknesses. In this way, future researchers may be able to learn from the mistakes of others, rather than having to experience failure first hand. A major weakness in the MEDIATOR, for example, is its weak performance in anticipating failure. The analysis of this weakness provides guidelines about how to organize a case memory and how to control reasoning.

2. THE MEDIATOR: AN ANNOTATED EXAMPLE
The MEDIATOR's task domain is common-sense advice giving for resolution of resource disputes. In mediation, a third-party problem solver attempts to derive a compromise agreement that will resolve a dispute between two disputing parties. To do this, he must first understand the needs of the parties in order to find areas of potential agreement that can form the basis for a resolution of their dispute. Based on his understanding of the problem, he suggests a solution that is either accepted or rejected by the disputing parties. Based on the feedback given by the parties when they reject a solution, the mediator derives a new solution plan or attempts to persuade the rejecting party to accept the settlement. The cycle of suggestion, feedback, and incremental refinement or persuasion continues until both disputing parties are satisfied with the suggested solution. Mediation is successful when both disputants accept a suggested solution.
• Understand Problem
  - Interpret Problem
  - Classify Problem
  - Elaborate Problem
    * Elaborate Disputant Goal
• Generate Plan
  - Choose Planning Policies
  - Select Abstract Plan
    * Refine abstract plan
    * Choose plan actions
    * Bind plan variables
  - Predict Results
• Evaluate Results (interpret feedback)
• Recover from Failure
  - Understand failure
  - Remedy reasoning error
  - Solve new problem

Figure 1. The MEDIATOR's reasoning subgoals
The decisions made in the MEDIATOR are loosely modeled after the style of negotiations suggested by the Harvard Negotiations Project (Fisher & Ury, 1981; Raiffa, 1982). Readers interested in the analysis of the mediation task that influenced the implementation decisions made in the MEDIATOR are referred to Simpson (1985).

The MEDIATOR program is responsible for understanding a problem, generating a plan for its solution, evaluating feedback from the disputants, and recovering from reasoning failures. Each of these tasks generates subgoals, and the MEDIATOR achieves them in the order in which they arise. Subgoals in the MEDIATOR are achieved by first attempting case-based inference, then, if case-based inference does not yield a solution, applying an appropriate "from-scratch" method (e.g., problem reduction, plan instantiation, dependency-directed backtracking). Figure 1 shows the full range of subgoals the MEDIATOR achieves in the course of resolving conflicts. The MEDIATOR uses case-based inference for all of them except predicting results.

Each time the MEDIATOR has a subgoal to achieve, it searches memory for previous similar cases. If many exist, it chooses the best one. Then it extracts the solution to the analogous subgoal in the previous case, checks it for consistency with the new case, and if it is consistent, transfers it to the new case. The MEDIATOR begins with subgoals associated with understanding and elaborating a problem, the first one of which is to classify it. It then moves to more traditional planning and meta-planning subgoals. After generating a plan, it presents it for approval and feedback. Its next subgoals are associated with evaluating the feedback. If feedback tells the MEDIATOR that its solution was unacceptable, its next subgoals are those involved in failure recovery. If the MEDIATOR cannot find a similar case when it queries memory, it uses one of its "from-scratch" problem-solving methods.¹ Because the MEDIATOR queries memory for cases each time a new subgoal arises, its solutions often integrate pieces of solutions from several earlier problems.
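This per-subgoal control loop can be summarized in pseudocode. The sketch below is a reconstruction for illustration only, not the MEDIATOR's actual Lisp; all function and structure names are hypothetical.

    # A minimal sketch of the MEDIATOR's per-subgoal control loop
    # (reconstruction; all names are hypothetical).
    def achieve(subgoal, new_case, memory):
        candidates = memory.recall_similar_cases(subgoal, new_case)
        if candidates:
            best = select_best_case(candidates, new_case)   # exclusion, then ranking
            suggestion = extract_solution(best, subgoal)    # solution to the analogous subgoal
            if consistent_with(suggestion, new_case):
                return transfer(suggestion, new_case)       # case-based inference succeeded
        # Otherwise fall back on a traditional method (problem reduction, plan
        # instantiation, dependency-directed backtracking, default rules).
        return from_scratch_method(subgoal).solve(new_case)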
In the example below, the MEDIATOR is solving the Sinai dispute (Simpson, 1985) based on three other cases.² Initially, the MEDIATOR is told that Egypt and Israel both want physical control of the Sinai, and that both have used military means in an attempt to achieve that goal. Its first subgoal, an understanding subgoal, is to classify the dispute into one of the dispute types it knows.

RECALLING PREVIOUS DISPUTES TO CLASSIFY THIS ONE. . .
reminded of the "Panama Canal Dispute" because both disputants are of type M-POLITY.
reminded of the "Korean Conflict" because both objects are of type M-LAND and both used M-MILITARY-FORCE to attempt *PHYS-CONTROL*

The MEDIATOR is reminded of two previous cases, the Panama Canal dispute and the Korean conflict. It must now choose which of these cases is better. The MEDIATOR's evaluation procedure has two steps. First, a set of exclusion heuristics is used to rule out cases that differ from the new case on critical features. Next, a ranking function is used to order the remaining cases. In the MEDIATOR, critical features which must match (if specified) are the relationship between disputant goals and the derivations or sources of the disputant goals. Since the new problem does not specify any of these features, neither case is excluded. The MEDIATOR next ranks closeness of fit of the remaining cases to the new problem. Since it ranks similarity of disputant arguments as more important than similarities between disputed objects, and similarities between disputed objects as more important than similarities between disputants, it chooses the Korean conflict as a better match. In both, the disputants argued that they wanted physical control over the disputed object, the disputed object was land, and the disputants were countries.

¹ When we refer to "from-scratch" problem solving, we are referring to more traditional problem-solving methods that do not rely on particular experiences. The MEDIATOR uses plan-instantiation methods to choose plans when no case is available and uses a dependency-directed search much like that in Teiresias (Davis, 1977) when no case can explain its failures to it. At other times, it applies default inference rules. Another "from-scratch" method is problem reduction.
² We have selected this example because it shows the use of three different cases to do a variety of tasks during problem solving.
The Panama Canal dispute is a poorer match since the arguments and disputed objects are of different types than those in the Sinai dispute.

Attempting to select the most applicable case. . .
No cases excluded from the match. . .
Judging closeness of fit. . .
Ranking argument >> disputed object >> disputants
the "Korean Conflict" is the best fit.
Choosing the "Korean Conflict", which was classified as an M-PHYSICAL-DISPUTE

In the MEDIATOR, previous cases serve as hypothesis generators. That is, they are used to suggest a means of achieving some problem-solver goal. After a suggestion is made by some case, the MEDIATOR checks the consistency of the suggestion before installing it in its plan. The consistency of a classification is checked by examining its recognition criteria to see if they hold in the new case. Since the Korean conflict, which it has just chosen as its analogous case, was classified as a "physical dispute," the MEDIATOR checks to see if it is consistent to classify the Sinai dispute the same way. Since it is a dispute over possession of a physical object (the Sinai), "physical dispute" is acceptable.

Checking for applicability of that classification. . .
Because the disputed object is a physical object, the disputants' goals are possession goals,
"Physical Dispute" is consistent.
Classifying the "Sinai Dispute" as a M-PHYSICAL-DISPUTE
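The two-step selection procedure traced above, exclusion on critical features followed by ranking, might be sketched as follows. The names and numeric weights are hypothetical; only the importance ordering (argument over disputed object over disputants) comes from the text.

    # Sketch of the MEDIATOR's two-step case selection (hypothetical names).
    FEATURE_WEIGHTS = [("argument", 100), ("disputed_object", 10), ("disputants", 1)]

    def select_best_case(candidates, new_case):
        # Step 1: exclusion heuristics rule out cases that differ on critical
        # features (goal relationship, goal derivation), when those are specified.
        viable = [c for c in candidates
                  if not differs_on_critical_features(c, new_case)]
        # Step 2: rank the remaining cases by weighted feature similarity.
        def rank(case):
            return sum(weight for feature, weight in FEATURE_WEIGHTS
                       if matches(case, new_case, feature))
        return max(viable, key=rank) if viable else None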
The MEDIATOR's next step after classification is to elaborate its understanding of a problem by filling in details (e.g., goals of disputants with respect to the disputed object). Since no elaborations are possible with the given information, the MEDIATOR goes on to its planning phase. Its first planning subgoal is to choose a planning policy. Because no information that is different from that in its current exemplar (the Korean conflict) has been added since classification, the MEDIATOR continues to use the Korean conflict as its exemplar and suggests a "compromise policy." This is consistent, and therefore is adopted. It then attempts to choose an abstract (skeletal) plan for resolving the dispute. Again, it continues to use the Korean conflict as its analog since no new information has been added that would make another case more appropriate. The Korean conflict was resolved using a plan the MEDIATOR calls "divide equally" and this is suggested for the Sinai.

The MEDIATOR checks the consistency of a plan by first examining its preconditions to see if they hold in the new situation, then by checking that the results of the plan are consistent with the goals of the disputants. Since the preconditions of "divide equally" hold in the new case, and its results are consistent with the goals of both disputants (both gain physical control over the physical object), the MEDIATOR judges "divide equally" as applicable to the Sinai dispute and chooses it.

ATTEMPTING TO CHOOSE A PLANNING POLICY FOR the "Sinai Dispute"
Using the "Korean Conflict" which had a "compromise" planning policy
Checking for applicability of that policy. . .
Choosing "compromise" as the planning policy.
ATTEMPTING TO SELECT A MEDIATION PLAN TO RESOLVE the "Sinai Dispute"
Using the "Korean Conflict" which was resolved using "divide equally"
Checking for applicability of that plan to the current case. . .
My reasoning is as follows:
The *SINAI* can be split without losing value, and it cannot be shared by taking turns; when this is considered with a compromise planning policy and my inference that the parties' goals are in competition; all indicate that "divide equally" is a reasonable plan.
Suggesting the plan "divide equally" for this dispute.
The MEDIATOR continues by filling in details of the plan and making predictions about the results of executing it. (This is not shown because of lack of space.) In particular, it binds the variables of "divide equally," specifying the Sinai as the item to be divided, and Israel and Egypt as the recipients. Then it predicts that, if successfully carried out, Egypt and Israel will each be satisfied with their part of the land.

In traditional problem-solving paradigms, the responsibilities of the problem solver end once a solution is derived. Because a case-based reasoner depends directly on its own experience to make decisions, however, it cannot stop here but must have the capability of receiving and evaluating feedback about its solutions. Without feedback, the problem solver would repeat problem solving that was less than optimal. Intelligent use of feedback, on the other hand, allows a problem solver to become more proficient.

When the MEDIATOR asks for feedback about its plan for the Sinai (divide it equally between Egypt and Israel), it is told that both Egypt and Israel object to the plan: Israel says it will not provide the security Israel wants, and Egypt says it wants all the land back since it used to be part of Egypt. This is presented to the MEDIATOR by telling it that Israel says its goal is M-NATIONAL-SECURITY, while Egypt says its goal is M-NATIONAL-INTEGRITY.
Is this a good solution? (Y or N) No
**** DIVIDE EQUALLY not acceptable ****
What happened? ; (we show the English equivalent)
Israel says they want the Sinai for security
Egypt says they want it for integrity
With the help of this feedback, the MEDIATOR attempts to understand why its first solution was bad and whether it could have avoided it, and then tries to come up with a better solution. The MEDIATOR treats blame assignment and failure recovery as problem-solving tasks. It first tries to understand the failure in the same way it tries to understand problems it has to solve. It then plans a remedy for the failure, just as it derives plans to solve problems.

In attempting to explain failures, the MEDIATOR searches its memory of failed cases. It is reminded of a dispute in which two sisters both wanted the same orange (the orange dispute). In that case, as in the new one, "divide equally" had been attempted in the context of a physical dispute, and in both cases it had failed. Since that failure was the result of incorrectly understanding the goals of the disputants ("wrong goal inference"), the same explanation is suggested for the new failure. The MEDIATOR checks to see if "wrong goal inference" is consistent with the new case. It finds it consistent, since the goals in this case had been filled in by inference and were not the same as those indicated by feedback. The MEDIATOR therefore adopts "wrong goal inference" as its explanation.

ATTEMPTING TO EXPLAIN FAILURE AND FIND NEW SOLUTION.
RECALLING PREVIOUS FAILURES. . .
reminded of "two sisters quarrel over an orange" because in both "divide equally" failed and both objects are of type M-PHYS-OBJ
Failure was because of M-WRONG-GOAL-INFERENCE.
Checking for applicability of that classification. . .
. . .
Transferring that classification to this failure.
Its next step is to remedy the failure, in this case to correct the misconception about disputant goals. A "wrong goal inference" is remedied by finding the correct goals of the disputants. This can be done, for example, by inferring goals from resulting actions, inferring goals from responses, or asking, among other methods. Since, in the orange dispute, the MEDIATOR had inferred the goals of the sisters from their actions, this method is suggested for the new case. Because there are no resulting actions, however, consistency checking rules it out as a remedy.³

³ Recall that feedback was in the form of a statement of goals; no actions were taken.
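Remedy selection, then, tries the remedy used in the recalled case first and otherwise enumerates the remedies known for the failure class, taking the first that applies. A minimal sketch, with hypothetical names:

    # Sketch of remedy selection for a classified failure (hypothetical names).
    def select_remedy(failure_class, recalled_case, new_case):
        # Prefer the remedy that worked in the recalled failed case...
        prior = recalled_case.remedy_for(failure_class)
        if prior is not None and prior.applicable(new_case):
            return prior
        # ...otherwise try each remedy known for this failure class in turn,
        # choosing the first whose applicability conditions hold.
        for remedy in remedies_known_for(failure_class):
            if remedy.applicable(new_case):
                return remedy
        return None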
Without a case to guide it, the MEDIATOR attempts each of the plans it knows of for resolving a "wrong goal inference" and chooses the first that works. In this case, "infer goal from response" is tried first and allows the MEDIATOR to infer that Israel wants the Sinai for security while Egypt wants it for integrity.

Attempting to use remedy called "infer goal from resulting actions," used in previous case
Unable to use previous remedy.
Considering other remedies for M-WRONG-GOAL-INFERENCE
Looking at "infer goal from response"
Based on the feedback, I replace ISRAEL's goal with a M-NATIONAL-SECURITY goal and EGYPT's goal with a M-NATIONAL-INTEGRITY goal.
Remediation complete.

The MEDIATOR has now remedied the failure; that is, it has corrected its misconception. It is now ready to attempt to solve the problem again. Because the problem has been reinterpreted, there is no need to redo problem understanding, and the MEDIATOR goes directly to the planning stage. When it is attempting to choose an abstract plan, the reminding process, which is omitted from the output below, retrieves the same two cases as before: the Korean conflict and the Panama Canal dispute. This time, however, selection processes have more information to use in choosing the best case, and because the disputant goals in the Panama Canal dispute match those of the newly interpreted Sinai dispute fairly closely, the Panama Canal dispute is chosen as an exemplar. Using the Panama Canal dispute as a model, the MEDIATOR suggests giving Egypt political control of the Sinai but giving military control to Israel.

Reconsidering the problem using new information.
Considering the reinterpreted problem:
Israel and Egypt both want the Sinai, which has been presented as ako M-PHYS-DISPUTE.
ATTEMPTING TO SELECT A MEDIATION PLAN TO RESOLVE THE "Sinai Dispute"
RECALLING SIMILAR DISPUTES. . .
. . .
Reminded of the "Panama Canal Dispute". . . resolved using "divide into different parts."
Checking for applicability of that plan. . .
. . .
I suggest "divide into different parts" be used.
Using the "Panama Canal Dispute" to create plan
matching ISRAEL with USA. . .
matching EGYPT with PANAMA. . .
matching SINAI with PANAMA-CANAL. . .
matching (*GOAL* (*NAT-SECURITY+ (ACTOR ISRAEL) (OBJECT SINAI))) with (*GOAL* (+MIL-CONTROL* (ACTOR USA) (OBJECT PANAMACANAL))). . . matching (*GOAL* (*NAT-INTEGRITY* (ACTOR EGYPT) (OBJECT SINAI))) with (*GOAL* (+POL-CONTROL* (ACTOR PANAMA) (OBJECT PANAMA-CANAL))). . . transferring other components unchanged. 3. THE MEDIATOR
AS A CASEBASED
REASONER
The MEDIATOR uses case-based reasoning to make a variety of types of inferences during problem solving. Figure 1 lists the set of problem-solving tasks performed by the MEDIATOR. Case-based reasoning is used for each of these tasks in the MEDIATOR, except predicting results.

Recall that the MEDIATOR solves problems by decomposing them into their component parts and then solving each of those. It thus attempts case-based inference separately for each of the subgoals it encounters as it is solving a problem. In general, the MEDIATOR makes its case-based inferences by one of two methods: value transfer and partial instantiation.

Value transfer is a process of extracting a previously used value that achieved some subgoal and proposing it for transfer to the new case. Usually, the transferred value is atomic; it cannot be decomposed into parts. When the MEDIATOR is attempting to decide on a planning policy, for example, it looks to see what the planning policy was in the retrieved case and proposes it for the new case. Planning policy in the MEDIATOR is an atomic value (compromise or all-or-nothing) that cannot be decomposed.

Partial instantiation is used to transfer solutions with parts. If a solution is viewed as a frame, the solution is composed of the frame type and the slot fillers. Partial instantiation is a process of transferring the template (frame) of the solution from the previous case and filling in as many of the details (slots) as possible, based on constraints associated with slots, general knowledge about the relationships between slot fillers, and knowledge of the new situation. When the MEDIATOR is attempting to choose an abstract plan, for example, it looks in the plan slot of the previous case and extracts the frame type and any constraint saying how to fill it. Some of those constraints come from the plan itself and some come from its use in the previous case. The plan "one cuts, the other chooses," for example, has constraints saying that the "cutter" needs to understand the plan. This may have been achieved in a previous case involving children by using the older child as the "cutter."
If the new case involves children and it knows which child is the older one, it can instantiate the cutter and chooser slots based on those constraints, knowledge about how these constraints were satisfied in the past, and knowledge of the current situation. Any slots that cannot be filled at the time partial instantiation occurs will be filled later when the subgoal for filling a needed slot becomes active.

In the language of knowledge representation, the MEDIATOR's two case-based inference methods are used for three tasks:

• to choose frames for representation (e.g., during problem classification, abstract plan selection, choice of plan actions, failure explanation, and failure remedy);
• to choose values to fill slots (e.g., when choosing planning policies);
• to instantiate frame variables (e.g., when binding plan variables).

The particular case-based inference to be made is guided by reasoning subgoals. Subgoals provide focus by telling the case-based reasoner which parts of the previous case it should focus on. Focus will be discussed in some detail later. In short, the reasoner focuses on those parts of the previous case that have relevance to the subgoal it is attempting to achieve. One of the two inference methods listed above is chosen by examining the previous solution and the current subgoal. If the solution in the previous case to the current subgoal is an atomic value, value transfer is used. If it is nonatomic, partial instantiation is used, and the subgoal itself gives guidelines as to which parts of the previous solution should be instantiated immediately and which ones should be instantiated later.

When attempting to achieve reasoning goals which come up during problem solving, the MEDIATOR first attempts to use case-based inference to achieve the goal. If no case is available, or if the available case(s) cannot provide a solution consistent with the rest of the problem specification or solution, the MEDIATOR uses an appropriate "from-scratch" method. The MEDIATOR's memory is generally able to provide it with at least one case to use each time the reasoner asks memory for a case. Thus, it attempts case-based reasoning almost all the time.⁴

The MEDIATOR's memory is based on CYRUS (Kolodner, 1983a, 1983b) and is similar to Schank's (1982) Dynamic Memory. Cases are stored in memory by first associating them with a MOP (or generalized case) and then indexing them by features that vary from the norms defined by the MOP's generalized knowledge. MOPs in the MEDIATOR are associated with each of its dispute classifications (physical, economic, political), resolution plans (Figure 2), failure classifications (Figure 4), and remediation plans (Figure 3).

⁴ This is somewhat simplistic, since some remembered cases are so far from the one being solved that case-based reasoning is fruitless. Our aim, however, was to show that case-based reasoning could be done. Thus, we have not dealt with the problem of deciding not to use case-based reasoning even when a case is available.
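The choice between the two inference methods amounts to a dispatch on the shape of the previous solution: atomic values are transferred directly, frames are partially instantiated. The sketch below is a reconstruction with hypothetical names; slot-filling details are elided.

    # Sketch of the dispatch between value transfer and partial instantiation
    # (hypothetical names).
    def case_based_infer(subgoal, previous_case, new_case):
        old_solution = extract_solution(previous_case, subgoal)
        if is_atomic(old_solution):
            # Value transfer: propose the old value directly
            # (e.g., a "compromise" planning policy).
            return old_solution
        # Partial instantiation: transfer the frame type, then fill whatever
        # slots the constraints and the new situation allow; remaining slots
        # are left for later subgoals to fill.
        frame = new_frame(old_solution.frame_type)
        for slot, constraint in old_solution.slot_constraints():
            filler = satisfy(constraint, new_case, previous_case)
            if filler is not None:
                frame.fill(slot, filler)
        return frame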
Any particular case may thus be indexed in a variety of structures in memory. Failed and successful cases of each type are stored separately. And if a case goes through many iterations (as the Sinai dispute does), each attempt to solve it is stored separately. When memory is searched, the subgoal the MEDIATOR is attempting to achieve determines which memory structures are traversed. When trying to understand a dispute and when trying to choose an abstract plan, for example, dispute classification MOPs are searched. When trying to explain a failure, the failed cases associated with the failed plan are searched. When attempting to remedy failures, cases associated with the chosen failure classification are searched.

When achieving a goal normally requires many inference steps, case-based reasoning helps by suggesting a solution without the need for interim reasoning steps. In these instances, the reasoner has only to check a proposed solution for consistency, rather than generating it from scratch. This happens most often in the MEDIATOR when it is selecting plans and understanding failures. The same process helps the problem solver to avoid making mistakes when the suggestion made by the previous case was derived after several iterations of problem solving, some of which produced faulty solutions. This is seen most clearly when the MEDIATOR attempts to understand failures. In this section, those two processes will be explained in more detail.

3.1 Choosing a Plan
The MEDIATOR's approach to generating a solution is based on successive refinement and instantiation of known abstract plans (Friedland, 1979; Wilensky, 1983). This is combined with meta-planning processes that make inferences about the kinds of plans that might be acceptable, and an inferential process that predicts the effects of executing a generated plan. The MEDIATOR generates a solution in four phases:

1. A meta-planning phase establishes an overall planning policy that guides later planning decisions.
2. A plan-selection phase, beginning at the highest level of abstraction, chooses the most promising general plan believed applicable for the problem.
3. A plan-instantiation phase specifies the plan roles and actions.
4. A prediction phase generates a specific set of expectations based on the assumption that all actions are executed as planned.

Executing these phases ultimately results in both a proposed plan of action, which can be executed by some agent, and a set of expectations that must be confirmed; the four phases are sketched below. In this section, Phase 2, plan selection, is concentrated on.
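As a rough summary, the four phases compose into a pipeline like the one that follows. The sketch is illustrative only; the phase functions are hypothetical stand-ins for the processes described in this section.

    # Sketch of the MEDIATOR's four-phase solution generation (hypothetical names).
    def generate_solution(problem, memory):
        policy = choose_planning_policy(problem, memory)         # 1. meta-planning
        abstract_plan = select_plan(problem, policy, memory)     # 2. plan selection
        plan = instantiate_plan(abstract_plan, problem, memory)  # 3. plan instantiation
        expectations = predict_results(plan, problem)            # 4. prediction
        # The plan is proposed to the disputants; the expectations are later
        # checked against their feedback.
        return plan, expectations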
• divide equally
  - one cuts, the other chooses
  - split the difference
• divide unequally
  - divide into different parts
  - divide by equity
• take turns (share)
  - take turns using
  - take turns choosing
  - worst goes first
• use game of chance
• use game of skill
  - western shootout
• apply one recognized standard of: market value, precedent, costs, moral criterion, scientific judgement, tradition, efficiency, professional ethics, relevant court decisions, reciprocity
• binding arbitration
  - conventional arbitration
  - final-offer arbitration

Figure 2. The MEDIATOR's mediation plans
Plan selection is done by successive refinement of known abstract plans in the absence of cases. In the plan-instantiation paradigm, plans are specified at many levels of detail, from sequences of primitive (nondecomposable) actions to more complex abstract plans involving generalized actions (e.g., Fikes, Hart, & Nilsson, 1972; Friedland, 1979; Hayes-Roth & Hayes-Roth, 1979; Sacerdoti, 1977; Wilkins, 1984). At the highest level of abstraction, there are generally a small set of plans known to a planner. There may be several ways of refining each of these, several ways of refining those refinements, and so on. Abstract plans used by the MEDIATOR for dispute mediation are shown in Figure 2. Those it uses to remedy reasoning failures, a process also done by refinement when no cases are available, are shown in Figure 3.

The MEDIATOR attempts to choose the most specific abstract plan appropriate to the problem. When no case is available, it first selects a plan at the top of the abstraction hierarchy and then successively refines it by selecting more specific plans in the hierarchy. The initial point of entry into the plan hierarchy comes from pointers associated with each problem classification. "Physical disputes," for example, specifies that "divide equally" and "divide unequally" are two available options, to be considered in that order. "Wrong-goal-inference," a failure classification, specifies "infer goal from response" and "infer goal from resulting actions" as appropriate means of remedying that type of reasoning failure. In the absence of experience to guide it, the MEDIATOR checks the preconditions for each suggested plan, choosing the first one whose preconditions fit the problem. Its specializations are then checked similarly.
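This refinement loop can be sketched as a simple descent through the plan hierarchy. The names are hypothetical; the real entry points are the pointers associated with each problem classification.

    # Sketch of plan selection by successive refinement, used when no case
    # is available (hypothetical names).
    def refine_plan(problem):
        # Entry points come from the problem classification, in preference order
        # (e.g., "physical dispute" -> ["divide equally", "divide unequally"]).
        candidates = suggested_plans(problem.classification)
        chosen = None
        while True:
            plan = next((p for p in candidates
                         if preconditions_hold(p, problem)), None)
            if plan is None:
                return chosen                    # no further refinement applies
            chosen = plan
            candidates = specializations(plan)   # descend one level and repeat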
• Misunderstanding Remedy
  - Misclassification Remedy
  - Miselaboration Remedy
    * Wrong-goal Remedy
      . Use Actual Result
      . Ask Alternate Parts
      . Use Goals Directly
      . Ask about Object Uses
      . Consider Themes
      . Ask for Sibling Goals
• Misplanning Remedy
  - Wrong Policy Remedy
    * Infer from Feedback
    * Use Alternate Policy
  - Plan Selection Remedy
    * Use Feedback
    * Select

Figure 3. The MEDIATOR's remediation plans
For example, in solving the orange dispute (a "physical dispute," with a "compromise" planning policy) without the benefit of a previous case, the MEDIATOR first checks the preconditions for "divide equally," since "physical disputes" says that is the best potential plan. Its preconditions are:

1. The mediator has a compromise planning policy;
2. The dispute has a competitive goal relationship;
3. The disputed object can be split without losing value; and
4. The disputed object cannot be used without losing value (i.e., it can't be shared by taking turns).
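Written as a predicate, this precondition test might look like the following sketch (the attribute names for the dispute representation are hypothetical):

    # Sketch of the "divide equally" precondition test (hypothetical names).
    def divide_equally_applies(dispute, policy):
        return (policy == "compromise"                          # 1. compromise policy
                and dispute.goal_relationship == "competitive"  # 2. competing goals
                and dispute.object.splittable_without_loss      # 3. splits without losing value
                and not dispute.object.usable_in_turns)         # 4. can't be shared by taking turns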
Since these are all applicable for two people who want the same orange, "divide equally" is chosen at the most abstract level. Since "divide equally" can be accomplished by "one cuts, the other chooses" or "split the difference" (both abstract plans), the preconditions for each of those are checked, and "one cuts, the other chooses" is chosen.

If a previous case is available when the reasoner needs to choose an abstract plan, however, the case-based reasoner uses the case to suggest the plan used previously, allowing the MEDIATOR to avoid searching its hierarchy of plans. In solving the orange dispute, for example, if the MEDIATOR is reminded of the "candy bar dispute" (two children both want to eat the same candy bar, resolved by having one split it in half and the other choose his half), it can select the specific plan called "one cuts, the other chooses" without having to consider intervening plans in the plan hierarchy. In domains with deeply nested plan hierarchies, this process can provide considerable savings of time and effort.
The following fragment of output from the MEDIATOR program shows how this algorithm applies to the selection of a plan for orange-dispute-0, in which two sisters want the same orange. Here, the MEDIATOR has already retrieved candy-dispute-0 as the most similar case. The plan used in candy-dispute-0, "one cuts, the other chooses," is identified and its preconditions tested. Since the plan's preconditions are found to hold in the current case, and the plan's results are consistent with the new case's goals, the plan is transferred and applied to orange-dispute-0.

The MEDIATOR Doing Case-Based Plan Selection
ATTEMPTING TO SELECT A NEGOTIATION PLAN TO RESOLVE THE DISPUTE IDENTIFIED AS #<M-PHYS-DISPUTE 22475211> (orange-dispute-0).
Using previously recalled case, where two children are quarreling over a candy bar.
It was resolved using the plan known as "one cuts the other chooses."
Checking for applicability of that plan to the current case. . .
My reasoning is as follows:
It normally doesn't make sense to share ORANGE1, since its functionality is destroyed by its consumption, but it can be divided without loss of functionality; when this is considered with a compromise planning policy and my inference that the parties' goals are in competition; all indicate that "one cuts the other chooses" is a reasonable plan.
Selecting the plan "one cuts the other chooses" for this dispute.
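Case-based plan selection, as traced above, amounts to transferring the recalled case's plan and then running the same applicability checks used during from-scratch selection. A minimal sketch with hypothetical names:

    # Sketch of case-based plan selection with its consistency check
    # (hypothetical names).
    def select_plan_from_case(recalled_case, new_dispute, policy):
        plan = recalled_case.resolution_plan      # e.g., "one cuts the other chooses"
        # Consistency check: preconditions must hold, and the plan's results
        # must be consistent with both disputants' goals.
        if (preconditions_hold(plan, new_dispute) and
                results_consistent(plan, new_dispute.goals)):
            return plan                           # shortcut: no hierarchy search needed
        return refine_plan(new_dispute)           # fall back on successive refinement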
3.2 Explaining Failures
The second place in the MEDIATOR where case-based reasoning provides considerable advantage is in identifying the source of reasoning errors. As explained earlier, this is treated as an understanding task and, in particular, one that is solved through classification. As in dispute mediation, the MEDIATOR recovers from reasoning failures by first classifying them (this step) and then choosing a remedy. The MEDIATOR recognizes classes of errors corresponding to each of the tasks and subtasks its reasoner performs. Thus, in the MEDIATOR, failures are classified in five ways:

1. misunderstanding failures,
2. planning failures,
3. plan execution failures,
4. evaluation failures,
5. unsolvable problem failures.
• Misunderstanding
  - Poor Classification
  - Poor Elaboration
    * Wrong Goal Inference
• Poor Planning
  - Wrong Policy
  - Wrong Abstract Plan
  - Poor Plan Instantiation
  - Poor Results Prediction

Figure 4. The MEDIATOR's failure classification hierarchy (a portion)
Because the majority of the MEDIATOR's reasoning occurs in the areas of understanding and plan generation, and because of the nature of its feedback (i.e., it must be offered by a user; the MEDIATOR cannot see or hear), its failure recovery knowledge is concentrated in the first two areas above: recognizing misunderstandings and planning errors. Within each of those broad classifications are more specific failure classifications; for example, poor elaboration and poor classification are misunderstanding errors, while wrong planning policy, wrong abstract plan, poor plan instantiation, and poor results prediction are subclasses of planning errors. Wrong goal inference is a particular kind of poor elaboration error. Figure 4 shows part of the MEDIATOR's hierarchy of error classifications.

The MEDIATOR has two "from-scratch" ways of classifying its errors. In some instances, it is possible to predict what will happen when particular kinds of errors occur. For example, if one sees a disputant doing something unexpected with an object, and the dispute was classified as physical, then it is likely that the planning failure was due to a wrong goal inference. If one sees someone entering into an economic transaction with an object when the dispute was classified as physical, then probably the dispute was classified incorrectly and should have been classified as an economic dispute. When these predictions are available, the MEDIATOR can recognize a failure as being of a certain type by matching results to expectations for each type of failure.⁵

The second from-scratch method, which is used more often, involves tracing backwards through the reasoning chain to find the inference that caused the MEDIATOR to make a poor prediction of results. When the MEDIATOR finds which one of its inferences was faulty, it begins backtracking through the inferences the faulty one depended upon. In the worst case, the MEDIATOR backtracks through the entire set of inferences it has made. If a previous case is recalled that failed in ways similar to the current one, however, it can help in two ways. First, it shortcuts the process above by suggesting a plausible interpretation. Second, a previous case that suggests an explanation for a failure can keep the MEDIATOR from deriving faulty explanations of an error.

⁵ For more details, see Simpson (1985).
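The case-based shortcut can be sketched as follows: recall similar failed cases, transfer and verify the recalled failure classification, and fall back on dependency tracing only when no case applies. Names are hypothetical; the retrieval paths are those described in the next paragraph.

    # Sketch of case-based failure explanation (hypothetical names).
    def explain_failure(failed_attempt, failure_memory):
        # Search failed cases indexed under the plan that failed and under
        # failed disputes of the appropriate type.
        similar = failure_memory.recall_failures(plan=failed_attempt.plan,
                                                 case=failed_attempt.case)
        for old_failure in similar:
            classification = old_failure.error_class    # e.g., M-WRONG-GOAL-INFERENCE
            # Consistency check: do the recognition conditions for this error
            # type hold in the current case?
            if recognition_conditions_hold(classification, failed_attempt):
                return classification                   # shortcut the backtracking
        # Otherwise trace backwards through the reasoning chain from scratch.
        return dependency_directed_backtrack(failed_attempt)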
The MEDIATOR finds previous cases by looking through its memory of failed cases. It looks through the cases associated with the plan that failed and also at its memory for failed disputes of the appropriate type. If a case is found that failed in ways similar to the current failure, its error classification is suggested and checked for consistency with the current case. This is done by checking to see if the conditions for recognizing the suggested type of error hold in the current case. In the following example from the MEDIATOR, we see it recalling the orange dispute when its solution to the Sinai dispute fails, and using the explanation of that error (wrong goal inference) to explain its error in interpreting the Sinai dispute.

Explaining a Failure

(mediator Sinai-dispute t)
Considering the following problem:
Israel and Egypt both want the Sinai, which has been presented as ako M-DISPUTE.
I suggest that the plan called "one cuts the other chooses" be used.
Do you agree, that this is the best solution? (Y or N) No.
**** "one cuts the other chooses" not acceptable ****
Can you provide any comments that might help me remedy this failure?
((*MTRANS* (ACTOR ISRAEL)
           (MOBJECT (*GOAL* (*NAT-SECURITY* (ACTOR ISRAEL)      ;Israel says it wants the Sinai
                                            (OBJECT SINAI)))))  ;for national security reasons
 (*MTRANS* (ACTOR EGYPT)
           (MOBJECT (*GOAL* (*NAT-INTEGRITY* (ACTOR EGYPT)      ;Egypt says it wants the Sinai
                                             (OBJECT SINAI))))));for nat'l integrity reasons
Attempting to explain this failure and find a new solution.
Considering the following problem:
failed mediation for Israel and Egypt both want the Sinai, which has been presented as ako M-UNSUCCESSFUL-MEDIATION.
It will be referred to as #<M-UNSUCCESSFUL-MEDIATION 40544074>
ATTEMPTING TO RECALL SIMILAR PROBLEMS IN ORDER TO CLASSIFY THIS ONE. . .
ATTEMPTING TO RECALL SIMILAR FAILURES. . .
looking for previous mediation plan failures. . .
looking for failures with similar disputants or with similar goals. . .
looking for failures involving similar objects. . .
reminded of the failed mediation for two sisters are quarreling over an orange for which the plan "one cuts the other chooses" also failed and because the object in that case, ORANGE1, and SINAI are both of type M-PHYS-OBJ.
There was one previous case found.
#<M-WRONG-GOAL-INFERENCE 5304703> was the failed mediation for two sisters are quarreling over an orange.
Failure in that case was because of M-WRONG-GOAL-INFERENCE.
"Wrong-goal-inference" is consistent with this case.
Transferring that classification to this failure.
The current failure will be referred to as #

In this example, the previous case shortcuts the problem solver's process of deriving an explanation for its error. In a similar case, case-based reasoning keeps the problem solver from creating faulty explanations. Consider, for example, what the behavior of the MEDIATOR would have to be if it were told that Israel and Egypt objected to the original suggestion but was not told what their objectives were. In that case, it must hypothesize several possible explanations for the problem while backtracking through its reasoning. It would first consider whether the problem was that it had bound the roles incorrectly in "one cuts, the other chooses." Selecting "wrong role binding" as its explanation, it would then attempt a solution in which the roles were bound the opposite way, and would suggest that as a new solution to the problem. Feedback would tell it that this solution was also unacceptable, and it would backtrack another step and consider some other explanation. Without explicit feedback, it would have to consider many faulty explanations before getting to the correct one. Remembering the orange dispute, however, allows it to suggest the correct explanation without having to make several explanation errors along the way.

4. DISCUSSION

In this section, two broad topics are discussed: the implications of case-based reasoning, and the MEDIATOR's contributions to an understanding of case-based reasoning. First, an analysis of the usefulness of case-based reasoning is presented, including its applicability, its behavior on many-step inferences, its behavior on one-step inferences, and its failure-avoidance techniques, all illustrated by the MEDIATOR. Next, an analysis of the steps involved in making a case-based inference is presented. After that, several processes that control and facilitate case-based inference are discussed: retrieval of appropriate cases, focus on relevant parts of a case, derivation and use of subgoals in case-based reasoning, and integration of a case-based reasoner with other reasoning methods. Then, case-based reasoning's relationships with analogical reasoning and learning are discussed.
4.1 Usefulness of Case-Based Reasoning
Several conclusions can be drawn about case-based reasoning based upon experiences with the MEDIATOR. In this section, its strengths and weaknesses are explained, along with why it behaved the way it did and what, in general, can be expected from a case-based reasoner along the dimensions analyzed.

4.1.1 Broad Applicability of Case-Based Reasoning. Perhaps the MEDIATOR's greatest contribution is in illustrating the range of inferences that can be made by case-based inference. As shown previously in Figure 1, the MEDIATOR uses case-based reasoning for 10 of its problem-solving tasks, including problem classification, choice of planning strategies, and explanation of and recovery from failure. In using essentially the same case-based inference processes for tasks involved in understanding, planning, and recovery from failure, the MEDIATOR demonstrates the broad applicability of case-based inference and the generality of the case-based methods it uses. Many later case-based reasoners refine the MEDIATOR's methods. Some are primarily planners, for example, CHEF (Hammond, 1986a,b), ARIES (Carbonell, 1986), JULIA (Kolodner, 1987a); some do problem understanding, for example, HYPO (Ashley & Rissland, 1987, 1988), CASEY (Koton, 1988); and some recover from failures, for example, CHEF (Hammond, 1987). The MEDIATOR, however, remains the only one that uses case-based inference for such a large variety of tasks.

4.1.2 Reasoning Shortcuts on Many-Step Inference Chains. As stated in a previous section, the MEDIATOR's case-based inference processes have their greatest advantage in situations where a long chain of reasoning is necessary to reach a conclusion. In the MEDIATOR, this happens during plan selection and remedy selection (done by successive refinement when no cases are available), and during recovery from reasoning failures when the MEDIATOR is trying to identify the source of its error (done by following dependencies when no cases are available). This is one area where all case-based reasoners will have advantages over more traditional problem solvers.

4.1.3 Behavior on One-Step Inferences. The behavior of case-based reasoning on "one-step" inferences must also be evaluated; that is, those that can normally be made with the application of one inference rule. When inferences are one-step, the MEDIATOR's case-based reasoner makes the same suggestion that would have been made by the relevant from-scratch method. Thus, solutions do not suffer from case-based reasoning. On the other hand, in the MEDIATOR, these solutions do not tend to benefit from case-based reasoning. It would certainly be better to be able to claim that, based on the MEDIATOR's behavior, case-based reasoning comes out ahead even on one-step inferences (e.g., that it allows more appropriate
decisions than can be made from scratch), but the MEDIATOR illustrates this only weakly. Nevertheless, it is instructive to discover why the MEDIATOR was not better on one-step inferences, and to determine what features a case-based reasoner needs to do well on these inferences. In fact, it is not the MEDIATOR's case-based reasoning processes that cause this deficiency, but two components of its environment. Both are easily correctable.

First, the cases in the MEDIATOR's memory are not, in general, unusual in their details. That is, they don't vary much from what is expected in the general case. The MEDIATOR's cases include no truly novel solutions. Because of this, general common sense rules and mediation-specific knowledge usually are sufficient to deal with the cases the MEDIATOR has solved. If cases had violated common sense or the initial knowledge the MEDIATOR started with (its "book knowledge," so to speak), the MEDIATOR would have made case-based decisions that it could not have made using only its given knowledge.

There are two ways to make sure future case-based reasoners can make novel inferences even on one-step inferences. First, they can be seeded with novel experiences: those which general purpose rules don't cover, but which nevertheless were solved. Had the MEDIATOR been seeded with a case like King Solomon and the baby, for example, which was solved by applying normal strategies in unusual ways, the MEDIATOR could have repeated that reasoning later, allowing it to make novel one-step inferences. Or, case-based reasoners can be taught using novel experiences, allowing them to make mistakes, and giving them the feedback necessary to make ultimately good decisions for those cases. As a case-based reasoner is forced to solve novel cases, and if it gets good feedback about its solutions, it becomes more and more able to make novel decisions.

The second reason the MEDIATOR did not show novelty in its one-step, case-based inferences is that its memory's representations and organization prevent it from finding cases where common solutions failed. Because the MEDIATOR stores failed cases separately from successful ones, and only successful cases are remembered when trying to solve a problem, there is no way for the MEDIATOR to anticipate that it might be doing something wrong without explicitly asking the memory that question. An implementation that stored successful and failed cases together would allow recall of both kinds of cases. Successful cases would provide shortcuts, while failed ones would be used to anticipate problems (as in CHEF (Hammond, 1986a)).

A program that is better able than the MEDIATOR to anticipate failures would do a better job at one-step inferences than a purely general knowledge reasoner could do. It would make more appropriate decisions than those that could be made by the system's original inference rules because its memories of failed cases would cause it to question whether the general
inference was the correct one, and would suggest to it a better way to make its inferences. (See Kolodner, 1987b, for a discussion of this process.) One late version of the MEDIATOR added a step to recall failed cases during consistency checks. This version of the program could, in principle, perform better on one-step inferences than the version described up to now, but no running examples of this are available.

An illustration that these two deficiencies can be corrected and that their correction leads to better performance on one-step inferences can be found in JULIA (Hinrichs, 1988; Kolodner, 1987a, 1987b). JULIA is a meal planner, and uses case-based reasoning to help it design meals. It often does better at one-step inferences using case-based inference than it can do with only its from-scratch reasoners, for both reasons discussed above. First, when it encounters a previous failure, it is warned of the potential for failure if it uses the usual inference. Since it can be reminded of failures any time during problem solving, this feature allows it to make more appropriate inferences than could be done from scratch any time it makes an inference. Second, its memory is seeded with novel solutions. Many times a from-scratch inference gives an unimaginative answer (e.g., french fries are good with steak), while a case-based suggestion suggests something more unusual (e.g., try homemade potato chips with steak).

One should also not discount the control case-based reasoning can provide in choosing which one-step inference to make. It is not hard to imagine a domain in which there are many possible "easy" inferences that could be made at any time. Underconstrained domains (e.g., meal planning) and domains where the amount of raw knowledge available is overwhelming (e.g., medicine) are two types of domains where this is seen. The advantage of case-based reasoning in such domains is that it suggests use of those inferences that have worked well in the past, allowing the problem solver to bypass the process of deciding among all the possible inferences it could make. It seems reasonable to guess that the combination of problem-solving experience and case-based reasoning helps a doctor fresh out of medical school to learn under what circumstances the "book knowledge" learned in school ought to be applied.

4.1.4 Anticipating and Avoiding Failures. A case-based reasoner that can remember previous cases where solving the problem was a many-iteration, trial-and-error process can avoid mistakes made previously by suggesting working solutions early on in problem solving, thus allowing the problem solver to avoid its previous trial-and-error approach. One that can remember its past mistakes can, in addition, anticipate mistakes. The first is a passive process; the second more active. The MEDIATOR could do the first of these, but could do the second only poorly. That is, it could avoid previous mistakes if cases retrieved from memory suggested better solutions than the mistaken one, but it could not do a thorough job of explicitly
anticipating the possibility of a failure arising. If it encountered a situation in which two people wanted the same lemon, for example, its memory of the orange dispute would allow it first to infer that one disputant wanted to cook with the peel and then to suggest that "divide by parts" be used to solve the problem. Thus it would avoid considering whether the lemon should be divided in half (as suggested incorrectly in the case of the orange dispute). On the other hand, it could not explicitly anticipate the problem situation.

Why could the MEDIATOR avoid but not anticipate failures? Avoiding problems by having correct suggestions made by previous cases is the same process as remembering and thus using suggestions to create reasoning shortcuts. Anticipation, on the other hand, requires additional reasoning. Since several other case-based reasoners (built coincidentally with, or soon after, the MEDIATOR) could anticipate failures, for example, ARIES (Carbonell, 1986) and CHEF (Hammond, 1986a), it might be instructive to analyze why the MEDIATOR has a hard time doing this.

Begin by looking at how ARIES and CHEF anticipate failures. ARIES stores failed and successful cases together in the same memory and both are equally accessible. Thus, a memory access can retrieve a failed or a successful case. If a failed case is recalled, it is used to anticipate a failure in the current situation. Because ARIES, in essence, queries memory for each problem-solving decision it makes (i.e., many times in the course of solving a problem), it could be reminded of a failed case almost any time during problem solving and anticipate problems during any problem-solving step. The MEDIATOR, on the other hand, stores its failed and successful cases in separate parts of the memory. Recall of failed cases is not a usual part of its recall process. Recalling by ARIES' method would require the MEDIATOR to make two queries to memory instead of just one: one to "success memory" and one to "failure memory."

CHEF has a different method. Rather than allowing anticipation to happen any time during problem solving, its problem-solving procedure explicitly attempts to anticipate problems early in problem solving (before attempting to propose a solution, as part of a problem-understanding process). It looks in its memory for failed cases to find one with features similar to the current problem, and if it finds one, it anticipates that a failure might occur and refines the problem description so that the failure can be avoided. Because CHEF only checks for potential failure before the start of problem solving, it can anticipate and avoid those failures that can be foreseen early in problem solving, but it cannot deal with failures that can be anticipated only later in problem solving.

One version of the MEDIATOR attempts to anticipate problems similarly to the way CHEF later approaches anticipation. It also bears some similarity to ARIES' method. After a case-based inference is attempted, the MEDIATOR looks in its memory of failed cases for instances of similar failed use of that solution. That is, a solution to part of the problem is proposed and, as part of the consistency check before accepting the proposal, the MEDIATOR attempts to anticipate problems with the solution. If there have been previous failures associated with that part of the solution, the MEDIATOR evaluates whether or not there is a potential for such a problem in the new case. If not, it accepts the proposed solution. If so, it rejects it. Thus, if the abstract plan "one cuts the other chooses" is suggested, the MEDIATOR looks for failed cases of "one cuts the other chooses" with other features in common with the new problem. It uses anything it finds to anticipate problems that might arise from its proposed solution.
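This evaluation-time check can be sketched as a test wrapped around each proposed suggestion. Since this part of the MEDIATOR was never fully implemented (see below), the sketch is necessarily speculative, and all names are hypothetical.

    # Sketch of anticipation as part of the consistency check (hypothetical
    # names; this step was only partially implemented in the MEDIATOR).
    def accept_suggestion(suggestion, new_case, failure_memory):
        if not consistent_with(suggestion, new_case):
            return False
        # Look for failed uses of this same solution that share other
        # features with the new problem...
        for old_failure in failure_memory.recall_failures(solution=suggestion,
                                                          case=new_case):
            # ...and reject the proposal if the old failure could recur here.
            if failure_could_recur(old_failure, new_case):
                return False
        return True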
One version of the MEDIATOR attempts to anticipate problems similarly to the way CHEF later approached anticipation; it also bears some similarity to ARIES' method. After a case-based inference is attempted, the MEDIATOR looks in its memory of failed cases for instances of similar failed use of that solution. That is, a solution to part of the problem is proposed and, as part of the consistency check before accepting the proposal, the MEDIATOR attempts to anticipate problems with the solution. If there have been previous failures associated with that part of the solution, the MEDIATOR evaluates whether there is a potential for such a problem in the new case. If not, it accepts the proposed solution; if so, it rejects it. Thus, if the abstract plan "one cuts, the other chooses" is suggested, the MEDIATOR looks for failed cases of "one cuts, the other chooses" with other features in common with the new problem. It uses anything it finds to anticipate problems that might arise from its proposed solution. This is like CHEF, in that memory for failed cases is given a special role during problem solving, and like ARIES, in that anticipation can happen at any time during problem solving, not only at the very beginning. Its major difference is that anticipation is an evaluation process (in essence, the test part of generate and test), which occurs with each decision proposed along the way, rather than an understanding process done before solving the problem (as in CHEF) or a warning made before suggesting a solution to a subgoal (as in ARIES). While CHEF anticipates problems to plan around, the MEDIATOR's method anticipates problems that can potentially arise from its suggested solutions.

In principle, the MEDIATOR's method for anticipation should be at least as powerful as ARIES' method, perhaps more powerful, since a particular suggested solution is tested. The comparison with CHEF is more complex. Since the MEDIATOR is not confined to dealing with problems that can be anticipated before problem solving starts, it can, in principle, avoid more problems by its method. On the other hand, because it makes no explicit annotations about the problems it is dealing with, it can plan around a problem only at the point where it anticipates it, and cannot carry that anticipation through to other parts of its solution.6 In addition, the MEDIATOR's memory organization, which separates failed and successful cases, requires more memory accesses than do either of the other methods.

6. Because this part of the MEDIATOR was never fully implemented, it is hard to make more direct claims about it.

4.1.5 Caveats. The conclusions, then, are that case-based reasoning has broad applicability, is useful in providing shortcuts on many-step inference chains, can be useful in providing guidance in making "easy" inferences, and can help a reasoner anticipate and avoid failures. The key to making these things happen, of course, is that the reasoner be provided with the right cases at the right times. It has been suggested that all that has been done here is to move the bottleneck: that retrieval may take as long as composition, and that even if it takes a shorter time, figuring out how to index cases may take more energy than composition. There are two answers to these concerns. First, recent work on parallel machines
(e.g., Kolodner, 1988; Stanfill & Waltz, 1986) strongly suggests that with the right hardware, retrieval algorithms can be made to run very fast. PARADYME's retrieval algorithm (Kolodner, 1988), for example, runs in time linear in the size of the probe (independent of the size of memory). Second, it is indeed hard to choose indices, and much effort must be spent in doing this correctly. This is a task that can be done in "off time," however, rather than during problem solving, if memory records and saves its problem-solving experiences. Thus, it does not have to impact problem solving itself. Of course, the problem remains of coming up with guidelines for choosing indices wisely. Several researchers are endeavoring to do this (Barletta & Mark, 1988; Birnbaum & Collins, 1988; Hammond & Hurwitz, 1988; Kolodner & Thau, 1988; Owens, 1988; Schank, 1982).

4.2 Making a Case-Based Inference

The MEDIATOR implements two solution-derivation methods: value transfer and partial instantiation. These two methods are also the primary methods used by other case-based reasoners, for example, CHEF (Hammond, 1986a, 1986b) and ARIES (Carbonell, 1986). The first method, value transfer, is used in the MEDIATOR to achieve those subgoals that no other subgoal is responsible for refining or changing later in problem solving. The second method, partial instantiation of a previously used frame, is used when the subgoal is at an abstract level and other subgoals have responsibility for filling in details, or when the frame needs its details filled in based on specifics of the current problem situation. Both of these methods are useful for shortcutting processes requiring long chains of inferences. They allow a problem solver to put its effort into hypothesis evaluation (consistency checks in the MEDIATOR) rather than into complex hypothesis-generation processes. This can be quite useful in domains where hypothesis generation is more difficult than evaluation (e.g., medical diagnosis, especially for novices).

In the introduction to case-based reasoning here, it was mentioned that often a solution from a previous case will not fit the new case exactly and might have to be adapted. Yet, up until now, any adaptation that the MEDIATOR does has not been described. Indeed, the MEDIATOR doesn't do adaptation in what has become the standard sense of the word. In most other case-based reasoners, adaptation is done by a set of special-purpose heuristics that know how to adapt a full solution from a previous case to fit the new situation. In general, there is a set of heuristics associated with each adaptable dimension of a solution. SWALE (Kass & Leake, 1988), for example, has a set of adaptation heuristics to choose a new actor for a situation, a set to choose a new action, a set to choose an instrument, and so on. CASEY (Koton, 1988) has heuristics that know how to change an old representation based on particular deviations between the old case and the new one. There are heuristics for dealing with extra features in the new case, extra
ones in the old case, missing features of each, and so forth. JULIA (Hinrichs, 1989) has adaptation heuristics that break an overconstrained problem into parts, fix a representation when a solution to one part of a problem serendipitously provides a solution to some other part of the problem, and transform (by substitution, deletion, or adjustment) a part of a previous solution to fit the constraints of a new problem. Alterman's (1988) PLEXUS has heuristics that generalize and specialize plan steps by walking through an abstraction hierarchy of events.

The MEDIATOR has none of these heuristics, yet it does do adaptation. Adaptation, in the MEDIATOR, is done as a byproduct of the problem-solving method. Recall that problems are solved in the MEDIATOR by breaking them into pieces and solving each piece individually. Consistency checks, made each time a solution to a subgoal is proposed, insure that the individual solutions can be fit together to create a coherent solution to the whole problem. The MEDIATOR queries memory for a previous case each time it has a new subgoal to achieve. Thus, subgoals may not all be achieved using the same previous case. One case might provide an abstract plan, for example, while another might suggest how to fill in the actors, another might suggest the actions, and still another might suggest the instruments to be used. What results is an adaptation of the solution that provided the initial partial solution (in this case, the one that provided the abstract plan). But it is done by filling in details one by one, rather than by adapting the entire old solution.

The MEDIATOR adapts solutions without adaptation heuristics because it solves problems by decomposing them hierarchically. That is, those parts of the problem which provide constraints for the rest of the problem come first. When the details of a solution to a subgoal are dependent upon as-yet-unknown features of the situation, the MEDIATOR transfers only those parts of the previous solution that it can infer at the time. Full instantiation is left for later. Other case-based reasoners would transfer the solution in its entirety and then use adaptation heuristics to tweak the solution when some inconsistency is found later.

The medical therapy example from the introduction will illustrate. In that example, the suggested therapy of treating the patient with standard medication in the hospital was impossible because there were no hospital beds. This solution was adapted to one of treating the patient at home, where his wife could supervise his medication. The MEDIATOR would solve this problem in several separate steps. In one step, it would choose the medication; in another, it would choose the location for the therapy. This is the part where adaptation is needed. The MEDIATOR can do this adaptation if the representation of hospital is in terms of a functional abstraction (in this case, supervised location), and if it can first choose its abstraction and then fill in the details afterwards. It would choose supervised location from the case recalled in the original example. In a later step, when filling in
the details of the location, however, it would find that hospital, suggested by the previous case, was inconsistent with the current state of the world. It would therefore try to find another supervised location consistent with the current situation, probably the patient's home.

The MEDIATOR's method will work for domains in which the problem to be solved is decomposable into easily solvable subproblems whose internal dependencies are in one direction only. That is, the MEDIATOR's method will work if the solution to a later subproblem cannot affect the solution to an earlier subproblem. If details of solutions are interleaved with each other, however, the MEDIATOR's method won't work. In the labor mediation domain, for example, details of solutions are interleaved with each other. When one part of a solution is changed, others must be changed to balance the solution (Sycara, 1987). Specific adaptation heuristics (cf. Sycara, 1988) are necessary for this.

Meal planning and other design tasks provide examples of types of domain where solutions to subgoals are interleaved. In these kinds of tasks, the solution is highly underconstrained and the solution space is large. The problem is generally represented as a set of constraints, some of which are harder to achieve than others, and some of which can provide guidelines for achieving the others. The MEDIATOR's method can be used to come up with a baseline solution that ignores some constraints, but adaptation heuristics are then necessary to adapt the baseline solution to include the remainder of the constraints (Hinrichs, 1989). An architect, for example, has esthetic, physical, and functional constraints to deal with in creating a design, too much to deal with all at once. Dealing with physical and functional constraints first will create a design that does what it is supposed to do. Esthetic constraints can then be added to the design by adapting it appropriately.
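The hospital example above can be reduced to a small sketch, shown below in Python. The slot names, the two stored cases, and the consistency test are hypothetical stand-ins, not the MEDIATOR's actual representations. The point is that adaptation arises as a byproduct of decomposition: each subgoal queries memory on its own, so the abstraction can come from one case while an inconsistent detail is replaced from another.

```python
def consistent(value, world):
    # Stand-in consistency check: a value is rejected if the current
    # state of the world rules it out (e.g., no hospital beds).
    return value not in world["unavailable"]

def solve(subgoals, memory, world):
    """Solve each subgoal in order, querying memory anew for each one
    and accepting a suggestion only if it passes the consistency check."""
    solution = {}
    for slot in subgoals:
        for case in memory:
            value = case.get(slot)
            if value is not None and consistent(value, world):
                solution[slot] = value
                break
    return solution

memory = [
    {"abstraction": "supervised location", "location": "hospital"},
    {"location": "patient's home"},
]
world = {"unavailable": {"hospital"}}   # no hospital beds
print(solve(["abstraction", "location"], memory, world))
# -> {'abstraction': 'supervised location', 'location': "patient's home"}
```

Note that no adaptation heuristic appears anywhere: "hospital" is never repaired after the fact, it simply fails the consistency check when the location subgoal comes due, and another case fills the slot.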
4.3 Control and Facilitation of Case-Based Inference
Transfer and adaptation processes by themselves do not make a complete case-based reasoner. There are a number of other support processes that control and facilitate case-based reasoning and that are, in fact, at least as important as transfer and adaptation. Recall that the MEDIATOR solves problems by decomposing them into subproblems and then attempting to make a case-based inference to solve each one. Making any individual case-based inference includes the following steps:

1. Retrieve a potentially applicable case from long-term memory.
2. Determine which portions of the old case to focus on in order to solve the problem solver's currently active subproblem.
3. Derive a solution, or means of solving the current problem-solving goal, based on the old case, and propose it.
4. Check the derived value for consistency with the current case, and accept or reject it.
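A schematic rendering of these four steps follows, in Python. The MEDIATOR itself was not written this way; the step functions shown are trivial stand-ins, passed in as parameters because each is itself a sizable process, and they are exactly the processes discussed in the remainder of this section.

```python
def case_based_inference(subgoal, problem, memory,
                         retrieve, focus, derive, check):
    """One case-based inference, following the four steps above."""
    case = retrieve(memory, problem)        # 1. recall a candidate case
    if case is None:
        return None                         # no case: fall back to from-scratch reasoning
    part = focus(case, subgoal)             # 2. find the part that achieved this subgoal
    proposal = derive(part, problem)        # 3. transfer or partially instantiate it
    return proposal if check(proposal, problem) else None   # 4. accept or reject

# Trivial stand-ins, just enough to run the loop:
retrieve = lambda memory, problem: memory[0] if memory else None
focus = lambda case, subgoal: case.get(subgoal)
derive = lambda part, problem: part
check = lambda proposal, problem: proposal is not None

memory = [{"abstract-plan": "one cuts, the other chooses"}]
print(case_based_inference("abstract-plan", {}, memory,
                           retrieve, focus, derive, check))
```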
Thus, a case-based reasoner is dependent on retrieval processes, focus processes, and processes that allow it to divide a problem appropriately into its component parts. Each of these processes is discussed in this section, as well as the implications of these processes for integrating case-based reasoning with other reasoning processes.

4.3.1 Locating and Choosing a Candidate Case. A case-based reasoner is only as good as the cases it can remember. Thus, one of the most important processes that facilitates case-based reasoning is retrieval and selection of appropriate cases to use for case-based reasoning. This problem has been referred to as the indexing problem. In its narrowest interpretation, the problem is to choose features to use in order to index a case in memory. These indexes are then used during retrieval just as those in a book are used to direct a reader to appropriate parts of the book. That is, indexes already in memory, which match the new problem, are used to point to cases in memory that at least partially match that problem. The better the indexes recorded in memory, and the more fully specified a new case is, the better this retrieval process will be at choosing appropriate cases. In general, it makes sense to choose as indices those features of a case that were important in solving the problem and that are likely to come up in later situations. In its broader sense, the indexing problem also includes choosing among the many cases that may partially match a new case. Querying memory as described may result in many partial matches, and the best one(s) must be selected.

The earliest work on indexing (Kolodner, 1983b, 1984) suggested that cases should be indexed by those features that could be used to make domain-related predictions. The MEDIATOR uses this as its guideline. This works because the case memory is small, because the MEDIATOR is aware only of features that were important in solving a problem, and because all derived features of a case are part of its final representation. But in a larger memory, it is important to index only by those features with real relevance. Otherwise, the number of partial matches may be overwhelming, and there may be no clear way to distinguish which of the partial matches is most appropriate. Work by Barletta and Mark (1988), Birnbaum and Collins (1988), Hammond (1986a, 1988), Kolodner (1983b, 1988), Owens (1988), Ashley and Rissland (1988), and Schank (1982, 1986) sheds light on this subject. Though the MEDIATOR does not address this problem directly, it is one of the most important topics for current research on case-based reasoning.
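In its narrowest form, indexing can be pictured as an inverted index from features to cases, as in the sketch below (Python; the feature vocabulary and case names are invented for illustration, not taken from the MEDIATOR's representations).

```python
from collections import defaultdict

# Each case is indexed by the features that were important in solving it;
# a new problem's features are then used to look up partially matching cases.
index = defaultdict(set)

def store(case_id, important_features):
    for f in important_features:
        index[f].add(case_id)

def query(problem_features):
    """Return candidate cases sharing at least one indexed feature."""
    candidates = set()
    for f in problem_features:
        candidates |= index[f]
    return candidates

store("orange-dispute", {"dispute", "possession-goal", "food-object"})
store("panama-canal", {"dispute", "possession-goal", "political-entities"})
print(query({"dispute", "food-object"}))   # both cases partially match
```

A query like this one is exactly where the broader indexing problem bites: both stored cases come back, and something further must decide between them, which is the selection process described next.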
The MEDIATOR does address the problem of choosing the best case from the set of partial matches retrieved by querying the memory. The process includes two steps in addition to querying the memory, three in all. First, memory is queried for a set of potentially relevant cases. That is, features of the new case are used as indexes into memory in order to find cases which share those features. Because previous cases are indexed in memory by features that were useful in solving them, the set of features of the new case which are used to find previous cases is limited to those that were found to be useful previously. While this first step retrieves cases that share a set of features with the new case, no check is made to be sure that all significant features of the new and retrieved cases match.

In the second step, the set of potentially relevant cases is filtered, using the differences between the new and previous cases. Those cases that differ from the new case along significant dimensions are excluded. In general, a case is excluded if it is different from the new one along a dimension that predicts the applicability of available plans or the similarity of results. In general, these predictions can be made by looking at the goals that must be achieved to resolve the new problem and the constraints put on those goals. This is because the same plans may be chosen when goals are similar, and because similarity of goals predicts similarity of results. In the MEDIATOR's domain, the goals that must be achieved can be found by looking at the stated goals of the disputants along with the goals underlying those manifest goals, if they are known. Constraints can be found by looking at planning policies, if they are known. If disputant goals of a previous case (e.g., possession) are different than those in the current case, if the types of the underlying goals (e.g., satisfaction goal, preservation goal, crisis goal) (Schank & Abelson, 1977), when they are known, are different, or if planning policies (e.g., compromise), when they are known, conflict, that case is excluded from use.

In the third step, cases which have not been excluded in Step 2 are ranked for closeness of fit to the current case by examining their similarities to the new case. Since some similarities are more important than others, a count of the number of similar features would not work. The MEDIATOR uses a fixed ranking of dimensions to measure how closely each case matches the new case, ranking more predictive dimensions higher than others. Thus, in the MEDIATOR, similarity of disputant arguments is worth more than similarity of disputants, which in turn is worth more than similarity of the disputed objects. The case that is ranked highest is returned from memory for the case-based reasoner to use.
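The second and third steps can be sketched as follows (Python; the dimension names and weights are simplified stand-ins for the MEDIATOR's actual representations). Differences along predictive dimensions exclude a case; similarities, weighted by a fixed ordering of dimensions, rank the survivors.

```python
# Dimensions in fixed order, most predictive first.
RANKED_DIMENSIONS = ["arguments", "disputants", "disputed_object"]

def conflicts(case, new):
    """Step 2: exclude a case if, along a known predictive dimension,
    it differs from the new problem."""
    for dim in ("goal", "underlying_goal", "policy"):
        if dim in case and dim in new and case[dim] != new[dim]:
            return True
    return False

def rank(case, new):
    """Step 3: weight matches on more predictive dimensions more heavily."""
    score = 0
    for i, dim in enumerate(RANKED_DIMENSIONS):
        if dim in case and dim in new and case[dim] == new[dim]:
            score += len(RANKED_DIMENSIONS) - i
    return score

def select(candidates, new):
    survivors = [c for c in candidates if not conflicts(c, new)]
    return max(survivors, key=lambda c: rank(c, new), default=None)

korea = {"name": "Korean conflict", "goal": "possess", "disputed_object": "land"}
panama = {"name": "Panama Canal dispute", "goal": "possess", "disputed_object": "water"}
sinai = {"goal": "possess", "disputed_object": "land"}
print(select([korea, panama], sinai)["name"])   # -> Korean conflict
```

The toy data here anticipates the worked example that follows: neither candidate conflicts with the new dispute, so both survive Step 2, and the ranking step decides between them.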
An example will illustrate. It comes from the MEDIATOR's solution to the Sinai dispute. When the MEDIATOR is initially attempting to classify the Sinai dispute (beginning of the annotated example), memory locates two potentially applicable cases: the Panama Canal dispute, because both are disputes and the disputants were of type political entity (M-POLITY), and the Korean conflict, because both are disputes, the disputed object in both cases is land, and because in both, a military-force plan (M-MILITARY-FORCE) was used in previous attempts to achieve control of the disputed object. In Step 2 of the process, the MEDIATOR tries to exclude from that candidate set any cases which are inapplicable. Recall from above that in the MEDIATOR's domain, this means checking to make sure that disputant goals, goals underlying those goals, and planning policies, if they are known, do not conflict with the new case. In the Panama Canal dispute, the goals of both disputants were to possess the canal zone; the underlying reasons for those goals were political control for Panama and military control for the U.S., and a compromise planning policy was in order. In the Korean conflict, the goals of both disputants were to possess Korea; underlying reasons in both cases were political control, and a compromise planning policy was in order. Because, in the Sinai dispute (the new dispute), the goals of both disputants are to possess the Sinai, and because underlying goals and planning policies are not yet known, there are no conflicts between the Sinai dispute and the two retrieved cases that require either to be excluded from consideration. Therefore, in Step 3, both are evaluated for closeness of fit. The Korean conflict ranks higher because it is more similar on descriptive features: in both the Korean conflict and the Sinai dispute, the disputed object is a land mass, while in the Panama Canal dispute, the disputed object was a water mass. On all other evaluated dimensions, both cases were equally close.

Perhaps the most important thing illustrated by the MEDIATOR's method is the different use of differences and similarities between a previous case and a new problem. Differences between a recalled case and a new situation are used to make decisions about whether or not a case should be excluded, while similarities between the case and the problem are used for ranking (to decide which is the most appropriate case). Later programs that use differences and similarities similarly in choosing the best case are HYPO (Ashley & Rissland, 1988) and CASEY (Koton, 1988). The primary difference between the approaches in these two programs and the MEDIATOR's approach is that HYPO and CASEY both use the cases in memory to evaluate which differences and similarities to heed, while the MEDIATOR has a static set of features it pays attention to. In a sense, HYPO and CASEY use case-based reasoning to decide which features are salient.

The MEDIATOR's method is similar to that of later programs in the steps involved in selection. That is, all case-based reasoners have retrieval methods that include retrieval of partial matches, exclusion of some cases, and ranking to find a best case. While in most programs written up to now the steps continue to be done in that order, there are several experiments currently being done to find out whether efficiency can be gained by doing the steps simultaneously, for example, ARCS (Thagard, Holyoak, Nelson, & Gochfeld, in press) and MBR (Stanfill & Waltz, 1986). Though both seem more efficient, neither has yet been attempted with data as complex as the cases used in case-based reasoning systems. This is an area requiring additional research.
Another area which still requires considerable attention is exactly how to rank cases in order to find the best match. The MEDIATOR used a static similarity metric to do this. There are several problems with such a method (see Ashley & Rissland, 1988, for a more in-depth discussion). First, a static metric cannot take situational context into account. While, for any domain, a set of features that tend to be the most important ones can be chosen, in any particular context some of those features may turn out not to be important, while other features, which were not included, may be more important. When a previous case resulted in failure, for example, the features that were responsible for its failure become the most important. If they match in the new case, they strongly predict failure in that case. A similarity metric should take situational context into account. Second, the relative importance of features often depends on the problem solver's goals. When trying to form a hypothesis, those cases whose goals and constraints match those of the new problem are important. When evaluating a proposed solution, the plan proposed may be most important. And in some situations, features of the projected outcome are most important. A similarity metric should take problem-solver goals into account. Third, only some portions of a case might be important in ranking. The evaluation process should concentrate only on those parts of a case that are relevant to solving its current problem, and disregard unrelated parts of the case. If one is attempting to make an inference based on some part of a previous case, the entire case may not have to match the new case. Rather, those aspects of the previous situation which have a bearing on the inference that is about to be made are the only ones that need to be checked.

Other case-based reasoners address parts of the ranking problem, but none address all of its parts. Recent work by Ashley and Rissland on the HYPO project (Ashley & Rissland, 1988) explores a more dynamic similarity evaluation method. Cases are retrieved based on a number of dimensions, and each dimension has knowledge about its relative importance in different kinds of situations. The metric is thus more dynamic than the MEDIATOR's metric in that it takes situational context into account, but it does not take the problem solver's goals into account (since in HYPO they are always the same) and only weakly partitions the case. CHEF (Hammond, 1986a) uses the problem solver's goals in ranking by sending only a partial probe to the memory. The partial probe contains only those features of the case it is working on which have relevance to its current goal. Thus, while its evaluation criteria are set beforehand like the MEDIATOR's, it does take goals into account. It doesn't do a complete job of handling goals, however, since it selects out features of the probe before probing memory, never giving memory a chance to propose other features that might be important, based on its experience. Work on PARADYME (Kolodner, 1988; Kolodner & Thau, 1988) addresses all of these issues, but is only in its exploratory stages.
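These three desiderata can be stated compactly as a parameterized metric, as in the sketch below (Python). This is an illustration of the requirements only; none of the programs just discussed implements exactly this, and the weights, slot names, and example data are invented.

```python
def similarity(case, new, weights, relevant_slots):
    """weights: per-dimension importance, chosen per context and per
    reasoner goal (desiderata 1 and 2); relevant_slots: the portion of
    the case bearing on the inference about to be made (desideratum 3)."""
    score = 0.0
    for slot in relevant_slots:              # only relevant parts are examined
        if case.get(slot) == new.get(slot):
            score += weights.get(slot, 1.0)  # context- and goal-dependent weight
    return score

# When the recalled case failed, the features responsible for the failure
# dominate the weighting, so a match on them strongly predicts failure:
failure_weights = {"failure_feature": 10.0, "disputed_object": 1.0}
case = {"failure_feature": "divided-halves", "disputed_object": "orange"}
new = {"failure_feature": "divided-halves", "disputed_object": "lemon"}
print(similarity(case, new, failure_weights,
                 ["failure_feature", "disputed_object"]))   # -> 10.0
```

Under a static metric the two `weights` arguments would be the same constant table in every situation, which is exactly the limitation described above.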
4.3.2 Deciding Which Part of a Case to Focus On. Any case selected for case-based reasoning can be quite large, and can potentially generate many inferences. Some method of control is needed to insure that only appropriate inferences are made. There are two issues to discuss here. First, what is an appropriate inference? Second, how can the inference process be controlled so that only appropriate inferences are made?

The MEDIATOR's answer to the first question is that an appropriate inference is one that can help in the achievement of the reasoner's current goal. Thus, if the reasoner is attempting to choose an abstract plan, then inferences that can help in that choice are appropriate ones to make. The MEDIATOR controls the appropriateness of its case-based inferences by using its current reasoning goal to provide focus. In particular, it focuses on those parts of the previous case that achieved its current reasoning goal. To continue with the example of choosing an abstract plan for its new problem, it focuses on the abstract plan it chose in the previous case. Similarly, if it is attempting to explain a failure, it will look to see how the failure from the previous case was explained.8

In the MEDIATOR, this is a fairly easy way to control focus, since the solutions to most goals reside in the same slot in any case representation. One can find the abstract plan used in a particular case by looking at the "plan" slot of that case. Similarly, one can find the explanation of the failure in a case by looking at the explanation slot of the frame filling the remediation slot of the case. In general, subgoals need to know where to look in a case to find out how they were achieved, or there needs to be an easy way to find out how a subgoal was achieved in a previous case. CHEF (Hammond, 1986a), like the MEDIATOR, uses the first method. Carbonell (1986) uses the second method in work on derivational analogy. Cases are represented as a set of chunks, each one associated with a subgoal that was attempted while solving a problem. Each chunk records the subgoal that was to be achieved, the method by which it was achieved, the solution itself, and several other pieces of information. JULIA (Kolodner, 1987a) and PARADYME (Kolodner, 1988; Kolodner & Thau, 1988) do approximately the same thing. In any of these representations, one finds out how the subgoal was achieved by finding the appropriate chunk of the recalled case.
8. The MEDIATOR kept track of its solutions but not how it derived them. Several researchers have pointed out, however, the utility of keeping track of the methods by which a goal was achieved (Carbonell, 1986; Kolodner, 1987a; Birnbaum & Collins, 1988). Using the MEDIATOR's guidelines for focus in a case-based reasoner that keeps track of how it achieves its goals, focus would be on both the solution to the goal in the previous case and the method by which it was achieved. In the next step (that of actually making the case-based inference), a decision would be made as to which of these values to use in generating a solution for the new case.
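The slot-based focus method can be pictured as a table from reasoning goals to slot paths, as in the Python sketch below. The goal names and slot layout are hypothetical renderings of the frame representation described above, not the MEDIATOR's actual vocabulary.

```python
# Each reasoning goal names the slot (or path of nested slots) of the
# recalled case that achieved the corresponding goal previously.
GOAL_TO_SLOT = {
    "choose-abstract-plan": ("plan",),
    "explain-failure": ("remediation", "explanation"),   # nested slot path
}

def focus(case, reasoning_goal):
    """Walk the slot path for the current goal and return what is found."""
    value = case
    for slot in GOAL_TO_SLOT[reasoning_goal]:
        value = value.get(slot, {})
    return value or None

orange_case = {
    "plan": "divide by parts",
    "remediation": {"explanation": "misunderstood the disputants' goals"},
}
print(focus(orange_case, "choose-abstract-plan"))   # -> divide by parts
print(focus(orange_case, "explain-failure"))        # -> misunderstood ...
```

A chunk-based representation in the style of derivational analogy would replace the fixed table with a search over chunks, each keyed by the subgoal it achieved.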
4.3.3 Deriving Subgoals. It would be begging the question if the origin of a reasoner's subgoals were not discussed. In the MEDIATOR, they are built into the system. It knows it must first understand a problem by doing everything that entails, then generate a solution, then examine feedback and, if necessary, remediate. Although this makes the system inflexible, this approach was taken in the MEDIATOR so that issues more directly relevant to case-based reasoning could be concentrated on. Nevertheless, it is informative to consider other options that are available.

In essence, the MEDIATOR's method of control puts control of the case-based reasoner's subgoals in the hands of a problem-reduction problem solver. While this was fine for the MEDIATOR, a more flexible architecture would let the case-based reasoner itself help derive subgoals. In the JULIA project (Hinrichs, 1988; Kolodner, 1987a), a large part of the MEDIATOR's control is maintained, but made more flexible. The problem-reduction problem solver can either use general-purpose problem-reduction methods or take its reductions from a case. This is particularly important when a previous case points to the potential for failure. When that happens, the problem solver's goal sequence often has to be modified. Carbonell's ARIES (1986) always derives its subgoals from a case, once one is recalled, and uses problem-reduction methods (MEA) only when no case is available. Hammond's (1986a) CHEF, like the MEDIATOR, has its subgoals built in, but CHEF embodies a different philosophy of control. While the MEDIATOR, its descendants, and ARIES assume subgoals analogous to those normally found in goal-directed problem solvers (e.g., NOAH, GPS), CHEF's subgoals are tailored to case-based reasoning. Thus, CHEF's subgoals are of the form "anticipate potential problems," "adapt the old solution to fit the new case," and "repair the old solution." Subgoals thus do not break the problem into parts, but rather give guidelines to the problem solver about its reasoning processes. Were CHEF's methods to be used to solve problems that require decomposition, it would also need a way to derive that type of subgoal. It is clear that subgoals ought to be derived dynamically, based on a combination of (1) what cases suggest, (2) the reasoning process being used, and (3) general planning or reasoning strategies. Much research must still go into investigating the integration of these sources of knowledge.

4.3.4 Integrating Case-Based and Goal-Directed Reasoning. The previous discussion of focus and goal derivation suggests a general architecture for integrating case-based with goal-directed reasoning. First, associate with goals the subgoals they can be reduced to and/or the standard ways of achieving them (as is standard in a nonlinear problem solver), or provide a means of deriving subgoals (as, e.g., in means-ends analysis; Newell & Simon, 1972). Next, create processes for any appropriate reasoning methods
that might be necessary in addition to the case-based reasoner. What is necessary here are processes that allow problem-decomposition methods to work (e.g., a constraint propagator, a truth-maintenance system), and processes that can create solutions from scratch when no cases are available for guidance. A goal scheduler will post reasoning goals in a network for all processes to see. Let each of the reasoning processes have a turn at achieving the active subgoal(s). The case-based reasoner should go first, since it can provide shortcuts and warn of potential problems. From-scratch processes will be run when case-based reasoning cannot provide answers. Problem reduction is used when subgoals need to be reduced. Other processes will be run when necessary. A constraint propagator, for example, is appropriately run each time a part of a solution is accomplished, in order to make sure the rest of the solution will be consistent with it. A truth-maintenance system might be run as part of the case-based reasoner's consistency checks and, to maintain proper bookkeeping, each time a part of a solution is completed. And adaptation processes might be run when an entire solution has been created in this way, to make it into a better solution (Hinrichs, 1989).

The case-based reasoner should be a hybrid of those discussed here. For each case-based inference it attempts, it will have the subgoals of CHEF (anticipate, transfer, adapt, etc.) combined with the MEDIATOR's careful policy of checking consistency. The problem-reduction problem solver would take its cues from cases, when available, and from general goal/subgoal knowledge when cases could not help. When the case-based reasoner could not solve a problem, the problem reducer would see if a case were available to give it guidance in reducing the problem into parts; only if a case were not available would it use its goal/subgoal knowledge. In addition, the problem reducer, the case-based reasoner, or any from-scratch methods might have goals that need achieving as they do their work. Any may require some knowledge that is not available, for example. If this happened, they would post their goals to the goal scheduler.

This architecture is meant to provide a starting point for integrating case-based with goal-directed problem solving. It certainly does not solve all problems. For example, it is not clear, psychologically or in terms of efficiency, what the guidelines are for giving up on case-based reasoning and going back to first principles, or from-scratch methods. Thus, the working relationship between case-based reasoning and from-scratch methods or problem reduction is not clear. Also, psychologically, it is hardly parsimonious to have an architecture that does things in so many different ways. Why should an inference based on a case, for example, be any different from one made based on a general plan derived from experience? It shouldn't (Martin, 1988; Shinn, 1988; Turner, 1988; Turner & Cullingford, 1989). One must still focus on a part of the plan. One might still need to adapt something from the plan rather than use it directly.
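Before considering how to make this more parsimonious, the turn-taking control regime just described can be summarized in a skeletal sketch (Python). Every process and goal name here is invented, and real constituent processes would be far more elaborate; the point is only the control structure: goals go to a scheduler, and each process gets a turn, with the case-based reasoner first.

```python
from collections import deque

def run(initial_goals, processes):
    """processes: ordered list of (name, fn); fn(goal, post) returns a
    result or None. post() lets a process add new goals to the schedule.
    Goals no process can handle are simply dropped in this sketch."""
    goals, results = deque(initial_goals), {}
    while goals:
        goal = goals.popleft()
        for name, fn in processes:          # case-based reasoner is first in the list
            result = fn(goal, goals.append)
            if result is not None:
                results[goal] = (name, result)
                break
    return results

def case_based(goal, post):
    # Provides a shortcut when a recalled case applies.
    return "suggestion from a recalled case" if goal == "choose-plan" else None

def problem_reduction(goal, post):
    if goal == "solve-dispute":
        post("choose-plan")                 # reduce the problem into subgoals
        return "reduced"
    return None

def from_scratch(goal, post):
    return "first-principles answer"        # last resort

print(run(["solve-dispute"],
          [("case-based", case_based),
           ("reduction", problem_reduction),
           ("from-scratch", from_scratch)]))
```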
Several things are necessary for a more parsimonious architecture. First, one must be able to integrate goal/subgoal hierarchies and from-scratch methods (first principles) into the memory that organizes cases. Second, the memory that organizes cases should also organize "generalized cases," or plans, and recall them using the same methods that retrieve cases. The guideline here would be that more specific applicable knowledge takes precedence over less specific knowledge. Third, reasoning processes must be able to work based either on individual cases or on generalizations of cases. While no current case-based reasoner can do all of these things, there are guidelines for making them happen. Schank's Dynamic Memory (1982) and Kolodner's CYRUS (1983a, 1983b) provide guidelines for setting up such a memory. What has been discovered in building the MEDIATOR and other case-based reasoners provides guidelines for what needs to go into these memories. The MEDIATOR has processes that are consistent with such an architecture. And current research is addressing ways to make these guidelines more concrete for real-world problems (e.g., Martin, 1988; Shinn, 1988; Turner, 1988; Turner & Cullingford, 1989).

4.4 Case-Based Reasoning and Analogy

Much work in the past few years in the area of problem solving and knowledge acquisition has been devoted to analogy (e.g., Burstein, 1983; Gentner, 1982; Holyoak, 1985; Holyoak & Thagard, 1989; Ross, 1982, 1989), both within psychology and artificial intelligence. One set of researchers investigating analogy (Burstein, 1983; Gentner, 1982; Greiner, 1985; Winston, 1980) is examining analogies made across domains for the purpose of automatic knowledge acquisition in the new domain. In general, the analogous concept is given by the equivalent of a teacher, and the computer's job is to set up the mapping between the pairs and learn a new concept in the target domain. Another set of researchers is working on analogical problem solving (Gick & Holyoak, 1983; Holyoak, 1985; Holyoak & Thagard, 1989; Ross, 1989; Winston, 1980). In analogical problem solving, one tries to solve a new problem by comparing the problem specification to some old problem, and then solving the new problem based on the mapping that can be made between the two problem specifications. Holyoak, for example, has experimental subjects read a story about capturing a fortress. Holyoak later gives them the problem of eradicating a tumor with X-rays. In both, a unified assault is impossible; the soldiers would set off mines, while the X-rays would kill healthy cells in the path to the tumor. One can solve the medical problem by mapping armies to X-rays, fortress to tumor, and capture to eradicate in the problem statement. Based on this mapping and the solution statement to the war story (surround the fortress and get to it from all sides with small numbers of soldiers in each group), the medical problem is solved by, in
essence, reinstantiating the solution to the army problem using the new mapping.

Analogies are made in doing case-based reasoning, and case-based reasoning is a method for doing analogical problem solving. Case-based reasoning, however, is a novel form of analogical problem solving. In general, researchers working on case-based reasoning have tended to work on a different set of problems than those working on analogical problem solving. While researchers in analogy have tended to investigate mapping quite extensively, researchers working on case-based reasoning have concentrated more on case selection. While analogy researchers investigate the creation of schemata during analogy making, researchers working on case-based reasoning investigate adaptation and the role of the problem solver in guiding transfer.

Why? One major difference might have to do with the particular problems being tackled by the two groups. Researchers in analogical problem solving have tended to examine analogies made across domains (e.g., between a war problem and a medical problem), while researchers working on case-based reasoning have tended to concentrate on within-domain analogies (e.g., two different disputes). When analogies are made within a domain, the two problems have similar representations, and mapping is relatively easy; descriptors playing the same functional role are mapped to each other. When analogies are across domains, however, the mapping among problem parts is not as obvious. Thus, investigations of cross-domain analogical problem solving have necessarily concentrated on mapping. In addition, researchers working on across-domain analogy have found that subjects have trouble remembering analogous cases (Holyoak, 1985), although they are able to use them once reminded by the experimenter of the analogous case. Based on these experimental results, researchers working on analogical problem solving have not felt the need to investigate case recall.

That is not to say that case-based reasoning cannot handle cross-domain analogies. Although the MEDIATOR does not address this issue, several current investigations do (e.g., Birnbaum & Collins, 1988; Kass & Leake, 1988; Owens, 1988). In general, these researchers are looking for a representational vocabulary which describes problems across domains. Chess games and battlefield maneuvers, for example, can both be expressed through a vocabulary of adversarial situations and counterplanning strategies. If such a vocabulary can be discovered, the same recall and adaptation techniques used for within-domain case-based reasoning can be used across domains. Mapping would be fairly straightforward, since at some abstract level both cases would be represented similarly. And the representational vocabulary would provide predictions about which cross-domain analogies could be recalled spontaneously and which ones would be difficult.

Another difference is in the complexity of the problems being addressed by the two sets of researchers. Tasks being studied by analogical
problem-solving researchers are, in general, not as complex as those being studied by researchers working on case-based reasoning. The problems studied in case-based reasoning often require that the problem be solved in parts and usually require a fair amount of adaptation. Those studied by researchers working on analogical problem solving, on the other hand, usually require only reinstantiation. When problems must be solved in parts, goal-oriented processing must be brought into the framework. Recent experimental work by Seifert (1988) shows that goals play a large role in guiding analogical problem solving in real-world domains.

In addition, solutions to problems in domains being studied in case-based reasoning usually require interaction with the real world. Researchers involved in investigating how to solve problems under these circumstances must necessarily deal with unpredictability. Thus, in case-based reasoning, the focus is on anticipating and avoiding mistakes, gathering feedback, and explaining failures, issues not addressed by researchers working on analogical problem solving.

Furthermore, researchers working on case-based reasoning have tended to require more of their models. Not only does the MEDIATOR require an explanation of analogical processes, but also that the model explain how the analogous case is recalled. In fact, work on recall preceded work on adaptation within the case-based reasoning community (see Kolodner, 1983a, 1984; Schank, 1982). This is one area where researchers working on analogical problem solving have learned from case-based reasoning work. Recent psychological work on analogical problem solving begins to address the mechanics of recall (e.g., Ratterman & Gentner, 1987; Thagard et al., in press).

Some research in the area of analogical problem solving does approach the same problems addressed in case-based reasoning, and it deserves separate attention. That is Ross's (1982, 1989) investigation of the use of analogy in learning new tasks (e.g., learning a text editor or learning to do probability word problems). Like those tasks chosen by researchers working on case-based reasoning, these are within-domain, real-world tasks that require feedback. This work is particularly important in discussing case-based reasoning because it lends psychological validity to the work being done in case-based reasoning.

As in work on case-based reasoning, Ross is focusing on the use of cases that are representationally quite close. Though his focus is on learning rather than problem solving, there are some interesting observations which can be made based on Ross's work. While much work on analogical problem solving has focused on mapping, Ross has not found the need to do this. This is probably because the problems Ross is looking at are close to each other representationally, as are the MEDIATOR's problems. In experiments, Ross has found that people are easily reminded of previous similar cases, use them easily in solving problems, and that the cases provide a decided advantage in problem solving (at least in early skill learning). By
showing that remembering structurally similar problems is a spontaneous and useful process, Ross's work gives credibility to case-based reasoning as a psychological process. The details of the MEDIATOR's methods, however, have not been investigated.

4.5 Case-Based Reasoning and Learning

Up to now, this article has presented a set of problem-solving and support processes that allow a reasoner to make inferences from previous cases. These methods, though problem-solving methods, are methods by which a problem solver can improve its performance. Case-based reasoning is a reasoning method that makes use of lessons learned by remembering. By remembering previous experiences and applying case-based reasoning to them, the problem solver improves its performance in several ways:

1. It can avoid mistakes made previously.
2. It can anticipate problems that occurred previously.
3. It can shortcut the problem-solving process.

Several examples of the MEDIATOR shortcutting its problem-solving process have been shown. The MEDIATOR avoids mistakes made previously when it recalls the successful version of its solution to some problem that took several trials to resolve (e.g., it is reminded of the successful version of the orange dispute when solving a new problem). In this case, the successful solution provides a suggestion of a solution to the new problem that avoids the trial and error that went into solving the previous one. A previous solution can also avoid the potential for trial-and-error problem solving, as when the MEDIATOR is explaining its failures.

While a case-based reasoner learns primarily by remembering, it cannot live in a vacuum separate from other learning procedures. Certainly, it is more efficient to create generalizations covering cases that are similar to each other than to store huge unorganized sets of cases in memory and choose the best of a set of similar cases each time reasoning is done. The whole range of generalization methods is necessary for that, including similarity-based inductive methods, explanation-based generalization methods (DeJong & Mooney, 1986; Mitchell, Keller, & Kedar-Cabelli, 1986), and their combinations (Lebowitz, 1986). Equally necessary are blame-assignment processes that allow learning from reasoning failures by first explaining what went wrong and then, if possible, generalizing those explanations.

In essence, the memory structures created by a case memory provide guidelines to learning processes. (See Kolodner, 1983b, 1984, for an explanation of organizing cases in a memory.) Inductive methods, for example, need to be controlled so that only productive inductive inferences are made. Cases that are indexed similarly alert an inductive learner to potentially appropriate inductive inferences. Explanation-based generalization methods
work when an explanation can be created for some situation; but if knowledge is incomplete and an explanation cannot be created from the first occurrence, indexing provides a way of noticing the recurrence of the same or similar failures, providing additional knowledge for an explainer to use in order to find an explanation eventually. At the same time, experience can be relied upon even when no explanation has yet been found. And when situations can be explained, EBG methods provide guidelines for choosing the right indexes for these cases, so that recurrence of the situation can be recognized.

Combining case-based reasoning with other learning methods has several advantages. It allows an inductive generalizer to put off making generalizations until it knows they will be worthwhile. It allows a generalizer to create useful similarity-based explanations when full causal knowledge is not available. And it allows a generalizer to tailor its generalizations to the needs of the reasoner and to make several generalizations from the same problem. At the same time, it provides the potential to compile experience so that problem solving can often be an exercise in applying specialized schemas.9 A reasoner that can combine case-based reasoning with appropriate learning methods, that can store and access its cases and generalizations in the same memory, that can decide which is most applicable to solving a new case, and that can apply its methods to either cases or their generalizations, will indeed be a powerful problem solver.

9. See Shinn (1988) for an example of a problem solver that begins to do this.
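The way similarly indexed cases can alert an inductive learner is easy to picture in miniature, as in the Python sketch below. The indexing scheme, the threshold, and the example cases are all invented for illustration: only when enough cases accumulate under one index does the learner propose a generalization, built from the features those cases share.

```python
from collections import defaultdict

class CaseMemory:
    def __init__(self, threshold=2):
        self.by_index = defaultdict(list)
        self.threshold = threshold
        self.generalizations = {}

    def store(self, indexes, case):
        for ix in indexes:
            self.by_index[ix].append(case)
            cases = self.by_index[ix]
            if len(cases) >= self.threshold and ix not in self.generalizations:
                # Induce only over features shared by every case filed
                # under this index; induction is thus deferred until
                # recurrence suggests it will be worthwhile.
                shared = set.intersection(*(set(c.items()) for c in cases))
                self.generalizations[ix] = dict(shared)

mem = CaseMemory()
mem.store(["divide-failure"], {"plan": "divide equally", "outcome": "failed"})
mem.store(["divide-failure"], {"plan": "divide equally", "outcome": "failed",
                               "object": "candy bar"})
print(mem.generalizations["divide-failure"])
# -> {'plan': 'divide equally', 'outcome': 'failed'}  (key order may vary)
```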
5. CONCLUDING REMARKS
There are, of course, many topics not covered completely here that must be investigated to develop the case-based reasoning methodology fully. First, better ways of integrating case-based reasoning into more general problem-solving frameworks are needed. In particular, criteria must be developed to evaluate the feasibility of case-based reasoning for resolving any given reasoning subgoal. Second, more consideration must be given to integrating learning into case memories. This may require new methodologies for doing explanation-based learning, since many domains where case-based reasoning is useful are not well enough understood to allow full explanations. Third, indexing and retrieval algorithms must be studied more systematically. If case-based reasoners are to be useful, their memories will have to retrieve cases within a reasonable period of time, and the cases they retrieve will have to be the most appropriate ones. Related to this, it may be necessary to find out more about how a case memory can be actively explored during problem solving, for example, in hypothesis making. And it may make sense to investigate the effects of real parallelism in searching the case memory. Fourth, if the reasoners are
to improve their reasoning methods as well as their domain knowledge, ways of representing reasoning procedures will have to be devised, so that they too can be examined and improved.

The MEDIATOR illustrates several important aspects of case-based reasoning:

- It integrates the use of several cases in case-based reasoning.
- It shows how subgoals can be used to control focus within a case.
- It provides a set of steps for choosing the best out of a set of partially matching cases.
- It provides the basis for integrating case-based reasoning with a goal-directed problem solver.
There are also a number of claims that can be made about case-based reasoning based upon this investigation.

1. Case-based reasoning is useful for tasks requiring long chains of inference because it can shortcut the long inference chain.
2. Case-based reasoning focuses a reasoner on appropriate parts of its current problem.
3. Case-based reasoning helps a problem solver anticipate and avoid previous mistakes.
4. Case-based reasoning is broadly applicable to a wide variety of inference tasks.
Original Submission Date: March 2, 1989.
REFERENCES

Alterman, R. (1988). Adaptive planning. Cognitive Science, 12, 393-421.
Ashley, K., & Rissland, E. (1987). Compare and contrast, a test of expertise. In Proceedings of the American Association of Artificial Intelligence (pp. 273-278). Seattle, WA.
Ashley, K., & Rissland, E. (1988). Waiting on weighting: A symbolic least commitment approach. In Proceedings of the American Association of Artificial Intelligence (pp. 239-244). St. Paul, MN.
Barletta, R., & Mark, W. (1988, May). Explanation-based indexing of cases. In J.L. Kolodner (Ed.), Proceedings of the DARPA Case-Based Reasoning Workshop (pp. 50-60). San Mateo, CA: Morgan-Kaufmann Publishers, Inc.
Birnbaum, L., & Collins, G. (1988, May). The transfer of experience across planning domains through the acquisition of abstract strategies. In J.L. Kolodner (Ed.), Proceedings of the DARPA Case-Based Reasoning Workshop (pp. 61-79). San Mateo, CA: Morgan-Kaufmann Publishers, Inc.
Burstein, M.H. (1983). A model of learning by analogical reasoning and debugging. In Proceedings of the National Conference on Artificial Intelligence (pp. 45-48). Washington, DC.
Carbonell, J.G. (1986). Derivational analogy in problem solving and knowledge acquisition. In R. Michalski, J. Carbonell, & T. Mitchell (Eds.), Machine learning: Vol. II. San Mateo, CA: Morgan-Kaufmann Publishers, Inc.
Davis, R. (1977). Interactive transfer of expertise: Acquisition of new inference rules. In Proceedings of the International Joint Conference on Artificial Intelligence (pp. 321-328). Cambridge, MA.
DeJong, G., & Mooney, R. (1986). Explanation-based learning: An alternative view. Machine Learning, 1, 145-176.
Fikes, R., Hart, P., & Nilsson, N. (1972). Learning and executing generalized robot plans. Artificial Intelligence, 3, 251-288.
Fisher, R., & Ury, W. (1981). Getting to yes. Boston: Houghton Mifflin.
Friedland, P. (1979). Knowledge-based experiment design in molecular genetics (Tech. Rep. No. 79-771). Stanford, CA: Stanford University, Computer Science Dept.
Gentner, D. (1982). Structure mapping: A theoretical framework for analogy and similarity. In Proceedings of the Fourth Annual Conference of the Cognitive Science Society (pp. 13-15). Ann Arbor, MI.
Gick, M., & Holyoak, K. (1983). Schema induction and analogical transfer. Cognitive Psychology, 14.
Greiner, R. (1985). Learning by understanding analogies. Unpublished doctoral dissertation, Computer Science Department, Stanford University, Stanford, CA.
Hammond, K. (1986a). Case-based planning: An integrated theory of planning, learning, and memory. Unpublished doctoral dissertation, Dept. of Computer Science, Yale University, New Haven, CT.
Hammond, K. (1986b). CHEF: A model of case-based planning. In Proceedings of the American Association of Artificial Intelligence (pp. 267-271). Philadelphia, PA.
Hammond, K. (1987, August). Explaining and repairing plans that fail. In Proceedings of the International Joint Conference on Artificial Intelligence (pp. 109-114). Milan, Italy.
Hammond, K. (1988, May). Opportunistic memory: Storing and recalling suspended goals. In J.L. Kolodner (Ed.), Proceedings of the DARPA Case-Based Reasoning Workshop (pp. 154-168). San Mateo, CA: Morgan-Kaufmann Publishers, Inc.
Hammond, K., & Hurwitz, N. (1988, May). Extracting diagnostic features from explanations. In J.L. Kolodner (Ed.), Proceedings of the DARPA Case-Based Reasoning Workshop (pp. 169-178). San Mateo, CA: Morgan-Kaufmann Publishers, Inc.
Hayes-Roth, B., & Hayes-Roth, F. (1979). A cognitive model of planning. Cognitive Science, 3, 275-310.
Hinrichs, T. (1988, May). Towards an architecture for open world problem solving. In J.L. Kolodner (Ed.), Proceedings of the DARPA Case-Based Reasoning Workshop (pp. 182-189). San Mateo, CA: Morgan-Kaufmann Publishers, Inc.
Hinrichs, T. (1989, May). Strategies for adaptation and recovery in a design problem solver. In Proceedings of the DARPA Case-Based Reasoning Workshop (pp. 115-118). San Mateo, CA: Morgan-Kaufmann Publishers, Inc.
Holyoak, K. (1985). The pragmatics of analogical transfer. In G. Bower (Ed.), The psychology of learning and motivation. Orlando, FL: Academic.
Holyoak, K., & Thagard, P. (1989). A computational model of analogical problem solving. In S. Vosniadou & A. Ortony (Eds.), Similarity and analogical reasoning. Cambridge, England: Cambridge University Press.
Kass, A., & Leake, D. (1988, May). Case-based reasoning applied to constructing explanations. In J.L. Kolodner (Ed.), Proceedings of the DARPA Case-Based Reasoning Workshop (pp. 190-208). San Mateo, CA: Morgan-Kaufmann Publishers, Inc.
Kolodner, J. (1982, August). The role of experience in development of expertise. In Proceedings of the American Association for Artificial Intelligence (pp. 273-277). Pittsburgh, PA.
Kolodner, J. (1983a). Reconstructive memory: A computer model. Cognitive Science, 7, 243-280.
Kolodner, J.L. (1983b).
Maintaining organization in a conceptual memory for events. Cognitive Science, 7, 281-328.
Kolodner, J.L. (1984). Retrieval and organizational strategies in conceptual memory: A computer model. Hillsdale, NJ: Erlbaum.
Kolodner, J. (1987a, June). Extending problem solver capabilities through case-based inference. In Proceedings of the 1987 International Machine Learning Workshop (pp. 167-178). Irvine, CA.
Kolodner, J. (1987b, August). Capitalizing on failure through case-based inference. In Proceedings of the Conference of the Cognitive Science Society (pp. 715-726). Seattle, WA.
Kolodner, J. (1988, May). Retrieving events from a case memory: A parallel implementation. In J.L. Kolodner (Ed.), Proceedings of the DARPA Case-Based Reasoning Workshop (pp. 233-249). San Mateo, CA: Morgan-Kaufmann Publishers, Inc.
Kolodner, J.L., & Thau, R. (1988). Design and implementation of a case memory (Tech. Rep. No. GIT-ICS-88/34). Atlanta, GA: Georgia Institute of Technology, School of Information and Computer Science.
Koton, P. (1988, August). Reasoning about evidence in causal explanations. In Proceedings of the American Association for Artificial Intelligence (pp. 256-261). St. Paul, MN.
Lebowitz, M. (1986). Concept learning in a rich input domain: Generalization-based memory. In R. Michalski, J. Carbonell, & T. Mitchell (Eds.), Machine learning: Vol. II. San Mateo, CA: Morgan-Kaufmann Publishers, Inc.
Martin, J. (1988, August). CORA: A best match memory for case storage and retrieval. In Proceedings of the American Association for Artificial Intelligence Case-Based Reasoning Workshop (pp. 88-94). Minneapolis-St. Paul, MN.
Mitchell, T., Keller, R., & Kedar-Cabelli, S. (1986). Explanation-based generalization: A unifying view. Machine Learning, 1, 47-80.
Newell, A., & Simon, H. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall.
Owens, C. (1988, May). Domain-independent prototype cases for planning. In J.L. Kolodner (Ed.), Proceedings of the DARPA Case-Based Reasoning Workshop (pp. 302-311). San Mateo, CA: Morgan-Kaufmann Publishers, Inc.
Raiffa, H. (1982). The art and science of negotiation. Cambridge, MA: Harvard University Press.
Ratterman, M.J., & Gentner, D. (1987, August). Analogy and similarity: Determinants of accessibility and inferential soundness. In Proceedings of the Ninth Annual Conference of the Cognitive Science Society (pp. 23-35). Seattle, WA.
Ross, B. (1982). Remindings and their effects in learning a cognitive skill (Tech. Rep. No. CIS-19). Palo Alto, CA: Xerox Palo Alto Research Center.
Ross, B. (1989). Remindings in learning and instruction. In S. Vosniadou & A. Ortony (Eds.), Similarity and analogical reasoning. Cambridge, England: Cambridge University Press.
Sacerdoti, E. (1977). A structure for plans and behavior. Amsterdam: Elsevier North-Holland.
Schank, R. (1982). Dynamic memory. Cambridge, England: Cambridge University Press.
Schank, R. (1986). Explanation patterns. Hillsdale, NJ: Erlbaum.
Schank, R., & Abelson, R. (1977). Scripts, plans, goals, and understanding. Hillsdale, NJ: Erlbaum.
Seifert, C. (1988, May). Goals in reminding. In J.L. Kolodner (Ed.), Proceedings of the DARPA Case-Based Reasoning Workshop (pp. 357-369). San Mateo, CA: Morgan-Kaufmann Publishers, Inc.
Shinn, H. (1988, May). Abstractional analogy: A model of analogical reasoning. In J.L. Kolodner (Ed.), Proceedings of the DARPA Case-Based Reasoning Workshop (pp. 370-387). San Mateo, CA: Morgan-Kaufmann Publishers, Inc.
Simpson, R. (1985). A computer model of case-based reasoning in problem solving.
Doctoral dissertation, School of Information and Computer Science, Georgia Institute of Technology, Atlanta, GA.
Stanfill, C., & Waltz, D. (1986). Toward memory-based learning. Communications of the ACM, 29, 1213-1228.
Sycara, K. (1987). Resolving adversarial conflicts: An approach integrating case-based and analytic methods. Doctoral dissertation, School of Information and Computer Science, Georgia Institute of Technology, Atlanta, GA.
Sycara, K. (1988, May). Using case-based reasoning for plan adaptation and repair. In J.L. Kolodner (Ed.), Proceedings of the DARPA Case-Based Reasoning Workshop (pp. 425-434). San Mateo, CA: Morgan-Kaufmann Publishers, Inc.
Thagard, P., Holyoak, K., Nelson, G., & Gochfeld, D. (in press). Analog retrieval by constraint satisfaction. Artificial Intelligence.
Turner, E., & Cullingford, R. (1989). Using conversation MOPs in natural language processing. Discourse Processes, 12, 63-90.
Turner, R. (1988, May). Organizing and using schematic knowledge for medical diagnosis. In J.L. Kolodner (Ed.), Proceedings of the DARPA Case-Based Reasoning Workshop (pp. 435-446). San Mateo, CA: Morgan-Kaufmann Publishers, Inc.
Wilensky, R. (1983). Planning and understanding: A computational approach to human reasoning. Reading, MA: Addison-Wesley.
Wilkins, D. (1984). Domain-independent planning: Representation and plan generation. Artificial Intelligence, 22, 269-301.
Winston, P. (1980). Learning and reasoning by analogy. Communications of the ACM, 23, 689-703.