Expert systems: an alternative paradigm

Int. J. Man-Machine Studies (1984) 20, 21-43

MIKE COOMBS AND JIM ALTY

Department of Computer Science, University of Strathclyde, Glasgow G1 1XH, Scotland, U.K.

There has recently been a significant effort by the A.I. community to interest industry in the potential of expert systems. However, this has resulted in far fewer substantial applications projects than might be expected. This article argues that this is because human experts are rarely required to perform the role that computer-based experts are programmed to adopt. Instead of being called in to answer well-defined problems, they are more often asked to assist other experts to extend and refine their understanding of a problem area at the junction of their two domains of knowledge. This more properly involves educational rather than problem-solving skills. An alternative approach to expert system design is proposed based upon guided discovery learning. The user is provided with a supportive environment for a particular class of problem, the system predominantly acting as an advisor rather than directing the interaction. The environment includes a database of domain knowledge, a set of procedures for its application to a concrete problem, and an intelligent machine-based advisor to judge the user's effectiveness and advise on strategy. The procedures focus upon the use of user-generated "explanations" both to promote the application of domain knowledge and to expose understanding difficulties. Simple database PROLOG is being used as the subject material for the prototype system, which is known as MINDPAD.

© 1984 Academic Press Inc. (London) Limited

1. Experts and expertise

Over recent years there has been a significant effort by the A.I. community on both sides of the Atlantic to awaken industry to the potential of knowledge-based systems. As a result, many large companies have set up working groups, industry is well represented at A.I. conferences, and universities have run well-attended expert systems tutorials. There is ample evidence of interest. However, all this activity has resulted in far fewer substantial applications projects than might be expected. The authors have recently been involved in setting up two expert system projects within the very different application areas of paint selection for industrial machinery and computer-user support. In both of these projects, preparatory investigations have revealed the same basic difficulty in applying expert systems technology, namely that human experts are rarely called upon to perform the role that current computer-based experts are programmed to adopt. Rather than acting as "wise-men" called upon to produce solutions to complex--but essentially well-defined--problems, human experts in these domains are more often asked to provide conceptual guidance to other experts in adjacent fields to enable them to solve problems for themselves. Instead of being valued as diagnosticians, experts in the above domains are employed for their ability to assist colleagues to extend and refine their understanding of the problem area at the junction of the two fields of knowledge. In the case of the paint selection system, for example, this might involve the relationship between a food processing expert's knowledge of the toxicity of certain paint constituents and a paint
expert's knowledge of the adhesive properties of such constituents. The promotion of understanding is achieved by such activities as:
(a) providing relevant contextual information;
(b) focusing attention on important topics in the subject area;
(c) helping to predict outcomes of given processing circumstances.
Many of these functions involve educational rather than traditional problem-solving skills (for example, Newell & Simon, 1972) and their support appears to be conceptually outside the range of current expert systems. Expert systems are designed to achieve known and clearly defined solutions to a well-circumscribed class of problem. This is true both of systems which encode the loosely structured, empirical knowledge of experts [for example, MYCIN (Shortliffe, 1976) and DENDRAL (Feigenbaum, Buchanan & Lederberg, 1971)], and systems which reason from some representation of the "causal" relationships underlying the problem domain [for example, CASNET (Kulikowski & Weiss, 1982) and Davis (1983)]. The above programs are all clearly problem-solvers in that their sole objective is to achieve known and clearly defined solutions to a well-circumscribed class of problem. MYCIN's purpose is to diagnose and recommend treatments for bacterial infections of the blood; DENDRAL analyses and evaluates spectrographs and selects the most likely composition of substances. Moreover, as problem-solvers, their goals remain the same each time they are used (i.e. to recommend treatments, to name substances). As we argue in detail below, the critical difference between the role of an expert as a problem-solver and as an advisor is that, while the former focuses upon the process of obtaining a concrete, communicable solution to a problem, the latter is primarily concerned with the enrichment of the user's understanding of a problem area and the development of his skills at handling that area.
As an advisor, the expert is expected to support a colleague's personal problem-solving, particularly at the junction between their two areas of expertise, and to help him decide what questions should be asked and how to look for answers. Implemented as a knowledge-based computer system, a guidance program for the areas we have studied would need to reverse the priorities of a conventional diagnostic expert system, focusing upon the elaboration of content relevant to problem-solving rather than on the solution itself. The knowledge-base would need to be highly detailed, containing many different types of information (theoretical, empirical, pragmatic), and open to changes throughout a guidance session. It must also be possible for the knowledge to be applied to novel goals generated by the user, enabling him to obtain answers to a wide range of different types of question, including: what would happen if X? why would X happen? how could X be prevented? what are the critical factors in X? Formal approaches to answering such questions have been made in the development of A.I. text-understanding systems (Lehnert, 1978), but have yet to be developed in a guidance context. A number of attempts have been made to extend diagnostic agents to enable them to support an advisory role. However, such efforts have revealed a number of substantial


difficulties. A principal one of these concerns the content and structure of the knowledge-base, and is illustrated persuasively by the GUIDON project (Clancey, 1979). Following the success of MYCIN as a diagnostic program, it was decided to develop it into a tutorial system using the original knowledge-base in the new role of guiding and evaluating a student's learning. However, it was found that much of the knowledge required by a person attempting to learn how to problem-solve within the subject area (as distinct from using the system to support his problem-solving) proved to be implicit within the diagnostic rules, and so not available for inspection. It thus proved necessary to augment substantially the knowledge-base, including new information about subject primitives, the structure of concepts and suitable learning strategies [see NEOMYCIN (Clancey & Letsinger, 1981)]. A second source of difficulty concerns system architectures. The TEIRESIAS interface to MYCIN (Davis & Lenat, 1982), for example, was intended to provide an adaptable explanation system to help both user and subject expert to understand the system's reasoning. However, much of the power of TEIRESIAS depended upon the highly structured nature of content and the use of meta-rules for control. It might be suggested that the need for a highly structured subject domain places conceptual limits on the approach, and the extensive use of meta-rules could eventually be self-defeating by making the implications of conceptual relations within the system opaque. This is obviously not desirable in a system with the express purpose of developing understanding. Early in 1982, the authors decided to explore possible designs for an intelligent computer-based guidance system.
The system--MINDPAD--would be explicitly concerned with supporting the conceptual aspects of a user's efforts at problem-solving; providing an environment in which the user may achieve a better understanding of his problem, and in so doing be equipped to solve it. The basic requirements for this, and some principles for its provision, emerged during a study of interactive problemsolving between university computer users and professional advisors undertaken by the authors and discussed below (Alty & Coombs, 1980, 1981; Coombs & Alty, 1980, 1982).

2. Expert guidance: results of the Advisory Service Study

The Advisory Service Study was set up with the explicit objective of improving the effectiveness of computing advisory services in university computer centres. These services have been established in most centres to provide some measure of personal support in computing. This is necessary because of the very wide range of expertise found in any population of university users and the desire of many researchers to tap the power of the computer with as little investment in fundamental training as possible. Any realistic support for such people must be able to adapt to the needs of individuals, a goal which has proven difficult to achieve with conventional documents and automated "help" facilities (Alty & Coombs, 1980). In order to study interactions, random samples of conversations were recorded at five university computer centres. Soon after recording, the participants were debriefed in depth, with reference to both the conversation text and the relevant documentary material. The debriefing was goal based, focusing upon such factors as the general

motivation(s) for the session, the goal(s) behind each individual utterance, and the success with which goals were achieved (Coombs & Alty, 1980). Interactions took place at the user's initiative and without appointment. They were mainly concerned with the diagnosis and correction of failure in some item of software, this frequently involving in some way the user's own program. The success rate for both diagnosis and correction was high (around 80%), and was normally achieved on the first visit during an interaction lasting not more than 10 minutes. Viewed in retrospect, a striking feature of the majority of advisory conversations was the extent to which they resembled the style of interaction supported by rule-based systems. Given that such interactions were often considered unsatisfactory by users in spite of a high success rate, and given the contrast of a few interactions with a different style which were considered most satisfactory, it was decided to review the advisory study data in more detail for guidance on the design of our system. The interactions considered unsatisfactory proved to be most strongly controlled by the advisor, the user only being required to supply information where it could not be obtained elsewhere. There was also little feedback to the user after questions and no explicit account of how the information was to be used for reasoning. In general, all reasoning was covert, unless some explanation was explicitly requested. Advisors made no attempt to educate users. Conversations lacked any elaboration of the facts or terms employed by advisors (e.g. explanation of the error message "OVERFLOW" in terms of the methods used to store numbers in computers), contained no review of the methods used to obtain the successful diagnosis, and no review of the solution. The effect of the above style emerged clearly during debriefing.
First, the problem solved was sometimes a restricted (often the most concrete) portion of what was a much broader problem, with many conceptual ramifications. As might be expected, this was most frequently the case with inexperienced users who did not have the knowledge to verbalize the difficulty they "felt" themselves to have, and so resorted to asking something close and within their conceptual scope. The least well-prepared users were thus left to themselves both to define their real problem and to generalize the solution to it. Secondly, users were often left in doubt as to the correctness of their response to a question, given that advisors often did not appear to use the information provided. Because of this, these questions proved less useful as anchors to aid in their learning of successful reasoning than might have been expected. Thirdly, advisors failed to elaborate the context within which the advice could be understood, simply giving a bold assertion of the problem and the steps necessary for its solution. This made both recall of the solution and the reconstruction of reasoning very difficult. As mentioned above, a small number of interactions were characterized by a very different style and did not suffer from these difficulties. It is features of these sessions that we have incorporated into the design of our computer-based guidance system. They usually took place between a fairly experienced user and a local computer expert who had a passing knowledge of the problem domain but no specific knowledge about the particular problem in hand. The two participants thus shared the roles of advisor and client, each within their respective area of expertise. However, this resulted in conversations which were notable for their apparent lack of structure, both participants making contributions in a loose and uneconomic manner. Often, much more information was given than appeared to be strictly relevant.
Nevertheless, analysis of the goals of participants collected at debriefing indicated that this class of interaction was


well-motivated despite surface appearances, the objective not being strict problem-solving as we had assumed, but problem-solving through mutual understanding. This required sensitivity to different structural factors. The most important factor in this second group of interactions was the extent to which both advisors and users made public and explicit much of what was covert and implicit in the majority of sessions. This had two effects. First, it was possible for participants to monitor each other throughout the interaction, and to provide effective feedback. Secondly, the public nature of the session had the side-effect of forcing a greater degree of precision in all aspects of the interaction and of forcing participants into a "deeper" understanding of their mutual information and inferential needs during the solution of the problem. With reference to the improved feedback, this proved to help participants to develop a better understanding of the process of solution. With reference to the public nature of the session, there were many additional advantages. However, most importantly these included the "focusing" of relevant domain facts onto the problem and the clarification of processing goals (the two participants often stepping outside each other's problem view to suggest alternative, and perhaps more productive, approaches). Finally, the very act of participation appeared to help develop a better shared understanding of the problem and its structure. Both participants were often able to gain more fully from the session by abstracting computing facts and problem-solution methods of some general value. In order to make the transformation from the successful advisory sessions to a computer-based guidance system, it is necessary to isolate the fundamental cognitive procedures underlying interactions.
To do this systematically requires some theory of the role of conceptualization and understanding in problem-solving, which will be approached in the next section. However, it is possible to make some headway by asking three questions of the conversations.
(a) Why does the user need to seek help in the first place?
(b) Is there an optimum interaction for a given class of problem?
(c) How may a user's problem-solving skills be increased as a by-product of interactions?
With regard to user motivation, it was found that problems were more often conceptual than procedural; the user lacked, and failed to develop while working alone, adequate concepts both for defining problem goals and for employing them towards a solution. While we do not claim that inadequate procedures were never at fault, problems were often solvable employing skills the user already possessed with a richer set of concepts. Moreover, it was observed that users frequently lacked the ability to develop appropriate concepts in the problem area because they were unaware of what the concepts would look like. For example, a user of the PROLOG language was unable to develop the concept of "backtracking" because he saw variable instantiation as a permanent act. Given that all new concepts may be argued to be derived from existing concepts (see section 4), this points to some deficiency at a meta-skill level (i.e. procedures for forming concepts appropriate to a given class of problem and problem domain). Difficulties in this area may include the use of an unsuitable "language" for representing the problem (e.g. it is better to think of the process of tree-passing in diagrammatic


rather than verbal terms), the overgeneralization of a concept that has failed to achieve a solution (the new concept may allow the problem to be "solved", but in a manner that is not sufficiently concrete to be useful), and over-reliance on an over-elaborated problem context (e.g. using assumed and unverified characteristics of the underlying machine in debugging). Consideration of the optimum interactive relationship between advisor and user has suggested that as much of the processing as possible should be made explicit. We would also like to add that even with the most successful interactions, difficulties were traced to the limited capacity of human working memory, a participant often being forced to make the same deduction on several different occasions or to rehearse an argument to aid recall. A system for making explicit both concepts and arguments would help in this area. "Learning to learn" conceptualization and inferential skills is an important outcome of an advisory interaction. This appeared to be achieved best when the user was engaged in the solution of his own problem with the co-operation of the advisor. In education this would be classed as a type of guided discovery learning, with the advisor helping to identify solvable sub-problems, forcing the user to be explicit in describing and justifying his moves, supplying information or methods where required and monitoring progress. The general problem-solving strategy favoured by participants in the successful interactions involved the generation, and then critiquing, of explanations for some set of problem phenomena. Both generation and critiquing were conducted by both advisor and user, although one participant usually tended to lead more frequently. The focus of this activity was primarily upon individual results and units of explanation usually amounting to relations between two or three assertions.
The direction of problem-solving activity was thus primarily conducted bottom-up, with attention to the truth of individual assertions and their sequential validity. However, there were often occasions when this strategy would be radically changed, a whole set of errors or unexpected phenomena being asserted as arising from a single act; often this was itself asserted with confidence as following from some conceptual difficulty. This then appeared to trigger a radical switch in approach to a top-down search for confirming evidence (disconfirming evidence occasionally being ignored, or put aside to explain later). It was this emphasis on a conservative, bottom-up analysis of the problem that often made it difficult to identify the structure in conversations from study of the utterances alone. The actual selection of items of program or explanation to analyse was governed by the participants' models of the subject domain. The setting of work goals was thus achieved top-down, while the drift of problem-solving activity was primarily bottom-up. The process of goal selection, particularly global considerations, sometimes only became clear to the researcher during debriefing, although it was understood by the participants. Moreover, given that both participants had different levels of knowledge concerning the two subject domains, they engaged in a great deal of mutual justification of individual choices of work area, which itself generated problem-solving activity which was difficult to distinguish from the primary activity of solving the user's problem. Finally, there was the additional complication that any interaction involved both the diagnosis of the cause of the current problem and the generation of principles for correction. These two objectives were usually intermixed, particularly at the stage when the participants were near to identifying the problem and were looking ahead to correction.
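The backtracking misconception noted earlier (a user treating variable instantiation as permanent) can be made concrete in code. The sketch below is our own illustration, not part of any system described in this paper; the predicate and constant names are invented. It shows, in Python, how a Prolog-style solver records each variable binding on a trail and undoes it before trying the next alternative:

```python
# Illustrative sketch (not the authors' system): a Prolog-style matcher in
# which variable bindings are recorded on a trail and undone on backtracking.
# Variables are strings beginning with '?'; all names are invented examples.

def unify(a, b, bindings):
    """Try to unify two terms; return the list of newly bound variables."""
    a = bindings.get(a, a)
    b = bindings.get(b, b)
    if a == b:
        return []                      # nothing new was bound
    if isinstance(a, str) and a.startswith('?'):
        bindings[a] = b
        return [a]                     # remember what to undo later
    if isinstance(b, str) and b.startswith('?'):
        bindings[b] = a
        return [b]
    return None                        # clash: unification fails

def solve(goal, facts, bindings):
    """Yield one binding set per matching fact, undoing bindings in between."""
    pred, args = goal
    for fpred, fargs in facts:
        if fpred != pred or len(fargs) != len(args):
            continue
        trail, ok = [], True
        for x, y in zip(args, fargs):
            new = unify(x, y, bindings)
            if new is None:
                ok = False
                break
            trail.extend(new)
        if ok:
            yield dict(bindings)       # snapshot of the current solution
        for var in trail:              # backtrack: instantiation is NOT permanent
            del bindings[var]

facts = [('parent', ('tom', 'bob')), ('parent', ('tom', 'liz'))]
solutions = list(solve(('parent', ('tom', '?X')), facts, {}))
print(solutions)   # ?X is re-bound for each matching fact
```

The point of the example is the final loop in `solve`: the binding of `?X` made for the first fact is deleted before the second fact is tried, which is precisely the behaviour the user in the study failed to conceptualize.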


Following the findings summarized above, it was proposed that our guidance system should support rather than direct problem-solving. This support should mainly be in the form of assistance in building and evaluating concepts in the problem area. The model of support should be that of guided discovery, control being invested equally in the user and the system. The program may be seen as providing a set of tuned resources, some of them passive (e.g. a concept specification language), and some of them active (e.g. a guidance system for assisting the user with problem-solving strategy), to promote conceptual development and application.

3. Theoretical foundations

A clear and fundamental feature of the guidance advisory interactions described above is the dynamic, constructive nature of the information processing undertaken by the user. Within British psychology there has been a long tradition of viewing human cognition in this way, the seminal work being undertaken by Bartlett concerning the nature of memory (Bartlett, 1932). In contrast to the dominant Behaviourist psychological philosophy of the period, Bartlett argued that the results of work on memory for complex, meaningful material are best explained in terms of active information processing. Learning is seen as a process of building models, or "schemata", of the subject material, which are then used for its reconstruction on recall. The "schemata" may be interpreted as procedures which act to control remembering, and as a focus for the "attachment" of supporting procedures where the main procedures fail. This approach is present in the work of many others on both sides of the Atlantic and in a range of different application areas, including psychiatry (Kelly, 1955), education (Bruner, 1957; Piaget, 1972) and visual perception (Gregory, 1970). However, while theoretically promising, much of the work failed to develop into a general model of cognition that was sufficiently powerful for application purposes. The deficiency usually lay in the absence of both the conceptual and methodological tools to represent mental procedures and their transformation. There has been much recent progress in this direction, principally centred on A.I. [for example, Schank's work on the use of Scripts (Schank & Abelson, 1977) and later MOPs (Schank, 1979) in text understanding]. However, these models are still not adequate for application purposes, lacking sufficient predictive power. The objective of much A.I.
now is to define sufficient structure and content to achieve given processing goals, rather than the structure and content necessary for their attainment. They thus lack the appropriate formal foundation for the construction of powerful theories [the work of Lenat (Lenat & Harris, 1978) is one exception]; there is no systematic investigation into the necessary rules of macrostructure such as we find in linguistics (for example, van Dijk, 1980). Out of the many disciplines concerned with information processing that have developed this century, cybernetics is notable for being concerned with questions of necessity. Its fundamental concern is to establish the minimal theoretical structure required for some given class of processing. Within cybernetics, the most comprehensive application of this approach to issues of learning and understanding is found within the work of Pask (1975, 1976), which aims to produce a general theory of cognition rather than solely to explain specific cognitive episodes. This aspect of Pask's work has proved attractive for establishing a general framework within which the specific requirements of a particular problem-solving type and subject area may be modelled.


Pask (1975) proposes that the minimum structure for learning requires at least two processors: one to interact directly with the world, and one to modify the procedures that undertake such interaction. A diagrammatic representation of this structure is given in Fig. 1, which follows that presented by Pask (1975). Each of the boxes represents some domain, the arrow across a box indicating that the procedures within it do something to the domain. The symbol "*" and the arrow deriving from it indicate a feedback path between domains. The domain interacting directly with the world is labelled "Level 0" and contains "p0" procedures; the procedure-modifying domain is labelled "Level 1" and contains meta-procedures "p1". This structure can be seen as representing a program like HACKER (Sussman, 1975) which diagnoses faults in its procedures and corrects them accordingly.

[Figure: two boxes labelled "Level 1" and "Level 0", linked by feedback paths.]

FIG. 1. A representation of the minimum structure for learning (after Pask, 1975).
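The two-level structure of Fig. 1 can be illustrated with a minimal sketch of our own (the task and all names are invented for the example): a Level 0 procedure acts on a simple numeric "world", while a Level 1 meta-procedure revises the Level 0 procedure whenever feedback shows it is about to fail, much as HACKER corrects its own faulty procedures.

```python
# Illustrative sketch of Pask's two-level structure, not an implementation of
# MINDPAD: Level 0 procedures act on the world; a Level 1 meta-procedure
# modifies them when feedback reports impending failure. Names are invented.

def make_level0(step):
    """A p0 procedure: move a position towards a target in fixed increments."""
    def p0(position, target):
        return position + step if position < target else position
    return p0

def level1_revise(step, position, target):
    """A p1 meta-procedure: shrink the step when p0 would overshoot."""
    if position + step > target:
        return max(1, target - position)   # revised step just reaches the target
    return step

step, position, target = 4, 0, 10
while position != target:
    step = level1_revise(step, position, target)    # Level 1 modifies Level 0
    position = make_level0(step)(position, target)  # Level 0 acts on the world
print(position)
```

Without the Level 1 revision the fixed-step procedure would oscillate around the target forever; learning here happens only within the limits of what Level 1 can modify, which is exactly the limitation discussed next.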

Examination of Fig. 1 will reveal that the structure has a limitation not possessed by humans: learning will only take place within the limits of the procedures stored at Level 1. However, humans have the ability to learn new ways of learning--learning strategies. To do this, the system will need to be able to re-build, or re-interpret, itself in some novel manner. The minimum structure proposed by Pask (1975) for doing this is given in Fig. 2. In order to re-build or re-interpret its procedures, the system must have some account of the former structures. There are thus now three units at each level (conceptually, at least): the newly built procedures (p'); a description of the old procedures (D); the procedure-building procedures (p"). Given the two conceptual levels of operation, Pask now proposes the need for two complete but communicating systems. Moreover, these may be seen as residing either within one physical machine (or organism), or distributed between two machines (or organisms). The relationship between the two systems is termed a "Conversation", hence the choice of the term "Conversation Theory" for the whole approach. Using the Conversational structure given in Fig. 2, Pask goes on to explore a variety of cognitive processes such as learning, problem-solving and thinking. Central to Pask's exposition are the notions of a "concept", "memory", "understanding" and "explanation", which are re-defined within the notion of a Conversation. A Conversation is


[Figure: two communicating two-level systems, each containing procedures (p0'), descriptions and procedure-building procedures (p0''), coupled to the WORLD.]

FIG. 2. A Conversational structure (after Pask, 1975).

seen as an interaction between two processors A and B concerning some "topic". The "topic" is defined as a set of connected assertions, each containing an interpreted formal relationship (e.g. "next", "adjacent", "sum"), the interpretation taking place within some context (e.g. statistics, computing, poetry). Examples of "topics" would include physical laws, social theories and musical form. Topics are seen as related by the two rules of "subsumption" and "analogy" to form subject domains, which Pask represents as a connected graph of topics organized into topic hierarchies (following the rule of subsumption). In this manner, Pask is in accord with thinking on the structure of a thesaurus, which favours a lattice (Parker-Rhodes, 1978). Pask terms such graphs "entailment nets". The cognitive terms listed above (e.g. concept, memory, understanding, explanation) may be described as operations upon the topics in an entailment structure. A concept, for example, may be seen as a procedure which brings about (or "satisfies") the relation(s) which is (are) a topic; a memory for a topic may be seen as a procedure which reconstructs or reproduces a concept. The notion of understanding is more complex, it being seen as a product of meta-procedures (at Level 1) which build concepts (at Level 0), and as evidenced by the ability of an organism (or machine) to provide an explanation of "Why" building proceeds in a particular manner. Evidence for the possession of a concept is simply the "How" description of the activity necessary to bring about the topic relation(s)--at this level, explicit motivation for the steps in the procedure would not be required. It should be noted that all this activity takes place in a co-operative Conversational environment.
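A fragment of an entailment net under the subsumption rule can be sketched as follows. This is our own illustration with invented topic names, not an example drawn from Pask; it shows topics linked by subsumption, with the topics entailed by a given topic recovered by walking the hierarchy:

```python
# Illustrative sketch of an entailment-net fragment: topics related by the
# single rule of "subsumption". Topic names are invented for the example.

subsumes = {
    'program-execution': ['unification', 'backtracking'],
    'unification':       ['variable-binding'],
    'backtracking':      ['variable-binding', 'choice-points'],
}

def entailed(topic):
    """All topics falling below `topic` under subsumption (depth-first)."""
    seen = []
    for sub in subsumes.get(topic, []):
        if sub not in seen:
            seen.append(sub)
        for deeper in entailed(sub):
            if deeper not in seen:
                seen.append(deeper)
    return seen

print(entailed('program-execution'))
```

On this reading, a concept for 'program-execution' would be a procedure that satisfies the relation over the entailed topics, and a memory would be a procedure for reconstructing that concept.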

4. "MINDPAD"--a prototype guidance system

4.1. SYSTEM OVERVIEW

The view of Guidance we have adopted for MINDPAD derives both from the results of the empirical work done in the Advisory Service Study and from the theoretical work of Conversation Theory. Together they are required to provide a conceptual framework for the design of a system which will support a rich variety of human-computer exchange. This section gives an overview of the basic facilities required of such a system.


Interactions with the Guidance program MINDPAD are structured in the context of Conversation Theory to enable the user to develop problem-solving skills within some subject domain by achieving an "understanding" of topics within the domain. The problem-solving skills are thus developed as a direct product of achieving "understanding". This is congruent with the view developed in sections 1 and 2 that the objective of a guidance session [as distinct from a problem-solving session, or indeed a tutorial (Coombs & Hughes, 1982)] is to give the user a measure of creative independence within the subject area. The session should aim to help the user learn at a meta-conceptual level as well as at the base conceptual level. He should, for example, leave the session with the ability to describe the principal relations between topics, to give an account of appropriate problem-solving heuristics, and to explain their relevance. A range of different, but equally valid, guidance procedures may be defined within a Conversational context, and it is intended that such variations will be a subject of our research. However, for the initial prototype system, it seems wise to keep as close as possible to the learning environments employed by Pask to test his theory (Pask, 1976). These may be described broadly as implementations of "guided discovery learning" and have proved to be very effective within an instructional setting. The system aims to develop the user's understanding of the subject area as a result of his working at a concrete problem. The problem acts as a focus for his efforts to identify and explore concepts and topics within the domain (represented in a frame-like database), his understanding being developed by the discipline of application and by the requirement that all solutions be "explained".
Following Pask (1976), explanation is employed as the principal mechanism for generating the descriptions which enable intelligent interaction to take place both between user and system and within the user's own head.

The task area selected for the prototype system is that of debugging simple database PROLOG programs. PROLOG provides a suitable domain for two reasons. First, novices often have difficulty in establishing a stable mental representation of a program because the language has both a procedural and a declarative semantics. This frequently leads to contradictions between application and processing descriptions of programs, the former referencing the programmer's knowledge of his application domain and the latter referencing the details of PROLOG execution. Reasoning about a particular program thus closely resembles the interaction between co-operating experts discussed in the previous sections. Secondly, the recursive and backtracking features of PROLOG can make flow of control difficult to follow, and so justify the need for computer guidance. Both the PROLOG and guidance systems are being programmed in FRANZ LISP.

The global structure of the prototype system is given in Fig. 3. It may be seen that the user has access to a number of different resources. Two of these may be described as passive and include: (a) the STRUCTURE GRAPH, which functions as the knowledge-base for the system and incorporates a PROLOG interpreter; (b) the WORKPAD, which has the primary function of providing the resources to enable the user to develop problem solutions and to construct explanations. However, it also serves to interface the user to both the STRUCTURE GRAPH and to an intelligent, machine-based ADVISOR.


STRUCTURE GRAPH                ADVISOR                WORKPAD
  - domain knowledge
  - problem-solving knowledge
  - acquisition knowledge
  - PROLOG interpreter

FIG. 3. MINDPAD overview.

The ADVISOR is an active resource and has three functions: evaluating user explanations, identifying failures in understanding and guiding users on problem-solving strategy. To permit such activity, the ADVISOR employs problem-solving knowledge stored within the STRUCTURE GRAPH, and has full access to the WORKPAD. The problem-solving knowledge concerns methods for identifying explanatory errors and ambiguities, making decisions about user understanding and advising on corrective action. In addition, the ADVISOR runs the current Conversational model and coordinates interaction with the various learning resources. Within the framework of the Conversation diagram given in Fig. 2, the user may be seen as occupying system B and the ADVISOR as occupying system A. The ADVISOR, as mentioned above, has the role of making available resources, critiquing explanations, and suggesting tasks. Advice may be given gratuitously, or may be requested by the user, and the user may accept or reject it as he wishes.

4.2. THE "STRUCTURE GRAPH"

The STRUCTURE GRAPH is the main source of factual knowledge for the system, including descriptions of PROLOG structures, execution processes and comprehension difficulties. Information is organized into a network of frame-like schemata termed "topics". These are related using the single rule of "subsumption". To this extent the STRUCTURE GRAPH is a simplification of an "entailment net" as defined by Pask (1975), any given topic being interpreted as a complex relation resulting from some operation being applied to the topics falling below it. Knowledge about PROLOG, for example, may be represented under four groups of related topics: syntax, fundamental structural units, program structure and program execution. A sample PROLOG "sub-graph" is given in Fig. 4.
It should be noted that the sub-graphs are not fully independent but are related to each other at nodes marked with angle brackets. Furthermore, the actual topics selected and their organization are not in any sense intended to be necessary: some other author might have provided an alternative decomposition of the subject matter, or some alternative organization for the current decomposition. However, it is intended that the view presented by the


EXECUTION: Execution; Goal satisfaction; Backtracking; Successful satisfaction; Matching; Instantiation; Goal; Subgoal; <question>

FIG. 4. Topic structure for PROLOG execution.

STRUCTURE GRAPH should be both coherent and sufficient for the problem-solving tasks to be undertaken using the system.

The STRUCTURE GRAPH includes several different classes of information:
(a) PROLOG facts, describing the structure of PROLOG objects and the processes of PROLOG execution;
(b) problem-solving facts, describing the relationships between faults in a user's explanations of program execution and comprehension errors;
(c) facts about knowledge acquisition, that is, descriptions of methods for developing an understanding of PROLOG concepts.

Individual topics are thought of as frame-like entities, their structure being expressed in terms of classes of slot and meta-information on the values that a slot may adopt. Typical slot types include those for the description of substantive domain facts (STRUCTURE, EVENT), those for indexing the topic structure (SUBSUME), and those for recording details of the current interaction. In the case of slots that describe substantive information about the domain, the meta-information is in a form which both allows it to be output as coherent text and allows it to be instantiated with instances from the current explanation and problem program. A sample topic description for the PROLOG topic MATCH is given in Fig. 5. It will be noted that the actual descriptions of MATCH are given in PROLOG code. This is because a feature of the system is that descriptions should be executable as programs. It has been found that this provides a powerful procedure for constructing and evaluating hypotheses concerning comprehension errors.

Finally, the STRUCTURE GRAPH provides both instructional and reference information to the user. The prototype system will provide a menu of question types that may be addressed to the knowledge-base, concerning both the relationships between topics and the substantive information within topics.
Queries at the topic level include:
(a) what topics exist in the domain?
(b) what topics do I need to understand so I can understand topic X?
(c) what topics subsume topic X?
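These topic-level queries amount to simple traversals of the subsumption relation. The following Python sketch is illustrative only (the actual system is written in FRANZ LISP); the topic names follow Fig. 4, but the data layout and function names are our own:

```python
# Hypothetical sketch of topic-level queries over a subsumption graph.
# Topic names follow Fig. 4; the API is illustrative, not MINDPAD's.
SUBSUMES = {
    "execution": ["goal-satisfaction", "backtracking"],
    "goal-satisfaction": ["matching"],
    "matching": ["instantiation", "head", "goal"],
    "goal": ["subgoal"],
}

def all_topics():
    """Query (a): what topics exist in the domain?"""
    topics = set(SUBSUMES)
    for children in SUBSUMES.values():
        topics.update(children)
    return topics

def prerequisites(topic):
    """Query (b): topics subsumed by `topic`, i.e. those one must
    understand before understanding `topic` (transitive closure)."""
    found = []
    for child in SUBSUMES.get(topic, []):
        found.append(child)
        found.extend(prerequisites(child))
    return found

def subsumers(topic):
    """Query (c): what topics subsume `topic`, directly or indirectly?"""
    return [t for t in SUBSUMES if topic in prerequisites(t)]
```

The single subsumption rule keeps all three queries answerable from one table, which is one reason the STRUCTURE GRAPH can be so much simpler than a full entailment net.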


TOPIC: MATCH
SUBSUME: INSTANTIATION, HEAD, GOAL
EVENT: match(head([]),goal([])).
       match(head([H|T1]),goal([H|T2])):-
         mrule(const(H,H);var(I1,I1);var(inst(H,H))),
         match(head(T1),goal(T2)).
EXAMPLE: succeed(match(head([a,b,c]),goal([a,b,c]))).
         succeed(match(head([a,X,c]),goal([a,Y,c]))).
         succeed(match(head([a,b,X]),goal([Y,b,c]))).
         fail(match(head([a,c,d]),goal([a,b,d]))).
ACCESS: < >

FIG. 5. Details of the topic "MATCH".

Queries at the substantive level include:
(a) tell me about topic X?
(b) is information item X correct?
(c) tell me about information item X?
(d) is information item X correct?
(e) what would happen if X (or not X)?
(f) could X cause (result in) Y?
(g) how is X related to Y?

Procedures for implementing these queries exploit the constraints present in the subsumptive relations between topics and the relations permitted within the explanation grammar. They also draw heavily upon the techniques developed by Lehnert (1978).

4.3. THE "WORKPAD" AND THE EXPLANATION CYCLE

Consultation sessions develop through a cycle of problem specification, solution and explanation which may be mapped onto the Conversational model given in Fig. 2. The complete cycle of revision in program development is given in Fig. 6, although only the part specified by the broken line will be encompassed by the prototype. This focusses upon obtaining a correct explanation of the execution of a program, including the execution of a faulted program. All of this problem-solving activity is mediated through WORKPAD. The program must therefore support a range of different types of interaction; however, most of them will be focussed upon the processes of building and critiquing explanations. WORKPAD functions include the following.
(a) Support of user activity: construction of solution programs; running of solution programs; construction of explanations; access to information in the STRUCTURE GRAPH; access to the explanation grammar and the dictionary of PROLOG terms.


Problem specification -> Generate solution -> Describe solution (code, expected result, explanation) -> PROLOG execution -> Actual result (compare with expected result) -> Explain

FIG. 6. The consultation cycle.

(b) Support of ADVISOR activity: critiquing of explanations; presentation of critiques; presentation of strategy advice.

At this point it would simplify the exposition to give a summary of the MINDPAD problem-solving cycle. This is intended to provide a context for understanding the operation of WORKPAD, and later the ADVISOR, and so will avoid detailed accounts of these facilities, which will be discussed later. Furthermore, the cycle will be described in a form which emphasizes user control and the employment of passive resources for the analysis of user explanations. Although it is anticipated that the system will eventually be able to provide a range of different levels of guidance intervention, the details of appropriate ADVISOR support are as yet not sufficiently clear.

A consultation starts with the user presenting his problem to the system. Throughout the interaction, information concerning the current solution is stored in WORKPAD within a pair of declaration frames: one for the PROLOG code itself and one for an explanation of the code in terms of its execution. The explanation will also give the expected output of the code. The declaration frames are structured to accommodate a particular model of PROLOG (outlined below), so helping to shape the user's concept development and problem-solving.

Following the setting of the initial declaration frames, the user may either run the program that represents his proposed solution, or request a critique of his explanation. If he chooses to run the program, the actual results will be recorded in WORKPAD below the declaration of the expected results. A comparison is then made and any deviation interpreted as indicating a failed solution. In the event of failure, the user is required to work at understanding the results by analysing the explanation and amending it in order to justify them. This may be done with various levels of system support. These include:


the simple retrieval of PROLOG descriptions from the STRUCTURE GRAPH; a request for a critique of the current explanation to be given by the ADVISOR; a consultation with the ADVISOR on problem-solving strategy.

At any point the user may decide that he understands the problem sufficiently to attempt another solution. In this case, he may abandon his effort to justify the previous results and work at composing the new solution, along with its related explanation. This may then be run and, if it fails, the cycle is repeated. Before changing activity, the user has the option of saving the program and explanation for later inspection.

WORKPAD implements a "two level" approach to understanding PROLOG programs proposed by Byrd (1980) and used in the design of debugging tools for the DEC-10 PROLOG system (Clocksin & Mellish, 1981). PROLOG programs consist of sets of clauses, each of them beginning with a particular relation name (a predicate). There are two types of clause: facts, which assert a relation with regard to one or more objects [e.g. "likes(mary,wine)." states "Mary likes wine"]; and rules, which assert that some set of relations is true, given the truth of some other set of relations [e.g. "likes(john,X):-human(X),likes(X,wine)." states "John likes people who like wine"]. Given these two clauses, and an additional one stating that "Mary is human" (see Fig. 7), it is possible to use the PROLOG interpreter to deduce that "John likes Mary" by getting it to instantiate the variable "X" in the rule with the value "mary".

likes(mary,wine).
likes(john,X):-human(X),likes(X,wine).
human(mary).

FIG. 7. A simple PROLOG program.

PROLOG undertakes the deduction by taking the question "Who does John like?" in the form "?-likes(john,X1)." and seeking to match it to a clause within the program. We can see that it fails to match the first "likes" clause but will match the second, if it can find a single constant to instantiate both "X" and "X1". The second "likes" clause is a rule, and rules are used to break the solution down into sub-questions: in this instance "human(X)" and "likes(X,wine).". The question "human(X)" succeeds with the instantiation of "X" by "mary", the second question now becoming "likes(mary,wine).". This matches the first "likes" clause, and so the original question is answered with the instantiation of the variable "X" (and so "X1") to "mary", it so being deduced that "John likes Mary".

From the above description it will be clear that the major problem in understanding PROLOG programs lies not in evaluating the correctness of individual structures or execution steps, but in assessing the long-term implications of a structure or step: it is understanding the teleology of the program that is difficult. Although this is true of many programming languages, it is an especially serious problem in PROLOG. This appears to be for three reasons: first, the language has little syntax to guide the user's understanding of flow of control; secondly, the relational structure of clauses encourages an application orientation while programming, which can be in conflict with understanding at the execution level; thirdly, the PROLOG interpreter uses backtracking to seek alternative matches when a match fails, and backtracking has been proven to make programs particularly opaque.
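The matching and instantiation steps just traced can be made concrete with a small sketch. This is our own toy illustration, not the MINDPAD interpreter (which is written in FRANZ LISP): it has no clause renaming and no backtracking, just enough machinery for this one deduction.

```python
# Toy illustration of PROLOG matching and instantiation for the
# "John likes Mary" deduction; the data layout and names are ours.
FACTS = [("likes", "mary", "wine"), ("human", "mary")]
# likes(john,X) :- human(X), likes(X,wine).
RULE_HEAD = ("likes", "john", "X")
RULE_BODY = [("human", "X"), ("likes", "X", "wine")]

def is_var(term):
    return term[0].isupper()          # PROLOG variables are capitalized

def deref(term, bindings):
    """Follow a chain of bindings to the final value."""
    while term in bindings:
        term = bindings[term]
    return term

def match(goal, clause, bindings):
    """Match a goal against a clause, extending the bindings,
    or return None on failure (the NMATCH case)."""
    if goal[0] != clause[0] or len(goal) != len(clause):
        return None
    b = dict(bindings)
    for g, c in zip(goal[1:], clause[1:]):
        g, c = deref(g, b), deref(c, b)
        if is_var(c):
            b[c] = g                   # instantiate the clause variable
        elif is_var(g):
            b[g] = c                   # instantiate the goal variable
        elif g != c:
            return None                # two different constants
    return b

def solve(goal, bindings=None):
    """Try the facts, then the single rule and its subgoals."""
    bindings = bindings if bindings is not None else {}
    for fact in FACTS:
        b = match(goal, fact, bindings)
        if b is not None:
            return b                   # goal satisfied by a fact
    b = match(goal, RULE_HEAD, bindings)
    if b is not None:
        for subgoal in RULE_BODY:      # solve the rule's subgoals
            b = solve(subgoal, b)
            if b is None:
                return None
        return b
    return None

bindings = solve(("likes", "john", "X1"))
```

Running the query reproduces the trace in the text: the match with "likes(mary,wine)" fails, the rule head binds "X" to "X1", the subgoal "human(X)" instantiates "X1" to "mary", and the final subgoal succeeds against the first fact.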


At the top level of description, all clauses starting with the same relation name (be they facts or rules) may be regarded as a procedure. This reduces any account of processing given in an explanation to the order of procedure calls and the entry and exit conditions existing before and after such calls. Given that backtracking is a feature of PROLOG processing, there are four possible parts to a procedure: the initial CALL; a successful return, EXIT; an unsuccessful return, FAIL; and a backtrack into a previously completed procedure, REDO (see Fig. 8).

CALL =>                                    <= FAIL
        likes(mary,wine).
        likes(john,X):-human(X),likes(X,wine).
EXIT =>                                    <= REDO

FIG. 8. A top-level view of PROLOG execution.
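The four-part procedure view can be sketched as a wrapper that logs the port crossed on each invocation. This is an illustrative Python sketch under our own names, not MINDPAD code; since the sketch does not backtrack, only CALL, EXIT and FAIL appear, and the REDO port never fires.

```python
# Illustrative trace of the procedure-level ports (CALL/EXIT/FAIL only;
# this sketch has no backtracking, so REDO never occurs).
trace = []

def traced(name, procedure):
    """Wrap a procedure so that each invocation logs its ports."""
    def wrapper(*args):
        trace.append(("CALL", name, args))
        result = procedure(*args)
        trace.append(("EXIT" if result else "FAIL", name, args))
        return result
    return wrapper

# A stand-in "procedure": all the clauses for the relation `likes`.
def likes(who, what):
    return (who, what) in {("mary", "wine"), ("john", "mary")}

likes = traced("likes", likes)
likes("mary", "wine")    # logs CALL then EXIT
likes("wine", "mary")    # logs CALL then FAIL
```

An explanation at the top level of description is exactly such a sequence of port crossings, which is what makes the completeness of a user's explanation mechanically checkable.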

At the lower level PROLOG is described in terms of the details of processing. This includes a range of operations, the most central of which are the matching of questions (termed "goals") to clauses, the creation of sub-questions ("subgoals"), the instantiation of variables and the process of backtracking.

WORKPAD provides a grammar and vocabulary (listed in a "dictionary" maintained as part of the STRUCTURE GRAPH) for the user to employ in building explanations. The same grammar may be used for explanations at both levels of description. Moreover, levels may be mixed. This is necessary when explaining the execution of a large program, where a detailed exposition will only be required to aid the understanding of areas critical to the solution of a particular problem.

As a result of a small amount of experimentation, it has been found that descriptions of PROLOG execution are easiest to write in imperative form. Given that the majority of explanatory statements will concern execution, the explanation grammar requires that both descriptions of structures and descriptions of events state the relation first, as is standard in predicate calculus. The dictionary may be accessed by the user at any stage of an interaction, and at present contains 47 terms. The grammar itself is very simple and is similar to that described by Sleeman & Hartley (1979). It is given in full in BNF form in Fig. 9. The grammar states that explanations consist of sets of connected arguments. These arguments are composed of state or event assertions, connected by one of a closed set of relational terms. States and events are also composed from a closed set of terms, all such terms being listed in the system dictionary.

An example of an explanation using the grammar is given in Fig. 10. This relates to the simple program presented above, concerned with answering the question "Who does John like?". WORKPAD uses indenting to emphasize the structure of arguments.
It is explanations of the form illustrated in Fig. 10 which are presented to the A D V I S O R for criticism. The process of generating such critiques will be outlined below. 4.4. THE "ADVISOR" The A D V I S O R ' s primary role is to guide the user toward a better understanding of the subject domain around the problem currently being addressed via the system. The


FIG. 9. WORKPAD explanation grammar.

solving of the problem itself is seen as a side-effect of reaching such understanding. All activity related to the provision of guidance is focussed upon the user's explanations in WORKPAD of PROLOG execution, and is directed towards identifying both misconceptions and gaps in the user's knowledge. The ADVISOR may be called by the user at any point during the process of explanation building, and is called automatically at the end of a program run.

likes1:likes(mary,wine).
likes2:likes(john,X):-human(X),likes(X,wine).
human1:human(mary).

QUESTION: ?-likes(john,X1).
RESULT: PROVED likes(john,mary).

CALL - PROC:likes, GOAL:likes(john,X1).
  MATCH - GOAL:likes(john,X1)., likes1:likes(mary,wine). {RESULT} NMATCH
  MATCH - GOAL:likes(john,X1)., likes2:likes(john,X). {RESULT}
    CALL - PROC:human, SUBGOAL:human(X).
      MATCH - SUBGOAL:human(X)., human1:human(mary). {RESULT} SUCCEED {AND} INSTANT - X:mary
    EXIT - PROC:human
    CALL - PROC:likes, SUBGOAL:likes(mary,wine).
      MATCH - SUBGOAL:likes(mary,wine)., likes1:likes(mary,wine). {RESULT} SUCCEED
    EXIT - PROC:likes
  EXIT - PROC:likes

FIG. 10. A sample explanation.
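Explanation statements of the kind shown in Fig. 10 have a regular line format that can be split into structkured records for checking. The sketch below is ours, not the WORKPAD parser: the record layout and function name are invented, and the full grammar's {RESULT} and {AND} connectives are omitted for brevity.

```python
import re

# Hypothetical parser for explanation lines of the form seen in Fig. 10,
# e.g. "CALL - PROC:likes, GOAL:likes(john,X1)."; the layout is ours.
LINE = re.compile(r"^\s*([A-Z]+)\s*-\s*(.*)$")

def parse_statement(line):
    """Split an explanation line into its event term and its arguments."""
    m = LINE.match(line)
    if not m:
        return None
    event, rest = m.group(1), m.group(2)
    args = {}
    for part in rest.rstrip(".").split(", "):
        if ":" in part:
            key, value = part.split(":", 1)
            args[key] = value
    return event, args

event, args = parse_statement("CALL - PROC:likes, GOAL:likes(john,X1).")
```

Because every statement reduces to an event term plus labelled arguments, each assertion can be tested individually against the problem program, which is exactly the granularity the ADVISOR's first pass requires.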


As stated above, the ADVISOR's objective is to work on user explanations in order to identify understanding difficulties. Once identified, these are: (a) communicated to the user; (b) their implications for the current program and explanation are outlined; (c) the appropriate topics containing corrective information are brought to the user's attention; and (d) the user is invited to make a correction and to re-run the program. To assist in tracing user difficulties, the ADVISOR may also elicit explanation activity both with reference to the current problem declaration frames and in addition to them.

The principles behind the operation of the ADVISOR are derived from the guidance techniques employed by the most successful human computing advisors observed during the Computing Advisory Service study reported in section 2. It may be recalled that they used what we described as a conservative approach to analysis. Instead of starting their study of user programs and explanations by immediately seeking global patterns of error, they first carefully identified individual points of failure. Only after collecting a number of these did they attempt a unification by mapping them onto organized structures. It may be reasoned that this strategy was adopted because, although the correct identification of some organized pattern of faults gives great problem-solving power, it can be very misleading if such patterns are incorrectly applied. This was especially likely within a programming domain because a given fault can be the product of many different understanding problems.

The MINDPAD ADVISOR adopts a similar strategy. Stored in the STRUCTURE GRAPH are a number of error patterns termed "fault syndromes".
These give a description of incorrect patterns of explanation, written in the same explanation grammar as that employed by the user, the whole pattern of faults being itself explained in terms of some particular misunderstanding or lack of knowledge. Also stored in the STRUCTURE GRAPH are descriptions of actions to be taken upon identifying such syndromes.

When critiquing a user explanation, the ADVISOR first proceeds cautiously, testing the validity of each assertion individually with reference to the problem program and its immediate explanatory context. This is done employing the descriptions of PROLOG structures and execution present in the STRUCTURE GRAPH. A number of cycles at this level of analysis may be undertaken. During each pass through the user's arguments, the ADVISOR adds comments to WORKPAD on the current state of the analysis, and promotes further explanation by the user to test a hypothesis or to gain additional evidence concerning the user's understanding problems.

At the end of each pass, the ADVISOR attempts to match the current evidence to one of the "fault syndromes" stored in the STRUCTURE GRAPH. If a syndrome is found with an adequate fit, its nature will be explained to the user and remedial action proposed. If a number of different syndromes are identified with a lower level of match, the ADVISOR will focus on seeking further evidence in order to select one of them with an appropriate level of certainty. If these evaluations fail, the system returns to the lower level of criticism.

Factual knowledge within MINDPAD is represented as a set of PROLOG clauses, as are both user- and system-generated explanations. It is thus possible to conduct evaluations by transferring clauses between these objects and allowing the ADVISOR


to apply the same query types to the resulting knowledge as the user may apply to the topic information. The analysis of an explanation of a query addressed to an incorrect version of the program, given in Fig. 11, will illustrate this process. The initial explanation and program are given below.

likes1:likes(john,mary).
likes2:likes(john,X):-human(X),likes(X,wine).
human1:human(mary).

QUESTION: ?-likes(wine,mary).
RESULT: PROVED yes
RUN: FAIL no

CALL - PROC:likes, GOAL:likes(wine,mary).
  MATCH - GOAL:likes(wine,mary)., likes2:likes(john,X). {RESULT} INSTANT - X:mary
    CALL - PROC:human, SUBGOAL:human(mary).
      MATCH - SUBGOAL:human(mary)., human1:human(mary).
    CALL - PROC:likes, SUBGOAL:likes(mary,wine).
  SUCCEED

FIG. 11. An example of an incorrect and incomplete explanation.

Given the above explanation, the ADVISOR makes its first scan through the text and notes that it is incomplete at the top procedural level: there are three procedure CALLs but no EXITs or FAILs. It thus begins the analysis by parsing for possible procedure exits and marks these in the explanation. To achieve such parsing, the ADVISOR must make reference to constraints operating at the lower level of PROLOG description. With the present example, this provides the additional information that an incorrect result of the MATCH operation had been declared on the first call. It is also noted that the user has failed to indicate that PROLOG would first attempt a match to likes1, which would fail. However, this would not necessarily indicate that the user did not understand the matching process: he could simply have decided to abbreviate the explanation. As a result of this analysis, the following annotations are written into the explanation text, denoted by angle brackets (Fig. 12).

Having made a preliminary analysis, the ADVISOR relates the current evidence to fault syndrome descriptions in the STRUCTURE GRAPH, and finds that none of them match adequately. It thus continues by attempting to identify some important source of user error from the commented explanation which it can use to plan further action. In the present instance, the evidence points to a lack of understanding concerning the topic MATCH. The user is therefore directed to read this topic and to amend the explanation from what is learned (see Fig. 13).

The ADVISOR now works on this amended explanation. It is again incorrect, although there is additional evidence concerning the understanding difficulty. First, the user has now correctly included the initial failed match with likes1. However, given a complete understanding of the topic, there is a contradiction with the result of the match with likes2.
Secondly, the final match is not declared as being between a subgoal


CALL - PROC:likes, GOAL:likes(wine,mary).
  MATCH - GOAL:likes(wine,mary)., likes2:likes(john,X). {RESULT} INSTANT - X:mary
  <<<MATCH would not SUCCEED>>>
  <<<Likely FAIL; suspect user considers EXIT>>>
    CALL - PROC:human, SUBGOAL:human(mary).
      MATCH - SUBGOAL:human(mary)., human1:human(mary).
      <<<Missing indication of whether MATCH SUCCEEDs; suspect user considers SUCCEEDs>>>
    CALL - PROC:likes, SUBGOAL:likes(mary,wine).
  SUCCEED

FIG. 12. ADVISOR's preliminary analysis.
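The first step of this preliminary analysis, checking that the explanation is complete at the top procedural level, amounts to verifying that every CALL is eventually balanced by an EXIT or a FAIL. A minimal sketch of such a check (the function name is ours, not MINDPAD's):

```python
# Illustrative completeness check over the procedure-level events of an
# explanation: in a complete record every CALL is closed by an EXIT or FAIL.
def unclosed_calls(events):
    """Return the number of CALLs not balanced by an EXIT or FAIL."""
    open_calls = 0
    for event in events:
        if event == "CALL":
            open_calls += 1
        elif event in ("EXIT", "FAIL"):
            open_calls -= 1
    return open_calls

# The explanation of Fig. 11 declares three CALLs but no EXITs or FAILs:
incomplete = unclosed_calls(["CALL", "CALL", "CALL"])
# A complete record, such as Fig. 10, balances every CALL:
complete = unclosed_calls(["CALL", "CALL", "EXIT", "CALL", "EXIT", "EXIT"])
```

A non-zero count triggers the parse for possible procedure exits described above, before any attempt at syndrome matching.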

CALL - PROC:likes, GOAL:likes(wine,mary).
  MATCH - GOAL:likes(wine,mary)., likes1:likes(john,mary). NMATCH
  MATCH - GOAL:likes(wine,mary)., likes2:likes(john,X). INSTANT - X:mary
    CALL - PROC:human, SUBGOAL:human(mary).
      MATCH - GOAL:human(mary)., human1:human(mary). SUCCEED
    EXIT
    CALL - PROC:likes, SUBGOAL:likes(mary,wine).
      MATCH - SUBGOAL:likes(mary,wine)., GOAL:likes(wine,mary). SUCCEED
    EXIT - PROC:likes
  EXIT - PROC:likes

FIG. 13. User's amended explanation.

and a clause head, but between a subgoal and a goal. Thirdly, it is declared as being successful. Even if one of the elements were a clause head, the match would have failed because the constants in the arguments of the two predicate structures are reversed. This pattern of evidence is sufficiently close to one of the fault syndromes to warrant the direct testing of the user's understanding. The test concerns questioning why the initial match with likes1 failed, while that with likes2 succeeded. The reply is that the variable "X" needed to be instantiated with the value "mary". Asked why this was required, the user points out that it was needed so that the "X" of the final predicate structure could take the value "mary". In this case, the replies, when taken with the attempt to match a goal to a subgoal, indicated that the user did not understand the topics HEAD, GOAL and SUBGOAL, and therefore did not properly understand this aspect of MATCH. As a result of this, he had made the common error of attempting to work a program backwards from an assumed match, in this case with the final structure of likes2.


After a fault syndrome has been selected, the ADVISOR instructs the user to correct the difficulty identified. Following this, the user is asked to assess the validity of the conclusion, and to make appropriate changes to the explanation. If these are incorrect, the analysis process continues with the focus on detail once again. In the present instance, the user gave the explanation presented in Fig. 14, which records in detail the correct execution of the query.

CALL - PROC:likes, GOAL:likes(wine,mary).
  MATCH - GOAL:likes(wine,mary)., likes1:likes(john,mary). NMATCH
  MATCH - GOAL:likes(wine,mary)., likes2:likes(john,X). NMATCH
FAIL - PROC:likes

FIG. 14. A correct record of program execution.
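The syndrome-matching step running through this section, relating the evidence collected during low-level criticism to stored fault patterns and accepting a syndrome only at an adequate fit, can be sketched as a simple scoring match. The syndrome names, fault labels and threshold below are invented for illustration; they are not taken from the MINDPAD STRUCTURE GRAPH.

```python
# Illustrative fault-syndrome matching; syndrome names, fault labels and
# the acceptance threshold are invented, not MINDPAD's.
SYNDROMES = {
    "goal/subgoal confusion": {"match-goal-to-goal", "missing-head",
                               "backward-instantiation"},
    "missing-failure-model":  {"no-nmatch", "no-fail",
                               "call-exit-imbalance"},
}
ACCEPT = 0.6   # fit needed to report a syndrome outright

def best_syndrome(evidence):
    """Return (name, fit) for the syndrome best covering the evidence,
    where fit is the fraction of the syndrome's faults observed."""
    scored = []
    for name, faults in SYNDROMES.items():
        fit = len(faults & evidence) / len(faults)
        scored.append((fit, name))
    fit, name = max(scored)
    return name, fit

# Evidence resembling the example above: a goal matched to a goal, and a
# program worked backwards from an assumed match.
name, fit = best_syndrome({"match-goal-to-goal", "backward-instantiation"})
```

A fit above the threshold is reported to the user with remedial action; several weaker fits instead prompt the search for further evidence, as described in the text.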

5. Steps to implementation

We have been very sketchy in indicating implementation details, such as the precise nature of the critiquing heuristics to be used by the ADVISOR. These have been omitted intentionally from the present discussion because our interest is in introducing computer-based guidance systems as a general class of knowledge-based program. Individual implementations may differ in the actual facilities provided, the level of program control, the intelligence and the range of application.

With reference to the PROLOG guidance system, implementation decisions are based as far as possible on experiment. As mentioned in section 4, Pask has already implemented a number of tutorial systems within the bounds of Conversation Theory. The approach appears therefore to be viable, at least in its passive form. However, there are sufficient differences between a tutorial and a guidance context to make it wise to proceed cautiously. Instead of attempting to build a prototype system at the first attempt, a range of experiments will be run to help determine the design of such features as the explanation cycle and the explanation grammar. With reference to the latter, great attention will have to be given to user friendliness, because users may be unwilling to learn what amounts to a formal language in order to come to understand a second formal language.

Our own approach involves the simulation of the computer consultation with the experimenter in the role of the system, progressively incorporating the computer-based facilities as they are developed. Such facilities include WORKPAD for the building of explanations, and the STRUCTURE GRAPH as a database of information on PROLOG. It is also possible to use a prototype ADVISOR under the control of the experimenter to offer suggestions when the user declares that he is facing difficulty. Finally, it is expected that subject-specific elements, such as the choice of appropriate learning and analysis heuristics to be employed by the ADVISOR and the nature of the "fault syndromes", will emerge during experimentation. The guidance system is therefore being incrementally validated during construction, rather than waiting until it is a completed product, when it is difficult to make even moderate changes in design.


The MINDPAD design section of this paper was written while the first author was visiting the University of Pittsburgh during the Winter Semester, 1983. Thanks are due to Jim Reggia for a detailed critique of early drafts, and to Jim Greeno and Harry Pople for a number of formative discussions over this period. Their encouragement was much valued, as were their comments on the viability of the ADVISOR design. Thanks are also due to the Department of Computer Science, University of Pittsburgh for their hospitality, and to the Department of Computer Science, University of Strathclyde for giving the first author leave of absence to develop the MINDPAD design. The Advisory Service Study reported in the first part of the paper was supported by Social Science Research Council grant number HR 4421.

References

ALTY, J. L. & COOMBS, M. J. (1980). Face-to-face guidance of university computer users--I: a study of advisory services. International Journal of Man-Machine Studies, 12, 390-406.

ALTY, J. L. & COOMBS, M. J. (1981). Communicating with university computer users: a case study. In COOMBS, M. J. & ALTY, J. L., Eds, Computing Skills and the User Interface. London: Academic Press.

BARTLETT, F. C. (1932). Remembering: A Study in Experimental and Social Psychology. Cambridge: Cambridge University Press.

BRUNER, J. S. (1957). Going beyond the information given. In GRUBER, H., HAMMOND, K. R. & JESSOR, R., Eds, Contemporary Approaches to Cognition. Cambridge, Massachusetts: Harvard University Press.

BYRD, L. (1980). Understanding the control flow of PROLOG programs. D.A.I. Research Paper No. 151, University of Edinburgh.

CLANCEY, W. J. (1979). Tutoring rules for guiding a case method dialogue. International Journal of Man-Machine Studies, 11, 25-49.

CLANCEY, W. J. & LETSINGER, R. (1981). NEOMYCIN: reconfiguring a rule-based expert system for application to teaching. IJCAI-7, 829-836.

CLOCKSIN, W. F. & MELLISH, C. S. (1981). Programming in Prolog. Berlin: Springer-Verlag.

COOMBS, M. J. & ALTY, J. L. (1980). Face-to-face guidance of university computer users--II: characterising advisory interactions. International Journal of Man-Machine Studies, 12, 407-429.

COOMBS, M. J. & ALTY, J. L. (1982). An application of Sinclair's discourse analysis system to the study of computer guidance interactions. Journal of Human-Computer Interaction (submitted).

COOMBS, M. J. & HUGHES, S. (1982). Extending the range of expert systems: principles for the design of a consultant. Paper presented at Expert Systems 82, Brunel University.

DAVIS, R. (1983). Reasoning from first principles in electronic troubleshooting. International Journal of Man-Machine Studies, 19, 403-423.

DAVIS, R. & LENAT, D. (1982). Knowledge-based Systems in Artificial Intelligence. New York: Academic Press.
FEIGENBAUM, E. A., BUCHANAN, B. G. & LEDERBERG, J. (1971). On generality and problem-solving: a case study using the DENDRAL program. In MELTZER, B. & MICHIE, D., Eds, Machine Intelligence 6. Edinburgh: Edinburgh University Press.

GREGORY, R. L. (1970). On how so little information controls so much behaviour. Ergonomics, 13, 25-35.

KELLY, G. A. (1955). The Psychology of Personal Constructs, Vol. 1. New York: Norton.

KULIKOWSKI, C. A. & WEISS, S. M. (1982). Representation of expert knowledge for consultation: the CASNET and EXPERT projects. In SZOLOVITS, P., Ed., Artificial Intelligence in Medicine. Boulder, Colorado: Westview Press.

LEHNERT, W. G. (1978). The Process of Question Answering. Hillsdale, New Jersey: Lawrence Erlbaum.

LENAT, D. B. & HARRIS, G. (1978). Designing a rule system that searches for scientific discoveries. In WATERMAN, D. A. & HAYES-ROTH, F., Eds, Pattern Directed Inference Systems. New York: Academic Press.



NEWELL, A. & SIMON, H. A. (1972). Human Problem Solving. Englewood Cliffs, New Jersey: Prentice-Hall.

PARKER-RHODES, F. (1978). Inferential Semantics. Sussex: Harvester Press.

PASK, G. (1975). Conversation, Cognition and Learning. Amsterdam: Elsevier.

PASK, G. (1976). Conversation Theory: Applications in Education and Epistemology. Amsterdam: Elsevier.

PIAGET, J. (1972). Psychology and Epistemology: Towards a Theory of Knowledge. London: Penguin University Books.

SCHANK, R. C. (1979). Reminding and memory organization: an introduction to MOPs. Research Report 170, Department of Computer Science, Yale University.

SCHANK, R. C. & ABELSON, R. (1977). Scripts, Plans, Goals and Understanding: An Inquiry into Human Knowledge Structures. Hillsdale, New Jersey: Lawrence Erlbaum.

SHORTLIFFE, E. H. (1976). Computer-based Medical Consultations: MYCIN. New York: Elsevier/North-Holland.

SLEEMAN, D. H. & HARTLEY, R. J. (1979). ACE: a system which analyses complex explanations. International Journal of Man-Machine Studies, 11, 125-144.

SUSSMAN, G. J. (1975). A Computer Model of Skill Acquisition. New York: American Elsevier.

VAN DIJK, T. A. (1980). Macrostructures: An Interdisciplinary Study of Global Structures in Discourse, Interaction, and Cognition. Hillsdale, New Jersey: Lawrence Erlbaum.