COGNITIVE SCIENCE 3, 355-366 (1979)

THEORETICAL NOTE

Differences Between Belief and Knowledge Systems*

ROBERT P. ABELSON

Yale University

Seven features which in practice seem to differentiate belief systems from knowledge systems are discussed. These are: nonconsensuality, "existence beliefs," alternative worlds, evaluative components, episodic material, unboundedness, and variable credences. Each of these features gives rise to challenging representation problems. Progress on any of these problems within artificial intelligence would be helpful in the study of knowledge systems as well as belief systems, inasmuch as the distinction between the two types of systems is not absolute.

As long ago as 1963 I wrote a paper entitled "Computer simulation of hot cognition" (Abelson, 1963), and in 1965, "Computer simulation of individual belief systems" (Abelson & Carroll, 1965). These papers represented an interest which was then quite idiosyncratic. In 1979, this interest is still unusual, but with the advent of cognitive science there is now a place for the study of belief systems in relation to other aspects of human cognition and human affect. The study of belief systems is intimately related to work on knowledge systems, but has enough unique features to justify it as a separate topic. In this paper I will proceed as follows: First, I will set forth what I see as the differences between a "belief system" and a "knowledge system," at the same time acknowledging that they have many points in common. Then I will explain why, from an artificial intelligence standpoint, it is difficult but nevertheless important to construct belief system models. Also I will outline what kinds of developments are in progress or are still lacking in the modeling of belief and knowledge systems.

A. THE DISTINGUISHING FEATURES OF "BELIEF SYSTEMS"

The use of the term "belief system" can be highly confusing. Psychologists, political scientists and anthropologists tend to use the term in rather different senses. It would be fruitless to try to settle once and for all what is really meant by "belief system."

*This paper is a slightly revised version of a symposium talk delivered at the First Annual Meeting of the Cognitive Science Society.

I do not propose here to attempt an analytical philosophical analysis. Instead, I will review some of the work on belief systems (without attempting a complete survey) in an attempt to identify features which in practice seem to be distinctive about beliefs and belief systems. I emphasize distinctive features because belief systems have much in common with knowledge systems, and were there no distinctive features at all, then modeling belief systems would be just like modeling knowledge systems and would need no special structures or processes. Imagine, then, a stored body of structured knowledge. There is some network of interrelated concepts and propositions at varying levels of generality, and there are some processes by which a human or a computer accesses and manipulates that knowledge under current activating circumstances and/or in the service of particular current purposes. What features warrant calling this stored body of concepts a belief system? I see seven possibly important conditions. In the spirit of the modern emphasis on definition by prototype (Smith, Shoben & Rips, 1974; Rosch & Mervis, 1975) rather than by necessary and sufficient conditions, none of these seven features is individually definitive. In fact they vary somewhat in the degree to which they distinguish belief systems from knowledge systems. Any system embodying most of them, however, will have the essential character of a "belief system":

1. The elements (concepts, propositions, rules, etc.) of a belief system are not consensual. That is, the elements of one system might be quite different from those of a second in the same content domain, and a third system might differ from both. Individual differences of this kind do not generally characterize ordinary knowledge systems, except insofar as one might want to represent differences in capability or complexity. Belief systems may also vary in complexity, but the most distinctive variation is conceptual variation at a roughly comparable level of complexity. Consider some societal problem area like, say, the Generation Gap. Youngsters may have a highly articulated system of concepts blaming the problem on adult restrictiveness and insensitivity, whereas oldsters develop concepts around adolescent rebellion and immaturity. Meanwhile, psychologists may view the matter in terms of communication failure between generations. An interesting sidelight on the consensuality question is whether a belief system is "aware," in some sense, that alternative constructions are possible. Semantically, "belief" as distinct from knowledge carries the connotation of disputability: the believer is aware that others may think differently. From an artificial intelligence perspective, however, the issue of such awareness (and the problem of how to represent it) seems separable from the design of the belief system itself. And one can imagine belief systems, such as the very young child's construction of the life and works of Santa Claus, which are so naive as not to contemplate alternative realities. Now, if awareness of alternatives is not represented in a belief system itself, then obviously one cannot tell by looking within the system whether it is consensual or not.

Consensuality, one might say, is not a transparent feature of belief systems, and thus there are cases where it is not clear whether something is a belief system or a knowledge system. Consider so-called "cultural belief systems" (D'Andrade, 1972). If every normal member of a particular culture believes in witches, then as far as they are concerned, it is not a belief system, it is a knowledge system. They know about witches. But the anthropologist who studies this culture is aware of many witchless cultures, and thus uses the label "belief system" without flinching. The other side of this same coin is that scientific knowledge can be viewed as mere belief by those outside the community of shared presuppositions about science as a way of knowing. For cognitive science, the point of this little discussion is that nonconsensuality should somehow be exploited if belief systems are to be interesting in their own right as opposed to knowledge systems. In an AI belief system program in particular, this could be done either by (a) representing the awareness of alternatives, or (b) embodying different belief systems by different data bases, as sketched below. Carbonell's (1978) POLITICS program does both of these things to some extent.
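
As a minimal sketch of these two options, in modern Python rather than anything from the period, one might write the following; the propositions, keys, and the aware_of_alternatives flag are all invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Proposition:
    content: str
    # (a) does the system itself represent that others may construe this differently?
    aware_of_alternatives: bool = False

@dataclass
class BeliefSystem:
    owner: str
    # (b) each believer gets a separate data base of propositions
    propositions: dict = field(default_factory=dict)

# Two systems covering the same content domain with different elements:
child = BeliefSystem("child", {"presents": Proposition("Santa Claus delivers the presents")})
adult = BeliefSystem("adult", {"presents": Proposition("Parents deliver the presents",
                                                       aware_of_alternatives=True)})

# Nonconsensuality is not transparent: neither data base alone reveals that
# another system construes the same domain differently.
for system in (child, adult):
    p = system.propositions["presents"]
    print(system.owner, "->", p.content, "| aware of alternatives:", p.aware_of_alternatives)
```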

2. Belief systems are in part concerned with the existence or nonexistence of certain conceptual entities. God, ESP, witches, and assassination conspiracies are examples of such entities. This feature of belief systems is essentially a special case of the nonconsensuality feature. To insist that some entity exists implies an awareness of others who believe it does not exist. Moreover, these entities are usually central organizing categories in the belief system, and as such, they may play an unusual role which is not typically to be found in the concepts of straight knowledge systems. The central delusion of persecution afflicting paranoids would be one example. As in Colby's (1975) PARRY model of paranoid thinking, just about any topic of conversation is capable of activating the kernel threat the system perceives itself as facing. Other than by this property of obsessional centrality, however, it is not possible in inspecting a given belief system by itself to tell whether a given category embodies a novel existence belief. That is, "existence categories," like other nonconsensual concepts, are not a transparent feature of a belief system. By contrast, the following five features of belief systems are transparent.

3. Belief systems often include representations of "alternative worlds," typically the world as it is and the world as it should be. Revolutionary or Utopian ideologies (cf. Kanter, 1972) especially have this character. The world must be changed in order to achieve an idealized state, and discussions of such change must elaborate how present reality operates deficiently, and what political, economic, social (etc.) factors must be manipulated in order to eliminate the deficiencies. This is, to be sure, a kind of problem-solving, but at a more abstract level than the usually studied problem-solving tasks in cognitive science. It is not a matter of finding the sequence of rules to apply to a starting state to reach a goal; it is a matter of rejecting the old rules and finding new ones which achieve the goal state. In the terminology of Newell (1969) and others, this is an "ill-structured problem."
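
A toy sketch of the two-worlds representation, under an assumed flat feature-value encoding of my own devising (nothing here is from Kanter or Newell):

```python
# The world as it is and the world as it should be, for a toy activist system.
actual_world = {"energy": "nuclear", "regulation": "lax", "pollution": "high"}
ideal_world = {"energy": "renewable", "regulation": "strict", "pollution": "low"}

def discrepancies(actual, ideal):
    """Return each feature whose actual value departs from the idealized one."""
    return {f: (actual.get(f), v) for f, v in ideal.items() if actual.get(f) != v}

# The ill-structured part is everything this loop does NOT do: the
# representation names the deficiencies, but no fixed rule set says how
# (or whether) each one can be eliminated.
for feature, (is_now, should_be) in discrepancies(actual_world, ideal_world).items():
    print(f"{feature}: is {is_now!r}, should be {should_be!r}")
```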

4. Belief systems rely heavily on evaluative and affective components. There are two aspects to this, one "cognitive," the other "motivational." First, a belief system typically has large categories of concepts defined in one way or another as themselves "good" or "bad," or as leading to good or bad. For the antinuclear activist, nuclear power is bad, pollution is bad, callous industry and lax regulation are bad, materialism and waste are bad, natural alternative energy sources are good, conservation is good, concern about hazards to human life is good, and so on. These polarities, which exert a strong organizing influence on other concepts within the system, may have a very dense network of connections rare in ordinary knowledge systems. From a formal point of view, however, the concepts of "good" and "bad" might for all intents and purposes be treated as cold cognitive categories just like any other categories of a knowledge system. Inferences about goodness and badness might be governed by rules such as the "balance principle" (Heider, 1946, 1958; Abelson & Rosenberg, 1958). Without going into all the details, an illustration of a rule following from the balance principle is: If z is good (bad), and y harms z, then y is bad (good). An interesting recent artificial intelligence program which refers a lot to good and bad entities but seems rather more like a knowledge system than a belief system is VEGE, in unreported work at Yale by Michael Dyer. VEGE gives advice on organic vegetable gardening much as a newspaper columnist answering readers' questions might. The balance principle is implicitly used throughout: anything which encourages pests or other entities harming vegetables is to be avoided, anything which discourages those entities is to be welcomed, and so on. Thus the formal logic of chains of helping and harming (or enabling and disenabling) relationships can proceed perfectly reasonably in a knowledge system having none of the features (1), (2), (3) above characterizing belief systems. Of course if the chains of relationship are nonconsensual, seeming to follow "psycho-logic" (Abelson & Rosenberg, 1958) rather than logic (e.g., "Only bloodshed will lead to peace"), then we are back on the ground of prototypic belief systems again. When the good and bad entities for the system have motivational force rather than simply categorical status, unique consequences for belief systems are even more likely to emerge. By motivational force I mean that when affectively toned entities are activated, the processes of the system are altered. Thus a system that found some input exciting would process it more deeply, or if fearful would avoid it, and so on. A specialized instance of this possibility occurs in the conversational behavior of Colby's (1975) paranoid PARRY program, which changes rather drastically as a function of its levels of fear, anger, and anxiety.
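
The quoted rule is mechanical enough to sketch directly. The following is a hedged Python rendering of balance-principle propagation; the VEGE-flavored entities and the helps/harms vocabulary are my own illustration, not Dyer's actual program:

```python
# Valences: +1 for good, -1 for bad, None for not yet evaluated.
valence = {"vegetables": +1, "aphids": None, "ladybugs": None}

relations = [
    ("aphids", "harms", "vegetables"),
    ("ladybugs", "harms", "aphids"),
]

def propagate(valence, relations):
    """Apply the balance rule until no new valences can be inferred:
    y inherits z's sign if y helps z, and the opposite sign if y harms z."""
    changed = True
    while changed:
        changed = False
        for y, rel, z in relations:
            if valence.get(y) is None and valence.get(z) is not None:
                valence[y] = valence[z] if rel == "helps" else -valence[z]
                changed = True
    return valence

print(propagate(valence, relations))
# {'vegetables': 1, 'aphids': -1, 'ladybugs': 1}: whatever harms a pest is good.
```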

5. Belief systems are likely to include a substantial amount of episodic material from either personal experience or (for cultural belief systems) from folklore or (for political doctrines) from propaganda. Several years ago I interviewed in depth a number of strong believers in ESP and found that very frequently some striking personal experience was pivotal (Ayeroff & Abelson, 1976). One woman reported a confirmation of a vision she had of her husband being killed in an accident; another person claimed that he was able on particular occasions to know what his totally paralyzed sister was thinking; and so on. The force of such episodes is sometimes as subjective "proof" of a belief, especially an existence belief like that of ESP, and sometimes as an illustration or object lesson to enrich a particular concept. For the belief system of Senator Barry Goldwater, for example, the time when the Russians built the Berlin Wall was a crucial opportunity lost: the Free World could have stood up to the Communists and changed the realities of the future rather than passively acquiescing to an unsatisfactory present. Many knowledge systems have no apparent need for such episodes, relying instead entirely on general facts and principles. It would seem odd for a program that understood, let us say, the physics of balls rolling down inclined planes to appeal to anecdotes such as the "time in May, 1938 when in Chicago a 20 kg. ball was rolled down a 30-degree incline." I do not mean to suggest, however, that knowledge systems necessarily must lack episodic material. Especially if the knowledge concerns social reality, as with the scripts, plans, goals and themes of Schank and Abelson's (1977) theory, a program could appropriately use combined episodic and semantic memory features. Indeed, Schank's (in press) new theorizing about MOPs has precisely this character.
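
The attachment of episodes to general knowledge structures can be sketched in a few lines. This is my gloss on the reminding idea, not Schank's implementation; the script name and episodes are invented:

```python
from collections import defaultdict

# Each higher-level structure (script, plan) is a spine indexing episodes.
episodes_by_spine = defaultdict(list)

def store(spine, episode):
    episodes_by_spine[spine].append(episode)

def understand(spine, new_episode):
    """Understanding as reminding: retrieve old episodes filed under the
    same spine, then file the new one alongside them."""
    remindings = list(episodes_by_spine[spine])
    store(spine, new_episode)
    return remindings

store("restaurant-script", "the time the waiter forgot the order")
print(understand("restaurant-script", "today's very slow lunch service"))
# ['the time the waiter forgot the order']
```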

6. The content set to be included in a belief system is usually highly "open." That is, it is unclear where to draw a boundary around the belief system, excluding as irrelevant concepts lying outside. This is especially true if personal episodic material is important in the system. Consider, for example, a parental belief system about the irresponsibility and ingratitude of the modern generation of youth. Suppose, as might very well be the case, that central to this system is a series of hurtful episodes involving the believer's own children. For these episodes to be intelligible, it would be necessary for the system to contain information about these particular children, about their habits, their development, their friends, where the family lived at the time, and so on. And one would have to have similar conceptual amplification about the "self" of the believer. Each amplified concept would relate to new concepts themselves needing amplification, and there might be no end to it. Colby (1973), referring to an interviewing project attempting to elicit the belief system of a respondent about child-rearing, reports precisely this problem. Is it relevant to the subject's beliefs about child-rearing to consider her beliefs about nutrition? About safety in the streets? About sex? Now of course the same problem is encountered with knowledge systems. Openness is often a matter of degree. An expert on, say, moon rocks might well need to know a lot about cosmology, geology, physical chemistry, and mathematics, and the appropriate boundaries in each of these disciplines might not be well-defined, because each bit of knowledge would drag new bits into the system.
The reason I list unboundedness as a distinctive feature of belief systems is that belief systems always necessarily implicate the self-concept of the believer at some level, and self-concepts have wide boundaries indeed. On the other hand, knowledge systems usually exclude the Self, and limitation to restricted problem areas is conceivable. In order to avoid open boundaries, certain artificial intelligence programs (e.g., Winograd, 1972) have been exercised in highly circumscribed "small world" knowledge domains. The belief system theorist is usually hard put to find "small worlds."

7. Beliefs can be held with varying degrees of certitude. The believer can be passionately committed to a point of view, or at the other extreme could regard a state of affairs as more probable than not, as in "I believe that micro-organisms will be found on Mars." This dimension of variation is absent from knowledge systems. One would not say that one knew a fact strongly. There exist some examples of attempts to model variable credences or "confidence weights" of beliefs (Colby, Tesler & Enea, 1969; Becker, 1973) and how these change as a function of new information. A distinction should be made between the certitude attaching to a single belief and the strength of attachment to a large system of beliefs. Philosophical analyses of the difference between believing and knowing often implicate just single beliefs. But what is more interesting for cognitive science is the distinction at the systemic level. Whole belief systems are usually at least as confidently held as whole knowledge systems, even if they contain some weak individual beliefs. The key theoretical problems lie in the relationships between the credences attached to single beliefs and to the system containing those beliefs.

In summary, I have sketched seven features which seem to characterize belief systems as distinct from knowledge systems. These are: nonconsensuality, "existence beliefs," alternative worlds, evaluative components, episodic material, unboundedness, and variable credences. None of these features is individually guaranteed to distinguish belief from knowledge; in combination they are very likely to do so.

B. THE DIFFICULTY AND THE IMPORTANCE OF STUDYING BELIEF SYSTEMS

Belief systems have not been very popular objects of study in AI. Partly this is because the backgrounds of most people in computer science have led them to prefer more formal problems arising from mathematics or linguistics, and partly because belief systems present substantial modeling difficulties. The coming of age of cognitive science has introduced to AI a much wider range of disciplines, anthropology for example, and we can expect more motivation in the future for studying belief systems.

But the difficulties will not so easily go away, and it is well to consider what they are. Each of the seven distinguishing features of belief systems creates theoretical or practical difficulties. Most of these difficulties, fortunately, produce interesting opportunities for needed new developments in cognitive science. We consider each of the seven features in turn.

Nonconsensuality poses a problem because it is inefficient to design a huge system for a single target case. It is neither very easy nor very convincing to try to validate a model that simulates an individual case. There is always the suspicion that the model was fitted ad hoc, and there is always the sense of disappointment that the model is not generalizable. The trick, therefore, is to encompass individual variations within a common systemic framework, rather than having to start all over again each time a new individual is to be modeled. One excellent strategy for accomplishing this has been provided by Carbonell (1978). In his POLITICS program for representing American foreign policy views, virtually all of the conceptual elements are shared among belief systems of different shades. What differs between, say, left- and right-wing views is the network of perceived goal priorities for the major international actors. Thus the left-winger sees the Russians as very interested in preserving world peace and not much willing to push too hard to gain control of more territory, whereas the right-winger perceives the importance of these two goals to the Russians in the opposite relative order. This simple cognitive device turns out to produce major differences in systemic interpretation of real-world events, because the system relies very heavily on attributions about plans and goals. How far this device can be pushed to handle nonconsensuality in general remains to be seen.
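
A minimal sketch of this device, with invented goal lists standing in for Carbonell's actual representation:

```python
# Shared conceptual elements; only the perceived goal orderings differ.
left_wing = {"USSR": ["preserve world peace", "gain control of territory"]}
right_wing = {"USSR": ["gain control of territory", "preserve world peace"]}

def interpret(actor, inferred_goal, priorities):
    """Read an event by how high the inferred goal ranks for the actor."""
    rank = priorities[actor].index(inferred_goal)
    return "a central, expected move" if rank == 0 else "a secondary or forced move"

event_goal = "gain control of territory"  # e.g., a Soviet action abroad
print("left-winger sees: ", interpret("USSR", event_goal, left_wing))
print("right-winger sees:", interpret("USSR", event_goal, right_wing))
```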

Existence beliefs as special cases of nonconsensuality are more troublesome. If what exists for one believer does not exist for the other, and these privileged constructs are highly central, then the common content core between the different systems declines drastically. Any general framework between systems would then have to rely on similar processes operating on disparate contents. While this is certainly conceivable, it is not totally satisfying. Furthermore, it is a possibility that systems with quite different existence beliefs might use different processing modes. A magician, a statistician, and a believer in ESP might have very little in common in the ways they reasoned about an apparent instance of mind-reading. To model the ways separately, each in its own terms, might be instructive and fun, but would not do much to cure the ad hoc situation characterizing the modeling of a single nonconsensual belief system. Thus I see differences in existence beliefs as one of the most vexing of the characteristic features of belief systems.

There has been a little bit of interest in AI in alternate worlds problems. The way this usually arises is that mental representations must often contain models of other mental representations. In the simplest sort of case, knowing whether someone else knows something may be a crucial piece of knowledge for coherent planning or understanding of plans.
When X asks Y something (in Schank & Abelson's, 1977, jargon, when the "ASK planbox for DELTA-KNOW" is used), X presumes that Y might know the answer, and Y presumes that X does not know the answer (cf. Meehan, 1976, for the confusion that results when these presumptions are blatantly violated). There are more complex cases, such as those that arise in the semantic analysis of concepts like "promise" or "request." If I hear you agree to my request to do X, then among other things (Bruce, 1975), I may believe that you believe that I believe that you can do X. Here there is a representation of a representation of a representation. In as yet unpublished work, Yorick Wilks has undertaken a systematic analysis of the rules by which embedded representations might operate without totally boggling the system containing them. He considers examples such as Interlocutor telling System, "I don't think Harry likes you; but don't tell him I told you." Comprehension of this requires System to cognize Harry's representation of Interlocutor's representation of Harry's representation of System. (Or something of the sort.) Fortunately for most belief system work, the alternative worlds problem is not as complicated as this. For single beliefs one might worry about how to deal with embedded conceptualizations such as "I believe that you believe that I believe . . . etc.," but I do not know of anyone who is concerned with modeling a system of beliefs about a system of beliefs about a system of beliefs. Real-world belief systems are not often tortured by sinuous regress. As a believer I know my own values and prospects, and the values and prospects of my enemies and my friends. Alternative worlds come down largely to alternative goals and plans, the system vs. its enemies. Knowledge of "counterplanning" strategies is useful, and judging by the various developments in Carbonell (1978), Bruce and Newman (1978), and Wilensky (1978), the representation of such strategies is tractable.
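
Such embedded representations are easy to write down, whatever the unresolved processing questions. A minimal sketch with invented structures (this is not Wilks's analysis):

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Belief:
    believer: str
    content: Union["Belief", str]  # a base proposition or a further belief

    def depth(self) -> int:
        """Levels of embedding in the representation."""
        return 1 + (self.content.depth() if isinstance(self.content, Belief) else 0)

    def render(self) -> str:
        inner = self.content.render() if isinstance(self.content, Belief) else self.content
        return f"{self.believer} believes that [{inner}]"

# "I may believe that you believe that I believe that you can do X":
b = Belief("X", Belief("Y", Belief("X", "Y can do the task")))
print(b.render())  # X believes that [Y believes that [X believes that [Y can do the task]]]
print(b.depth())   # 3: a representation of a representation of a representation
```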

The evaluative components of belief systems give rise to the least well-understood problems. Earlier I said that evaluations of entities as good or bad create important cognitive categories, but that the more unique consequences of evaluation are motivational rather than solely cognitive. While intuitively it seems clear what is meant by "motivational," in practice the distinction between cognitive and motivational is quite subtle. This is especially true in the traditions of artificial intelligence and information processing psychology, where the habit of mind is to treat everything in some sense as cognitive. One can usually, perhaps always, simulate motivational phenomena by cognitive ones. An example will clarify this assertion. Some years ago, while first trying to simulate by computer the belief system processes (Abelson & Carroll, 1965) that characterized, say, the foreign policy views of Barry Goldwater, we included a motivated process of evaluative inconsistency reduction. Every time the system encountered a good actor involved in a bad action, or a bad actor involved in a good action, it went into a trouble-shooting mode designed to explain away such a state of affairs.
A major heuristic was rationalization (Abelson, 1963): for example, a good actor might perform a bad action because forced to by a bad actor, or because the bad action was only temporary, leading to a greater good in the long run. Another heuristic was denial, whereby the system refused to accept that the good actor was performing a bad action. These heuristics all seemed quite realistic. Later I realized, however, that inconsistency resolutions need not be "computed" time after time. The system can simply store an acceptable resolution to every familiar dilemma. What once may have been a motivated process becomes frozen into a cognitive package. Thus if we told the Goldwater Machine that the United States was harming innocent Vietnamese civilians, and it replied that this was in the service of freedom, an observer could not tell if this response was a motivated rationalization cooked up on the spot, or a well-rehearsed answer given entirely without either anguish or thinking. If alternatively the system denied the possibility that the United States was harming innocent civilians, an observer could not distinguish this as a "hot" denial process from a cold search through an established conceptual taxonomy in which such an event is simply not recognized because there is no script or plan which could plausibly contain it. The response itself does not reveal its own motivational origins.
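
The frozen-package point can be made concrete with a small sketch; the dilemma keys and the canned answer are invented, and nothing here reproduces the Abelson and Carroll program:

```python
stored_resolutions = {}  # familiar dilemma -> frozen resolution

def resolve(good_actor, bad_action):
    """Explain away "good actor performs bad action," caching the answer."""
    key = (good_actor, bad_action)
    if key in stored_resolutions:
        # Cold retrieval of a frozen cognitive package.
        return stored_resolutions[key]
    # "Hot" trouble-shooting, computed once (the rationalization heuristic).
    resolution = f"{good_actor} was forced, or {bad_action} serves a greater good"
    stored_resolutions[key] = resolution
    return resolution

print(resolve("the United States", "harming civilians"))  # computed on the spot
print(resolve("the United States", "harming civilians"))  # identical frozen answer
```

As the text argues, an observer who sees only the output cannot tell which branch produced it.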

There is presently roiling an interesting debate in social psychology around this very issue. In "attribution theory" (Jones et al., 1971), the study of how people assign causal responsibility for observed events, there is a controversy about whether so-called "defensive attribution" exists. That is, are there special processes by which people defend themselves against unwanted blame for unfortunate events, or do seemingly defensive phenomena arise merely as a side-effect of normal habits and categories of mind (Nisbett & Ross, 1979)? Recently, Tetlock and Levi (1980) have argued that there is no definitive way to settle this argument. Because of the undecidability between hot and cold origins for any given response, it is tempting for the model-builder to ignore emotional explanations in favor of cognitive explanations. It is more parsimonious, and it is more congenial to people trained to study cognition. But I, along with Donald Norman (in press), have the growing sense that this is a mistake, that by so doing we might be closing off several generations of potential developments in cognitive science.

While an output response itself may not reveal its genesis, the manner in which the response is given (e.g., its speed), and the history of the response over repeated exposure to appropriate activators, may be very sensitive to its internal design. Evaluative or affective responses may not in fact simply be the same as cognitive judgments of good and bad. The social psychologist Robert Zajonc (1979) has presented seemingly persuasive evidence that affective responses to stimuli can occur prior to and independent from later cognitive responses. Briefly described, his prototypic demonstration is as follows: Subjects are shown a number of members from a set of unfamiliar stimuli (Japanese ideographs or Turkish words, say). Some members are displayed often, some rarely, under rapid exposure conditions making it difficult for subjects to articulate how often each is seen. Zajonc (1968) had previously demonstrated under standard conditions a "mere exposure" effect on liking: previously novel stimuli seen often are judged more pleasant than those seen once or twice. In the critical demonstrations under rapid exposure, subjects might not be able to recognize consciously which of two symbols they had seen more often, but could nevertheless reliably state a preference for the one they had indeed seen more often. Preference without recognition is attributed by Zajonc to a fast, crude, affective response system using cues picked up before the slower cognitive system reads the stimuli. He bolsters his position by the functional argument that animals have a need for fast responses to threats and opportunities in the environment without waiting for careful stimulus categorization. Rabbits run from apparent hawks which closer inspection would often show to be nonhawks. This provocative dual process theory seems certain to become a focus for psychological research and debate. In any case, the ability of belief systems to stir and express the passions of believers is an essential feature not to be found in knowledge systems, well worth our groping theoretical efforts to try to understand it. Indeed, it is at the root of criticisms of AI programs, sounded by Searle (1980), by Dreyfus (1979), and by others (Weizenbaum, 1976), that programs lack passions and purposes of their own. Of course these critics claim that programs are intrinsically incapable of anything resembling feeling. To what degree and in what sense this claim is true or false remains to be seen. But it will not be seen if we don't try to look.

How to deal with episodic material is another very crucial and very hard problem. As noted, AI programs tend not to incorporate episodic knowledge. Partly this is because it is unclear how to integrate episodic with "semantic" information, and partly because to do it right, one ought to incorporate episodic material historically, as it arises. But this involves "growing" a program, rather than giving birth to it whole, and one is confronted again with the same inefficiency that characterizes the study of nonconsensual systems. Nevertheless, it is essential to cope with episodic knowledge in order to have, among other things, an adequate theory of memory. Schank's (in press) recent theorizing about memory takes the position that being reminded of past episodes is the essence of the understanding of present episodes. Higher-level knowledge structures such as scripts and plans serve as spines to which episodes are attached. No program, either knowledge system or belief system, has yet exercised such a scheme, but it seems a promising area for development.

I have nothing very encouraging to say about the problem of unboundedness for personal belief systems, although for political or cultural belief systems in circumscribed areas, it seems possible to create bounded small worlds for modeling purposes. I do not know of any theoretical work which would guide us in deciding whether there is substantial loss of validity when boundaries are somewhat arbitrarily drawn. Probably for a long time this will remain a matter of judgment.

Finally, and least hopefully, the problem of variable credences seems to me to be a tantalizing quagmire. It is very messy to write models in which each belief is indexed with a numerical or quasi-numerical credence value which goes up or down depending on the fate of the evidence supporting it or supporting related beliefs. However, if we are ever to model change in belief systems (surely a very pivotal phenomenon in human affairs), then it seems as though we need to model the mechanism by which doubt spreads corrosively from belief to belief within a formerly stable system until a kind of "catastrophe" point is reached when whole bundles of beliefs are abandoned in favor of new ones. Possibly the assignment of numerical credences existing through time is not the way to approach this problem. Perhaps credences are somehow calculated only when needed in the face of some challenge, but otherwise a belief is a belief is a belief. I believe that this assertion is an insight of sorts, but I confess it doesn't tell us how to model the phenomenon in which doubt spreads through a belief system.
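
For concreteness, here is a deliberately speculative sketch of the corrosive-doubt picture; every number, belief, and support link is invented, and the discussion above gives reason to doubt that standing numerical credences are even the right approach:

```python
credence = {
    "ESP exists": 0.9,
    "my vision was telepathy": 0.8,
    "my sister's thoughts were read": 0.7,
}
supports = [  # evidence belief -> belief it supports
    ("my vision was telepathy", "ESP exists"),
    ("my sister's thoughts were read", "ESP exists"),
]

def challenge(belief, blow=0.4, decay=0.5, threshold=0.3):
    """Reduce one credence, let the doubt spread (with diminishing force)
    to beliefs it supports, and abandon the bundle past a threshold."""
    credence[belief] = max(0.0, credence[belief] - blow)
    for src, dst in supports:
        if src == belief:
            credence[dst] = max(0.0, credence[dst] - blow * decay)
    if all(c < threshold for c in credence.values()):
        print("catastrophe point: the whole bundle is abandoned")
    return credence

print(challenge("my vision was telepathy"))
```
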
Let me sum up the whole matter of belief systems as follows: The list of seven distinguishing features I have given is in effect a compendium of many frontier problems in cognitive science. One need not study belief systems as such to care about these problems, although a concern with belief systems makes these problems more obvious and more pressing. Some of the difficulties raised seem largely of a technical nature: how to model alternative worlds or variable credences, how to grapple with unboundedness, or how to integrate episodic and semantic knowledge. But in a larger sense most of the problems go deeper, and lie at the heart of what we are willing to include in a cognitive science. I hope that an increasing number of investigators will begin to try to model nonconsensual mental systems, and affective processes, and the ways in which personal experience enriches knowledge. Otherwise our models will not be fully faithful to the natural vicissitudes of the human mind.

REFERENCES

Abelson, R. P. Computer simulation of "hot cognition." In S. Tomkins and S. Messick (eds.), Computer simulation of personality. New York: Wiley, 1963.
Abelson, R. P. & Carroll, J. D. Computer simulation of individual belief systems. American Behavioral Scientist, 1965, 8, 24-30.
Abelson, R. P. & Rosenberg, M. J. Symbolic psycho-logic: A model of attitudinal cognition. Behavioral Science, 1958, 4, 1-12.
Ayeroff, F. & Abelson, R. P. ESP and ESB: Belief in personal success at mental telepathy. Journal of Personality and Social Psychology, 1976, 34, 240-247.
Becker, J. D. A model for the encoding of experiential information. In R. C. Schank and K. M. Colby (eds.), Computer models of thought and language. San Francisco: Freeman, 1973.
Bruce, B. Belief systems and language understanding. AI Report No. 21. Cambridge, Mass.: Bolt Beranek and Newman, January 1975.
Bruce, B. & Newman, D. Interacting plans. Cognitive Science, 1978, 2, 193-194.

Carbonell, J. G., Jr. POLITICS: Automated ideological reasoning. Cognitive Science, 1978, 2, 1-15.
Colby, K. M. Simulations of belief systems. In R. C. Schank and K. M. Colby (eds.), Computer models of thought and language. San Francisco: Freeman, 1973.
Colby, K. M., Tesler, L. & Enea, H. Experiments with a search algorithm on the data base of a human belief structure. Stanford AI Project Memo AI-94, August 1969.
Colby, K. M. Artificial paranoia: A computer simulation of paranoid processes. New York: Pergamon, 1975.
D'Andrade, R. Cultural belief systems. University of California at San Diego. Manuscript prepared as a report to the National Institute of Mental Health Committee on Social and Cultural Processes, November 1972.
Dreyfus, H. L. What computers can't do: The limits of artificial intelligence (Revised edition). New York: Harper, 1979.
Heider, F. Attitudes and cognitive organization. Journal of Psychology, 1946, 21, 107-112.
Heider, F. The psychology of interpersonal relations. New York: Wiley, 1958.
Jones, E. E., Kanouse, D. E., Kelley, H. H., Nisbett, R. E., Valins, S. & Weiner, B. Attribution: Perceiving the causes of behavior. Morristown, N.J.: General Learning Press, 1971.
Kanter, R. M. Commitment and community: Communes and utopias in sociological perspective. Cambridge, Mass.: Harvard University Press, 1972.
Meehan, J. The metanovel: Writing stories by computer. Ph.D. dissertation, Yale University, 1976.
Newell, A. Heuristic programming: Ill-structured problems. In J. Aronofsky (ed.), Progress in operations research, Vol. III. New York: Wiley, 1969.
Norman, D. Twelve issues for cognitive science. Cognitive Science, in press.
Rosch, E. & Mervis, C. B. Family resemblances: Studies in the internal structure of categories. Cognitive Psychology, 1975, 7, 573-605.
Schank, R. Language and memory. Cognitive Science, in press.
Schank, R. C. & Abelson, R. P. Scripts, plans, goals and understanding. Hillsdale, N.J.: Lawrence Erlbaum Associates, 1977.
Searle, J. Notes on artificial intelligence. Behavioral and Brain Sciences, in press, 1980.
Smith, E. E., Shoben, E. J. & Rips, L. J. Structure and process in semantic memory: A featural model for semantic decisions. Psychological Review, 1974, 81, 214-241.
Tetlock, P. E. & Levi, A. Attribution bias: On the inconclusiveness of the cognition-motivation debate. Yale University, manuscript submitted for publication.
Weizenbaum, J. Computer power and human reason. San Francisco: Freeman, 1976.
Wilensky, R. Understanding goal-based stories. Ph.D. dissertation, Yale University, 1978.
Winograd, T. Understanding natural language. New York: Academic Press, 1972.
Zajonc, R. The attitudinal effects of mere exposure. Journal of Personality and Social Psychology Monograph Supplement, 1968, 9, No. 2, 1-27.
Zajonc, R. Feeling and thinking: Preferences need no inferences. Address delivered at the American Psychological Association Convention, New York, September 1979.