Book Review

G. Harman, Change in View (MIT Press, Cambridge, MA, 1986); 137 pages, $19.95.

Reviewed by: Ronald P. Loui

Departments of Computer Science and Philosophy, University of Rochester, Rochester, NY 14627, U.S.A.

Scope, overview, and AI interest

Harman says his book is primarily concerned with human intelligence, and "may or may not be of much interest for artificial intelligence." Nevertheless, the first reviewers on the publisher's jacket are Jon Doyle and Bob Moore, both of whom applaud the book for its value to the AI community. Moore says of the book that "it brings out more forcefully than anything yet published the difference between reasoning and 'theorem-proving,'" which is quite a comment, considering that all of us have written on the subject. The text is peppered with terminology that AI people will think is their own. Harman constantly maintains a distinction between "explicit belief" and "implicit belief," which is a natural AI practice, but not yet widespread among epistemologists. Even more striking is Harman's depiction of reasoning as "reasoned change in view." The adjectival use of "reasoned" is curious, and unmistakably like Doyle's "reasoned arguments" in TMS.

Change in View is a quick sketch. Harman uses an expository style that is unusual for a philosopher, and computer scientists should welcome this. The style allows fast access to the basic ideas, and is typified by discussion rather than dialectic. Harman also provides a summary of almost every chapter, which can be skimmed easily (hence, I won't attempt a comprehensive summary of the material here). The result is a book that can be read by an AI audience with less time and effort than what is required for many AI papers.

Change in View advances theses about effective reasoning by non-ideal reasoners, i.e., by humans, and perhaps also by other computationally bounded reasoners. It naturally divides into two parts, the first part having to do with beliefs (Chapters 1-7), and the second part about intentions (Chapters 8-9). The latter part is mostly analytic philosophy, and perhaps of interest to no one in AI except those who have already been reading about practical reasoning in the context of planning. The former part, though, the bulk of the book, is semiconstructive philosophy, and should stimulate anyone working on inference or knowledge representation. For AI workers in those areas, especially logicists, who think they are at an impasse, or suffering creative block, this book could be just what is needed. To those who do not need the therapy, but nevertheless want to know what is in the book, I address this review. I'll discuss two of the main distinctions that Doyle and Moore find valuable (reasoned revision versus logic, and probabilistic versus qualitative inference), comment on the other ideas that seem relevant, then consider what the book really achieves.

Reasoned revision versus logic

Harman enjoins us not to confuse "reasoned revision" with proof or argument; this subject matter is distinct from that of deductive logic, and perhaps even distinct from the traditional subject matter of inductive logic and epistemology: "Many people will be inclined to suppose that logic has some sort of special relevance to the theory of reasoning...; I argue that this inclination should be resisted." (p. 11) Harman's wording sounds strong, e.g., "My conclusion is that there is no clearly significant way in which logic is specially relevant to reasoning." (p. 20) But when we find out what "clearly significant" and "specially relevant" are supposed to mean, we find that the claim is quite weak.

Harman bases his conclusion on three observations. The fact that p is implied by one's view is not necessarily a good reason to believe p explicitly, because one wants to avoid useless or uninteresting inferences: one wants to "avoid clutter." The maxim to avoid logical inconsistency is no good, because sometimes inconsistency is unavoidable, unnoticed, or removable only at great cost. Finally, people don't seem to distinguish between purely logical knowledge and knowledge that is not purely logical; therefore, correct maxims should refer to "immediate implication," which is a psychological notion, instead of logical implication.

Harman does not appear to dispute the idea that logic is relevant to his subject matter, though perhaps not "specially relevant." He does not appear to dispute the idea that logic will be required in the specification of schemata that really are good maxims for reasoning. What he really seems to be saying is that we should properly account for the non-ideality of our reasoners: something more than logic is needed as a proper model of effective reasoning by non-ideal reasoners. Harman's claim does not go beyond the claims of David Israel [3] in Israel's AAAI 1980 treatment of "What's wrong with non-monotonic logic," though Harman does provide more explanation.

Probabilistic versus qualitative inference

Harman also argues that there should be qualitative principles for revising one's view. Bayesian conditionalization of probability assignments is one way of revising one's view (but is not qualitative): upon learning e, change one's view of h from prob(h) to prob(h given e). Conditionalization requires that the agent assign prior probabilities; in fact, the agent apparently must make a number of assignments that is exponential in the size of his hypothesis space. Furthermore, "the actual principles we follow do not seem to [refer to degrees of belief], and it is unclear how these principles might be modified to be sensitive to degree... of belief." (p. 27) Harman is thus inclined to assume that "belief is an all-or-nothing matter; ... it is too complicated for mere finite beings to make extensive use of probabilities." This is essentially the point of view taken by McCarthy and Hayes [5], when they dubbed reasoning with probabilities "'epistemologically' inadequate."

Again, Harman's claim sounds stronger than it is. The agent could assign just some of the prior probabilities, then use another inference principle like maximum entropy to construct the remaining probabilities. Then conditionalization can be performed. This is what Peter Cheeseman [1] tells us to do in his IJCAI-85 paper, "In Defense of Probability." This is not excluded by Harman: maximum entropy would be one of those principles needed to supplement conditionalization. To use just conditionalization would require all 2^n prior probabilities over n hypotheses. It's false to say that it's too complicated for finite beings to make use of probabilities, since we can easily supplement conditionalization, but Harman can quibble about what he means by "extensive" use.
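To make the burden concrete, here is a minimal sketch (mine, not Harman's or Cheeseman's) of pure conditionalization, assuming a view is represented as a full joint distribution over n binary hypotheses. The dictionary of 2^n prior entries is exactly the assignment problem at issue; the uniform prior merely stands in for whatever numbers the agent would have to supply.

from itertools import product

def conditionalize(joint, e):
    """Revise a joint distribution by conditionalization on evidence e.

    joint: dict mapping possible worlds (tuples of 0/1) to probabilities.
    e: predicate picking out the worlds in which the evidence holds.
    Returns the posterior over the worlds consistent with e.
    """
    p_e = sum(p for w, p in joint.items() if e(w))
    return {w: p / p_e for w, p in joint.items() if e(w)}

n = 3  # even three binary hypotheses already demand 2**3 = 8 prior entries
joint = {w: 1 / 2**n for w in product([0, 1], repeat=n)}  # uniform prior

# Learn e = "hypothesis 0 is true"; prob(h given e) for h = "hypothesis 1"
posterior = conditionalize(joint, lambda w: w[0] == 1)
print(sum(p for w, p in posterior.items() if w[1] == 1))  # 0.5

Cheeseman's amendment would replace the explicit joint table with a maximum-entropy completion of whatever marginals the agent actually bothers to assign; the update rule itself is unchanged.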

I think it's better if Harman just re-words his position here. Treating many of our strongly held beliefs as if they were fully believed, i.e., accepting many of our beliefs, has computational advantages. Many students of inductive methods require that some beliefs be treated in this way, and AI practitioners also seem to want to treat belief in this way; therefore we must provide rules that govern this practice. There may be other ways of reducing the computational burden imposed by conditionalization, e.g., by providing a rule like maximum entropy, and some of these ways do make extensive use of probabilities. But the really interesting and psychologically plausible rules are the ones that have to do with the revision of accepted beliefs, in which conditionalization plays no part at all. Richmond Thomason, another philosopher who has been writing for the knowledge representation community, holds just this view (see, for instance, [7]).

Saying that acceptance is one way to ease the computational burden, though not the only way, seems to be strong enough to support and motivate Harman's subsequent discussion of belief-acceptance. Harman considers the process of moving from "tentative acceptance" to "full acceptance." The first is related to working hypotheses, and "is not easy; it takes a certain amount of sophistication and practice...." The second, on the other hand, is characterized by the "Principle of Conservatism: One is justified in continuing fully to accept something in the absence of a special reason not to." Harman claims that "since one does not have... unlimited powers of record keeping and has a quite limited ability to survey reasons and arguments, one is forced to limit the amount of inquiry in which one is engaged and one must fully accept most of the conclusions one [tentatively] accepts, thereby ending inquiry." (p. 50)

Harman tries to distinguish the various kinds of commitments to beliefs, and the processes by which one moves from one kind of commitment to another (e.g., from tentative acceptance to full acceptance). The general suggestion, so far as AI is concerned, is that inference principles based on full and tentative acceptance are good candidates for the replacement of conditionalization, and worth theorizing about.

Other suggestions

Another general suggestion that may positively affect AI work is hidden in the last chapter. Regarding the decision to perform one action or another, Harman says:

The basic idea... is to keep things simple. One tries to limit oneself to considering a single way of obtaining a single end. If there is a salient complication, either a sufficiently unhappy side effect or consequence or a possibly better course of action, then one tries to determine whether this is sufficient to overcome one's reason for doing the simple action.... The complication may or may not suggest a different, possibly complex end one might achieve.... (p. 107)

Those who are looking for a qualitative decision theory that mates well with nonmonotonic logic should find Harman's suggestion fruitful. Here is a way to use reasoning to determine choices under uncertainty that does not require explicit decision trees, utilities, and probabilities.

Harman also has a concrete suggestion for determining minimal changes in belief. He considers the "Simple Measure of Change in View: Take the sum of the number of (explicit) new beliefs added plus the number of (explicit) old beliefs given up." (p. 59) Harman toys with this measure, but it's clear that the topic deserves more attention, perhaps by someone with an AI temperament; a sketch of the measure appears below.
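The measure is trivial to state in set terms. Here is a minimal sketch (mine, not Harman's), under the crude simplification that a view is just a finite set of explicitly believed sentences:

def simple_measure(old_view: set, new_view: set) -> int:
    """Harman's Simple Measure: explicit beliefs added
    plus explicit beliefs given up."""
    added = new_view - old_view      # (explicit) new beliefs
    given_up = old_view - new_view   # (explicit) old beliefs given up
    return len(added) + len(given_up)

old = {"p", "q", "if p then r"}
new = {"p", "r", "if p then r"}
print(simple_measure(old, new))  # 2: gave up q, added r

Even this toy version suggests why the topic deserves more attention: for instance, the measure ignores implicit beliefs, and it counts giving up a trivial belief the same as giving up a load-bearing one.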

Less useful to AI is his distinction between "positive undermining" and "negative undermining." Harman argues for the positive version: "One should stop believing p whenever one positively believes one's reasons for believing p are no good," instead of the negative version: "One should stop believing p whenever one does not associate one's belief in p with an adequate justification." I don't think anyone in AI ever believed the negative version (although consider that TMSs will label a node out if none of its justifications is valid; see the sketch following this paragraph). Once again, Harman strives to make the claim larger than it is, by attributing the positive principle to "coherence theory" and the negative principle to "foundations theory." Harman himself admits that it would be difficult to relate his principles to the theories of justification that go by those names in epistemology.
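For readers who haven't seen the TMS behavior just mentioned, here is a minimal sketch (mine, simplified from the idea in Doyle's TMS [2], ignoring label propagation and circular support): a node is labeled in only when some justification for it is valid, so a node with no valid justification goes out, which is precisely the negative-undermining pattern. Node names are illustrative only.

def valid(justification, labels):
    """A justification is valid when every node on its in-list is
    labeled IN and every node on its out-list is labeled OUT."""
    in_list, out_list = justification
    return (all(labels.get(n) == "IN" for n in in_list) and
            all(labels.get(n) == "OUT" for n in out_list))

def label(node, justifications, labels):
    """Label a node IN iff at least one of its justifications is valid."""
    return "IN" if any(valid(j, labels)
                       for j in justifications.get(node, [])) else "OUT"

# p is justified by q being IN and r being OUT.
justifications = {"p": [(["q"], ["r"])]}
labels = {"q": "IN", "r": "OUT"}
print(label("p", justifications, labels))  # IN
labels["q"] = "OUT"
print(label("p", justifications, labels))  # OUT: no valid justification left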

Reliance on "people" arguments

The major problem of Change in View is its reliance on "people" arguments. Harman's arguments for critical theses all rely on premises that read: "people normally do X." "People who recognize these and related implications do not... distinguish them..." (p. 17f); "People do not normally associate with their beliefs degrees of confidence..." (p. 22); "... since people rarely keep track of their reasons..." (p. 39). Harman uses concepts like "obvious" and "immediate" and "interest" which remain unanalyzed, and are supposed to have psychological meaning.

Harman is aware of his psychologistic approach; he said he was attempting an essay on human intelligence, not on AI; the fact that the combinatorial explosion of probability information applies also to AI is supposed to be an accident. Harman cites the work of the psychologists Ross and Anderson with glee. But the "people normally do X" arguments are weak not only because they are not immediately transferable to AI. They are weak mostly because it is not easy to understand what has been achieved in this book.

Harman admits, "I find it hard to say whether the theory I want is a normative theory or a descriptive theory." (p. 7) Change in View professes not to be normative; nor could it be, since Harman constantly refers to human habits, which it may be the duty of a normative theory to advise against. It is not essentially descriptive, either: as a descriptive work, the book fails to pay enough attention to what humans actually do.

Harman says he's engaged in "reflective equilibrium." That's the term used by those who want to defend the use of intuitive judgements in the construction of normative theories. John Rawls, for instance, builds an ethical theory by intuiting which acts are moral and which are immoral, building a theory that accords with those intuitions, then reflecting on the more difficult cases with the help of the theory. Essentially, everyone who builds a normative theory relies on reflective equilibrium. So saying that one is doing "reflective equilibrium" still does not tell us in what way one is doing something different from standard normative theorizing. We still need an explanation of why Harman has chosen not to chastise human agents for failing to associate degrees of confidence and for rarely keeping track of their reasons.

Harman's achievement

I think the best way to think of Change in View is as an example of a kind of normative work that is appearing frequently in cognitive science. These arguments begin: "X's are so constituted that the best they can do is Y." For example, a philosopher writing on practical reasoning has said something like: "We human beings are so constituted that we must sometimes implement a policy, rather than make every decision individually; that's the best we can do." Given the constraints imposed by the way in which X's are constructed, Y is the best that can be done; thus, doing Y is normative for an X.

This is a good kind of normative work. Normative theorists who properly account for the non-ideality of their agents can't help but make normative theories that are going to be good prescriptive theories. If we replace "humans are so constituted..." with "computationally limited symbol manipulators are so constituted...," the AI community might recognize that the project is not much different from what it has been doing all this time. Perhaps Change in View should refer to the change in Harman's view: here is a philosopher who had previously worked on principles of reasoning for ideal agents. Now he is concerned with normative theories for agents who are restricted in interesting ways.

More concretely, Change in View dispels some misguided notions and exposes some lousy principles, which some might think would serve as a basis for a theory of inference for reasoning agents. We are already accustomed to setting off alarms whenever our colleagues presuppose decidability or deductive closure. Now we know to set off alarms at pure conditionalization strategies, and negative undermining, too. Most of the dispelled principles are paper tigers: as with the point on conditionalization, we needn't alter the mistaken principle by much in order to make it valid. Nevertheless, Harman's book clears room for foundations for new theory, and adequately maps the space for future construction.

REFERENCES

1. Cheeseman, P., In defense of probability, in: Proceedings IJCAI-85, Los Angeles, CA, 1985.
2. Doyle, J., A truth maintenance system, Artificial Intelligence 12 (1979) 231-272.
3. Israel, D., What's wrong with non-monotonic logic? in: Proceedings AAAI-80, Stanford, CA, 1980.
4. Loui, R., Acceptance and non-monotonicity in AI, Commun. and Cognition-AI 4 (1987).
5. McCarthy, J. and Hayes, P., Some philosophical problems from the standpoint of artificial intelligence, in: Machine Intelligence 4 (Edinburgh Univ. Press, Edinburgh, 1969); reprinted in: B. Webber and N. Nilsson (eds.), Readings in Artificial Intelligence (Tioga, Palo Alto, CA, 1981).
6. Moore, R., The role of logic in knowledge representation and commonsense reasoning, in: Proceedings AAAI-82, Pittsburgh, PA, 1982.
7. Thomason, R., The context-sensitivity of belief and desire, in: Proceedings AAAI/CSLI Workshop on Actions and Plans, Timberline, 1986.