Problems of normative and descriptive decision theories

Anatol Rapoport

University of Toronto, Toronto, Ontario M5S 1A1, Canada

Mathematical Social Sciences 27 (1994) 31-47

Communicated by M.D. Intriligator
Received 1 June 1993
Revised 26 July 1993
Abstract

In the natural sciences, particularly in the physical sciences, normative theories deal with behavior of matter in idealized environments, such as vacua, frictionless surfaces, infinitely dilute solutions, etc. Thus a direct connection is established between theories of how things would be in idealized situations and how things are in reality. In the social sciences, particularly in economics and its adjoint, decision theory, this connection cannot be made, because real situations do not usually approximate idealized ones. Characteristic of the history of normative decision theory are frequent discoveries of paradoxes, which cannot be simply explained as violations of rationality by decision-makers. The resolution of these paradoxes often entails an enlargement of the repertoire of concepts. Most instructive is the resulting bifurcation of the concept of rationality into 'individual' and 'collective' rationality, which prescribe different courses of action to a 'rational' actor. When decision theory is enriched in this way, the relation between normative and descriptive decision theory can be established on firmer grounds. In particular, an ethical dimension enters decision theory perforce.

Key words: Normative decision theory; Descriptive decision theory; Paradox; Individual rationality; Collective rationality
If one were to encapsulate the principal concern of philosophers, I suppose one could do this by expressing it in the question 'What is X?' That is, philosophy appears to be concerned with the 'essences' of things. The philosopher keeps asking questions like 'What is life? What is virtue, truth, wisdom...?' And so on. Among these questions is 'What is knowledge?' After some reflection, this question seems to belong to 'metaphilosophy' rather than to philosophy proper, since it represents a concern not so much with X in 'What is X?' as with the nature of the question itself, its origin and its consequences. The so-called philosophy of science, which could also be called a metaphilosophy, poses a sort of challenge to the philosopher. Central to this challenge are two questions: 'What do you mean?' and 'How do you know?' An answer to the first question spells out the criteria of a definition that confers legitimacy on questions in the form 'What is X?' Answers to the second question
spell out the criteria that admit the question to the area of concern of science; that is, they spell out the criteria of truth. The challenge is most pronounced in the direction in the philosophy of science first called logical positivism, then logical empiricism. The criterion of meaningfulness adopted in that direction was embodied in the so-called 'operational definition'. To make a term or phrase meaningful, the operational definition spells out the operations that one must perform in order to bring to light some invariant that justifies the use of the term or phrase. Often such a definition necessitates replacing the original question 'What is X?' by another, reflecting the invariant revealed in the course of the operations.

An example from physics will make this clear. Take the question 'What is friction?' An operational definition may take the following form. Imagine a body being dragged across a horizontal surface. Suppose the weight of the body is varied by adding weights to it. You will find that in order to drag the body at a small constant speed, a certain force is required that can be registered on a spring balance. As the weight is changed, so is the force. But the ratio of the force to the weight will remain constant. This constant is called the coefficient of friction. Note that this answer to the question 'What is friction?' does not say what friction is but shows how the question is to be answered, whereby an observed invariant (the ratio of force to weight) justifies giving a phenomenon a name.

Some aspect of measurement always enters definitions of terms occurring in physics, because mature physics has become a thoroughly mathematicized science. But measurement is not an essential component of the operational definition. The revelation of an invariance is. Thus one could define the colour 'red' as follows. On some intersections of streets you will see lights that go on and off. These changes are accompanied by changes in the direction of the traffic flow. Traffic on one street sometimes stops, while the traffic on the street crossing it moves. Note the light associated with the standing traffic. Next, observe trucks moving rapidly and making a lot of noise. Many of them have ladders mounted on them. They are called 'fire engines'. When you cut your finger, observe the substance that comes out of the wound. Observe ripe tomatoes. Can you see something that all these things have in common? It is called 'the colour "red"'.

There is, of course, a scientific definition of 'red', referring to a wavelength of light; but it is merely a more precise definition of the same thing. Note that it, too, refers to an invariant, namely to a range of wavelengths. In a way it is a better definition, because it provides a way of resolving disputes about whether something is or is not 'really' red. So the question of 'essence' is, in a way, embodied in the operational definition. But this essence is not discovered but established by agreement, an important point to which we will return.
Establishing criteria for satisfactory answers to the question 'How do you know?', the logical empiricist accepts the traditional distinction between deduction and induction and emphasizes their complementarity in scientific cognition, spelled out in textbook descriptions of the so-called 'scientific method'. Results of observations are generalized in a hypothesis. Consequences of the hypothesis are established by deduction. Confirmation of these consequences establishes the degree of credibility of the hypothesis. Disconfirmation necessitates either a modification of the hypothesis or its rejection. The process defines a spiral that represents an advance toward 'truth'. So much for the question 'What is truth?', which Pontius Pilate is said to have posed to Jesus.

Note that this complementarity of induction and deduction represents the resolution of an age-old conflict between the rationalist and empiricist directions in the theory of knowledge. Plato was the most pronounced partisan of deduction and spurned induction. He is said to have advised his disciples to ignore the evidence of the senses. Looking at the sky, he is said to have insisted, will not teach anyone anything about the stars. One must look within oneself to discover their essence. Presumably, from the knowledge of the 'essence' of heavenly bodies, namely that they are divine, therefore perfect, the nature of their motions could be deduced. They must move in circles, since the circle is the perfect figure. One supposes that the idea of planets moving in ellipses, with the Sun at a focus rather than at the centre, would have been dismissed by Plato as absurd. The example shows the bizarre results to which disdain for reliance on the evidence of the senses can lead. Yet is not all of mathematical knowledge derived from deduction alone? Is it not the case that empirical verification is neither necessary nor sufficient to prove a mathematical proposition?

Plato's extreme form of rationalism has been matched by equally extreme empiricism. One Alexander Bryan Johnson, an amateur philosopher (a banker by profession) who wrote in the 1830s, showed the same disdain for 'theory' and for deductive proof that Plato showed for observation. In his book, A Treatise on Language, Johnson discusses the report of a traveller that the tides on a certain island, instead of occurring fifty minutes later each day as they did everywhere else, always occurred at the same times: at midnight, at 6 a.m., at noon, and at 6 p.m. The story stirred up much discussion. Many refused to believe it. Johnson failed to see what all the excitement was about. If the tides do not behave in accordance with Newton's theory, he wrote, so much the worse for the theory. A theory that does not fit the facts should be scrapped.

Johnson's discussion of the Achilles Paradox is even more peculiar. The Greeks worried about the 'proof' that Achilles, who runs ten times faster than the Tortoise, can never overtake it if the Tortoise gets a head start. For by the time Achilles traverses the length of the head start, the Tortoise is one tenth of that distance ahead; by the time he has traversed that distance, the Tortoise is still ahead by one hundredth of it, and so on ad infinitum. Johnson refutes the argument by pointing out that an interval of, say, one-millionth of an inch is not perceivable by the senses, hence does not exist, and that any argument referring to such a distance is therefore nonsensical.
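
The modern resolution of the paradox, for contrast with Johnson's, takes one line of arithmetic (the notation is mine, not the paper's). With a head start $h$ and Achilles running ten times as fast as the Tortoise, the infinitely many intervals he must traverse sum to a finite distance,
$$h + \frac{h}{10} + \frac{h}{100} + \cdots \;=\; h\sum_{k=0}^{\infty} 10^{-k} \;=\; \frac{10h}{9},$$
so the infinite regress describes only the finite stretch of track at the end of which Achilles draws level.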

One need not go to antiquity or to the writings of amateur philosophers to find instances of extreme 'loyalty' to either the deductive or the inductive mode of cognition. Galileo, who is usually pictured as the champion of the experimental method, was really a Platonist. He did not demonstrate the independence of gravitational acceleration from the mass of a falling body experimentally. He assumed that this was so. Had he approached the study of falling bodies empirically, he would have had to give preference to Aristotle's theory, namely that the rate of falling is proportional to the weight. For the vast majority of falling bodies that we observe are raindrops, and their observed rate of fall (which is actually their terminal velocity) is indeed very nearly proportional to their volume, hence to their weight, for that is the way falling spheres behave in a medium where resistance is proportional to velocity. On at least one occasion, Galileo showed his true Platonist colours. Having (erroneously) deduced that the period of a pendulum is independent of its amplitude, he reported testing this result by timing similar pendulums at different amplitudes. Another early experimenter tried it and got a different result: the period increased with amplitude. He wrote to Galileo about it. Galileo simply ignored this information.

At the other extreme are mathematicians of the so-called intuitionist school, who have sought to put limits on mathematical deduction by insisting that only constructive proofs are valid; that is, they effectively excluded concepts defined in terms of imagined infinite processes. For instance, many intuitionists rejected the axiom of choice, on which much of the classical theory of the real number system is based.

Emphasis on deduction and induction characterizes, respectively, normative and descriptive theories. Roughly, a normative theory purports to say how things ought to be or would be in certain idealized situations. A descriptive theory purports to say how things are under certain specified conditions. In the natural sciences, a normative theory emphasizes 'would'; in the social sciences it emphasizes 'should'. For example, a normative theory of falling bodies says how bodies would fall in a vacuum or how a perfect gas would behave. A descriptive theory of falling bodies describes how actual bodies fall in actual environments. A normative decision theory tells how a 'rational' actor ought to decide in certain precisely defined situations involving choice of actions or alternatives. A descriptive decision theory purports to describe how real actors behave or, at times, to predict how they will behave in situations that can supposedly be described with sufficient precision. Normative theories emphasize deduction; descriptive ones induction.

It would seem that an amalgam of normative and descriptive approaches would enrich any theory. And indeed the marked successes and advances of science during the last few centuries can be attributed to this integration of the two approaches. One would, therefore, think that the same sort of integration would result in a flowering of decision theory. However, the situation in decision theory is different from that in the natural sciences. What made the amalgam of normative and descriptive theories in the natural, particularly the physical, sciences fruitful was that the ideal conditions underlying normative theories could actually be approximated.

In the behavioural sciences this is by no means the case. Furthermore, the fundamental concepts underlying the natural sciences, especially the physical sciences, are derived from what we can with a certain confidence identify as the 'building blocks of reality'. Most of us are convinced that there is such a thing in nature as, say, matter or energy, and that at least with respect to a particular observer the distance or the time interval between two events can be defined in such a way that it will appear the same to all observers in the same coordinate system of time and space. There are no comparable concepts in the sciences dealing with human behaviour. There are at most statistical regularities. The concepts that underlie theories relevant to the social sciences were invented rather than discovered.

Let us begin by building a normative decision theory from the ground up, as it were. As has been said, a normative decision theory purports to say how a 'rational' actor ought to behave in certain sufficiently precisely specified decision situations. The first task of such a theory, therefore, is to define a 'rational actor'. We can begin with a 'folk' definition. In common parlance, a rational actor is one who takes into account the possible consequences of his actions. To make the definition precise, we need to distinguish the conditions in which the actor finds himself.

The simplest condition characterizes decisions under certainty. The certainty refers to the rigid one-to-one correspondence between the available alternatives and the consequences of choosing them. The situation is depicted in Fig. 1. The Ai represent the available alternatives, the Oi the resulting outcomes. If the actor can rank order the outcomes in accordance with his preferences, this rank ordering will impose a rank ordering on the alternatives. The rational actor can then state a strategy - a contingent plan of action: 'I will choose alternative Ai, which leads to my most preferred outcome Oi; if for some reason Ai is blocked, I will choose Aj, which leads to my next most preferred outcome Oj; and so on.' This is all there is to a normative theory in this simplest context. However, the relation of the normative theory to a descriptive or predictive theory of the same situation is far from simple.

Fig. 1. Actions and outcomes in one-to-one correspondence.

Note first that in stating the 'solution' of the decision problem as a prescription to a 'rational' actor, implicit assumptions were involved, namely that the preference relation that produced the actor's rank ordering of the outcomes was asymmetric and transitive. An asymmetric preference relation implies that if alternative Ai is strictly preferred to Aj, then Aj cannot be preferred to Ai. A transitive preference relation means that if Ai is preferred to Aj and Aj is preferred to Ak, then Ai must be preferred to Ak.

In real life, however, violations of asymmetry and transitivity are frequently observed. They can sometimes be explained by taking into account multiple aspects of the outcomes. For example, if the alternatives represent decisions to buy car Ai, Aj, or Ak, and the corresponding outcomes represent ownership of the car bought, then the decision may be based on paired comparisons with respect to, say, price, safety, and fuel economy, whereby superiority in two of the three criteria defines the preference in a paired comparison. Then it is easy to show that in some commonly occurring situations Ai can be preferred to Aj, which can be preferred to Ak, which can be preferred to Ai. (For instance, if the price ranking is Ai > Aj > Ak, the safety ranking Aj > Ak > Ai, and the fuel-economy ranking Ak > Ai > Aj, every paired comparison is decided by two criteria against one, and the cycle results.) This raises the question whether violations of this sort should be attributed to factors not taken into consideration or to 'irrationality' of the decision-maker. Questions of this sort require methods of investigation that have been developed independently of a normative theory. In fact, the more successful these methods are, the more vague the very concept of rationality becomes. In the extreme case it may become altogether useless theoretically, if every violation can be explained by aspects of the situation previously not taken into consideration. For then every decision-maker can appear to be 'rational', and the hypothesis to the effect that a particular decision-maker is rational becomes unfalsifiable and hence theoretically sterile.

This 'dissolution' of the concept of rationality is particularly prominent in the theory of utility. Recall the early formulation of the theory by Daniel Bernoulli, the discoverer of the so-called St. Petersburg Paradox. The paradox arises in the course of the game in which the gambler is paid, say, 2^n kopeks if a fair coin falls heads exactly n times before 'tails' appears for the first time. The problem is to determine the maximum sum the 'rational' gambler should be willing to pay for the privilege of playing this game. If it is assumed that the 'rational' gambler is one who maximizes his expected gain, one is forced to the conclusion that he should be willing to pay any finite amount, since his expected gain is infinite. Bernoulli resolved the paradox by introducing the concept of a utility function, the expectation of which is maximized by the 'rational' gambler instead of the expected monetary gain. It turns out that for many concave functions, i.e. functions of monetary gains that increase with the amount of money at a decreasing rate, the expected utility of the game is finite, so that the maximum sum worth paying for the privilege of playing is finite as well.
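
The arithmetic behind the paradox and its resolution is easy to check in a few lines. The sketch below is mine, not the paper's; it uses Bernoulli's logarithmic utility as the concave example, where the text says only 'concave'.

```python
import math

# St. Petersburg game: the gambler receives 2**n kopeks when a fair coin
# falls heads exactly n times before the first tail, an event that has
# probability (1/2)**(n + 1).

def partial_expected_money(terms):
    # Each term equals 1/2, so the partial sums grow without bound.
    return sum((2 ** n) * 0.5 ** (n + 1) for n in range(terms))

def expected_log_utility(terms):
    # Bernoulli's resolution: the expected *utility* of the prize converges.
    return sum(math.log(1 + 2 ** n) * 0.5 ** (n + 1) for n in range(terms))

print(partial_expected_money(100))   # 50.0 -- each added term contributes 1/2
print(expected_log_utility(100))     # about 1.18 -- already converged
```

For many concave functions, as the text says, the certainty equivalent of the game, and hence the maximum 'rational' entry fee, is finite.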

The task of a descriptive theory to be adjoined to the normative one is now to investigate the utility functions of decision-makers. This problem is beset with formidable difficulties. First, it is necessary to establish conditions that insure the existence of such a function governing the decisions of the decision-maker whose utility function one is trying to determine. These conditions are quite stringent. We have already seen that even in situations depicted as decisions under certainty, where an ordinal scale of preferences suffices, one must establish the existence of such an ordinal scale, which implies asymmetry and transitivity. In situations under risk, such as gambles, the requirements are more stringent. To define expected utility one must establish a utility scale at least as strong as an interval scale. Conditions guaranteeing the existence of such a scale were listed by Von Neumann and Morgenstern (see Luce and Raiffa, 1957, p. 23ff). They are hardly ever satisfied in any but the most trivial decision situations. They involve a certain consistency in choosing between alternatives associated with risky outcomes.

Even if a utility scale can be established, other problems arise. A class of such problems has been subsumed under the so-called Allais Paradox. The actor is offered a choice between the following alternatives.

A1: $1,000,000 with certainty.

B1: A lottery ticket that wins $5,000,000 with probability 0.10, or $1,000,000 with probability 0.89, or $0 with probability 0.01.

Note that although the expected monetary gain of B1 = (0.10)($5,000,000) + (0.89)($1,000,000) + (0.01)($0) = $1,390,000 is greater than that of A1 = $1,000,000, a preference for A1 cannot be regarded as 'irrational', since it may simply reflect the actor's concave utility function for money. That is, for the actor who prefers A1, the utility of an increase in the amount of money won is less than proportional to the amount of the increase. On the other hand, an actor whose utility for money is linear or convex in money will prefer B1. We assume that the shape of an actor's utility function for money reflects neither rationality nor irrationality; so we cannot impute irrationality to an actor no matter which alternative he prefers.

Next we offer the same actor a choice between the following alternatives.

A2: $1,000,000 with probability 0.11 or $0 with probability 0.89.

B2: $5,000,000 with probability 0.10 or $0 with probability 0.90.

It is conceivable (and has actually been observed) that some actors will choose A1 in preference to B1 and also B2 in preference to A2. There are also actors who choose B1 in preference to A1 and A2 in preference to B2. These pairs of choices can be shown to violate the principle of maximizing expected utility, whereby the violations cannot be explained by the shape of the utility functions. To see this, imagine a concrete representation of the four lotteries.

A1 is represented by 100 tickets, each of which wins $1,000,000.

B1 is represented by 100 tickets, of which tickets 1-10 win $5,000,000, tickets 11-99 win $1,000,000, and ticket 100 wins nothing.

A2 is represented by 100 tickets, of which 1-11 win $1,000,000 and 12-100 win nothing.

B2 is represented by 100 tickets, of which 1-10 win $5,000,000 and 11-100 win nothing.

Consider the choice between A1 and B1. If a ticket from 11 to 99 is drawn, the actor wins $1,000,000 regardless of whether he chose A1 or B1. Therefore with respect to these tickets the two lotteries are identical. Hence tickets 11-99 can be ignored in arriving at a decision. The difference between the two lotteries is reflected only in tickets 1-10 and in ticket 100. Redefining the lotteries in terms of these tickets, we obtain the following picture.

A1 is represented by 11 tickets, each of which wins $1,000,000.

B1 is represented by 11 tickets, of which 10 win $5,000,000 and one wins nothing.

Again the preference of A1 over B1 or vice versa has no bearing on the actor's rationality. A risk-averse actor may choose A1, preferring $1,000,000 with certainty to a 10:1 chance of winning $5,000,000 or nothing. A risk-prone actor is likely to have the opposite preference.

Now consider alternatives A2 and B2. They are identical with respect to tickets 12-100. Accordingly, these tickets can be left out of consideration. The difference is reflected in tickets 1-11. Thus, we have again essentially two lotteries with 11 tickets each.

A2: Every one of the 11 tickets wins $1,000,000.

B2: 10 of the 11 tickets win $5,000,000, and one ticket wins nothing.

Therefore the actor who chooses A1 over B1 must, if he is consistent, choose A2 over B2 (which, if the irrelevant tickets are ignored, is exactly the same choice). Surely, then, choosing A1 over B1 and B2 over A2, or B1 over A1 and A2 over B2, is evidence of inconsistency. If we regard consistency as a component of rationality, we must admit that the actor who chooses inconsistently chooses irrationally.

It is noteworthy that L.J. Savage, author of a widely influential book, The Foundations of Statistics, chose inconsistently when offered the above pair of choices, and when the inconsistency was pointed out to him, changed his mind. He may have felt compelled to do so, since one of the postulates that underlie his theory of rational decision prescribes consistency of choices in just such situations. Savage's axiom can be subsumed under the so-called principle of independence from irrelevant alternatives. (In this case we have a representation of irrelevant outcomes.) As we shall see below, violation of this principle reveals most vividly the gulf that separates normative from descriptive decision theory.
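
The ticket argument can be put algebraically: for any utility function u, the differences EU(B1) - EU(A1) and EU(B2) - EU(A2) are the same number, namely 0.10u($5,000,000) + 0.01u($0) - 0.11u($1,000,000), so a consistent maximizer of expected utility must rank both pairs the same way. A minimal sketch checking this numerically (the helper function and the sample utilities are mine):

```python
import math

# The four Allais lotteries as (probability, prize) pairs.
A1 = [(1.00, 1_000_000)]
B1 = [(0.10, 5_000_000), (0.89, 1_000_000), (0.01, 0)]
A2 = [(0.11, 1_000_000), (0.89, 0)]
B2 = [(0.10, 5_000_000), (0.90, 0)]

def expected_utility(lottery, u):
    return sum(p * u(x) for p, x in lottery)

# Linear, concave, or otherwise, the two differences coincide, so choosing
# A1 over B1 while choosing B2 over A2 violates expected-utility maximization.
for u in (lambda x: x, math.sqrt, lambda x: math.log(1 + x)):
    d1 = expected_utility(B1, u) - expected_utility(A1, u)
    d2 = expected_utility(B2, u) - expected_utility(A2, u)
    print(round(d1, 9), round(d2, 9), math.isclose(d1, d2))
```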

First, however, let us turn to situations that illustrate the ambiguity of the concept of rationality itself. Such situations are analysed in the theory of games. The theory of games can be naturally regarded as an extension of decision theory to situations in which more than one actor participates, whereby the interests of the participants do not in general coincide. In the simplest case there are two actors with diametrically opposed interests. Such situations are modelled by the so-called two-person, constant-sum games or, because the origin of the utility scale can be chosen arbitrarily, by two-person, zero-sum games. In these games, the outcomes of the actors' decisions are called payoffs, whereby what one actor (now called 'player') wins, the other loses.

A two-person game can be represented by a matrix in which the rows represent the choices of one of the players, whom we will call Row, while the columns represent the choices available to the other player, whom we will call Column. Each cell of the matrix represents an outcome of the game, resulting from independent choices by Row and Column. The entry in each cell is a pair of numbers representing, respectively, the payoffs to Row and to Column. A rational choice is defined as one that maximizes the player's payoff or, in certain cases, his expected payoff under the constraint that the other player is guided by the same principle. Thus, 'rationality' in this case and in the theory of games generally implies ascribing rationality to everyone concerned.

Two-person, zero-sum games are those in which the payoffs of one of the players in each outcome are numerically equal to those of the other but are of the opposite sign. Thus, the sum of the payoffs in every outcome is zero. It follows that it suffices to designate each cell of a zero-sum game matrix by a single number, since the payoff of the other player is thereby determined. By convention these entries are payoffs to Row.

Of theoretical significance is the distinction between games with a 'saddlepoint' and those without. A saddlepoint is an entry in the game matrix that is both a minimum in its row and a maximum in its column. It is shown in the normative theory of the two-person, zero-sum game that a saddlepoint represents a 'rational' outcome in the sense that it awards each player the maximum payoff that he can achieve under the constraints represented by the definition of rationality, namely the requirement that each player regards the other as rational in the same sense. This finding defines a prescription to each player: choose a strategy (in this case a row or a column of the game matrix) that contains a saddlepoint among its outcomes. If both players do this, the outcome is a saddlepoint. It is also shown that if the game matrix contains more than one saddlepoint, the choice of a saddlepoint strategy by each player will always result in a saddlepoint outcome, and the payoff to each player will be the same regardless of which saddlepoint strategy is chosen. A saddlepoint can thus be seen to represent a sort of equilibrium. Neither player can improve his payoff by choosing a different row (or column) while the other player does not shift. It is in this sense that each player does the best he can by choosing a strategy containing a saddlepoint.
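
Searching for a saddlepoint is mechanical: an entry must be the minimum of its row and the maximum of its column. A minimal sketch (the function and the sample matrix are mine; the paper's matrices are not reproduced in this text):

```python
def saddlepoints(matrix):
    # Entries are payoffs to Row in a two-person, zero-sum game.
    return [(i, j)
            for i, row in enumerate(matrix)
            for j, a in enumerate(row)
            if a == min(row) and a == max(r[j] for r in matrix)]

print(saddlepoints([[4, 2, 3],
                    [1, 0, 6]]))   # [(0, 1)]: the 2 is its row's minimum
                                   # and its column's maximum
```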

Some two-person, zero-sum games do not have saddlepoints. Hence the above prescription cannot be made. The concept of equilibrium can, however, be generalized so as to apply to these games. In fact, a fundamental theorem on two-person, zero-sum games (later extended to all two-person games represented by finite matrices) states that every such game has at least one equilibrium - an outcome from which neither player is motivated to shift, since a unilateral shift cannot improve his payoff. In this case we must speak of the expected payoff, because the outcome is only probabilistically determined. The equilibrium theorem can be justly regarded as a fundamental theorem of game theory. The existence of equilibria suggests straightforward prescriptions, as in the case of the two-person, zero-sum game, based on the principle of maximizing expected payoffs under the constraints of others' efforts to do likewise. Equilibria also suggest predictions of outcomes based on the dynamics of competitive pursuits of interests, as in macroeconomics.

We will illustrate the equilibrium in the case of a simple two-person game without a saddlepoint, represented by Matrix 1 (Fig. 2). Row chooses S1 or T1; Column chooses S2 or T2. The entries are payoffs to Row. Payoffs to Column are the same with opposite signs. Looking at the game from Row's point of view, we must decide between S1 and T1. Clearly S1 is preferable if Column chooses S2. But if Column chooses T2, T1 is preferable. Suppose Row decides to 'play safe', i.e. to minimize his loss in case he guesses wrong about Column's choice. Then he should choose S1. But if he assumes that Column is rational, he must ask himself how Column would choose if he knew that he, Row, chose S1. In that case Column would choose T2. But if Column chooses T2, then clearly T1 is Row's better choice. But if Column follows Row's reasoning, he will choose S2, which again makes S1 the better choice. Column reasons along similar lines, and both get into a vicious cycle.

The concept of mixed strategy shows a way out of the difficulty. Suppose Row uses a random device to choose his strategy, one which prescribes the choice of S1 with probability 3/4 and T1 with probability 1/4. Then Row's expected payoff is the same regardless of whether Column chooses S2 or T2 or any probabilistic mixture of the two. Similarly, if Column chooses S2 with probability 7/12 and T2 with probability 5/12, his expected payoff will be independent of Row's choice.
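
Matrix 1's entries are not reproduced in this text. The entries below are a reconstruction of mine, chosen only because they are consistent with every number the passage quotes (Row's mixture 3/4 : 1/4, Column's 7/12 : 5/12, and the game value 3/4); the paper's own entries may differ.

```python
from fractions import Fraction as F

# Hypothetical Matrix 1, payoffs to Row; rows are S1, T1, columns S2, T2.
M = [[F(2), F(-1)],
     [F(-3), F(6)]]

p = F(3, 4)   # probability with which Row plays S1
q = F(7, 12)  # probability with which Column plays S2

# Row's mixture equalizes his expected payoff against either column:
print(p * M[0][0] + (1 - p) * M[1][0],   # against S2: 3/4
      p * M[0][1] + (1 - p) * M[1][1])   # against T2: 3/4

# Column's mixture equalizes Row's expected payoff across either row:
print(q * M[0][0] + (1 - q) * M[0][1],   # from S1: 3/4
      q * M[1][0] + (1 - q) * M[1][1])   # from T1: 3/4
```

Since neither player can gain by deviating unilaterally, this pair of mixtures is the equilibrium, with value 3/4 to Row and -3/4 to Column.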

Fig. 2. Matrix 1.


In sum, Row can guarantee himself an expected payoff of 3/4, and Column can guarantee himself an expected payoff of -3/4. This outcome represents the 'solution' of the decision problem represented by the game.

In a zero-sum game the principle of equilibrium coincides with two other principles of decision, namely the sure-thing principle and the maximin principle. The sure-thing principle prescribes the choice of a strategy (if such exists) which yields a larger payoff than any other strategy regardless of the co-player's choice. Such a strategy is called a dominating strategy. Not every game has a dominating strategy. A maximin strategy is one that guarantees the 'best of the worst' payoff. For example, in the game represented by Matrix 1, Row's maximin strategy is S1 and Column's is S2. In a game with saddlepoints, equilibrium strategies coincide both with sure-thing strategies (if such exist) and with maximin strategies. In a game without a saddlepoint, maximin strategies do not, in general, coincide with equilibrium strategies. Thus, we see that the equilibrium is a more general concept than either the sure-thing principle or the maximin principle. The existence of equilibria in all matrix games suggested the equilibrium principle as the unifying concept and thus the prescription of equilibrium strategies, pure or mixed, in every such matrix game, whether zero-sum or non-zero-sum.

We will examine a well known non-zero-sum game called Chicken, represented by Matrix 2 (Fig. 3). Since the game is non-zero-sum, both payoffs have to be entered in each cell of the matrix. We note that outcomes C1D2 and D1C2 are equilibria in the sense that neither player is motivated to shift to his other strategy if the other player does not. However, the equilibrium at C1D2 favours Column, while the equilibrium at D1C2 favours Row. Therefore the prescription of choosing a strategy that contains an equilibrium may be interpreted differently by the players. Row may choose D1 so as to effect D1C2, the equilibrium he prefers, while Column may choose D2 for the same reason. The outcome will then be D1D2, which is the worst outcome for both players. There is also a mixed strategy equilibrium in this game, which results if each player chooses strategy C with probability 10/11 and D with probability 1/11. The associated expected payoff of each player will then be zero. However, this expected payoff is less than the payoff associated with outcome C1C2, which awards +1 to each player. C1C2 is also the outcome that results if each player chooses his maximin strategy. Unfortunately, C1C2 is not an equilibrium. Each player is motivated to shift (provided the other does not shift). But if both shift to D, the worst outcome, D1D2, obtains.
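
The numbers quoted for Chicken (a mixed equilibrium at C with probability 10/11, expected payoff zero, and +1 each at C1C2) are consistent with the payoffs below; this is a reconstruction of mine, not necessarily the paper's Matrix 2.

```python
from fractions import Fraction as F

# Hypothetical Chicken payoffs as (Row, Column) pairs.
payoff = {('C', 'C'): (1, 1),     ('C', 'D'): (-10, 10),
          ('D', 'C'): (10, -10),  ('D', 'D'): (-100, -100)}

p = F(10, 11)  # probability with which the co-player chooses C

def expected_row_payoff(move):
    return p * payoff[(move, 'C')][0] + (1 - p) * payoff[(move, 'D')][0]

# At the mixed equilibrium each player is indifferent between C and D,
# and the expected payoff is 0 -- less than the 1 available at C1C2.
print(expected_row_payoff('C'), expected_row_payoff('D'))  # 0 0
```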

Fig. 3. Matrix 2: The game of Chicken.

Fig. 4. Matrix 3: Prisoner's Dilemma.

Thus, it is not clear what a normative theory would prescribe in this case. And consequently it is not clear what the 'rational' choice is. The concept of 'rationality' becomes ambiguous.

The ambiguity becomes even more pronounced in the best known non-zero-sum game, called Prisoner's Dilemma, illustrated by Matrix 3 (Fig. 4). Note that in this game strategy D of both players dominates strategy C in the sense that D is the better choice regardless of the strategy chosen by the co-player. Hence the sure-thing principle prescribes strategy D. Next, note that the maximin principle likewise prescribes D, since the worst payoff associated with D is -1, while the worst payoff associated with C is -10. Finally, note that outcome D1D2 is the only equilibrium of the game. In sum, the three different principles all prescribe D. But if each player chooses D, both are worse off than they would be if each chose C.

In spite of these difficulties, many game theoreticians insist that any outcome of a non-cooperative game should be an equilibrium. A non-cooperative game is one in which the players have no opportunity to make binding agreements on the strategies they will choose. If they do have such an opportunity, it stands to reason that only Pareto-optimal outcomes should be regarded as rational outcomes. A Pareto-optimal outcome is one that cannot be improved upon from the point of view of both (or all) players. Thus in Prisoner's Dilemma the outcomes C1C2, C1D2, and D1C2 are all Pareto-optimal. For instance, no outcome is preferred to C1C2 by both players: it is not possible to improve the payoff of one without impairing the payoff of the other. The same holds for C1D2 and for D1C2. However, outcome D1D2 is not Pareto-optimal (it is also called Pareto-deficient), since outcome C1C2 is preferred to it by both players.

If the players can make a binding agreement, they can agree to choose C1 and C2, respectively, thus assuring a symmetric, Pareto-optimal outcome, which appears to be eminently rational, even though it is not an equilibrium. The question arises whether the recommendation to choose strategy C can be justified in the context of a non-cooperative game or, equivalently, whether the outcome C1C2 can be regarded as a rational outcome in spite of the temptation it induces in each player to 'defect' to D so as to get the largest possible payoff. The tendency to defect is induced also by the fear that if one does not defect while the other does, one will get the worst possible payoff.

The ambivalence can be resolved by the recognition that individual rationality should be distinguished from collective rationality.
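
Returning to Matrix 3, the three prescriptions can be checked mechanically. The payoffs below are a hypothetical matrix of mine, consistent with the worst payoffs quoted in the text (-1 for D, -10 for C); the paper's own entries may differ.

```python
from itertools import product

# Hypothetical Prisoner's Dilemma payoffs as (Row, Column) pairs.
payoff = {('C', 'C'): (1, 1),    ('C', 'D'): (-10, 10),
          ('D', 'C'): (10, -10), ('D', 'D'): (-1, -1)}

# D strictly dominates C for Row (and, by symmetry, for Column):
print(all(payoff[('D', c)][0] > payoff[('C', c)][0] for c in 'CD'))  # True

def is_equilibrium(r, c):
    flip = {'C': 'D', 'D': 'C'}
    return (payoff[(r, c)][0] >= payoff[(flip[r], c)][0] and
            payoff[(r, c)][1] >= payoff[(r, flip[c])][1])

print([cell for cell in product('CD', repeat=2) if is_equilibrium(*cell)])
# [('D', 'D')] -- the unique equilibrium, although (C, C) pays both more.
```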

In the case of Prisoner's Dilemma, individual rationality clearly leads each player to choose D. However, collective rationality leads both of them to choose C, to the advantage of both.

Turning to the results of a great number of experiments with Prisoner's Dilemma, we find that the choice of the C (cooperative) strategy is by no means a rare occurrence. On the contrary, it is observed with moderate to large frequencies in iterated games and even with considerable frequency in games played once. Experiments with these games have been run under a great variety of conditions, and much has been learned about the behaviour of subjects in situations of this sort, namely where individual rationality prescribes one form of behaviour, while collective rationality prescribes another. In particular, the experiments reveal the limits of the concept of rationality derived from the supposed characteristics of homo economicus, for whom maximization of his own utility or of its expectation is the principal guiding principle in arriving at decisions.

The fundamental distinction between normative (or prescriptive) and descriptive decision theory was emphasized especially sharply by Jacob Marschak in his review of a large number of decision-making experiments, in which he points to frequent violations of the asymmetry and transitivity properties of preference relations and suggests their underlying causes (multidimensionality of alternatives, inconsistencies in subjective probabilities, and many other features characteristic of human decision-making). This distinction is much sharper in decision theory than, say, in physics. There normative theory (e.g. of the behaviour of matter) can be regarded as a limiting theory. It purports to say how matter would behave under certain idealized conditions and, what is most important, these conditions can at times be approximated. While there is no such thing in nature as a perfect vacuum, a perfect gas, or a frictionless surface, these can at times be approximated very nearly. Normative decision theory also postulates an idealized, i.e. 'rational', decision-maker. There is no evidence, however, that such a person can be 'approximated' by a living human being. And that is not all. The fundamental concepts of physics are defined with sufficient precision to exclude the sort of glaring paradoxes we observe in decision theory. And we have seen that these paradoxes are generated by the failure to realize that 'rationality' is an ambiguous concept.

In spite of the clear gulf between the results of deduction and induction in constructing theories of decision-making, some theoreticians still take the results of normative decision theory seriously as a foundation of a predictive (as well as a prescriptive) science. The fixation on the equilibrium concept illustrates a fervent hope that branches of social science founded on decision theory, for example economics, political science, and management science, may eventually attain a degree of precision and predictive success comparable with that of the physical sciences. A case in point is a critique of the theory of the n-person cooperative game by Harsanyi and Selten (1988). They write:

...although classical game theory offers a number of alternative solution concepts for cooperative games, it fails to provide a clear criterion as to which solution concept is to be employed in analyzing a real life social situation. Nor does it give a clear answer to the obvious question of why so many different solution concepts are needed.

A brief look at the origins of the theory of n-person cooperative games suggests an answer to the posed question and brings out the fundamental difference between the descriptive and normative approaches, which is adumbrated by the conception of normative theory as a limiting case of descriptive theory. The classical theory of the n-person cooperative game is essentially a theory of allocation of joint gains accruing to groups of decision-makers who have formed a coalition. A coalition is a subset of the set of n players whose members have committed themselves to coordinate their strategies with a view to maximizing their joint payoff. In particular, the grand coalition is the set of all n players who cooperate with this end in view. A central problem in the theory of the cooperative n-person game is how the joint gain of the grand coalition should be distributed among its members.

In attacking this problem, collective rationality is reflected in the assumption that all of the joint gain will be distributed (i.e. nothing will be thrown away). Individual rationality is reflected in the requirement that no individual player's share be smaller than the gain that the individual could have gotten if he had left the grand coalition and played the game 'on his own'. The concept of rationality is further refined by considering, besides the rationality of individual players and the collective rationality of the grand coalition, also the rationality of each subset of players who can potentially form their own coalition, i.e. leave the grand coalition. (This is the 'intermediate' degree of rationality between individual and collective we mentioned above.)

It turns out that these three criteria of rationality are not always compatible. That is to say, the set of imputations (allocations of joint gains that satisfy the collective rationality of the grand coalition, the individual rationality of each member, and the rationality of each subset of the set of players) is often empty. When it is not, it is called the core of the game and is a natural candidate for a rational solution of the allocation problem. But when the core is empty, other solutions must be sought. Of these there is a plethora, and each is based on different criteria of 'rationality', 'equity', or some other theoretical construct with positive connotations. I will mention a few by name, a more detailed discussion being outside the scope of this paper: the Von Neumann-Morgenstern solution, the Shapley value, the bargaining sets of various orders, the kernel, the nucleolus, the proportional nucleolus, the anti-nucleolus, etc.

For both a normative theory and a descriptive theory that aspires to predictive power comparable with that of physical theories, this multiplicity of possible 'solutions' constitutes an embarras de richesses. But it is a boon to empirically oriented descriptive theory. Cooperative n-person games are often acceptable models of real-life situations, for example of problems involving allocating costs or benefits or both among participants in a joint venture or among the various purposes of a public works project, e.g. water management.
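
The incompatibility of the three criteria of rationality is easy to exhibit in the smallest interesting case. The three-player 'majority game' below is my illustration, not an example from the paper: any two players together can secure 1, a single player secures nothing, and the grand coalition secures 1.

```python
from itertools import combinations

def in_core(x, v, n):
    # x is a candidate allocation; v maps a coalition (frozenset) to its value.
    players = range(1, n + 1)
    if abs(sum(x) - v(frozenset(players))) > 1e-9:       # collective rationality
        return False
    return all(sum(x[i - 1] for i in S) >= v(S) - 1e-9   # coalition (and, for
               for r in range(1, n)                      # singletons, individual)
               for S in map(frozenset, combinations(players, r)))

majority = lambda S: 1 if len(S) >= 2 else 0
print(in_core((1/3, 1/3, 1/3), majority, 3))   # False

# No allocation works: the three pairwise constraints x_i + x_j >= 1 sum to
# 2(x1 + x2 + x3) >= 3, i.e. 2 >= 3 -- the core of this game is empty.
```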

Certain standard practices have been developed by managements or bureaucracies concerned with such allocation problems. It is instructive to compare these practices with the various solutions generated by the theory of the cooperative n-person game. Identifying the actual allocation practices with approximations to the different game-theoretic solutions reveals the relative importance of the various criteria of 'rationality' or 'equity' in the minds of the people concerned with establishing the practices and bookkeeping systems determining the allocations, revealing value systems of which the practitioners may not themselves be aware.

The search for a unique solution of a decision problem reflects a belief in 'rationality' as a concept whose meaning is presumably intuitively obvious. This defines its normative context. Or else it reflects an assumption that human beings, if sufficiently informed, are 'rational' in the intuitively obvious sense of classical economic theory (methodological individualism). Exploitation of the multiplicity of solutions reflects a recognition of the relative weights placed on different aspects of stability or equity in various social environments, an important contribution to empirically oriented social science.

Another instructive example of the difference between normative and descriptive decision theory is provided by the reactions to Arrow's famous impossibility theorem in the theory of social choice (Arrow, 1951). The theory of social choice is sometimes subsumed under the theory of collective decision - improperly, I think, because a social choice is not really a collective decision. It is an instance of a multi-objective decision. The actors, in this context 'voters', do not choose among alternatives as in a decision problem involving several decision-makers. They only present rank orderings of a set of alternatives. The solution of a social choice problem is a function that maps each profile of individual preference rankings on a social preference ranking. Thus the problem is analogous to a multi-objective decision problem, where each voter corresponds to some aspect of an object, e.g. the price, or the safety record, or the fuel economy record of a car. The preference ranking corresponds to the scores each object received in comparison with other objects. The social preference ranking represents an amalgam of all the individual rankings.

The normative theory of social choice specifies certain properties that the social choice function should have. For instance, all possible rankings of the alternatives are permitted and are made independently by the voters. The social choice rule should map a unanimous preference ranking upon the same social ranking. There should be no dictator, i.e. a voter whose preference ranking automatically becomes the social ranking. And so on. Arrow's Impossibility Theorem states that certain apparently innocuous criteria of this sort are incompatible. That is, there exists no rule of social choice that satisfies them all. At times this result was interpreted to mean that democracy was impossible.
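
A minimal illustration, mine rather than the paper's, of how innocuous-looking conditions collide: aggregating three perfectly transitive individual rankings by pairwise majority yields an intransitive 'social preference'.

```python
from itertools import combinations

# Three voters, three alternatives; each individual ranking is transitive.
rankings = [['a', 'b', 'c'],   # voter 1: a > b > c
            ['b', 'c', 'a'],   # voter 2: b > c > a
            ['c', 'a', 'b']]   # voter 3: c > a > b

def majority_prefers(x, y):
    return sum(r.index(x) < r.index(y) for r in rankings) > len(rankings) / 2

for x, y in combinations('abc', 2):
    w, l = (x, y) if majority_prefers(x, y) else (y, x)
    print(f'{w} beats {l}')
# Prints: a beats b, c beats a, b beats c -- a cycle, so no transitive
# social ranking respects all three majorities.
```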

The empirical approach starts at the other end, as it were. Instead of listing the desiderata of criteria of democracy a priori, it examines political systems that are regarded by most people as 'democratic' and seeks to discover what they have in common. It is then these observed common features that define 'democracy'. This definition would have some theoretical leverage; for it could then be shown that some of the properties of the social choice function expressed as Arrow's axioms are often absent, which suggests what features of social choice people do not regard as essential features of 'democracy'.

In my opinion the most significant advances of decision theory were made when the theory transcended the dominant paradigm of 'methodological individualism'. The two-person, zero-sum game is still within this paradigm to the extent that its normative theory is still concerned with maximizing an individual utility (or utility expectation), albeit under the constraint that another equally 'rational' actor is attempting to do the same. Still, the Other is assumed to be simply the negative image of the self, i.e. an antagonist with diametrically opposed interests. Once game theory transcended the two-person, zero-sum paradigm, the path of its development was discovered to be strewn with paradoxes, which could be resolved only by broadening the concept of rationality to include collective rationality, whose prescriptions did not in general coincide with those of individual rationality. The theory now had to be grown on the soil of 'dialectic opposition' between the dictates of individual and collective rationality. In addition, gradations in between came to be recognized, for example the rationality of subsets of players in the n-person cooperative game, called coalitions.

It was Jacob Marschak who systematized the intermediate degrees of integration of collectives. The 'game' represents the lowest level. In a non-cooperative game, binding agreements between players are excluded, and rational decisions are defined from the point of view of each individual player only. On the next level is the 'coalition', which defines the collective interest of a group of individuals in 'dialectic opposition' to their individual interests. Then there is the 'foundation' - an entity with aims or goals of its own. Individuals who work for the foundation must serve these goals, which may or may not have any bearing on their individual goals. The foundation represents a higher form of integration because, by definition, if a member does not contribute to its goals, he is excluded from the system. Finally, the 'team' represents the highest form of integration. In a team, the individual interests of each member completely coincide with those of the team.

The question arises whether a team is, formally speaking, identical with an individual. If so, has not the generalization of decision theory gone full circle, beginning with the individual actor and his interests and ending with him? As far as normative theory is concerned, the answer is yes, for in the case of the team the decision problem reduces to a simple optimization, exactly where normative theory began. But the problems of descriptive theory are quite different. Recall that the fundamental question posed by elementary descriptive decision theory is: How does a real live individual actor make decisions? The question is answered by observations on individuals. The same question is posed with regard to the team and is answered in the same way. However, a human team is an actor quite unlike the human individual.

It faces problems of coordination, information processing, and execution quite different from those the individual actor faces. For instance, in making decisions in a stochastic environment, an individual may be guided by estimates of the probabilities of the states of the world. How can a team be guided by such estimates if the estimates differ from member to member? Observe that these differences need not reflect conflicts of interest. (Indeed, if they do, we are not dealing with a team.) But they nevertheless generate complications in the task of describing how a team arrives at decisions (see Marschak and Radner, 1972).

We have defined the task of normative decision theory as that of determining the optimal decisions of rational actors. From the pragmatic point of view, the task could be defined also as that of finding ways of improving the quality of decisions, for example by improving the efficiency of information gathering, by examining limitations imposed by biased preconceptions, and the like. Strictly speaking, these inquiries are not in the domain of decision theory proper, but they are necessary adjuncts to it as components of applied sciences, such as economics, political science, or management science. It is in this area that Jacob Marschak made his most significant contributions to decision theory. In my opinion, any science is enriched when it links the domain of pure thought with the domain of effective action.

References

K.J. Arrow, Social Choice and Individual Values (Wiley, New York, 1951).
J.C. Harsanyi and R. Selten, A General Theory of Equilibrium Selection in Games (MIT Press, Cambridge, MA, 1988).
R.D. Luce and H. Raiffa, Games and Decisions (Wiley, New York, 1957).
J. Marschak and R. Radner, Economic Theory of Teams (Yale University Press, New Haven, 1972).