Mathematical Social Sciences 53 (2007) 123–133
Sharing beliefs about actions

Kin Chung Lo

Department of Economics, York University, Toronto, Ontario, Canada M3J 1P3

Received 6 March 2006; received in revised form 5 December 2006; accepted 8 December 2006. Available online 29 January 2007.
Abstract

We consider a strategic situation in which each player may not know the probability distribution governing the information structures of his opponents, and consequently his beliefs about opponents' action choices are represented by a set of probability measures. Suppose that beliefs of all the players are common knowledge. Then for any subset of players, the marginal beliefs of those players (about the action choices of their common opponents) must share at least one probability measure.
© 2007 Elsevier B.V. All rights reserved.

Keywords: Agreeing to disagree; Belief functions; Common knowledge; Multiple priors; Speculation

JEL classification: C72; D81
1. Introduction

Both agreeing to disagree and speculation are common phenomena, but the former seems to be much more widespread than the latter. Very often agents agree to disagree, especially about what other agents are going to do, but they do not speculate. For instance, in almost any public discussion of current affairs, the agents involved could easily find at least some disagreement on how a political or economic episode will unfold; nevertheless, they may not put their money at stake. A concrete and dramatic real-life example can be found in Greenberg (2000, p. 197), who cites Kissinger's (1982, p. 802) view on the peace talks between Egypt and Israel following the 1973 war. It appeared that the two countries (at least implicitly) agreed to disagree on what the United States would do if negotiations broke down, and that was crucial to the success of the talks.
The above casual observations constitute two paradoxes. First, agreeing to disagree is incompatible with the common prior assumption (Aumann, 1976). Second, if expected utility maximizers agree to disagree, then no matter how small the disagreement is, they will speculate against each other. In this sense, agreeing to disagree, and hence absence of speculation, cannot be explained by standard economic theory.¹

We argue that the probabilistic representation of beliefs is responsible for both paradoxes. Independently, Ellsberg (1961) and related findings (summarized by Camerer and Weber, 1992) demonstrate that, in the presence of ambiguity, the beliefs of an agent are typically not representable by a probability measure. A natural way to accommodate Ellsberg-type behavior is to allow beliefs to be represented not necessarily by just one probability measure, but by a set of them. In fact, "multiple priors" is a key feature of many generalized expected utility models. A prominent example is Gilboa and Schmeidler's (1989) maxmin expected utility, which has also been used to explain absence of speculation. Specifically, Billot et al. (2000) show that there is no Pareto-improving speculation opportunity for risk-averse maxmin expected utility maximizers as long as they "partially agree," in the sense that their beliefs are represented by (possibly different) sets of probability measures with a nonempty intersection.²

The principal contribution of this paper is to establish "agreeing to partially agree" in an interactive situation involving at least three players. In our setup, the source of ambiguity for a player is his ignorance of the probability law governing his opponents' information structures. More concretely, each of the players perfectly understands the random nature of basic uncertainty and his own signal, but he is ignorant about the random nature of his opponents' signals. So the beliefs of each player about his opponents' action choices are represented by a set of probability measures; moreover, the set contains exactly all the measures that are consistent with a belief function (Dempster, 1967; Shafer, 1976). We prove the following result: Suppose that beliefs of all the players are common knowledge. Then for any subset of players, the marginal beliefs of those players (about the action choices of their common opponents) must partially agree.

The result on its own resolves the first paradox. In addition, since the result also says that the players agree to disagree "by the right amount," it can be combined with Billot et al. (2000) to predict absence of speculation. Thus the second paradox is also resolved. We end the paper by relating it to Lo (2006), who uses essentially the same framework to establish (in his Proposition 1) that common knowledge of beliefs implies "full agreement."

¹ Without agreeing to disagree, there are standard explanations for absence of speculation (e.g., Milgrom and Stokey, 1982; Samet, 1998).
² Subsequently, Billot et al. (2002) and Rigotti and Shannon (2005) characterize partial agreement in terms of Schmeidler's (1989) Choquet expected utility and Bewley's (2002) model of incomplete preference, which are variants of the maxmin expected utility model.
the agent, so that he can describe his uncertainty on Ω with a probability measure μ; in contrast, conditional on each ω ∈ Ω, there is no information at all on which answer in Γ(ω) is more likely to be correct. The agent calculates that for any subset Z ⊆ S,

$$\mathrm{bel}(Z) \equiv \sum_{\{\omega \,\mid\, \Gamma(\omega) \subseteq Z\}} \mu(\omega) \tag{1}$$
is the probability that the random subset Γ(ω) is contained in Z. Hence, in the absence of further information about S, he concludes that bel(Z) is the minimum probability that the correct answer lies in Z.

Formally, a belief function on S is any function bel given by Eq. (1), for some finite probability space (Ω, μ) and some multivalued mapping Γ from Ω to S. Obviously, bel is normalized (i.e., bel(∅) = 0 and bel(S) = 1) and monotone (i.e., bel(Z) ≥ bel(Y) for all Y ⊆ Z ⊆ S); Shafer (1976) shows that bel is also convex (i.e., bel(Y) + bel(Z) ≤ bel(Y ∪ Z) + bel(Y ∩ Z) for all Y, Z ⊆ S).³ Given these properties of bel, it follows from Shapley (1971) and Schmeidler (1972) that

$$\mathrm{core}(\mathrm{bel}) \equiv \{q \mid q \text{ is a probability measure on } S, \text{ and } q(Z) \ge \mathrm{bel}(Z) \text{ for all } Z \subseteq S\} \tag{2}$$

is nonempty, and for all Z ⊆ S,

$$\mathrm{bel}(Z) = \min_{q \in \mathrm{core}(\mathrm{bel})} q(Z). \tag{3}$$
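To make Eqs. (1)–(3) concrete, here is a minimal Python sketch. The question spaces, the measure μ, and the mapping Γ below are illustrative choices (not taken from the text); the lower-envelope property in Eq. (3) is checked with an off-the-shelf linear program over the core.

```python
from itertools import combinations
from scipy.optimize import linprog

# Illustrative primitives (hypothetical): a prior mu on Omega and a multivalued
# mapping Gamma from Omega into nonempty subsets of S.
S = ["s1", "s2", "s3"]
Omega = ["w1", "w2", "w3"]
mu = {"w1": 0.5, "w2": 0.3, "w3": 0.2}
Gamma = {"w1": {"s1"}, "w2": {"s1", "s2"}, "w3": {"s2", "s3"}}

def subsets(xs):
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

def bel(Z):
    """Eq. (1): total mass of the states whose Gamma-image is contained in Z."""
    Z = frozenset(Z)
    return sum(mu[w] for w in Omega if Gamma[w] <= Z)

def min_over_core(Z):
    """Right-hand side of Eq. (3): minimize q(Z) over the core defined in Eq. (2)."""
    c = [1.0 if s in Z else 0.0 for s in S]            # objective: q(Z)
    A_ub, b_ub = [], []
    for Y in subsets(S):                                # q(Y) >= bel(Y)  <=>  -q(Y) <= -bel(Y)
        A_ub.append([-1.0 if s in Y else 0.0 for s in S])
        b_ub.append(-bel(Y))
    A_eq, b_eq = [[1.0] * len(S)], [1.0]                # q is a probability measure on S
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * len(S))
    return res.fun

for Z in subsets(S):
    assert abs(bel(Z) - min_over_core(Z)) < 1e-6        # lower-envelope property, Eq. (3)
print("bel is the lower envelope of its core on this example.")
```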
The core of bel defined in Eq. (2) contains all the probability measures on S that are compatible with the information conveyed by bel, and Eq. (3) says that bel is the lower envelope of its core. In this sense, a belief function and its core are equivalent ways of representing the agent's beliefs.

³ In fact, Shafer (1976) proves that belief functions are totally monotone capacities. By definition, all totally monotone capacities are convex, but not vice versa.

3. Framework

The following conventions will be adopted. As in the previous section, use ⊆ for subset; when only proper subset is intended, use ⊂ instead. Let N = {1,…, n}, with n ≥ 3. Index N by either i or j; unless specified otherwise or emphasis is necessary, it is understood that the index varies over all elements in N. For any M ⊂ N, the complement of M is denoted by N \ M. For any collection {Yj}j∈N of finite sets, and for any M ⊂ N, YM ≡ ×j∈M Yj; similarly, for any i ∈ N, Y−i ≡ ×j∈N\{i} Yj. For any finite set X, the set of all probability measures on X is denoted by Δ(X). Finally, we abuse notation in the usual manner: for any element x ∈ X, the symbol x is also used for {x}.

The basic framework is standard. There is a finite set Ω of states, which can be interpreted as the space of basic uncertainty. For each player i ∈ N, there is a finite set Θi of signals. All ex ante uncertainty is reflected in the product space Ω × Θ ≡ Ω × ×i∈N Θi, with generic element (ω, θ) ≡ (ω, (θi)i∈N). Randomness is governed by a probability measure p ∈ Δ(Ω × Θ), in the sense that with probability p(ω, θ), ω is the true state and each player i confidentially receives the signal θi. Player i is endowed with a finite set Si of actions. For each θi ∈ Θi, let γi(θi) ∈ Si be the action taken by player i when he receives the signal θi. For our purpose, it is without loss of generality to assume that each state ω ∈ Ω occurs with positive probability, and likewise for each θi ∈ Θi; that is, Σθ∈Θ p(ω, θ) > 0 for all ω ∈ Ω, and Σω∈Ω Σθ−i∈Θ−i p(ω, θi, θ−i) > 0 for all θi ∈ Θi. Define, for every (ω, θi) ∈ Ω × Θi,

$$p^{\Omega \times \Theta_i}(\omega, \theta_i) = \sum_{\theta_{-i} \in \Theta_{-i}} p(\omega, \theta_i, \theta_{-i}). \tag{4}$$
In words, pΩ×Θi is the marginal of p on Ω × Θi.

Unlike the physical setting, one of our knowledge assumptions is not completely standard. That is, every player i knows only the following two aspects of the probability measure p: (i) the marginal pΩ×Θi of p, and (ii) the support of p. To elaborate, i understands the probability law governing the "fundamentals" and his own information structure; in contrast, except for the possible combinations of signals at every ω, he is completely ignorant about his opponents' information structures. The limited knowledge of p as described here, and everything else described in the preceding paragraph, are common knowledge among the players.

As a simple example, suppose Ω is the set of all possible states of the economy, and firm i is using an econometric model to estimate the true state. Naturally, firm i keeps both its econometric model and the actual estimate confidential, and thus the other firms are ignorant about them. Given its expertise, firm i knows the stochastic nature of the economy and its own model, the possible profiles of estimates at each state, as well as how each firm would respond to every possible estimate. For instance, firm i knows that if every firm receives a good signal, then the economy must actually be good, and each firm will respond by increasing production.

It is useful to further clarify two points. First, knowledge of the support of p essentially means that the players agree on what events are possible (but not necessarily how likely they are). This assumption is not too demanding. In fact, assumptions of this nature are often adopted even when heterogeneity of beliefs (e.g., Morris, 1994, pp. 1329–1330; Stuart, 1997, p. 136) or ambiguity (e.g., Billot et al., 2000, p. 687) is the emphasis. Second, knowledge of the strategy profile (γi)i∈N is implied by the completeness of the state space Ω × Θ; that is, we take the view of Savage (1954) and Aumann (1987) that Ω × Θ is exhaustive, and each (ω, θ) ∈ Ω × Θ represents a resolution of all uncertainty, including the actions taken by the players. (But we do not take their view that the beliefs of a player are necessarily represented by a probability measure.) There is no doubt that the completeness assumption is often strong. Some efforts have been made to relax it, and there is indeed some intriguing relationship between incomplete and ambiguous state spaces (cf. Dekel et al., 1998; Epstein and Marinacci, 2006). But at least in principle, the two concepts are distinct. A firm may be able to think of all hypothetical situations in which each firm will increase production, but it may find those situations ambiguous in terms of likelihoods.

Let us proceed to the interim stage in which player i receives his signal θi; for brevity, call him "player θi" or simply "θi." As θi perfectly understands pΩ×Θi, he assigns to each ω the posterior probability

$$p^{\Omega}(\omega \mid \theta_i) \equiv \frac{p^{\Omega \times \Theta_i}(\omega, \theta_i)}{\sum_{\hat{\omega} \in \Omega} p^{\Omega \times \Theta_i}(\hat{\omega}, \theta_i)}. \tag{5}$$
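As a minimal sketch of this bookkeeping (the measure p, the marginal in Eq. (4), and the posterior in Eq. (5)), the following Python snippet uses a generic dictionary encoding chosen here for illustration; the numbers, names, and the 0-based player index are assumptions, not part of the model.

```python
from collections import defaultdict

# Hypothetical p: maps (omega, theta) to its probability, where theta is the
# tuple (theta_1, ..., theta_n) of signals, one coordinate per player.
p = {
    ("w1", ("a", "x")): 0.25, ("w1", ("b", "y")): 0.25,
    ("w2", ("a", "y")): 0.25, ("w2", ("b", "x")): 0.25,
}

def marginal_on_omega_theta_i(p, i):
    """Eq. (4): sum p(omega, theta_i, theta_{-i}) over the opponents' signals."""
    m = defaultdict(float)
    for (omega, theta), prob in p.items():
        m[(omega, theta[i])] += prob
    return dict(m)

def posterior_on_omega(p, i, theta_i):
    """Eq. (5): player theta_i's posterior over states, built from the Eq. (4) marginal."""
    m = marginal_on_omega_theta_i(p, i)
    denom = sum(prob for (omega, ti), prob in m.items() if ti == theta_i)
    return {omega: prob / denom for (omega, ti), prob in m.items() if ti == theta_i}

print(posterior_on_omega(p, 0, "a"))   # e.g. {'w1': 0.5, 'w2': 0.5}
```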
Also, he deduces from the support of p and his opponents' strategy profile (γj)j∈N\{i} that for each ω, the opponents' action profile at ω lies in

$$\Gamma_{-i}(\omega, \theta_i) \equiv \{s_{-i} \in S_{-i} \mid \exists\, \theta_{-i} \in \Theta_{-i} \text{ such that } p(\omega, \theta_i, \theta_{-i}) > 0 \text{ and } (\gamma_j(\theta_j))_{j \in N \setminus \{i\}} = s_{-i}\}, \tag{6}$$
with no further idea which profile in Γ−i(ω, θi) is more likely to happen. Parallel to Eq. (1), the probability measure pΩ(·|θi) on Ω and the multivalued mapping Γ−i(·, θi) from Ω to S−i imply that, for every Z−i ⊆ S−i,

$$b_i(\theta_i)(Z_{-i}) \equiv \sum_{\{\omega \,\mid\, \Gamma_{-i}(\omega, \theta_i) \subseteq Z_{-i}\}} p^{\Omega}(\omega \mid \theta_i) \tag{7}$$
is the minimum probability that the opponents' action profile lies in Z−i. We call the belief function bi(θi) defined in Eq. (7) the conjecture of θi.⁴

Let M be any subset of N, with cardinality 0 < |M| ≤ n − 2.⁵ For every i ∈ N \ M, every belief function bi on S−i, and every ZM ⊆ SM, define

$$b_i^M(Z_M) = b_i\bigl(Z_M \times S_{N \setminus (\{i\} \cup M)}\bigr). \tag{8}$$

It can be easily verified that b_i^M is a belief function on SM. Therefore, parallel to Eq. (2), the set

$$\mathrm{core}(b_i^M) \equiv \{q \in \Delta(S_M) \mid q(Z_M) \ge b_i^M(Z_M) \ \forall\, Z_M \subseteq S_M\} \tag{9}$$

of probability measures on SM must be nonempty. If bi(θi) = bi, then b_i^M or its corresponding core as defined in Eq. (9) can be interpreted as θi's marginal conjecture regarding the action choices of the players in M.

Finally, our results in the next section require a formal definition of common knowledge. Let Θ+ be the set of signal profiles that occur with positive probability. That is,

$$\Theta^{+} = \{\theta \in \Theta \mid \exists\, \omega \in \Omega \text{ such that } p(\omega, \theta) > 0\}. \tag{10}$$
For any θi ∈ Θi and any Ψ ⊆ Θ, say that θi knows Ψ if {θ̂ ∈ Θ+ | θ̂i = θi} ⊆ Ψ. As player θi is aware of his own signal and the support of p, he is able to exclude any θ̂ ∈ Θ+ such that θ̂i ≠ θi. So θi knows Ψ if the set {θ̂ ∈ Θ+ | θ̂i = θi} of all signal profiles that he thinks possible is contained in Ψ. Define K^1(Ψ) = {θ ∈ Θ | for every i ∈ N, θi knows Ψ}, and then recursively, for every positive integer m, K^{m+1}(Ψ) = K^1(K^m(Ψ)). If θ ∈ K^m(Ψ) for every positive integer m, then say that θ commonly knows Ψ.

4. Results

To further simplify notation, b(θ) ≡ (bi(θi))i∈N; and for any profile b ≡ (bi)i∈N of belief functions, ||b|| ≡ {θ ∈ Θ | b(θ) = b}. According to Proposition 1 below, common knowledge of conjectures implies partial agreement, in the sense that all the relevant marginal conjectures intersect.

Proposition 1. Fix a profile θ⁎ ∈ Θ+ of signals and a profile b of belief functions. Suppose θ⁎ commonly knows ||b||. Then b(θ⁎) = b; and for every M ⊂ N with 0 < |M| ≤ n − 2, ∩i∈N\M core(b_i^M) ≠ ∅.
⁴ Player θi's preference is not our main focus, and therefore will not be explicitly specified. With preferences, our framework can be used to formulate notions such as common knowledge of rationality (cf. Epstein, 1997; Ghirardato and Le Breton, 2000).
⁵ As our ultimate objective is to see whether the players in N \ M agree about the action choices of the players in M, the only consequence of imposing 0 < |M| ≤ n − 2 is that the trivial cases are excluded.
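Since Θ is finite, the knowledge and common-knowledge operators defined above can be computed directly. The following Python sketch uses a generic encoding of Θ+ as a set of signal-profile tuples (the names and the tiny usage example are illustrative assumptions, not taken from the text); it iterates K^1 until the sequence of iterates repeats.

```python
def knows(i, theta_i, Psi, Theta_plus):
    """'theta_i knows Psi': every profile in Theta^+ whose i-th coordinate is theta_i lies in Psi."""
    return all(t in Psi for t in Theta_plus if t[i] == theta_i)

def K1(Psi, Theta, Theta_plus, n):
    """One application of the mutual-knowledge operator K^1."""
    return {t for t in Theta if all(knows(i, t[i], Psi, Theta_plus) for i in range(n))}

def commonly_knows(theta, Psi, Theta, Theta_plus, n):
    """theta commonly knows Psi: theta belongs to K^m(Psi) for every m >= 1."""
    iterates, current = [], K1(Psi, Theta, Theta_plus, n)
    while current not in iterates:          # Theta is finite, so the iterates eventually repeat
        iterates.append(current)
        current = K1(current, Theta, Theta_plus, n)
    return all(theta in Km for Km in iterates)

# Tiny hypothetical usage: two players, two signals each.
Theta = {(a, b) for a in ("u", "d") for b in ("l", "r")}
Theta_plus = {("u", "l"), ("d", "r")}       # support of p on Theta, say
Psi = {("u", "l"), ("d", "r"), ("u", "r")}
print(commonly_knows(("u", "l"), Psi, Theta, Theta_plus, n=2))   # True here
```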
Unlike Aumann (1976), Proposition 1 does not rule out agreeing to disagree. Of course, one could also allow agreeing to disagree by dropping the common prior assumption. But agreeing to disagree among expected utility maximizers must lead to speculation. In contrast, combining Proposition 1 with Billot et al. (2000), if every θ⁎i has a risk-averse maxmin expected utility preference over acts defined on S−i, with the associated set of probability measures equal to core(bi), then common knowledge of conjectures implies that no subset of the players would speculate with each other on the action choices of their opponents.

Our value added is to insert epistemic content into the picture of Billot et al. (2000). They prove that partial agreement is equivalent to absence of speculation opportunity. But (how) can partial agreement actually happen?⁶ Proposition 1 is one step toward providing an answer. Intuitively, the proposition is applicable to scenarios like those described at the beginning of the paper. But we do not have a rigorous argument for how the players reach common knowledge of conjectures.⁷

As a concrete illustration of Proposition 1, consider the following.

• Set Ω = {ω̂, ω′} of states.
• Sets Θ1 = {θU, θD}, Θ2 = {θL, θR}, and Θ3 = {θA} of signals.
• Probability measure p ∈ Δ(Ω × Θ), where

$$p(\hat{\omega}, \theta_U, \theta_L, \theta_A) = p(\hat{\omega}, \theta_D, \theta_R, \theta_A) \tag{11}$$
$$= p(\omega', \theta_U, \theta_R, \theta_A) \tag{12}$$
$$= p(\omega', \theta_D, \theta_L, \theta_A) \tag{13}$$
$$= \tfrac{1}{4}. \tag{14}$$
• Sets S1 = {U, D}, S2 = {L, R}, and S3 = {A} of actions.
• Strategies γ1: Θ1 → S1, where γ1(θU) = U and γ1(θD) = D; γ2: Θ2 → S2, where γ2(θL) = L and γ2(θR) = R; γ3: Θ3 → S3, where γ3(θA) = A.

We use the above specifications to work out in detail, from player 3's perspective, Eqs. (5)–(9). By Eqs. (11)–(14),

$$p^{\Omega}(\hat{\omega} \mid \theta_A) = p^{\Omega}(\omega' \mid \theta_A) = \tfrac{1}{2}. \tag{15}$$
The support of p and the strategies γ1 and γ2 imply

$$\Gamma_{-3}(\hat{\omega}, \theta_A) = \{(U, L), (D, R)\} \quad \text{and} \quad \Gamma_{-3}(\omega', \theta_A) = \{(U, R), (D, L)\}. \tag{16}$$
⁶ There seem to be only informal stories in the literature on how partial agreement happens. For instance, Mukerji and Tallon (2004, p. 287) write: "Thus, the fact that people do not bet against one another on many issues could be interpreted not as evidence that they have the same beliefs but rather that they have vague beliefs about these issues, and that vagueness is sufficiently large to ensure that agents have overlapping beliefs."
⁷ In the setting of Aumann (1976), Geanakoplos and Polemarchakis (1982) show that if the players are allowed to announce and revise their posteriors, then common knowledge of conjectures can be achieved.
By Eqs. (15) and (16), we have for every Z−3 ⊆ S−3,

$$b_3(\theta_A)(Z_{-3}) = \begin{cases} 1 & \text{if } \{(U,L),(D,R),(U,R),(D,L)\} \subseteq Z_{-3} \\ \tfrac{1}{2} & \text{if } \{(U,L),(D,R)\} \subseteq Z_{-3} \text{ and } \{(U,R),(D,L)\} \nsubseteq Z_{-3} \\ \tfrac{1}{2} & \text{if } \{(U,L),(D,R)\} \nsubseteq Z_{-3} \text{ and } \{(U,R),(D,L)\} \subseteq Z_{-3} \\ 0 & \text{if } \{(U,L),(D,R)\} \nsubseteq Z_{-3} \text{ and } \{(U,R),(D,L)\} \nsubseteq Z_{-3}. \end{cases} \tag{17}$$
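A short script can reproduce Eqs. (15)–(17) and the marginal conjectures computed in the next paragraph. This is only a sketch: it hard-codes the example's p and strategies in a hypothetical encoding, and it checks core(b_3^1) = Δ(S1) indirectly by verifying that the marginal belief function assigns zero to both singletons.

```python
from collections import defaultdict

# The example's primitives, hard-coded from Eqs. (11)-(14) and the strategy lists.
Omega = ["w_hat", "w_prime"]
gamma = {1: {"tU": "U", "tD": "D"}, 2: {"tL": "L", "tR": "R"}, 3: {"tA": "A"}}
p = {("w_hat", "tU", "tL", "tA"): 0.25, ("w_hat", "tD", "tR", "tA"): 0.25,
     ("w_prime", "tU", "tR", "tA"): 0.25, ("w_prime", "tD", "tL", "tA"): 0.25}
players = [1, 2, 3]

def posterior(i, theta_i):
    """Eq. (5): posterior on Omega given theta_i, via the Eq. (4) marginal."""
    m = defaultdict(float)
    for (w, *t), pr in p.items():
        if t[i - 1] == theta_i:
            m[w] += pr
    total = sum(m.values())
    return {w: m[w] / total for w in m}

def Gamma_minus(i, w, theta_i):
    """Eq. (6): opponents' action profiles possible at w given theta_i."""
    out = set()
    for (w2, *t), pr in p.items():
        if w2 == w and t[i - 1] == theta_i and pr > 0:
            out.add(tuple(gamma[j][t[j - 1]] for j in players if j != i))
    return out

def conjecture(i, theta_i, Z):
    """Eq. (7): b_i(theta_i)(Z) for a set Z of opponents' action profiles."""
    post = posterior(i, theta_i)
    return sum(post[w] for w in post if Gamma_minus(i, w, theta_i) <= set(Z))

print(posterior(3, "tA"))                                  # Eq. (15): {'w_hat': 0.5, 'w_prime': 0.5}
print(conjecture(3, "tA", {("U", "L"), ("D", "R")}))       # 0.5, the middle case of Eq. (17)
# Marginal on S_1 (Eq. (8) with M = {1}): both singletons get belief 0,
# so core(b_3^1) is all of Delta(S_1).
print(conjecture(3, "tA", {("U", "L"), ("U", "R")}))       # b_3^1(U) = 0
print(conjecture(3, "tA", {("D", "L"), ("D", "R")}))       # b_3^1(D) = 0
# Player 2's marginal conjecture about player 1, by contrast, is the point 1/2-1/2:
print(conjecture(2, "tL", {("U", "A")}), conjecture(2, "tL", {("D", "A")}))   # 0.5 0.5
```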
Let b3 = b3(θA), and calculate the marginal of b3 on S1. It follows from Eq. (17) that b_3^1(U) ≡ b3({(U, L), (U, R)}) = 0 and b_3^1(D) ≡ b3({(D, L), (D, R)}) = 0. Hence core(b_3^1) = Δ(S1). Along the same lines, it can be verified that b1(θU) = b1(θD) ≡ b1, where b1 can be identified with the probability measure assigning 1/2 to each of the two action profiles (L, A) and (R, A); similarly, b2(θL) = b2(θR) ≡ b2, where b2 can be identified with the probability measure assigning 1/2 to each of the two action profiles (U, A) and (D, A).⁸ Obviously, core(b_2^1) contains only the probability measure assigning probability 1/2 to each of the two actions U and D. So core(b_2^1) ∩ core(b_3^1) ≠ ∅, but core(b_2^1) ≠ core(b_3^1). Given that b(θ) = b for all θ ∈ Θ, θ commonly knows ||b|| for all θ ∈ Θ. To summarize, we have an example in which the conjectures are common knowledge and the players partially, but not fully, agree.

⁸ In general, bi can be identified with a probability measure if and only if bi(Z−i) = 0 for every non-singleton Z−i.

Given Proposition 1, it is natural to ask: are there extra conditions under which common knowledge of conjectures implies full agreement? Obviously, a sufficient condition is that each core(b_i^M) is a singleton. This is basically Aumann and Brandenburger's (1995) result; that is, if conjectures are probabilistic and common knowledge, then all the relevant marginal conjectures must be identical. Lo's (2006, p. 7) Proposition 1, which is virtually the same as Proposition 2 below, provides another answer.⁹

⁹ Lo's (proof of) Proposition 1, which is explicitly for |M| = n − 2, can be easily modified for any 0 < |M| ≤ n − 2. There are also numerical examples in his paper illustrating the result.

Proposition 2. Suppose that for every ω ∈ Ω,

$$\times_{j \in N} \{\theta_j \in \Theta_j \mid p^{\Omega \times \Theta_j}(\omega, \theta_j) > 0\} = \{\theta \in \Theta \mid p(\omega, \theta) > 0\}. \tag{18}$$

Fix a profile θ⁎ ∈ Θ+ of signals and a profile b of belief functions. Suppose θ⁎ commonly knows ||b||. Then b(θ⁎) = b; and for every M ⊂ N with 0 < |M| ≤ n − 2, b_i^M = b_j^M for all i, j ∈ N \ M.

The probability measure p defined in Eqs. (11)–(14) does not satisfy Eq. (18). To confirm, note that pΩ×Θ1(ω̂, θU) = 1/4, pΩ×Θ2(ω̂, θR) = 1/4, and pΩ×Θ3(ω̂, θA) = 1/2, but p(ω̂, θU, θR, θA) = 0. The essential condition in Eq. (18) is that, if each of the jth components θj of a signal profile θ is possible at ω, then θ is possible at ω. (Of course, the converse direction is always true.) The following is an example in which this condition is satisfied. Let Ω = {good, bad} and Θj = {good, neutral, bad} for all j ∈ N. Interpret Ω as the set of possible states of the economy, and Θj as the set of possible estimation results delivered by the econometric model of firm j. Suppose that if ω = good is the true state, then firm j must receive either the signal
θj = good or θj = neutral; on the other hand, if ω = bad, then j must receive either θj = neutral or θj = bad. So the econometric models used by the firms are "globally similar." But perhaps due to some minor differences in econometric methodology, the models are "locally not too similar," in the sense that conditional on ω = good, no signal profile in {good, neutral}^n is a null event; analogously for the state ω = bad. However, Eq. (18) does not say that conditional on each ω, the signals have to be uncorrelated. For instance, when ω = good, it may well be very probable that every firm j receives the signal θj = good.

Suppose that Eq. (18) holds. Then Eq. (6) can be rewritten as

$$\Gamma_{-i}(\omega, \theta_i) = \times_{j \in N \setminus \{i\}} \Gamma_j(\omega), \quad \text{where } \Gamma_j(\omega) \equiv \{s_j \in S_j \mid \exists\, \theta_j \in \Theta_j \text{ such that } p^{\Omega \times \Theta_j}(\omega, \theta_j) > 0 \text{ and } \gamma_j(\theta_j) = s_j\}. \tag{19}$$

Roughly speaking, Eq. (19) says that player i's signal has limited informational content; that is, while θi may provide additional information on the likelihoods of events in Ω, it provides no additional information on what action profiles of opponents are possible at each state in Ω. That is why Eq. (18) ensures that common knowledge of conjectures is sufficient for full agreement.

In terms of the notation here, Lo's (2006) paper starts with the assumption that player i knows the following two aspects of p: (i) the marginal pΩ×Θi of p, and (ii) the support of pΩ×Θj for all j ∈ N. Note that knowing the latter does not imply knowing the support of p. In this case, it seems most natural for i to assume that ×j∈N {θj ∈ Θj | pΩ×Θj(ω, θj) > 0} is the set of all possible combinations of signals at ω. This extra assumption allows Eq. (18) to be used in Lo's Proposition 1. Further clarifications are provided in the Appendix.

Acknowledgements

I would like to thank two referees for their valuable comments.

Appendix A

Proof of Proposition 1. For each θ ∈ Θ+, define Πi(θ) = {θ̂ ∈ Θ+ | θ̂i = θi}. Obviously, θ ∈ Πi(θ) for all θ ∈ Θ+; and the collection {Πi(θ)}θ∈Θ+ forms a partition of Θ+. For any nonempty E ⊆ Θ+, say that E is a public event if for every i ∈ N, E is the union of (some or all) elements in {Πi(θ)}θ∈Θ+. With the set Θ+ defined here in place of its counterpart in Lo (2006), the first paragraph in Lo's (2006, pp. 14–15) proof of Lemma 2 establishes that for every θ ∈ Θ+ and Ψ ⊆ Θ, θ commonly knows Ψ if and only if there exists a public event E such that θ ∈ E ⊆ Ψ. By hypothesis, θ⁎ ∈ Θ+ and θ⁎ commonly knows ||b||. So, starting from this point, fix a public event E with θ⁎ ∈ E ⊆ ||b||. Then we have b(θ) = b for all θ ∈ E, and in particular, b(θ⁎) = b.

Use proji E to denote the projection of E on Θi. For every M ⊂ N with 0 < |M| ≤ n − 2, every ZM ⊆ SM, every i ∈ N \ M, and every θi ∈ proji E,
$$b_i^M(Z_M) = b_i^M(\theta_i)(Z_M) \tag{20}$$

$$= b_i(\theta_i)\bigl(Z_M \times S_{N \setminus (\{i\} \cup M)}\bigr) \tag{21}$$

$$= \sum_{\{\omega \,\mid\, \Gamma_{-i}(\omega, \theta_i) \subseteq Z_M \times S_{N \setminus (\{i\} \cup M)}\}} p^{\Omega}(\omega \mid \theta_i) \tag{22}$$

$$= \frac{\sum_{\{\omega \,\mid\, \Gamma_{-i}(\omega, \theta_i) \subseteq Z_M \times S_{N \setminus (\{i\} \cup M)}\}} p^{\Omega \times \Theta_i}(\omega, \theta_i)}{\sum_{\omega \in \Omega} p^{\Omega \times \Theta_i}(\omega, \theta_i)} \tag{23}$$

$$= \frac{\sum_{\{\omega \,\mid\, \Gamma_{-i}(\omega, \theta_i) \subseteq Z_M \times S_{N \setminus (\{i\} \cup M)}\}} \sum_{\theta_{-i} \in \Theta_{-i}} p(\omega, \theta_i, \theta_{-i})}{\sum_{\omega \in \Omega} \sum_{\theta_{-i} \in \Theta_{-i}} p(\omega, \theta_i, \theta_{-i})} \tag{24}$$

$$= \frac{\sum_{\{\omega \,\mid\, \Gamma_{-i}(\omega, \theta_i) \subseteq Z_M \times S_{N \setminus (\{i\} \cup M)}\}} p^{\Omega}(\omega)\, p^{\Theta_i}(\theta_i \mid \omega)}{\sum_{\omega \in \Omega} p^{\Omega}(\omega)\, p^{\Theta_i}(\theta_i \mid \omega)}, \tag{25}$$

where

$$p^{\Omega}(\omega) \equiv \sum_{\theta \in \Theta} p(\omega, \theta) \qquad \forall\, \omega \in \Omega \tag{26}$$

and

$$p^{\Theta_i}(\theta_i \mid \omega) \equiv \frac{\sum_{\theta_{-i} \in \Theta_{-i}} p(\omega, \theta_i, \theta_{-i})}{p^{\Omega}(\omega)} \qquad \forall\, \omega \in \Omega, \ \forall\, \theta_i \in \Theta_i. \tag{27}$$
The equality in Eq. (20) is due to the fact that b(θ) = b for all θ ∈ E. Use Eq. (8) to rewrite Eq. (20) as Eq. (21); use Eq. (7) to rewrite Eq. (21) as Eq. (22); use Eq. (5) to rewrite Eq. (22) as Eq. (23); use Eq. (4) to rewrite Eq. (23) as Eq. (24); finally, use Eqs. (26) and (27) to rewrite Eq. (24) as Eq. (25). Next, we establish

$$b_i^M(Z_M) = \frac{\sum_{\theta_i \in \mathrm{proj}_i E}\, b_i^M(Z_M) \sum_{\omega \in \Omega} p^{\Omega}(\omega)\, p^{\Theta_i}(\theta_i \mid \omega)}{\sum_{\theta_i \in \mathrm{proj}_i E} \sum_{\omega \in \Omega} p^{\Omega}(\omega)\, p^{\Theta_i}(\theta_i \mid \omega)} \tag{28}$$

$$= \frac{\sum_{\theta_i \in \mathrm{proj}_i E} \sum_{\{\omega \,\mid\, \Gamma_{-i}(\omega, \theta_i) \subseteq Z_M \times S_{N \setminus (\{i\} \cup M)}\}} p^{\Omega}(\omega)\, p^{\Theta_i}(\theta_i \mid \omega)}{\sum_{\theta_i \in \mathrm{proj}_i E} \sum_{\omega \in \Omega} p^{\Omega}(\omega)\, p^{\Theta_i}(\theta_i \mid \omega)} \tag{29}$$

$$= \frac{\sum_{\omega \in \Omega} p^{\Omega}(\omega) \sum_{\{\theta_i \in \mathrm{proj}_i E \,\mid\, \Gamma_{-i}(\omega, \theta_i) \subseteq Z_M \times S_{N \setminus (\{i\} \cup M)}\}} p^{\Theta_i}(\theta_i \mid \omega)}{\sum_{\omega \in \Omega} p^{\Omega}(\omega) \sum_{\theta_i \in \mathrm{proj}_i E} p^{\Theta_i}(\theta_i \mid \omega)} \tag{30}$$

$$= \frac{\sum_{\omega \in \Omega} p^{\Omega}(\omega) \sum_{\{\theta \in E \,\mid\, \Gamma_{-i}(\omega, \theta_i) \subseteq Z_M \times S_{N \setminus (\{i\} \cup M)}\}} p^{\Theta}(\theta \mid \omega)}{\sum_{\omega \in \Omega} p^{\Omega}(\omega) \sum_{\theta \in E} p^{\Theta}(\theta \mid \omega)}, \tag{31}$$

where

$$p^{\Theta}(\theta \mid \omega) \equiv \frac{p(\omega, \theta)}{p^{\Omega}(\omega)} \qquad \forall\, \omega \in \Omega, \ \forall\, \theta \in \Theta. \tag{32}$$
The equality in Eq. (29) follows from the equivalence of Eqs. (20) and (25). Note that the two probability measures defined in Eqs. (27) and (32) satisfy, for every θi ∈ Θi and every ω ∈ Ω,

$$p^{\Theta_i}(\theta_i \mid \omega) = \sum_{\theta_{-i} \in \Theta_{-i}} p^{\Theta}(\theta_i, \theta_{-i} \mid \omega) \tag{33}$$

$$\ge \sum_{\{\theta_{-i} \,\mid\, (\theta_i, \theta_{-i}) \in D\}} p^{\Theta}(\theta_i, \theta_{-i} \mid \omega) \qquad \forall\, D \subseteq \Theta. \tag{34}$$
Also note that for the (in fact, for every) public event E, every θi ∈ proji E, every θ−i ∈ Θ−i, and every ω ∈ Ω, if pΘ(θi, θ−i|ω) > 0, then (θi, θ−i) ∈ Πi(θi, θ−i) ⊆ E. Thus, with D = E and θi ∈ proji E, the inequality in Eq. (34) becomes an equality. This in turn implies the equality in Eq. (31). Define ρ⁎ ∈ Δ(SM) as follows: for every ZM ⊆ SM,

$$\rho^{\ast}(Z_M) = \frac{\sum_{\omega \in \Omega} p^{\Omega}(\omega) \sum_{\{\theta \in E \,\mid\, (\gamma_j(\theta_j))_{j \in M} \in Z_M\}} p^{\Theta}(\theta \mid \omega)}{\sum_{\omega \in \Omega} p^{\Omega}(\omega) \sum_{\theta \in E} p^{\Theta}(\theta \mid \omega)}. \tag{35}$$
According to Eq. (6), for every ω and θ with pΘ(θ|ω) > 0, and every ZM ⊆ SM, if Γ−i(ω, θi) ⊆ ZM × S_{N\({i}∪M)}, then (γj(θj))j∈M ∈ ZM. So we must have, for every ω ∈ Ω,

$$\sum_{\{\theta \in E \,\mid\, (\gamma_j(\theta_j))_{j \in M} \in Z_M\}} p^{\Theta}(\theta \mid \omega) \ \ge\ \sum_{\{\theta \in E \,\mid\, \Gamma_{-i}(\omega, \theta_i) \subseteq Z_M \times S_{N \setminus (\{i\} \cup M)}\}} p^{\Theta}(\theta \mid \omega). \tag{36}$$
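As an aside, the construction in Eqs. (35)–(36) can be spot-checked numerically. The Python sketch below reuses the Section 4 example (hard-coded, as in the earlier sketch), takes E = Θ+, M = {1} and i = 3, and confirms that ρ⁎ dominates b_3^1 on every subset of S1; the encoding is a hypothetical illustration, not part of the proof.

```python
from itertools import combinations
from collections import defaultdict

# Section 4 example, hard-coded (hypothetical encoding).
p = {("w_hat", "tU", "tL", "tA"): 0.25, ("w_hat", "tD", "tR", "tA"): 0.25,
     ("w_prime", "tU", "tR", "tA"): 0.25, ("w_prime", "tD", "tL", "tA"): 0.25}
gamma = {1: {"tU": "U", "tD": "D"}, 2: {"tL": "L", "tR": "R"}, 3: {"tA": "A"}}
Omega, S1 = ["w_hat", "w_prime"], ["U", "D"]
E = {tuple(theta) for (_, *theta) in p}                # take the public event E = Theta^+

p_Omega = defaultdict(float)                            # Eq. (26)
for (w, *theta), pr in p.items():
    p_Omega[w] += pr

def p_Theta(theta, w):                                  # Eq. (32)
    return p.get((w, *theta), 0.0) / p_Omega[w]

def rho_star(Z1):                                       # Eq. (35) with M = {1}, i = 3
    num = sum(p_Omega[w] * sum(p_Theta(t, w) for t in E if gamma[1][t[0]] in Z1) for w in Omega)
    den = sum(p_Omega[w] * sum(p_Theta(t, w) for t in E) for w in Omega)
    return num / den

def b3_marginal(Z1):                                    # b_3^1(Z1) via Eqs. (7)-(8) and (15)-(16)
    post = {"w_hat": 0.5, "w_prime": 0.5}               # Eq. (15)
    Gamma3 = {w: {(gamma[1][t[0]], gamma[2][t[1]]) for (w2, *t) in p if w2 == w} for w in Omega}
    target = {(s1, s2) for s1 in Z1 for s2 in ("L", "R")}     # Z1 x S_2; S_3 = {A} drops out
    return sum(post[w] for w in Omega if Gamma3[w] <= target)

for r in range(len(S1) + 1):
    for Z1 in combinations(S1, r):
        assert rho_star(set(Z1)) >= b3_marginal(set(Z1)) - 1e-12
print("rho* belongs to core(b_3^1) in this example.")
```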
Eqs. (28)–(31) and Eqs. (35)–(36) imply ρ⁎(ZM) ≥ b_i^M(ZM) for all ZM ⊆ SM, and hence ρ⁎ ∈ core(b_i^M). As i ∈ N \ M is arbitrary, ∩i∈N\M core(b_i^M) ≠ ∅ is proved. □

Proof of Proposition 2. Note that pΘi(θi|ω) defined in Eq. (27) can be identified with βi(ω)(θi) defined in Lo (2006, p. 5, Section 3); and pΩ(ω|θi) defined in Eq. (5) can be identified with μ(ω|θi) defined in Lo's Eq. (9). Suppose that Eq. (18) is satisfied for all ω ∈ Ω. Then the set Θ+ defined in Eq. (10) becomes the set Θ+ defined in Lo's Eq. (14); and Eq. (19) implies that the belief function bi(θi) defined in Eq. (7) corresponds to the basic probability assignment ϕi(θi) in Lo's Eq. (11). Putting everything together, Proposition 2 is a restatement of Lo's Proposition 1. □

References

Aumann, R.J., 1976. Agreeing to disagree. Annals of Statistics 4, 1236–1239.
Aumann, R.J., 1987. Correlated equilibrium as an expression of Bayesian rationality. Econometrica 55, 1–18.
Aumann, R.J., Brandenburger, A., 1995. Epistemic conditions for Nash equilibrium. Econometrica 63, 1161–1180.
Bewley, T., 2002. Knightian decision theory. Part I. Decisions in Economics and Finance 25, 79–110.
Billot, A., Chateauneuf, A., Gilboa, I., Tallon, J.-M., 2000. Sharing beliefs: between agreeing and disagreeing. Econometrica 68, 685–694.
Billot, A., Chateauneuf, A., Gilboa, I., Tallon, J.-M., 2002. Sharing beliefs and the absence of betting in the Choquet expected utility model. Statistical Papers 43, 127–136.
Camerer, C., Weber, M., 1992. Recent developments in modelling preference: uncertainty and ambiguity. Journal of Risk and Uncertainty 5, 325–370.
Dekel, E., Lipman, B., Rustichini, A., 1998. Recent developments in modeling unforeseen contingencies. European Economic Review 42, 523–542.
Dempster, A., 1967. Upper and lower probabilities induced from a multivalued mapping. Annals of Mathematical Statistics 38, 325–339.
Ellsberg, D., 1961. Risk, ambiguity, and the Savage axioms. Quarterly Journal of Economics 75, 643–669.
Epstein, L., 1997. Preference, rationalizability and equilibrium. Journal of Economic Theory 73, 1–29.
Epstein, L., Marinacci, M., 2006. Coarse Contingencies. Manuscript. Department of Economics, University of Rochester, NY, USA.
Geanakoplos, J., Polemarchakis, H., 1982. We can't disagree forever. Journal of Economic Theory 28, 192–200.
Ghirardato, P., Le Breton, M., 2000. Choquet rationality. Journal of Economic Theory 90, 277–285.
Gilboa, I., Schmeidler, D., 1989. Maxmin expected utility with non-unique prior. Journal of Mathematical Economics 18, 141–153.
Greenberg, J., 2000. The right to remain silent. Theory and Decision 48, 193–204.
Kissinger, H., 1982. Years of Upheaval. Little, Brown and Company, Boston.
Lo, K.C., 2006. Agreement and stochastic independence of belief functions. Mathematical Social Sciences 51, 1–22.
Milgrom, P., Stokey, N., 1982. Information, trade and common knowledge. Journal of Economic Theory 26, 17–27.
Morris, S., 1994. Trade with heterogeneous prior beliefs and asymmetric information. Econometrica 62, 1327–1347.
Mukerji, S., Tallon, J.-M., 2004. An overview of economic applications of David Schmeidler's models of decision making under uncertainty. In: Gilboa, I. (Ed.), Uncertainty in Economic Theory: Essays in Honor of David Schmeidler's 65th Birthday. Routledge, London, pp. 283–302.
Rigotti, L., Shannon, C., 2005. Uncertainty and risk in financial markets. Econometrica 73, 203–243.
Samet, D., 1998. Common priors and separation of convex sets. Games and Economic Behavior 24, 172–174.
Savage, L., 1954. The Foundations of Statistics. John Wiley, New York.
Schmeidler, D., 1972. Cores of exact games, I. Journal of Mathematical Analysis and Applications 40, 214–225.
Schmeidler, D., 1989. Subjective probability and expected utility without additivity. Econometrica 57, 571–587.
Shafer, G., 1976. A Mathematical Theory of Evidence. Princeton University Press, Princeton.
Shafer, G., 1990. Perspectives on the theory and practice of belief functions. International Journal of Approximate Reasoning 4, 323–362.
Shapley, L.S., 1971. Cores of convex games. International Journal of Game Theory 1, 11–26.
Stuart, H.W., 1997. Common belief of rationality in the finitely repeated prisoners' dilemma. Games and Economic Behavior 19, 133–143.