Journal of Mathematical Economics 47 (2011) 22–28
Partial probabilistic information

Alain Chateauneuf, Caroline Ventura
CES, CERMSEM: Université de Paris I, 106-112 boulevard de l'Hôpital, 75647 Paris Cedex 13, France
Article history: Received 26 February 2010; received in revised form 11 July 2010; accepted 28 September 2010; available online 25 November 2010.
JEL classification: D81
Keywords: Partial probabilistic information; Exact capacity; Core; Extensions of set functions
Abstract. Suppose a decision maker (DM) has partial information about certain events, belonging to a set E, of a σ-algebra A, and assesses their likelihood through a capacity v. When is this information probabilistic, i.e. compatible with a probability? We consider three notions of compatibility with a probability, in increasing degree of preciseness. The weakest requires the existence of a probability P on A such that P(E) ≥ v(E) for all E ∈ E; we then say that v is a probability lower bound. A stronger one is to ask that v be a lower probability, that is, the infimum of a family of probabilities on A. The strongest notion of compatibility is for v to be an extendable probability, i.e. there exists a probability P on A which coincides with v on E. We give necessary and sufficient conditions on v in each case and, when E is finite, we provide effective algorithms that check them in a finite number of steps.
1. Introduction

Following the pioneering work of von Neumann and Morgenstern (1947) and Savage (1954), numerous authors have considered the problem of defining a mathematical framework allowing one to model situations of uncertainty. Notably, Dempster (1967) and Shafer (1976, 1981) have proposed a representation of uncertain environments requiring a "lower probability" function (Dempster) or "degree of belief" (Shafer). This belief function is the lower envelope of a family of probabilities which are compatible with the given data. Though it is not additive in general, it is nevertheless a capacity. These works have been at the source of the study of the properties of the core of a capacity. In particular, Shapley (1971) has shown that the core of a convex capacity is not empty and has studied its geometry in detail, giving economic interpretations of his results.

Our subject of interest in this paper is the situation of a decision maker (DM) who assesses the likelihood of certain events about which he has partial subjective or objective information. For example, in an uncertain environment, a DM can have partial information regarding a certain set of events E and form a belief v over E. In order to model this situation, we consider a σ-algebra A of subsets of a set of states of nature Ω, E a subset of A, containing
Ω and ∅, and v a (generalized) capacity on E (i.e. a monotone set-function such that v(∅) = 0 and v(Ω) = 1) representing the DM's subjective or objective assessment of likelihood of the events in E. To determine to what extent his assessment of likelihood is probabilistic, we consider three degrees of compatibility of v with a probability, corresponding to more and more precise probabilistic information. (Throughout the paper, "probability" means "finitely additive probability" and the two terms are used interchangeably.)

(1) The first degree requires the existence of a probability that dominates v, that is, v is a lower bound of a probability P on A. Denoting by P_A the set of finitely additive probabilities on A (i.e. the set of positive set functions P on A such that P(Ω) = 1 and such that for all A, B in A, A ∩ B = ∅ ⇒ P(A ∪ B) = P(A) + P(B)), we shall call v a probability lower bound if there exists P ∈ P_A such that P(E) ≥ v(E) for all E in E, or in other words if its core C(v) = {P ∈ P_A | P(E) ≥ v(E) ∀E ∈ E} is non-empty. This is the minimal condition of compatibility.

(2) For a DM who knows that the events he considers are governed by a probability which belongs to a given set P, the prudent behavior is to assess the likelihood of each event by minimizing its probability over P. v is then interpreted as a lower probability, that is, as the lower envelope of a family of probabilities on A. We shall call v a lower probability if for all E in E there exists P in C(v) such that P(E) = v(E), that is, if v is exact. This is the second degree of compatibility.

(3) The third degree of compatibility requires the likelihood function to be extendable to a probability on the algebra generated by the events under consideration; v is then interpreted as the restriction to E of a probability on A.
We shall therefore say that v is an extendable probability if there exists P in P_A such that P(E) = v(E) for all E in E.

The following elementary example may serve as an illustration of our motivation: testing the reliability of samples with partial observations. From two successive, exhaustive and anonymous surveys of the students of a college, it follows that 30% are smokers and 80% drink alcohol. Here, the set of states of nature is Ω = {(s, a), (s, na), (ns, a), (ns, na)}, where s, ns stand for "smoker", "non-smoker" and a, na stand for "drinks alcohol", "does not drink alcohol", E = {{(s, a), (s, na)}, {(s, a), (ns, a)}}, v({(s, a), (s, na)}) = 0.3 and v({(s, a), (ns, a)}) = 0.8. A natural question arises: are these answers reliable? In other words, does there exist a probability P on 2^Ω such that the data satisfy v = P|E? Indeed, here this is the case, and Theorem 4.3 below would furthermore allow one to describe the set of compatible probabilities P. One might moreover like to test the answers to more embarrassing questions and thus proceed to a third exhaustive and anonymous survey, delivering that 5% of the students both smoke and drink alcohol, i.e. v({(s, a)}) = 0.05. There one would conclude that this last information is not (strictly) reliable (the new v is not extendable to a probability on 2^Ω). Nevertheless v fulfills the first degree of compatibility without fulfilling the second one, from which one can conclude that the declared percentage 0.05 is an underestimation of the actual percentage of students both smoking and drinking alcohol.

In this paper, we show that these criteria can be translated into explicit conditions on v, using Ky Fan's theorem (Kuhn and Tucker, 1956). Unfortunately, the number of these conditions is infinite, even when the set of states of nature is finite. Using a trick due to Wolf (see Huber, 1981), we then show that it is possible to modify these criteria in such a way that, when E is finite (even if Ω is not), there remains only a finite number of conditions, which can be checked through an effective algorithm in a finite number of steps.

In Section 2, we give the definitions of the main notions needed. In Section 3, we give preliminary results which will be useful in the sequel. In Section 4, we state the main results of the paper. All the proofs are relegated to Appendix A at the end of the paper.
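As a purely illustrative aside (not part of the original argument), the survey example above can be checked numerically since Ω is finite. The following Python sketch, which assumes NumPy and SciPy are available and uses an encoding of states and events chosen by us, tests the three degrees of compatibility for the figures 30%, 80% and 5% by linear programming:

import numpy as np
from scipy.optimize import linprog

# Four states of nature: (smoker?, drinks alcohol?).
states = [("s", "a"), ("s", "na"), ("ns", "a"), ("ns", "na")]
# Surveyed events as indicator rows over the states, with the reported frequencies.
events = {
    "smoker":  (np.array([1.0, 1.0, 0.0, 0.0]), 0.30),
    "alcohol": (np.array([1.0, 0.0, 1.0, 0.0]), 0.80),
    "both":    (np.array([1.0, 0.0, 0.0, 0.0]), 0.05),
}
A = np.vstack([row for row, _ in events.values()])
b = np.array([val for _, val in events.values()])
n = len(states)
A_eq, b_eq = np.ones((1, n)), np.array([1.0])   # probabilities sum to one
bounds = [(0.0, 1.0)] * n

# First degree: is there a probability P with P(E) >= v(E) for every surveyed event?
lb = linprog(np.zeros(n), A_ub=-A, b_ub=-b, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("probability lower bound:", lb.success)              # True

# Second degree: the smallest P(both) compatible with the constraints above.
tight = linprog(events["both"][0], A_ub=-A, b_ub=-b, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("minimal compatible P(both):", round(tight.fun, 3))  # 0.1 > 0.05, so v is not exact

# Third degree: is there a probability P reproducing all three figures exactly?
ext = linprog(np.zeros(n), A_eq=np.vstack([A_eq, A]), b_eq=np.concatenate([b_eq, b]), bounds=bounds)
print("extendable probability:", ext.success)               # False

The output agrees with the discussion above: some probability dominates v, the smallest compatible percentage of students who both smoke and drink is 10% rather than the declared 5% (so the second degree fails), and no probability reproduces the three figures exactly (so the third degree fails as well).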
2. Definitions

We first give some definitions that will be used in the sequel.

Definition 2.1. Let Ω be a set. A collection A ⊂ 2^Ω is a λ-system if it satisfies the following three properties:
(1) ∅, Ω ∈ A.
(2) A, B ∈ A, A ∩ B = ∅ ⇒ A ∪ B ∈ A.
(3) A ∈ A ⇒ A^c ∈ A.
A is an algebra if it satisfies in addition:
(4) A, B ∈ A ⇒ A ∩ B ∈ A.
An algebra A is a σ-algebra if it is closed under denumerable union, i.e. if (A_n)_{n∈N} is a sequence of elements of A, then ∪_{n∈N} A_n ∈ A.
Definition 2.2. Let A be an algebra. M_A will denote the set of finitely additive set-functions on A, i.e. the set of set-functions μ such that μ(A ∪ B) = μ(A) + μ(B) whenever A ∩ B = ∅, and P_A ⊂ M_A the set of finitely additive probabilities on A.

Definition 2.3. Let Ω be a set; the characteristic function of a proper subset A of Ω will be denoted A∗. However, for simplicity, the characteristic function of Ω itself will be denoted 1.
Definition 2.4. Let Ω be a set and E a subset of 2^Ω (in this paper, we will always assume that E contains ∅ and Ω). The algebra (respectively σ-algebra) generated by E is denoted by A(E) (respectively A_σ(E)).

Definition 2.5. Let Ω be a set and E a subset of 2^Ω. A set function v on E is called a (generalized) capacity if it is monotone, i.e. A, B ∈ E, A ⊂ B ⇒ v(A) ≤ v(B), v(∅) = 0 and v(Ω) = 1.

Definition 2.6. Given a set Ω, an algebra A of subsets of Ω, E ⊂ A and v a capacity on E, the inner set-function v∗ associated with v is defined on A by:
v∗(A) = sup{v(E) | E ∈ E, E ⊂ A}.

Definition 2.7. Let v be a capacity defined on a subset E of an algebra A. The core of v is defined by:
C(v) = {P ∈ P_A | P(E) ≥ v(E) ∀E ∈ E}
(in case there is more than one algebra, we will write C_A(v) for the core of v with respect to A, in order to avoid any risk of confusion).

Definition 2.8. Let E be a subset of an algebra A. A capacity v is said to be a probability lower bound if C(v) ≠ ∅, i.e. if there exists a probability P on A such that P ∈ C(v), i.e. such that P(E) ≥ v(E) for all E ∈ E.

Definition 2.9. Let E be a subset of an algebra A. A capacity v is said to be a lower probability if v is exact, i.e. if for all E ∈ E, there exists P on A such that P(E) = v(E) and P ∈ C(v).

Definition 2.10. Let E be a subset of an algebra A. A capacity v is said to be an extendable probability if there exists a probability P on A such that P(E) = v(E) for all E ∈ E.

Definition 2.11. A capacity v on an algebra A is said to be convex if, whenever A, B ∈ A,
v(A ∪ B) + v(A ∩ B) ≥ v(A) + v(B).

3. Preliminary results

First we introduce some preliminary results that will be useful in the rest of the paper.

Lemma 3.1. Let A be an algebra of subsets of a set Ω, E ⊂ A and v a capacity on E. Then v∗ is a capacity on A such that v∗(E) = v(E) for all E ∈ E.

Lemma 3.2. Let E be a subset of an algebra A on a set Ω and v a capacity defined on E. Then C(v) = C(v∗).

Proposition 3.3 (Teper, 2009, Proposition 1). Let A and B be two algebras of subsets of a set Ω such that B ⊂ A. Then every probability P on B can be extended to a probability on A, and the set of extensions of P to a probability on A is equal to C_A(P∗).

4. Partial probabilistic information

We now come to the study of the situation of a decision maker (DM) who considers a set of states of nature Ω and has partial subjective or objective information about certain events in a subset E of a σ-algebra A of subsets of Ω. E will naturally be assumed to contain Ω and ∅, and his assessment of likelihood v to be a monotone set function taking its values in [0, 1] and satisfying v(Ω) = 1 and v(∅) = 0, that is, a capacity. We would like to give an answer to the three following questions:
(1) When can v be interpreted as a lower bound for some probability P on A, that is when is the core of v non-empty? (2) When can v be interpreted as a lower probability, i.e. when is v the infimum of a family of probabilities on A? (3) When can v be interpreted as the restriction to E of a given probability on A? We first give a characterization of “probability lower bound”. Theorem 4.1.
The following assertions are equivalent:
(1) v is a probability lower bound.
(2) For all n ∈ N∗ (= N \ {0}), a_i > 0 and A_i ∈ A_σ(E) such that the functions A_i∗ are linearly independent,
∑_{i=1}^n a_i A_i∗ = 1 ⇒ ∑_{i=1}^n a_i v∗(A_i) ≤ 1.
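When Ω itself is finite, condition (2) is equivalent, by linear programming duality, to feasibility of the finite system P ≥ 0, ∑_ω P(ω) = 1, P(E) ≥ v(E) for E ∈ E, so non-emptiness of the core can also be tested directly. The sketch below is our illustration of that feasibility check (it assumes SciPy and is not the enumeration procedure detailed later in this section):

import numpy as np
from scipy.optimize import linprog

def core_is_nonempty(omega, v):
    # omega: list of states; v: dict mapping frozenset(E) -> v(E).
    # True iff some probability P on 2^omega satisfies P(E) >= v(E) for every E in v.
    n = len(omega)
    A_ub = -np.vstack([[1.0 if w in E else 0.0 for w in omega] for E in v])
    b_ub = -np.array([v[E] for E in v])
    res = linprog(np.zeros(n), A_ub=A_ub, b_ub=b_ub,
                  A_eq=np.ones((1, n)), b_eq=np.array([1.0]), bounds=[(0.0, 1.0)] * n)
    return res.success

# The three-state example of Remark 4.4 below: no probability dominates v({1}) = v({2}) = 3/4.
print(core_is_nonempty([1, 2, 3], {frozenset({1}): 0.75, frozenset({2}): 0.75}))   # False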
We now come to a generalization, to capacities defined on an arbitrary set, of well-known characterizations of exactness previously obtained for capacities defined on σ-algebras (see e.g. Kannai, 1969; Schmeidler, 1972) or else on finite algebras (see Huber, 1981).

Theorem 4.2.
The following assertions are equivalent:
(1) v is a lower probability.
(2) For all E ∈ E, n ∈ N∗, a_i ∈ R, A_i ∈ A_σ(E) \ {E} such that a_i > 0 if A_i ≠ Ω and the functions A_i∗ are linearly independent,
∑_{i=1}^n a_i A_i∗ = E∗ ⇒ ∑_{i=1}^n a_i v∗(A_i) ≤ v(E).
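Exactness, too, can be tested directly for finite Ω: by definition, v is a lower probability exactly when, for each E ∈ E, the minimum of P(E) over the core C(v) is attained at the value v(E). A minimal sketch under the same assumptions as the previous sketch (SciPy available, events encoded as frozensets; the helper name is ours):

import numpy as np
from scipy.optimize import linprog

def is_lower_probability(omega, v):
    # True iff for every E in v the minimum of P(E) over C(v) equals v(E).
    n = len(omega)
    rows = {E: np.array([1.0 if w in E else 0.0 for w in omega]) for E in v}
    A_ub = -np.vstack([rows[E] for E in v])          # encodes P(E) >= v(E) for all E
    b_ub = -np.array([v[E] for E in v])
    A_eq, b_eq = np.ones((1, n)), np.array([1.0])
    for E, vE in v.items():
        res = linprog(rows[E], A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0.0, 1.0)] * n)
        if not res.success or res.fun > vE + 1e-9:   # v(E) must be attained by some P in the core
            return False
    return True

On the survey example of the introduction this returns False, in line with the observation that the declared 5% is an underestimate of the smallest compatible value.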
We are now interested in finding necessary and sufficient conditions for v to be an extendable probability. A necessary condition for v to be extendable is that, when E and E^c both belong to E, v(E) + v(E^c) = 1. Now, suppose that this condition is satisfied and set:
Ẽ = {A | A ∈ E or A^c ∈ E}.
Then, we can unambiguously extend v to Ẽ by setting:
ṽ(E) = v(E) if E ∈ E, and ṽ(E) = 1 − v(E^c) if E^c ∈ E.
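The construction of Ẽ and ṽ is mechanical; the following sketch (an illustration with events encoded as frozensets of a finite Ω; names are ours) builds ṽ from v and rejects data violating the necessary condition v(E) + v(E^c) = 1:

def extend_by_complements(omega, v):
    # v: dict mapping frozenset(E) -> v(E) on E; returns the dict representing v~ on E~.
    omega = frozenset(omega)
    v_tilde = dict(v)
    for E, vE in v.items():
        Ec = omega - E
        if Ec in v:
            if abs(v[Ec] + vE - 1.0) > 1e-12:
                raise ValueError("v(E) + v(E^c) != 1, so v cannot be extendable")
        else:
            v_tilde[Ec] = 1.0 - vE      # definition of v~ on the complement
    return v_tilde

Whether the resulting ṽ is monotone, i.e. a capacity on Ẽ, still has to be checked separately; this is condition (2)(a) of the theorem that follows.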
With this definition, we can state the following theorem: Theorem 4.3.
The following assertions are equivalent:
(1) v is an extendable probability.
(2) (a) For all E ∈ E such that E^c ∈ E, v(E) + v(E^c) = 1, and ṽ is a capacity on Ẽ.
    (b) For all n ∈ N∗, a_i > 0 and A_i ∈ A_σ(E) such that the functions A_i∗ are linearly independent,
∑_{i=1}^n a_i A_i∗ = 1 ⇒ ∑_{i=1}^n a_i ṽ∗(A_i) ≤ 1.
Furthermore, letting P_A(v) (respectively P_A(ṽ)) denote the set of probabilities on A which extend v (respectively ṽ), we have that P_A(v) = P_A(ṽ) = C_A(ṽ).

Remark 4.4.
(i) In Theorem 4.3, one might like to have a simpler criterion than that of condition (2). For example, is it possible to do without (2)(b), i.e. does (2)(a) imply (2)(b)? Unfortunately this is not the case, as can easily be shown (see Nehring, 1999, example 2, p. 202).
(ii) Other authors have previously proved results very close to those of Theorems 4.1 and 4.2, although using a different, geometric, method which is valid only when Ω is finite (see Azriely and Lehrer, 2007).
(iii) Note that Theorems 4.1–4.3 are valid even when Ω is not finite. When Ω is finite, there is clearly only a finite number of conditions to check in order to see if the core of a (generalized) capacity is non-empty and if it is a lower or an extendable probability. Furthermore we note that, since the characteristic functions appearing in the left member of the relations in Theorems 4.1–4.3 are linearly independent, the coefficients are fixed by these relations. Therefore, when E is finite (so that A_σ(E) is also finite), one has only to check a finite number of relations even if Ω is not assumed to be finite. However, it is necessary to take into consideration all A_i in A_σ(E) and not only those belonging to E. This cannot be improved by replacing A_σ(E) by E in condition (2) of Theorem 4.1. Consider for instance the following example: Ω := {1, 2, 3}, E := {∅, Ω, {1}, {2}}, and define v on E by v(∅) = 0, v(Ω) = 1, v({1}) = v({2}) = 3/4. It is obvious that v is a generalized capacity and that the only way to obtain 1 as a linear combination with positive coefficients of linearly independent characteristic functions of elements of E is Ω∗ = 1. Since v(Ω) = 1, condition (2) of Theorem 4.1 where A_σ(E) is replaced by E would be satisfied. However, it is clear that C(v) is empty since v({1}) + v({2}) = 3/2 > 1.

Theorems 4.1–4.3 provide effective algorithms in case E is finite. We give some details about this algorithm for the non-emptiness of the core, in the case of Theorem 4.1.
- From the data E := {E_1, . . . , E_p}, determine A_σ(E) = A(E) in a finite number of steps by taking finite unions, intersections and complements.
- From the data v(E_i), 1 ≤ i ≤ p, compute v∗(A_i) for A_i ∈ A(E), 1 ≤ i ≤ q = card(A(E)). Let r be the number of atoms of A(E).
- For all 1 ≤ s ≤ r, determine the free subsets {A_{i_1}∗, . . . , A_{i_s}∗} of cardinality s of A(E).
- For each of them, compute in a standard way the (uniquely determined) coefficients a_{i_k}, if they exist, such that ∑_{k=1}^s a_{i_k} A_{i_k}∗ = 1.
- Retain only the linear combinations where all the coefficients a_{i_k} are positive.
- Finally, check for those combinations whether ∑_{k=1}^s a_{i_k} v∗(A_{i_k}) ≤ 1.
This allows one to decide in a finite number of steps whether C(v) is empty or not.
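A direct, illustrative transcription of these steps in Python for a small finite Ω is sketched below (names and data structures are ours; the algebra is generated through its atoms, the inner set-function v∗ is computed from its definition, and free families are detected via matrix rank; the two test calls anticipate the two simple examples that follow):

import itertools
import numpy as np

def atoms(omega, events):
    # Atoms of the algebra generated by `events`: states grouped by their membership pattern.
    groups = {}
    for w in omega:
        key = tuple(w in E for E in events)
        groups.setdefault(key, set()).add(w)
    return [frozenset(g) for g in groups.values()]

def generated_algebra(omega, events):
    # For finite omega, A(E) consists of all unions of atoms.
    ats = atoms(omega, events)
    alg = []
    for k in range(len(ats) + 1):
        for combo in itertools.combinations(ats, k):
            alg.append(frozenset().union(*combo) if combo else frozenset())
    return alg

def inner(v, A):
    # v_*(A) = sup{ v(E) : E in the domain of v, E subset of A }.
    return max([val for E, val in v.items() if E <= A], default=0.0)

def core_nonempty(omega, v):
    # Checks condition (2) of Theorem 4.1 by enumeration (exponential; tiny examples only).
    omega = list(omega)
    events = list(v.keys())
    alg = [A for A in generated_algebra(omega, events) if A]        # drop the empty set
    vec = {A: np.array([1.0 if w in A else 0.0 for w in omega]) for A in alg}
    ones = np.ones(len(omega))
    for s in range(1, len(atoms(omega, events)) + 1):
        for family in itertools.combinations(alg, s):
            M = np.column_stack([vec[A] for A in family])
            if np.linalg.matrix_rank(M) < s:
                continue                                            # not a free family
            a, *_ = np.linalg.lstsq(M, ones, rcond=None)
            if not np.allclose(M @ a, ones):
                continue                                            # 1 is not spanned by this family
            if np.all(a > 1e-12) and sum(ai * inner(v, A) for ai, A in zip(a, family)) > 1 + 1e-9:
                return False                                        # condition (2) is violated
    return True

# Example (1) below: the core is non-empty.
print(core_nonempty({1, 2, 3}, {frozenset(): 0.0, frozenset({1, 2, 3}): 1.0,
                                frozenset({1}): 1/3, frozenset({2, 3}): 1/3}))        # True
# Example of Remark 4.4 (example (2) below): the core is empty.
print(core_nonempty({1, 2, 3}, {frozenset(): 0.0, frozenset({1, 2, 3}): 1.0,
                                frozenset({1}): 0.75, frozenset({2}): 0.75}))         # False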
Two simple examples:

(1) An example of a set function whose core is non-empty. Let Ω = {1, 2, 3}, E = {∅, Ω, {1}, {2, 3}} and set v({1}) = 1/3 and v({2, 3}) = 1/3. It is obvious that C(v) ≠ ∅ (take for instance P({1}) = P({2}) = P({3}) = 1/3). One can also check this by applying Theorem 4.1: A(E) = {∅, Ω, {1}, {2, 3}}, v∗(∅) = 0, v∗(Ω) = 1, v∗({1}) = v∗({2, 3}) = 1/3. The only way to obtain ∑_i a_i A_i∗ = 1 with A_i ∈ A(E) and the A_i∗ linearly independent is to set Ω∗ = 1 or {1}∗ + {2, 3}∗ = 1. In the first case, we have v∗(Ω) = 1 ≤ 1, and in the second v∗({1}) + v∗({2, 3}) = 2/3 ≤ 1.

(2) An example of a set function whose core is empty. Consider again the example of Remark 4.4. We have seen that C(v) is empty; however, considering linear combinations of characteristic functions of events in E did not yield this result. By contrast, the emptiness of the core follows immediately from Theorem 4.1, indeed:
{1}∗ + {2}∗ + {3}∗ = 1 and v∗({1}) + v∗({2}) + v∗({3}) = 3/2 > 1.

5. Concluding comments

This paper is concerned with the situation of a DM who has partial information (objective or subjective) about certain events. We consider different degrees to which the assessment of likelihood the DM makes of these events is compatible with a probability, and we establish simple criteria to test these degrees of compatibility. The weakest degree that we consider consists merely in requiring the existence of a probability which dominates the likelihood assessment of the DM, and the strongest degree is to require that the likelihood assessment of the DM be extendable to a probability. The intermediate degree is that the likelihood assessment of the DM is the lower envelope of a family of probabilities. In each case, when the set of events considered by the DM is finite, we show that these criteria are effective in the sense that they can be checked in a finite number of steps.

Acknowledgement

We thank the anonymous referees for very helpful comments and suggestions.

Appendix A.

Proof of Lemma 3.1. v∗(∅) = sup{v(E) | E ∈ E, E ⊂ ∅} = v(∅) = 0 by hypothesis. v∗(Ω) = sup{v(E) | E ∈ E, E ⊂ Ω} = v(Ω) = 1 by hypothesis. Let A_1, A_2 ∈ A be such that A_1 ⊂ A_2. Since {E ∈ E, E ⊂ A_1} ⊂ {E ∈ E, E ⊂ A_2},
v∗(A_1) = sup{v(E) | E ∈ E, E ⊂ A_1} ≤ sup{v(E) | E ∈ E, E ⊂ A_2} = v∗(A_2).
Finally, it is obvious that v∗ extends v since v is monotone.

Proof of Lemma 3.2. It is obvious that C(v∗) ⊂ C(v), since if P ∈ C(v∗) and E ∈ E,
P(E) ≥ v∗(E) ≥ v(E).
Conversely, let P ∈ C(v) and A ∈ A. For all E ∈ E such that E ⊂ A, P(A) ≥ P(E) ≥ v(E), which clearly implies that P(A) ≥ v∗(A).

Proof of Proposition 3.3. Let P be a probability on B and Q := {Q ∈ P_A | Q|_B = P} be the set of extensions of P to A. We want to show that Q is non-empty and is equal to C_A(P∗). We first note that P∗ is a convex capacity on A. Indeed, let A_1, A_2 ∈ A; we must show that
P∗(A_1) + P∗(A_2) ≤ P∗(A_1 ∪ A_2) + P∗(A_1 ∩ A_2).
By the very definition of P∗, for all ε > 0 we can find B_1, B_2 ∈ B such that B_1 ⊂ A_1, B_2 ⊂ A_2 and P∗(A_1) − ε < P(B_1), P∗(A_2) − ε < P(B_2). Adding these two inequalities, we obtain
P∗(A_1) + P∗(A_2) − 2ε < P(B_1) + P(B_2) = P(B_1 ∪ B_2) + P(B_1 ∩ B_2) ≤ P∗(A_1 ∪ A_2) + P∗(A_1 ∩ A_2),
which gives the result since ε is arbitrary. Now, since P∗ is convex, its core is non-empty (see Schmeidler, 1986), so that the only thing that remains to be proved is that Q = C_A(P∗). Let Q ∈ Q and A ∈ A. Since, for every B ∈ B such that B ⊂ A, P(B) = Q(B) and Q(B) ≤ Q(A), we see that P∗(A) ≤ Q(A) and therefore Q ⊂ C_A(P∗). Conversely, let Q ∈ C_A(P∗). Then Q|_B ∈ C_B(P), since, for all B ∈ B, Q(B) ≥ P∗(B) = P(B). Now, since P is a probability, C_B(P) = {P} (indeed, let B ∈ B; then P(B) + P(B^c) = 1 = Q(B) + Q(B^c), therefore P(B) − Q(B) = Q(B^c) − P(B^c), and since P(B) − Q(B) ≤ 0 and Q(B^c) − P(B^c) ≥ 0, this shows that P(B) = Q(B)), so that Q|_B = P, i.e. Q ∈ Q.

Proof of Theorem 4.1. (1) implies (2): Let P ∈ C(v) and let a_i, A_i be given as in (2) such that ∑_{i=1}^n a_i A_i∗ = 1. By Lemma 3.2, P(A) ≥ v∗(A) for all A ∈ A_σ(E). Therefore,
∑_{i=1}^n a_i v∗(A_i) ≤ ∑_{i=1}^n a_i P(A_i) = ∫ (∑_{i=1}^n a_i A_i∗) dP = ∫ dP = 1.

(2) implies (1): Since by Proposition 3.3 any probability on A_σ(E) can be extended to a probability on A, and since by Lemma 3.2 C(v) = C(v∗), it is enough to show that C(v∗) is non-empty. In a first step, we show that C(v∗) ≠ ∅ if and only if for all n ∈ N∗, a_i > 0, A_i ∈ A_σ(E),
∑_{i=1}^n a_i A_i∗ ≤ 1 ⇒ ∑_{i=1}^n a_i v∗(A_i) ≤ 1. (∗)
It is clear that this condition is necessary. In order to show that it is sufficient, it is enough to prove that it implies the existence of a functional f ∈ B_∞(A_σ(E))′ (the topological dual of the space B_∞(A_σ(E)) of bounded functions on A_σ(E)) which satisfies the following conditions:
(i) f(A∗) ≥ v∗(A) ∀A ∈ A_σ(E).
(ii) f(Ω∗) = 1.
This will give the result; indeed, P defined on A_σ(E) by P(A) := f(A∗) is then clearly a probability and by condition (i) it belongs to the core of v∗. In order to show the existence of such a functional, we will use the following theorem of Ky Fan (see Theorem 13, p. 126 of "On systems of linear inequalities", Kuhn and Tucker, 1956). Ky Fan's theorem has been used by various authors in a similar context, notably Delbaen (1974) and Kannai (1969). For example, using Ky Fan's theorem, Kannai obtains a characterization of those capacities whose core contains a countably-additive probability.
Theorem (Ky Fan). Let (x_γ)_{γ∈J} be a family of elements, not all 0, in a real normed linear space X, and let (α_γ)_{γ∈J} be a corresponding family of real numbers. Let
σ := sup{ ∑_{j=1}^n β_j α_{γ_j} such that n ∈ N∗, β_1, . . . , β_n ∈ R∗_+ (= {x ∈ R, x > 0}) and ||∑_{j=1}^n β_j x_{γ_j}|| = 1 }.
Then:
(1) The system f(x_γ) ≥ α_γ (γ ∈ J) has a solution f ∈ X′ if and only if σ is finite.
(2) If the system f(x_γ) ≥ α_γ (γ ∈ J) has a solution f ∈ X′ and if the zero-functional is not a solution, then σ is equal to the minimum of the norms of all solutions f of this system.

We apply the theorem to the normed vector space B_∞(A_σ(E)), the family of vectors A∗ (A ∈ A_σ(E)) and the corresponding family of real numbers v∗(A). In order to prove that σ is finite, we need to find an upper bound for ∑_{i=1}^n a_i v∗(A_i) over all families (a_i)_{1≤i≤n}, (A_i)_{1≤i≤n} such that a_i > 0, A_i ∈ A_σ(E) and
||∑_{i=1}^n a_i A_i∗||_∞ = 1. (∗∗)
By (∗∗),
∑_{i=1}^n a_i A_i∗ ≤ 1
and therefore by (∗):
∑_{i=1}^n a_i v∗(A_i) ≤ 1,
so that σ ≤ 1. Therefore by Ky Fan's theorem there exists g ∈ B_∞(A_σ(E))′ of norm σ satisfying (i). Now, either v(E) = 0 for all E ∈ E, in which case the core of v is obviously non-empty, or there exists E ∈ E such that v(E) > 0, in which case the zero-functional is not a solution and therefore by Ky Fan's theorem, since 0 < σ ≤ 1, we obtain the desired functional by setting f = g/σ.

We now show that (2) is equivalent to (∗), which will complete the proof. First, we show that we can assume that equality holds in the premise of (∗), i.e. that (∗) is equivalent to
∑_{i=1}^n a_i A_i∗ = 1 ⇒ ∑_{i=1}^n a_i v∗(A_i) ≤ 1, (∗∗∗)
where n ∈ N∗, a_i > 0 and A_i ∈ A_σ(E). Indeed, (∗) obviously implies (∗∗∗). Now, in order to show that (∗∗∗) implies (∗), suppose that ∑_{i=1}^n a_i A_i∗ ≤ 1 and set
η = 1 − ∑_{i=1}^n a_i A_i∗.
Since η is a positive A_σ(E)-measurable function which takes only a finite number of values, there exist b_j > 0, B_j ∈ A_σ(E) such that
η = ∑_{j=1}^m b_j B_j∗.
So that
∑_{i=1}^n a_i A_i∗ + ∑_{j=1}^m b_j B_j∗ = 1.
Therefore by (∗∗∗),
∑_{i=1}^n a_i v∗(A_i) + ∑_{j=1}^m b_j v∗(B_j) ≤ 1,
and since b_j > 0 and v∗(B_j) ≥ 0, we conclude that
∑_{i=1}^n a_i v∗(A_i) ≤ 1.

We now show that (∗∗∗) is equivalent to (2), which will conclude the proof. It is obvious that (∗∗∗) implies (2). To show the converse implication, we shall make use of Wolf's trick (see Huber, 1981). Suppose that (2) holds and that (∗∗∗) is not satisfied. Then, for a certain choice of a_i and A_i, ∑_{i=1}^n a_i A_i∗ = 1 and ∑_{i=1}^n a_i v∗(A_i) > 1, where we can suppose that n is the minimum integer for which these two relations hold simultaneously. Obviously, since (2) is supposed to hold, the functions A_i∗ are linearly dependent and therefore there exist b_i ∈ R, not all equal to zero, such that
∑_{i=1}^n b_i A_i∗ = 0.
From which it follows that for all t ∈ R,
∑_{i=1}^n (a_i + t b_i) A_i∗ = 1.
Set I = {t ∈ R | a_i + t b_i ≥ 0 ∀i ∈ {1, . . . , n}}. Since ∑_{i=1}^n b_i A_i∗ = 0 and the coefficients are not all equal to zero, there exist i, j such that b_i > 0 and b_j < 0, so that for all t ∈ I,
−a_i/b_i ≤ t ≤ −a_j/b_j.
Hence I is a closed interval which contains 0 in its interior and it is bounded. Therefore the function t ↦ ∑_{i=1}^n (a_i + t b_i) v∗(A_i) reaches its maximum at an endpoint t_0 of I. At this point we still have
∑_{i=1}^n (a_i + t_0 b_i) A_i∗ = 1
while
∑_{i=1}^n (a_i + t_0 b_i) v∗(A_i) ≥ ∑_{i=1}^n a_i v∗(A_i) > 1.
But since t_0 is an endpoint of I, one of the coefficients a_i + t_0 b_i must be equal to zero, contradicting the minimality of n.

Proof of Theorem 4.2. (1) implies (2): Let ∑_{i=1}^n a_i A_i∗ = E∗ as in (2). Since v is exact, there exists P ∈ C(v) such that P(E) = v(E). By Lemma 3.2, P(A) ≥ v∗(A) for all A ∈ A_σ(E); therefore, since a_i > 0 if A_i ≠ Ω, we have:
∑_{i=1}^n a_i v∗(A_i) ≤ ∑_{i=1}^n a_i P(A_i) = ∫ (∑_{i=1}^n a_i A_i∗) dP = ∫ E∗ dP = P(E) = v(E).

(2) implies (1):
- In a first step, we show that (2) is sufficient without restricting the A_i∗ to form a linearly independent system. We must show that for all E ∈ E, there exists P ∈ C(v) such that P(E) = v(E). We need only consider E ≠ Ω, since otherwise condition (2) of Theorem 4.2 is exactly condition (2) of Theorem 4.1 and the existence of such a P follows from that theorem. Proceeding as in Theorem 4.1, given E ∈ E \ {Ω}, we have to find f ∈ B_∞(A_σ(E))′ satisfying the following conditions:
(i) f(Ω∗) ≥ 1
(ii) f(−Ω∗) ≥ −1
(iii) f(E∗) ≥ v(E)
(iv) f(−E∗) ≥ −v(E)
(v) f(A∗) ≥ v∗(A) ∀A ∈ A_σ(E) \ {Ω, E}.
Again as in Theorem 4.1, we derive the existence of such an f from Ky Fan's theorem. Here we have to show that the upper bound of the quantities
∑_{i=1}^n a_i v∗(A_i) + a v(E)
is finite, where E ∈ E, A_i ∈ A_σ(E) \ {E}, n ∈ N∗, a, a_i ∈ R (a_i > 0 if A_i ≠ Ω) are such that
||∑_{i=1}^n a_i A_i∗ + a E∗||_∞ = 1. (∗)
Since (∗) implies
∑_{i=1}^n a_i A_i∗ − Ω∗ + a E∗ ≤ 0,
it is clear that if we assume that for E ∈ E, A_i ∈ A_σ(E) \ {Ω, E}, n ∈ N∗, a, a_i ∈ R (a_i > 0 if A_i ≠ Ω),
∑_{i=1}^n a_i A_i∗ + a E∗ ≤ 0 ⇒ ∑_{i=1}^n a_i v∗(A_i) + a v(E) ≤ 0, (2′)
then σ ≤ 1, and, by applying Ky Fan's theorem, we can therefore find the desired functional, thereby showing that v is exact. The fact that (2′) implies (2) without restricting the A_i∗'s to be linearly independent can be straightforwardly obtained in a similar way as in Theorem 4.1. This ends the first step of the proof.
- In a second step, we intend to show that assuming, as in (2), that the A_i∗'s are linearly independent is sufficient. Let us reason ad absurdum. Assume that for some E ∈ E, n ∈ N∗, a_i ∈ R, A_i ∈ A_σ(E) \ {E} such that a_i > 0 if A_i ≠ Ω, the functions A_i∗ are linearly dependent and
∑_{i=1}^n a_i A_i∗ = E∗ and ∑_{i=1}^n a_i v∗(A_i) > v(E).
Let n be the minimum integer such that these relations are both satisfied and assume that A_i ∈ A_σ(E) \ {Ω, E} for 1 ≤ i ≤ n − 1 and A_n = Ω. Since the A_i∗'s are linearly dependent, there exist c_i ∈ R such that ∑_{i=1}^n c_i A_i∗ = 0, and indeed there exist i, j ∈ {1, . . . , n} such that c_i × c_j < 0. Several cases must be considered.
Case 1: There exist i, j ∈ {1, . . . , n − 1} such that c_i × c_j < 0. In such a case, it is easy to see that I := {t ∈ R | a_i + t c_i ≥ 0 ∀i ∈ {1, . . . , n − 1}} is a compact interval containing 0 in its interior. For a t belonging to I, ∑_{i=1}^n (a_i + t c_i) A_i∗ = E∗ and indeed a_i + t c_i ≥ 0 ∀i ∈ {1, . . . , n − 1}. Furthermore, g(t) := ∑_{i=1}^n (a_i + t c_i) v∗(A_i) is linear, therefore it reaches its maximum at an endpoint t_0 ∈ I, so since 0 ∈ I, one gets g(t_0) > v(E) and n is decreased by at least one. But n was minimal, which leads to a contradiction.
Case 2: For i ∈ {1, . . . , n − 1} all the c_i's are non-negative (a similar proof applies if the c_i's are non-positive). So indeed there exists c_{i_0} > 0 for some i_0 ∈ {1, . . . , n − 1} and necessarily c_n < 0. In such a case, it is easy to see that I := {t ∈ R | a_i + t c_i ≥ 0 ∀i ∈ {1, . . . , n − 1}} writes I = [t_0, +∞) with t_0 < 0. Furthermore, g(t) := ∑_{i=1}^{n−1} (a_i + t c_i) v∗(A_i) + (a_n + t c_n), so since c_n < 0 and 0 ∈ I, g reaches its maximum on I at t_0, and one gets g(t_0) > v(E), so that n is decreased by at least one. But n was minimal, which leads to a contradiction.

Proof of Theorem 4.3. We first prove that P_A(v) = P_A(ṽ) = C_A(ṽ).
P_A(v) = P_A(ṽ): Clearly P_A(ṽ) ⊂ P_A(v). For the converse inclusion, let P ∈ P_A(v) and A ∈ Ẽ.
- If A ∈ E, then P(A) = v(A) = ṽ(A).
- If A^c ∈ E, then P(A) = 1 − P(A^c) = 1 − v(A^c) = ṽ(A).
P_A(ṽ) = C_A(ṽ): It is obvious that P_A(ṽ) ⊂ C_A(ṽ). On the other hand, if P ∈ C_A(ṽ) and A ∈ Ẽ then, since 1 = P(A) + P(A^c) ≥ ṽ(A) + ṽ(A^c) = 1, it is clear that P(A) = ṽ(A).
(1) ⇒ (2): Let P be a probability on A which extends v and E ∈ E such that E^c ∈ E. Then, clearly, v(E) + v(E^c) = P(E) + P(E^c) = 1. Furthermore, since v is an extendable probability, it is obvious that ṽ is a capacity on Ẽ, so that (a) is satisfied. Now, from what we have just shown, P ∈ C_A(ṽ) = C_A(ṽ∗). Therefore, since a_i > 0,
∑_{i=1}^n a_i A_i∗ = 1 ⇒ ∑_{i=1}^n a_i ṽ∗(A_i) ≤ ∑_{i=1}^n a_i P(A_i) = ∫ (∑_{i=1}^n a_i A_i∗) dP = 1,
so that (b) is satisfied.
(2) ⇒ (1): (a) implies that ṽ is a well-defined capacity on Ẽ and, by Theorem 4.1, (b) implies that C_A(ṽ∗) is non-empty. Since we have already shown that C_A(ṽ∗) = P_A(v), we see that v is an extendable probability.
References

Azriely, Y., Lehrer, E., 2007. Extendable cooperative games. Journal of Public Economic Theory 9, 1069–1078.
Delbaen, F., 1974. Convex games and extreme points. Journal of Mathematical Analysis and Applications 45, 210–233.
Dempster, A.P., 1967. Upper and lower probabilities induced by a multivalued mapping. The Annals of Mathematical Statistics 38 (2), 325–339.
Huber, P.J., 1981. Robust Statistics. John Wiley and Sons.
Kannai, Y., 1969. Countably additive measures in cores of games. Journal of Mathematical Analysis and Applications 27, 227–240.
Kuhn, H.W., Tucker, A.W., 1956. Linear Inequalities and Related Systems. Princeton University Press, Study 38.
Nehring, K., 1999. Capacities and probabilistic beliefs: a precarious coexistence. Mathematical Social Sciences 38, 197–213.
Savage, L., 1954. The Foundations of Statistics. John Wiley, New York.
Schmeidler, D., 1972. Cores of exact games. I. Journal of Mathematical Analysis and Applications 40, 214–225.
Schmeidler, D., 1986. Integral representation without additivity. Proceedings of the American Mathematical Society 97, 255–261.
Shafer, G., 1976. A Mathematical Theory of Evidence. Princeton University Press, New Jersey.
Shafer, G., 1981. Constructive probability. Synthese 48, 1–59.
Shapley, L.S., 1971. Cores of convex games. International Journal of Game Theory 1, 11–26.
Teper, R., 2009. Time continuity and non-additive expected utility. Mathematics of Operations Research 34, 661–673.
von Neumann, J., Morgenstern, O., 1947. Theory of Games and Economic Behavior. Princeton University Press, New Jersey.