Journal of Mathematical Economics 60 (2015) 145–158

Regularity and robustness in monotone Bayesian games✩

A.W. Beggs
Wadham College, Oxford University, Oxford, OX1 3PN, United Kingdom

Article history: Received 15 September 2014; Received in revised form 13 May 2015; Accepted 1 July 2015; Available online 13 July 2015.

Abstract: This paper defines regular and weakly regular equilibria for monotone Bayesian games with one-dimensional actions and types. It analyzes the robustness of equilibria with respect to perturbations. It also proves an index theorem and provides applications to uniqueness of equilibrium. © 2015 Elsevier B.V. All rights reserved.

Keywords: Bayesian games; Monotone strategies; Robustness; Uniqueness; Index theory; Regularity

1. Introduction

Bayesian games of incomplete information are widely used in applications. A particular class which has drawn much attention is those with equilibria in which agents' actions are an increasing function of their signal, for example where the price a firm charges is an increasing function of better news about demand. One reason for their popularity is that Athey (2001), McAdams (2003) and Reny (2011) have established the existence of monotone pure strategy equilibria in increasing degrees of generality. Existence of equilibria is of little interest, however, if they are fragile to small perturbations of the game. This paper gives conditions for their robustness. It also analyzes their uniqueness.

The paper considers Bayesian games with one-dimensional types and finitely many actions. These are commonly used in applications, for example in oligopoly theory or in global games. It considers equilibria in which agents' strategies are monotone, that is, the action taken is increasing in their type or signal, and introduces a concept of regular equilibria. Regular equilibria are locally unique and vary smoothly in response to changes in parameters. This is a generalization of Harsanyi (1973)'s notion of regularity of Nash equilibrium in games of complete information. As an application, it is shown that equilibria of games of complete information

✩ An earlier version of this paper was circulated under the title ‘Regularity and Stability in Monotone Bayesian Games’ and also contained material on dynamic stability, an expanded version of which will now appear in a separate paper. E-mail address: [email protected].

http://dx.doi.org/10.1016/j.jmateco.2015.07.002 0304-4068/© 2015 Elsevier B.V. All rights reserved.

which are regular in Harsanyi (1973)'s sense are robust to the addition of small amounts of incomplete information that preserve the monotonicity of the game.

Jackson et al. (2012) recently showed that ϵ-equilibria in general Bayesian games are robust to perturbations. They argue that their results show that many refinements of equilibrium, such as global games, require perturbations which allow for behavior very different to that in the original game. The current paper studies a more restricted class of games but obtains stronger results, in that conditions for robustness of exact equilibria are given. The paper also considers weaker notions of regularity where equilibria may not change smoothly but are locally unique and are Lipschitz-continuous functions of parameters. This work draws on recent results in the literature on operations research.

The paper also proves an index theorem. This can also be used to give conditions for persistence of equilibria and in addition to give conditions for uniqueness of equilibria. Some games with monotone equilibria are supermodular and for these the tools of monotone comparative statics are available (see for example Topkis (1998) and Milgrom and Shannon (1994)) to analyze directions of change of the set of equilibria in response to changes in parameters. Those methods do not, however, allow one to say whether a particular equilibrium is robust to perturbations.

In more detail, the paper proceeds as follows. Section 2 introduces the basic framework and Section 3 lays out the basic equilibrium conditions. It shows they can be represented by a set of first-order conditions. This representation may be of independent interest. Section 4 introduces the concepts of regularity and weak regularity. The definition of regularity is similar to that found in
general equilibrium theory and that due to Harsanyi (1973) in finite normal-form games. If an equilibrium is regular then it is a continuously differentiable function of parameters. The definition of weak regularity is less familiar and draws on developments in mathematical programming. In particular it uses an implicit function theorem due to Robinson (1995) which extends the standard implicit function theorem to cases where equilibria lie on the boundary. In the current case it allows one to deal with cases where the actions played in equilibrium may change if there are small perturbations. If an equilibrium is weakly regular then it is a Lipschitz-continuous function of parameters. Sufficient conditions for weak regularity are given in terms of diagonal-dominance conditions, that is, conditions under which own signals have a sufficiently strong effect on payoffs.

Section 5 proves an index theorem. It shows that the index can be used to analyze persistence of equilibria even when they are not weakly regular. Section 6 examines the relationship to Harsanyi's concept of regularity and shows that the definition of regularity here can be regarded as a generalization of his. It provides a strengthened result on the stability of equilibria in games of complete information which are regular in his sense. In particular, such equilibria are robust to perturbations in which signals are correlated, not merely when they are independent, as Harsanyi assumed. It follows that games of complete information with multiple regular equilibria continue to have multiple equilibria if they are slightly perturbed. Section 7 provides applications of the index theorem to uniqueness, including to the case of global games. Diagonal-dominance conditions again play a key role. These results apparently contrast with those in Section 6 but can be understood in terms of the different limiting environments being perturbed.
Carlsson and van Damme (1993) and subsequent authors have discussed this comparison, but mainly in the case of normal errors; the current paper allows more general results. Section 8 shows that in a reasonable sense regular equilibria are generic. Since in specific applications the games considered may be non-generic in the space of all monotone ones, the weaker conditions for robustness and local uniqueness in this paper remain of interest. Section 9 concludes.

2. The model

This section describes the model. Attention is restricted to pure strategy Bayesian equilibria. There are n players and each observes a signal ti, i = 1, . . . , n, drawn from a set Ti. Let T = T1 × · · · × Tn. After observing his signal each player then takes an action ai drawn from a set Ai. Let A = A1 × · · · × An. Payoffs and the distribution of signals may depend on a parameter θ ∈ Θ, where Θ is an open subset of a Banach space. In most applications Θ is an open subset of Euclidean space.

Assumption 1. Ti is the interval [0, 1] for all i.

The assumption that all players share the same type space is without loss of generality and simplifies the notation.

Assumption 2. The signals have a joint density f on T with respect to Lebesgue measure which is continuous in t and θ and strictly positive.

F denotes the cumulative distribution function of f.

Assumption 3. Ai is a strictly ordered finite set with m(i) + 1 elements, 0, . . . , m(i), for i = 1, . . . , n.

Each player i has a bounded measurable payoff function Ui(a, t, θ) where a ∈ A, t ∈ T and θ ∈ Θ. Often the analysis is concerned with a fixed value of θ, so it will be suppressed from the notation.

Assumption 4. Ui(a, t, θ) is jointly continuous in t and θ for each a ∈ A, i = 1, . . . , n, and f is jointly continuous in t and θ.

For some of the analysis a stronger assumption will be made1:

Assumption 5. In addition to Assumption 4, Ui is differentiable in ti for each i, a and θ with a derivative which is continuous in t and θ, and f is continuously differentiable with respect to t and θ.

1 Recall that a function h on an arbitrary domain, B, is said to be differentiable if there is a function k defined on an open set O containing B which is differentiable and whose restriction to B equals h. In Assumption 5 the requirement on Ui is defined to mean that Ui has an extension to an open set containing T which is differentiable in ti.

A strategy for player i is a measurable function σi : Ti → Ai. It is said to be increasing if σi(ti′) ≥ σi(ti) for all ti and ti′ with ti′ ≥ ti. Let F(t−i | ti) denote the conditional distribution function of signals in T−i = ∏_{j≠i} Tj given ti, f(t−i | ti) be their density and fi(ti) be the marginal density of ti. If opponents use measurable strategies σj : Tj → Aj, j ≠ i, then i's interim expected payoff conditional on receiving signal ti and taking action ai, denoting by σ−i the vector of opponents' strategies, is

    Vi(ai, ti; σ−i) = ∫_{t−i} Ui(ai, σ−i(t−i), t) dF(t−i | ti).

σ−i will often be suppressed from the notation. Player i's ex ante expected payoff from using strategy σi is

    Wi(σi, σ−i) = ∫_T Ui(σi(ti), σ−i(t−i), t) dF = ∫_{ti} Vi(σi(ti), ti; σ−i) fi(ti) dti.

A set of strategies σ∗ = (σ1∗, . . . , σn∗) is an equilibrium if for each i, Wi(σi∗, σ−i∗) ≥ Wi(σi, σ−i∗) for all strategies σi of player i.

Recall that a real-valued function g(x, t) of two variables on two ordered sets satisfies single crossing in (x, t) if g(x′, t) ≥ (>) g(x, t) ⇒ g(x′, t′) ≥ (>) g(x, t′) for all x′ > x and t′ > t.

Assumption 6. If all players apart from i, i = 1, . . . , n, use increasing strategies σj : Tj → Aj then Vi satisfies single crossing in (ai, ti) for each θ.

If Assumption 6 holds then if all players apart from i use increasing strategies, i has a best reply which is increasing (see Athey (2001) and Milgrom and Shannon (1994)). Existence of equilibrium under the assumptions above follows from the general results of Athey (2001). As noted in Athey (2001), sufficient conditions for Assumption 6 are:

1. For all i, (i) Ui(a, t) is supermodular in a and in (ai, tj), j = 1, . . . , n, and (ii) ∫_S dF(t−i | ti) is increasing in ti for all sets S whose indicator functions are increasing (affiliation is sufficient for (ii), see Athey (1996) and Milgrom and Weber (1982)).
2. For all i, (i) Ui(a, t) is non-negative and log-supermodular in (a, t) and (ii) types are affiliated.

If all other players use increasing strategies, then under (i) Vi(ai, ti) is supermodular in (ai, ti) and under (ii) Vi(ai, ti) is log-supermodular, so Assumption 6 is satisfied.
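Under Assumption 3 an increasing strategy takes finitely many values, so it is a step function of the signal. A minimal sketch (hypothetical cutoff values, anticipating the cutoff representation developed in Section 3):

```python
import numpy as np

def increasing_strategy(cutoffs):
    """Return sigma: [0,1] -> {0,...,m} which plays action j once the
    signal passes the j-th cutoff. `cutoffs` must be sorted."""
    c = np.asarray(cutoffs, dtype=float)
    return lambda t: int(np.searchsorted(c, t, side="right"))

# Hypothetical cutoffs k_i1 = 0.3, k_i2 = 0.7 for a three-action player.
sigma = increasing_strategy([0.3, 0.7])
print(sigma(0.1), sigma(0.5), sigma(0.9))  # 0 1 2
```

The representation is deliberately minimal: any increasing strategy with finitely many actions is determined, up to a null set, by such a vector of cutoffs.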


3. Equilibrium conditions

This section states the basic equilibrium conditions and shows they are equivalent to a variational inequality.

As described by Athey (2001), a monotone strategy can be specified by a vector of cutoffs. For player i, kij = sup{ti ∈ [0, 1] | σi(ti) < j} denotes the signal at which he first plays action j, for 1 ≤ j ≤ m(i). The dummy cutoffs ki0 = 0 and ki,m(i)+1 = 1 are sometimes useful for notational purposes. Player i's strategy set is

    Σi ≡ {ki | 0 ≤ ki1 ≤ ki2 ≤ · · · ≤ ki,m(i) ≤ 1}.

Let Σ = Σ1 × · · · × Σn. Note that player 1's payoff conditional on receiving signal t1 and taking action a1, Vi as defined in Section 2, can be written as

    V1(a1, t1) = ∑_{l2=0}^{m(2)} · · · ∑_{ln=0}^{m(n)} ∫_{t2=k2,l2}^{k2,l2+1} · · · ∫_{tn=kn,ln}^{kn,ln+1} U1(a1, a_{l2}, . . . , a_{ln}, t) dF(t−1 | t1)

and similarly for other actions and players.

If an action is played over an interval of positive measure then it will be said to be active, otherwise it is inactive. An action j is active if and only if ki,j+1 − kij > 0. For any pair of actions j and l of player i let ΔUi(j, l, a−i, t) = Ui(l, a−i, t) − Ui(j, a−i, t) be the change in payoff from switching from action j to action l and

    ΔVi(j, l, ti) = ∫_{t−i} ΔUi(j, l, σ−i(t−i), t) dF(t−i | ti).

Call two active actions adjacent if there is no active action lying between them.

Lemma 1. If other players use increasing strategies then under Assumptions 1–4 and 6, a set of cutoffs ki for player i corresponds to a strategy which is an optimal response if and only if:

(a) If j and l are adjacent active actions then
    (i) ΔVi(j, l, kil) = 0.    (1)
    (ii) For all p with j ≤ p < l, ΔVi(j, p, kil) ≤ 0.    (2)
(b) If j is the least active action then for all l ≤ j,
    ΔVi(l, j, ki0) ≥ 0.    (3)
(c) If j is the greatest active action then for all l ≥ j,
    ΔVi(j, l, ki,m(i)+1) ≤ 0.    (4)

Condition (a)(i) simply expresses the fact that the player must be indifferent at the switch point between two active actions, since under Assumptions 1–4 ΔVi(j, l, ti) is continuous in ti. Conditions (a)(ii), (b) and (c) rule out payoffs being raised by using a currently inactive action. Note that if an action is not used then an active action must come into use at exactly the same cutoff. The result follows directly from single crossing (Assumption 6). If j < l are indifferent at kil then by single crossing j must be (weakly) better for all lower signals and l weakly better for all higher signals. Iterating this argument yields that j is (weakly) better than all active actions in the range in which it is active. A similar argument implies that j is better than any inactive action in this range as well.

An equivalent characterization of equilibrium can be given from an ex ante perspective. A player's ex ante payoffs as a function of his cutoffs can be written, given the strategies of the other players k−i, as

    Wi(ki; k−i) = ∑_{j=0}^{m(i)} ∫_{kij}^{ki,j+1} Vi(aj, ti) fi(ti) dti.    (5)

An optimal strategy must maximize Vi for almost every ti and so one would obtain the same optimal strategy if one replaced fi by the uniform density and instead maximized

    W̃i(ki; k−i) = ∑_{j=0}^{m(i)} ∫_{kij}^{ki,j+1} Vi(aj, ti) dti.    (6)

Under Assumptions 1–4, W̃i is continuously differentiable in ki with

    ∂W̃i/∂kij = −ΔVi(j − 1, j, kij).    (7)

For a closed convex set X, the normal cone at x is defined to be NX(x) = {y : ⟨y, z − x⟩ ≤ 0 for all z ∈ X}. NX(x) represents the set of 'normal vectors' to X at x. The first-order necessary conditions for ki to maximize W̃i can be written as

    ∇W̃i(ki) ∈ NΣi(ki)    (8)

where ∇W̃i denotes the vector of partial derivatives of W̃i. That is, any direction of increase of W̃i must point outside Σi (see Borwein and Lewis (2006), Proposition 2.1). From (7), (8) is equivalent to

    −ΔVi ∈ NΣi(ki)    (9)

where ΔVi is a vector with components ΔVi(j − 1, j, kij). (9) is an example of a variational inequality. A comprehensive treatment of such problems can be found in Facchinei and Pang (2003). One then has (proof in the Appendix):

Lemma 2. If other players use increasing strategies then under Assumptions 1–4 and 6, the first-order necessary conditions for ki to be an optimal ex ante response are equivalent to the conditions given in Lemma 1 and so in particular are sufficient for optimality. The first-order conditions for maximizing Wi are equivalent to those for maximizing W̃i.

The fact that the first-order conditions for maximizing Wi are equivalent to those for maximizing W̃i follows from the observation above that in order to maximize ex ante payoffs, Wi, an optimal strategy must specify that the agent pick an action which maximizes interim payoffs, Vi, for almost every signal ti. The same is true if he wishes to maximize W̃i. Since fi is strictly positive the sets of measure zero are the same under it as under the uniform distribution, and so the agent must employ the same strategy, up to a null set, under fi as under the uniform distribution provided, as is the case here, he has the same conditional beliefs about the other players' types (f(t−i | ti)) and so the same interim payoffs (Vi). So, for example, if there are two actions the player will switch between them at the point where his payoff conditional on his signal is the same for both actions. The location of this point does not depend on his prior belief about his type (fi(ti)) but only on his conditional belief about the distribution of other players' types given his signal (f(t−i | ti)). It follows that the equilibrium cutoffs must be the same whether one maximizes Wi or W̃i, and the proof in the Appendix shows that the first-order conditions are equivalent. The fact that the first-order conditions are equivalent to those in Lemma 1 implies that they are sufficient for optimality. For
some intuition for sufficiency, note that from (7) if payoffs are supermodular then W̃i is concave, as ΔVi is decreasing in kij. More generally, single crossing implies that (7) changes sign at most once, and then from positive to negative, which is a sufficient condition for optimality of the first-order conditions in a one-dimensional problem. A similar argument applies here. A more detailed discussion can be found in Beggs (2011).

4. Local uniqueness and sensitivity

A basic question is the robustness of equilibria to small perturbations in the data of the problem: if the data are perturbed slightly, is there an equilibrium close to the original one? The most general condition for this is that the equilibrium has a non-zero index, which is studied in the next section. In this section, attention is focused on more restrictive conditions which give stronger conclusions. In particular, conditions are given under which an equilibrium is and remains locally unique under perturbations and varies in a smooth or Lipschitz-continuous manner.

Consider the equilibrium correspondence which gives for each value of θ the corresponding set of equilibrium cutoffs. A trivial result is that it is upper hemi-continuous:

Lemma 3. Under Assumptions 1–4 and 6, the equilibrium correspondence is upper hemi-continuous in θ.

This follows immediately from Lemma 1 and the continuity of Vi in k and θ. Lower hemi-continuity or stronger versions of robustness are of more interest. Equation (1) in Lemma 1 gives a set of equations which must be satisfied by the active actions:

    ΔVi(j, l, kil; k−i) = 0    (10)

where j and l are adjacent active actions. For clarity the cutoffs of the other players, k−i, are explicitly noted in ΔVi to emphasize that these equilibrium conditions depend on the choices of the other players. Provided the inequalities in Lemma 1 are strict, they will continue to hold if there are small perturbations to the game and can be neglected: currently inactive actions will remain so if the game is perturbed. One can apply the implicit function theorem to (10) to determine the effect of perturbations.

If one combines (10) for all the players, one obtains a set of equations that must be satisfied by the cutoffs at which players switch between active actions. Note that only the active actions affect these equations. One can therefore regard the left-hand side of (10) as defining a function Φ(kα; α(k∗)), where α(k∗) denotes the set of active actions and kα the corresponding cutoffs. That is, as the cutoffs at which players switch between active actions are varied, the other cutoffs are also varied to keep the remaining actions inactive. For example, if player i has three actions, 0, 1, 2, with 0 and 2 active and 1 inactive, then 0 < ki1 = ki2 < 1. As ki2 is varied, ki1 is changed to keep ki1 = ki2 and action 1 inactive. Equivalently, action 1 and its cutoff are dropped. Note that if a player has only one active action, then he contributes no equation to Φ.

Assumption 5 implies immediately2:

Lemma 4. Under Assumption 5, ΔVi(j, l, t) is continuously differentiable in t.

2 See, for example, Billingsley (1995) Theorem 16.8.
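To illustrate the use of the implicit function theorem on (10), consider a hypothetical two-player, two-action game with linear interim payoff differences ΔVi = ki − α − γkj (the same form as Example 1 below). A sketch of the resulting comparative statics, dk/dα = −J⁻¹ ∂Φ/∂α, where J is the Jacobian of Φ with respect to the cutoffs; the numbers are purely illustrative:

```python
import numpy as np

gamma, alpha = 0.5, 0.2
# Phi(k; alpha) = (k1 - alpha - gamma*k2, k2 - alpha - gamma*k1) = 0
J = np.array([[1.0, -gamma],
              [-gamma, 1.0]])          # Jacobian of Phi w.r.t. (k1, k2)
dPhi_dalpha = np.array([-1.0, -1.0])   # derivative of Phi w.r.t. alpha

# Implicit function theorem: dk/dalpha = -J^{-1} dPhi/dalpha
dk = -np.linalg.solve(J, dPhi_dalpha)
print(dk)  # both components equal 1/(1 - gamma) = 2.0
```

This matches differentiating the closed-form cutoff k = α/(1 − γ) directly, and the computation is valid precisely when J is non-singular, i.e. at a regular equilibrium.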

Definition 1. An equilibrium k∗ will be said to be regular if (i) the system of equations Φ(kα; α(k∗)) has a non-singular Jacobian at k∗ with respect to kα, and (ii) all inequalities in (2)–(4) are strict. If every player has only one active action, then Φ is empty and this will be treated as satisfying (i).

This definition is similar to that for finite normal-form games given by Gul et al. (1993). In Section 6 it will be shown to be an extension of Harsanyi's definition for normal-form games. In Section 8 it is shown that if perturbations are rich enough then generically all equilibria are regular. If one is content to analyze generic situations then regularity is, therefore, an innocuous assumption. If one is not, then the concept of weak regularity introduced below allows one to make progress.

If the inequalities in Lemma 1 are not all strict then there are inactive actions which are indifferent to some active action at its cutoff; let this set be βi(k∗). These actions may become active if the game is perturbed slightly. Consider the reduced game obtained by allowing player i to choose from αi(k∗) and some elements βi′ ⊆ βi(k∗). Call this set γi (that is, γi = αi(k∗) ∪ βi′) and let γ = ∏i γi. Let Γ(k∗) be the set of all γ which can be formed in this way. By definition of βi(k∗) each element of βi′ is indifferent to some active action, so the set of indifference conditions in Φ can be augmented to require that each action in γi is indifferent to the succeeding action at its cutoff:

    ΔVi(j, l, kil; k−i) = 0 for all adjacent actions j < l in γi.

Call the resulting set of equations Φ(kγ; γ), where kγ denotes the set of cutoffs corresponding to the actions in γ.

Definition 2. An equilibrium k∗ will be said to be weakly regular if the determinant of the Jacobian of Φ(kγ; γ) with respect to kγ has the same non-zero sign at k∗ for all γ ∈ Γ(k∗). If each player has only one active action for some γ, then the corresponding Jacobian is formally defined to have sign +1.

Theorem 1. Under Assumptions 1–6:
(i) If an equilibrium k∗ is weakly regular then it is locally unique. If all equilibria are weakly regular, then there is a finite number of equilibria.
(ii) If an equilibrium k∗ is weakly regular for some θ∗, then there are open neighborhoods O of θ∗ and Ω of k∗ and a function k(θ) : O −→ Ω which is Lipschitz continuous and directionally differentiable in θ such that k(θ) is the unique equilibrium in Ω.
(iii) If the equilibrium is regular, then Lipschitz continuous can be replaced by continuously differentiable in the statement of (ii).

The proof is in the Appendix, following the proofs of the material in Section 5, as some of the concepts there shorten the proofs. In the regular case local uniqueness and part (iii) are standard consequences of the implicit function theorem. In the weakly regular case, they follow from results of Robinson (2003, 1995), summarized in the Appendix, which give a generalization of the implicit function theorem. Weak regularity is an example of what Robinson (2003) calls a coherent orientation condition.

If the equilibrium is not regular, then small perturbations may cause different sets of constraints to bind. Equivalently, the equilibrium will lie on a different face of the constraint set. Weak regularity guarantees that the equations agree on whether the determinant of the Jacobian of the system is positive no matter which face one regards the equilibrium as belonging to.
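Definition 2 can be checked mechanically: compute the sign of the determinant of the Jacobian for every γ ∈ Γ(k∗). A sketch for a hypothetical two-player, one-cutoff game in which each γ contributes the corresponding principal submatrix of a fixed Jacobian J (the convention that an empty system has sign +1 is built in; the numerical J below is an illustrative assumption):

```python
import numpy as np
from itertools import combinations

def weakly_regular(J, tol=1e-12):
    """Check that the determinant of every principal submatrix of J
    (one per subset of retained indifference equations) has the same
    non-zero sign; the empty subset counts as +1, as in Definition 2."""
    n = J.shape[0]
    signs = set()
    for r in range(n + 1):
        for idx in combinations(range(n), r):
            idx = list(idx)
            d = np.linalg.det(J[np.ix_(idx, idx)]) if idx else 1.0
            if abs(d) < tol:
                return False          # singular: not weakly regular
            signs.add(d > 0)
    return len(signs) == 1

for gamma in (0.5, 1.5):
    J = np.array([[1.0, -gamma], [-gamma, 1.0]])
    print(gamma, weakly_regular(J))  # True for 0.5, False for 1.5
```

With this J the check reduces to positivity of all principal minors, which is exactly the P-matrix property invoked in Lemma 5 below.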
Robinson's generalization of the implicit function theorem shows that the solution to the system is locally a single-valued and Lipschitz-continuous function of the parameters.3 Continuous differentiability cannot be hoped for in general if different sets of constraints may bind according to how the system is perturbed. A broad perspective on this kind of extension to the implicit function theorem can be found in Dontchev and Rockafellar (2009).

A sufficient condition for weak regularity is:

Lemma 5. Suppose that the Jacobian of Φ(kγ; γ) is a dominant diagonal matrix with positive diagonal at k∗ for any γ ∈ Γ. Then k∗ is weakly regular.

The proof is immediate. A dominant diagonal matrix with all diagonal elements positive is a P-matrix: that is, all of its principal minors are positive (see for example Nikaido (1968)). In particular, the condition guarantees that the Jacobian of Φ(kγ; γ) has a positive determinant at k∗ for all γ ∈ Γ, and so by Definition 2 the equilibrium is weakly regular. In essence the condition requires that the effect of each agent's own signal always dominates the effect of changes in other players' cutoffs.

To illustrate these results consider this example:

Example 1. The following may be thought of as a public goods game where there is some benefit to contributing even if the other player does not (see Fig. 1): t1 and t2 are independently and uniformly distributed on [0, 1] and γ is positive.

[Fig. 1. A public goods game.]

This game is supermodular and satisfies Assumptions 1–6. In an equilibrium player i switches from action 0 to 1 at a cutoff ki. The expected increase in payoff from playing action 1 rather than 0 if player i receives signal ki and player j's cutoff is kj is ΔVi = ki − α − γkj. When α = 0 there is always an equilibrium with (k1, k2) = (0, 0). An agent never plays action 0 but is indifferent between it and action 1 at the cutoff 0, so the equilibrium is not regular.

If γ < 1, the dominant-diagonal condition ∂ΔVi/∂ki = 1 > γ = |∂ΔVi/∂kj| is satisfied, so the equilibrium (0, 0) is weakly regular. As a result equilibrium behavior is Lipschitz-continuous around α = 0: for α < 0 the unique equilibrium is (0, 0) and for α > 0 the unique equilibrium is (α/(1 − γ), α/(1 − γ)).

If γ > 1, the dominant-diagonal condition does not hold. The equilibrium is not weakly regular: the Jacobian of ΔV1 and ΔV2 has a negative determinant, but recall that if (0, 0) is treated as a pure equilibrium (dropping action 0 for both players) the sign is defined to be +1. If α < 0 the equilibria are (0, 0), (α/(1 − γ), α/(1 − γ)) and (1, 1), while for α > 0 the unique equilibrium is (1, 1), so behavior around α = 0 is not Lipschitz-continuous at (0, 0).

The results of this section rely on differentiability of payoff functions. Even if payoff functions are merely continuous, one can make some progress on robustness using index theory. The next section turns to this subject.

5. An index theorem

In this section an index theorem is stated. To this end a map whose fixed points coincide with the equilibria of the game is introduced. The construction used is similar to Kehoe (1980) in general equilibrium theory and Gul et al. (1993) in game theory. Section 5.1 constructs the map and interprets its properties. Section 5.2 gives a brief introduction to index theory. Section 5.3 states the main results in the current context. Section 5.4 applies the results to give a general condition for the persistence of equilibria.

5.1. The mapping

Consider the map h defined on Σ given by

    hij(k1, . . . , kn) = kij − ΔVi(j − 1, j, kij)

for 1 ≤ j ≤ m(i), or in vector notation h = k − ΔV, where ΔV is the vector with components ΔVi(j − 1, j, kij). Intuitively, if ΔVi(j − 1, j, kij) > 0, then action j is better than j − 1 at kij, so player i should lower his cutoff. The image of h does not usually lie in Σ, so h is composed with the orthogonal projection map onto Σ, ΠΣ. The map considered is then g = ΠΣ ◦ h. Note that as Σ is a product set, g is obtained by combining the projections of each agent's strategy onto their own strategy set. Since the projection mapping onto a closed, convex set is continuous, it follows immediately that g is continuous. Furthermore:

Lemma 6. g is a continuous mapping. The fixed points of g coincide with the equilibria of the game.

By the projection theorem,4 k∗ = ΠΣ(k∗ − ΔV) if and only if

    −ΔV ∈ NΣ(k∗).    (11)

Since NΣ(k∗) = ∏i NΣi(ki∗), this holds if and only if the first-order conditions for each agent, (9) in Section 3, hold, so k∗ is an equilibrium if and only if it is a fixed point.
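The map g = ΠΣ ◦ h can be iterated numerically. A sketch using the public goods game of Example 1 (ΔVi = ki − α − γkj, one cutoff per player, so the projection onto Σ is just truncation to [0, 1]); the iteration is an illustration of the construction, not a procedure used in the paper:

```python
import numpy as np

def iterate_g(alpha, gamma, k0=(0.5, 0.5), n_iter=500):
    """Iterate k <- Pi_Sigma(k - DeltaV(k)) for Example 1's game."""
    k = np.array(k0, dtype=float)
    for _ in range(n_iter):
        dV = k - alpha - gamma * k[::-1]  # DeltaV_i = k_i - alpha - gamma*k_j
        k = np.clip(k - dV, 0.0, 1.0)     # projection onto Sigma = [0,1]^2
    return k

k_star = iterate_g(alpha=0.2, gamma=0.5)
print(k_star)  # converges to alpha/(1 - gamma) = (0.4, 0.4)
```

For γ < 1 the update is a contraction and converges to the unique fixed point, which is the equilibrium characterized by (11).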



5.2. Index theory

For an open set O of Rp, some p, let F(O, Rp) denote the set of all continuous maps from O to Rp with a compact (possibly empty) set of fixed points. The fixed point index is the unique function (see Granas and Dugundji (2003), p. 305 onwards, for example) which assigns to any element ψ of F(O, Rp), for any O, an integer I(ψ, O) which has properties (1)–(3) below:

1. Normalization: If ψ : O −→ Rp is the constant map ω → ω0, ω0 ∈ O, then I(ψ, O) = 1.
2. Additivity: For any pair of disjoint open sets O1, O2 ⊂ O, if the fixed points of ψ are contained in O1 ∪ O2, then I(ψ, O) = I(ψ, O1) + I(ψ, O2).

3 Given that the solution is single-valued and continuous one can deduce Lipschitz-continuity from elementary considerations; see for example Mas-Colell (1985) 2.7.3 in the context of demand theory. Without appeal to Robinson's generalization of the implicit function theorem, however, the former is not evident.

4 See for example Borwein and Lewis (2006), p. 19, exercise 8.



3. Homotopy invariance: Let H be a continuous function H : O × [0, 1] → Rp and let Ht denote the function o → H(o, t). If ∪t {o : o is a fixed point of Ht} is compact, then I(H0, O) = I(H1, O).

From these three properties others follow, including:

4. Excision: If O1 ⊂ O2 and the fixed points of ψ are contained in O1, then I(ψ, O1) = I(ψ, O2).
5. Existence: If I(ψ, O) ≠ 0 then ψ has a fixed point in O.

If ψ ∈ F(O2, Rp) and O1 ⊆ O2 then I(ψ|O1, O1) is abbreviated as I(ψ, O1). If ψ has an isolated fixed point o∗, its index I(ψ, O) is the same for any open set O containing o∗ in which it is the sole fixed point. Call this common value Indψ(o∗), or Ind(o∗) if the context is clear. If ψ is differentiable at o∗, o∗ will be said to be regular if 1 is not an eigenvalue of Dψ. The index of ψ at a regular fixed point o∗ equals

    Indψ(o∗) = sgn det(I − Dψ|o∗).    (13)
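At an interior regular fixed point the projection is locally the identity, so (13) reduces to the sign of the determinant of the Jacobian of ΔV. A sketch, again borrowing the linear payoffs of Example 1 purely for illustration:

```python
import numpy as np

def index_at_interior(J_dV):
    """Index via (13): near an interior fixed point Dg = I - J_dV,
    so Ind = sgn det(I - Dg) = sgn det(J_dV)."""
    return int(np.sign(np.linalg.det(J_dV)))

for gamma in (0.5, 1.5):
    J = np.array([[1.0, -gamma], [-gamma, 1.0]])  # Jacobian of DeltaV
    print(gamma, index_at_interior(J))  # +1 for gamma=0.5, -1 for gamma=1.5
```

Here det J = 1 − γ², so the interior equilibrium has index +1 exactly when the dominant-diagonal condition γ < 1 holds, consistent with Theorem 3 below.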

McLennan (2014) gives a treatment of index theory for economists using (13) as a starting point and extending the index to continuous functions by approximation.

5.3. Results

In order to apply the results of the previous subsection one needs to extend the domain of g to an open set containing Σ. Theorem 4 shows that the index does not depend on the extension in the regular and weakly regular cases.

Lemma 7. Under Assumption 4 (respectively Assumption 5) h has an extension to a continuous (differentiable under Assumption 5) map, h̃, defined on an open set Ψ containing Σ, so g can be extended to a continuous map g̃ = ΠΣ(h̃).

If all equilibria are weakly regular, and a fortiori if they are regular, they are isolated and finite in number. Let Eq denote the set of equilibria of the game.

Theorem 2. Under Assumptions 1–4 and 6, if the game has a finite number of equilibria then

    Ind(g̃, Ψ) = ∑_{k∗ ∈ Eq} Ind(k∗) = 1.

In particular this holds if all equilibria are regular or weakly regular. A proof can be found in the Appendix. The result follows from properties (1)–(3) of the index.

It will be shown that the index can be computed straightforwardly in the regular and weakly regular cases. In particular it will be shown that a fixed point is regular in the sense of the previous subsection if and only if it corresponds to a regular equilibrium. In the case at hand, h̃ is differentiable in a neighborhood of Σ by construction. The only obstacle to differentiability of g̃ is that of ΠΣ. Σ is a polytope and so ΠΣ will be differentiable at a point k provided all points in a neighborhood of k project into the same face. At a regular equilibrium this is guaranteed by condition (ii) in Definition 1. The other condition needed is that 1 is not an eigenvalue of the derivative of g̃ at a fixed point. This is guaranteed by condition (i) in the definition of regularity.

Theorem 3. Under Assumptions 1–6, at a regular equilibrium k∗, Ind(k∗) is equal to the sign of the determinant of the Jacobian of Φ(kα; α(k∗)) at k∗.

Intuitively, at a regular equilibrium one can neglect the inactive strategies, so the index is the same as in a game with only the active strategies. In such a game ΠΣ is the identity, so I − ΠΣ(I − ΔV) = ΔV. The proof follows from standard linear algebra and is in the Appendix.

A similar expression for the index holds in the case of weak regularity. At a weakly regular equilibrium, elements near to k∗ may project into different faces; g is therefore not in general smooth. Weak regularity, however, guarantees that the expression for the index is the same whatever face one regards the mapping as projecting into.

Theorem 4. Under Assumptions 1–6, if an equilibrium is weakly regular Ind(k∗) equals the common value of the sign of the determinant of the Jacobian of Φ(kγ; γ) (γ ∈ Γ) at k∗.

The result is proven in the Appendix by considering a perturbation of g̃ which has a regular equilibrium at k∗ and using the homotopy invariance property (property (3) of the index) to compute the index in the original game.5 Note that this implies that if all equilibria are weakly regular, the number of equilibria is odd. Section 8 gives conditions under which equilibria are generically regular and a fortiori weakly regular.

Even without differentiability one can often compute the index at an isolated fixed point by using the homotopy invariance property of the index. For example the following well known result, which follows directly from properties (1) and (3) of the index,6 will be of use in Section 7.

Lemma 8. Let ψ be a continuous function with no fixed point on the boundary, ∂O, of the open set O, which has compact closure. Given x0 ∈ O, if for all x ∈ ∂O there is no λ ∈ (0, 1) such that λ(ψ(x) − x0) = x − x0, then Ind(O, ψ) = 1.

(The condition rules out fixed points of the homotopy H(x, λ) = λψ(x) + (1 − λ)x0 on ∂O for any λ ∈ [0, 1], so by homotopy invariance Ind(O, ψ) equals the index of the constant map at x0, which is 1 by normalization.)

5.4. Persistence of equilibria

The index can also be used to prove stability properties of an equilibrium in the absence of regularity.
It follows from properties (3) and (5) of the index that if an equilibrium has non-zero index then any game sufficiently close to the current one will have at least one equilibrium close to the current one:

Theorem 5. Under Assumptions 1–4 and 6, if k∗ is an isolated equilibrium with Ind(k∗) ≠ 0 for θ = θ0, then given any neighborhood O of k∗, there is an equilibrium lying in O for all θ sufficiently close to θ0.

In the absence of weak regularity, however, one cannot in general guarantee that there is only one equilibrium in the stated neighborhood. This section has focused on isolated equilibria but one can extend the analysis to sets of equilibria. In particular one can extend the theorem above to consider the persistence of a set of equilibria.7

6. Relation to Harsanyi's concept of regularity

Harsanyi (1973) introduced a concept of regular equilibria for finite normal-form games and showed that regular mixed equilibria could be regarded as limits of pure equilibria of appropriately perturbed games. In this section it will be shown that one can regard the concept of regularity introduced in this paper as an extension of Harsanyi's.

5 The approach is similar to that of Simsek et al. (2008), who present an index theorem for general variational inequalities using a different mapping.
6 See for example Granas and Dugundji (2003) Theorem IV.7.3.
7 See McLennan (2014) for example.

Fig. 2. A coordination game.

In essence (16) differs from Harsanyi's definition by (i) dropping the inactive actions, (ii) comparing payoffs between adjacent active actions rather than comparing to a fixed action, and (iii) changing variables from s to k. It is straightforward to check (see Appendix) that none of these changes affects the non-singularity of the relevant Jacobian (though signs may change). One therefore has Theorem 6 below.

If the players' signals are independent and payoff-irrelevant then one can represent any mixed-strategy equilibrium of a normal-form game as an increasing map σi : Ti → Ai. The assumptions in this paper require Ai to have a natural order and impose single-crossing assumptions on payoffs. This is because the results stated earlier allow for correlation. The results of Radner and Rosenthal (1982) show that without such assumptions existence is not guaranteed even with payoff-irrelevant signals if they are correlated.

Example 2 (Payoff-Irrelevant Signals). Consider the simple coordination game shown in Fig. 2. Assume that each player receives a signal ti, uniformly distributed on [0, 1], which does not affect either player's payoffs and is independent of the other player's signal. The payoff functions are trivially supermodular in actions and signals and, since signals are independent, Assumption 6 is satisfied. Assume each player picks action 1 if he observes a signal greater than some level ki, action 0 otherwise. There are three equilibria: k1 = k2 = 1, k1 = k2 = 0 and k1 = k2 = 2/3. These correspond to the two pure and the mixed equilibrium of the ordinary game without payoff-irrelevant signals (in each, action 0 is played by player i with probability ki).

A question of interest is whether these equilibria are robust to perturbation. Harsanyi (1973), see also van Damme (1991), defines a regular equilibrium for an n-person game with finitely many actions as follows. Let player i have ν(i) actions and let si = (si1, . . . , siν(i))

be his mixed strategy. Fix a Nash equilibrium s∗ = (s∗1, . . . , s∗n) and pick an action for each player, without loss of generality action 1, which is played with positive probability. Let Πi(j, s−i) be the expected payoff to player i from playing action j if other players are playing the strategy s−i. Form the set of equations

∑_j sij = 1,  i = 1, . . . , n   (14)

sij (Πi(j, s∗−i) − Πi(1, s∗−i)) = 0,  j = 2, . . . , ν(i), i = 1, . . . , n.   (15)

Any equilibrium must satisfy these equations (though the converse is not true). An equilibrium is said to be regular if the Jacobian of this system of equations is non-zero. Note that this implies in particular that any action whose payoff equals those of the actions played in equilibrium must be played with strictly positive probability.

Given a set of independent payoff-irrelevant signals, one can represent s∗ by assuming that player i plays action j over the interval [kij, kij+1) where Fi(kij+1) − Fi(kij) = s∗ij (Fi being the cumulative distribution function of i's signals). The definition of regularity in Section 4 reduces here to the requirement that if A∗i denotes the set of active actions and j+ denotes the least active action greater than j, the set of equations

Πi(j, s−i) − Πi(j+, s−i) = 0,  j = min A∗i, . . . , max A∗i − 1, i = 1, . . . , n   (16)

has a non-zero Jacobian with respect to the cutoffs of the active actions.
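To make the regularity condition concrete, the cutoff equilibria of Example 2 and the Jacobian of the corresponding system (16) can be checked numerically. The payoff values used below are hypothetical (Fig. 2 is not reproduced in this extract); they are chosen only so that the mixed equilibrium puts probability 2/3 on action 0, as stated in the text.

```python
from fractions import Fraction

# HYPOTHETICAL payoffs (Fig. 2 is unavailable): u(0,0) = 1, u(1,1) = 2,
# mismatches pay 0.  Signals t_i ~ U[0,1] are payoff-irrelevant; player i
# plays action 0 iff t_i < k_i, so the opponent plays 0 with probability k_j.
u00, u11 = 1, 2

def gain(kj):
    """Expected gain of action 1 over action 0 given opponent cutoff kj."""
    return u11 * (1 - kj) - u00 * kj  # = 2 - 3*kj for these payoffs

def best_response(kj):
    """Cutoff best reply: cutoff 0 (always play 1) if gain > 0, etc."""
    g = gain(kj)
    if g > 0:
        return Fraction(0)
    if g < 0:
        return Fraction(1)
    return None  # indifferent: any cutoff is a best reply

# The two pure cutoff equilibria and the mixed one at k = 2/3:
assert best_response(Fraction(0)) == 0
assert best_response(Fraction(1)) == 1
assert gain(Fraction(2, 3)) == 0

# System (16) for this game: player i is indifferent when 3*k_j - 2 = 0,
# so the Jacobian with respect to (k1, k2) has zero diagonal:
J = [[0, 3], [3, 0]]
det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
assert det == -9  # non-zero: the mixed equilibrium is regular
```

The negative determinant agrees with the index −1 of the mixed equilibrium noted in footnote 8; the indices +1, +1 and −1 of the three equilibria sum to 1, consistent with Theorem 2.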

Theorem 6. Under Assumptions 1–6, an equilibrium of a game with payoff-irrelevant signals is regular if and only if it is regular in Harsanyi's sense.

Harsanyi considers perturbations of the game in which the signals become payoff-relevant, that is, payoff functions have the form Ui(a, t; θ) = Ūi(a) + θti, where θ is a scalar parameter. Now a regular equilibrium has non-zero index so from Theorem 5, for sufficiently small θ, the perturbed game will have an equilibrium close to it. This is essentially Govindan et al. (2003)'s proof of Harsanyi's theorem. They do not require assumptions about single crossing as they concentrate on a fixed equilibrium and assume that signals are independent.

In the framework considered here the particular structure of perturbations considered by Harsanyi is not necessary. Consider a family of perturbations such that Ui(a, t; θ) is independent of t for θ = 0 (that is, signals are payoff-irrelevant in the limit). The joint density of signals f(t, θ) may also depend on the perturbations and is assumed to have independent signals when θ = 0.

Theorem 7. Let payoffs and signals depend on a scalar parameter θ such that when θ = 0 payoffs are independent of signals and signals are independent, with Assumptions 1–4 and 6 true for all θ. If k∗ is a regular equilibrium of the game when θ = 0, then for any neighborhood of k∗, there is θ0 > 0 such that for all positive θ < θ0, the perturbed game has an equilibrium in this neighborhood.

This follows immediately from Theorem 5. It shows that, under the current assumptions on payoffs, regular equilibria are robust to a very general set of perturbations. In particular if a game has multiple regular equilibria, it will still have multiple equilibria if it is subject to small perturbations.
In the case of Example 2, for instance, it is easy to see that all three equilibria are regular in Harsanyi's sense and so the game will have multiple equilibria if the game is perturbed.8 This may seem to contrast with the literature on global games. As discussed in the next section, the unperturbed game corresponds to rather different environments in the two cases.

7. Applications to uniqueness

In this section the index theorem will be used to consider the uniqueness of equilibrium. It is well understood that more direct approaches to uniqueness are often available. Index theory, however, provides an appealing conceptual approach to uniqueness. It also allows one to consider only local properties. As in the rest of the paper, attention is restricted to monotone equilibria. The results of Van Zandt and Vives (2007), however, show that in the supermodular case under mild assumptions there are greatest and least equilibria which are monotone. It follows that if there is a unique equilibrium in monotone strategies it is unique amongst all equilibria.

From Theorem 2, if all equilibria are regular or weakly regular, then to prove uniqueness it suffices to show that the index is +1 at all equilibria. Combining this theorem, Theorem 4 and Lemma 5 in Section 5 yields:

8 The two pure equilibria have index +1 and the mixed index −1.

Theorem 8. (a) Suppose that the Jacobian of Φ(kγ; γ) at k∗ is a dominant diagonal matrix with positive diagonal for any γ ∈ Γ. Then k∗ is weakly regular with index +1. (b) If (a) holds for all equilibria, then there is a unique equilibrium.

In Example 1 in Section 4, the dominant diagonal condition holds for all values of α when γ < 1 and so equilibrium is unique. The latter is easily checked by directly computing the equilibrium in this example, but the condition can be verified without doing so and so could have been used to prove uniqueness without finding the equilibrium explicitly.

The same is true in more general games. For simplicity of discussion assume that a player's payoff only depends on his own signal (private values), actions are binary and there are two agents, so agent i's incremental payoff from switching from action 0 to 1 depends on his own signal and the action of the other agent (ΔUi(aj, ti)). If both actions are active – similar considerations apply for any configuration of actions – the equilibrium condition for player 1 is

ΔU1(0, k1)F(k2|k1) + ΔU1(1, k1)(1 − F(k2|k1)) = 0.

The effect of a change in k1 on the diagonal terms for player 1 in the Jacobian depends on (i) the direct effect on incremental payoffs (∂ΔUi/∂ti), (ii) the effect on beliefs (F(k2|k1)) and the change in incremental payoffs (Δ2Ui) caused by the other player switching actions. The corresponding off-diagonal terms depend on (iii) how k2 affects the same factors as in (ii). One natural strategy for uniqueness is to assume that own effects are strong (∂ΔUi/∂ti is large), cross-effects of actions are small (Δ2Ui is small) and that changes in signals have little effect on the distribution of beliefs. This is in essence, in a different language, the strategy followed by Mason and Valentinyi (2007) and Morris and Shin (2006). For example in a model of differentiated-goods oligopoly where firms set prices and receive signals about demand, uniqueness will obtain if signals are relatively uninformative about others' information but have a strong effect on own demand, whilst changes in prices set by other firms do not have a large effect on the profitability of price changes.

The uniqueness results in global games can be understood as giving conditions in which changes in cutoffs have offsetting effects on beliefs (F(k2|k1)), so factors (ii) and (iii) have equal magnitudes. In that literature (see for example Frankel et al. (2003)) it is assumed that a state ζ is distributed according to a density with support a connected subset of the real line. Each player observes a noisy signal

ti = ζ + σui   (17)

where σ > 0 and ui has a density with support contained in [−1/2, 1/2]. Carlsson and van Damme (1993) observe that when σ is small enough posterior beliefs depend approximately only on the difference in signals:

f(t−1|t1) ≈ f̃(t2 − t1, . . . , tn − t1).   (18)
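The approximation (18) can be illustrated by a small simulation under an assumed specification — a uniform state on [0, 10] and uniform noise with σ = 0.1, choices made here purely for illustration: estimated beliefs at two interior signal levels with the same cutoff difference nearly coincide.

```python
import random

# Illustrative check of (18): state z ~ U[0, 10] (assumed prior), signals
# t_i = z + s*u_i with u_i ~ U[-1/2, 1/2] as in (17), s = 0.1.
# F(k2|k1) = P(t2 <= k2 | t1 = k1) should depend (approximately) only on
# the difference k2 - k1 for k1 in the interior of the support.
random.seed(0)
s = 0.1

def F(k2, k1, accepts=2000, tol=0.02):
    """Estimate P(t2 <= k2 | t1 near k1) by rejection sampling."""
    hits = total = 0
    while total < accepts:
        z = random.uniform(0.0, 10.0)
        t1 = z + s * random.uniform(-0.5, 0.5)
        if abs(t1 - k1) < tol:  # keep draws whose signal is close to k1
            total += 1
            t2 = z + s * random.uniform(-0.5, 0.5)
            hits += t2 <= k2
    return hits / total

# Same cutoff difference k2 - k1 = 0.03 at two interior signal levels:
a = F(5.03, 5.00)
b = F(3.03, 3.00)
assert abs(a - b) < 0.05  # beliefs depend only on the signal difference
```

Near the boundary of the state's support this translation invariance fails, which is why the limit-dominance assumptions mentioned in footnote 9 are needed.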

An increase in k1l therefore has the same effect on player 1’s beliefs about the distribution of other players’ actions as a simultaneous increase in all other players’ cutoffs of the same magnitude. The direct effect of signals on payoffs therefore dominates and the diagonal dominance condition holds.9 To develop this idea it is helpful to state a result which makes assumptions on non-primitives.

9 (18) cannot hold exactly near the boundary of support of g but ‘limit dominance’ assumptions are made which guarantee that equilibria occur in the interior.
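To connect factors (i)–(iii) to the Jacobian explicitly, one can differentiate the binary-action equilibrium condition for player 1 stated above. The following is a sketch, with the shorthand Δ2U1 ≡ ΔU1(1, ·) − ΔU1(0, ·) introduced here (not the paper's notation):

```latex
% Equilibrium condition for player 1:
%   \Phi_1(k_1,k_2)=\Delta U_1(0,k_1)\,F(k_2|k_1)
%                  +\Delta U_1(1,k_1)\bigl(1-F(k_2|k_1)\bigr)=0.
\frac{\partial\Phi_1}{\partial k_1}
 = F(k_2|k_1)\,\frac{\partial\Delta U_1(0,k_1)}{\partial t_1}
 + \bigl(1-F(k_2|k_1)\bigr)\,\frac{\partial\Delta U_1(1,k_1)}{\partial t_1}
 - \Delta_2 U_1(k_1)\,\frac{\partial F(k_2|k_1)}{\partial k_1},
\qquad
\frac{\partial\Phi_1}{\partial k_2}
 = -\,\Delta_2 U_1(k_1)\,f(k_2|k_1).
```

Dominant diagonal with positive diagonal then amounts to the own-signal effect (i) outweighing the belief effect (ii) in the diagonal term and the cross effect (iii) in the off-diagonal term, i.e. ∂Φ1/∂k1 > |∂Φ1/∂k2|.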

Definition 3. k∗ is own-signal dominant if for any k in some neighborhood of k∗, if δκ = kij − k∗ij is the largest change in absolute value in cutoffs then the distribution of other players' actions given kij at k stochastically dominates (is dominated by) that at k∗ according as δκ is positive (negative).

In other words, if a player's signal changes more than other players' cutoffs then the effect of his signal on his beliefs about the distribution of other players' actions dominates. This generalizes the additive signal structure above.

Theorem 9. Suppose that Assumptions 1–4 and 6 hold, that Ui(a, t) is supermodular in a, and that ΔUi(j, l, a−i, t) is independent of t−i for all i, j < l and a−i, and is strictly increasing in ti.
(a) If an equilibrium k∗ is own-signal dominant then it is locally unique and has index +1.
(b) If Assumption 5 holds also and ∂ΔUi(j, l, a−i, t)/∂ti > 0 then in addition it is weakly regular.
(c) If all equilibria are own-signal dominant then equilibrium is unique.

(a) (and (c)) does not require differentiability and follows from Lemma 8. Intuitively, if j and l are adjacent active actions, so ΔVi(j, l, k∗ij) = 0, and kij has the largest (positive) increase in cutoffs, then ΔVi(j, l, kij) > 0, so equilibrium is locally unique.

In the case of global games with private values whose signal structure implies the own-signal condition holds, Theorem 9 provides a uniqueness result. Mathevet (2010) proves a uniqueness result under similar but stronger conditions (he calls the own-signal dominance condition the condition that beliefs are translation-increasing) by a contraction argument. In particular he assumes that any pair of actions is indifferent for some signal and assumes that the state influences ΔUi at a decreasing rate. The theorem can be extended to the case of common values.

Global games need not be monotone. The strategy of Frankel et al. (2003) is to show that if the noise is small, a common values global game can be well approximated by a private values one with a uniform prior, which is monotone, and to show that such a game has a unique equilibrium.

The diagonal dominance condition also makes it clear why uniqueness does not hold in general in the Harsanyian case. When the perturbation (θ) is small, players' own signals have little effect on their own payoffs or their beliefs about other players' actions since in the limit signals are independent. By contrast a change in another player's cutoff may have a large effect on expected payoffs if the equilibrium is mixed and so it may have index −1.

The results in the Harsanyian and global games cases are contrasting. The respective limiting environments are, however, rather different. In the former case, the limiting environment is a game with independent, payoff-irrelevant signals with a finite number of equilibria. In the latter case, the limiting game, as σ tends to zero in (17), is one with perfectly correlated signals and in fact falls outside the framework of the paper. In general there will be an infinite number of equilibria.
An earlier version of the paper, Beggs (2011), outlines how the concept of the index can be extended to the limiting case of perfectly correlated signals. Carlsson and van Damme (1993) compare the results of Harsanyi and the global games approach in the framework of Normal signals. The results of Section 6 allow one to remove the restriction to Normal signals. They show that if the limiting game is one of independent payoff-irrelevant signals then multiplicity cannot in general be removed by small perturbations. The global games analysis is relevant when the very different environment of perfectly correlated payoff-relevant signals is perturbed. The approach here is, of course, one of ex ante equilibrium. Weinstein and Yildiz (2007) give an analysis of robustness based on interim rationalizability, which is not directly comparable to the approach in this paper.

8. Genericity

This section shows that regular equilibria are in a reasonable sense generic. Suppose that payoff functions depend on some Euclidean parameters θ ∈ Θ. Provided there is enough freedom to vary payoff functions it is clear that (i) and (ii) of Definition 1 will fail only for 'exceptional' values of θ. One must however restrict the perturbations so that the assumptions above remain satisfied; in particular single crossing (Assumption 6) must be maintained. f will be a fixed continuously differentiable strictly positive function such that Assumption 6 is satisfied for all payoff functions considered (for example affiliated). Attention is restricted to the case when payoff functions are supermodular, but similar conclusions can be obtained if they are log-supermodular, essentially by considering the logarithms of the payoff functions (see Beggs (2011) for details).

Definition 4. A set of games is said to be richly parameterized by θ if (a) Assumptions 1–6 hold for all θ, (b) Ui is continuously differentiable in θ for all i, a, t and (c) for all θ, for any increasing strategies of the agents the Jacobian matrix (ΔVi,θ(j − 1, j, kij)), i = 1, . . . , n, j = 1, . . . , m(i), has rank q.

ΔVi,θ denotes the derivative with respect to θ and q = m(1) + · · · + m(n) is the dimension of the strategy space (see Section 3). The set of games is richly parameterized under fairly weak assumptions if payoff functions are supermodular, provided one is allowed constant additive perturbations:

Lemma 9. (i) Suppose that for all θ, Ui(a, t; θ) is supermodular in a and (ai, tj), j = 1, . . . , n, and Assumptions 1–5 hold. If Θ = Γ × Δ, Γ = R^(q+n) and Ui((j, a−i), t; (γ, δ)) = Ui((j, a−i), t; (0, δ)) + γji then the game is richly parameterized. (ii) The conclusion of (i) holds if Θ is diffeomorphic to a region with the indicated properties.

A formal proof is in the Appendix but the result is fairly obvious, as the conditions imply the utility functions can be varied independently in enough directions. (ii) is immediate as the chain rule implies that the rank is invariant under a diffeomorphic change of variables. A standard transversality argument (see Appendix) then shows:

Theorem 10. If the set of games is richly parameterized then for almost all θ ∈ Θ all the equilibria of the game are regular.

If one does not care for a finite parametrization one can extend the results to show genericity if one considers the space of smooth payoff functions. This space is infinite-dimensional and the notion of genericity is less satisfactory as there is no analogue of Lebesgue measure available. One notion is that a set is generic if it is open and dense. Another notion is that of prevalence, introduced into Economics by Anderson and Zame (2001). This has some of the properties of Lebesgue measure, although there are some interpretational difficulties. The results here will show genericity for both notions.

Let Ω be an open set containing T. Let C¹i(Ω) be the set of bounded continuous functions with a bounded continuous derivative with respect to ti, endowed with the usual C¹ metric:

d(g, h) = sup_Ω |g(t) − h(t)| + sup_Ω |∂g/∂ti − ∂h/∂ti|.

C¹i(Ω) is a Banach space with this metric. Note that the derivative with respect to ti is required to be continuous in the entire vector t. It will be assumed that Ui(a, t) belongs to C¹i(Ω) for each a and i, so a possible array of payoff functions for the game lies in S(Ω) = Πi Π|A| C¹i(Ω), where |A| = m(1)m(2) · · · m(n) is the dimension of the set of actions. Attention will be restricted to the closed convex subset of S(Ω), denoted by SM(Ω), such that payoff functions are supermodular:

ΔUi(l, l + 1, a−i, t) increasing in a−i   (19)
ΔUi(l, l + 1, a−i, t) increasing in tj, j ≠ i,   (20)

for all t ∈ T, a−i and l ≤ m(i) − 1.

Theorem 11. If payoff functions belong to SM(Ω) then the set of payoff functions for which the game has all its equilibria regular is a relatively open and dense subset.

The proof is in the Appendix but is an easy consequence of Theorem 10. The perturbations used there are sufficient to establish density and openness is immediate.

A similar result can be established using Anderson and Zame (2001)'s notion of relative prevalence. Let C be a convex, completely metrizable subset of a topological vector space Y. Recall that a Borel subset E ⊂ C is finitely shy in C if there is a finite-dimensional subspace V ⊂ Y such that λV(E + y) = 0 for every y ∈ Y but λV(C + a) > 0 for some a ∈ Y, where λV denotes Lebesgue measure on V. Anderson and Zame (2001) show that a finitely shy set is relatively shy. The complement of a relatively shy set is said to be relatively prevalent.

Theorem 12. If payoff functions belong to SM(Ω) then the set of payoff functions for which the game has all its equilibria regular is the complement of a finitely shy set and so is relatively prevalent.

The proof is in the Appendix. As above, Theorem 10 provides the essential tool. Dubey (1986) and Anderson and Zame (2001) prove similar results on the generic finiteness of equilibria of finite-dimensional smooth games. The arguments used here are a straightforward extension of theirs.

9. Conclusion

This paper has provided a set of tools to analyze the properties of Bayesian games with monotone equilibria, in particular their robustness and stability. The analysis has been conducted in the setting of finitely many one-dimensional actions and one-dimensional types, which is common in applications. Some extension to more general settings is possible but this is left to further work.

Acknowledgments

I am grateful to Atsushi Kajii, John Quah, two anonymous referees and seminar participants at Oxford and the SAET 2011 conference for helpful comments.

Appendix

Proof of Lemma 2. The proof proceeds by showing that the Kuhn–Tucker first-order conditions for maximizing Wi and the Kuhn–Tucker first-order conditions for maximizing W̃i are both equivalent to the conditions in Lemma 1 and are therefore equivalent to each other. Finally it is shown that the Kuhn–Tucker first-order conditions are equivalent to the Normal cone condition in (8). For convenience, since only one player is considered, with the actions of other players held constant, i-subscripts and superscripts are dropped. Let m = m(i). The problem is

max W  s.t.  k1 ≥ 0;  kj ≥ kj−1, j = 2, . . . , m;  km ≤ 1.

Since the constraints are linear, the Kuhn–Tucker first-order conditions are necessary (see Borwein and Lewis (2006) Section 7.2 exercise 3). These are, with λj the Lagrange multiplier corresponding to the jth constraint:

f(k1)ΔV(0, 1, k1) = λ1 − λ2   (21)
f(kj)ΔV(j − 1, j, kj) = λj − λj+1,  j = 2, . . . , m − 1   (22)
f(km)ΔV(m − 1, m, km) = λm − λm+1.   (23)

If the jth constraint does not bind then action j − 1 is active and λj = 0 (complementary slackness).10

Let the least active action be jmin. For this action kjmin = 0 = k0. Similarly let jmax be the greatest active action. Recall that km+1 = 1. Adding together successive equations in (21)–(23) one obtains that for actions p and p′, p < p′,

λp+1 − λp′+1 = ∑_{j=p+1}^{p′} f(kj)ΔV(j − 1, j, kj).   (24)

The following two Facts follow immediately from complementary slackness and the definitions of cutoffs respectively:

Fact 1. (a) λl ≥ 0 for all l; (b) λl+1 = 0 if action l is active.

Fact 2. (a) kl = k0 for l ≤ jmin; (b) if j′ and j″ are adjacent active actions then kl = kj″ for j′ < l ≤ j″; (c) kl = km+1 for l > jmax.

Using these Facts it follows from (24) that

∑_{j=l+1}^{jmin} f(k0)ΔV(j − 1, j, k0) ≥ 0  for all l < jmin,   (25)

that if j′ and j″ are adjacent active actions then

∑_{j=j′+1}^{j″} f(kj″)ΔV(j − 1, j, kj″) = 0,   (26)

∑_{j=j′+1}^{l} f(kj″)ΔV(j − 1, j, kj″) ≤ 0,  j′ < l ≤ j″,   (27)

and finally that

∑_{j=jmax+1}^{l} f(km+1)ΔV(j − 1, j, km+1) ≤ 0  for all l > jmax.   (28)

To derive (27), for example, apply (24) with p = j′ and p′ = l. From Fact 1, λj′+1 = 0 and λl+1 ≥ 0. Using Fact 2(b) yields the result. The remainder of (25)–(28) is derived similarly.

Since f(k) > 0 for all k one may divide it out from the inequalities above to obtain (since ΔV(p, p′, k) = ∑_{j=p+1}^{p′} ΔV(j − 1, j, k) for any actions p and p′, p < p′, and ΔV(p, p, k) = 0 for all actions)

ΔV(l, jmin, k0) ≥ 0  for all l ≤ jmin,   (29)

that if j′ and j″ are adjacent active actions then

ΔV(j′, j″, kj″) = 0,   (30)

ΔV(j′, l, kj″) ≤ 0,  j′ ≤ l ≤ j″,   (31)

and finally that if jmax is the greatest action then

ΔV(jmax, l, km+1) ≤ 0  for all l ≥ jmax.   (32)

These are precisely the conditions of Lemma 1.

Conversely suppose that the set of cutoffs satisfies the conditions of Lemma 1, that is (29)–(32) hold. It is straightforward to see that one can reverse the argument above to find a set of Lagrange multipliers so that the first-order conditions ((21)–(23) together with complementary slackness) are satisfied. In detail, first multiply (29)–(32) by f(k0), f(kj″) or f(km+1) as appropriate to obtain (25)–(28). Next choose Lagrange multipliers as follows. Set

λl+1 = 0 if l is an active action.   (33)

If l < jmin set

λl+1 = ∑_{j=l+1}^{jmin} f(k0)ΔV(j − 1, j, k0) = ∑_{j=l+1}^{jmin} f(kj)ΔV(j − 1, j, kj).   (34)

If j′ < l ≤ j″ where j′ and j″ are adjacent active actions set

λl+1 = −∑_{j=j′+1}^{l} f(kj″)ΔV(j − 1, j, kj″) = −∑_{j=j′+1}^{l} f(kj)ΔV(j − 1, j, kj),   (35)

while if jmax < l ≤ m set

λl+1 = −∑_{j=jmax+1}^{l} f(km+1)ΔV(j − 1, j, km+1) = −∑_{j=jmax+1}^{l} f(kj)ΔV(j − 1, j, kj).   (36)

10 In the literature on optimization a binding constraint is sometimes said to be 'active' but no confusion should result with this usage.

The second equality in (34)–(36) follows from the observations in Fact 2 above. (25)–(28) guarantee that these multipliers are all non-negative. Moreover with these Lagrange multipliers (21)–(23) hold, as is straightforwardly checked. (33) guarantees that complementary slackness holds.

It follows that the first-order conditions for maximizing Wi are equivalent to those of Lemma 1. That is, the conditions of Lemma 1 hold if and only if there exists a set of multipliers such that (21)–(23) and complementary slackness hold. If one had instead maximized W̃i one would have replaced f(k) by 1 everywhere and gone through exactly the same steps with a different set of multipliers and shown that the first-order conditions for maximizing W̃i are equivalent to those in Lemma 1. It follows that a set of cutoffs satisfies the first-order conditions for maximizing Wi if and only if they satisfy those for maximizing W̃i, since they satisfy either if and only if they satisfy the conditions of Lemma 1.

To see the equivalence with the Normal cone condition note that, writing k for ki as only player i is considered, by the projection theorem (see Borwein and Lewis (2006) p. 19 exercise 8), a vector v lies in NΣi(k) if and only if ΠΣi(k + v) = k, where ΠΣi is the projection mapping on Σi. ΠΣi solves the following problem:

min (1/2) ∑_j (πj − kj − vj)²
s.t. π1 ≥ 0   (37)
πj ≥ πj−1,  j = 2, . . . , m   (38)
πm ≤ 1.   (39)

As above, the Kuhn–Tucker conditions are necessary and, since the objective function is convex, are also sufficient. Hence π is optimal if and only if there are non-negative multipliers λ1, . . . , λm+1 such that

The first equality uses additivity and the second the fact noted in the next that Ind(kr ) = Ind(˜g , Or ) for any open set Or containing kr in which it is the only fixed point. If all equilibria are regular or weakly regular the set of equilibria is finite, so the theorem applies. 

π1 − k1 − v1 = λ1 − λ2 πj − kj − vj = λj − λj+1 j = 2, . . . , m − 1

(41)

πm − km − vm = λm − λm+1

(42)

Proof of Theorem 4. Note that Σ can be described as a set satisfying a set of linear inequalities Ck ≤ b, where C is a matrix and b a vector, with S an index set. A face is set such as (Ck)i = bi for i ∈ F ⊆ S and (Ck)i < bi for i ̸∈ F . Let CF and bF denote submatrices of C and b respectively formed by the rows corresponding to the binding constraints, where it can be assumed that CF has full rank.

with λj equal to zero if the jth constraint does not bind. ΠΣi (k + v) = k if and only if πj = kj satisfies the equations above, in which case they become:

Lemma 10. The projection on the set CF k = bF is given by the affine map PF = (I − CF′ (CF CF′ )−1 CF )k + CF′ (CF CF′ )−1 bF .

−v1 = λ1 − λ2 −vj = λj − λj+1 j = 2, . . . , m − 1

(43)

This follows elementary least squares calculations. It follows that

−vm = λm − λm+1

(45)

(40)

(44)

which are the same as the above with vj = −f (kj )1V (j − 1, j, kj ). That is the Kuhn–Tucker conditions for maximizing Wi hold if and only if ∇ Wi lies in NΣi (ki ). An identical argument, replacing f by 1 everywhere, shows that the Kuhn–Tucker conditions for ˜ i hold if and only if ∇ W ˜ i lies in NΣi (ki ). In particular maximizing W since, by the above, the Kuhn–Tucker first-order conditions for ˜ i hold, maximizing Wi hold if and only if those for maximizing W i ˜ they both hold if and only ∇ Wi ∈ NΣi (k ), which is (8).  Proof of Theorem 1. This follows the proof of Theorem 4.



Proof of Lemma 7. ΔV_i(j, j + 1; k) is formally defined provided each k^i_j lies in T_i, that is, for k ∈ ∏_i ∏_{m(i)} T_i = T* ⊃ Σ, even though a k not lying in Σ cannot be interpreted as a profile of monotone strategies, and it is differentiable on this set. A standard argument shows that each U_i can be extended to a continuous function defined for all values of t if Assumption 4 holds (use the Tietze extension theorem—see Bredon (1993) Theorem 1.10.4) and to a function differentiable in t_i in a neighborhood of T_i under Assumption 5 (by definition). Similarly, under Assumption 4, for each i, f(t_{−i}|t_i) can be extended to a positive, continuous function which vanishes outside some compact set containing T and is differentiable in a neighborhood of T under Assumption 5. Put together, these provide an extension of ΔV_i (differentiable under Assumption 5 by the argument of Lemma 4). □

Proof of Theorem 2. From Lemma 7, g̃ maps the open set Ψ to Σ. Pick an arbitrary point σ_0 in Σ and consider the constant map c : Ψ → Σ which maps all points in Ψ to σ_0. By Property 1 of the index (Normalization), I(c, Ψ) = 1. On the other hand, since Σ is convex, the continuous map H with domain Ψ × [0, 1] defined by H(k, t) = t g̃(k) + (1 − t)c(k) takes values in Σ. If one considers the family of maps H_t(k), t ∈ [0, 1], defined by H_t(k) = H(k, t), then the set {k : k is a fixed point of some H_t} = {k : H(k, t) = k for some t} is compact (as H is continuous and Σ is compact). It follows from Property 3 of the index (Homotopy Invariance) that I(H_0, Ψ) = I(H_1, Ψ). Now H_0(k) = H(k, 0) = c(k) and H_1(k) = H(k, 1) = g̃(k), so I(g̃, Ψ) = I(c, Ψ). From the previous step, I(c, Ψ) = 1, so I(g̃, Ψ) = 1. If the number of equilibria is finite, say k_1, …, k_R, then one can find disjoint open sets O_1, …, O_R with k_r ∈ O_r, and by Property 2 of the index (Additivity)

I(g̃, Ψ) = Σ_r I(g̃, O_r) = Σ_r Ind(k_r). □
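The additivity argument above, in which the indices of the individual fixed points sum to I(g̃, Ψ) = 1, can be checked in a one-dimensional toy example, where the index of an isolated fixed point k_r of a map g is sgn(1 − g′(k_r)). The map below is purely illustrative (it is not the best-reply map of any game), with three interior fixed points of indices +1, −1, +1.

```python
import numpy as np

# Toy self-map of [0,1] with interior fixed points at 0.2, 0.5 and 0.8
c = 0.5
p = lambda x: (x - 0.2) * (x - 0.5) * (x - 0.8)
g = lambda x: x - c * p(x)

# Check that g really maps [0,1] into itself
xs = np.linspace(0.0, 1.0, 1001)
assert (g(xs) >= 0.0).all() and (g(xs) <= 1.0).all()

# Index of an isolated fixed point x*: sgn(1 - g'(x*)), the 1-D analogue
# of sgn det(I - Dg) at the fixed point
def index(x, h=1e-6):
    gp = (g(x + h) - g(x - h)) / (2.0 * h)   # central-difference derivative
    return int(np.sign(1.0 - gp))

indices = [index(x) for x in (0.2, 0.5, 0.8)]
assert indices == [1, -1, 1]
assert sum(indices) == 1    # indices sum to I(g, (0,1)) = 1
```

This is the one-dimensional shadow of the theorem: however many regular equilibria there are, their indices must sum to one.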

Lemma 11. At a regular equilibrium,

sgn det(I − Dg̃|_{k*}) = sgn det(I − DP_F(I − DΔV))

  = sgn det [ DΔV   C_F′ ]
            [ −C_F    0  ]

  = sgn det(affine map P_F DΔV restricted to C_F k = b_F)

  = sgn det(Jacobian of Φ(k^α; α(k*)) at k*).

This lemma proves the theorem; it is proven below.
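The second sign equality in the lemma is the linear-algebra fact relating det(I − P_F(I − M)) to a bordered determinant. It can be sanity-checked numerically; in the sketch below a random matrix M stands in for DΔV and a random full-row-rank C for C_F (both are illustrative stand-ins, not objects derived from a game).

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 5, 2                      # dimension, number of binding face constraints
M = rng.normal(size=(n, n))      # stands in for D(Delta V)
C = rng.normal(size=(p, n))      # face constraints C_F k = b_F (full row rank a.s.)

# Orthogonal projection onto null(C), the direction space of the face
_, _, Vt = np.linalg.svd(C)
Q = Vt[p:].T                     # orthonormal basis of null(C)
P = Q @ Q.T

# Left-hand side: det(I - P(I - M))
lhs = np.linalg.det(np.eye(n) - P @ (np.eye(n) - M))

# Right-hand side: the bordered determinant det [[M, C'], [-C, 0]]
bordered = np.block([[M, C.T], [-C, np.zeros((p, p))]])
rhs = np.linalg.det(bordered)

# The two determinants agree in sign (they differ by the factor det(CC') > 0)
assert lhs != 0.0
assert np.sign(lhs) == np.sign(rhs)
```

Writing P = QQ′ with Q an orthonormal basis of null(C), one computes det(I − P(I − M)) = det(Q′MQ), while the bordered determinant equals det(CC′) det(Q′MQ), so the signs must coincide.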



Proof of Lemma 11. The first equality holds by definition, since the equilibrium is regular. The second equality is a standard fact in linear algebra (see for example Theorem 4.2.7 in Facchinei and Pang (2003)), as is the third (see for example Mas-Colell (1985) 1.B.5.2). The final equality follows from the nature of the constraint set. A face F is defined by a set of constraints of the form (i) k^i_m = ⋯ = k^i_{m+s} = 0 or 1, or (ii) k^i_m = ⋯ = k^i_{m+t}. F can therefore be parameterized by one coordinate, k^i_m say, for each constraint of the form (ii) and for each free component (which can be thought of as the case t = 0). The projection of a point l onto F sets the corresponding components of l to 0 or 1 in case (i) and to (l^i_m + ⋯ + l^i_{m+t})/(t + 1) in case (ii). This implies that P_F DΔV restricted to F has the form k^i_m ⟼ DΔV_i(m, m + t, k^i_m)/(t + 1), so the result follows. An alternative proof is to use elementary row operations on the matrix in the second line. □

Proof of Theorem 4. By Theorem 1, if k* is weakly regular there is a bounded open set O containing k* and no other equilibria. By Theorem 10.3.4 of Bredon (1993) there is an infinitely differentiable real-valued function φ which vanishes outside O and satisfies φ(k*) = 1 and 0 ≤ φ(k) ≤ 1 for all k. The normal cone to Σ at k* is the product of the normal cones to Σ_i at k*^i. The normal cone of a set of the form D = {x : Cx ≤ b}, which Σ_i is, at x* is the set {Σ_{i∈F} λ_i c_i : λ_i ≥ 0}, where the c_i are the rows of C and F denotes the binding constraints at x*. An element is in the relative interior of the normal cone if and only if all λ_i are strictly positive (see Facchinei and Pang (2003) Theorem 4.1.5). Hence one can find a vector v which lies in the relative interior of N_Σ(k*). Consider the function g_ϵ(k) = Π_Σ(k − ΔV − ϵφv), where ϵ is a non-negative number. Since v ∈ N_Σ(k*), k* is a fixed point of g_ϵ for all ϵ ≥ 0. Moreover, by Theorem 1, there is a neighborhood O_1 ⊆ O and ϵ_0 > 0 such that k* is the unique fixed point of g_ϵ in the closure of O_1 for all ϵ ≤ ϵ_0. As in Lemma 7, g_ϵ can be extended


to Ψ. By excision (Property 4 of the index), Ind(k*) = Ind(O_1, g̃_ϵ). Fix ϵ with 0 < ϵ < ϵ_0. The point ṽ = −ΔV − ϵφ(k*)v lies in the relative interior of N_Σ(k*). Hence Π_Σ is differentiable at k̃ = k* + ṽ (see Definition 3.4.1 and Corollary 4.1.2 of Facchinei and Pang (2003)). Now Dg̃_ϵ = DΠ_Σ(I − DΔV − ϵv(Dφ)′), where ′ denotes transpose, and DΠ_Σ v = 0, so det(I − Dg̃_ϵ) = det(I − DΠ_Σ(I − DΔV)). According to Theorem 4.1.1 of Facchinei and Pang (2003), if DΠ_Σ exists at k̃ it equals the linear map which is the projection onto the critical cone C(k*, Σ, ṽ), where the critical cone is the set of elements of the tangent cone of Σ at k* which are orthogonal to ṽ. The tangent cone of a set of the form D = {x : Cx ≤ b} at x* is the set {δ : C_F δ ≤ 0}. It follows from this and the characterization of the normal cone above that if y lies in the relative interior of the normal cone at x* then C(x*, D, y) = {δ : C_F δ = 0}. In the case at hand, this implies that DΠ_Σ at k̃ is the derivative of the linear map projecting onto the face of Σ with all constraints corresponding to actions in Γ(k*) binding. It follows that Ind(O_1, g̃_ϵ) = sgn det(I − Dg̃_ϵ) = sgn det(Jacobian of Φ(k^{γ*}; γ*) at k*), where γ* is the maximal element of Γ(k*) (all actions in Γ(k*) are treated as active). The last equality follows from Lemma 11. The family g̃_{ϵt}, t ∈ [0, 1], forms a homotopy between g̃ and g̃_ϵ with no fixed points on the boundary of O_1. It follows from the homotopy property of the index (Property 3) that Ind(k*) = sgn det(Jacobian of Φ(k^{γ*}; γ*) at k*). □

Proof of Theorem 1. The following is a special case of Theorems 3 and 4 of Robinson (1995). Let C be a non-empty polyhedral convex set in R^n. Let Ω be an open subset of R^n with C ∩ Ω ≠ ∅. Let Θ be a Banach space and let f(x, y) be a continuously differentiable function from (C ∩ Ω) × Θ to R^n. Suppose that for y = y_0, x_0 = x(y_0) solves the variational inequality

−f(x, y) ∈ N_C(x).  (46)

Let K = K(x_0, C, f(x_0, y_0)) be the critical cone of C at x_0 with respect to f (see the proof of Theorem 4); K is a polyhedral convex set. Consider the map L_K(h) = Df_x(x_0, y_0)Π_K(h) + h − Π_K(h), where Df_x is the derivative of f with respect to x and Π_K is the projection map onto K. L_K is a piecewise affine map: if F denotes a face of K then on the relative interior of F the normal cone of K takes a constant value, say N_F. Then on the set σ_F = F + N_F, Π_K coincides with the affine map a_F projecting onto the affine hull of F. The collection N_K = {σ_F : F a face of K} covers R^n. The map L_K is said to be coherently oriented if the determinants of the affine maps Df_x(x_0, y_0)a_F(h) + h − a_F(h), σ_F ∈ N_K, all have the same nonzero sign. Suppose that L_K is coherently oriented. Then there exist neighborhoods X of x_0 and Y of y_0 and a function x : Y → R^n such that x(y) is the unique point in C ∩ X solving (46); x(y) is Lipschitz-continuous and directionally differentiable. (Robinson (1995) allows for slightly weaker assumptions about differentiability: a Lipschitz-continuous function is directionally differentiable if and only if it is what Robinson (1995) calls B-differentiable; see Facchinei and Pang (2003).) This result applied to C = Σ and f = ΔV immediately implies the results of Theorem 1, provided one can establish the coherent orientation condition. The critical cone K(k*, Σ, ΔV) is the cone based at k* generated by the directions in Σ which are orthogonal to ΔV. In other words, it is the cone generated by imposing the constraints corresponding to the weakly active actions, those in Γ(k*), and dropping the others. The projection map onto the affine hull of a face of this cone

is the same as the projection map onto the affine subspace spanned by a corresponding face of Σ. As noted in the proof of Lemma 11, such a face can be written in the form C_F k = b_F. A calculation similar to that in the proof of Lemma 11 (see Facchinei and Pang (2003) Lemma 4.2.7) shows that each map DΔV a_F(k) + k − a_F(k) has a determinant with sign equal to that of

det [ DΔV   C_F′ ]
    [ −C_F    0  ]

which in turn equals that of det(Jacobian of Φ(k^γ; γ) at k*), where γ ∈ Γ(k*) is the set of actions corresponding to F. The hypothesis implies that this sign is independent of γ, which establishes the coherent orientation condition. □

Proof of Theorem 6. The result follows from the steps noted in the text before the theorem. Note that if an action is not played with positive probability it is inactive in the sense used here (and conversely). Furthermore, if the equilibrium is regular in Harsanyi's sense, then such an action has strictly lower payoff than any action played in equilibrium (see Corollary 2.5.3 of van Damme (1991)), so condition (ii) of the definition of regularity in Section 4 is met. It follows from Lemma 2.5.2 of van Damme (1991) that an equilibrium which is regular in Harsanyi's sense remains so if one deletes all the inactive actions. If all actions are active then the system (14) and (15) has a non-singular Jacobian if and only if the system



Σ_j s^i_j = 1,  i = 1, …, n,

Π^i(j, s*_{−i}) − Π^i(1, s*_{−i}) = 0,  j = 2, …, ν(i), i = 1, …, n,

does. The Jacobian of this system is non-singular if and only if the Jacobian of the system

Σ_j s^i_j = 1,  i = 1, …, n,

Π^i(j, s*_{−i}) − Π^i(j − 1, s*_{−i}) = 0,  j = 2, …, ν(i), i = 1, …, n,

is non-singular, as the two are related by a non-singular transformation. Finally, the formula F_i(k^i_{j+1}) − F_i(k^i_j) = s^{*i}_j defines a transformation, with non-singular Jacobian, between the coordinates s^i with Σ_j s^i_j = 1 and the coordinates {k^i_j : k^i_1 ≤ ⋯ ≤ k^i_{ν(i)−1}}. Hence if the equilibrium is regular in Harsanyi's sense then (16) has a non-singular determinant, so condition (i) of regularity is met. Conversely, the steps can be reversed, so an equilibrium which is regular in the sense of this paper is regular in Harsanyi's sense. □

Proof of Theorem 9. To prove (a), by Lemma 8, it is enough to show that, for some neighborhood O of k*, there is no λ ≥ 1 such that for some k ∈ ∂O

Π_Σ(k − ΔV) − k* = λ(k − k*).  (47)

The case λ = 1 will show that the equilibrium is locally isolated. Near to k*, k − ΔV must project into a face (possibly dependent on k) corresponding to actions in Γ(k*), since actions which are strictly inactive at k* will remain so. Call this face F. If (47) holds, then it must hold with Π_F replacing Π_Σ. Since k* ∈ F, (47) can only hold if k ∈ F. From the form of the projection given after Lemma 11 it follows that for k ∈ F, Π_F(k − ΔV) = k − ΔṼ, where ΔṼ(k^i_{m+s}) = ΔV(j, l, k^i_m)/(t + 1), s = 0, …, t, if k′^i_m = ⋯ = k′^i_{m+t} for k′ ∈ F. Since ΔṼ = 0 at k*, it follows that (47) holds if and only if

ΔṼ(k) − ΔṼ(k*) = (1 − λ)(k − k*).

That is, since λ ≥ 1, ΔṼ points inwards at k. This, however, contradicts the assumption that ΔU_i(j, l, t_i) is strictly increasing in t_i together with own-belief dominance: these imply that, for the greatest component of k − k*, the change in ΔṼ is strictly in the same direction as k − k*.

(b) and (c) follow immediately from the preceding results. To verify the dominant diagonal assumption, consider an equal increase in k^i_l and in all other cutoffs. By the own-signal dominance assumption this increases ΔV_i(k^i_l, j, l), and by taking limits this implies that the effect of k^i_l on beliefs dominates that of other players' cutoffs. The assumption that ∂ΔU_i/∂t_i > 0 then implies that diagonal dominance holds. □
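The last step of the proof leans on the standard fact that a strictly (row) diagonally dominant matrix is non-singular (the Levy–Desplanques theorem). A minimal numerical illustration, using a generic matrix rather than one derived from the game:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
J = rng.normal(size=(n, n))

# Force strict row diagonal dominance: the own-cutoff (diagonal) effect
# outweighs the combined effect of the other players' cutoffs in each row
for i in range(n):
    off = np.abs(J[i]).sum() - abs(J[i, i])   # off-diagonal mass (old diagonal excluded)
    J[i, i] = off + 1.0

# Levy-Desplanques: strict diagonal dominance implies non-singularity
row_margin = np.abs(np.diag(J)) - (np.abs(J).sum(axis=1) - np.abs(np.diag(J)))
assert (row_margin > 0).all()
assert abs(np.linalg.det(J)) > 1e-8
```

By Gershgorin's circle theorem every eigenvalue lies in a disc excluding the origin, so the determinant cannot vanish; this is why dominance of the own-signal effect delivers the non-singular Jacobian needed for regularity.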

Proof of Lemma 9. The result in (i) is obvious since V_i can be written in the form V_i(j, k; (0, δ)) + γ^i_j, and so ΔV_i has the form ΔV_i(j − 1, j, k^i_j; (0, δ)) + γ^i_j − γ^i_{j−1}. The Jacobian of ΔV with respect to γ then has −1 on the diagonal, 1 immediately to the right of the diagonal, and 0 elsewhere, and so has full rank (q). Hence the Jacobian with respect to θ certainly has the required rank. Rank is preserved under a diffeomorphism, so (ii) is clear. □

Proof of Theorem 10. A well-known special case of the Thom transversality theorem (see for example Mas-Colell (1985) chapter 8.3.1) states that if F : X × Θ → R^m, with X ⊂ R^n and Θ ⊂ R^p both open, is a C^r function with r > max{0, n − m} and F(x, θ) = 0 implies that F_θ(x, θ) has rank m, then there is a set Θ_0 ⊂ Θ of measure zero such that if θ ∉ Θ_0, F(x, θ) = 0 implies that F_x(x, θ) has rank m. Any equilibrium in which at least one player has more than one active action, or only one active action but one which is indifferent to an inactive action at some signal, is the solution to a set of equations Φ(k^{*γ}; γ; θ) = 0. All other cases are regular, so they need not be considered further. Since the set of games is richly parameterized, the Jacobian matrix (ΔV_{i,θ}(j − 1, j, k^i_j)), j = 1, …, m(i), i = 1, …, n, has rank q. If p and l are adjacent actions for player i in γ_i(k*) then the corresponding equation in Φ, ΔV_i(p, l, k^i_l) = 0, is obtained by setting the cutoffs between p and l equal to k^i_l and summing the corresponding equations in ΔV_i(j − 1, j, k^i_j) = 0. It follows that Φ_θ has rank q(γ), where q(γ) is the number of equations in Φ. By the transversality theorem, with m = n = q(γ) and r = 1, for a given γ, for almost every θ, Φ(k^γ; γ; θ) has an invertible Jacobian with respect to k^γ at k*. Furthermore, for almost every θ all actions in γ are active. For if not, some of the components of k^{*γ} would be equal; k^{*γ} would then solve the augmented system consisting of these equality constraints and Φ, with corresponding m > n = q(γ), but this violates the conclusion of the transversality theorem (in other words, generically one cannot solve a system with more equations than unknowns). By considering all possible γ, it follows that for almost all θ all equilibria are regular. □

Proof of Theorem 11.
The assumptions made ensure that Assumptions 1–6 all hold for all possible arrays of payoff functions, so equilibria exist and satisfy the equations of Definition 1. ΔV_i(j, l; ·) is continuous in U_i(j; ·) and U_i(l; ·) under the topology on payoff functions used here, as is the Jacobian of Φ. It follows that all equilibria are regular for an open set of payoff functions. To show density, consider a fixed array of payoff functions and the perturbations U_i(j, a_{−i}, t) + γ^i_j (which belong to S_M(Ω)). The argument of the proof of Theorem 10 shows that for almost all γ the perturbed game has all equilibria regular. In particular one can find γ arbitrarily small so that this is so. The set of payoff functions with all equilibria regular is therefore dense. □

Proof of Theorem 12. S_M(Ω) is a closed, hence completely metrizable, and convex subset of the Banach space S(Ω). For the space V in the definition of shyness take the set of payoff functions of the form U_i(j, a_{−i}, t) = γ^i_j: this is a subset of S_M(Ω). λ_V is the obvious Lebesgue measure. Let E be the subset of S_M(Ω) for which some equilibria are irregular. Consider a fixed set of payoff functions −y = (U_i) in S(Ω). Suppose there is some γ such that the game with payoff functions U_i(j, a_{−i}, t) + γ^i_j lies in E. The argument of the proof of Theorem 10 shows that the set of such γ has measure zero. If there is no such γ then (E + y) ∩ V = ∅. Hence one has λ_V(S_M(Ω)) > 0 but λ_V(E + y) = 0 for all y. E is thus finitely shy in S_M(Ω). The set of payoff functions with all equilibria regular is thus relatively prevalent. □

References

Anderson, R., Zame, W., 2001. Genericity with infinitely many parameters. Adv. Theor. Econ. 1.
Athey, S., 1996. Characterizing properties of stochastic objective functions. Mimeo, MIT.
Athey, S., 2001. Single crossing properties and the existence of pure strategy equilibria in games of incomplete information. Econometrica 69, 861–889.
Beggs, A., 2011. Regularity and stability in monotone Bayesian games. Discussion Paper, University of Oxford.
Billingsley, P., 1995. Probability and Measure, third ed. John Wiley and Sons, New York.
Borwein, J., Lewis, A., 2006. Convex Analysis and Nonlinear Optimization, second ed. Springer Verlag, New York.
Bredon, G., 1993. Topology and Geometry. Springer Verlag, New York.
Carlsson, H., van Damme, E., 1993. Global games and equilibrium selection. Econometrica 61, 989–1018.
Dontchev, A., Rockafellar, R.T., 2009. Implicit Functions and Solution Mappings: A View from Variational Analysis. Springer Verlag, Dordrecht.
Dubey, P., 1986. Inefficiency of Nash equilibria. Math. Oper. Res. 11, 1–8.
Facchinei, F., Pang, J.-S., 2003. Finite-Dimensional Variational Inequalities and Complementarity Problems. Two volumes. Springer Verlag, New York.
Frankel, D., Morris, S., Pauzner, A., 2003. Equilibrium selection in global games with strategic complementarities. J. Econom. Theory 108, 1–44.
Govindan, S., Reny, P., Robson, A., 2003. A short proof of Harsanyi's purification theorem. Games Econom. Behav. 45, 369–374.
Granas, A., Dugundji, J., 2003. Fixed Point Theory. Springer Verlag, New York.
Gul, F., Pearce, D., Stacchetti, E., 1993. A bound on the proportion of pure strategy equilibria in generic games. Math. Oper. Res. 18, 548–552.
Harsanyi, J., 1973. Games with randomly disturbed payoffs: A new rationale for mixed-strategy equilibrium points. Internat. J. Game Theory 2, 1–23.
Jackson, M., Rodriguez-Barraquer, T., Tan, X., 2012. ϵ-equilibria of perturbed games. Games Econom. Behav. 75, 198–216.
Kehoe, T., 1980. An index theorem for general equilibrium models with production. Econometrica 48, 1211–1232.
Mas-Colell, A., 1985. The General Theory of Economic Equilibrium: A Differentiable Approach. Cambridge University Press, Cambridge.
Mason, R., Valentinyi, A., 2007. Existence and uniqueness of monotone pure strategy equilibrium in Bayesian games. Mimeo, Southampton.
Mathevet, L., 2010. A contraction principle for finite global games. Econom. Theory 42, 539–563.
McAdams, D., 2003. Isotone equilibrium in games of incomplete information. Econometrica 71, 1191–1214.
McLennan, A., 2014. Advanced Fixed Point Theory for Economics. Technical Report, University of Queensland.
Milgrom, P., Shannon, C., 1994. Monotone comparative statics. Econometrica 62, 157–180.
Milgrom, P., Weber, R., 1982. A theory of auctions and competitive bidding. Econometrica 50, 1089–1122.
Morris, S., Shin, H., 2006. Heterogeneity and uniqueness in interaction games. In: Blume, L., Durlauf, S. (Eds.), The Economy as a Complex Evolving System, III. Oxford University Press, Oxford, pp. 207–242.
Nikaido, H., 1968. Convex Structures and Economic Theory. Academic Press, New York.
Radner, R., Rosenthal, R., 1982. Private information and pure-strategy equilibria. Math. Oper. Res. 7, 401–409.
Reny, P., 2011. On the existence of monotone pure-strategy equilibria in Bayesian games. Econometrica 79, 499–553.
Robinson, S., 1995. Sensitivity analysis of variational inequalities by normal-map techniques. In: Giannessi, F., Maugeri, A. (Eds.), Variational Inequalities and Network Equilibrium Problems. Plenum Press, New York, pp. 257–269.


Robinson, S., 2003. Variational conditions with smooth constraints: structure and analysis. Math. Program. B 97, 245–265.
Simsek, A., Ozdaglar, A., Acemoglu, D., 2008. Local indices for degenerate variational inequalities. Math. Oper. Res. 33, 291–300.
Topkis, D., 1998. Supermodularity and Complementarity. Princeton University Press, Princeton, NJ.
van Damme, E., 1991. Stability and Perfection of Nash Equilibria, second ed. Springer Verlag, Berlin.
Van Zandt, T., Vives, X., 2007. Monotone equilibria in Bayesian games of strategic complementarities. J. Econom. Theory 134, 339–360.
Weinstein, J., Yildiz, M., 2007. A structure theorem for rationalizability with applications to robust predictions of refinements. Econometrica 75, 365–400.