Computing Nash equilibria by iterated polymatrix approximation

Journal of Economic Dynamics & Control 28 (2004) 1229 – 1241 www.elsevier.com/locate/econbase

Srihari Govindan (a), Robert Wilson (b,*)

(a) Department of Economics, The University of Western Ontario, London, Ont., Canada N6A 5C2
(b) Stanford Business School, Stanford University, Stanford, CA 94305-5015, USA

* Corresponding author. E-mail addresses: [email protected] (S. Govindan), [email protected] (R. Wilson).
Abstract

This article develops a new algorithm for computing Nash equilibria of N-player games. The algorithm approximates a game by a sequence of polymatrix games in which the players interact bilaterally. We provide sufficient conditions for local convergence to an equilibrium and report computational experience. The algorithm converges globally and rapidly on test problems, although in theory it is not failsafe because it can stall on a set of codimension 1. But it can stall only at an approximate equilibrium with index +1, thus allowing a switch to the global Newton method, which is slower but can fail only on a set of codimension 2. Thus, the algorithm can be used to obtain a fast start for the more reliable global Newton method. © 2003 Elsevier B.V. All rights reserved.

JEL classification: C63; C72

Keywords: Game; Equilibrium; Algorithm; Homotopy; Polymatrix approximation

1. Introduction

The standard algorithms for calculating Nash equilibria of N-player games use homotopy methods to trace equilibria in the graph of the Nash correspondence. The equilibria are traced above a path of games from a starting point that is a game whose equilibrium is known, to the terminal point that is the target game whose equilibria are wanted (Eaves, 1972; Eaves and Schmedders, 1999).
The most efficient implementations trace the path through adjoining simplices in a pseudomanifold, as in Eaves (1972, 1984), Eaves and Lemke (1981), Eaves and Scarf (1975), Scarf and Hansen (1973), and the version implemented in the package Gambit by McKelvey et al. (1996); cf. McKelvey and McLennan (1996). In some versions the homotopy is implicit because calculations are conducted in the strategy space, as in the adaptations of the basic Lemke and Howson (1964) algorithm by Lemke (1965), Shapley (1974), and Wilson (1992) for 2-player games and its extensions by Rosenmüller (1971) and Wilson (1971) for N-player games, and as in the interior-point method developed by van den Elzen and Talman (1991). A referee noted that even McKelvey and Palfrey's (1995) method of computing quantal-response equilibria can be interpreted as a homotopy with respect to the parameter that is the error level of quantal responses.

In previous articles (Govindan and Wilson, 2002, 2003) we describe applications of Smale's (1976) global Newton method to computing equilibria of N-player games in normal form and extensive form. We show that its path is a homotopy whose image in the strategy space coincides when N = 2 with the path of the Lemke–Howson algorithm. The homotopy is well defined: over each generic ray through the target game, if the starting game has a unique equilibrium (and thus its index is +1), the corresponding path in the equilibrium graph is one dimensional and without branches or loops. After reaching the target game, the path finds a sequence of equilibria of the target game whose indices alternate between +1 and −1 if the game is generic. Our purpose in this article is to improve the performance of the global Newton method by providing a 'fast start' that jumps quickly to the vicinity of the target game. The next sections suggest the motivation for our approach.

1.1. Motivation

For games with more than two players, homotopy methods follow paths in the strategy space that are nonlinear. Traversing nonlinear paths requires many small steps, or larger steps require error-correction procedures, or pseudomanifold representations require successive refinements to obtain reasonable accuracy. The computational burden is compounded severely, moreover, by the characteristic feature that the path in the strategy space over the ray from the starting game to the target game has many twists and reversals. That is, the path typically involves many changes in the support of the strategies, and further, the trajectory reverses orientation where the index changes sign. A generic game along the path can have a number of equilibria that is exponential in the number of strategies (von Stengel, 1999), and the path can wind back and forth through many equilibria before making progress toward the target game. Even for games with only five players and five pure strategies per player, it is not unusual to see homotopy-based algorithms idle for many iterations at each of several intermediate games over which the trajectory oscillates back and forth through many equilibria (30 in some examples) with alternating indices +1 and −1 before resuming forward progress.

This is consistent with Kohlberg and Mertens' (1986, Theorem 1) structure theorem, which says that the Nash graph is homeomorphic to the space of games, but it is discouraging to discover that the number of folds of the graph above a typical game – not the target game, just an intermediate game on the homotopy's path to reach it – can be exponentially large, and not just in theory but sometimes in practice.

To improve computational efficiency, one needs an initial procedure that reaches the vicinity of the target game quickly. That is, if one can leap quickly to a pair (G̃, σ) in the Nash graph, where G̃ is a game near the target game G and σ is one of its equilibria with index +1, then from the point (G̃, σ) the global Newton method can be applied efficiently to find all equilibria of the target game G accessible via the homotopy along the line segment from G̃ through G that continues from (G̃, σ) (or, if σ is not an isolated equilibrium of G̃, from a perturbation of G̃ to ensure genericity of the associated ray from G through G̃). This is much quicker than tracing all equilibria above the path from the usual starting point, which is a game sufficiently far on the ray from G that it has a unique equilibrium. This tactic relies on the presumption that (G̃, σ) is on the path from the usual starting point on the (generic) ray through G̃ and G, and thus σ is not on a loop off the path of the global Newton method: the theorem in Section 4 justifies this presumption when G̃ is sufficiently close to G. After this fast start, the burden of computing all equilibria of G accessible via the ray, as the global Newton method (or any other algorithm based explicitly or implicitly on a homotopy) winds its way through the folds of the graph above the target game, is worth the effort.
In fact, the lower dimensionality of the space of payoffs …
2. Formulation

Denote the set of players by N = {1, …, N}, where N > 1. Use S_n to denote player n's finite set of pure strategies, and Σ_n player n's simplex of mixed strategies. Let S = ∏_n S_n, Σ = ∏_n Σ_n, and M = Σ_n m_n, where m_n = |S_n|.
The space Γ of payoff arrays for normal-form games with player set N and pure strategy set S is the Euclidean space of dimension N × ∏_n m_n. For a game in Γ whose normal form is the (N+1)-dimensional array A = ((A^n_s)_{s∈S})_{n∈N} of payoffs, the expected payoff to player n from his pure strategy s ∈ S_n, when the other players use the mixed strategies σ^{n'} ∈ Σ_{n'}, is

    G^n_s(σ; A) = Σ_{t ∈ S(−n)} A^n_{s,t} ∏_{n'≠n} σ^{n'}_{t(n')},

where S(−n) = ∏_{n'≠n} S_{n'}. The M-vector of all payoffs is G(σ; A). A profile σ ∈ Σ is an equilibrium of A if, for each player n, σ^n_s > 0 only if

    G^n_s(σ; A) = max_{s'∈S_n} G^n_{s'}(σ; A).
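For concreteness, the following is a minimal Python sketch of this payoff computation and of the best-reply test above; the list-of-arrays data layout and the function names are merely expository choices, not part of any particular implementation.

```python
import numpy as np

def expected_payoffs(A, sigma):
    """Return the list [G^n(sigma; A)]_n, where G^n has shape (m_n,) and
    G^n[s] is player n's expected payoff to pure strategy s against the
    other players' mixed strategies.

    A     : list of N arrays; A[n] has shape (m_1, ..., m_N) and stores A^n_s.
    sigma : list of N vectors; sigma[k] has shape (m_k,).
    """
    N = len(A)
    G = []
    for n in range(N):
        T = A[n]
        # Contract (average out) every axis except player n's own, from the
        # highest axis index downward so remaining indices keep their places.
        for k in reversed(range(N)):
            if k != n:
                T = np.tensordot(T, sigma[k], axes=([k], [0]))
        G.append(T)          # what is left is the vector over s in S_n
    return G

def is_equilibrium(A, sigma, tol=1e-8):
    """Best-reply test: sigma[n][s] > 0 only if G^n[s] is maximal over S_n."""
    G = expected_payoffs(A, sigma)
    return all(np.all(G[n][sigma[n] > tol] >= G[n].max() - tol)
               for n in range(len(A)))
```

With N = 2 this reduces to the familiar bimatrix payoffs A^1 σ^2 and (A^2)ᵀ σ^1, up to the axis convention chosen here.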

A polymatrix game (or a bimatrix game if N = 2) is defined by the payoff B^n_{s,s'} to player n from each pure strategy s ∈ S_n and each pure strategy s' ∈ S_{n'} of each other player n' ≠ n, and n's payoff from a profile of mixed strategies is the sum of these bilateral payoffs: G^n_s(σ; B) = Σ_{n'≠n} Σ_{s'∈S_{n'}} B^n_{s,s'} σ^{n'}_{s'}. Let Γ̄ denote the space of polymatrix games with player set N and pure strategy set S.
The linearity of the payoff function for a polymatrix game enables one to use a variant of the Lemke–Howson algorithm to calculate equilibria. The smaller dimensionality of polymatrix games, and the remarkable speed of the Lemke–Howson algorithm, are two motivations for using iterative approximation via polymatrix games. A third motivation is the natural relationship between a general game A ∈ Γ and a family of approximating polymatrix games in Γ̄. In fact, one can consider this family to be the collection of polymatrix games corresponding to the hyperplanes tangent to the payoff function G(·; A) at various mixed strategies σ ∈ Σ. The Jacobian ∇G(σ; A) of the partial derivatives of the payoff function G(·; A): Σ → R^M at σ ∈ Σ is an M × M matrix with m_n × m_n-dimensional blocks of zeros along its diagonal. This Jacobian matrix is also the Jacobian matrix of the polymatrix game B ∈ Γ̄ for which

    B^n_{s,s'} = ∂G^n_s(σ; A)/∂σ^{n'}_{s'}

for s' ∈ S_{n'} and n' ≠ n. Hereafter we say that the polymatrix game B approximates the game A at σ if their Jacobians agree at σ; that is, ∇G(σ; B) is proportional to ∇G(σ; A). The polymatrix game B that approximates A at σ has a natural interpretation: player n's bilateral interaction with each other player n' ≠ n is approximated by averaging the payoff A^n over the strategies of all other players n'' ≠ n, n', using their mixed strategies σ^{n''} as the weights for computing the average payoff obtained from each combination of the pure strategies of players n and n'. Thus,

    B^n_{s,s'} = Σ_{t ∈ S(−n,n')} A^n_{s,s',t} ∏_{n''≠n,n'} σ^{n''}_{t(n'')}.
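A minimal sketch of this bilateral averaging, in the same list-of-arrays representation as the earlier sketch: each block B^n_{s,s'} is obtained by contracting the payoff array A^n against the mixed strategies of every player other than n and n'.

```python
import numpy as np

def polymatrix_blocks(A, sigma):
    """Blocks B^n_{s,s'} = dG^n_s / d sigma^{n'}_{s'} of the polymatrix game
    approximating A at sigma; B[n][k] has shape (m_n, m_k), and the diagonal
    blocks B[n][n] are zero."""
    N = len(A)
    sizes = [len(s) for s in sigma]
    B = [[np.zeros((sizes[n], sizes[k])) for k in range(N)] for n in range(N)]
    for n in range(N):
        for k in range(N):
            if k == n:
                continue
            T = A[n]
            # Average out every player other than n and k (highest axis first).
            for j in reversed(range(N)):
                if j not in (n, k):
                    T = np.tensordot(T, sigma[j], axes=([j], [0]))
            # The two surviving axes appear in increasing order of player index.
            B[n][k] = T if n < k else T.T
    return B

def game_jacobian(A, sigma):
    """Assemble the M x M Jacobian of G(.; A) at sigma from the blocks;
    dividing the result by N-1 gives the matrix DG_A(sigma) defined below."""
    return np.block(polymatrix_blocks(A, sigma))
```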

Hereafter we use DG_A(σ) ≡ [1/(N−1)] ∇G(σ; A), so that G(σ; A) = DG_A(σ) · σ. We can then interpret DG_A(σ) as the Jacobian of the polymatrix game in Γ̄ that approximates the payoff of A ∈ Γ at σ. Similarly, we represent a polymatrix game B ∈ Γ̄ by the constant Jacobian DG_B = ∇G(σ; B) of its payoff function. Given the payoff array A ∈ Γ for a game, and any vector g ∈ R^M, define the perturbed game A ⊕ g ∈ Γ via (A ⊕ g)^n_{s,t} = A^n_{s,t} + g_s for each t ∈ S(−n). Similarly, when B is a polymatrix game, define the perturbed polymatrix game B ⊕ g ∈ Γ̄ via (B ⊕ g)^n_{s,s'} = B^n_{s,s'} + g_s/(N−1) for each strategy s' ∈ S_{n'} of another player n' ≠ n. Note that when J is the Jacobian matrix DG_B of a polymatrix game B ∈ Γ̄, the corresponding Jacobian of B ⊕ g is J ⊕ g, where (J ⊕ g)_{s,s'} = J_{s,s'} + g_s/(N−1) for pure strategies s and s' of different players. The condition for σ to be an equilibrium of A, namely σ^n_s > 0 only if

    G^n_s(σ; A) = max_{s'∈S_n} G^n_{s'}(σ; A),

is equivalent, when J = DG_B = DG_A(σ), to σ^n_s > 0 only if

    J^n_s σ = max_{s'∈S_n} J^n_{s'} σ

(here J^n_s denotes the row of J corresponding to player n's pure strategy s), which is the definition that σ is an equilibrium of the polymatrix game B whose Jacobian is J. An immediate corollary is that if σ is a regular equilibrium of A, then its index (+1 or −1) agrees with the index of σ as an equilibrium of B, since in either case the index is the sign of the determinant of a Jacobian matrix derived from DG_B, as in Gul et al. (1993).

To solve a polymatrix game B ∈ Γ̄, we use the variant of the Lemke–Howson algorithm described in Govindan and Wilson (2003). Choose a generic ray g ∈ R^M and a scalar λ* > 0 sufficiently large that the polymatrix game B ⊕ λ*g has the unique pure-strategy equilibrium for which σ^n_s = 1 for s = arg max_{s'∈S_n} g_{s'}. Then trace the one-dimensional path of equilibria σ(λ) of the games {B ⊕ λg | λ ∈ [0, λ*]}, parameterized by λ, as λ ↓ 0. Tracing the equilibria σ(λ) is a linear operation consisting of a series of pivots (Gaussian eliminations) that continues until λ = 0.
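For illustration, the perturbation B ⊕ g and the equilibrium test above can be expressed directly on a stacked M-vector representation. The sketch below assumes only that rows and columns of the Jacobian are grouped player by player with block sizes m_1, …, m_N; it does not attempt to implement the Lemke–Howson pivoting itself.

```python
import numpy as np

def perturbed_jacobian(J, g, sizes):
    """Jacobian J ⊕ g of the perturbed polymatrix game B ⊕ g: add g_s/(N-1)
    to J_{s,s'} whenever s and s' belong to different players; `sizes` lists
    the block sizes (m_1, ..., m_N) and g is an M-vector (NumPy array)."""
    N = len(sizes)
    Jg = J + g[:, None] / (N - 1)
    start = np.concatenate(([0], np.cumsum(sizes)))
    for n in range(N):
        a, b = start[n], start[n + 1]
        Jg[a:b, a:b] = J[a:b, a:b]      # undo the addition on own-player blocks
    return Jg

def is_polymatrix_equilibrium(J, sigma, sizes, tol=1e-8):
    """Check sigma (a stacked M-vector) against the condition above:
    sigma^n_s > 0 only if (J sigma)^n_s is maximal within player n's block."""
    payoff = J @ sigma
    start = np.concatenate(([0], np.cumsum(sizes)))
    for n in range(len(sizes)):
        a, b = start[n], start[n + 1]
        block = payoff[a:b]
        if np.any(sigma[a:b][block < block.max() - tol] > tol):
            return False
    return True
```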

By continuing the homotopy beyond the first terminus one can compute all equilibria accessible via the ray g. For instance, if the graph of equilibria is S-shaped above the line of games {B ⊕ λg} in a neighborhood of λ = 0 then the homotopy finds all three equilibria, and their indices are +1, −1, and +1 in the sequence of crossings of λ = 0. This algorithm is very fast; e.g., in our implementation in APL on a Pentium III computer, a polymatrix game with five players, each with six pure strategies, requires an average of 0.03 s to reach the first equilibrium of the target game B, and the other accessible equilibria are found quickly after that. In contrast, the global Newton method averages 1116 s to reach the first equilibrium of the general game A that B approximates.

3.1. The iterative polymatrix approximation algorithm

A naive version of the IPA algorithm for a general game A ∈ Γ proceeds by iteratively improving the strategy at which the approximation is computed, as in the following procedure:

1. Start by specifying a mixed strategy σ̂ at which the game A is approximated initially. Also, use a generic vector g ∈ R^M to define the ray for the homotopy used by the Lemke–Howson algorithm.
2. Approximate A by the polymatrix game B for which DG_B = DG_A(σ̂).
3. Use the Lemke–Howson algorithm to find the first equilibrium σ of B reached by the homotopy that traces equilibria along the path of games {B ⊕ λg} as λ ↓ 0.
4. If σ is sufficiently close to σ̂ then either terminate with an approximate equilibrium, or invoke the global Newton method to find all equilibria of A accessible via the induced ray g(σ) (defined below).
5. Otherwise, move σ̂ 'toward' σ and return to Step 2.

The merits of this procedure lie in Steps 3 and 4. As mentioned, Step 3 is very fast. Step 4 reflects the fact that the procedure can be interrupted to allow continuation via the global Newton method, which is slower but more reliable. Continuation is possible for two reasons. First, the Lemke–Howson algorithm produces an equilibrium σ of a game A ⊕ g(σ) near A with the index +1, which is a key property required. To see this, recall that for each λ on the path of the homotopy, the equilibrium σ(λ) obtained by the Lemke–Howson method is a fixed point of the map f: Σ → Σ for which f(x) = r(x + [DG_B ⊕ λg] · x), where r: R^M → Σ is the retraction defined by Gul et al. (1993). Because σ = σ(0) is the first equilibrium of B on the trajectory as λ ↓ 0, its index is +1 (and depends only on the Jacobian). Second, using the lemma, σ is also an equilibrium of the game A ⊕ g(σ) ∈ Γ for which g(σ) = [DG_B − DG_A(σ)]σ. Therefore, the global Newton method can be continued from the starting point (A ⊕ g(σ), σ) along the line {A ⊕ λg(σ) | 0 ≤ λ ≤ 1}, as again λ decreases from 1 to 0, to find all equilibria of A accessible via the ray g(σ).
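The naive loop of Steps 1–5 can be organized as in the following sketch. Here first_equilibrium_LH is only a placeholder for the Lemke–Howson variant of Step 3 (it is not implemented here), jacobian_at stands for any routine returning DG_A(σ) (for example, a 1/(N−1) scaling of the game_jacobian sketch above), and the convex-combination move is merely one simple reading of Step 5's instruction to move σ̂ 'toward' σ.

```python
import numpy as np

def induced_ray(DG_B, DG_A_sigma, sigma):
    """g(sigma) = [DG_B - DG_A(sigma)] sigma: sigma is an equilibrium of the
    perturbed game A ⊕ g(sigma), the starting point handed to the global
    Newton method for continuation."""
    return (DG_B - DG_A_sigma) @ sigma

def naive_ipa(jacobian_at, first_equilibrium_LH, sigma0, g,
              step=0.1, tol=1e-6, max_iter=200):
    """Naive IPA loop (Steps 1-5), with sigma stored as a stacked M-vector.

    jacobian_at(sigma)         -> M x M matrix DG_A(sigma)           (Step 2)
    first_equilibrium_LH(J, g) -> stacked M-vector: first equilibrium of the
                                  polymatrix game with Jacobian J reached on
                                  the ray g; a placeholder for the variant of
                                  the Lemke-Howson algorithm (Step 3).
    """
    sigma_hat = np.asarray(sigma0, dtype=float)
    for _ in range(max_iter):
        DG_B = jacobian_at(sigma_hat)                       # Step 2
        sigma = first_equilibrium_LH(DG_B, g)               # Step 3
        if np.linalg.norm(sigma - sigma_hat) < tol:         # Step 4
            return sigma, induced_ray(DG_B, jacobian_at(sigma), sigma)
        sigma_hat = sigma_hat + step * (sigma - sigma_hat)  # Step 5
    return sigma_hat, None                                  # no convergence
```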

There are two major deficiencies of the naive version above. The first is that Step 5 requires a sophisticated improvement procedure, and the second is that Step 4 may require continuation via the global Newton method because the procedure can cycle or stall. Each of these is addressed below, and then in Section 4 we present sufficient conditions for convergence.

3.2. An improvement procedure

The main difficulty in Step 5 is that the Jacobian J of the map σ̂ → σ is difficult to compute. Our implementation obtains good results with the simple expedient of using the rule of false position. The off-diagonal entries of the Jacobian are taken to be zero, and each entry J_ss on the diagonal is approximated by the rate of change of the displacement σ_s − σ̂_s as a function only of σ̂_s, as measured over the previous two iterations of Step 3. This is equivalent to a quasi-Newton method based on a local approximation of the Jacobian of the map σ̂ → σ. Thus, from Step 5 one could return to Step 2 with the new mixed strategy σ̂ used for the approximation being the value of σ̂ − δJ⁻¹·[σ − σ̂] computed in Step 5, where J is the diagonal matrix just described and δ ∈ (0, 1) is a small stepsize parameter. This is the improvement procedure implied by the local quasi-Newton method.

In fact, however, the restriction to the strategy space Σ complicates matters, so it is better to apply this same procedure to values of the transformed variable z ∈ R^M defined by z ≡ σ + DG_B σ, and then to recover σ = r(z) by using the retraction r defined by Gul et al. (1993). That is, r(z) is the unique value of σ ∈ Σ for which [σ′ − σ]·[z − σ] ≤ 0 for all σ′ ∈ Σ. Computing the retraction is easy and almost instantaneous, and experience has shown that working in the full space R^M of z-values is more robust. Thus, Step 1 begins with an initial choice ẑ, Step 2 uses σ̂ = r(ẑ), and Step 5 returns to Step 2 with ẑ − δJ⁻¹·[z − ẑ], where z = σ + DG_B σ using σ from Step 3, and the diagonal of J measures the rate of change of z as a function of ẑ over the previous two iterations. Although there are techniques for estimating the full Jacobian of the map ẑ → z using data from multiple previous iterations, and more powerful extrapolation methods (e.g., Richardson extrapolation), we have not implemented these; our numerical results reflect only the simple procedure of estimating the diagonal from the previous two iterations.
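Both ingredients are easy to code: the retraction characterized above is the nearest-point projection onto the product of simplices, which decomposes into independent projections onto each player's simplex, and the diagonal secant estimate uses only the previous two iterates. The sketch below is one reading of these rules; the guard constants are arbitrary.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the unit simplex {x >= 0, sum x = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(v + theta, 0.0)

def retraction(z, sizes):
    """r(z): the point sigma of the product of simplices characterized by
    [sigma' - sigma]·[z - sigma] <= 0 for all sigma' (the nearest point),
    computed block by block on the per-player sub-vectors of z."""
    out, start = [], 0
    for m in sizes:
        out.append(project_simplex(z[start:start + m]))
        start += m
    return np.concatenate(out)

def secant_step(z_hat, z, z_hat_prev, z_prev, delta=0.01, eps=1e-12):
    """Diagonal quasi-Newton (false-position) update in z-space: estimate the
    diagonal rate of change of z with respect to z_hat from the previous two
    iterates and return the new z_hat."""
    d = np.where(np.abs(z_hat - z_hat_prev) > eps, z_hat - z_hat_prev, eps)
    diag = (z - z_prev) / d
    diag = np.where(np.abs(diag) > eps, diag, 1.0)   # guard near-zero slopes
    return z_hat - delta * (z - z_hat) / diag
```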

3.3. Cycling

In Section 4 we establish sufficient conditions for local convergence of local Newton methods. However, it is apparently fundamental that the IPA algorithm can stall – which is why we emphasize the importance of the ability to continue from an intermediate point using the global Newton method. The reason a stall can occur is that the trajectory σ(σ̂), interpreted as a function of the strategy σ̂ used to obtain the polymatrix approximation B for which DG_B = DG_A(σ̂), can encounter a "membrane" (a subset of codimension 1) on either side of which Step 5 implies movement toward the membrane. Thus the IPA algorithm cycles near such a membrane. A membrane of codimension 1 is not an impediment to the version of the global Newton method due to Keenan (1981) because, as explained in Govindan and Wilson (2003), its trajectory passes through singularities of codimension 1, and singularities of codimension 2 can be avoided by perturbing the game.

To illustrate how a stall can occur, suppose the equilibrium graph for the polymatrix approximation B ⊕ λg at σ̂ is S-shaped above a neighborhood of λ = 0: then Step 5's movement of σ̂ toward the first equilibrium σ of B, say at the top of the S, can move the graph sufficiently that the top two equilibria disappear, and then the first equilibrium of the next polymatrix approximation is at the bottom of the S – and a further movement in that direction makes the first equilibrium reappear at the top of the S. (As in the global Newton method, such cycles might be avoided by reversing orientation: e.g., upon reaching a membrane, Step 5 improves σ̂ by retreating away from (not toward) σ and continuing on the branch of the S with index −1, namely, by choosing σ in Step 3 to be the second equilibrium reached by the Lemke–Howson algorithm. We have not developed this approach because of the apparent advantage of moving out of a stall by using the global Newton method – and our main purpose here is to present an algorithm that enables a fast start for the global Newton method.)

4. Radius of convergence

In this section we provide a sufficient condition for our algorithm to converge locally to an equilibrium. More specifically, we construct a map whose fixed points, if they exist, are locally stable equilibria under the algorithm. We say that a subset of games is generic if its complement in Γ has lower dimension, and we say that a game with payoff array A is generic if it lies in a generic subset of Γ. Because DG_A(·) is a polynomial, and equilibria are defined by polynomial equalities and inequalities, all the sets and functions we deal with here are semi-algebraic.

Let E_Γ̄ be the graph of the equilibrium correspondence over the domain Γ̄ of polymatrix games, and let proj: E_Γ̄ → Γ̄ be the natural projection. Also let S be the unit sphere in R^M. According to Govindan and Wilson (2003), for each polymatrix game B there exists a lower-dimensional semi-algebraic subset C(B) ⊂ S of critical points such that for each g ∈ S \ C(B) the subset proj⁻¹({B ⊕ λg | λ ≥ 0}) of E_Γ̄ is a finite union of one-dimensional manifolds on which, for all but a finite number of values of λ, all equilibria of B ⊕ λg are regular (i.e., isolated, each with index +1 or −1).

Fix a game A ∈ Γ such that all its equilibria are regular, which is a property of generic games. Let X = {(σ, g) ∈ Σ × S | g ∈ C(DG_A(σ))} be the set of critical pairs. Then, applying the generic local triviality theorem (cf. Bochnak et al., 1998) to the projection map from X to Σ, we have that dim(X) < (M − N) + (M − 1). Once again by the generic local triviality theorem, applied now to the projection map from X to S, we have that for generic g the dimension of {σ | (σ, g) ∈ X} is less than M − N. Therefore, for each g in a generic subset of S the dimension of {σ | (σ, g) ∈ X} is also less than M − N. Since the set E(A) of equilibria of the game A is finite, it is also true that for generic g ∈ S, (σ, g) ∉ X if σ ∈ E(A). To sum up so far, for each generic g ∈ S, there exists a semi-algebraic, open and dense subset Σ* of Σ such that for each σ ∈ Σ* the equilibrium correspondence over {DG_A(σ) ⊕ λg | λ ≥ 0} is a finite union of one-dimensional manifolds. Moreover, E(A) ⊂ Σ*.

Now fix a generic g ∈ S, and let f: Σ* → Σ be the function that assigns to each σ ∈ Σ* the first equilibrium of DG_A(σ) on the manifold that is reached as λ decreases from ∞ to 0. By construction, a fixed point of f is an equilibrium with index +1 of the game A. However, inasmuch as f might not be extendable to a continuous function over Σ, it might not have a fixed point. It can be shown that if the complement of Σ* has codimension 2 or more, then f has a fixed point. But this condition, while satisfied by an open set of games, is not generic.

Let Z* = r⁻¹(Σ*), where r is the retraction map defined in Section 3. Define F: Z* → R^M by F(z) = f(r(z)) + G(f(r(z)); B), where B = DG_A(r(z)). Since F is a semi-algebraic function on an open semi-algebraic subset, it is differentiable …
5. Pseudocode and numerical results

The following pseudocode includes the amendments discussed in Section 3. We add here two optional features, in Steps 4 and 7 below, to speed calculations (a code sketch of the loop follows the list):

1. Select a stepsize δ ∈ (0, 1) and a generic vector g ∈ R^M. Start with an initial ẑ ∈ R^M.
2. Compute σ̂ = r(ẑ).
3. Compute DG_A(σ̂), interpreted as the Jacobian of the polymatrix game B that approximates A at σ̂.
4. If the solution for σ̂ remains optimal using the support of σ in the previous iteration then skip to Step 6.

Table 1
Summary of computational experience (mean times in seconds)

N \ m_n      2             4             6            8         10            12            14
 3           0.52          0.10          2.47         5.44      18.15         (*10)281.12   45.16
 4           1.14          4.14          21.62        168.66    (*29)405.90
 5           3.82          19.56         (*1)405.04
 6           7.99          207.60
 7           19.61         (*57)623.02
 8           39.67
 9           68.41
10           192.30
11           (*2)468.36
12           (*21)788.87
5. Use the Lemke–Howson algorithm to find the equilibrium σ of B with index +1 that is the first equilibrium of B reached by the homotopy that traces equilibria along the path of games {B ⊕ λg} as λ ↓ 0.
6. Set z = σ + DG_A(σ̂)σ.
7. If the angle between ẑ − σ̂ and z − σ̂ is acute then rescale ẑ − σ̂ to match z − σ̂, which is innocuous because such rescaling has no effect …
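One possible arrangement of this loop in code is sketched below, with the Lemke–Howson solver and the Jacobian routine passed in as placeholders (they are not implemented here) and with the optional shortcuts of Steps 4 and 7 omitted; the routines retraction and the diagonal secant update are those sketched in Section 3.2, and jacobian_at is assumed to return DG_A(σ) so that Step 6 reads literally.

```python
import numpy as np

def ipa(jacobian_at, first_equilibrium_LH, retract, sizes, z0, g,
        delta=0.01, tol=1e-6, max_iter=500):
    """Sketch of the IPA main loop in z-space (Steps 1-7), omitting the
    support-reuse shortcut of Step 4 and the rescaling of Step 7."""
    z_hat = np.asarray(z0, dtype=float)
    z_prev = zh_prev = None
    for _ in range(max_iter):
        sigma_hat = retract(z_hat, sizes)                    # Step 2
        DG = jacobian_at(sigma_hat)                          # Step 3: DG_A(sigma_hat)
        sigma = first_equilibrium_LH(DG, g)                  # Step 5 (placeholder)
        z = sigma + DG @ sigma                               # Step 6
        if np.linalg.norm(sigma - sigma_hat) < tol:
            return sigma                                     # approximate equilibrium of A
        if z_prev is None:
            z_new = z_hat + delta * (z - z_hat)              # first pass: plain relaxation
        else:                                                # later passes: diagonal secant
            d = np.where(np.abs(z_hat - zh_prev) > 1e-12, z_hat - zh_prev, 1e-12)
            slope = (z - z_prev) / d
            diag = np.where(np.abs(slope) > 1e-12, slope, 1.0)
            z_new = z_hat - delta * (z - z_hat) / diag
        z_prev, zh_prev, z_hat = z, z_hat, z_new
    return retract(z_hat, sizes)                             # stalled or not converged
```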
Table 1 summarizes our computational experience. Examples were solved for each pair (N, m_n) with N ≤ 4, and at least 20 were solved for each pair with N > 4. Each example included in the table was solved with ‖σ − σ̂‖ < 10⁻⁶ and maximum payoff error < 10⁻⁶.¹ The long times for large examples are due to the many multiplications required in Step 3 to evaluate the Jacobian of the payoff function for a normal-form game. The time required can be much less when the game is specified in extensive form. Also, the times required to reach pure-strategy equilibria are generally much shorter.

Although the theory indicates that the IPA algorithm can stall, we have not found a definitive example. However, this favorable computational experience does not obviate the role of the global Newton method, which we still see as the most powerful and reliable. The IPA algorithm finds only a single equilibrium, whereas the global Newton method finds all equilibria accessible via the designated ray g. The advantages of finding numerous equilibria (in some examples, over 30) compensate for its slower speed, provided a fast start is obtained from the IPA algorithm. Therefore, our view is that the IPA algorithm is best interpreted as a fast start for the global Newton method, which can then be invoked to find all equilibria accessible via the incoming ray g.

¹ A time marked with (*n) indicates a sample truncated by n examples in which convergence with the required accuracy was obtained in 1000 to 10 000 s. In particular, for (N, m_n) = (3,12), (4,10), (7,4), (12,2) respectively, there were 10, 29, 57, 21 additional examples that converged within 10 000 s, and the average times for the full samples were 634, 1046, 2869, 1695 s. A total of 31 examples among 1066 did not converge with the required accuracy within 10 000 s (of these, 15 were for (N, m_n) = (4,10) and 12 for (4,12)). Median times average 87% of the mean times shown in the table, and 77% for the full sample. Previous test runs indicated that all the smaller examples would be solved successfully with the larger stepsize δ = 0.05 in about half the time.

Acknowledgements

This work was funded in part by grants from the Social Sciences and Humanities Research Council of Canada and the National Science Foundation of the United States.

References

Bochnak, J., Coste, M., Roy, M.F., 1998. Real Algebraic Geometry. Springer, Berlin.
Eaves, B.C., 1972. Homotopies for the computation of fixed points. Mathematical Programming Study 3, 25–37.
Eaves, B.C., 1984. A Course in Triangulations for Solving Equations with Deformations. Springer, Berlin.
Eaves, B.C., Lemke, C., 1981. Equivalence of LCPs and PLSs. Mathematics of Operations Research 6, 475–494.
Eaves, B.C., Scarf, H., 1975. The solution of systems of piecewise linear equations. Mathematics of Operations Research 1, 1–27.
Eaves, B.C., Schmedders, K., 1999. General equilibrium models and homotopy methods. Journal of Economic Dynamics and Control 23, 1249–1279.
Govindan, S., Wilson, R., 2002. Structure theorems for game trees. Proceedings of the National Academy of Sciences, USA 99, 9077–9080.
Govindan, S., Wilson, R., 2003. A global Newton method to compute Nash equilibria. Journal of Economic Theory 110, 65–86.

Gul, F., Pearce, D., Stacchetti, E., 1993. A bound on the proportion of pure strategy equilibria in generic games. Mathematics of Operations Research 18, 548–552.
Harsanyi, J., 1975. The tracing procedure: a Bayesian approach to defining a solution for n-person noncooperative games. International Journal of Game Theory 4, 61–94.
Herings, P.J.J., Peeters, R.J.A.P., 2001. A differentiable homotopy to compute Nash equilibria of n-person games. Economic Theory 18, 159–185.