J. theor. Biol. (1998) 193, 335–344 Article No. jt980705
War of Attrition with Individual Differences on RHP

T. Kura* and K. Kura†

* Department of Zoology, Faculty of Science, Kyoto University, Sakyo, Kyoto 606-8502, Japan and † Department of Economics and Information Science, Shotoku University, Gifu, 1-38 Nakauzura, Gifu-city 500, Japan

(Received on 29 August 1997, Accepted on 9 March 1998)
The existence of various kinds of almost continuous variation in any animal population implies that players in competitions can never be perfectly symmetric in any sense. To develop a model that fits this reality, we consider war-of-attrition games in which players differ continuously in resource holding potential (RHP). In our setting the RHP of the opponent is not known. Pure ESS functions and Nash equilibria are obtained, under reasonably natural conditions, as unique solutions of certain differential equations within the class of Lebesgue measurable functions. They are normal in the sense that a higher RHP induces a longer attrition time, which implies that a player with greater RHP always wins. This model includes as a limiting case the conclusions of Maynard Smith (1974, J. theor. Biol. 47, 209–221) and Norman et al. (1977, J. theor. Biol. 65, 571–578), which did not consider individual differences in RHP. Our results suggest that, by changing qualitative differences between players into continuous quantitative differences, some of the mixed ESS solutions previously found in discrete games may degenerate into pure ESS functions. Moreover, we found that the smaller the individual differences in RHP, the smaller is the mean pay-off of most individuals as well as the total pay-off of the population. © 1998 Academic Press
1. Introduction

The optimal strategy in competition for valuable resources necessarily depends on the opponent's strategy. To analyse such situations, we must resort to game theory, for example the war-of-attrition game (Maynard Smith, 1974; Maynard Smith & Parker, 1976; Norman et al., 1977; Bishop & Cannings, 1978; Bishop et al., 1978; Haigh & Rose, 1980; Hammerstein & Parker, 1982; McNamara & Houston, 1989; Nuyts, 1994; Sjerps & Haccou, 1994; Mesterton-Gibbons, 1996; Mesterton-Gibbons et al., 1996). The strategy sets and states of players have usually been assumed to be discretely finite, or reducible to such, but in reality both are often continuous, and whether these assumptions lead to the same conclusions remains uncertain. The existence of intermediate states may well change the mixed strategies and/or the stability of the solutions.
Thus, it is important to consider continuous models reflecting reality more closely than abstract discrete models. The existence of equilibria in continuous models has been considered (e.g. Grafen, 1990a, b; Godfray, 1995; Johnstone, 1994; Johnstone & Grafen, 1992; McAfee & McMillan, 1987), but such models are not widely applied because of their technical difficulties. We seek pure ESS functions and/or Nash equilibria when players differ in resource holding potential (RHP) but not in their evaluation of the resource. We assume that players do not know the opponent's RHP. In this game the strategy set consists of the nonnegative real numbers, and so does the set of player states. Hence even a pure strategy becomes a strategy function, assigning an attrition time to a player of state x, as an ESS or a Nash equilibrium. Although similar settings have been considered by McNamara & Houston (1989) and Mesterton-Gibbons et al. (1996), ours is far more generic in that there are no specific assumptions on the distribution of player types or on functional forms.
2. The Model

Two players fight a war of attrition for an indivisible resource of value V. Each player has a different RHP and, accordingly, a different attrition cost. The RHP value is an abstraction of a player's potential energy, vital power, endurance against damage, etc. Players know their own RHP but not the opponent's. Let x denote RHP, and suppose the values of x among the competing players are independent continuous random variables, all having density function p(x) > 0. When a player's RHP is x, her strategy is the attrition-time function t(x): she fights up until t(x) and gives up thereafter. When the attrition times are equal, a player obtains the resource with probability 1/2. (Since this almost never happens, the assumption is essentially harmless; see Appendix A for details.) The cost incurred by a player with RHP x who fights until t is denoted c(x, t) (∈ C²). Because a longer war of attrition should incur a higher cost, the derivative of this function with respect to the second variable, c_2(x, t) = ∂c(x, t)/∂t, is greater than 0. When the attrition time is 0, the cost is also 0, i.e. c(x, 0) = 0. It is also natural to assume that, for all t, c_2(x, t) is monotonically decreasing with respect to the first variable x, that is, c_{1,2}(x, t) = ∂²c(x, t)/∂x∂t < 0. This implies that when the attrition time t increases incrementally, the additional cost is smaller for a player with a greater RHP. Assuming a < b, from c_{1,2}(x, t) < 0 and c(x, 0) = 0 we obtain
0 > \int_0^t \int_a^b c_{1,2}(x, s)\, dx\, ds = \int_0^t \{c_2(b, s) - c_2(a, s)\}\, ds
 = c(b, t) - c(b, 0) - c(a, t) + c(a, 0) = c(b, t) - c(a, t),   (1)
which means that a greater RHP implies a smaller cost after any length of war. It is also natural that c(x, t) → +∞ as t → ∞: if the attrition time were to become infinitely long, the cost would also become infinitely high. This cost function
c(x, t) is in fact very general, including, say, Kt/x^n (K a constant and n > 0). Let us now define a pure ESS function. When t*(.) occupies a population, the average pay-off of a player with RHP x employing attrition time t is denoted E_x(t, t*), and H_x(t, t*) = E_x(t*(x), t*) − E_x(t, t*). Note that in both E_x(t, t*) and H_x(t, t*) the first variable is a scalar, whereas the second is a function mapping R to R. We say a function t*(.) is a pure ESS function when, for all x and all t (≠ t*(x)), H_x(t, t*) > 0; in other words, for all x, E_x(t, t*) takes a unique maximum at t = t*(x) (Grafen, 1990a, b; Kura et al., 1997). Relaxing these conditions, we define a Nash equilibrium as follows: t*(.) is a Nash equilibrium when, for all x and t, E_x(t, t*) ≤ E_x(t*(x), t*) (Fudenberg & Tirole, 1991). Note that with these definitions a pure ESS function is necessarily a Nash-equilibrium function. We seek the conditions under which a pure ESS function and/or a Nash-equilibrium function exists. It is natural to expect optimal strategies in which a greater RHP value induces a longer attrition time. In fact, a Nash-equilibrium function has no flat interval (Appendix A) and is monotone (Appendix B), continuous (Appendix C), and absolutely continuous (Appendix D). For these reasons, we need only consider attrition functions t(.) in the class of strictly monotonically increasing, absolutely continuous functions.
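The conditions imposed on the cost function above (c_2 > 0, c_{1,2} < 0, c(x, 0) = 0) and inequality (1) can be checked numerically for the family c(x, t) = Kt/x^n mentioned in the text. A minimal Python sketch; the values of K and n are our own illustrative choices, not from the paper:

```python
# Numerical check of the model's assumptions on the cost function for the
# illustrative family c(x, t) = K*t / x**n used in the text.
# K and n below are hypothetical example values, not taken from the paper.

def c(x, t, K=2.0, n=1.5):
    """Cost paid by a player of RHP x who fights until time t."""
    return K * t / x**n

def c2(x, t, K=2.0, n=1.5):
    """Partial derivative dc/dt = K / x**n (> 0: longer wars cost more)."""
    return K / x**n

a, b, t = 1.0, 2.0, 5.0   # two RHP values a < b and an attrition time

# c(x, 0) = 0 for every RHP value.
assert c(a, 0.0) == 0.0 and c(b, 0.0) == 0.0

# c2 > 0, and c2 decreases in x (the cross-derivative condition c_{1,2} < 0).
assert c2(a, t) > c2(b, t) > 0.0

# Inequality (1): a greater RHP implies a smaller total cost, c(b, t) < c(a, t).
assert c(b, t) < c(a, t)
```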
3. Analysis

First, let us seek the conditions for t*(.) to be a Nash-equilibrium function. Suppose that a strictly monotonically increasing continuous strategy function t*(.) is a pure ESS. When t*(.) occupies the population, for every x the pay-off from t*(x) has to be greater than that from any intruding mutant strategy t, i.e. it has to be the greatest. We need only consider intruding mutants t in the closure of the range of t*(.) together with t = 0: if t lies below the range of t*(.), the mutant can never win and its pay-off is smaller than that at t = 0; if t lies above the range of t*(.), its pay-off cannot be greater than that at the supremum of the range. Let us denote Q(x) = \int_x^\infty p(s)\, ds. A strictly monotone continuous function t*(.) always has an inverse function u(.); hence we specify x* such that t*(x*) = t, that is, u(t) = x*.
Then we have

E_x(t, t^*) = \int_{(0, x^*)} (V - c(x, t^*(s))) p(s)\, ds - c(x, t) Q(x^*).   (2)

When x and t*(.) are fixed, (2) should take its maximum at t = t*(x), that is, at x* = x. In other words, denoting H_x(t, t^*) = E_x(t^*(x), t^*) - E_x(t, t^*), for all x and t,

H_x(t, t^*) ≥ 0   (3)

and

H_x(t, t^*) = \int_{(x, x^*)} \{c(x, t^*(s)) - V\} p(s)\, ds + c(x, t) Q(x^*) - c(x, t^*(x)) Q(x).   (4)

Then we define

t'(x)_+ = \liminf_{h \to +0} (t(x + h) - t(x))/h,
t'(x)_- = \limsup_{h \to -0} (t(x + h) - t(x))/h.

By admitting the values ±∞, these quantities are always defined. Because t*(.) is strictly monotonic and continuous, t → t*(x) + 0 ⇔ x* → x + 0 and t → t*(x) − 0 ⇔ x* → x − 0. Taking the limit of H_x(t, t^*)/(x^* - x), we obtain from (3) and (4), when x* > x,

-V p(x) + c_2(x, t^*(x)) Q(x) t^{*\prime}(x)_+ ≥ 0,   (5a)

and when x* < x,

-V p(x) + c_2(x, t^*(x)) Q(x) t^{*\prime}(x)_- ≤ 0.   (5b)

Since a monotonically increasing continuous function is differentiable at almost all points (i.e. except on a set of measure zero), t*'(x)_+ = t*'(x)_- at those points, and we denote the common value by t*'(x). From (5a) and (5b), for a.e. x (almost all x), t*(.) satisfies the differential equation

V p(x) - Q(x) c_2(x, t^*(x)) t^{*\prime}(x) = 0.   (6)

Since t*(.) is a Nash equilibrium and absolutely continuous, the solution of (6) is unique once the initial value is given. For an individual with the least RHP value, i.e. x = 0, the attrition time t*(0) is also 0: because t*(.) is monotonically increasing, this individual never wins, and any positive attrition time would be a waste of time for her. More rigorously, from (2) and the fact that t*(.) is a Nash equilibrium,

0 = E_0(0, t^*) ≤ E_0(t^*(0), t^*) = -c(0, t^*(0)) ≤ -c(0, 0) = 0

implies c(0, t*(0)) = c(0, 0), and hence t*(0) = 0. This becomes the initial value of (6). From Appendices A to D, the candidate Nash-equilibrium solution of (6) with the initial value t*(0) = 0 is unique in the set of measurable functions. Nonetheless, satisfying differential equation (6) is only a necessary condition on t*(.). Let us next show that t*(.) is a Nash equilibrium and, more strongly, a pure ESS function. For all x, if E_x(t, t^*) takes its unique greatest value with respect to the first variable at t = t*(x), then t* is a pure ESS function; this is a sufficient condition. We show that if t*(.) satisfies (6), it also satisfies this sufficient condition. From (2),

\partial E_x(t, t^*)/\partial t = V p(x^*)/t^{*\prime}(x^*) - c_2(x, t) Q(x^*)
 = \{c_2(x^*, t) - c_2(x, t)\} Q(x^*).   (7)

Since Q(x*) > 0 and c_2(x, t) is strictly monotonically decreasing with respect to the first variable x, (7) shows that ∂E_x(t, t*)/∂t is positive when x* < x, that is, when t = t*(x*) < t*(x), and negative when x < x*, that is, when t*(x) < t. This implies that E_x(t, t*) takes its unique greatest value at x* = x, that is, at t = t*(x). Hence the solution of (6) is a pure ESS function.

Let us illustrate with an example, assuming more concrete functions. If the cost of attrition per unit time is constant over time and strictly monotonically decreasing in x, we can write c(x, t) = t g(x), where g(.) is a strictly monotonically decreasing function. Then (6) becomes

V p(x) - Q(x) g(x) t^{*\prime}(x) = 0   (8)

and the solution with the initial value t*(0) = 0 is

t^*(x) = V \int_0^x p(s)/\{Q(s) g(s)\}\, ds.   (9)

For example, if c(x, t) = Kt/x^n (K a constant, n > 0), that is, g(x) = K/x^n, and p(x) = \exp(-x), then (9) becomes

t^*(x) = V x^{n+1} / \{K(n + 1)\}.   (10)

Obviously this is monotonically increasing.
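The worked example can be verified numerically: quadrature of (9) should reproduce the closed form (10). A minimal sketch, assuming the same example functions c(x, t) = Kt/x^n and p(x) = exp(−x); the values of V, K, and n are illustrative choices of our own:

```python
# Quadrature check of the pure ESS function: eq. (9) should reproduce the
# closed form (10), t*(x) = V*x**(n+1) / (K*(n+1)), for the worked example
# c(x, t) = K*t/x**n and p(x) = exp(-x), where Q(x) = exp(-x) so p/Q = 1.
# V, K, n are hypothetical example values, not taken from the paper.

V, K, n = 1.0, 2.0, 1.5

def integrand(s):
    # V * p(s) / (Q(s) * g(s)) with g(s) = K / s**n reduces to V * s**n / K here.
    return V * s**n / K

def t_star_numeric(x, steps=100_000):
    """Trapezoidal quadrature of (9) from 0 to x."""
    h = x / steps
    total = 0.5 * (integrand(0.0) + integrand(x))
    for i in range(1, steps):
        total += integrand(i * h)
    return total * h

def t_star_closed(x):
    return V * x**(n + 1) / (K * (n + 1))

assert abs(t_star_numeric(2.0) - t_star_closed(2.0)) < 1e-6
```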
4. The Density of the Individuals' Attrition Times

Now we return to the general case. Because t*(.) is monotonically increasing, solving inversely, we write y = u(t) = t*^{-1}(t). From (6), f(t), the density of players taking the strategy of attrition time t, becomes

f(t) = p(y)/t^{*\prime}(y) = Q(y) c_2(y, t)/V.   (11)

By transforming (6), we obtain

V p(x)/Q(x) = c_2(x, t^*(x)) t^{*\prime}(x).

Integrating this,

\int_0^x V p(s)/Q(s)\, ds = [-V \log(Q(s))]_0^x = \int_0^{t^*(x)} c_2(u(z), z)\, dz.

Since Q(0) = 1,

-V \log(Q(x)) = \int_0^{t^*(x)} c_2(u(z), z)\, dz,   (12)

that is,

Q(x) = \exp\left(-\int_0^{t^*(x)} c_2(u(z), z)\, dz / V\right)

is concluded. From (11), we get

f(t) = Q(u(t)) c_2(u(t), t)/V = \{c_2(u(t), t)/V\} \exp\left(-\int_0^t c_2(u(z), z)\, dz / V\right).   (13)

Let us now assume that p(x) weakly converges to Dirac's δ_a(x). This is the limiting case of ever smaller individual differences in the players' RHP. We consider the limit of (13). Since u(.) is the inverse function of t*(.), when p(.) → δ_a(.) we have u(t) → a for all t (> 0) (Appendix E), and (13) weakly converges to

f(t) → \{c_2(a, t)/V\} \exp\left(-\int_0^t c_2(a, z)\, dz / V\right) = \{c_2(a, t)/V\} \exp(-c(a, t)/V).   (14)

This equation was derived by Norman et al. (1977). Furthermore, when the cost function is proportional to the attrition time, i.e. c(x, t) = t g(x), (14) becomes

f(t) → \{g(a)/V\} \exp(-t g(a)/V).   (15)

This is the negative exponential distribution derived by Maynard Smith (1974), who considered the simplest version of the war-of-attrition model. In other words, our analysis fully incorporates the pioneering work of Maynard Smith (1974) and Norman et al. (1977).

5. The Expected Pay-off and its Limit

Now let us consider the mean pay-off of the players when all take the ESS. Amongst the mixed ESS derived in the models of Maynard Smith (1974) and Norman et al. (1977), there always exist players whose strategies are zero or close-to-zero attrition times. Hence, by the Bishop & Cannings theorem (Bishop & Cannings, 1978), the pay-off values are equal for all players and equal to 0. With the present model the conclusion is different. When all players in the population take the pure ESS function t*(.), denote by B(x) the expected pay-off of an individual with RHP x, that is, B(x) = E_x(t^*(x), t^*). When x_1 < x_2, from (1), (2), and (7) it is concluded that

B(x_1) = E_{x_1}(t^*(x_1), t^*) < E_{x_2}(t^*(x_1), t^*) < E_{x_2}(t^*(x_2), t^*) = B(x_2).   (16)

In other words, the expected pay-off E_x(t^*(x), t^*) is strictly monotonically increasing with respect to the RHP variable x. Since B(0) = E_0(t^*(0), t^*) = 0, B(x) is always positive for x > 0. Hence we obtain the ordinary conclusion that the higher an individual's RHP, the greater the pay-off it gains.

As before, we consider p(.) → δ_a(.). Let us define t*(x) = w, that is, x = u(w). Then from (12) and u(t) → a (Appendix E) we get, for all w,

-V \log(Q(u(w))) = \int_0^w c_2(u(z), z)\, dz → \int_0^w c_2(a, z)\, dz = c(a, w).

This implies that, for all w,

V \log(Q(u(w))) + c(a, w) → 0.   (17)

From (2), we have the inequality

B(x) = E_x(t^*(x), t^*) = \int_{(0, x)} (V - c(x, t^*(s))) p(s)\, ds - c(x, t^*(x)) Q(x) < V - \int_{(0, x)} c(x, t^*(s)) p(s)\, ds.

Putting x = u(w), we get

B(u(w)) < V - \int_0^{u(w)} c(u(w), t^*(s)) p(s)\, ds.

Changing the variable of integration from s to z via t*(s) = z, that is, s = u(z), the right side becomes

V - \int_0^w c(u(w), z) p(u(z)) u'(z)\, dz → V - \int_0^w c(a, z) p(u(z)) u'(z)\, dz,

and, using (17),

→ V + \int_0^w V \log(Q(u(z))) p(u(z)) u'(z)\, dz.   (18)

Defining L(r) = \int \log(r)\, dr = r(\log(r) - 1), note that L(r) < 0 for 0 < r < 1. Because Q'(x) = -p(x), u(0) = 0, and Q(0) = 1, substituting r = Q(u(z)), formula (18) can be transformed into

V\left(1 - [r(\log(r) - 1)]_{Q(0)}^{Q(u(w))}\right) = -V L(Q(u(w))),

which, using (17), converges to

-V L(\exp(-c(a, w)/V)).   (19)

Putting the above together in a strict form:

Lemma. For any w and ε (> 0), if we take any p(.) sufficiently close to δ_a(.), then

B(u(w)) < ε - V L(\exp(-c(a, w)/V)).   (20)

As w → ∞, c(a, w) → +∞; and as r → 0 (0 < r < 1), L(r) → −0. Therefore, as w → ∞, the right-hand side of (20), ε − V L(exp(−c(a, w)/V)), decreases monotonically and approaches ε. In contrast, when p(.) is fixed, (16) implies that the left-hand side of (20), B(u(w)), is monotonically increasing. All of these
combined, (20) says: "For all w and any ε, if we take any p(.) sufficiently close to δ_a(.), we obtain B(u(w)) ≤ ε." In other words, when p(.) → δ_a(.), the pay-off of the individuals taking the ESS degenerates to zero, however long or short the attrition time w is. That is, when the individual differences in RHP become smaller, the limit of the pay-off naturally coincides with the zero average pay-off obtained by Maynard Smith (1974) and Norman et al. (1977). In this sense, this paper is an extension of that pioneering research. Note that, when p(.) → δ_a(.), the benefit of every individual converges to 0, which in turn implies that the average pay-off of the population, \int_0^\infty B(s) p(s)\, ds, also degenerates to 0.

6. Discussion

The assumptions employed here ensure the unique existence of a normal solution function in the class of Lebesgue measurable functions, one in which a higher RHP value implies a longer attrition time. As Grafen (1990b) suggested, the condition on the cost function, ∂²c(x, t)/∂x∂t < 0, plays the essential role in the derivation of the sufficient conditions and the proofs in the Appendices. That is, the condition that the marginal cost of an increase in the attrition time is smaller for better-situated players is the key element for the existence of a normal pure ESS function and the non-existence of abnormal solutions. Johnstone (1994) suggested that a discontinuous solution may exist in a model with mistakes in the players' signal recognition. The wars of attrition analysed here, however, continue until one of the players retreats, and there is no ambiguity about which player is the winner. Hence a discontinuity in the strategy function is unlikely to exist. In the war-of-attrition models of McNamara & Houston (1989), Mesterton-Gibbons (1996), and Mesterton-Gibbons et al. (1996), there exist realms without any ESS solution. These results of course stemmed from various differences in the assumptions.
Nevertheless, if we compare their assumptions with ours, theirs allow situations in which the condition on the cost function, ∂²c(x, t)/∂x∂t < 0, is not satisfied. This might cause the differences in their conclusions. Also, their models essentially assumed only one-dimensional differences and allowed little freedom in the strategy of each player; this may be another reason. Our model assumed that players do not know the opponent's RHP value. We agree that this is unrealistic in some cases. If players try to estimate the opponent's RHP, then, countering this estimation, strategies that make one's own RHP appear higher than its true value will evolve in the long run. It is not easy to build a model which includes these complex tactics. Yet we obtained the same result from a model in which players display their RHP at some cost, recognize these values, and then decide whether to fight or to flee (Oura & Kura, in prep.). As somewhat expected, there exists a unique solution in the class of Lebesgue measurable functions, and it implies that a higher actual RHP induces a larger amount of display. In fact, this is understandable if we regard the attrition time function t*(.) of the present model as an amount-of-display function. Our results were derived without specific functional forms and are accordingly very robust. The common-sense strategy that a greater RHP induces a stronger action has a universal basis in nature. In this model, the pure ESS function t*(.) depends not only on the cost function c(·,·) but also on the density function p(.). Equations (6), (8), and (9) show that the rate of increase of t*(x) tends to be greater at points where p(x) is concentrated. Since p(.) is the density function of RHP, the smaller the differences in RHP, in other words, the more players there are with similar RHP values, the greater the differences in strategy caused by small RHP differences. More precisely, when p(x) converges to δ_a(x), the ESS function t*(.) satisfies t*(x) → 0 (x < a) and t*(x) → ∞ (x > a) (see the proof in Appendix E). That is, the difference in attrition time between an individual with RHP a − 0 and one with RHP a + 0 becomes infinitely large. Mixed ESS have been found in various war-of-attrition or similar models (e.g. Maynard Smith, 1974; Bishop & Cannings, 1978). However, these models assumed either the non-existence of essential individual differences or, at most, a finite number of states.
The present results suggest that, on the assumption of continuous differences in individual qualities, some of these mixed ESS may degenerate into pure ESS. Similar conclusions were drawn by Selten (1980) and Hammerstein (1981). Since a model is a simplification or abstraction of an intricately fabricated reality, we necessarily neglect some factors as unimportant for the sake of model-building. Yet the inclusion of these neglected factors may well change the mixed ESS obtained previously into pure ESS such as those derived here. The analysis of the expected pay-off in the last section shows that, in competitive situations, more equality in individual competence tends to suppress the benefit of most players and of the population.
Conversely, more inequality in competence increases the total benefit of the group. This can be explained as follows. When the difference in each player's competence is large, inferior players rapidly give up and the cost involved in competition is low. However, when the players have similar competitive ability, competition is more severe, reflecting each player's effort to win by spending more energy than the others; this is because the winner gets the whole resource whereas the loser gets nothing. As a result, the attrition time tends to lengthen compared with the case of greater inequality. This indicates that equality in the players' competence may conflict with the benefit of the whole group in a competitive environment. This "dilemma of the equality in competence" poses a serious and interesting problem not only for population biology but also for game-theoretical approaches to human society. The analytical method of this paper is applicable to much wider game situations with continuous differences among players. In general, it is difficult to prove the non-existence of pathological solutions, but this is not essential; the crucial step is to derive differential equation (6), which is relatively easy. The model presented here is a basic type of war of attrition. By adding various conditions, ESS functions can be analysed under more complicated situations. We thank M. Imafuku for his invaluable comments. We are also grateful to H. Oura for inspiring us.
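The "dilemma of the equality in competence" can be illustrated numerically. The sketch below is our own illustration, not part of the original analysis: it takes RHP to be approximately Normal(a, σ²), builds t*(.) from eq. (9) and B(x) from eq. (2) by quadrature, and compares the population mean pay-off for a wide and a narrow RHP distribution. All parameter values are hypothetical.

```python
import math

# Numerical illustration of the "dilemma of the equality in competence":
# the population mean pay-off under the pure ESS shrinks as the RHP density
# p(.) concentrates around a point a. RHP is taken ~ Normal(a, sigma^2)
# (the mass below 0 is negligible for these parameters); t*(.) comes from
# eq. (9) and B(x) from eq. (2) by quadrature. All values are hypothetical.

V, K, n, a = 1.0, 1.0, 1.0, 2.0   # resource value, cost scale/exponent, RHP centre

def mean_payoff(sigma, N=800):
    """Approximate the population mean pay-off, the integral of B(x) p(x) dx."""
    lo, hi = max(a - 5.0 * sigma, 0.05), a + 5.0 * sigma
    h = (hi - lo) / N
    xs = [lo + i * h for i in range(N + 1)]
    p = [math.exp(-(x - a) ** 2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))
         for x in xs]
    Q = [0.5 * math.erfc((x - a) / (sigma * math.sqrt(2))) for x in xs]  # upper tail

    # t*(x): cumulative trapezoid of (9) with g(x) = K / x**n.
    t = [0.0] * (N + 1)
    for i in range(1, N + 1):
        f0 = V * p[i - 1] * xs[i - 1] ** n / (Q[i - 1] * K)
        f1 = V * p[i] * xs[i] ** n / (Q[i] * K)
        t[i] = t[i - 1] + 0.5 * (f0 + f1) * h

    def c(x, tt):
        return K * tt / x**n

    total = 0.0
    for i, x in enumerate(xs):
        # B(x) from eq. (2): wins against all s < x, pays c(x, t*(x)) on losing.
        win = sum((V - c(x, t[j])) * p[j] for j in range(i)) * h
        B = win - c(x, t[i]) * Q[i]
        total += B * p[i] * h
    return total

m_wide, m_narrow = mean_payoff(0.5), mean_payoff(0.1)
assert m_narrow < m_wide                     # more equal RHPs -> lower mean pay-off
assert -0.05 < m_narrow and m_wide < 0.55    # within [0, V/2] up to numeric error
```

The comparison, rather than the absolute values, is the point: shrinking σ concentrates p(.), lengthens relative attrition times, and drives the mean pay-off toward the zero of the Maynard Smith limit.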
REFERENCES

Bishop, D. T. & Cannings, C. (1978). A generalized war of attrition. J. theor. Biol. 70, 85–124.
Bishop, D. T., Cannings, C. & Maynard Smith, J. (1978). The war of attrition with random rewards. J. theor. Biol. 74, 377–388.
Fudenberg, D. & Tirole, J. (1991). Game Theory. Cambridge, MA: MIT Press.
Godfray, H. C. J. (1995). Signalling of need between parents and young: parent-offspring conflict and sibling rivalry. Am. Nat. 146, 1–24.
Grafen, A. (1990a). Sexual selection unhandicapped by the Fisher process. J. theor. Biol. 144, 473–516.
Grafen, A. (1990b). Biological signals as handicaps. J. theor. Biol. 144, 517–546.
Haigh, J. & Rose, M. R. (1980). Evolutionary game auctions. J. theor. Biol. 85, 381–397.
Hammerstein, P. (1981). The role of asymmetries in animal contests. Anim. Behav. 29, 193–205.
Hammerstein, P. & Parker, G. A. (1982). The asymmetric war of attrition. J. theor. Biol. 96, 647–682.
Johnstone, R. A. (1994). Honest signalling, perceptual error and the evolution of 'all-or-nothing' displays. Proc. R. Soc. Lond. B 256, 169–175.
Johnstone, R. A. & Grafen, A. (1992). The continuous Sir Philip Sidney game: a simple model of biological signalling. J. theor. Biol. 156, 215–234.
Kura, T., Kura, K. & S., T. (1997). The law of payoff consistency: games with continuous differences on resource values. J. Ethol. 15, 95–101.
Maynard Smith, J. (1974). The theory of games and the evolution of animal conflicts. J. theor. Biol. 47, 209–221.
Maynard Smith, J. & Parker, G. A. (1976). The logic of asymmetric contests. Anim. Behav. 24, 159–175.
McAfee, R. P. & McMillan, J. (1987). Auctions and bidding. J. Econ. Liter. 25, 699–738.
McNamara, J. M. & Houston, A. I. (1989). State-dependent contests for food. J. theor. Biol. 137, 457–480.
Mesterton-Gibbons, M. (1996). On the war of attrition and other games among kin. J. Math. Biol. 34, 253–270.
Mesterton-Gibbons, M., Marden, J. H. & Dugatkin, L. A. (1996). On wars of attrition without assessment. J. theor. Biol. 181, 65–83.
Norman, R. F., Taylor, P. D. & Robertson, R. J. (1977). Stable equilibrium strategies and penalty functions in a game of attrition. J. theor. Biol. 65, 571–578.
Nuyts, E. (1994). Testing for the asymmetric war of attrition when only roles and fight duration are known. J. theor. Biol. 169, 1–13.
Selten, R. (1980). A note on evolutionary stable strategies in asymmetric animal conflicts. J. theor. Biol. 84, 93–101.
Sjerps, M. & Haccou, P. (1994). Effects of competition on optimal patch leaving: a war of attrition. Theor. Popul. Biol. 46, 300–318.
APPENDIX A

Theorem 1. Suppose that a Lebesgue measurable function t*(.) is a Nash equilibrium in this game. Then t*(.) has no non-trivial interval on which t*(x) is constant.

Proof. For a subset S of the real numbers R, define T(S) = t*^{-1}(S), and denote the Lebesgue measure of a measurable set A by m(A) = \int_A 1\, dx. Note first that there are at most countably many points t at which the inverse image of the single point {t} has positive Lebesgue measure, i.e. m(T({t})) > 0. For all x and t, the following holds:

E_x(t, t^*) = \int_{T([0,t))} \{V - c(x, t^*(s))\} p(s)\, ds + \int_{T(\{t\})} \{V/2 - c(x, t^*(s))\} p(s)\, ds - \int_{T((t,\infty))} c(x, t) p(s)\, ds.   (A.1)

Hence,

H_x(t, t^*) = E_x(t^*(x), t^*) - E_x(t, t^*)
 = \int_{T([0,t^*(x)))} \{V - c(x, t^*(s))\} p(s)\, ds   (A.2a)
 + \int_{T(\{t^*(x)\})} \{V/2 - c(x, t^*(s))\} p(s)\, ds   (A.2b)
 - \int_{T((t^*(x),\infty))} c(x, t^*(x)) p(s)\, ds   (A.2c)
 - \int_{T([0,t))} \{V - c(x, t^*(s))\} p(s)\, ds   (A.2d)
 - \int_{T(\{t\})} \{V/2 - c(x, t^*(s))\} p(s)\, ds   (A.2e)
 + \int_{T((t,\infty))} c(x, t) p(s)\, ds   (A.2f)
 ≥ 0.

When t → t*(x) + 0, because T([0, t)) → T([0, t*(x)]), T((t, ∞)) → T((t*(x), ∞)), and c(·,·) is continuous, the first term (A.2a) plus the fourth term (A.2d) converges to −\int_{T(\{t^*(x)\})} \{V - c(x, t^*(s))\} p(s)\, ds, and the third term (A.2c) plus the sixth term (A.2f) converges to 0. Since in any upper or lower neighbourhood of t*(x) there always exists t such that m(T({t})) = 0, we may take the limit along a sequence of such t and neglect the fifth term (A.2e). That is, when t → t*(x) + 0,

H_x(t, t^*) → -\int_{T(\{t^*(x)\})} \{V - c(x, t^*(s))\} p(s)\, ds + \int_{T(\{t^*(x)\})} \{V/2 - c(x, t^*(s))\} p(s)\, ds = -(V/2) \int_{T(\{t^*(x)\})} p(s)\, ds.   (A.3)

Since H_x is nonnegative, the Lebesgue measure of T({t*(x)}) must be zero; hence there is no non-trivial interval on which t*(x) is constant.

APPENDIX B

Theorem 2. Suppose that a Lebesgue measurable function t*(.) is a Nash equilibrium in this game. Then t*(.) is strictly monotonically increasing.
Proof. Choose x_1 and x_2 such that x_1 < x_2 and denote t_1 = t*(x_1), t_2 = t*(x_2). By Appendix A the measure of T({t*(x)}) is 0 for every x, so we may neglect integrals over the inverse image of any single point. Then, from (A.1) of Appendix A,

H_{x_1}(t_2, t^*) + H_{x_2}(t_1, t^*) = E_{x_1}(t_1, t^*) - E_{x_1}(t_2, t^*) + E_{x_2}(t_2, t^*) - E_{x_2}(t_1, t^*)
 = \int_{T([0,t_1))} \{V - c(x_1, t^*(s))\} p(s)\, ds - \int_{T((t_1,\infty))} c(x_1, t_1) p(s)\, ds
 - \int_{T([0,t_2))} \{V - c(x_1, t^*(s))\} p(s)\, ds + \int_{T((t_2,\infty))} c(x_1, t_2) p(s)\, ds
 + \int_{T([0,t_2))} \{V - c(x_2, t^*(s))\} p(s)\, ds - \int_{T((t_2,\infty))} c(x_2, t_2) p(s)\, ds
 - \int_{T([0,t_1))} \{V - c(x_2, t^*(s))\} p(s)\, ds + \int_{T((t_1,\infty))} c(x_2, t_1) p(s)\, ds
 = -\int_{T([0,t_1))} \{c(x_1, t^*(s)) - c(x_2, t^*(s))\} p(s)\, ds - \int_{T((t_1,\infty))} \{c(x_1, t_1) - c(x_2, t_1)\} p(s)\, ds
 + \int_{T([0,t_2))} \{c(x_1, t^*(s)) - c(x_2, t^*(s))\} p(s)\, ds + \int_{T((t_2,\infty))} \{c(x_1, t_2) - c(x_2, t_2)\} p(s)\, ds.   (B.1)

If t_1 = t*(x_1) > t_2 = t*(x_2), then

(B.1) = \int_{T([t_2,t_1))} \{c(x_2, t^*(s)) - c(x_1, t^*(s))\} p(s)\, ds + \int_{T((t_2,t_1])} \{c(x_1, t_2) - c(x_2, t_2)\} p(s)\, ds
 + \int_{T((t_1,\infty))} \{c(x_2, t_1) - c(x_1, t_1) + c(x_1, t_2) - c(x_2, t_2)\} p(s)\, ds
 = \int_{T((t_2,t_1))} \{c(x_2, t^*(s)) - c(x_1, t^*(s)) + c(x_1, t_2) - c(x_2, t_2)\} p(s)\, ds
 + \int_{T((t_1,\infty))} \{c(x_2, t_1) - c(x_1, t_1) + c(x_1, t_2) - c(x_2, t_2)\} p(s)\, ds,   (B.2)

neglecting the interval endpoints, whose inverse images have measure zero. Here, for all y and z such that y > z,

c(x_2, y) - c(x_1, y) + c(x_1, z) - c(x_2, z) = \int_z^y \{c_2(x_2, s) - c_2(x_1, s)\}\, ds < 0

(because x_2 > x_1 implies c_2(x_2, s) - c_2(x_1, s) < 0). Hence each integrand in (B.2) is negative, and p(s) > 0 implies that (B.2) is negative. This contradicts

H_{x_1}(t_2, t^*) + H_{x_2}(t_1, t^*) ≥ 0.

That is, t_1 ≤ t_2, which implies that t*(.) is monotonically non-decreasing. From Appendix A, t*(.) has no flat interval, i.e. it is strictly monotonically increasing.

APPENDIX C

Theorem 3. Suppose that a Lebesgue measurable function t*(.) is a Nash equilibrium in this game. Then t*(.) is continuous.

Proof. Let us denote

f(x)_+ = \limsup_{x' \to x+0} f(x'),  f(x)_- = \liminf_{x' \to x-0} f(x').
From Theorem 2, t*(x) is strictly monotonically increasing. Therefore, to show that t*(x) is continuous, we must show t*(x)_+ = t*(x)_- for all x. We consider x* → x ± 0 such that t = t*(x*) converges to t*(x)_+ and t*(x)_-. Since each integrand in (A.2) is continuous with respect to x and t,

0 = H_x(t^*(x), t^*) = H_x(t^*(x)_+, t^*) = H_x(t^*(x)_-, t^*)   (C.1)

is concluded. Choosing x_1, x_2 such that x_1 < x < x_2 and t_1 = t*(x_1) < t_2 = t*(x_2), from (A.2) in Appendix A,

H_{x_1}(t_2, t^*) = E_{x_1}(t_1, t^*) - E_{x_1}(t_2, t^*)
 = -\int_{T((t_1,\infty))} c(x_1, t_1) p(s)\, ds - \int_{T([t_1,t_2))} \{V - c(x_1, t^*(s))\} p(s)\, ds + \int_{T((t_2,\infty))} c(x_1, t_2) p(s)\, ds
 = -\int_{T((t_1,t_2])} c(x_1, t_1) p(s)\, ds - \int_{T([t_1,t_2))} \{V - c(x_1, t^*(s))\} p(s)\, ds + \int_{T((t_2,\infty))} \{c(x_1, t_2) - c(x_1, t_1)\} p(s)\, ds
 = \int_{T((t_1,t_2))} \{c(x_1, t^*(s)) - c(x_1, t_1) - V\} p(s)\, ds + \int_{T((t_2,\infty))} \{c(x_1, t_2) - c(x_1, t_1)\} p(s)\, ds.   (C.2)

If we let x_1 → x − 0 and x_2 → x + 0, then t_1 = t*(x_1) → t*(x)_- and t_2 = t*(x_2) → t*(x)_+. Note that, because t*(.) is monotonically increasing, T((t*(x)_-, t*(x)_+)) = ∅ or {x}, so the first term of (C.2) converges to 0. Also, because E_x(t, t*) is continuous with respect to x and t,

H_{x_1}(t_2, t^*) = E_{x_1}(t_1, t^*) - E_{x_1}(t_2, t^*) → E_x(t^*(x)_-, t^*) - E_x(t^*(x)_+, t^*).

Hence, when x_1 → x − 0 and x_2 → x + 0, both sides of (C.2) converge to

E_x(t^*(x)_-, t^*) - E_x(t^*(x)_+, t^*) = \int_{T((t^*(x)_+,\infty))} \{c(x, t^*(x)_+) - c(x, t^*(x)_-)\} p(s)\, ds.   (C.3)

Here, from (C.1),

E_x(t^*(x)_-, t^*) - E_x(t^*(x)_+, t^*) = -H_x(t^*(x)_-, t^*) + H_x(t^*(x)_+, t^*) = 0

yields

0 = \int_{T((t^*(x)_+,\infty))} \{c(x, t^*(x)_+) - c(x, t^*(x)_-)\} p(s)\, ds.

Since p(s) > 0 and m(T((t*(x)_+, ∞))) > 0, this implies c(x, t*(x)_+) − c(x, t*(x)_-) = 0, and because the cost function c(x, ·) is strictly monotonically increasing with respect to t, we conclude t*(x)_+ = t*(x)_-. That is, t*(.) is continuous.

APPENDIX D

Theorem 4. Suppose that a strictly monotonically increasing continuous function t*(.) is a Nash equilibrium in this game. Then t*(.) is Lipschitz continuous on any bounded domain. Therefore t*(.) is uniformly continuous in the wider sense and absolutely continuous.

Proof. For all x and t such that t < t*(x),

0 ≤ H_x(t, t^*) = \int_{T([0,t^*(x)))} \{V - c(x, t^*(s))\} p(s)\, ds - \int_{T((t^*(x),\infty))} c(x, t^*(x)) p(s)\, ds
 - \int_{T([0,t))} \{V - c(x, t^*(s))\} p(s)\, ds + \int_{T((t,\infty))} c(x, t) p(s)\, ds
 = \int_{T((t,t^*(x)))} \{V - c(x, t^*(s)) + c(x, t)\} p(s)\, ds + \{c(x, t) - c(x, t^*(x))\} \int_{T((t^*(x),\infty))} p(s)\, ds.   (D.1)

We take t in the range of t*(.). Since t*(.) has an inverse function u(.), i.e. u(t) = x*, the above becomes

0 ≤ \int_{(x^*, x)} \{V - c(x, t^*(s)) + c(x, t)\} p(s)\, ds + \{c(x, t) - c(x, t^*(x))\} Q(x).   (D.2)

We now let

F(x, t, s) = \{V - c(x, t^*(s)) + c(x, t)\} p(s)

and transform (D.2):

c(x, t^*(x)) - c(x, t) ≤ \int_{(x^*, x)} F(x, t, s)\, ds / Q(x) ≤ (x - x^*) \{\max_{s \in (x^*, x)} F(x, t, s)\} / Q(x).   (D.3)

By the mean value theorem, there exists w ∈ (t, t*(x)) such that

c(x, t^*(x)) - c(x, t) = (t^*(x) - t) c_2(x, w).   (D.4)

From c_2(x, w) > 0 and (D.3), we obtain

t^*(x) - t^*(x^*) = t^*(x) - t ≤ (x - x^*) \{\max_{s \in (x^*, x)} F(x, t, s)\} / \{Q(x) c_2(x, w)\}.   (D.5)

From the continuity of c(·,·), c_2(·,·), p(.), and t*(.), the factor \{\max_{s \in (x^*, x)} F(x, t, s)\}/\{Q(x) c_2(x, w)\} is bounded on any bounded domain. In other words, the rate of change of t*(x) is locally bounded, i.e. t*(.) is Lipschitz continuous on any bounded domain. Therefore t*(.) is uniformly continuous on any bounded domain (uniformly continuous in the wider sense) and absolutely continuous.

APPENDIX E

Lemma. Let t*(.) be the Nash equilibrium of this game and suppose that p(x) weakly converges to Dirac's δ_a(x). Then u(t), the inverse function of t*(.), converges to a for all t > 0.

Proof. Since it is very difficult to prove this weak-convergence statement with full mathematical rigour, we only sketch the argument. Transforming (6),

V p(x) / \{c_2(x, t^*(x)) Q(x)\} = t^{*\prime}(x).   (E.1)

Because c_2(x, t) > 0 is continuous, for any bounded domain D in R² = {(x, t)} there exists a small m (> 0) such that c_2(x, t) > m on D. Hence, within D,

V p(x) / \{m Q(x)\} > t^{*\prime}(x).   (E.2)

We take a sufficiently large domain. When p(.) → δ_a(.), Q(x) = \int_x^\infty p(s)\, ds → 1 for x < a, so the integral of the left-hand side of (E.2) over [0, x], which equals −(V/m) log(Q(x)), converges to 0. That is, t*(x) → 0 for x < a.

Next, we consider t*(a + ε) for a sufficiently small ε. Since c_2(x, t) is continuous, for any large bounded domain D there always exists an M such that 0 < c_2(x, t) < M on D. Defining D = {(x, t): 0 ≤ t < T, 0 ≤ x < a + 1}, while t*(x) < T the following holds from (6):

V p(x) / \{M Q(x)\} < t^{*\prime}(x).   (E.3)

Noting that t*(0) = 0 and log(Q(0)) = 0, we integrate both sides to obtain

(V/M) [-\log(Q(s))]_0^x = -(V/M) \log(Q(x)) < t^*(x).   (E.4)

Since p(.) → δ_a(.) implies Q(a + ε) → 0, we conclude t*(a + ε) → ∞. Because t*(.) is monotonically increasing, t*(a + ε) necessarily surpasses T; this holds for any T and ε. Hence, when p(.) → δ_a(.), t*(x) → ∞ for a < x. Since t*(.) is a monotonically increasing continuous function and u(.) is its inverse, together with t*(x) → 0 (x < a), u(t) → a is finally concluded.
implies Q(a + e) : 0, Since p(.) : da (.) t*(a + e) : a is concluded. Because t*(.) is monotonically increasing, t*(a + e) necessarily surpasses T. This is true for any T and e. Hence, when p(.) : da (.), t*(x) : a(a Q x). Since t*(.) is a monotonically increasing continuous function and u(.) is the inverse function of t*(.), together with t*(x) : 0(x Q a), u(t) : a is finally concluded.