Accepted Manuscript

On the convergence of finite state mean-field games through Γ-convergence

Rita Ferreira, Diogo A. Gomes

PII: S0022-247X(14)00173-5
DOI: 10.1016/j.jmaa.2014.02.044
Reference: YJMAA 18304

To appear in: Journal of Mathematical Analysis and Applications

Received date: 18 June 2013

Please cite this article in press as: R. Ferreira, D.A. Gomes, On the convergence of finite state mean-field games through Γ-convergence, J. Math. Anal. Appl. (2014), http://dx.doi.org/10.1016/j.jmaa.2014.02.044
On the convergence of finite state mean-field games through Γ-convergence

Rita Ferreira∗ and Diogo A. Gomes†

February 21, 2014
Abstract

In this paper, we study the long-time convergence (trend to equilibrium) problem for finite state mean-field games using Γ-convergence. Our techniques are based on the observation that an important class of mean-field games can be seen as the Euler-Lagrange equation of a suitable functional. Therefore, by a scaling argument, one can convert the long-time convergence problem into a Γ-convergence problem. Our results generalize previous results on long-time convergence for finite state problems.
R. Ferreira was partially supported by the Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) through grants SFRH/BPD/81442/2011 and PEst-OE/MAT/UI0297/2011 (CMA). D. Gomes was partially supported by CAMGSD-LARSys through FCT-Portugal and by grants PTDC/MAT-CAL/0749/2012, UTA-CMU/MAT/0007/2009, PTDC/MAT/114397/2009, and UTAustin/MAT/0057/2008.
1 Introduction
Mean-field games were developed in the engineering community by Peter Caines and his co-workers [HMC06, HCM07] and, independently and at about the same time, by Pierre-Louis Lions and Jean-Michel Lasry [LL06a, LL06b, LL07a, LL07b]. This class of problems attempts to understand the limiting behavior of systems involving a very large number of identical rational agents whose interactions are described by differential games. Mean-field games arise in a number of applications, including social network dynamics [GMS13], growth theory in economics [LLG10a, ML11], environmental policy [ALT, LL07b], price formation [LL07a], non-renewable resources, oil production, and sustainable development models [LLG10b]. An important class of mean-field games concerns problems with a very large number of agents that can switch between a finite number of states, Id = {1, 2, 3, ..., d}. Each player is allowed to switch between the states by controlling the switching rate of a continuous-time Markov chain. These problems have attracted the attention of various researchers; see [GMS10, GMS13, Gue11b, Gue11a].

∗ Departamento de Matemática, Instituto Superior Técnico, 1049-001 Lisboa, Portugal, and Centro de Matemática e Aplicações of the F.C.T.-U.N.L., Quinta da Torre, 2829-516 Caparica, Portugal. e-mail: [email protected]; [email protected]
† Center for Mathematical Analysis, Geometry, and Dynamical Systems, Departamento de Matemática, Instituto Superior Técnico, 1049-001 Lisboa, Portugal, and King Abdullah University of Science and Technology (KAUST), CSMSE Division, Thuwal 23955-6900, Saudi Arabia. e-mail: [email protected]

A very natural question in mean-field games concerns their long-time behavior. This problem was first addressed in the discrete-time, finite-state setting in [GMS10]. It was then investigated in continuous time: for finite state spaces in [GMS13] and for continuous state spaces in [CLLP12]. The arguments in those papers rely on certain uniform convexity and monotonicity hypotheses. In this paper, we use a different approach, namely Γ-convergence. The notion of Γ-convergence was introduced by De Giorgi in the 1970s; it is a notion of functional approximation that respects the minimization process, that is, under mild hypotheses, the infima of a sequence of functionals converge to the infimum of the limit functional. Moreover, the minimization problem associated with the limit functional has a solution. In the present work, we start by observing that certain discrete state mean-field games are in fact the Euler-Lagrange equation of a suitable functional. Then we use a Γ-convergence argument to establish the convergence to a stationary mean-field game. We would like to stress that, to the best of our knowledge, this is the first work where Γ-convergence techniques are used within mean-field games. This method allows us to extend the results in [GMS13], as it requires weaker hypotheses. Furthermore, we believe that a wider class of mean-field games may be tackled by similar methods; in particular, it may be possible to address a class of continuous state problems.

This paper is organized as follows: in Section 2 we describe the set-up of finite state mean-field games.
The mean-field equations are given either, in the time-dependent case, by a system of ODEs (2.4) together with initial-terminal data (2.5), or, in the stationary case, by the algebraic equations (2.6). Then, in Section 3, we consider the class of potential mean-field games, which are mean-field games that have a variational formulation. More precisely, under condition (3.1), the mean-field equations (2.4) and (2.5) can be seen as the Euler-Lagrange equation of the functional (3.6). A number of estimates for discrete state mean-field games are recalled in Proposition 4.3, in Section 4, following [GMS13]. We then reformulate, in Section 5, by scaling, the mean-field equations in a form that is particularly convenient for the use of Γ-convergence methods, namely equations (5.4), together with the scaled variational principle (5.12). Several results on Γ-convergence are recalled in Section 6. The main convergence result is then proved in Section 7. We show, in Theorem 7.1, that the functional (5.12) Γ-converges to (5.13). In Corollary 7.2, we establish convergence of the associated infima and minimizers. We end Section 7 by relating the limit minimization problem with the stationary one. In particular, we provide conditions under which a sequence of solutions of (5.1)–(5.2) converges to a stationary solution of (2.6). This result holds under more general conditions than the ones in [GMS13], since we do not require uniform convexity or uniform monotonicity, just strict convexity and monotonicity. Also, we believe that our techniques may extend to continuous models, and therefore it may be possible to address long-time convergence for a different class of models and conditions than the ones considered in [CLLP12].
2 Finite state mean-field games
In this section, we review some of the results from [GMS13] concerning finite state mean-field games. For convenience, we use the same notation and conventions. We consider an (infinite) population of identical agents, each of which has a state in Id. The states evolve randomly in time by following a controlled continuous-time Markov chain. Each player controls its switching rate in order to optimize a certain functional. The distribution of the players among the different states is given by a probability vector θ ∈ P(Id), where P(Id) is the probability simplex

P(Id) := { θ ∈ Rd : θ^1 + ... + θ^d = 1, θ^i ≥ 0 for all i ∈ Id }.

We fix a reference player and denote its state at time t by the random variable i_t. The process i_t is a continuous-time Markov chain whose switching rate from state i_t to a state j is denoted by α^j(t). The only information available to this reference player about the other players' states is the probability vector θ. The objective of each player is to minimize a certain functional, which is the same for every player since all players are identical. This objective functional is composed of two terms, a running cost and a terminal cost. The running cost of the reference player is determined by a function c : Id × P(Id) × (R_0^+)^d → R. This running cost c(i, θ, α) depends on the state i of the player, the distribution θ of the population among the states, and the transition rates α^j the player uses to change from state i to state j. The reference player has a terminal cost ψ : Id → R. Note that a function ψ : Id → R is naturally identified with a vector in Rd; however, in this setting it is more natural to consider ψ as a scalar function on a finite set. Given the distribution of the other players for all times, the objective of the reference player is to minimize

v^i(t) := inf_α E^α_{i_t = i} [ ∫_t^T c(i_s, θ(s), α(s)) ds + ψ(i_T) ],   (2.1)
where the infimum is taken over all piecewise continuous, progressively measurable controls α : [t, T] → (R_0^+)^d, and E^α_{i_t = i} is the expectation conditioned on the event i_t = i given (the transition rate) α. We suppose that c is a continuous function and that the map α ↦ c(i, θ, α) is convex and does not depend on α^i. We define the generalized Legendre transform of the function c(i, θ, ·) as

h(z, θ, i) := min_{μ ∈ (R_0^+)^d} { c(i, θ, μ) + Σ_{j=1}^d μ^j (z^j − z^i) }   (2.2)
           = min_{μ ∈ (R_0^+)^d} { c(i, θ, μ) + μ · Δ_i z },
where, for each i ∈ Id, Δ_i : Rd → Rd denotes the difference operator with respect to i, that is,

Δ_i z := (z^1 − z^i, ..., z^d − z^i)   for all z = (z^1, ..., z^d) ∈ Rd.   (2.3)

We note that z ↦ h(z, θ, i) is a concave function.
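The difference operator Δ_i is elementary but used throughout the paper; the following sketch (our illustration, not the paper's) implements (2.3) and checks two properties used later: the i-th component of Δ_i z always vanishes, and Δ_i is unchanged when a multiple of 1 = (1, ..., 1) is added to z.

```python
import numpy as np

def delta(i, z):
    """Difference operator from (2.3): Δ_i z = (z^1 - z^i, ..., z^d - z^i).

    Indices are 0-based here, while the paper numbers the states 1..d.
    """
    return z - z[i]

z = np.array([1.0, -2.0, 0.5])
dz = delta(0, z)                    # component i is always zero
dz_shifted = delta(0, z + 3.0)      # Δ_i ignores shifts along 1 = (1, ..., 1)
```

The shift invariance is what later allows the decomposition of Section 5 to leave the arguments of h unchanged.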
We suppose that h is locally Lipschitz and differentiable in z. As shown in [GMS13], any solution to −v̇^i = h(Δ_i v, θ, i) satisfying v(T) = ψ is the value function for (2.1), where v̇ stands for dv/dt. Additionally, if we define

α*_j(Δ_i z, θ, i) := ∂h/∂z^j (Δ_i z, θ, i),

the feedback strategy α*_j(Δ_i v, θ, i) is an optimal switching policy for a player in state i. The mean-field Nash equilibrium hypothesis assumes that every player uses this switching strategy. This leads to the following system:

θ̇^i = Σ_{j=1}^d θ^j α*_i(Δ_j v, θ, j),
−v̇^i = h(Δ_i v, θ, i),   (2.4)

together with the initial-terminal conditions

v^i(T) = ψ^i,   θ(0) = θ_0,   (2.5)

where θ_0 is the initial distribution of players. In addition to the time-dependent problem, we will also need to consider stationary solutions, defined next.

Definition 2.1. A triplet (θ̄, v̄, λ̄) ∈ P(Id) × Rd × R is called a stationary solution of (2.4) if

Σ_{j=1}^d θ̄^j α*_i(Δ_j v̄, θ̄, j) = 0,
h(Δ_i v̄, θ̄, i) = λ̄,   (2.6)

for all i ∈ Id.

If (θ̄, v̄, λ̄) is a stationary solution for the mean-field game equations, then (θ̄, v̄ − λ̄t·1), where 1 := (1, ..., 1), solves (2.4). In [GMS13], the convergence of solutions of the time-dependent problem to stationary solutions was studied using strong convexity and monotonicity hypotheses.
3 Potential mean-field games

An important class of examples that we consider in this paper is that of potential mean-field games, as discussed in [GMS13] and [Gue11b]. In these mean-field games, (2.4) can be regarded as the Euler-Lagrange equation of a suitable functional. For continuous state spaces, the variational principles discussed in this section are the analog of the results in [GPSM12, GSM11]. Suppose h has the form

h(z, θ, i) = h̃(z, i) + f(θ, i),   (3.1)
where h̃ : Rd × Id → R and f : Rd × Id → R is the gradient of a convex function. More precisely, we suppose that there exists a convex function F : Rd → R such that ∂F/∂θ^i = f(θ, i). Let H : R^{2d} → R be given by

H(v, θ) := Σ_{i=1}^d θ^i h̃(Δ_i v, i) + F(θ) = θ · h̃(Δ_· v, ·) + F(θ),   (3.2)

where h̃(Δ_· z, ·), with z ∈ Rd, represents the vector in Rd whose i-th coordinate is h̃(Δ_i z, i). A direct computation shows that (2.4) can be written as

∂H/∂v^j = θ̇^j,
∂H/∂θ^j = −v̇^j.   (3.3)

This means that the flow generated by equation (2.4) is Hamiltonian. If the function F is strictly convex in θ, then the Hamiltonian H is strictly convex in θ. This allows us to consider the Legendre transform

L(v, v̇) := sup_{θ ∈ Rd} { −v̇ · θ − H(v, θ) }
         = sup_{θ ∈ Rd} { −(v̇ + h̃(Δ_· v, ·)) · θ − F(θ) } = F*(v̇ + h̃(Δ_· v, ·)),   (3.4)

where, as the second equality shows, F* denotes the transform F*(p) := sup_{θ ∈ Rd} { −p · θ − F(θ) }.
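The chain of equalities in (3.4) fixes the sign convention F*(p) = sup_θ { −p·θ − F(θ) }. A quick numerical check of this convention, with the illustrative choice F(θ) = |θ|²/2 (ours, not the paper's): the supremum is attained at θ = −p and equals |p|²/2.

```python
import numpy as np

def F(theta):
    # illustrative strictly convex potential; not taken from the paper
    return 0.5 * np.sum(theta**2)

def F_star(p, grid=np.linspace(-5.0, 5.0, 2001)):
    # F*(p) = sup_theta { -p.theta - F(theta) }; for this separable F
    # the supremum decouples coordinate by coordinate
    return sum(np.max(-pi * grid - 0.5 * grid**2) for pi in p)

p = np.array([1.0, -2.0])
exact = 0.5 * np.sum(p**2)   # closed form |p|^2/2, maximizer theta = -p
approx = F_star(p)
```

Note that this quadratic F* is not componentwise non-increasing; that extra requirement enters only later, to keep the measure ϑ non-negative.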
Moreover, assuming in addition that F has superlinear growth at infinity, then, by the properties of the Legendre transform, given a solution (υ, ϑ) of (2.4) satisfying (2.5), it holds that

ϑ(t) = −∇F*( υ̇(t) + h̃(Δ_· υ(t), ·) ).   (3.5)

In particular,

θ_0 = −∇F*( υ̇(0) + h̃(Δ_· υ(0), ·) ).

From this we conclude that any such υ is a critical point of the functional

∫_0^T F*( v̇ + h̃(Δ_· v, ·) ) dt − θ_0 · v(0),   (3.6)

where we look for critical points v that have a fixed boundary condition at T, namely v(T) = ψ. Conversely, let υ be a critical point of the functional (3.6) and let ϑ be given by (3.5) with v replaced by υ. Then (υ, ϑ) satisfies (2.4) and (2.5). In order for every component of ϑ to be non-negative, we require F* to be non-increasing in each coordinate, that is,

∂F*/∂p^j (p) ≤ 0

for all p ∈ Rd and j ∈ Id. From the Euler-Lagrange equation, we have that

Σ_{i=1}^d ϑ̇^i = 0;

thus ϑ(t) is a probability vector for all t.

In the stationary setting, for every fixed λ ∈ R, we consider simply the problem of minimizing

F*( h̃(Δ_· v, ·) − λ1 ) − λ,   (3.7)

where the minimization is performed over all v ∈ Rd satisfying Σ_{i=1}^d v^i = 0, and, we recall, 1 = (1, ..., 1) ∈ Rd. If v̄ is a critical point of this problem, then setting

θ̄^j := − ∂F*/∂p^j ( h̃(Δ_· v̄, ·) − λ1 ),   j ∈ Id,

we conclude that (θ̄, v̄, λ) satisfies (2.6). In particular, if λ̄ ∈ R is such that

− Σ_{j=1}^d ∂F*/∂p^j ( h̃(Δ_· v̄, ·) − λ̄1 ) = 1,   (3.8)

then (θ̄, v̄, λ̄) is a stationary solution in the sense of Definition 2.1. We further observe that such a λ̄ exists: if we minimize (3.7) over λ ∈ R, then, denoting by λ̄ a corresponding critical point, we conclude that (3.8) holds.

We will also need the following observation. For fixed λ ∈ R, consider the problem of minimizing

∫_0^1 F*( h̃(Δ_· v, ·) − λ1 ) dt − λ   (3.9)

among all continuous functions v : [0, 1] → Rd satisfying Σ_{i=1}^d v^i(t) = 0. By a simple application of Jensen's inequality, taking into account that F* is a convex, componentwise non-increasing function and that h̃(·, i), i ∈ Id, is a concave function, it is possible to show that it suffices to consider minimizers of (3.9) in the class of constant functions v. Therefore, it is enough to look at minimizers of (3.7).
4 Estimates for finite state mean-field games
We now recall a number of estimates from [GMS13] concerning finite state mean-field games. We start with two definitions.

Definition 4.1. Let v ∈ Rd. In Rd/R we define the norm

‖v‖ := inf_{λ ∈ R} |v + λ1|_∞.

It can be checked that, for all v ∈ Rd,

‖v‖ = ( max_{i∈Id} v^i − min_{i∈Id} v^i ) / 2.

Definition 4.2. For u ∈ Rd, let ū := (1/d) Σ_{j=1}^d u^j. We say that h : Rd × P(Id) × Id → R is contractive if there exists M > 0 such that, if ‖v‖ > M, then the two following conditions hold for all θ ∈ P(Id) and i ∈ Id:

(Δ_i v)^j ≤ 0 for all j ∈ Id implies h(Δ_i v, θ, i) − h̄(Δ_· v, θ, ·) < 0,   (4.1)
(Δ_i v)^j ≥ 0 for all j ∈ Id implies h(Δ_i v, θ, i) − h̄(Δ_· v, θ, ·) > 0,   (4.2)

where, following the bar notation above, h̄(Δ_· v, θ, ·) = (1/d) Σ_{j=1}^d h(Δ_j v, θ, j).
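A brute-force check (our illustration) of the identity stated after Definition 4.1: the quotient norm inf_λ |v + λ1|_∞ equals (max_i v^i − min_i v^i)/2, the infimum being attained at λ = −(max + min)/2.

```python
import numpy as np

def quotient_norm(v, lambdas=np.linspace(-10.0, 10.0, 20001)):
    """Approximate inf over λ of |v + λ·1|_∞ by a grid search."""
    return np.min(np.max(np.abs(v[None, :] + lambdas[:, None]), axis=1))

v = np.array([3.0, -1.0, 0.0])
closed_form = (v.max() - v.min()) / 2.0    # identity below Definition 4.1
brute = quotient_norm(v)                   # optimal λ is -(max+min)/2 = -1
```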
Many mean-field games are contractive. For instance, in [GMS13] the authors prove that to the running cost

c(i, θ, α) = Σ_{j=1}^d (α^j)²/2 + f(θ, i),   (4.3)

with f(θ, i) continuous in θ ∈ P(Id), corresponds a contractive Hamiltonian:

h(Δ_i v, θ, i) = f(θ, i) − (1/2) Σ_{j=1}^d [ (v^i − v^j)^+ ]².   (4.4)
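For the quadratic running cost (4.3), the minimization in (2.2) decouples coordinate by coordinate and can be solved in closed form, giving (4.4) and the optimal rates α*_j(Δ_i v, θ, i) = (v^i − v^j)^+. The sketch below (our illustration) verifies the closed form against a brute-force minimization over rate vectors μ ≥ 0.

```python
import numpy as np

def hamiltonian(i, v, f_i):
    """h(Δ_i v, θ, i) for the quadratic cost (4.3), i.e. formula (4.4)."""
    pos = np.maximum(v[i] - v, 0.0)        # (v^i - v^j)^+ for each j
    return f_i - 0.5 * np.sum(pos**2)

def optimal_rates(i, v):
    """α*_j = ∂h/∂z^j evaluated at z = Δ_i v, which equals (v^i - v^j)^+."""
    return np.maximum(v[i] - v, 0.0)

# brute-force check of (2.2): min over μ ≥ 0 of Σ_j (μ_j²/2 + μ_j (v^j - v^i))
v, f_i, i = np.array([2.0, 0.0, 3.0]), 1.0, 0
z = v - v[i]                               # Δ_i v
grid = np.linspace(0.0, 10.0, 10001)
brute = f_i + sum(np.min(0.5 * grid**2 + grid * zj) for zj in z)
```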
For contractive mean-field games, the following result was established in [GMS13].

Proposition 4.3. Suppose that h : Rd × P(Id) × Id → R given by (2.2) is contractive. Then:

(a) for M large enough, the set { v ∈ Rd : ‖v‖ < M } × P(Id) is invariant backwards in time under the flow of equation (2.4);
(b) there exists a stationary solution of (2.4).
5 Scaling

In order to study the long-time behaviour of mean-field games, we introduce a scaled version of (2.4), where ε = 1/T:

ε θ̇_ε^i = Σ_{j=1}^d θ_ε^j α*_i(Δ_j v_ε, θ_ε, j),
−ε v̇_ε^i = h(Δ_i v_ε, θ_ε, i),   (5.1)

together with the initial-terminal conditions

v_ε^i(1) = ψ^i,   θ_ε(0) = θ_0.   (5.2)

We can assume, without loss of generality, that Σ_{i=1}^d ψ^i = 0. We also observe that the scaling in time does not change the bounds in Proposition 4.3. Hence, if (v_ε, θ_ε) solves (5.1)–(5.2) with h as in Proposition 4.3, then (see [GMS13])

sup_{t∈[0,1]} sup_{ε>0} ‖v_ε(t)‖ < +∞.   (5.3)
Assume that h is as in Proposition 4.3. In order to write the scaled version of the functional (3.6) associated with (5.1)–(5.2) for potential mean-field games as in Section 3, in a form convenient for the use of Γ-convergence, we decompose v_ε as follows. Let λ_ε ∈ R, u_ε : [0, 1] → Rd, and w_ε : [0, 1] → R be defined by

λ_ε := ∫_0^1 (1/d) Σ_{i=1}^d h(Δ_i v_ε, θ_ε, i) dt,

w_ε(t) := (1/d) Σ_{i=1}^d v_ε^i(t) − (λ_ε/ε)(1 − t),

u_ε^i(t) := v_ε^i(t) − w_ε(t) − (λ_ε/ε)(1 − t).

Observing that Δ_i u_ε = Δ_i v_ε for all i ∈ Id, (5.1) becomes

ε θ̇_ε^i = Σ_{j=1}^d θ_ε^j α*_i(Δ_j u_ε, θ_ε, j),
λ_ε − ε ẇ_ε − ε u̇_ε^i = h(Δ_i u_ε, θ_ε, i).   (5.4)
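Two algebraic identities behind (5.4) can be checked mechanically: for the decomposition above, Σ_i u_ε^i(t) = 0 for all t, and Δ_i u_ε = Δ_i v_ε. The snippet below verifies both on synthetic data (v is an arbitrary smooth stand-in of our own, not a solution of (5.1)).

```python
import numpy as np

eps, lam, d = 0.1, 0.7, 3
t = np.linspace(0.0, 1.0, 101)
v = np.stack([np.sin(t + k) + k * t for k in range(d)])   # stand-in for v_eps

w = v.mean(axis=0) - (lam / eps) * (1.0 - t)              # w_eps(t)
u = v - w - (lam / eps) * (1.0 - t)                        # u_eps(t); row i = u^i
```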
We claim that

sup_{ε>0} |λ_ε| < +∞,   sup_{t∈[0,1]} sup_{ε>0} ‖u_ε(t)‖ < +∞,   (5.5)

Σ_{i=1}^d u_ε^i(t) = 0 for all t ∈ [0, 1] and ε > 0,   (5.6)

u_ε^i(1) = ψ^i for all i ∈ Id and ε > 0,   (5.7)

sup_{ε>0} ‖ε u̇_ε‖_∞ < +∞,   (5.8)

w_ε(0) = 0 and w_ε(1) = 0 for all ε > 0,   (5.9)

sup_{ε>0} ‖ε ẇ_ε‖_∞ < +∞.   (5.10)

In fact, (5.5) is a consequence of the hypotheses on h, of (5.3), and of the equality ‖v_ε‖ = ‖u_ε‖. Condition (5.6) follows from the definitions of u_ε and w_ε. On the other hand, since Σ_{i=1}^d v_ε^i(1) = Σ_{i=1}^d ψ^i = 0, we deduce the second condition in (5.9), which in turn yields (5.7) in view of (5.2). We now notice that

ẇ_ε = (1/d) Σ_{i=1}^d v̇_ε^i(t) + λ_ε/ε = −(1/(εd)) Σ_{i=1}^d h(Δ_i v_ε, θ_ε, i) + λ_ε/ε,   (5.11)

from which, together with (5.3), the first estimate in (5.5), and the hypotheses on h, we obtain (5.10). Moreover, integrating (5.11) over [0, 1] and using the equality w_ε(1) = 0 already proved and the definition of λ_ε, we get w_ε(0) = 0. Thus, (5.9) holds. Finally, (5.8) follows from the identity

ε u̇_ε^i = −h(Δ_i v_ε, θ_ε, i) − ε ẇ_ε + λ_ε,

having in mind the uniform bounds established above. Thus, (5.5)–(5.10) hold.

We now observe that the system of equations (5.4), together with (5.5)–(5.10), suggests that in the limit ε → 0 we have w_ε → 0 and (θ_ε, u_ε, λ_ε) → (θ̄, ū, λ̄), where (θ̄, ū, λ̄) solves (2.6). From the variational point of view, and for potential mean-field games as in Section 3 (see (3.6)), observing that θ_0 · λ1 = λ since θ_0 ∈ P(Id), we look for
minimizers of

∫_0^1 F*( ẇ1 + ε u̇ + h̃(Δ_· u, ·) − λ1 ) dt − ε θ_0 · u(0) − λ   (5.12)

over λ ∈ R, u : [0, 1] → Rd, and w : [0, 1] → R according to (5.5)–(5.10). At least formally, the limit of the functionals (5.12) as ε → 0 is

∫_0^1 F*( ẇ1 + h̃(Δ_· u, ·) − λ1 ) dt − λ,   (5.13)

which corresponds to (3.9) provided that w does not depend on t. In particular, the boundary conditions are lost in the limiting procedure. To justify this limiting procedure rigorously, we need to use Γ-convergence techniques, which we now proceed to address.
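The formal passage from (5.12) to (5.13) can be observed numerically: for a fixed smooth triple (u, w, λ) with Σ_i u^i = 0 and w(0) = w(1) = 0, the terms ε u̇ and ε θ_0 · u(0) disappear as ε → 0. All data below are illustrative choices of our own: F*(z) = Σ_i e^{−z_i} is convex and componentwise non-increasing, and h̃ is taken from the contractive example (4.4) with f ≡ 0.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 20001)
dt = t[1] - t[0]
d = 2
u = np.stack([np.sin(2 * np.pi * t), -np.sin(2 * np.pi * t)])  # sum_i u^i = 0
w = t * (1.0 - t)                                              # w(0) = w(1) = 0
lam, theta0 = 0.3, np.array([0.5, 0.5])
udot = np.gradient(u, dt, axis=1)
wdot = np.gradient(w, dt)

def F_star(z):                       # convex, componentwise non-increasing
    return np.exp(-z).sum(axis=0)

def h_tilde(u):                      # example (4.4) with f = 0
    return np.stack([-0.5 * (np.maximum(u[i] - u, 0.0) ** 2).sum(axis=0)
                     for i in range(d)])

def F_eps(eps):                      # Riemann-sum discretization of (5.12)
    arg = wdot[None, :] + eps * udot + h_tilde(u) - lam
    return np.sum(F_star(arg)) * dt - eps * theta0 @ u[:, 0] - lam

F0 = np.sum(F_star(wdot[None, :] + h_tilde(u) - lam)) * dt - lam   # (5.13)
gaps = [abs(F_eps(e) - F0) for e in (0.5, 0.05, 0.005)]
```

As expected, the gap between the ε-functional and its formal limit shrinks with ε.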
6 Preliminaries on Γ-convergence
In this section, we recall some standard results on the sequential lower semicontinuity of certain functionals and on Γ-convergence. We refer the reader to [DM93, Bra02] for a comprehensive treatment and bibliography on Γ-convergence.

Theorem 6.1 ([FL07, Thm. 5.14]). Let B be a Borel subset of R^N with finite measure, let 1 ≤ p ≤ +∞, and let f : Rd → (−∞, +∞] be a lower semicontinuous function. Assume that there exists C > 0 such that f(z) ≥ −C(1 + |z|^p) for all z ∈ Rd if 1 ≤ p < +∞, and that f is locally bounded from below if p = +∞. Then the functional

u ∈ L^p(B; Rd) ↦ ∫_B f(u(x)) dx

is sequentially lower semicontinuous with respect to the weak convergence in L^p(B; Rd) (weak star if p = +∞) if, and only if, f is convex.

Definition 6.2. Let X be a Banach space, let F : X → R, and let δ > 0. We say that x ∈ X is a δ-minimizer of F in X if

F(x) ≤ max{ inf_{y∈X} F(y) + δ, −1/δ }.
Remark 6.3. If inf_{y∈X} F(y) > −∞ and δ is small enough, then x is a δ-minimizer of F in X if, and only if, F(x) ≤ inf_{y∈X} F(y) + δ.

Theorem 6.4 ([DM93, Prop. 8.16, Thm. 7.8, and Cor. 7.20]). Let X be a reflexive Banach space endowed with its weak topology, and let {F_n}_{n∈N} be a sequence of functionals F_n : X → R, equi-coercive in the weak topology of X. Assume that there is a functional F : X → R satisfying the two following conditions:

i) for every x ∈ X and every sequence {x_n}_{n∈N} weakly converging to x in X, one has F(x) ≤ lim inf_{n→+∞} F_n(x_n);

ii) for every x ∈ X there exists a sequence {x_n}_{n∈N} weakly converging to x in X such that F(x) = lim_{n→+∞} F_n(x_n).

Then

min_{x∈X} F(x) = lim_{n→+∞} inf_{x∈X} F_n(x).

Moreover, if for each n ∈ N, x_n is a minimizer of F_n in X (or, more generally, a δ_n-minimizer, where {δ_n}_{n∈N} is a sequence of positive numbers converging to 0) and x is a cluster point of {x_n}_{n∈N}, then x is a minimizer of F in X and F(x) = lim sup_{n→+∞} F_n(x_n). If {x_n}_{n∈N} weakly converges to x in X, then x is a minimizer of F in X and F(x) = lim_{n→+∞} F_n(x_n).
Remark 6.5. In view of i), condition ii) in Theorem 6.4 may be replaced by

ii)' for every x ∈ X there exists a sequence {x_n}_{n∈N} weakly converging to x in X such that F(x) ≥ lim sup_{n→+∞} F_n(x_n).

In the language of Γ-convergence, if {F_n}_{n∈N} and F are as in Theorem 6.4, then {F_n}_{n∈N} is said to Γ-converge to F in X as n → +∞ with respect to the weak convergence in X. Condition i) is called the "liminf inequality", condition ii)' the "limsup inequality", while the sequence in condition ii) (or, equivalently, in ii)') is called "a recovery sequence".
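A classical one-dimensional illustration of the phenomenon (see, e.g., [Bra02]; the example is ours, not the paper's): F_n(x) = x² + sin(nx) does not converge pointwise, yet Γ-converges to F(x) = x² − 1, and, as Theorem 6.4 predicts for minima, inf F_n → min F = −1.

```python
import numpy as np

x = np.linspace(-2.0, 2.0, 400001)    # fine grid (step 1e-5)

def inf_Fn(n):
    # infimum of the oscillating functional F_n(x) = x^2 + sin(n x)
    return float(np.min(x**2 + np.sin(n * x)))

infima = [inf_Fn(n) for n in (10, 100, 1000)]
limit_min = -1.0                       # min of the Γ-limit x^2 - 1
```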
7 Convergence of functionals, and of their minima, associated with mean-field games

In this section, we study the asymptotic behavior as ε → 0 of the functionals in (5.12) subject to the conditions (5.5)–(5.10). The space of continuous functions is not the most appropriate one for this study, as it is not a reflexive Banach space. For this reason, we extend the functionals to the product space L^p × W_0^{1,p}, for p ∈ (1, +∞), in a natural way (see (7.2) below). In Theorem 7.1, we establish the Γ-convergence of the sequence of these functionals and, in Corollary 7.2, the convergence of the associated infima and minimizers. These two results provide a rigorous proof of the heuristics at the end of Section 5. We finish Section 7 with Remark 7.3, which relates the limit minimization problem obtained in Corollary 7.2 with the stationary setting (3.7) and (2.6).

We start by making precise the hypotheses under which the results in this section hold. We suppose that F* : Rd → R is a non-increasing, convex function, that is, F* is convex and F*(z) ≥ F*(w) whenever z = (z^1, ..., z^d) and w = (w^1, ..., w^d) ∈ Rd are such that z^i ≤ w^i for all i ∈ Id = {1, ..., d}. We further assume that h̃ : Rd × Id → R is locally Lipschitz in the first variable and such that h̃(Δ_i ·, i) is a concave function for all i ∈ Id.
Recalling that Δ_i : Rd → Rd denotes the difference operator with respect to i ∈ Id (see (2.3)), we observe that for all c > 0 there is a constant L > 0, depending only on c and d, such that

|h̃(Δ_i z, i) − h̃(Δ_i w, i)| ≤ L|z − w|   (7.1)

for all i ∈ Id, whenever z, w ∈ Rd are such that |Δ_j z|, |Δ_j w| ≤ c for all j ∈ Id. In fact, it suffices to notice that

|Δ_i z − Δ_i w| = ( Σ_{j=1}^d |(z^j − w^j) − (z^i − w^i)|² )^{1/2}
              ≤ ( Σ_{j=1, j≠i}^d 2|z^j − w^j|² + 2(d − 1)|z^i − w^i|² )^{1/2}
              ≤ 2(d − 1)|z − w|.
Finally, let h̃ : Rd → Rd be the function defined by

h̃(z) := (h̃_1(z), ..., h̃_d(z)),   with h̃_i(z) := h̃(Δ_i z, i) for all i ∈ Id,

and let R_0, M_0, and M̄_0 be (arbitrary) positive constants. For each ε > 0, we define the functional F_ε : L^p((0,1); Rd) × W_0^{1,p}(0,1) × R → (−∞, +∞] by setting

F_ε(u, w, λ) := ∫_0^1 F*( ẇ(t)1 + ε u̇(t) + h̃(u(t)) − λ1 ) dt − ε θ_0 · u(0) − λ

if u ∈ A_ψ, ‖u(·)‖ ≤ M̄_0, Σ_{i=1}^d u^i(·) = 0 L¹-a.e. in (0,1), max{ ∫_0^1 |ε u̇(t)|^p dt, ‖ẇ‖_{L∞(0,1)} } ≤ M_0, and |λ| ≤ R_0, and F_ε(u, w, λ) := +∞ otherwise,   (7.2)

where ψ ∈ Rd is such that Σ_{i=1}^d ψ^i = 0 and ‖ψ‖ ≤ M̄_0,

A_ψ := { u ∈ W^{1,p}((0,1); Rd) : u(1) = ψ },

and, we recall, 1 = (1, ..., 1) ∈ Rd and ‖z‖ = ( max_{i∈Id} z^i − min_{i∈Id} z^i )/2 for z = (z^1, ..., z^d) ∈ Rd.

Let F_0 : L^p((0,1); Rd) × W_0^{1,p}(0,1) × R → (−∞, +∞] be the functional defined by

F_0(u, w, λ) := ∫_0^1 F*( ẇ(t)1 + h̃(u(t)) − λ1 ) dt − λ

if ‖u(·)‖ ≤ M̄_0, Σ_{i=1}^d u^i(·) = 0 L¹-a.e. in (0,1), ‖ẇ‖_{L∞(0,1)} ≤ M_0, and |λ| ≤ R_0, and F_0(u, w, λ) := +∞ otherwise.   (7.3)

The following Γ-convergence result holds.
Theorem 7.1. Let F_ε, ε > 0, and F_0 be the functionals given by (7.2) and (7.3), respectively. Then the family {F_ε}_{ε>0} Γ-converges as ε → 0⁺ to F_0 with respect to the weak convergence in L^p((0,1); Rd) × W_0^{1,p}(0,1) × R.

Proof. Let {ε_n}_{n∈N} be an arbitrary sequence of positive numbers converging to zero. We proceed in two steps.

Step 1. We prove that for all {(u_n, w_n, λ_n)}_{n∈N} ⊂ L^p((0,1); Rd) × W_0^{1,p}(0,1) × R and (u, w, λ) ∈ L^p((0,1); Rd) × W_0^{1,p}(0,1) × R such that u_n ⇀ u weakly in L^p((0,1); Rd), w_n ⇀ w weakly in W_0^{1,p}(0,1), and λ_n → λ in R as n → +∞, the following inequality holds:

F_0(u, w, λ) ≤ lim inf_{n→+∞} F_{ε_n}(u_n, w_n, λ_n).   (7.4)

To prove (7.4), we may assume without loss of generality that

lim inf_{n→+∞} F_{ε_n}(u_n, w_n, λ_n) = M < +∞.

Then, for all n ∈ N, u_n ∈ A_ψ, ‖u_n(·)‖ ≤ M̄_0 and Σ_{i=1}^d u_n^i(·) = 0 L¹-a.e. in (0,1), ∫_0^1 |ε_n u̇_n(t)|^p dt ≤ M_0, ‖ẇ_n‖_{L∞(0,1)} ≤ M_0, and |λ_n| ≤ R_0. Moreover, extracting a subsequence (which we do not relabel), we may assume that

M = lim_{n→+∞} F_{ε_n}(u_n, w_n, λ_n)
  = lim_{n→+∞} [ ∫_0^1 F*( ẇ_n(t)1 + ε_n u̇_n(t) + h̃(u_n(t)) − λ_n 1 ) dt − ε_n θ_0 · u_n(0) − λ_n ],   (7.5)

ε_n u_n ⇀ 0 weakly in W^{1,p}((0,1); Rd) as n → +∞,   (7.6)

w_n ⇀* w weakly star in W^{1,∞}(0,1) as n → +∞.   (7.7)
We claim that ‖u(·)‖ ≤ M̄_0 and Σ_{i=1}^d u^i(·) = 0 L¹-a.e. in (0,1), ‖ẇ‖_{L∞(0,1)} ≤ M_0, and |λ| ≤ R_0. In fact, for all n ∈ N and for L¹-a.e. t ∈ (0,1),

M̄_0 ≥ ‖u_n(t)‖ = ( max_{i∈Id} u_n^i(t) − min_{i∈Id} u_n^i(t) ) / 2 ≥ ( u_n^j(t) − u_n^k(t) ) / 2   (7.8)

for all j, k ∈ Id. Let t_0 ∈ (0,1) be a Lebesgue point for u and let δ > 0. Using the weak convergence u_n ⇀ u in L^p((0,1); Rd) as n → +∞, we conclude from (7.8) that

(1/(2δ)) ∫_{t_0−δ}^{t_0+δ} ( u^j(t) − u^k(t) ) / 2 dt ≤ M̄_0

for all j, k ∈ Id. Thus, letting δ → 0⁺,

( u^j(t_0) − u^k(t_0) ) / 2 ≤ M̄_0.   (7.9)

Taking the maximum over j ∈ Id and then the maximum over k ∈ Id in (7.9), we conclude that

‖u(t)‖ ≤ M̄_0 for L¹-a.e. t ∈ (0,1).

On the other hand, by Theorem 6.1 applied to the real-valued convex function z ∈ Rd ↦ f(z) := | Σ_{i=1}^d z^i |^p, we get

∫_0^1 | Σ_{i=1}^d u^i(t) |^p dt ≤ lim inf_{n→+∞} ∫_0^1 | Σ_{i=1}^d u_n^i(t) |^p dt = 0.

Thus, Σ_{i=1}^d u^i(·) = 0 L¹-a.e. in (0,1). We now observe that, in view of the lower semicontinuity of the L∞-norm with respect to the weak star convergence in L∞,

‖ẇ‖_{L∞(0,1)} ≤ lim inf_{n→+∞} ‖ẇ_n‖_{L∞(0,1)} ≤ M_0.

Finally, |λ| ≤ R_0 since |λ_n| ≤ R_0 and λ_n → λ in R. Hence the claim holds, which in particular implies that

F_0(u, w, λ) = ∫_0^1 F*( ẇ(t)1 + h̃(u(t)) − λ1 ) dt − λ.

We now prove that, up to a subsequence (not relabeled),

ẇ_n(·)1 + ε_n u̇_n(·) + h̃(u_n(·)) − λ_n 1 ⇀ ẇ(·)1 + η(·) − λ1
weakly in Lp ((0, 1); Rd ) as n → +∞, for some η ∈ Lp ((0, 1); Rd ). ¯ 0 , and In view of (7.8), that for all j, k ∈ Id , |ujn (t) − ukn (t)| 2M √ we deduce ¯ 0 for L1 -a.e. t ∈ (0, 1). This, together with the Lipschitz so, |Δi un (t)| 2dM condition (7.1), yields the existence of a positive constant L, only depending on ¯ 0 and d, such that for L1 -a.e. t ∈ (0, 1), M ˜ n (t)) − h(0)| ˜ ˜ ˜ ˜ n (t))| |h(u + |h(0)| L|un (t)| + |h(0)|. |h(u Hence
1
sup n∈N
0
˜ n (t))|p dt C |h(u
Therefore, (up to a not relabeled subsequence) for some positive constant C. ˜ n (·)) η(·) h(u
(7.11)
weakly in Lp ((0, 1); Rd ) as n → +∞, for some η ∈ Lp ((0, 1); Rd ), which, together with convergences λn → λ in R, wn w weakly in W01,p (0, 1), and (7.6), proves (7.10). We observe further that due to the continuity of the trace operator, we have that (7.12) n un (0) → 0 in Rd as n → +∞. We now prove that L1 -a.e. in (0, 1) and for all i ∈ Id , η i (·) hi (u(·)).
(7.13)
˜ and let δ > 0. In fact, let t0 ∈ (0, 1) be a Lebesgue point for η and h(u), Being hi a real-valued concave function, it is, in particular, continuous and thus bounded from above by an affine function (see, e.g., [FL07, Prop. 4.75]). Hence,
13
˜ n (·)) η(·) weakly in Lp ((0, 1); Rd ) as in view of convergences un u and h(u n → +∞, Theorem 6.1 implies that t0 +δ t0 +δ t0 +δ 1 1 1 η i (t) dt = lim sup hi (un (t)) dt hi (u(t)) dt, 2δ t0 −δ 2δ t0 −δ n→+∞ 2δ t0 −δ from which we conclude (7.13) by letting δ → 0+ . We finally observe that since F ∗ is a real-valued convex function, it is bounded from below by an affine function. Therefore, Theorem 6.1 and (7.10) yield 1
˙ + η(t) − λ1 dt F ∗ w(t)1 0
lim inf
n→+∞
1 0
˜ n (t)) − λn 1 dt, F ∗ w˙ n (t)1 + n u˙ n (t) + h(u
which, together with the hypothesis that F ∗ is non-increasing, (7.13), (7.12), and the convergence λn → λ in R, concludes Step 1. Step 2. We prove that for all (u, w, λ) ∈ Lp ((0, 1); Rd ) × W01,p (0, 1) × R there exists a sequence {(un , wn , λn )}n∈N ⊂ Lp ((0, 1); Rd ) × W01,p (0, 1) × R such that un u weakly in Lp ((0, 1); Rd ), wn w weakly in W01,p (0, 1), and λn → λ in R as n → +∞, and such that F0 (u, w, λ) lim sup Fn (un , wn , λn ).
(7.14)
n→+∞
Let (u, w, λ) ∈ Lp ((0, 1); Rd ) × W01,p (0, 1) × R be given. The only nontrivial ¯ 0 and d ui (·) = 0 L1 -a.e. in (0, 1), case is the case in which u(·) M i=1 w ˙ L∞ (0,1) M0 , and |λ| R0 , otherwise it suffices to define (un , wn , λn ) := (u, w, λ) for all n ∈ N. Fix any such (nontrivial) triplet (u, w, λ). Let ρ ∈ Cc∞ (R) be the function defined by − 1 if t ∈ (−1, 1), c e t2 −1 ρ(t) := 0 if t ∈ R\(−1, 1), where c > 0 is such that R ρ(t) dt = 1. Let cu,ρ := ρ C 1 (R) u L1 ((0,1);Rd ) ∈ R+ . Substep 2.1. We construct a sequence {un }n∈N ⊂ Lp ((0, 1); Rd ) satisfying the following conditions: un → u in Lp ((0, 1); Rd ) as n → +∞, √ 4 n un L∞ ((0,1);Rd ) cu,ρ , √ n u˙ n L∞ ((0,1);Rd ) cu,ρ , d
uin (·) = 0 L1 -a.e. in (0, 1),
(7.15) (7.16) (7.17) (7.18)
i=1
¯ 0 L1 -a.e. in (0, 1). un (·) M (7.19) √ For each n ∈ N set δn := 4 n , and define the standard smooth mollifier ρδn ∈ Cc∞ (R) by setting 1 t . ρ ρδn (t) := δn δ n 14
Observe that R ρδn (t) dt = 1, supp ρδn ⊂ (−δn , δn ), ρδn 0, and ρδn (−t) = ρδn (t) for all t ∈ R. Extend u by zero outside (0, 1) and define for t ∈ R, vn (t) := u(t)χJn (t), with Jn := [2δn , 1 − 2δn ],
and un (t) := (ρδn ∗ vn )(t) =
R
vn (s) ρδn (t − s) ds.
We claim that {un }n∈N satisfies (7.15)–(7.19). Since supp ρδn ⊂ (−δn , δn ), we have that supp un ⊂ [δn , 1 − δn ]. By well-known results on mollification, un ∈ W 1,p (R; Rd ) ∩ Cc∞ (R; Rd ), ρδn ∗ u → u in Lp (R; Rd ) as n → +∞, and ρδn ∗ (vn − u) Lp (R;Rd ) vn − u Lp (R;Rd ) , while, by Lebesgue Dominated Convergence Theorem, together with the fact that Jn ⊂ Jn+1 for all n ∈ N, and ∪n∈N Jn = (0, 1), vn = uχJn → u in Lp (R; Rd ) as n → +∞. Thus, using in addition Minkowski’s Inequality, we conclude that lim un − u Lp ((0,1);Rd )
n→+∞
= lim un − u Lp (R;Rd ) n→+∞
lim ρδn ∗ vn − ρδn ∗ u Lp (R;Rd ) + ρδn ∗ u − u Lp (R;Rd ) n→+∞
lim vn − u Lp (R;Rd ) + ρδn ∗ u − u Lp (R;Rd ) n→+∞
= 0, which proves (7.15). We now verify that (7.16) and (7.17) are also satisfied. In fact, we have that 1−2δn 1 1 t − s vn (s) ds ρ sup ρ(t) |u(s)| ds sup |un (t)| = sup δn δn t∈R t∈(0,1) t∈(0,1) R δn 2δn
1 cu,ρ , δn
and, using Lebesgue Dominated Convergence Theorem, 1−2δn 1 t − s 1 sup |ρ(t)| ρ ˙ (s) ds ˙ |u(s)| ds sup |u˙ n (t)| = sup v n δ 2 t∈R 2 δn t∈(0,1) t∈(0,1) R δn 2δn n
1 cu,ρ , δn2
√ which, recalling that δn = 4 n , yields (7.16) and (7.17). Finally, we show that (7.18) and (7.19) also hold. We have that for all t ∈ R, d i=1
uin (t) =
d R
vni (s) ρδn (t − s) ds =
i=1
15
1−2δn 2δn
d i=1
ui (s) ρδn (t − s) ds = 0
which proves (7.18). On the other hand, for all i, j ∈ Id and for all t ∈ (0, 1), we have that i uin (t) − ujn (t) vn (s) − vnj (s) = ρδn (t − s) ds 2 2 R 1−2δn i u (s) − uj (s) = ρδn (t − s) ds 2 2δn 1−2δn ¯ 0, ¯0 ρδn (t − s) ds M M 2δn
from which we obtain (7.19) by taking the maximum over i ∈ Id and then the maximum over j ∈ Id . Substep 2.2. We prove that the sequence constructed in Substep 2.1 is such that 1
˜ ˙ + h(u(t)) − λ1 dt F ∗ w(t)1 0 (7.20) 1
˜ n (t)) − λ1 dt. F ∗ w(t)1 ˙ + n u˙ n (t) + h(u = lim n→+∞
0
By the local Lipschitz continuity of $\tilde h$, together with the fact that $\|u(\cdot)\|$, $\|u_n(\cdot)\| \le \bar M_0$ $\mathcal{L}^1$-a.e. in $(0,1)$ for all $n\in\mathbb{N}$, we can find a positive constant $c$, only depending on $\bar M_0$ and $d$, such that for all $n\in\mathbb{N}$ one has
\[
\sup_{t\in(0,1)} \Big( |\tilde h(u_n(t))| + |\tilde h(u(t))| \Big) \le c.
\]
Using in addition (7.17), there is a positive constant $\tilde c$, only depending on $R_0$, $M_0$, $\bar M_0$, and $d$, such that
\[
\sup_{n\in\mathbb{N}} \Big( \|\dot w\|_{L^\infty(0,1)} + \epsilon_n \|\dot u_n\|_{L^\infty((0,1);\mathbb{R}^d)} + \|\tilde h(u_n)\|_{L^\infty((0,1);\mathbb{R}^d)} + \|\tilde h(u)\|_{L^\infty((0,1);\mathbb{R}^d)} + |\lambda| \Big) \le \tilde c.
\]
In view of the local Lipschitz continuity of $F^*$, we can find another constant $\bar c$, only depending on $\tilde c$, such that for $\mathcal{L}^1$-a.e. $t\in(0,1)$,
\[
\Big| F^*\big(\dot w(t)\mathbf{1} + \epsilon_n \dot u_n(t) + \tilde h(u_n(t)) - \lambda\mathbf{1}\big) - F^*\big(\dot w(t)\mathbf{1} + \tilde h(u(t)) - \lambda\mathbf{1}\big) \Big| \le \bar c\, \big| \epsilon_n \dot u_n(t) + \tilde h(u_n(t)) - \tilde h(u(t)) \big| \le \bar c\, \big( \epsilon_n |\dot u_n(t)| + L\,|u_n(t) - u(t)| \big),
\]
where in the last inequality we used (7.1). By (7.17) and (7.15), we conclude that
\[
\lim_{n\to+\infty} \int_0^1 \Big| F^*\big(\dot w(t)\mathbf{1} + \epsilon_n \dot u_n(t) + \tilde h(u_n(t)) - \lambda\mathbf{1}\big) - F^*\big(\dot w(t)\mathbf{1} + \tilde h(u(t)) - \lambda\mathbf{1}\big) \Big|\,dt = 0,
\]
which yields (7.20).

Substep 2.3. We establish (7.14).
Define $w_n := w$ and $\lambda_n := \lambda$ for all $n\in\mathbb{N}$, and let $\{u_n\}_{n\in\mathbb{N}}$ be the sequence constructed in Substep 2.1. For each $n\in\mathbb{N}$, set $\delta_n := \sqrt{\epsilon_n}$, and let $\varphi_n \in C_c^\infty(\mathbb{R};[0,1])$ be a smooth cut-off function such that
\[
\begin{cases}
\varphi_n = 1 & \text{in } [0, 1-2\delta_n],\\
\varphi_n = 0 & \text{in } [1-\delta_n, +\infty),\\
\|\dot\varphi_n\|_{L^\infty(\mathbb{R})} \le \dfrac{2}{\delta_n}.
\end{cases}
\]
Now we define
\[
v_n(t) := u_n(t)\varphi_n(t) + (1-\varphi_n(t))\psi, \qquad t\in(0,1),\ n\in\mathbb{N}.
\]
We have that, for all $n\in\mathbb{N}$, $v_n \in W^{1,p}((0,1);\mathbb{R}^d) \cap C^\infty([0,1];\mathbb{R}^d)$ and $v_n(1) = \psi$, and so $v_n \in A_\psi$. Moreover,
\[
v_n \to u \ \text{in } L^p((0,1);\mathbb{R}^d) \text{ as } n\to+\infty, \tag{7.21}
\]
due to (7.15) and to the pointwise convergence $\varphi_n \to 1$ in $(0,1)$, together with the Lebesgue Dominated Convergence Theorem. Also,
\[
\sum_{i=1}^d v_n^i(t) = \varphi_n(t) \sum_{i=1}^d u_n^i(t) + (1-\varphi_n(t)) \sum_{i=1}^d \psi^i = 0,
\]
where we used the fact that $\sum_{i=1}^d \psi^i = 0$ and $\sum_{i=1}^d u_n^i(\cdot) = 0$ $\mathcal{L}^1$-a.e. in $(0,1)$. Furthermore, $\dot v_n(t) = \dot u_n(t)\varphi_n(t) + \dot\varphi_n(t)(u_n(t) - \psi)$, and so, by (7.16) and (7.17),
\[
\epsilon_n \|\dot v_n\|_{L^\infty((0,1);\mathbb{R}^d)} \le \sqrt{\epsilon_n}\, c_{u,\rho} + 2\sqrt[4]{\epsilon_n}\, c_{u,\rho} + 2\sqrt{\epsilon_n}\,|\psi|.
\]
Thus, for all $n\in\mathbb{N}$ large enough, we have that
\[
\int_0^1 |\epsilon_n \dot v_n(t)|^p\,dt \le M_0.
\]
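For the reader's convenience, the three terms in the preceding $L^\infty$ bound can be traced back to the product rule; a sketch, using the Substep 2.1 bounds $\|u_n\|_{L^\infty} \le c_{u,\rho}\,\epsilon_n^{-1/4}$ and $\|\dot u_n\|_{L^\infty} \le c_{u,\rho}\,\epsilon_n^{-1/2}$ (there the mollification scale was $\sqrt[4]{\epsilon_n}$, while here $\delta_n = \sqrt{\epsilon_n}$) together with $\|\dot\varphi_n\|_{L^\infty} \le 2/\sqrt{\epsilon_n}$:

```latex
\epsilon_n |\dot v_n(t)|
  \le \epsilon_n |\dot u_n(t)|\,\varphi_n(t)
      + \epsilon_n |\dot\varphi_n(t)|\,\big(|u_n(t)| + |\psi|\big)
  \le \epsilon_n \frac{c_{u,\rho}}{\sqrt{\epsilon_n}}
      + \frac{2\epsilon_n}{\sqrt{\epsilon_n}}\Big(\frac{c_{u,\rho}}{\sqrt[4]{\epsilon_n}} + |\psi|\Big)
  = \sqrt{\epsilon_n}\,c_{u,\rho} + 2\sqrt[4]{\epsilon_n}\,c_{u,\rho} + 2\sqrt{\epsilon_n}\,|\psi|.
```

In particular, each term vanishes as $n\to+\infty$, which is why the $L^p$ bound by $M_0$ holds for all $n$ large enough.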
In particular, $\epsilon_n v_n \rightharpoonup 0$ weakly in $W^{1,p}((0,1);\mathbb{R}^d)$ as $n\to+\infty$. Consequently, by the continuity of the trace operator, $\epsilon_n v_n(0) \to 0$ in $\mathbb{R}^d$ as $n\to+\infty$. We observe further that, since $0 \le \varphi_n \le 1$ and $\|\psi\| \le \bar M_0$, and due to (7.19), we get, for all $i,j \in I_d$, $t\in(0,1)$, and $n\in\mathbb{N}$,
\[
\frac{v_n^i(t) - v_n^j(t)}{2} = \varphi_n(t)\,\frac{u_n^i(t) - u_n^j(t)}{2} + (1-\varphi_n(t))\,\frac{\psi^i - \psi^j}{2} \le \varphi_n(t)\bar M_0 + (1-\varphi_n(t))\bar M_0 = \bar M_0.
\]
Arguing as before, we obtain $\|v_n(\cdot)\| \le \bar M_0$ $\mathcal{L}^1$-a.e. in $(0,1)$. Consequently,
\[
\limsup_{n\to+\infty} F_{\epsilon_n}(v_n, w_n, \lambda_n) = \limsup_{n\to+\infty} \bigg( \int_0^1 F^*\big(\dot w(t)\mathbf{1} + \epsilon_n \dot v_n(t) + \tilde h(v_n(t)) - \lambda\mathbf{1}\big)\,dt - \epsilon_n \theta_0 \cdot v_n(0) - \lambda \bigg) = \limsup_{n\to+\infty} \int_0^1 F^*\big(\dot w(t)\mathbf{1} + \epsilon_n \dot v_n(t) + \tilde h(v_n(t)) - \lambda\mathbf{1}\big)\,dt - \lambda. \tag{7.22}
\]
Finally, we observe that
\[
\int_0^1 F^*\big(\dot w(t)\mathbf{1} + \epsilon_n \dot v_n(t) + \tilde h(v_n(t)) - \lambda\mathbf{1}\big)\,dt = \int_0^1 F^*\big(\dot w(t)\mathbf{1} + \epsilon_n \dot u_n(t) + \tilde h(u_n(t)) - \lambda\mathbf{1}\big)\,dt + E_n, \tag{7.23}
\]
where
\[
E_n := -\int_{1-2\delta_n}^1 F^*\big(\dot w(t)\mathbf{1} + \epsilon_n \dot u_n(t) + \tilde h(u_n(t)) - \lambda\mathbf{1}\big)\,dt + \int_{1-2\delta_n}^1 F^*\big(\dot w(t)\mathbf{1} + \epsilon_n \dot v_n(t) + \tilde h(v_n(t)) - \lambda\mathbf{1}\big)\,dt,
\]
so that, taking into account the local Lipschitz continuity of $F^*$ and $\tilde h$, the bounds satisfied by $\{u_n\}_{n\in\mathbb{N}}$ and $\{v_n\}_{n\in\mathbb{N}}$, and arguing as in Substep 2.2, we obtain
\[
|E_n| \le \bar c_1 \int_{1-2\delta_n}^1 dt = 2\bar c_1 \delta_n = 2\bar c_1 \sqrt{\epsilon_n},
\]
with $\bar c_1 \in \mathbb{R}^+$ independent of $n\in\mathbb{N}$. Thus, $|E_n| \to 0$ as $n\to+\infty$, which, together with (7.22), (7.23), (7.20), and (7.21), concludes the proof of Substep 2.3 and of Theorem 7.1.

Corollary 7.2. For each $\epsilon > 0$, let $G_\epsilon : W^{1,p}((0,1);\mathbb{R}^d) \times W_0^{1,p}(0,1) \times \mathbb{R} \to \mathbb{R}$ and $G_0 : L^p((0,1);\mathbb{R}^d) \times W_0^{1,p}(0,1) \times \mathbb{R} \to \mathbb{R}$ be the functionals defined by
\[
G_\epsilon(u,w,\lambda) := \int_0^1 F^*\big(\dot w(t)\mathbf{1} + \epsilon\dot u(t) + \tilde h(u(t)) - \lambda\mathbf{1}\big)\,dt - \epsilon\theta_0 \cdot u(0) - \lambda,
\]
for $(u,w,\lambda) \in W^{1,p}((0,1);\mathbb{R}^d) \times W_0^{1,p}(0,1) \times \mathbb{R}$, and
\[
G_0(u,w,\lambda) := \int_0^1 F^*\big(\dot w(t)\mathbf{1} + \tilde h(u(t)) - \lambda\mathbf{1}\big)\,dt - \lambda,
\]
for $(u,w,\lambda) \in L^p((0,1);\mathbb{R}^d) \times W_0^{1,p}(0,1) \times \mathbb{R}$, respectively. Consider also the sets
\[
\Phi_\epsilon := \bigg\{ (u,w,\lambda) \in W^{1,p}((0,1);\mathbb{R}^d) \times W_0^{1,p}(0,1) \times \mathbb{R} :\ \|u(\cdot)\| \le \bar M_0 \text{ and } \sum_{i=1}^d u^i(\cdot) = 0\ \mathcal{L}^1\text{-a.e. in } (0,1),\ u(1) = \psi,\ \max\bigg\{ \int_0^1 |\epsilon\dot u(t)|^p\,dt,\ \|\dot w\|_{L^\infty(0,1)} \bigg\} \le M_0,\ |\lambda| \le R_0 \bigg\}
\]
and
\[
\Phi_0 := \bigg\{ (u,w,\lambda) \in L^p((0,1);\mathbb{R}^d) \times W_0^{1,p}(0,1) \times \mathbb{R} :\ \|u(\cdot)\| \le \bar M_0 \text{ and } \sum_{i=1}^d u^i(\cdot) = 0\ \mathcal{L}^1\text{-a.e. in } (0,1),\ \|\dot w\|_{L^\infty(0,1)} \le M_0,\ |\lambda| \le R_0 \bigg\}.
\]
Then
\[
\min_{(u,w,\lambda)\in\Phi_0} G_0(u,w,\lambda) = \lim_{\epsilon\to 0^+}\ \inf_{(u,w,\lambda)\in\Phi_\epsilon} G_\epsilon(u,w,\lambda).
\]
Moreover, if for each $\epsilon > 0$, $(u_\epsilon, w_\epsilon, \lambda_\epsilon)$ is a minimizer of $G_\epsilon$ in $\Phi_\epsilon$ (or, more generally, a $\delta_\epsilon$-minimizer, where $\{\delta_\epsilon\}_{\epsilon>0}$ is a sequence of positive numbers converging to $0$) and $(u,w,\lambda)$ is a cluster point of $\{(u_\epsilon,w_\epsilon,\lambda_\epsilon)\}_{\epsilon>0}$, then $(u,w,\lambda)$ is a minimizer of $G_0$ in $\Phi_0$, and
\[
G_0(u,w,\lambda) = \limsup_{\epsilon\to0^+} G_\epsilon(u_\epsilon, w_\epsilon, \lambda_\epsilon).
\]
If $\{(u_\epsilon,w_\epsilon,\lambda_\epsilon)\}_{\epsilon>0}$ converges weakly to $(u,w,\lambda)$ in $L^p((0,1);\mathbb{R}^d) \times W_0^{1,p}(0,1) \times \mathbb{R}$, then $(u,w,\lambda)$ is a minimizer of $G_0$ in $\Phi_0$, and
\[
G_0(u,w,\lambda) = \lim_{\epsilon\to0^+} G_\epsilon(u_\epsilon, w_\epsilon, \lambda_\epsilon).
\]
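The structure of the statement mirrors the fundamental theorem of Γ-convergence, which we recall schematically (classical background, not a result specific to this paper; $X$ denotes the common ambient space):

```latex
% Fundamental theorem of \Gamma-convergence (classical background):
\left.
\begin{aligned}
 &F_\epsilon \xrightarrow{\ \Gamma\ } F_0 \ \text{in } X,\\
 &\{F_\epsilon\}_{\epsilon>0} \ \text{equi-coercive in } X
\end{aligned}
\right\}
\ \Longrightarrow\
\min_X F_0 = \lim_{\epsilon\to 0^+} \inf_X F_\epsilon,
```

and, in addition, every cluster point of a family of $\delta_\epsilon$-minimizers of $F_\epsilon$, with $\delta_\epsilon \to 0$, is a minimizer of $F_0$.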
Proof. Let $F_\epsilon$, $\epsilon > 0$, and $F_0$ be the functionals given by (7.2) and (7.3), respectively. If we prove that $\{F_\epsilon\}_{\epsilon>0}$ is equi-coercive in the weak topology of $L^p((0,1);\mathbb{R}^d) \times W_0^{1,p}(0,1) \times \mathbb{R}$, then Corollary 7.2 is an immediate consequence of Theorems 6.4 and 7.1, observing that
\[
\min_{(u,w,\lambda)\in L^p((0,1);\mathbb{R}^d) \times W_0^{1,p}(0,1) \times \mathbb{R}} F_0(u,w,\lambda) = \min_{(u,w,\lambda)\in\Phi_0} G_0(u,w,\lambda), \qquad \inf_{(u,w,\lambda)\in L^p((0,1);\mathbb{R}^d) \times W_0^{1,p}(0,1) \times \mathbb{R}} F_\epsilon(u,w,\lambda) = \inf_{(u,w,\lambda)\in\Phi_\epsilon} G_\epsilon(u,w,\lambda).
\]
Fix $s \in \mathbb{R}$. We claim that there exists a constant $C > 0$, only depending on $R_0$, $M_0$, $\bar M_0$, and $d$, such that for all $\epsilon > 0$ one has
\[
\big\{ (u,w,\lambda) \in L^p((0,1);\mathbb{R}^d) \times W_0^{1,p}(0,1) \times \mathbb{R} :\ F_\epsilon(u,w,\lambda) \le s \big\} \subset \big\{ (u,w,\lambda) \in L^p((0,1);\mathbb{R}^d) \times W_0^{1,p}(0,1) \times \mathbb{R} :\ \|u\|_{L^p((0,1);\mathbb{R}^d)} + \|w\|_{W_0^{1,p}(0,1)} + |\lambda| \le C \big\}.
\]
In fact, if $F_\epsilon(u,w,\lambda) \le s$, then, in particular, $|\lambda| \le R_0$, $\|w\|^p_{W_0^{1,p}(0,1)} \le 2M_0^p$, and, $\mathcal{L}^1$-a.e. in $(0,1)$,
\[
\|u(\cdot)\| \le \bar M_0 \qquad\text{and}\qquad \sum_{i=1}^d u^i(t) = 0.
\]
Thus, for all $j \in I_d$,
\[
\int_0^1 |d\,u^j(t)|^p\,dt = \int_0^1 \bigg| \sum_{i=1}^d u^i(t) + \sum_{i=1}^d \big(u^j(t) - u^i(t)\big) \bigg|^p dt \le 2^{p-1} \int_0^1 \bigg|\sum_{i=1}^d u^i(t)\bigg|^p dt + 2^{p-1} \int_0^1 \bigg|\sum_{i=1}^d \big(u^j(t) - u^i(t)\big)\bigg|^p dt \le 2^{2(p-1)}\, d^{p+1}\, \bar M_0^p,
\]
from which, together with the estimates $|\lambda| \le R_0$ and $\|w\|^p_{W_0^{1,p}(0,1)} \le 2M_0^p$, the claim easily follows, which, consequently, finishes the proof of Corollary 7.2.

Remark 7.3. Let $G_0$ and $\Phi_0$ be as in Corollary 7.2. Then
\[
\min\big\{ G_0(u,w,\lambda) :\ (u,w,\lambda) \in \Phi_0 \big\} = \min\bigg\{ F^*\big(\tilde h(\Delta_\cdot v, \cdot) - \lambda\mathbf{1}\big) - \lambda :\ v \in \mathbb{R}^d,\ \|v\| \le \bar M_0,\ \sum_{i=1}^d v^i = 0,\ |\lambda| \le R_0 \bigg\}, \tag{7.24}
\]
which corresponds to the minimization over $|\lambda| \le R_0$ of problem (3.7). As we have seen at the end of Section 3, minimizers of the latter provide stationary solutions in the sense of Definition 2.1.

To prove (7.24), we start by noticing that, taking $|\lambda| \le R_0$, $w \equiv 0$, and a constant $u \in \mathbb{R}^d$ such that $\|u\| \le \bar M_0$ and $\sum_{i=1}^d u^i = 0$, we conclude that the minimum on the left-hand side of (7.24) is less than or equal to the minimum on the right-hand side of (7.24).

Conversely, fix $(u,w,\lambda) \in \Phi_0$. Then $\int_0^1 \dot w(t)\,dt = w(1) - w(0) = 0$. Moreover, setting $v := \int_0^1 u(t)\,dt \in \mathbb{R}^d$, we have $\sum_{i=1}^d v^i = 0$ and, arguing as in Theorem 7.1, $\|v\| \le \bar M_0$. Hence, using Jensen's inequality twice, and recalling that $F^*$ is convex and non-increasing while $\tilde h$ is componentwise concave, we deduce that
\[
G_0(u,w,\lambda) = \int_0^1 F^*\big(\dot w(t)\mathbf{1} + \tilde h(u(t)) - \lambda\mathbf{1}\big)\,dt - \lambda \ge F^*\bigg( \int_0^1 \tilde h(u(t))\,dt - \lambda\mathbf{1} \bigg) - \lambda \ge F^*\big( \tilde h(v) - \lambda\mathbf{1} \big) - \lambda \ge \min\bigg\{ F^*\big(\tilde h(\Delta_\cdot v,\cdot) - \lambda\mathbf{1}\big) - \lambda :\ v\in\mathbb{R}^d,\ \|v\| \le \bar M_0,\ \sum_{i=1}^d v^i = 0,\ |\lambda| \le R_0 \bigg\},
\]
from which the conclusion follows by taking the infimum over $(u,w,\lambda) \in \Phi_0$.

Assume now that $F^*$ is strictly convex and that $\tilde h$ is strictly concave in $\mathbb{R}^d\backslash\mathbb{R}$; that is, for all $0 < \mu < 1$,
\[
\tilde h\big(\mu\Delta_i u + (1-\mu)\Delta_i v, i\big) = \mu\,\tilde h(\Delta_i u, i) + (1-\mu)\,\tilde h(\Delta_i v, i)
\]
implies $u = v + k\mathbf{1}$ for some $k \in \mathbb{R}$. Using Jensen's inequality once again, we conclude that any solution $(u,w,\lambda) \in \Phi_0$ of
\[
\min\big\{ G_0(u,w,\lambda) :\ (u,w,\lambda) \in \Phi_0 \big\}
\]
is such that $(w,u)$ does not depend on time. Thus, in this setting, Corollary 7.2 establishes in addition the convergence of solutions of (5.1)–(5.2) to stationary solutions in the sense of Definition 2.1.
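The time-independence claim can be unpacked as follows; this is a sketch of the standard equality-case argument in Jensen's inequality, under the strict convexity and strict concavity assumptions above. For a minimizer, the chain of inequalities in the proof of (7.24) consists of equalities, and equality in Jensen's inequality for the strictly convex $F^*$ forces the integrand to be $\mathcal{L}^1$-a.e. constant:

```latex
% Equality case of Jensen's inequality for the strictly convex F^*:
\dot w(t)\,\mathbf{1} + \tilde h(u(t))
   = \int_0^1 \big( \dot w(s)\,\mathbf{1} + \tilde h(u(s)) \big)\,ds
   \qquad \text{for } \mathcal{L}^1\text{-a.e. } t \in (0,1).
```

Combined with the strict concavity of $\tilde h$ up to additive multiples of $\mathbf{1}$ and the constraints $\sum_{i=1}^d u^i(\cdot) = 0$ and $w \in W_0^{1,p}(0,1)$ with $\int_0^1 \dot w\,dt = 0$, this yields that $u$ is a.e. constant and $w \equiv 0$, so $(w,u)$ is time-independent.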