Continuous-time limit of repeated interactions for a system in a confining potential✩

Stochastic Processes and their Applications 125 (2015) 327–342

Julien Deschamps
Università degli Studi di Genova, Dipartimento di Matematica, Via Dodecaneso 35, 16146 Genova, Italy

Received 5 March 2014; received in revised form 28 July 2014; accepted 20 August 2014; available online 3 September 2014

Abstract

We study the continuous-time limit of a class of Markov chains coming from the evolution of classical open systems undergoing repeated interactions. This repeated interaction model was initially developed for dissipative quantum systems in Attal and Pautrat (2006) and was recently set up for the first time in Deschamps (2012) for classical dynamics. It was shown in the latter, in particular, that this scheme furnishes a new kind of Markovian evolution based on Hamilton's equations of motion. The system is also proved to evolve, in the continuous-time limit, according to a stochastic differential equation. Here we extend the convergence of the evolution of the system to more general dynamics, that is, to more general Hamiltonians and probability measures in the definition of the model. We also present a natural way to directly renormalize the initial Hamiltonian in order to obtain the relevant process in the study of the continuous-time limit. Then, even though Hamilton's equations have no explicit solution in general, we obtain some bounds on the dynamics allowing us to prove the convergence in law of the Markov chain on the system to the solution of a stochastic differential equation, via the infinitesimal generators.

© 2014 Elsevier B.V. All rights reserved.

Keywords: Stochastic differential equations; Relativistic diffusion processes; Infinitesimal generators; Classical open systems; Hamiltonian systems; Repeated interactions

1. Introduction

In order to study open quantum systems, that is, quantum physical systems in interaction with an environment, the repeated interaction scheme was first introduced in [2]. This setup has the advantage of furnishing toy models for dissipative systems such as quantum baths (see [4,5] for instance). Moreover, it corresponds to physical experiments, such as Haroche's experiments on photons in a cavity presented in [9,10].

The repeated interaction scheme was developed for the first time for classical systems in [7], using a mathematical framework based on deterministic dilations of Markov chains and stochastic differential equations described in [1]. The main idea in the latter is that all Markov processes are obtained from deterministic dynamical systems on a product of two spaces when one of the two components is ignored. This mathematical approach is therefore relevant for a description of open classical systems, in which we usually do not have access to all the information on the environment. However, even if the first motivation and the point of view taken in this article are physical, the repeated interaction setup, as described in [1], turns out not to be restricted to physical dynamics, since it more generally proposes another way to understand Markovian evolutions.

These repeated interactions can be described more precisely as follows. We consider a system in interaction with a large environment. The latter is regarded as an infinite assembly of small identical pieces which act independently, one after the other, on the system during a small time step h. The state of each piece of the environment is randomly sampled from a probability measure representing a lack of knowledge of the environment, which could arise from the inaccessibility of measurements of the environment or the impossibility of describing it completely. The advantage of this discrete-time model is that each interaction between the system and a piece of the environment is quite general, and can be explicitly described by a full Hamiltonian for instance, while the evolution of the system has a Markovian behavior facilitating the study of the dynamics.

✩ Work supported by ANR project "HAM-MARK", N° ANR-09-BLAN-0098-01.
E-mail address: [email protected]. doi:10.1016/j.spa.2014.08.006. © 2014 Elsevier B.V. All rights reserved.
A large class of Markov chains emerges in this model, depending on the considered interaction and the probability measure on the environment. In this article we focus on a usual physical interaction, namely a system in a confining potential, for which the Hamiltonian is of the form

$$H(p, q, P, Q) = \frac{\|p\|^2}{2} + V(q) + \frac{\|P\|^2}{2} + W(Q) + \eta(q)\,\beta(Q), \qquad (1)$$

where q, p, Q and P are, respectively, the position and the momentum of the system and of each piece of the environment. In this operator, the map V represents the confining potential, the term $\|P\|^2/2 + W(Q)$ is the free dynamics of a part of the environment, that is, without interaction, and $\eta(q)\beta(Q)$ is the coupling term. Such an interaction can be found in the literature and is similar to the one studied in [6] for instance.

In this model we are more particularly interested in the continuous-time limit of these repeated interactions, that is, the limit process giving the evolution of the system when the interaction time h goes to 0. In the context of quantum systems, S. Attal and Y. Pautrat have shown in [2] that this model gives rise to the usual quantum Langevin equations. For some classical systems, the limit evolution is proved in [7] to converge almost surely and in $L^p$ to the solution of a stochastic differential equation. However, in [7], the systems are seen as dynamical systems in order to follow the description proposed by [1]. But this point of view requires some restrictions on the model: the state of each piece of the environment has to be sampled from a Gaussian measure, and the limit stochastic differential equation has locally Lipschitz and linearly bounded coefficients corresponding to "linear" interactions between the system and the environment. We here propose a different description of the repeated interaction scheme to free ourselves from these restrictions.

Under some assumptions and after renormalizing the Hamiltonian H, we then prove the convergence in law of the Markov chain giving the dynamics of the system to the solution of the stochastic differential equation

$$\begin{cases} dq_t = p_t\, dt \\ dp_t = \left[ -\nabla V(q_t) - m\,\nabla\eta(q_t) \right] dt - \sigma\,\nabla\eta(q_t)\, dW_t, \end{cases} \qquad (2)$$

where $(W_t)_{t\in\mathbb{R}^+}$ is a 1-dimensional Brownian motion and where m and $\sigma^2$ are respectively the mean and the variance of $\beta(Q)$ with respect to the probability measure on the environment. This convergence is mainly based on some approximations of the solution of Hamilton's equations, since these equations have no explicit solution in general.

This article is structured as follows. Section 2 is devoted to a description of the repeated interaction scheme different from the ones in [1,7]. In Section 3 we focus on the model of a system in a confining potential represented by (1). After a presentation of the assumptions on the system, we describe a natural way to renormalize the initial Hamiltonian in order to study the continuous-time limit of these repeated interactions. Then we show the convergence in law of the dynamics to the unique solution of the stochastic differential equation (2).

2. Classical repeated interactions

The classical repeated interaction setup is a model for the dynamics of a "small" system undergoing interactions from a "large" environment. In this framework, the environment is assumed to be made up of an infinite number of identical pieces acting one after the other on the system during a time step h. Moreover, each interaction is made independently from the others.

Mathematically this setup can be described as follows. Since we consider Hamiltonian systems in this article, the state of the small system is represented by $x = (q, p)$ in $\mathbb{R}^{2d}$, where q and p are respectively the position and the momentum of the small system. The state of each part of the environment, $Y_{nh} = (Q(nh), P(nh))$, is sampled from a fixed probability measure μ on $\mathbb{R}^{2m}$ in an independent way. This randomness represents a lack of knowledge of the environment. The state of the whole environment is then given by the sequence $Y = (Y_{nh})_{n\in\mathbb{N}}$. Now the system interacts with the environment in the following way.
At each time nh, the system, whose state is $X^h_{nh} = (q(nh), p(nh))$, starts to interact with the piece of the environment in the state $Y_{nh}$ during a time h. At time (n + 1)h, the system stops interacting with this piece; it is then coupled to the next part of the environment, in the state $Y_{(n+1)h}$, and we resume. Since the parts of the environment are identical (even if they are in different states), we assume the existence of a map $U^{(h)}$ giving the state of the system after one interaction. More precisely, for a state x of the system and a state y of the piece of the environment before the interaction, the system is, after a time h, in the state $U^{(h)}(x, y)$. The evolution of the system is then represented by the Markov chain $(X^h_{nh})_{n\in\mathbb{N}}$ satisfying the equality

$$X^h_{(n+1)h}(Y) = U^{(h)}\big(X^h_{nh}(Y),\, Y_{nh}\big). \qquad (3)$$

Note that the interaction given by $U^{(h)}$ is deterministic and that randomness arises only from the state of the piece of the environment at the beginning of each interaction. As explained in [1], the Markov chain $(X^h_{nh})_{n\in\mathbb{N}}$ is associated with the Markov transition kernel $\Pi_h$ defined for all x in $\mathbb{R}^{2d}$ by $\Pi_h(x, \cdot) = U^{(h)}(x, \cdot)_\# \mu$, the image of μ under the mapping $U^{(h)}(x, \cdot)$.
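As a concrete, purely illustrative sketch, the scheme above can be simulated once a one-step map and an environment law are supplied. The particular `U_h` and Gaussian `sample_mu` below are toy placeholders of our own, not the Hamiltonian map studied in Section 3.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_chain(x0, U_h, sample_mu, n_steps):
    """Repeated-interaction Markov chain X^h_{(n+1)h} = U^(h)(X^h_{nh}, Y_nh):
    each piece of the environment Y_nh is drawn independently from mu,
    then the deterministic interaction map U^(h) is applied."""
    x = np.asarray(x0, dtype=float)
    path = [x]
    for _ in range(n_steps):
        y = sample_mu()      # fresh, independently sampled piece of the environment
        x = U_h(x, y)        # deterministic interaction of duration h
        path.append(x)
    return np.array(path)

# Toy placeholders (illustrative only):
h = 0.01
U_h = lambda x, y: x + h * np.array([x[1], -x[0] + y[0]])
sample_mu = lambda: rng.standard_normal(2)   # mu = standard Gaussian on R^2

path = simulate_chain(x0=[1.0, 0.0], U_h=U_h, sample_mu=sample_mu, n_steps=1000)
```

The randomness enters exactly as in (3): only through the freshly sampled environment state at the start of each interaction.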

All this procedure can be described as a deterministic dynamical system, as in [1,7]. But here we are only interested in the continuous-time limit of $(X^h_{nh})_{n\in\mathbb{N}}$, that is, the limit process when the time parameter h goes to 0, and not in the whole dynamics. This approach allows us to investigate this limit in models with more general $U^{(h)}$ and μ. In the next section, we focus on a particular Hamiltonian interaction. We also present a way to obtain the map $U^{(h)}$ in order to study the convergence in law of the Markov chain to the solution of a stochastic differential equation.

3. System in a confining potential

We now describe in detail the physical model we consider, and particularly the interaction between the system and each part of the environment. The small system is a unit-mass particle in $\mathbb{R}^d$ moving in a confining potential V. This function from $\mathbb{R}^d$ to $\mathbb{R}$ represents the potential energy and rules the evolution without interaction. The environment is a collection of unit-mass particles in $\mathbb{R}^d$ too (without loss of generality we assume that the small system and each part of the environment are both d-dimensional). Their free dynamics are of the same form, given by a potential W. The coupling term is $\eta(q)\beta(Q)$, where the maps η and β go from $\mathbb{R}^d$ to $\mathbb{R}$. Thus, the total Hamiltonian H representing the energy of the whole system for one interaction is

$$H(p, q, P, Q) = H_S(p, q) + H_E(P, Q) + \eta(q)\,\beta(Q), \qquad (4)$$

with

$$H_S(q, p) = \frac{\|p\|^2}{2} + V(q) \quad\text{and}\quad H_E(Q, P) = \frac{\|P\|^2}{2} + W(Q),$$

where $\|\cdot\|$ denotes the Euclidean norm of $\mathbb{R}^d$. With an abuse of notation, in the following $\|\cdot\|$ shall also denote the Hilbert–Schmidt norm of a matrix. For such a Hamiltonian interaction, the evolutions of the system and of each piece of the environment satisfy the following Hamilton equations of motion:

$$\begin{cases} q'(t) = p(t) \\ p'(t) = -\nabla V(q(t)) - \nabla\eta(q(t))\,\beta(Q(t)) \end{cases} \quad\text{and}\quad \begin{cases} Q'(t) = P(t) \\ P'(t) = -\nabla W(Q(t)) - \eta(q(t))\,\nabla\beta(Q(t)). \end{cases}$$

These differential equations have no explicit solution in general, and the only way to express their solution is the integral form

$$q(h) = q_0 + \int_0^h p(s)\, ds, \qquad (5)$$

$$p(h) = p_0 - \int_0^h \nabla V(q(s)) + \nabla\eta(q(s))\,\beta(Q(s))\, ds, \qquad (6)$$

for initial conditions $q_0$, $p_0$, $Q_0$ and $P_0$.

3.1. Assumptions

At this stage there is no assumption on the Hamiltonian or on the measure μ we fix on the environment. In order to study the continuous-time limit and to get a consistent model, we assume the following:

Assumptions (A).

i. The functions V and η are twice continuously differentiable, and the maps W, β are continuously differentiable.
ii. The maps V and W are non-negative.
iii. The potential V satisfies
$$\lim_{\|q\|\to+\infty} V(q) = +\infty.$$
iv. There exists a positive $C_\eta \le \min\big(1,\, 1/(|m| + 1/2)\big)$ such that
$$\|\nabla\eta(q)\|^2 + |\eta(q)| + |\eta(q)|^2 \le C_\eta\, V(q),$$
for all q in $\mathbb{R}^d$, where $m = \mathbb{E}[\beta(Q)]$.
v. For all Q in $\mathbb{R}^d$, we have $\|\nabla\beta(Q)\|^2 + |\beta(Q)|^2 \le W(Q)$.
vi. There exist a positive constant C and $\alpha \ge 1$ such that all the functions V, ∇V, D²V, η, ∇η, D²η, W, ∇W, β and ∇β are respectively bounded by $C(1 + \|q\|^\alpha)$ and $C(1 + \|Q\|^\alpha)$.
vii. The random variable P possesses a moment of order 9α with respect to the measure μ, and Q a moment of order max(9α, 5α²).

Before studying the limit evolution for such a model, we make some remarks on these assumptions:

– Note that, from (ii), (iv) and (v), we can easily deduce that the Hamiltonian H is a non-negative function. Thus, it can be viewed as an energy.
– Hamilton's equations only depend on the derivatives of the maps in H, and not directly on these maps. The potentials V and W are therefore defined up to a constant. Hence, assumptions (i) and (ii) turn out to be natural and not restrictive for physical systems.
– Thanks to (iii), the potential V is called confining. It prevents the explosion of the state of the system.
– Conditions (iv) and (v) allow us to control the coupling part with the potentials V and W. This means that the free dynamics are strong enough to rule the evolutions.

Example (A Polynomial Example). A typical example in physical models is the case of polynomial maps and a Gibbs measure on the environment. Consider a Hamiltonian system in dimension 1 in which the interaction is given by

$$H(q, p, Q, P) = \frac{p^2}{2} + 3(q^6 + 1) + \frac{P^2}{2} + 2(Q^8 + 1) + q^2 Q^2.$$

We fix on the environment the related Gibbs probability measure

$$d\mu = \frac{e^{-P^2/2 - 2(Q^8 + 1)}}{Z}\, dP\, dQ,$$

where Z is a normalizing constant. This measure is invariant with respect to the free dynamics of the environment and represents an environment at equilibrium.

The harmonic oscillator example in [7], with a normal measure on the environment, can be recovered in this context by taking $V(q) = q^2/2$, $W(Q) = Q^2/2$, $\eta(q) = -q$ and $\beta(Q) = Q$. Since we have described one interaction between the system and each piece of the environment, we can now set up the repeated interaction scheme and investigate the continuous-time limit of the evolution of the system.
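For this polynomial example, the constants $m = \mathbb{E}[\beta(Q)] = \mathbb{E}[Q^2]$ and $\sigma^2 = \mathrm{Var}(Q^2)$ that appear in the limit equation (2) can be approximated by numerical quadrature on the Q-marginal of μ. The following sketch uses plain Riemann sums; the integration range and grid size are ad hoc choices of ours.

```python
import numpy as np

# Q-marginal of the Gibbs measure: density proportional to exp(-2 Q^8);
# the P-marginal is an independent standard Gaussian and plays no role here.
Q = np.linspace(-2.0, 2.0, 200001)   # the density is negligible outside [-2, 2]
w = np.exp(-2.0 * Q**8)              # unnormalized density on a uniform grid

m = np.sum(Q**2 * w) / np.sum(w)               # m = E[beta(Q)] = E[Q^2]
second_moment = np.sum(Q**4 * w) / np.sum(w)   # E[Q^4]
sigma2 = second_moment - m**2                  # sigma^2 = Var(Q^2)
print(m, sigma2)
```

The normalizing constant Z cancels in the ratios, so it never has to be computed explicitly.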


3.2. Renormalization of the Hamiltonian interaction

In the previous sections, we have presented one interaction between the system and the environment. The state of the system after the interaction is given by (5) and (6), which can be roughly approximated for a small h by

$$q(h) = q_0 + h\, p_0 + o(h), \qquad (7)$$
$$p(h) = p_0 - h \left[ \nabla V(q_0) + \nabla\eta(q_0)\,\beta(Q_0) \right] + o(h). \qquad (8)$$

At each time step nh in this scheme, the state (Q, P) is sampled from the probability measure μ. Note that randomness only appears at order 1, through the term β(Q). If we studied the continuous-time limit of the Markov chain defined from (5) and (6), we would obtain a deterministic limit evolution, given by the solution of

$$\begin{cases} dq_t = p_t\, dt \\ dp_t = \left[ -\nabla V(q_t) - m\,\nabla\eta(q_t) \right] dt, \end{cases}$$

where $m = \mathbb{E}_\mu[\beta(Q)]$. In this equation, the term $-m\,\nabla\eta(q_t)$ represents the mean force applied by the environment on the system; however, the randomness of the environment is lost in the limit. Therefore, as in [7], it is convenient to reinforce the coupling term in order to keep it. But contrary to [7], the strengthening of the interaction is here made directly on the initial Hamiltonian, as for quantum systems (see [2]). Hence, for the study of the continuous-time limit, we consider the new Hamiltonian given by

$$\tilde{H}(p, q, P, Q) = \frac{\|p\|^2}{2} + V(q) + m\,\eta(q) + \frac{\|P\|^2}{2} + \frac{1}{h}\left( W(Q) + \frac{|m|^2}{h} \right) + \frac{1}{\sqrt{h}}\,\eta(q)\big(\beta(Q) - m\big). \qquad (9)$$

Note that we have the effective potential on the system, represented by the confining potential V and the mean potential due to the coupling term, and a reinforcement of the interaction by a factor $1/\sqrt{h}$. The potential of the environment is also changed in order to keep a non-negative function $\tilde{H}$ for all positive h. We shall see that this last change has no effect on the limit evolution. It can also be noticed that, when h equals 1, we recover (up to a constant) the Hamiltonian H.

Now the map $U^{(h)}$ is obtained from Hamilton's equations related to $\tilde{H}$, that is, the position and the momentum are

$$q(h) = q_0 + \int_0^h p(s)\, ds, \qquad (10)$$
$$p(h) = p_0 - \int_0^h \nabla V(q(s)) + m\,\nabla\eta(q(s)) + \frac{1}{\sqrt{h}}\,\nabla\eta(q(s))\big(\beta(Q(s)) - m\big)\, ds. \qquad (11)$$

Then we get the Markov chain of the system $(X^h_{nh})_{n\in\mathbb{N}}$ satisfying the equality

$$X^h_{(n+1)h}(Y) = U^{(h)}\big(X^h_{nh}(Y),\, Y_{nh}\big).$$
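Since Hamilton's equations for the renormalized Hamiltonian have no explicit solution in general, the one-interaction map $U^{(h)}$ can only be approximated numerically. The following one-dimensional sketch integrates them with classical Runge–Kutta substeps; the function names and the RK4 choice are ours, not the paper's.

```python
import numpy as np

def make_U_h(V1, eta, eta1, W1, beta, beta1, m, h, n_sub=100):
    """Numerical sketch (d = 1) of the one-interaction map U^(h):
    RK4 integration over [0, h] of Hamilton's equations for the
    renormalized Hamiltonian, i.e.
      q' = p,   p' = -V1(q) - m*eta1(q) - eta1(q)*(beta(Q) - m)/sqrt(h),
      Q' = P,   P' = -W1(Q)/h - eta(q)*beta1(Q)/sqrt(h).
    V1, eta1, W1, beta1 stand for the derivatives V', eta', W', beta'."""
    sqh = np.sqrt(h)

    def field(s):
        q, p, Q, P = s
        return np.array([p,
                         -V1(q) - m * eta1(q) - eta1(q) * (beta(Q) - m) / sqh,
                         P,
                         -W1(Q) / h - eta(q) * beta1(Q) / sqh])

    def U_h(x, y):
        s = np.array([x[0], x[1], y[0], y[1]], dtype=float)
        dt = h / n_sub
        for _ in range(n_sub):                 # classical RK4 substeps
            k1 = field(s)
            k2 = field(s + 0.5 * dt * k1)
            k3 = field(s + 0.5 * dt * k2)
            k4 = field(s + dt * k3)
            s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        return s[:2]                           # the system's state (q(h), p(h))

    return U_h

# Harmonic case recalled above: V(q) = q^2/2, W(Q) = Q^2/2, eta(q) = -q, beta(Q) = Q.
U = make_U_h(V1=lambda q: q, eta=lambda q: -q, eta1=lambda q: -1.0,
             W1=lambda Q: Q, beta=lambda Q: Q, beta1=lambda Q: 1.0, m=0.0, h=0.01)
x1 = U([1.0, 0.0], [0.3, -0.2])   # state of the system after one interaction
```

Note that the environment part of the field carries the stiff 1/h factor from (9), so the number of substeps should grow as h shrinks.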

The aim is now to prove the convergence of this Markov chain to the solution of the stochastic differential equation (2) when h goes to 0.

3.3. Convergence to the stochastic differential equation

At this stage, the evolution of the small system is represented by the discrete-time Markov chain $(X^h_{nh})_{n\in\mathbb{N}}$. However, in order to study the convergence to a solution of a stochastic differential equation, it is suitable to consider a continuous-time process related to $(X^h_{nh})_{n\in\mathbb{N}}$. Therefore let $(X^h_t)_{t\in\mathbb{R}^+}$ be the process obtained by linearly interpolating the Markov chain in time, as in [7]. More precisely, we define $(X^h_t)_{t\in\mathbb{R}^+}$ by

$$X^h_t = X^h_{\lfloor t/h \rfloor h} + \frac{t - \lfloor t/h \rfloor h}{h} \left( X^h_{(\lfloor t/h \rfloor + 1)h} - X^h_{\lfloor t/h \rfloor h} \right),$$
for all t in $\mathbb{R}^+$. Note that, because of the linear interpolation, the process $(X^h_t)_{t\in\mathbb{R}^+}$ is not Markovian anymore. However, we can study its convergence in law when the time parameter h goes to 0. We now state the main result on the convergence of the process $(X^h_t)_{t\in\mathbb{R}^+}$.

Theorem 3.1. Under Assumptions (A) and for all initial conditions $(q_0, p_0)$ in $\mathbb{R}^{2d}$, the process $(X^h_t)_{t\in\mathbb{R}^+}$ converges in law, when h goes to 0, to the solution of the stochastic differential equation

$$\begin{cases} dq_t = p_t\, dt \\ dp_t = \left[ -\nabla V(q_t) - m\,\nabla\eta(q_t) \right] dt - \sigma\,\nabla\eta(q_t)\, dW_t, \end{cases}$$

with $X_0 = (q_0, p_0)$, where $W_t$ is a one-dimensional standard Brownian motion.

The proof of this result is based on the uniform convergence on compact sets of infinitesimal generators, and on the uniqueness of the solution of the martingale problem related to the limit generator. Therefore we now introduce some preliminary results and some necessary tools. The infinitesimal generator associated with the stochastic differential equation (2) is the operator L defined by

$$L = \sum_i p_i\, \partial_{q^i} - \sum_i \big( \partial_{q^i} V(q) + m\, \partial_{q^i}\eta(q) \big)\, \partial_{p_i} + \frac{1}{2}\, \sigma^2 \sum_{i,j} \big( \partial_{q^i}\eta(q) \big)\big( \partial_{q^j}\eta(q) \big)\, \partial_{p_i p_j}, \qquad (12)$$

acting on $C_c^\infty(\mathbb{R}^{2d})$, the set of infinitely differentiable functions with compact support, where the $q^i$'s and the $p_i$'s respectively denote the coordinates of q and p, and $\partial_{q^i} = \partial/\partial q^i$, etc.

Since the process $(X^h_t)_{t\in\mathbb{R}^+}$ is not Markovian, a generator cannot be associated with it in the same way. But a discrete-time generator can be related to the Markov chain $(X^h_{nh})_{n\in\mathbb{N}}$. From its transition kernel $\Pi_h$, one defines the generator $A_h$ by

$$A_h f(x) = \int \big( f(z) - f(x) \big)\, \Pi_h(x, dz) = \mathbb{E}\left[ f\big(U^{(h)}(x, Y)\big) - f(x) \right],$$

for all f in $C_c^\infty(\mathbb{R}^{2d})$ and all x in $\mathbb{R}^{2d}$. As previously explained, these generators shall play a key role in the proof of Theorem 3.1 through their convergence on compact sets. However, this convergence is not sufficient in general to deduce the convergence of the processes. Indeed, there is, for instance, no reason to have uniqueness of the process associated with the limit generator. In order to study the Markov processes related to a given generator, we introduce the notion of martingale problem.

Definition 1. Let x be in $\mathbb{R}^{2d}$. A process $(Z_t)_{t\in\mathbb{R}^+}$ is a solution of the martingale problem for (L, x) if, for all $f \in C_c^\infty(\mathbb{R}^{2d})$, the process

$$M_t^f = f(Z_t) - f(x) - \int_0^t L f(Z_s)\, ds$$

is a martingale with respect to $\mathcal{F}_t = \sigma(Z_s,\, s \le t)$.
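The discrete-time generator $A_h$ is simply an expectation, so it can be estimated by Monte Carlo for any one-step map. In the sketch below the linear map is a toy of our own (not the Hamiltonian one), chosen so that the exact value can be checked by hand.

```python
import numpy as np

rng = np.random.default_rng(2)

def discrete_generator(f, x, U_h, sample_mu, n_samples=20000):
    """Monte Carlo estimate of A_h f(x) = E[ f(U^(h)(x, Y)) - f(x) ]."""
    x = np.asarray(x, dtype=float)
    total = 0.0
    for _ in range(n_samples):
        total += f(U_h(x, sample_mu())) - f(x)
    return total / n_samples

# Toy one-step map: rotation drift plus sqrt(h)-scaled noise on p.
h = 0.01
U_h = lambda x, y: np.array([x[0] + h * x[1], x[1] - h * x[0] + np.sqrt(h) * y])
sample_mu = lambda: rng.standard_normal()
f = lambda x: x[0] ** 2 + x[1] ** 2

a = discrete_generator(f, [1.0, 0.0], U_h, sample_mu)
# For this toy map a direct computation gives (1/h) A_h f(q, p) = 1 + h (q^2 + p^2),
# so a / h should be close to 1.01 up to Monte Carlo error.
```

This is exactly the quantity $\frac{1}{h} A_h f(x)$ whose convergence to $L f(x)$ on compact sets drives the proof of Theorem 3.1.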

In order to prove existence and uniqueness (in law) of the solution of the martingale problem for (L, x) for all x in $\mathbb{R}^{2d}$, it is sufficient to show existence and uniqueness of the solution of the stochastic differential equation (2) under Assumptions (A) (see Theorem 1.1, Chapter 5 in [3]). Thus, the next lemma shall imply that the unique process related to L is $(X_t)_{t\in\mathbb{R}^+}$.

Lemma 1. For all positive T and all $(q_0, p_0)$ in $\mathbb{R}^{2d}$, under Assumptions (A) the stochastic differential equation

$$\begin{cases} dq_t = p_t\, dt \\ dp_t = \left[ -\nabla V(q_t) - m\,\nabla\eta(q_t) \right] dt - \sigma\,\nabla\eta(q_t)\, dW_t, \end{cases}$$

has a unique solution $X_t = (q_t, p_t)$ on [0, T] with $X_0 = (q_0, p_0)$. Moreover, the solution satisfies

$$\mathbb{E}\left[ F(X_t) \right] \le F(X_0)\, e^{\sigma^2 t}, \qquad (13)$$

where the map F is defined from $\mathbb{R}^{2d}$ to $\mathbb{R}$ by

$$F(q, p) = \frac{\|p\|^2}{2} + V(q) + m\,\eta(q).$$

Proof. The result follows from Theorem 3.5 in [11] by proving that F is a suitable Lyapunov function for the stochastic differential equation. Condition (i) in Assumptions (A) implies that the coefficients of (2) are locally Lipschitz and locally linearly bounded. Hence the local existence and uniqueness of the solution of the stochastic differential equation (2) are guaranteed (see [12], where it is shown that these assumptions on the coefficients are sufficient). It remains to prove that F plays the role of a Lyapunov function for (2).

First, under Assumptions (A), the map F is non-negative on $\mathbb{R}^{2d}$. Now let us compute the action of L on F. For all (q, p) in $\mathbb{R}^{2d}$, we get

$$L F(q, p) = \frac{1}{2}\, \sigma^2\, \|\nabla\eta(q)\|^2.$$

By (iv), we obtain

$$L F(q, p) \le \sigma^2 F(q, p). \qquad (14)$$

Note also that, since V and η satisfy properties (iii) and (iv), the asymptotic behavior of the function F is the following:

$$\inf_{\|(q, p)\| > R} F(q, p) \xrightarrow[R\to+\infty]{} +\infty. \qquad (15)$$

Finally, Eqs. (14) and (15) give the result thanks to Theorem 3.5 in [11]. □
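Although the proof only needs well-posedness, the solution of (2) guaranteed by Lemma 1 is easy to simulate with an Euler–Maruyama scheme. We take the harmonic-oscillator coefficients $V(q) = q^2/2$, $\eta(q) = -q$ recalled earlier; the values of m, σ and the step size are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def euler_maruyama(q0, p0, grad_V, grad_eta, m, sigma, T, dt):
    """Euler–Maruyama discretization of the limit equation (2):
       dq = p dt,   dp = [-grad_V(q) - m*grad_eta(q)] dt - sigma*grad_eta(q) dW."""
    n = int(round(T / dt))
    qs = np.empty(n + 1); ps = np.empty(n + 1)
    qs[0], ps[0] = q0, p0
    q, p = float(q0), float(p0)
    for k in range(n):
        dW = np.sqrt(dt) * rng.standard_normal()   # Brownian increment
        q, p = (q + p * dt,
                p + (-grad_V(q) - m * grad_eta(q)) * dt - sigma * grad_eta(q) * dW)
        qs[k + 1], ps[k + 1] = q, p
    return qs, ps

# Harmonic case: V(q) = q^2/2, eta(q) = -q (so grad_eta = -1).
qs, ps = euler_maruyama(1.0, 0.0, grad_V=lambda q: q, grad_eta=lambda q: -1.0,
                        m=0.0, sigma=0.5, T=10.0, dt=1e-3)
```

With these coefficients the diffusion term is additive, so the scheme is also exact in its strong order for the noise part.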



We now state some bounds on $U^{(h)}(x, Y)$ which shall later be helpful for the uniform convergence on compact sets of the generator $A_h$ to L. Recall that x, Y and $U^{(h)}(x, Y)$ denote respectively $(q_0, p_0)$, $(Q_0, P_0)$ and $(q(h), p(h)) = U^{(h)}(q_0, p_0, Q_0, P_0)$. Depending on the situation, we shall use both notations.

Lemma 2. Assume that Assumptions (A) hold. Then we have, for all positive R,

$$\lim_{h\to 0}\ \sup_{\|x\|\le R}\ \frac{1}{h}\, \mathbb{E}\left[ \left\| U^{(h)}(x, Y) - x \right\|^3 \right] = 0. \qquad (16)$$

Moreover, for all i = 1, ..., d, we get

$$\lim_{h\to 0}\ \sup_{\|x\|\le R}\ \left| \frac{1}{h}\, \mathbb{E}\left[ q^i(h) - q_0^i \right] - p_0^i \right| = 0, \qquad (17)$$

$$\lim_{h\to 0}\ \sup_{\|x\|\le R}\ \left| \frac{1}{h}\, \mathbb{E}\left[ p_i(h) - p_0^i \right] - \big( -\partial_{q^i} V(q_0) - m\,\partial_{q^i}\eta(q_0) \big) \right| = 0, \qquad (18)$$

and, for all i, j = 1, ..., d,

$$\lim_{h\to 0}\ \sup_{\|x\|\le R}\ \left| \frac{1}{h}\, \mathbb{E}\left[ \big(q^i(h) - q_0^i\big)\big(q^j(h) - q_0^j\big) \right] \right| = 0, \qquad (19)$$

$$\lim_{h\to 0}\ \sup_{\|x\|\le R}\ \left| \frac{1}{h}\, \mathbb{E}\left[ \big(q^i(h) - q_0^i\big)\big(p_j(h) - p_0^j\big) \right] \right| = 0, \qquad (20)$$

$$\lim_{h\to 0}\ \sup_{\|x\|\le R}\ \left| \frac{1}{h}\, \mathbb{E}\left[ \big(p_i(h) - p_0^i\big)\big(p_j(h) - p_0^j\big) \right] - \sigma^2\, \big(\partial_{q^i}\eta(q_0)\big)\big(\partial_{q^j}\eta(q_0)\big) \right| = 0. \qquad (21)$$

Proof. These bounds shall be obtained from the solution of Hamilton's equations of motion related to $\tilde{H}$. We start with a bound on the positions q and Q. For all t in [0, h], we have

$$q(t) = q_0 + \int_0^t p(s)\, ds \quad\text{and}\quad Q(t) = Q_0 + \int_0^t P(s)\, ds.$$

Applying the Cauchy–Schwarz inequality, we obtain

$$\|q(t) - q_0\|^2 = \left\| \int_0^t p(s)\, ds \right\|^2 = \sum_{j=1}^d \left( \int_0^t p_j(s)\, ds \right)^2 \le t \sum_{j=1}^d \int_0^t |p_j(s)|^2\, ds = t \int_0^t \|p(s)\|^2\, ds \le 2t \int_0^t \tilde{H}\big(q(s), p(s), Q(s), P(s)\big)\, ds.$$

Since the Hamiltonian is conserved in time, the equality $\tilde{H}(q(s), p(s), Q(s), P(s)) = \tilde{H}(q_0, p_0, Q_0, P_0)$ holds for all s. Then we end up with the inequality, for all t in [0, h],

$$\|q(t) - q_0\| \le C_1\, h\, \sqrt{\tilde{H}(q_0, p_0, Q_0, P_0)}. \qquad (22)$$

Notice that, under (iv), (v) and (vi), there exists a positive $C_2$ such that

$$\tilde{H}(q, p, Q, P) \le \frac{C_2}{h}\left( \|p\|^2 + \|q\|^\alpha + \|P\|^2 + \|Q\|^\alpha + 1 \right) \qquad (23)$$

for $0 < h \le 1$. Indeed, using (iv) and (v), we get

$$|m\,\eta(q)| \le |m|\, C_\eta\, V(q) \le \frac{1}{h}\, |m|\, C_\eta\, V(q)$$

and also

$$\frac{1}{\sqrt{h}}\, |\eta(q)(\beta(Q) - m)| \le \frac{C_3}{h}\left( |\eta(q)|^2 + |\beta(Q)|^2 + 1 \right) \le \frac{C_3}{h}\left( V(q) + W(Q) + 1 \right).$$

Then we deduce (23) from the polynomial bound (vi). Finally, Eq. (22) becomes, for all t in [0, h],

$$\|q(t) - q_0\| \le C_4\, \sqrt{h}\, \sqrt{\|p_0\|^2 + \|q_0\|^\alpha + \|P_0\|^2 + \|Q_0\|^\alpha + 1}. \qquad (24)$$

For the same reason, we get for all t in [0, h] the same bound on the position of the environment:

$$\|Q(t) - Q_0\| \le C_4\, \sqrt{h}\, \sqrt{\|p_0\|^2 + \|q_0\|^\alpha + \|P_0\|^2 + \|Q_0\|^\alpha + 1}. \qquad (25)$$

Let us now consider the momentum components of Hamilton's equations. The momentum p of the system satisfies

$$p(t) = p_0 - \int_0^t \nabla V(q(s)) + m\,\nabla\eta(q(s)) + \frac{1}{\sqrt{h}}\,\nabla\eta(q(s))\big(\beta(Q(s)) - m\big)\, ds. \qquad (26)$$

As previously, under (vi), there exists a positive constant $C_5$ such that

$$\|p(t) - p_0\| \le \frac{C_5}{\sqrt{h}} \int_0^t \|q(s)\|^\alpha + \|Q(s)\|^\alpha + 1\, ds,$$

for $0 < h \le 1$. Using (24), (25) and the convexity of the map $z \mapsto z^\alpha$, we obtain

$$\|q(s)\|^\alpha \le C_6 \left( \|q_0\|^\alpha + h^{\alpha/2} \left( \|p_0\|^2 + \|q_0\|^\alpha + \|P_0\|^2 + \|Q_0\|^\alpha + 1 \right)^{\alpha/2} \right), \qquad (27)$$

and

$$\|Q(s)\|^\alpha \le C_6 \left( \|Q_0\|^\alpha + h^{\alpha/2} \left( \|p_0\|^2 + \|q_0\|^\alpha + \|P_0\|^2 + \|Q_0\|^\alpha + 1 \right)^{\alpha/2} \right). \qquad (28)$$

Hence, for $0 < h \le 1$ and for all t in [0, h], the following bound holds:

$$\|p(t) - p_0\| \le C_7\, \sqrt{h} \left( \|q_0\|^\alpha + \|Q_0\|^\alpha + \left( \|p_0\|^2 + \|q_0\|^\alpha + \|P_0\|^2 + \|Q_0\|^\alpha + 1 \right)^{\alpha/2} + 1 \right). \qquad (29)$$

In the same way, since the equation on the environment is

$$P(t) = P_0 - \frac{1}{h} \int_0^t \nabla W(Q(s))\, ds - \frac{1}{\sqrt{h}} \int_0^t \eta(q(s))\,\nabla\beta(Q(s))\, ds,$$

we get, for $0 < h \le 1$ and all t in [0, h],

$$\|P(t) - P_0\| \le C_8 \left( \|q_0\|^\alpha + \|Q_0\|^\alpha + \left( \|p_0\|^2 + \|q_0\|^\alpha + \|P_0\|^2 + \|Q_0\|^\alpha + 1 \right)^{\alpha/2} + 1 \right). \qquad (30)$$
Since we have these bounds on the solution of Hamilton's equations, we can now start the proof of the first limit (16). Using (24), (29) and the convexity of the map $z \mapsto z^3$, we now have

$$\left\| U^{(h)}(x, Y) - x \right\|^3 \le C_9\, h^{3/2} \left( \|q_0\|^{3\alpha} + \|Q_0\|^{3\alpha} + \left( \|p_0\|^2 + \|q_0\|^\alpha + \|P_0\|^2 + \|Q_0\|^\alpha + 1 \right)^{3\alpha/2} \right).$$

Since $3\alpha/2 \ge 1$, we get by convexity again that

$$\left\| U^{(h)}(x, Y) - x \right\|^3 \le C_{10}\, h^{3/2} \left( \|q_0\|^{3\alpha} + \|Q_0\|^{3\alpha} + \|p_0\|^{3\alpha} + \|q_0\|^{3\alpha^2/2} + \|P_0\|^{3\alpha} + \|Q_0\|^{3\alpha^2/2} + 1 \right).$$

Then, for all positive R, under (i) and (vii), Eq. (16) holds.

In order to prove the other limits, we need to consider higher orders in the expression of the solutions q(h) and p(h) of Hamilton's equations. Let us start with the position q(h). We have

$$q(h) = q_0 + \int_0^h p(s)\, ds = q_0 + \int_0^h \left( p_0 - \int_0^s \nabla V(q(t)) + m\,\nabla\eta(q(t)) + \frac{1}{\sqrt{h}}\,\nabla\eta(q(t))\big(\beta(Q(t)) - m\big)\, dt \right) ds = q_0 + h\, p_0 - M_h(q_0, p_0, Q_0, P_0),$$

where

$$M_h(q_0, p_0, Q_0, P_0) = \int_0^h \int_0^s \nabla V(q(t)) + m\,\nabla\eta(q(t)) + \frac{1}{\sqrt{h}}\,\nabla\eta(q(t))\big(\beta(Q(t)) - m\big)\, dt\, ds.$$

Since the integral with respect to t in the term $M_h$ has been bounded previously, we already know that

$$\|M_h(q_0, p_0, Q_0, P_0)\| \le C_{11}\, h^{3/2} \left( \|q_0\|^\alpha + \|Q_0\|^\alpha + \left( \|p_0\|^2 + \|q_0\|^\alpha + \|P_0\|^2 + \|Q_0\|^\alpha + 1 \right)^{\alpha/2} + 1 \right). \qquad (31)$$

Using the inequality $a \le 1 + a^2$ and by convexity, we obtain the bound

$$\|M_h(q_0, p_0, Q_0, P_0)\| \le C_{12}\, h^{3/2} \left( \|p_0\|^{2\alpha} + \|q_0\|^{\alpha^2} + \|P_0\|^{2\alpha} + \|Q_0\|^{\alpha^2} + 1 \right). \qquad (32)$$

Therefore, for all i = 1, ..., d, we have

$$\left| \frac{1}{h}\, \mathbb{E}\left[ q^i(h) - q_0^i \right] - p_0^i \right| = \frac{1}{h} \left| \mathbb{E}\left[ q^i(h) - q_0^i - h\, p_0^i \right] \right| \le \frac{1}{h}\, \mathbb{E}\left[ \|M_h(q_0, p_0, Q_0, P_0)\| \right].$$

Again, for all positive R, using (32) and under assumptions (i) and (vii), the limit (17) follows.

Let us now consider the momentum p(h). From Hamilton's equations, we deduce that

$$p(h) = p_0 - \int_0^h \nabla V(q(s)) + m\,\nabla\eta(q(s)) + \frac{1}{\sqrt{h}}\,\nabla\eta(q(s))\big(\beta(Q(s)) - m\big)\, ds = p_0 - \sqrt{h}\,\nabla\eta(q_0)\big(\beta(Q_0) - m\big) - h\big(\nabla V(q_0) + m\,\nabla\eta(q_0)\big) - N_h(q_0, p_0, Q_0, P_0), \qquad (33)$$

where

$$N_h(q_0, p_0, Q_0, P_0) = \int_0^h \int_0^s \left[ D^2 V(q(t)) \cdot q'(t) + m\, D^2\eta(q(t)) \cdot q'(t) \right] dt\, ds + \int_0^h \int_0^s \frac{1}{\sqrt{h}}\, D^2\eta(q(t)) \cdot q'(t)\,\big(\beta(Q(t)) - m\big)\, dt\, ds + \int_0^h \int_0^s \frac{1}{\sqrt{h}}\, \nabla\eta(q(t))\,\big\langle \nabla\beta(Q(t)),\, Q'(t) \big\rangle\, dt\, ds,$$

where · denotes the product of a matrix and a vector, and ⟨ , ⟩ the scalar product. We then study the term $N_h(q_0, p_0, Q_0, P_0)$. Under assumption (vi) and using (27) and (29), we obtain

$$\left\| D^2 V(q(t)) \cdot q'(t) \right\| \le C\big( \|q(t)\|^\alpha + 1 \big)\, \|p(t)\| \le C_{13} \left( \|q(t)\|^{2\alpha} + \|p(t)\|^2 + 1 \right) \le C_{14} \left( \|q_0\|^{2\alpha} + \|p_0\|^{2\alpha} + \|q_0\|^{\alpha^2} + \|P_0\|^{2\alpha} + \|Q_0\|^{\alpha^2} + 1 \right).$$

For the same reason, we have the bound

$$\left\| m\, D^2\eta(q(t)) \cdot q'(t) \right\| \le C_{15} \left( \|q_0\|^{2\alpha} + \|p_0\|^{2\alpha} + \|q_0\|^{\alpha^2} + \|P_0\|^{2\alpha} + \|Q_0\|^{\alpha^2} + 1 \right).$$

The third term in $N_h$ can also be bounded as follows:

$$\left\| D^2\eta(q(t)) \cdot q'(t)\,\big(\beta(Q(t)) - m\big) \right\| \le C_{16}\big( \|q(t)\|^\alpha + 1 \big)\, \|p(t)\|\, \big( \|Q(t)\|^\alpha + 1 \big) \le C_{17} \left( \|q(t)\|^{3\alpha} + \|p(t)\|^3 + \|Q(t)\|^{3\alpha} + 1 \right)$$

and, thus,

$$\left\| D^2\eta(q(t)) \cdot q'(t)\,\big(\beta(Q(t)) - m\big) \right\| \le C_{18} \left( \|q_0\|^{3\alpha} + \|Q_0\|^{3\alpha} + \|p_0\|^{3\alpha} + \|q_0\|^{3\alpha^2/2} + \|P_0\|^{3\alpha} + \|Q_0\|^{3\alpha^2/2} + 1 \right).$$

The last term remains. The same kind of bound is obtained:

$$\left\| \nabla\eta(q(t))\,\big\langle \nabla\beta(Q(t)),\, Q'(t) \big\rangle \right\| \le C_{19} \left( \|q_0\|^{3\alpha} + \|Q_0\|^{3\alpha} + \|p_0\|^{3\alpha} + \|q_0\|^{3\alpha^2/2} + \|P_0\|^{3\alpha} + \|Q_0\|^{3\alpha^2/2} + 1 \right).$$

Finally, we end up with

$$\|N_h(q_0, p_0, Q_0, P_0)\| \le C_{20}\, h^{3/2} \left( \|q_0\|^{3\alpha} + \|Q_0\|^{3\alpha} + \|p_0\|^{3\alpha} + \|q_0\|^{3\alpha^2/2} + \|P_0\|^{3\alpha} + \|Q_0\|^{3\alpha^2/2} + 1 \right). \qquad (34)$$

Then, for all i = 1, ..., d, it follows that

$$\left| \frac{1}{h}\, \mathbb{E}\left[ p_i(h) - p_0^i \right] - \big( -\partial_{q^i} V(q_0) - m\,\partial_{q^i}\eta(q_0) \big) \right| \le \frac{1}{h}\, \mathbb{E}\left[ \|N_h(q_0, p_0, Q_0, P_0)\| \right].$$

As previously, assumptions (i), (vii) and the bound (34) give that Eq. (18) holds.

The remaining limits (19)–(21) can be proved in the same way. Therefore we now only show (21). For all i, j = 1, ..., d, note that (26) implies

$$\big(p_i(h) - p_0^i\big)\big(p_j(h) - p_0^j\big) = \Big( \sqrt{h}\,\partial_{q^i}\eta(q_0)\big(\beta(Q_0) - m\big) + h\big(\partial_{q^i} V(q_0) + m\,\partial_{q^i}\eta(q_0)\big) + N_h^i(q_0, p_0, Q_0, P_0) \Big) \times \Big( \sqrt{h}\,\partial_{q^j}\eta(q_0)\big(\beta(Q_0) - m\big) + h\big(\partial_{q^j} V(q_0) + m\,\partial_{q^j}\eta(q_0)\big) + N_h^j(q_0, p_0, Q_0, P_0) \Big) = h\,\big(\partial_{q^i}\eta(q_0)\big)\big(\partial_{q^j}\eta(q_0)\big)\big(\beta(Q_0) - m\big)^2 + h^{3/2}\, Z_h^{ij}(q_0, p_0, Q_0, P_0),$$

where the other terms, of order at least 3/2, are grouped together in $Z_h^{ij}(q_0, p_0, Q_0, P_0)$. By definition of $\sigma^2 = \mathbb{E}\big[ (\beta(Q_0) - m)^2 \big]$, we obtain

$$\left| \frac{1}{h}\, \mathbb{E}\left[ \big(p_i(h) - p_0^i\big)\big(p_j(h) - p_0^j\big) \right] - \sigma^2\, \big(\partial_{q^i}\eta(q_0)\big)\big(\partial_{q^j}\eta(q_0)\big) \right| \le \sqrt{h}\; \mathbb{E}\left[ \left| Z_h^{ij}(q_0, p_0, Q_0, P_0) \right| \right].$$

But, using all the previous bounds that we have obtained, we get

$$\mathbb{E}\left[ \left| Z_h^{ij}(q_0, p_0, Q_0, P_0) \right| \right] \le C_{21} \left( \|q_0\|^{9\alpha} + \|Q_0\|^{9\alpha} + \|p_0\|^{9\alpha} + \|q_0\|^{5\alpha^2} + \|P_0\|^{9\alpha} + \|Q_0\|^{5\alpha^2} + 1 \right).$$

Since we assume that (i) and (vii) hold, the result follows. This concludes the proof of these limits. □

Since we have shown the uniqueness of the process related to the generator L (Lemma 1) and the uniform bounds of Lemma 2, we are now able to prove Theorem 3.1.

Proof of Theorem 3.1. The first step is to prove that, for all R > 0 and all f in $C_c^\infty(\mathbb{R}^{2d})$, we have

$$\lim_{h\to 0}\ \sup_{\|x\|\le R}\ \left| \frac{1}{h}\, A_h f(x) - L f(x) \right| = 0. \qquad (35)$$

Then the convergence in law follows from the uniqueness of the process associated with L, by applying Corollary 4.2 of Chapter 7 in [8]. In order to obtain the convergence of $A_h$ to L on compact sets of $\mathbb{R}^{2d}$, we introduce the operators H and K defined by

$$H f(x, z) = \sum_{i=1}^{2d} (z^i - x^i)\, \frac{\partial f}{\partial x^i}(x) + \frac{1}{2} \sum_{i,j=1}^{2d} (z^i - x^i)(z^j - x^j)\, \frac{\partial^2 f}{\partial x^i \partial x^j}(x),$$

and

$$K f(x) = \int H f(x, z)\, \Pi_h(x, dz),$$

for all x, z in $\mathbb{R}^{2d}$ and all f in $C_c^\infty(\mathbb{R}^{2d})$. We can now note that, for all f in $C_c^\infty(\mathbb{R}^{2d})$ and all x in $\mathbb{R}^{2d}$,

$$\left| \frac{1}{h}\, A_h f(x) - L f(x) \right| \le \left| \frac{1}{h}\, A_h f(x) - \frac{1}{h}\, K f(x) \right| + \left| \frac{1}{h}\, K f(x) - L f(x) \right|. \qquad (36)$$

We start with the study of the first term on the right-hand side of (36). Using the definition of these operators, we get
$$\frac{1}{h} \left| A_h f(x) - K f(x) \right| = \frac{1}{h} \left| \mathbb{E}\big[ f(z) - f(x) - H f(x, z) \big] \right| \le \frac{1}{h}\, \mathbb{E}\big| f(z) - f(x) - H f(x, z) \big|.$$
Then, applying Taylor's Theorem, we deduce that there exists a constant $C_f$ such that for all $x, z$ in $\mathbb{R}^{2d}$,
$$\big| f(z) - f(x) - H f(x, z) \big| \le C_f\, \|z - x\|^3.$$
Finally, we end up with
$$\frac{1}{h} \left| A_h f(x) - K f(x) \right| \le C_f\, \frac{1}{h}\, \mathbb{E}\big[ \|z - x\|^3 \big].$$
Using (16), we obtain
$$\lim_{h \to 0}\; \sup_{\|x\| \le R} \frac{1}{h} \left| A_h f(x) - K f(x) \right| = 0.$$

Let us focus on the second term on the right-hand side of (36). Note that the operators $L$ and $\frac{1}{h} K$ are of the same form
$$\sum_i b^i\, \frac{\partial}{\partial x^i} + \frac{1}{2} \sum_{i,j} a^{i,j}\, \frac{\partial^2}{\partial x^i \partial x^j},$$
where the coefficients $a_h^{i,j}$ and $b_h^i$ of $\frac{1}{h} K$ are
$$a_h^{i,j}(x) = \frac{1}{h} \int (z^i - x^i)(z^j - x^j)\, \Pi_h(x, \mathrm{d}z), \qquad b_h^i(x) = \frac{1}{h} \int (z^i - x^i)\, \Pi_h(x, \mathrm{d}z),$$

and the $a^{i,j}$'s and the $b^i$'s of $L$ are given by Eq. (12). The limits (17)–(21) imply that
$$\lim_{h \to 0}\; \sup_{\|x\| \le R} \big| a_h^{i,j}(x) - a^{i,j}(x) \big| = 0, \qquad \lim_{h \to 0}\; \sup_{\|x\| \le R} \big| b_h^i(x) - b^i(x) \big| = 0.$$
We conclude that
$$\lim_{h \to 0}\; \sup_{\|x\| \le R} \left| \frac{1}{h} K f(x) - L f(x) \right| = 0,$$
and then (35) is proved.
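The coefficients $a_h^{i,j}$ and $b_h^i$ are one-step conditional moments of the transition kernel $\Pi_h$, rescaled by $1/h$, so in practice they can be estimated by Monte Carlo. The sketch below does this for a hypothetical Gaussian (Euler-type) kernel in dimension one; the drift `b`, the diffusion coefficient `s`, and the kernel itself are invented for illustration and are not the kernel of this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-step kernel Pi_h: z = x + h*b(x) + sqrt(h)*s(x)*xi, xi ~ N(0,1).
# Chosen only so that the rescaled moments have an obvious limit.
def b(x):
    return -x                      # toy drift

def s(x):
    return 1.0 + 0.5 * x**2       # toy diffusion coefficient (scalar, d = 1)

def sample_kernel(x, h, n):
    xi = rng.standard_normal(n)
    return x + h * b(x) + np.sqrt(h) * s(x) * xi

# Monte Carlo versions of  b_h(x) = (1/h) * integral of (z - x)   against Pi_h(x, dz)
# and                      a_h(x) = (1/h) * integral of (z - x)^2 against Pi_h(x, dz)
x, h, n = 0.7, 1e-3, 1_000_000
z = sample_kernel(x, h, n)
b_h = np.mean(z - x) / h
a_h = np.mean((z - x)**2) / h

print(b_h, "-> should be close to b(x) =", b(x))
print(a_h, "-> should be close to s(x)^2 =", s(x)**2)
```

As $h \to 0$ (and with enough samples), $b_h(x) \to b(x)$ and $a_h(x) \to s(x)^2$ for this toy kernel, mirroring the convergence of $a_h^{i,j}$ and $b_h^i$ to the coefficients of $L$ above.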


Thanks to [13, Lemma 11.2.1], the uniform convergence on compact sets of the generators, represented by (35), is equivalent to the following uniform convergences on compact sets:
$$\frac{1}{h} \int_{\|z - x\| \le 1} (z^i - x^i)(z^j - x^j)\, \Pi_h(x, \mathrm{d}z) \;\xrightarrow[h \to 0]{}\; a^{i,j}, \tag{37}$$
$$\frac{1}{h} \int_{\|z - x\| \le 1} (z^i - x^i)\, \Pi_h(x, \mathrm{d}z) \;\xrightarrow[h \to 0]{}\; b^i, \tag{38}$$
$$\frac{1}{h}\, \Pi_h\big(x, \mathbb{R}^{2d} \setminus B(x, \varepsilon)\big) \;\xrightarrow[h \to 0]{}\; 0, \tag{39}$$
for all $i, j$ and all $\varepsilon > 0$. Finally, by Corollary 4.2 in Chapter 7 of [8], Properties (37)–(39), together with the existence and uniqueness of the solution of the martingale problem related to $L$, give the convergence in law of $(X_t^h)_{t \in \mathbb{R}_+}$ to the solution $(X_t)_{t \in \mathbb{R}_+}$ of the stochastic differential equation (2). □

We are now able to deduce the limit evolution of the polynomial example previously described.

Example. Theorem 3.1 can be applied to the model in which the Hamiltonian is
$$H(q, p, Q, P) = \frac{p^2}{2} + 3(q^6 + 1) + \frac{P^2}{2} + 2(Q^8 + 1) + q^2 Q^2,$$
and the measure on the environment is
$$\mathrm{d}\mu = \frac{e^{-P^2/2 - 2(Q^8 + 1)}}{Z}\, \mathrm{d}P\, \mathrm{d}Q,$$
where $Z$ is a normalizing constant. If $m$ and $\sigma^2$ denote respectively the mean and the variance of $Q^2$ with respect to the measure $\mu$, we obtain a limit evolution given by the solution of the stochastic differential equation
$$\begin{cases} \mathrm{d}q_t = p_t\, \mathrm{d}t, \\ \mathrm{d}p_t = \big( -18 q_t^5 - 2m\, q_t \big)\, \mathrm{d}t - 2\sigma\, q_t\, \mathrm{d}W_t. \end{cases}$$

Acknowledgments

We would like to thank Stephan De Bièvre for his remarks, and more particularly one on a mistake in the first version of this paper.

References

[1] S. Attal, Markov chains and dynamical systems: the open system point of view, Commun. Stoch. Anal. 4 (2010).
[2] S. Attal, Y. Pautrat, From repeated to continuous quantum interactions, Ann. Henri Poincaré 7 (2006).
[3] R.F. Bass, Diffusions and Elliptic Operators, in: Probability and its Applications, Springer, 1997.
[4] L. Bruneau, S. De Bièvre, C.-A. Pillet, Scattering induced current in a tight-binding band, J. Math. Phys. 52 (2) (2011).
[5] L. Bruneau, C.-A. Pillet, Thermal relaxation of a QED cavity, J. Stat. Phys. 134 (5–6) (2009) 1071–1095.
[6] S. De Bièvre, P. Parris, Equilibration, generalized equirepartition and diffusion in dynamical Lorentz gases, J. Stat. Phys. 142 (2011) 356–385.
[7] J. Deschamps, Continuous limits of classical repeated interaction systems, Ann. Henri Poincaré (2012).
[8] S.N. Ethier, T.G. Kurtz, Markov Processes: Characterization and Convergence, in: Wiley Series in Probability and Mathematical Statistics, 1986.


[9] S. Haroche, S. Gleyzes, S. Kuhr, C. Guerlin, J. Bernu, S. Deléglise, U. Busk-Hoff, M. Brune, J.-M. Raimond, Quantum jumps of light recording the birth and death of a photon in a cavity, Nature 446 (2007) 297.
[10] S. Haroche, C. Sayrin, I. Dotsenko, X.X. Zhou, B. Peaudecerf, T. Rybarczyk, S. Gleyzes, P. Rouchon, M. Mirrahimi, H. Amini, M. Brune, J.-M. Raimond, Real-time quantum feedback prepares and stabilizes photon number states, Nature 477 (2011) 73.
[11] R.Z. Has'minskii, Stochastic Stability of Differential Equations, in: Stochastic Modelling and Applied Probability, vol. 66, Springer, 2010.
[12] N. Ikeda, S. Watanabe, Stochastic Differential Equations and Diffusion Processes, second ed., in: North-Holland Mathematical Library, 1989.
[13] D.W. Stroock, S.R.S. Varadhan, Multidimensional Diffusion Processes, in: Classics in Mathematics, Springer-Verlag, Berlin, 2006. Reprint of the 1997 edition.