Stochastic Processes and their Applications
www.elsevier.com/locate/spa
Superprocesses with interaction and immigration

Jie Xiong (a), Xu Yang (b,∗)

(a) Department of Mathematics, Faculty of Science and Technology, University of Macau, Macau, China
(b) School of Mathematics and Information Science, Beifang University of Nationalities, Yinchuan, China
Received 28 December 2015; received in revised form 16 April 2016; accepted 26 April 2016
Abstract

We construct a class of superprocesses with interactive branching, immigration mechanisms, and spatial motion. It arises as the limit of a sequence of interacting branching particle systems with immigration, which generalizes a result of Méléard and Roelly (1993) established for a superprocess with interactive spatial motion. The uniqueness in law of the superprocess is established under certain conditions using the pathwise uniqueness of an SPDE satisfied by its corresponding distribution function process. This generalizes the recent work of Mytnik and Xiong (2015), where the result for a super-Brownian motion with interactive immigration mechanisms was obtained. An extended Yamada–Watanabe argument is used in the proof of pathwise uniqueness.

© 2016 Elsevier B.V. All rights reserved.
MSC: primary 60J68; 60H15; secondary 60H05

Keywords: Superprocess; Interaction; Immigration; Pathwise uniqueness; Yamada–Watanabe argument
1. Introduction

Superprocesses, also called Dawson–Watanabe processes, are mathematical models from biology and physics and have been studied by many authors; see [1,7,17] and the references therein. They were introduced to describe the asymptotic behaviour of populations

∗ Corresponding author.
E-mail addresses: [email protected] (J. Xiong), [email protected] (X. Yang).
http://dx.doi.org/10.1016/j.spa.2016.04.032
0304-4149/© 2016 Elsevier B.V. All rights reserved.
undergoing random reproduction and spatial migration that are independent of the entire population. Typical examples of such models are biological populations in isolated regions, families of neutrons in nuclear reactions, cosmic ray showers, and so on. For the case where the reproduction and migration mechanisms depend on the entire population as well as on the individual's position (called a branching interacting particle system (BIPS)), it was first proved in [11] that a sequence of BIPSs converges to a superprocess with interactive spatial motion and binary branching mechanism as the branching rate tends to infinity and the mass of each particle tends to zero.

If we consider a situation where there are additional sources of population from which immigration occurs during the evolution, we need to consider superprocesses with immigration; see [6,7,10] and the references therein. It is natural to imagine such additional sources immigrating into a BIPS. Thus our first purpose here is to establish an existence theorem for such a superprocess with interactive branching, immigration mechanisms, and spatial motion (briefly denoted as SIBIM). Turning to the uniqueness in law of the superprocess, we generalize the work of Mytnik and Xiong [13], where the result for a super-Brownian motion with interactive immigration mechanisms was established. We show that the uniqueness of the solution to the martingale problem for a SIBIM holds under certain conditions, which is the second purpose of this work.

To continue with the introduction we present some notation. Given a topological space V, let ℬ(V) denote the Borel σ-algebra on V and let B(V) be the set of bounded measurable functions on V. Let T > 0 and let D([0, T]; V) denote the space of càdlàg paths from [0, T] to V furnished with the Skorokhod topology. For a metric space V̄ let P(V̄) be the family of Borel probability measures on V̄ equipped with the Prohorov metric.
Let B(R) be furnished with the supremum norm ∥·∥. We use C(R) to denote the subset of B(R) of bounded continuous functions. For any integer n ≥ 1 let Cⁿ(R) be the subset of C(R) of functions with bounded continuous derivatives up to the nth order. Let C₀ⁿ(R) denote the space of functions in Cⁿ(R) vanishing at infinity, and let C_cⁿ(R) be the subset of Cⁿ(R) of functions with compact support. We use the superscript "+" to denote the subsets of positive elements of these function spaces, e.g., B(R)⁺. For f, g ∈ B(R) write ⟨f, g⟩ = ∫_R f(x)g(x)dx whenever the integral exists. Let M(R) be the space of finite Borel measures on R equipped with the weak convergence topology. For µ ∈ M(R) and f ∈ B(R) write ⟨µ, f⟩ = ∫ f dµ. Let D(R) be the set of bounded right-continuous increasing functions f on R satisfying f(−∞) := lim_{x→−∞} f(x) = 0, and write f(∞) for lim_{x→∞} f(x) when f ∈ D(R). There is then a one-to-one correspondence between D(R) and M(R) assigning a measure to its distribution function. We endow D(R) with the topology induced by this correspondence from the weak convergence topology of M(R). Then for any M(R)-valued stochastic process {X_t : t ≥ 0}, its distribution function process {Y_t : t ≥ 0} is a D(R)-valued stochastic process. In this paper we always use C₀ to denote a positive constant whose value might change from line to line. Let ∇ and ∆ denote the first and second order spatial differential operators, respectively.

In the present paper we always assume that (A(µ))_{µ∈M(R)}, a family of generators of Feller semigroups on C(R), has the following properties:
• All of their domains contain a common vector space D which is independent of µ and dense in C₀(R).
• All constant functions belong to D, and A(µ)1 = 0 for all µ ∈ M(R).
• For each f ∈ D, there is a constant C so that ∥A(µ)f∥ ≤ C for all µ ∈ M(R).
• For each f ∈ D, the mapping µ → ⟨µ, A(µ)f⟩ is continuous.
Suppose that Φ and Ψ are respectively general interactive branching and immigration mechanisms defined by

 Φ(µ, x, z) = b(µ, x)z + (1/2)c(µ, x)z² + ∫₀^∞ [e^{−zu} − 1 + zu] m(µ, x, du),
 Ψ(µ, x, z) = q(µ, x)z + ∫₀^∞ (1 − e^{−zu}) n(µ, x, du),  µ ∈ M(R), x ∈ R, z ≥ 0,

where b ∈ B(M(R) × R), c, q ∈ B(M(R) × R)⁺, and both (u ∧ u²)m(µ, x, du) and u n(µ, x, du) are bounded kernels from M(R) × R to R⁺ (see e.g. [7, p. 303]). Let L ∈ M(R) and η(µ, dx) = q(µ, x)L(dx). Let D̄ be the class of functions on M(R) of the form F(µ) = G(⟨µ, f₁⟩, …, ⟨µ, f_n⟩) with G ∈ C²(Rⁿ) and {f₁, …, f_n} ⊂ D. For each F ∈ D̄ define LF(µ) = AF(µ) + BF(µ), where

 AF(µ) = ∫_R A(µ)F′(µ; x) µ(dx),
 BF(µ) = −∫_R b(µ, x)F′(µ; x) µ(dx) + (1/2) ∫_R c(µ, x)F′′(µ; x) µ(dx)
  + ∫_R µ(dx) ∫₀^∞ [F(µ + uδ_x) − F(µ) − uF′(µ; x)] m(µ, x, du)
  + ∫_R F′(µ; x) q(µ, x) L(dx)
  + ∫_R L(dx) ∫₀^∞ [F(µ + uδ_x) − F(µ)] n(µ, x, du),

where F′(µ; x) = lim_{ε→0+} [F(µ + εδ_x) − F(µ)]/ε and F′′(µ; x) is defined by the same limit with F(·) replaced by F′(·; x). For an M(R)-valued càdlàg process {X_t : t ≥ 0}, we say it is a solution of the martingale problem P(A, Φ, Ψ) if for each F ∈ D̄,

 F(X_t) − F(X_0) − ∫₀^t LF(X_s) ds
is a local martingale. In this case we also say that {X_t : t ≥ 0} is an (A, Φ, Ψ)-SIBIM.

Our first main result in this paper, Theorem 3.3, asserts that a SIBIM arises as the limit of a sequence of BIPSs with immigration (briefly denoted as BIPSIs) with high branching and immigration rates and small mass of each particle. Our second main result, Theorem 5.1, is that under certain conditions on A, Φ and Ψ, the uniqueness of the solution to the martingale problem P(A, Φ, Ψ) holds. Consequently, the martingale problem P(A, Φ, Ψ) is well-posed and the SIBIM is a Markov process.

To establish the uniqueness, we consider the distribution function process {Y_t : t ≥ 0} of the SIBIM {X_t : t ≥ 0}. We prove that {X_t : t ≥ 0} is a SIBIM if and only if there are, on an extension of the original probability space, a Gaussian white noise {W(ds, du) : s ≥ 0, u > 0} and a compensated Poisson random measure {Ñ₀(ds, dz, du) : s ≥ 0, z > 0, u > 0} so that {Y_t : t ≥ 0}, defined by Y_t(x) = X_t(−∞, x], solves the following stochastic partial differential equation (SPDE):

 Y_t(x) = Y_0(x) + ∫₀^t ∫₀^{Y_{s−}(x)} c₀(u) W(ds, du) + ∫₀^t ∫₀^∞ ∫₀^{Y_{s−}(x)} z Ñ₀(ds, dz, du)
  + ∫₀^t A₀*(X_s)Y_s(x) ds + ∫₀^t [η₀(X_s, x) − b₀(X_s, x)] ds,
where (A₀(µ))_{µ∈M(R)} is a family of operators, b₀ and η₀ are two functions defined in Condition 2.2 below, and A₀*(µ) denotes the dual operator of A₀(µ) for µ ∈ M(R). The above equation is a purely formal SPDE. More rigorously, it should be understood in the following weak sense: for every f ∈ D,

 ⟨Y_t, f⟩ = ⟨Y_0, f⟩ + ∫_R f(x)dx ∫₀^t ∫₀^{Y_{s−}(x)} c₀(u) W(ds, du)
  + ∫_R f(x)dx ∫₀^t ∫₀^∞ ∫₀^{Y_{s−}(x)} z Ñ₀(ds, dz, du)
  + ∫₀^t ⟨Y_s, A₀(X_s) f⟩ ds + ∫₀^t ⟨η₀(X_s) − b₀(X_s), f⟩ ds.  (1.1)
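The drift term in (1.1) passes from the formal equation to the weak form through the duality defining A₀*. The following is a sketch of this step in the model case A₀(µ) ≡ ∆/2 (the setting of Mytnik and Xiong [13]), assuming f ∈ C_c²(R); it is illustrative only, as the general case simply invokes the definition of the dual operator:

```latex
% Duality behind the weak form (1.1), sketched for A_0(\mu) = \tfrac12\Delta
% and f compactly supported. Two integrations by parts (no boundary terms)
% move the operator from Y_s onto the test function f:
\int_{\mathbb{R}} f(x)\,\tfrac12\Delta Y_s(x)\,\mathrm{d}x
  = -\tfrac12\int_{\mathbb{R}} f'(x)\,Y_s'(x)\,\mathrm{d}x
  = \tfrac12\int_{\mathbb{R}} f''(x)\,Y_s(x)\,\mathrm{d}x
  = \langle Y_s,\, A_0(X_s) f\rangle ,
% which is the drift term appearing in (1.1). For general A_0(\mu) one
% replaces this computation by the defining relation of A_0^*(\mu).
```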
Our work is strongly influenced by Mytnik and Xiong [13], where the special case A(µ) = ∆/2 and b(µ, x) = m(µ, x, (0, ∞)) = 0 for all µ ∈ M(R) and x ∈ R was considered. We mention that the pathwise uniqueness for some similar stochastic equations for distribution functions of superprocesses was first established in [2]. The distribution function processes of a super-Brownian motion and a Fleming–Viot process, of a generalized Fleming–Viot process over the real line with Brownian mutation, and of a super-Lévy process with general branching mechanism were considered in [16,8,5], respectively. The main difficulty in showing pathwise uniqueness of the SPDEs corresponding to these distribution function processes is the unboundedness of the operators involved. The key to overcoming this problem in the cited works is the transformation of those SPDEs into backward doubly stochastic equations, together with the fact that the mutation generator and the spatial motion are independent of the Fleming–Viot processes and superprocesses, respectively. This method therefore fails for proving the pathwise uniqueness of (1.1). To solve this problem, we establish in this paper another kind of extension of the Yamada–Watanabe argument. This method, based on integration by parts and Taylor's formula, is simple and applies to a class of SPDEs driven by Gaussian white noises and Poisson random measures with non-Lipschitz coefficients. In particular, the derivative of the solution of (1.1) with respect to the spatial variable is not needed. We believe this is of independent interest.

The remainder of this paper is organized as follows. In Section 2, we establish the equivalence between a SIBIM and its related SPDE. In Section 3, we show that a SIBIM is the limit process of a sequence of BIPSIs with high branching and immigration rates and small mass of each particle.
A criterion for the pathwise uniqueness of a general SPDE driven by a Gaussian white noise and Poisson random measures with non-Lipschitz coefficients is studied in Section 4. The uniqueness of the SIBIM is given in Section 5. An auxiliary proposition is presented in the Appendix.

2. SIBIM and its related SPDE

Let S(R) denote the space of finite Borel signed measures on R endowed with the σ-algebra generated by the mappings µ → µ(B) for all B ∈ ℬ(R). Put S(R)° = S(R) \ {0}. In this section we establish the relationship between a SIBIM and the SPDE satisfied by its corresponding distribution function process. To this end, we need the following theorem, which is analogous to [7, Theorem 9.18].

Theorem 2.1. Suppose that {X_t : t ≥ 0} is an (A, Φ, Ψ)-SIBIM. Define an optional random measure N(ds, dν) on [0, ∞) × S(R)° by

 N(ds, dν) = Σ_{s>0} 1_{{∆X_s ≠ 0}} δ_{(s, ∆X_s)}(ds, dν),  (2.1)
where ∆X_s = X_s − X_{s−}. Let N̂(ds, dν) be the predictable compensator of N(ds, dν) and let Ñ(ds, dν) denote the corresponding compensated random measure. Then {X_t : t ≥ 0} has no negative jumps and N̂(ds, dν) = ds K(X_s, dν) + ds H(X_s, dν), with K(µ, dν) and H(µ, dν) given by

 ∫_{M(R)°} F(ν) K(µ, dν) = ∫_R µ(dx) ∫₀^∞ F(uδ_x) m(µ, x, du),
 ∫_{M(R)°} F(ν) H(µ, dν) = ∫_R L(dx) ∫₀^∞ F(uδ_x) n(µ, x, du).
Moreover, for each f ∈ D⁺,

 ⟨X_t, f⟩ = ⟨X_0, f⟩ + ∫₀^t [⟨X_s, (A(X_s) − b(X_s)) f⟩ + Γ(X_s, f)] ds + M_t^c(f) + M_t^d(f),  (2.2)

where

 Γ(µ, f) := ⟨η(µ), f⟩ + ∫_R L(dx) ∫₀^∞ f(x) u n(µ, x, du),

{M_t^c(f) : t ≥ 0} is a continuous martingale with quadratic variation ∫₀^t ⟨X_s, c(X_s) f²⟩ ds, and

 M_t^d(f) = ∫₀^t ∫_{M(R)°} ν(f) Ñ(ds, dν)

is a purely discontinuous martingale.

Proof. Since the proof is quite similar to those of [7, Theorem 7.13] and [4, Theorem 2.3], we omit it here.

Condition 2.2. (i) Suppose that D ⊂ C¹(R). For each µ ∈ M(R) and f ∈ D, the mappings x → b(µ, x) and x → A(µ)f′(x) are integrable on R. Let A₀(µ) be the operator determined by A₀(µ)f′(x) = (A(µ)f(x))′ for each µ ∈ M(R) and f ∈ C_c¹(R).
(ii) Define the two functions b₀(µ, ·) and η₀(µ, ·) by b₀(µ, x) := ⟨b(µ), 1_{(−∞,x]}⟩ and η₀(µ, x) := η(µ, (−∞, x]). There exist a function c₀ ∈ B(R)⁺ and a kernel m₀(v, du) on R⁺ with (u ∧ u²)m₀(v, du) bounded so that for all µ ∈ M(R) and x ∈ R,

 c(µ, x) = c₀(µ(−∞, x])²,  m(µ, x, du) = m₀(µ(−∞, x], du),  n(µ, x, du) ≡ 0.

Finally, we state the main result of this section. Since the proof is essentially similar to that of [5, Theorem 3.1], we only present the result and omit the proof here.

Theorem 2.3. Suppose that Condition 2.2 holds. Then a D(R)-valued càdlàg stochastic process {Y_t : t ≥ 0} is the distribution function process of an (A, Φ, Ψ)-SIBIM {X_t : t ≥ 0} if and only if there exist, on an enlarged probability space, a Gaussian white noise {W(ds, du) : s ≥ 0, u > 0}
with intensity ds du and a compensated Poisson random measure {Ñ₀(ds, dz, du) : s ≥ 0, z > 0, u > 0} with intensity ds m₀(u, dz) du so that {Y_t : t ≥ 0} solves Eq. (1.1).

3. Particle systems and superprocesses

Inspired by the work of [6,11], in this section we construct a class of SIBIMs arising as the limit of a sequence of BIPSIs. We first introduce the BIPSI. Let λ ∈ B(M(R) × R)⁺. Let g, h ∈ B(M(R) × R × [−1, 1]) be such that g(µ, x, ·) and h(µ, x, ·) are probability generating functions for each µ ∈ M(R) and x ∈ R. In this section we consider a BIPSI with parameters (A, λ, g, L, h) (briefly denoted as (A, λ, g, L, h)-BIPSI), which is a measure-valued process {X_t : t ≥ 0} (where X_t(B) denotes the number of particles in B ∈ ℬ(R) that are alive at time t ≥ 0), characterized by the following properties:
• X_0, a finite measure on R, describes the initial configuration of the system.
• Each particle moves according to the law of a non-homogeneous Feller process ξ whose generator A(X_t) may depend on the state of the system at time t. All particles are conditionally independent given the state of the system through X_t.
• For a particle which is alive at time r ≥ 0 and follows the path {ξ_t : t ≥ r}, the conditional probability of survival during the time interval [r, t) is exp{−∫_r^t λ(X_s, ξ_s) ds}.
• When a particle following the path {ξ_s : s ≥ r} dies at time t > r, it gives birth to a random number of offspring according to the probability distribution p_k(X_t, x) given by the generating function g(X_t, x, ·), depending on the location x and the state of the system through X_t. The offspring then start to move from their common birth site.
• The entry times and locations of new immigrating particles are governed by a Poisson random measure with intensity dt L(dx).
• The number of new particles entering at time t and location x follows the probability distribution q_k(X_t, x) given by the generating function h(X_t, x, ·), depending on the location x and the state of the system through X_t.

This kind of interaction in infinite particle systems is a mean-field interaction, in the sense that the range of the interaction is infinite: all particles, at every time, have the same influence on a fixed particle. In the non-interacting case, branching-diffusion processes are Markov processes and are characterized by their generator on a sufficiently large class of functions (such as cylindrical functions) or by the martingale problems they solve. More precisely, applying Itô's formula to each particle alive, it is easy to obtain the generator of the point-measure-valued Markov process. In the interacting case, one can define in the same way the generator L₀ of the interacting branching particle system with immigration considered as a point-measure-valued process: for each F ∈ C²(R), f ∈ D and µ ∈ M(R), if F_f(µ) = F(⟨µ, f⟩) is a cylindrical function, then L₀F_f(µ) = A₀F_f(µ) + B₀F_f(µ) with

 A₀F_f(µ) := F′(⟨µ, f⟩)⟨µ, A(µ)f⟩ + (1/2) F′′(⟨µ, f⟩)⟨µ, A(µ)f² − 2f A(µ)f⟩,
 B₀F_f(µ) := ∫_R λ(µ, x) Σ_{k=0}^∞ p_k(µ, x) [F(⟨µ, f⟩ + (k − 1)f(x)) − F(⟨µ, f⟩)] µ(dx)
  + ∫_R Σ_{k=0}^∞ q_k(µ, x) [F(⟨µ, f⟩ + k f(x)) − F(⟨µ, f⟩)] L(dx).
That is, the process {X_t : t ≥ 0} is a càdlàg M(R)-valued process such that for each f ∈ D,

 F_f(X_t) − F_f(X_0) − ∫₀^t L₀F_f(X_s) ds

is a local martingale.

We now consider a sequence of (A, λ_n, g_n, α_n L, h_n)-BIPSIs. Let p^{(n)} denote the reproduction law of the particles and q^{(n)} the law of the immigration numbers. That is, for each µ ∈ M(R) and x ∈ R,

 g_n(µ, x, z) = Σ_{k=0}^∞ p_k^{(n)}(µ, x) z^k and h_n(µ, x, z) = Σ_{k=0}^∞ q_k^{(n)}(µ, x) z^k,  z ∈ [−1, 1].
Let P_n, a probability measure on D([0, T]; M(R)), denote the law of the process

 X_t = ε_n Σ_{i∈I_t^n} δ_{x_t^{i,n}},  ε_n > 0,

where I_t^n is the set of particles alive at time t and x_t^{i,n} is the location of the ith particle at time t. Then, by the argument in the last paragraph, the martingale problem P(A, λ_n, g_n, α_n L, h_n, ε_n) holds: for each F ∈ C²(R) and f ∈ D,

 F_f(X_t) − F_f(X_0) − ∫₀^t L_n F_f(X_s) ds  (3.1)

is a P_n-local martingale, where L_n F_f(µ) := A_n F_f(µ) + B_n F_f(µ) with

 A_n F_f(µ) := F′(⟨µ, f⟩)⟨µ, A(µ)f⟩ + (ε_n/2) F′′(⟨µ, f⟩)⟨µ, A(µ)f² − 2f A(µ)f⟩,
 B_n F_f(µ) := ε_n^{−1} ∫_R λ_n(µ, x) Σ_{k=0}^∞ p_k^{(n)}(µ, x) [F(⟨µ, f⟩ + ε_n(k − 1)f(x)) − F(⟨µ, f⟩)] µ(dx)
  + α_n ∫_R Σ_{k=0}^∞ q_k^{(n)}(µ, x) [F(⟨µ, f⟩ + ε_n k f(x)) − F(⟨µ, f⟩)] L(dx).
Condition 3.1. For µ ∈ M(R), x ∈ R, z ≥ 0 and n ≥ 1 define

 Φ_n(µ, x, z) = ε_n^{−1} λ_n(µ, x) [g_n(µ, x, 1 − zε_n) − (1 − zε_n)],
 Ψ_n(µ, x, z) = α_n [1 − h_n(µ, x, 1 − zε_n)].

As n → ∞ the following assertions hold:
(a) The sequences α_n and inf_{µ∈M(R)} λ_n(µ, x) tend to infinity for each x ∈ R.
(b) The sequence ε_n tends to zero and (ε_n α_n)_{n≥1} is bounded.
(c) There is a constant C so that for all µ ∈ M(R), x ∈ R and n ≥ 1,

 |∂Φ_n(µ, x, 0+)/∂z| + |∂Ψ_n(µ, x, 0+)/∂z| ≤ C.

(d) The sequences Φ_n(µ, x, z) and Ψ_n(µ, x, z) converge to Φ(µ, x, z) and Ψ(µ, x, z), respectively, uniformly on M(R) × R × [0, h] for each h ≥ 0.
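As an illustrative example of this scaling (a standard choice, not taken verbatim from the paper): take critical binary branching g_n(µ, x, z) = (1 + z²)/2 with death rate λ_n ≡ n and mass ε_n = 1/n, and deterministic single-particle immigration h_n(µ, x, z) = z with rate α_n = n. Then:

```latex
% Critical binary branching: g_n(z) = (1+z^2)/2, \lambda_n \equiv n,
% \varepsilon_n = 1/n, so \varepsilon_n^{-1}\lambda_n = n^2 and
\Phi_n(\mu,x,z)
  = n^2\Big[\tfrac{1+(1-z/n)^2}{2} - \big(1-\tfrac{z}{n}\big)\Big]
  = \tfrac{n^2}{2}\Big[\big(1-\tfrac{z}{n}\big) - 1\Big]^2
  = \tfrac{z^2}{2},
% i.e. \Phi_n \equiv z^2/2 for every n: the classical binary mechanism.
% For the immigration part with h_n(z) = z and \alpha_n = n,
\Psi_n(\mu,x,z) = n\Big[1 - \big(1-\tfrac{z}{n}\big)\Big] = z ,
% so \Psi(\mu,x,z) = z, while \varepsilon_n\alpha_n = 1 stays bounded,
% as required by Condition 3.1(b).
```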
Recall that η(µ, dx) = q(µ, x)L(dx).

Condition 3.2. For each µ ∈ M(R), define bounded measures

 b̃_µ(dx) = b(µ, x)µ(dx),  c̃_µ(dx) = c(µ, x)µ(dx)

and bounded kernels

 m̃_µ(dx, du) = m(µ, x, du)µ(dx),  ñ_µ(dx, du) = n(µ, x, du)L(dx).

The mappings

 µ → b̃_µ,  µ → c̃_µ,  µ → η(µ, ·),  µ → (u ∧ u²) m̃_µ(·, du),  µ → u ñ_µ(·, du)
are continuous.

In the following theorem we establish an approximation of the SIBIM by BIPSIs with high branching and immigration rates λ_n and α_n (tending to infinity) and small mass ε_n (tending to zero) of each particle.

Theorem 3.3. Suppose that Conditions 3.1–3.2 are satisfied and that P_n is a solution of the martingale problem P(A, λ_n, g_n, α_n L, h_n, ε_n). Then the following two assertions hold:
(i) If the sequence of the laws of X_0 under P_n is tight in P(M(R)), then the sequence (P_n)_{n≥1} is tight in the space P(D([0, T]; M(R))).
(ii) If P is a limit point of (P_n)_{n≥1}, then P is a solution of the martingale problem P(A, Φ, Ψ).

Remark 3.4. If m(µ, x, (0, ∞)) = 0 and Ψ(µ, x, z) = 0 for all µ ∈ M(R) and x ∈ R, Theorem 3.3 was established in [11, Theorem 1].

Proof of Theorem 3.3. (i) The proof is a modification of that of [11, Theorem 1(1)]. We give the details here. By [15, Theorem 2.1], (P_n)_{n≥1} is tight if and only if its projections are tight, that is, if for a dense sequence (f_k)_{k≥0} of C₀(R) with f₀ = 1, the laws of {⟨X_t, f_k⟩ : t ∈ [0, T]} under P_n are tight in D([0, T]; R) for each fixed k. So in the following let f ∈ D be fixed; we show that the laws of {⟨X_t, f⟩ : t ∈ [0, T]} under P_n are tight in D([0, T]; R). It is easy to see that

 sup_{t∈[0,T]} |L_n F_f(X_t)| ≤ C₀ sup_{t∈[0,T]} [ ⟨X_t, 1⟩ + ∫_R λ_n(X_t, x) Σ_{k=0}^∞ (k − 1) p_k^{(n)}(X_t, x) X_t(dx) + α_n ε_n ∫_R Σ_{k=0}^∞ k q_k^{(n)}(X_t, x) L(dx) ].
By Proposition A.1 and Condition 3.1(c),

 sup_{n≥1} E_n [ sup_{t∈[0,T]} ⟨X_t, 1⟩ ] < ∞,

where E_n is the expectation with respect to P_n. Thus

 sup_{n≥1} E_n [ sup_{t∈[0,T]} |L_n F_f(X_t)| ] < ∞,

and for each F ∈ C²(R),

 F(⟨X_t, f⟩) − F(⟨X_0, f⟩) − ∫₀^t L_n F_f(X_s) ds  (3.2)
is a P_n-martingale by [14, Theorem 51]. It then follows from [3, Theorems 3.2.2, 3.9.1 and 3.9.4] that the laws of {⟨X_t, f⟩ : t ∈ [0, T]} under P_n are tight.

(ii) The proof is a modification of that of [4, Lemma 3.2] or [11, Theorem 1(2)]. Let P be a limit point of (P_n)_{n≥1}. By Skorokhod's representation, we may assume that the càdlàg processes {X_t^{(n)} : t ∈ [0, T]} and {X_t : t ∈ [0, T]}, with distributions P_n and P respectively, are defined on the same probability space (Ω, F, P), and that the sequence {X_t^{(n)} : t ∈ [0, T]} converges almost surely to {X_t : t ∈ [0, T]} in D([0, T]; M(R)); see [3, p. 102]. Define D(X) = {t ∈ [0, T] : P{X_t = X_{t−}} = 1}. Then the complement in [0, T] of D(X) is at most countable by [3, Lemma 3.7.7]. It follows from [3, Lemma 3.5.2] that for each t ∈ D(X) we have lim_{n→∞} X_t^{(n)} = X_t almost surely. We shall show that for each f ∈ D⁺,

 exp{−⟨X_t, f⟩} − exp{−⟨X_0, f⟩} − ∫₀^t L exp{−⟨X_s, f⟩} ds  (3.3)
is a P-martingale. By (3.2), we know that (3.3) is a P_n-martingale with L replaced by L_n, where L_n exp{−⟨µ_n, f⟩} := A_n exp{−⟨µ_n, f⟩} + B_n exp{−⟨µ_n, f⟩} with

 A_n exp{−⟨µ_n, f⟩} = exp{−⟨µ_n, f⟩} [ −⟨µ_n, A(µ_n)f⟩ + (ε_n/2) ⟨µ_n, A(µ_n)f² − 2f A(µ_n)f⟩ ]

and

 B_n exp{−⟨µ_n, f⟩} = ε_n^{−1} ∫_R λ_n(µ_n, x) Σ_{k=0}^∞ p_k^{(n)}(µ_n, x) exp{−⟨µ_n, f⟩} [exp{−ε_n(k − 1)f(x)} − 1] µ_n(dx)
  + α_n ∫_R Σ_{k=0}^∞ q_k^{(n)}(µ_n, x) exp{−⟨µ_n, f⟩} [exp{−ε_n k f(x)} − 1] L(dx)
 = exp{−⟨µ_n, f⟩} ∫_R e^{ε_n f(x)} Φ_n(µ_n, x, ε_n^{−1}(1 − e^{−ε_n f(x)})) µ_n(dx)
  − exp{−⟨µ_n, f⟩} ∫_R Ψ_n(µ_n, x, ε_n^{−1}(1 − e^{−ε_n f(x)})) L(dx).
By the continuity condition on the operator A(µ) and Conditions 3.1(d) and 3.2, we get that for each t ∈ D(X),

 lim_{n→∞} L_n exp{−⟨X_t^{(n)}, f⟩} = lim_{n→∞} A_n exp{−⟨X_t^{(n)}, f⟩} + lim_{n→∞} B_n exp{−⟨X_t^{(n)}, f⟩}
 = exp{−⟨X_t, f⟩} [ −⟨X_t, A(X_t)f⟩ + ⟨X_t, Φ(X_t, ·, f(·))⟩ − ⟨L, Ψ(X_t, ·, f(·))⟩ ]
 = L exp{−⟨X_t, f⟩}

almost surely. Then, by the same argument as in Step 1 of the proof of [4, Lemma 3.2], (3.3) is a martingale. This implies that for each θ > 0,

 M(t, θ) := exp{−θ⟨X_t, f⟩} − exp{−θ⟨X_0, f⟩} − ∫₀^t L exp{−θ⟨X_s, f⟩} ds
is a P-martingale. Observe that

 0 ≤ M(t, θ)/θ ≤ ⟨X_t, f⟩ + ⟨X_0, f⟩ + ∫₀^t [⟨X_s, (A(X_s) − b(X_s))f⟩ + Γ(X_s, f)] ds.

Then

 E[ lim_{θ→0+} M(t, θ)/θ | σ(X_u : u ≤ s) ] = 0

by dominated convergence, and

 lim_{θ→0+} M(t, θ)/θ = ⟨X_t, f⟩ − ⟨X_0, f⟩ − ∫₀^t [⟨X_s, (A(X_s) − b(X_s))f⟩ + Γ(X_s, f)] ds.

Thus

 ⟨X_t, f⟩ − ⟨X_0, f⟩ − ∫₀^t [⟨X_s, (A(X_s) − b(X_s))f⟩ + Γ(X_s, f)] ds
is a P-martingale. Now, by the same argument as in Theorem 2.1, we get Eq. (2.2). Then, by Itô's formula and the same argument as in the proof of [7, Theorem 7.13] or that of [4, Lemma 3.2], one finishes the proof.

4. Pathwise uniqueness for the SPDE

Suppose that E and E_i (i = 0, 1) are Polish spaces, and that π and π_i are σ-finite Borel measures on E and E_i (i = 0, 1), respectively. For i = 0, 1 let H_i ∈ B(R⁺ × R⁺ × E_i). Let G ∈ B(R⁺ × R⁺ × E), β₀, ζ, B ∈ B(M(R) × R) and β₁ ∈ B(M(R)). Let ν₁(µ, dz) be a bounded kernel from M(R) to Ř := {z : |z| > 1}, and let z²ν₂(dz) be a finite Borel measure on R̂ := {z : 0 < |z| ≤ 1}. For each x ∈ R and µ ∈ M(R) define

 ν(µ, dz) = 1_{Ř}(z) ν₁(µ, dz) + 1_{R̂}(z) ν₂(dz).

In this section we always assume that β₂ ≥ 0 and that the operator 𝒜(µ) is defined by

 𝒜(µ)f(x) = β₀(µ, x)f(x) + β₁(µ)f′(x) + β₂ f′′(x) + ∫_{R°} [f(x + z) − f(x) − f′(x)z 1_{R̂}(z)] ν(µ, dz),

where R° := R \ {0} and f ∈ C_c²(R). Let ∇_x and ∆_x denote the first and second order spatial differential operators with respect to the variable x. Let (Ω, F, P) be a complete probability space furnished with a filtration {F_t : t ≥ 0} satisfying the usual conditions. Let {W(dt, du) : t ≥ 0, u ∈ E} denote a Gaussian white noise with intensity dt π(du). For each i = 0, 1 let {N_i(dt, du) : t ≥ 0, u ∈ E_i} be a Poisson random measure with intensity dt π_i(du). Suppose that {N₀(dt, du)} and {N₁(dt, du)} are independent of each other, and let {Ñ_i(dt, du)} denote the compensated measure of {N_i(dt, du)}. In this section we consider the following SPDE, which is a general form of (1.1):

 Y_t(x) = Y_0(x) + ∫₀^t ∫_E G(Y_{s−}(x), s, u) W(ds, du)
  + ∫₀^t ∫_{E₀} H₀(Y_{s−}(x), s, u) Ñ₀(ds, du) + ∫₀^t ∫_{E₁} H₁(Y_{s−}(x), s, u) N₁(ds, du)
  + ∫₀^t 𝒜(X_s)* Y_s(x) ds + ∫₀^t [B(X_s, x) + ζ(X_s, x)] ds,  (4.1)
where Y_s ∈ D(R) and X_s(−∞, x] = Y_s(x) for each s ≥ 0, and 𝒜(µ)* is the dual operator of 𝒜(µ) for each µ ∈ M(R). This equation should be understood as follows: for each f ∈ C_c²(R),

 ⟨Y_t, f⟩ = ⟨Y_0, f⟩ + ∫_R f(x)dx ∫₀^t ∫_E G(Y_{s−}(x), s, u) W(ds, du)
  + ∫_R f(x)dx ∫₀^t ∫_{E₀} H₀(Y_{s−}(x), s, u) Ñ₀(ds, du)
  + ∫_R f(x)dx ∫₀^t ∫_{E₁} H₁(Y_{s−}(x), s, u) N₁(ds, du)
  + ∫₀^t ⟨Y_s, 𝒜(X_s) f⟩ ds + ∫₀^t ⟨B(X_s, ·) + ζ(X_s, ·), f⟩ ds.
Condition 4.1. (i) There exist a constant C and a finite Borel measure ν₁₁(dz) on Ř so that for each x ∈ R, f ∈ B(Ř)⁺ and µ′, µ′′ ∈ M(R),

 |ζ(µ′, x)| + |β₁(µ′)| + ν₁(µ′, Ř) ≤ C,
 |B(µ′, x) − B(µ′′, x)| ≤ C |µ′(−∞, x] − µ′′(−∞, x]|,
 |ζ(µ′, x) − ζ(µ′′, x)| + |β₀(µ′, x) − β₀(µ′′, x)| + |β₁(µ′) − β₁(µ′′)| ≤ C ρ(µ′, µ′′),
 | ∫_Ř f(z) ν₁(µ′, dz) − ∫_Ř f(z) ν₁(µ′′, dz) | ≤ ρ(µ′, µ′′) ∫_Ř f(z) ν₁₁(dz),

where ρ is a distance on M(R) defined by

 ρ(µ′, µ′′) = ∫_R e^{−|x|} |µ′(−∞, x] − µ′′(−∞, x]| dx.

(ii) The mapping x → x + H₀(x, s, u) is non-decreasing for each fixed s ≥ 0 and u ∈ E₀, and there is a constant C > 0 so that

 ∫_E |G(x, s, u) − G(y, s, u)|² π(du) + ∫_{E₀} |H₀(x, s, u) − H₀(y, s, u)|² π₀(du) + ∫_{E₁} |H₁(x, s, u) − H₁(y, s, u)| π₁(du) ≤ C |x − y|

for all x, y ∈ R and s ≥ 0.

Now we state the main result of this section.
Theorem 4.2. Suppose that Condition 4.1 is satisfied and that {Y_t^{(1)} : t ≥ 0} and {Y_t^{(2)} : t ≥ 0} are two càdlàg D(R)-valued solutions of (4.1) with Y_0^{(1)} = Y_0^{(2)}. Then

 P{ Y_t^{(1)}(x) = Y_t^{(2)}(x) for all x ∈ R and t ≥ 0 } = 1.
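The square-root-type modulus |x − y| in Condition 4.1(ii) is exactly what indicator-type coefficients produce. As an illustrative computation (taking G(x, s, u) = 1_{u≤x} as in Remark 4.3 below, and assuming for this sketch that π(du) is Lebesgue measure on E = (0, ∞)):

```latex
% Illustrative check of the white-noise part of Condition 4.1(ii) for the
% indicator coefficient G(x,s,u) = \mathbf{1}_{\{u\le x\}}, assuming
% \pi(\mathrm{d}u) = \mathrm{d}u on E = (0,\infty). For 0 \le y \le x,
\int_0^\infty \big|\mathbf{1}_{\{u\le x\}}-\mathbf{1}_{\{u\le y\}}\big|^2
  \,\mathrm{d}u
  = \int_0^\infty \mathbf{1}_{\{y<u\le x\}}\,\mathrm{d}u
  = x-y = |x-y| ,
% so the condition holds with C = 1, although x \mapsto
% \mathbf{1}_{\{u\le x\}} is not Lipschitz in any pointwise sense; this
% is the interval representation of the branching noise behind (1.1).
```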
Remark 4.3. (i) Suppose that the following hold: (a) 𝒜(µ) ≡ ∆/2; (b) H₀(x, s, v) = H₁(x, s, v) ≡ 0 and B(µ, x) = ζ(µ, x) ≡ 0. Then Theorem 4.2 was established in [16, Theorem 1.2].

(ii) Suppose that the following hold: (a) 𝒜(µ) ≡ ∆/2; (b) E₀ = ∆₀ × [0, 1]^N and B(µ, x) = ζ(µ, x) ≡ 0; (c) G(x, s, u) = 1_{u≤x} − x with E = (0, 1], H₀(x, s, z, v) = Σ_{i=1}^∞ (z_i 1_{v_i≤x} − z_i x), H₁(x, s, z, v) ≡ 0 and π₀(dz, dv) = (Σ_{i=1}^∞ z_i²)^{−1} Ξ₀(dz)(dv)^{⊗N}, where Ξ₀(dz) is a
nonzero finite measure on the infinite simplex ∆₀ defined by ∆₀ = {x = (x₁, x₂, …) : x₁ ≥ x₂ ≥ ⋯ ≥ 0, Σ_{i=1}^∞ x_i ≤ 1}. Then Theorem 4.2 was established in [8, Theorem 5.1].

(iii) Suppose that the following hold: (a) β₀(µ, x) ≡ 0, and β₁(µ) and ν₁(µ, dz) are independent of µ ∈ M(R); (b) E₀ = E₁ = R° × (0, ∞); (c) G(x, s, u) = 1_{u≤x}, H₀(x, s, z, v) = H₁(x, s, z, v) = 1_{v≤x} z, π₀(dz, dv) = 1_{R̂}(z)m(dz)dv and π₁(dz, dv) = 1_{Ř}(z)m(dz)dv, where (z ∧ z²)m(dz) is a finite measure on (0, ∞); (d) ζ(µ, x) ≡ 0 and B(µ, x) = −bx − x ∫₁^∞ z m(dz) for some constant b. Then Theorem 4.2 was established in [5, Theorem 6.12].

For k ≥ 1 define a stopping time τ_k by

 τ_k = inf{ t ∈ [0, T] : Y_t^{(1)}(∞) + Y_t^{(2)}(∞) > k }

with the convention inf ∅ = ∞. Since {Y_t^{(1)} : t ≥ 0} and {Y_t^{(2)} : t ≥ 0} are càdlàg D(R)-valued processes, we have

 sup_{t∈[0,T]} [Y_t^{(1)}(∞) + Y_t^{(2)}(∞)] = sup_{t∈[0,T]} [⟨X_t^{(1)}, 1⟩ + ⟨X_t^{(2)}, 1⟩] < ∞,

which leads to

 lim_{k→∞} τ_k = ∞,  P-a.s.  (4.2)
For s > 0, x ∈ R, u ∈ E and u_i ∈ E_i let Ȳ_s(x) = Y_s^{(1)}(x) − Y_s^{(2)}(x) and

 Ḡ(s, x, u) = G(Y_s^{(1)}(x), s, u) − G(Y_s^{(2)}(x), s, u),
 B̄_s(x) = B(X_s^{(1)}, x) − B(X_s^{(2)}, x),
 H̄_i(s, x, u_i) = H_i(Y_s^{(1)}(x), s, u_i) − H_i(Y_s^{(2)}(x), s, u_i),
 ζ̄_s(x) = ζ(X_s^{(1)}, x) − ζ(X_s^{(2)}, x)

for i = 0, 1. Let Φ ∈ C_c^∞(R) satisfy 0 ≤ Φ ≤ 1, supp(Φ) ⊂ (−1, 1) and ∫_R Φ(x)dx = 1. For t ≥ 0, m ≥ 1 and x, y ∈ R define Φ_m^x(y) := Φ_m(x − y) := mΦ(m(x − y)) and v_t^m(x) = ⟨Ȳ_t, Φ_m^x⟩. It follows from (4.1) that for each m, k ≥ 1, x ∈ R and t ∈ [0, T],

 v_{t∧τ_k}^m(x) = ∫_R Φ_m^x(y) [ ∫₀^{t∧τ_k} ∫_E Ḡ(s−, y, u) W(ds, du) + ∫₀^{t∧τ_k} ∫_{E₀} H̄₀(s−, y, u) Ñ₀(ds, du) + ∫₀^{t∧τ_k} ∫_{E₁} H̄₁(s−, y, u) N₁(ds, du) ] dy
  + ∫₀^{t∧τ_k} ⟨B̄_s + ζ̄_s, Φ_m^x⟩ ds + ∫₀^{t∧τ_k} ⟨Ȳ_s, 𝒜(X_s^{(1)})Φ_m^x⟩ ds
  + ∫₀^{t∧τ_k} ⟨Y_s^{(2)}, 𝒜(X_s^{(1)})Φ_m^x − 𝒜(X_s^{(2)})Φ_m^x⟩ ds.
Then, by the stochastic Fubini theorem (see e.g. [7, Theorem 7.24]), for each m, k ≥ 1, x ∈ R and t ∈ [0, T] we can conclude that a.s.

 v_{t∧τ_k}^m(x) = ∫₀^{t∧τ_k} ∫_E ⟨Ḡ(s−, ·, u), Φ_m^x⟩ W(ds, du)
  + ∫₀^{t∧τ_k} ∫_{E₀} ⟨H̄₀(s−, ·, u), Φ_m^x⟩ Ñ₀(ds, du)
  + ∫₀^{t∧τ_k} ∫_{E₁} ⟨H̄₁(s−, ·, u), Φ_m^x⟩ N₁(ds, du)
  + ∫₀^{t∧τ_k} ⟨B̄_s + ζ̄_s, Φ_m^x⟩ ds + ∫₀^{t∧τ_k} ⟨Ȳ_s, 𝒜(X_s^{(1)})Φ_m^x⟩ ds
  + ∫₀^{t∧τ_k} ⟨Y_s^{(2)}, 𝒜(X_s^{(1)})Φ_m^x − 𝒜(X_s^{(2)})Φ_m^x⟩ ds.
For n ≥ 1 put a_n = exp{−n(n + 1)/2}. Let ψ_n ∈ C_c^∞(R) satisfy supp(ψ_n) ⊂ (a_n, a_{n−1}), ∫_{a_n}^{a_{n−1}} ψ_n(x)dx = 1 and 0 ≤ ψ_n(x) ≤ 2/(nx) for all x > 0 and n ≥ 1. Define

 φ_n(x) = ∫₀^{|x|} dy ∫₀^y ψ_n(z) dz,  x ∈ R, n ≥ 1.

Then ∥φ_n′∥ ≤ 1 and φ_n(x) → |x| as n → ∞. By Itô's formula, for each m, n, k ≥ 1 and x ∈ R, t ∈ [0, T] we have a.s.

 φ_n(v_{t∧τ_k}^m(x)) = (1/2) ∫₀^{t∧τ_k} ds ∫_E φ_n′′(v_s^m(x)) ⟨Ḡ(s, ·, u), Φ_m^x⟩² π(du)
  + ∫₀^{t∧τ_k} ds ∫_{E₀} V_n(v_s^m(x), ⟨H̄₀(s, ·, u), Φ_m^x⟩) π₀(du)
  + ∫₀^{t∧τ_k} ∫_{E₁} U_n(v_s^m(x), ⟨H̄₁(s−, ·, u), Φ_m^x⟩) N₁(ds, du)
  + ∫₀^{t∧τ_k} φ_n′(v_s^m(x)) [ ⟨B̄_s + ζ̄_s, Φ_m^x⟩ + ⟨Ȳ_s, 𝒜(X_s^{(1)})Φ_m^x⟩ ] ds
  + ∫₀^{t∧τ_k} φ_n′(v_s^m(x)) ⟨Y_s^{(2)}, 𝒜(X_s^{(1)})Φ_m^x − 𝒜(X_s^{(2)})Φ_m^x⟩ ds
  + ∫₀^{t∧τ_k} ∫_E φ_n′(v_s^m(x)) ⟨Ḡ(s−, ·, u), Φ_m^x⟩ W(ds, du)
  + ∫₀^{t∧τ_k} ∫_{E₀} U_n(v_s^m(x), ⟨H̄₀(s−, ·, u), Φ_m^x⟩) Ñ₀(ds, du),  (4.3)
where U_n(x, z) := φ_n(x + z) − φ_n(x) and V_n(x, z) := U_n(x, z) − zφ_n′(x) for x, z ∈ R. Define J(x) = ∫_R e^{−|y|} ρ₀(x − y) dy with the mollifier ρ₀ given by ρ₀(x) = C exp{−1/(1 − x²)} 1_{{|x|<1}}, where C is a constant so that ∫_R ρ₀(x)dx = 1. By (2.1) in [12], for each n ≥ 0 there exist constants C_n′, C_n′′ > 0 so that

 C_n′′ e^{−|x|} ≤ |J^{(n)}(x)| ≤ C_n′ e^{−|x|},  x ∈ R,  (4.4)

which implies

 |J^{(n)}(x)| ≤ C_n J(x),  x ∈ R,  (4.5)
for some constant C_n > 0. It follows from (4.3) and Fubini's theorem that for each m, n, k ≥ 1,

 E[ ∫_R φ_n(v_{t∧τ_k}^m(x)) J(x) dx ]
 = (1/2) E[ ∫₀^{t∧τ_k} ds ∫_E π(du) ∫_R φ_n′′(v_s^m(x)) ⟨Ḡ(s, ·, u), Φ_m^x⟩² J(x) dx ]
  + E[ ∫₀^{t∧τ_k} ds ∫_{E₀} π₀(du) ∫_R V_n(v_s^m(x), ⟨H̄₀(s, ·, u), Φ_m^x⟩) J(x) dx ]
  + E[ ∫₀^{t∧τ_k} ds ∫_{E₁} π₁(du) ∫_R U_n(v_s^m(x), ⟨H̄₁(s, ·, u), Φ_m^x⟩) J(x) dx ]
  + E[ ∫₀^{t∧τ_k} ds ∫_R φ_n′(v_s^m(x)) ⟨B̄_s + ζ̄_s, Φ_m^x⟩ J(x) dx ]
  + E[ ∫₀^{t∧τ_k} ds ∫_R φ_n′(v_s^m(x)) ⟨Ȳ_s, 𝒜(X_s^{(1)})Φ_m^x⟩ J(x) dx ]
  + E[ ∫₀^{t∧τ_k} ds ∫_R φ_n′(v_s^m(x)) ⟨Y_s^{(2)}, 𝒜(X_s^{(1)})Φ_m^x − 𝒜(X_s^{(2)})Φ_m^x⟩ J(x) dx ]
 =: Σ_{i=1}^6 I_i(m, n, k, t).  (4.6)
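For the weight J, the n = 0 case of (4.4) can be seen directly, with explicit constants (an illustrative computation using only the definitions above):

```latex
% The n = 0 case of (4.4) with explicit constants: \rho_0 is a probability
% density supported in (-1,1), so J is an average of e^{-|y|} over
% |x - y| < 1. The triangle inequality gives |x|-1 < |y| < |x|+1, hence
e^{-|x|-1} \le J(x) = \int_{\mathbb{R}} e^{-|y|}\,\rho_0(x-y)\,\mathrm{d}y
  \le e^{-|x|+1},
% i.e. (4.4) holds for n = 0 with C_0'' = e^{-1} and C_0' = e. For
% n \ge 1 one differentiates under the integral sign and uses the
% smoothness of \rho_0, as in (2.1) of [12].
```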
In the rest of this section we estimate I_i(m, n, k, t) (i = 1, 2, …, 6). To this end, we need the following three lemmas.

Lemma 4.4. For each h ∈ C²(R) and g, g′ ∈ C₀²(R),

 ∫_R h(x) g′(x) dx = −∫_R h′(x) g(x) dx and ∫_R h(x) g′′(x) dx = ∫_R h′′(x) g(x) dx.

Proof. The assertions follow from integration by parts.
Lemma 4.5. If h ∈ B(R) has at most countably many jumps, then

 lim_{m→∞} ⟨h, Φ_m^x⟩ = h(x),  λ-a.e. x,

where λ denotes the Lebesgue measure on R.

Proof. By a change of variable, one sees that

 ⟨h, Φ_m^x⟩ = ∫_R h(x − y/m) Φ(y) dy = ∫₀^∞ h(x − y/m) Φ(y) dy + ∫₀^∞ h(x + y/m) Φ(−y) dy

tends to [h(x + 0) + h(x − 0)]/2 as m → ∞ by dominated convergence, where h(x + 0) and h(x − 0) are respectively the right and left limits of h at x. Since x → h(x) has at most countably many jumps, h(x) = h(x + 0) = h(x − 0) for almost every x ∈ R. This finishes the proof.

Lemma 4.6. For each t ≥ 0 and k ≥ 1,

 E[ ∫₀^{t∧τ_k} ( ∥Ȳ_s∥ + sup_{m≥1} ∥v_s^m∥ ) ds ] < ∞.
Proof. Since |vsm (x)| = |⟨Ys , Φmx ⟩| ≤ ∥Ys ∥⟨1, Φmx ⟩ = ∥Ys ∥ for all x ∈ R, then t∧τk t∧τk t (1) (2) E sup ∥vsm ∥ds ≤ E ∥Ys ∥ds ≤ E Ys∧τk (∞) + Ys∧τk (∞) ds ≤ 2kt, 0
0
m≥1
which completes the proof.
0
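Lemma 4.5 is the familiar fact that mollification recovers the half-sum of the one-sided limits, hence the value itself at continuity points. The following sketch illustrates it with a concrete choice, $h = 1_{(0,\infty)}$ and $\Phi$ taken to be the standard normal density (an assumption made only for this illustration; any symmetric probability density yields the midpoint limit):

```python
import math

# h = indicator of (0, inf): a single jump at 0.
# <h, Phi_m^x> = int h(x - y/m) Phi(y) dy, Phi = standard normal density.
# Expect: -> 1/2 at the jump point, -> h(x) at continuity points.
def smoothed(x, m, half_width=10.0, grid_n=40001):
    dy = 2.0 * half_width / (grid_n - 1)
    total = 0.0
    for i in range(grid_n):
        yv = -half_width + i * dy
        h = 1.0 if x - yv / m > 0 else 0.0
        phi = math.exp(-0.5 * yv * yv) / math.sqrt(2.0 * math.pi)
        w = 0.5 if i in (0, grid_n - 1) else 1.0   # trapezoid end weights
        total += w * h * phi * dy
    return total

assert abs(smoothed(0.0, m=100) - 0.5) < 1e-3    # jump point: midpoint limit
assert abs(smoothed(0.3, m=100) - 1.0) < 1e-3    # continuity point: h(x) = 1
assert abs(smoothed(-0.3, m=100) - 0.0) < 1e-3   # continuity point: h(x) = 0
```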
Now let us establish the estimation for $I_i(m,n,k,t)$ $(i=1,2,\ldots,6)$.

Lemma 4.7. For $n,k\ge1$ and $t\ge0$, we have $\limsup_{m\to\infty} I_1(m,n,k,t) \le C_0/n$.

Proof. By the Hölder inequality and Condition 4.1(ii),
\[
\int_E\langle\bar G(s,\cdot,u),\Phi^x_m\rangle^2\,\pi(du) \le \int_{\mathbb{R}}\Phi^x_m(y)\,dy\int_E\bar G(s,y,u)^2\,\pi(du) \le C_0\langle|Y_s|,\Phi^x_m\rangle.
\]
It then follows from Lemmas 4.5–4.6 and dominated convergence that
\[
\limsup_{m\to\infty} I_1(m,n,k,t) \le C_0\,\mathbb{E}\Big\{\int_0^{t\wedge\tau_k}ds\int_{\mathbb{R}}\phi_n''(Y_s(x))|Y_s(x)|J(x)\,dx\Big\} \le C_0/n,
\]
where $0\le\phi_n''(u)|u|\le2/n$ for all $u\in\mathbb{R}$ was used in the last inequality.
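The functions $\phi_n$ above are Yamada–Watanabe approximations of the absolute value, constructed earlier in the paper. As a sketch of why bounds such as $\phi_n(z)\le|z|$ and $0\le\phi_n''(u)|u|\le2/n$ are available, here is one standard construction (an assumption made for illustration only: $a_k = e^{-k(k+1)/2}$, so $\log(a_{n-1}/a_n)=n$, with $\phi_n''(u) = 1/(n|u|)$ for $a_n\le|u|\le a_{n-1}$), checked numerically:

```python
import numpy as np

# a_k = exp(-k(k+1)/2) gives log(a_{n-1}/a_n) = n, so psi_n(u) = 1/(n u)
# on [a_n, a_{n-1}] integrates to 1 and satisfies psi_n(u)|u| = 1/n <= 2/n.
def yw_pair(n):
    a_prev = np.exp(-n * (n - 1) / 2.0)   # a_{n-1}
    a_cur = np.exp(-n * (n + 1) / 2.0)    # a_n

    def psi(u):                            # plays the role of phi_n''
        u = np.abs(u)
        inside = (u >= a_cur) & (u <= a_prev)
        return np.where(inside, 1.0 / (n * np.maximum(u, a_cur)), 0.0)

    def Psi(y):                            # phi_n' = antiderivative of psi
        y = np.abs(np.asarray(y, dtype=float))
        return np.clip(np.log(np.maximum(y, a_cur) / a_cur) / n, 0.0, 1.0)

    def phi(z):                            # phi_n(z) = int_0^{|z|} Psi(y) dy
        g = np.linspace(0.0, abs(z), 20001)
        vals = Psi(g)
        return 0.5 * float(np.sum((vals[1:] + vals[:-1]) * np.diff(g)))

    return phi, psi, a_prev

n = 2
phi, psi, a_prev = yw_pair(n)
u = np.linspace(-1.0, 1.0, 100001)
assert float(np.max(psi(u) * np.abs(u))) <= 2.0 / n + 1e-12  # phi''(u)|u| <= 2/n
for z in (0.01, 0.2, 0.5, 1.0):
    p = phi(z)
    assert p <= z + 1e-6 and z - p <= a_prev + 1e-6          # phi_n(z) <= |z| -> |z|
```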
Lemma 4.8. For $n,k\ge1$ and $t\ge0$, we have
\[
\lim_{m\to\infty} I_2(m,n,k,t) = \mathbb{E}\Big\{\int_0^{t\wedge\tau_k}ds\int_{E_0}\pi_0(du)\int_{\mathbb{R}}V_n\big(Y_s(x),\bar H_0(s,x,u)\big)J(x)\,dx\Big\}
\]
and
\[
\limsup_{m\to\infty} I_3(m,n,k,t) \le C_0\,\mathbb{E}\Big\{\int_0^{t\wedge\tau_k}\langle|Y_s|,J\rangle\,ds\Big\}.
\]

Proof. Since $\|\phi_n'\|\le1$, we have $|U_n(x,z)|\le|z|$. Then by Condition 4.1(ii),
\[
\begin{aligned}
I_3(m,n,k,t) &\le \mathbb{E}\Big\{\int_0^{t\wedge\tau_k}ds\int_{E_1}\pi_1(du)\int_{\mathbb{R}}|\langle\bar H_1(s,\cdot,u),\Phi^x_m\rangle|J(x)\,dx\Big\}\\
&\le \mathbb{E}\Big\{\int_0^{t\wedge\tau_k}ds\int_{E_1}\pi_1(du)\int_{\mathbb{R}}J(x)\,dx\int_{\mathbb{R}}|\bar H_1(s,y,u)|\Phi^x_m(y)\,dy\Big\}\\
&\le C_0\,\mathbb{E}\Big\{\int_0^{t\wedge\tau_k}ds\int_{\mathbb{R}}J(x)\,dx\int_{\mathbb{R}}|Y_s(y)|\Phi^x_m(y)\,dy\Big\}.
\end{aligned}
\]
Letting $m\to\infty$ in the above inequality, one gets the second assertion by Lemmas 4.5–4.6.

Let us now consider the limit of $I_2(m,n,k,t)$. By Taylor's formula, for each $x_1,x_2,z_1,z_2\in\mathbb{R}$,
\[
\begin{aligned}
V_n(x_1,z_1) - V_n(x_2,z_2)
&= [V_n(x_1,z_1) - V_n(x_1,z_2)] + [V_n(x_1,z_2) - V_n(x_2,z_2)]\\
&= \phi_n(x_1+z_1) - \phi_n(x_1+z_2) - (z_1-z_2)\phi_n'(x_1) - z_2^2\int_0^1\big[\phi_n''(x_1+z_2h) - \phi_n''(x_2+z_2h)\big](1-h)\,dh\\
&= \int_{z_2}^{z_1}d\theta\int_0^\theta\phi_n''(x_1+u)\,du - z_2^2\int_0^1\big[\phi_n''(x_1+z_2h) - \phi_n''(x_2+z_2h)\big](1-h)\,dh\\
&\le \frac12\|\phi_n''\|\cdot|z_1^2-z_2^2| + z_2^2\int_0^1\big|\phi_n''(x_1+z_2h) - \phi_n''(x_2+z_2h)\big|\,dh. \tag{4.7}
\end{aligned}
\]
By Lemma 4.5, $\lim_{m\to\infty}v^m_s(x) = Y_s(x)$ for $\lambda$-a.e. $x$. It then follows from dominated convergence that
\[
\lim_{m\to\infty}\int_0^1\Big|\phi_n''\big(v^m_s(x)+\bar H_0(s,x,u)h\big) - \phi_n''\big(Y_s(x)+\bar H_0(s,x,u)h\big)\Big|\,dh = 0, \qquad \lambda\text{-a.e. }x. \tag{4.8}
\]
Observe that
\[
|\bar H_0(s,y,u) - \bar H_0(s,x,u)| \le \sum_{i=1,2}\big|H_0(Y^{(i)}_s(y),s,u) - H_0(Y^{(i)}_s(x),s,u)\big|.
\]
Then using the Hölder inequality, Lemma 4.5 and Condition 4.1(ii) we know that as $m\to\infty$,
\[
\begin{aligned}
\int_{E_0}\big|\langle\bar H_0(s,\cdot,u),\Phi^x_m\rangle - \bar H_0(s,x,u)\big|^2\,\pi_0(du)
&\le \int_{\mathbb{R}}\Phi_m(x-y)\,dy\int_{E_0}\big|\bar H_0(s,y,u) - \bar H_0(s,x,u)\big|^2\,\pi_0(du)\\
&\le C_0\sum_{i=1,2}\int_{\mathbb{R}}\Phi_m(x-y)\big|Y^{(i)}_s(y) - Y^{(i)}_s(x)\big|\,dy \to 0, \qquad \lambda\text{-a.e. }x. \tag{4.9}
\end{aligned}
\]
Using Condition 4.1(ii) again one can also get
\[
\int_{E_0}|\bar H_0(s,x,u)|^2\,\pi_0(du) \le C_0|Y_s(x)| \le C_0\|Y_s\| \tag{4.10}
\]
and
\[
\int_{E_0}|\langle\bar H_0(s,\cdot,u),\Phi^x_m\rangle|^2\,\pi_0(du) \le \int_{E_0}\pi_0(du)\int_{\mathbb{R}}\bar H_0(s,y,u)^2\Phi^x_m(y)\,dy \le C_0\langle|Y_s|,\Phi^x_m\rangle \le C_0\|Y_s\|. \tag{4.11}
\]
Together (4.9)–(4.11) imply that
\[
\begin{aligned}
&\Big[\int_{E_0}\big|\langle\bar H_0(s,\cdot,u),\Phi^x_m\rangle^2 - \bar H_0(s,x,u)^2\big|\,\pi_0(du)\Big]^2\\
&= \Big[\int_{E_0}\big|\langle\bar H_0(s,\cdot,u),\Phi^x_m\rangle - \bar H_0(s,x,u)\big|\cdot\big|\langle\bar H_0(s,\cdot,u),\Phi^x_m\rangle + \bar H_0(s,x,u)\big|\,\pi_0(du)\Big]^2\\
&\le \int_{E_0}\big|\langle\bar H_0(s,\cdot,u),\Phi^x_m\rangle - \bar H_0(s,x,u)\big|^2\,\pi_0(du)\cdot\int_{E_0}\big|\langle\bar H_0(s,\cdot,u),\Phi^x_m\rangle + \bar H_0(s,x,u)\big|^2\,\pi_0(du)
\end{aligned}
\]
tends to zero for $\lambda$-a.e. $x$ as $m\to\infty$. Combining this with (4.7)–(4.8), (4.10), Lemma 4.6 and dominated convergence, it is easy to see that
\[
\Big|I_2(m,n,k,t) - \mathbb{E}\Big\{\int_0^{t\wedge\tau_k}ds\int_{E_0}\pi_0(du)\int_{\mathbb{R}}V_n\big(Y_s(x),\bar H_0(s,x,u)\big)J(x)\,dx\Big\}\Big|
\le \mathbb{E}\Big\{\int_0^{t\wedge\tau_k}ds\int_{E_0}\pi_0(du)\int_{\mathbb{R}}\Big|V_n\big(v^m_s(x),\langle\bar H_0(s,\cdot,u),\Phi^x_m\rangle\big) - V_n\big(Y_s(x),\bar H_0(s,x,u)\big)\Big|J(x)\,dx\Big\}
\]
tends to zero as $m\to\infty$. This completes the proof.
Lemma 4.9. For $n,k\ge1$ and $t\ge0$, we have
\[
\limsup_{m\to\infty} I_4(m,n,k,t) \le C_0\,\mathbb{E}\Big\{\int_0^{t\wedge\tau_k}\langle|Y_s|,J\rangle\,ds\Big\}.
\]

Proof. By Condition 4.1(i), for each $s\ge0$, $x\in\mathbb{R}$ and $m\ge1$,
\[
|\langle\bar B_s,\Phi^x_m\rangle| \le \langle|\bar B_s|,\Phi^x_m\rangle \le C_0\langle|Y_s|,\Phi^x_m\rangle, \qquad
|\langle\bar\zeta_s,\Phi^x_m\rangle| \le \langle|\bar\zeta_s|,\Phi^x_m\rangle \le C_0\rho(X^{(1)}_s,X^{(2)}_s),
\]
which completes the proof by Lemmas 4.5–4.6.
Recall that $\check{\mathbb{R}} = \{z : |z|>1\}$ and $\hat{\mathbb{R}} = \{z : 0<|z|\le1\}$.

Lemma 4.10. For $n,k\ge1$ and $t\ge0$, we have
\[
\limsup_{m\to\infty} I_5(m,n,k,t) \le C_0\,\mathbb{E}\Big\{\int_0^{t\wedge\tau_k}ds\int_{\mathbb{R}}|Y_s(x)|\Big[J(x)+\int_{\check{\mathbb{R}}}J(x+z)\,\nu_{11}(dz)\Big]dx\Big\}.
\]

Proof. For each $m,n,k\ge1$ and $t\ge0$ define
\[
\begin{aligned}
I_{5,1}(m,n,k,t) &= \mathbb{E}\Big\{\int_0^{t\wedge\tau_k}ds\int_{\mathbb{R}}\phi_n'(v^m_s(x))\langle\beta_0(X^{(1)}_s,\cdot)Y_s,\Phi^x_m\rangle J(x)\,dx\Big\},\\
I_{5,2}(m,n,k,t) &= \mathbb{E}\Big\{\int_0^{t\wedge\tau_k}\beta_1(X^{(1)}_s)\,ds\int_{\mathbb{R}}\phi_n'(v^m_s(x))\langle Y_s,\nabla_\cdot\Phi^x_m\rangle J(x)\,dx\Big\},\\
I_{5,3}(m,n,k,t) &= \beta_2\,\mathbb{E}\Big\{\int_0^{t\wedge\tau_k}ds\int_{\mathbb{R}}\phi_n'(v^m_s(x))\langle Y_s,\Delta_\cdot\Phi^x_m\rangle J(x)\,dx\Big\},\\
I_{5,4,1}(m,n,k,t) &= \mathbb{E}\Big\{\int_0^{t\wedge\tau_k}ds\int_{\check{\mathbb{R}}}\nu_1(X^{(1)}_s,dz)\int_{\mathbb{R}}\phi_n'(v^m_s(x))\big[v^m_s(x-z)-v^m_s(x)\big]J(x)\,dx\Big\},\\
I_{5,4,2}(m,n,k,t) &= \mathbb{E}\Big\{\int_0^{t\wedge\tau_k}ds\int_{\mathbb{R}}Y_s(y)\,dy\int_{\hat{\mathbb{R}}}\nu_2(dz)\int_{\mathbb{R}}\phi_n'(v^m_s(x))J(x)\big[\Phi_m(x-y-z)-\Phi_m(x-y)-\nabla_y\Phi_m(x-y)z\big]dx\Big\}.
\end{aligned}
\]
Then
\[
I_5(m,n,k,t) = \sum_{i=1,2,3}I_{5,i}(m,n,k,t) + \sum_{j=1,2}I_{5,4,j}(m,n,k,t). \tag{4.12}
\]
In the following we first show that
\[
\sup_{n\ge1,\,i=1,2,3,\,j=1,2} I_{5,i}(m,n,k,t)\vee I_{5,4,j}(m,n,k,t) \le C_0\,\mathbb{E}\Big\{\int_0^{t\wedge\tau_k}ds\int_{\mathbb{R}}\langle|Y_s|,\Phi^x_m\rangle\Big[J(x)+\int_{\check{\mathbb{R}}}J(x+z)\,\nu_{11}(dz)\Big]dx\Big\}. \tag{4.13}
\]
Since $\|\phi_n'\|\le1$ and $\sup_{\mu\in M(\mathbb{R}),x\in\mathbb{R}}|\beta_0(\mu,x)|<\infty$ under Condition 4.1(i), it is easy to get the estimation (4.13) for $I_{5,1}(m,n,k,t)$. Observe that
\[
\int_{\mathbb{R}}Y_s(y)\nabla_y\Phi^x_m(y)\,dy = -\int_{\mathbb{R}}Y_s(y)\nabla_x\Phi^x_m(y)\,dy = -\nabla_x(v^m_s(x)). \tag{4.14}
\]
Then by Lemma 4.4,
\[
\int_{\mathbb{R}}\phi_n'(v^m_s(x))\langle Y_s,\nabla_\cdot\Phi^x_m\rangle J(x)\,dx = -\int_{\mathbb{R}}\phi_n'(v^m_s(x))\nabla_x(v^m_s(x))J(x)\,dx
= -\int_{\mathbb{R}}\nabla_x\big[\phi_n(v^m_s(x))\big]J(x)\,dx = \int_{\mathbb{R}}\phi_n(v^m_s(x))J'(x)\,dx \le C_0\int_{\mathbb{R}}|v^m_s(x)|J(x)\,dx,
\]
where the facts $\phi_n(z)\le|z|$ and (4.4) were used in the last inequality. This implies the estimation (4.13) for $I_{5,2}(m,n,k,t)$. Observe that
\[
\int_{\mathbb{R}}Y_s(y)\Delta_y\Phi^x_m(y)\,dy = \int_{\mathbb{R}}Y_s(y)\Delta_x\Phi^x_m(y)\,dy = \Delta_x(v^m_s(x))
\]
and
\[
\phi_n'(v^m_s(x))\Delta_x(v^m_s(x)) = \Delta_x\big[\phi_n(v^m_s(x))\big] - \phi_n''(v^m_s(x))\big|\nabla_x(v^m_s(x))\big|^2.
\]
Since $\phi_n''(z)\ge0$ for all $z\in\mathbb{R}$, the last term is negative. It then follows from Lemma 4.4 and (4.5) that
\[
\int_{\mathbb{R}}\Delta_x\big[\phi_n(v^m_s(x))\big]J(x)\,dx = \int_{\mathbb{R}}\phi_n(v^m_s(x))J''(x)\,dx \le C_0\int_{\mathbb{R}}|v^m_s(x)|J(x)\,dx,
\]
where $\phi_n(z)\le|z|$ was used in the last inequality. This implies the estimation (4.13) for $I_{5,3}(m,n,k,t)$. Since $|\phi_n'(u)|\le1$ for all $u\in\mathbb{R}$,
\[
\Big|\int_{\mathbb{R}}\phi_n'(v^m_s(x))\big[v^m_s(x-z)-v^m_s(x)\big]J(x)\,dx\Big| \le \langle|v^m_s(\cdot-z)|+|v^m_s|,J\rangle \le \langle|v^m_s|,J(\cdot+z)+J\rangle,
\]
which gives the estimation (4.13) for $I_{5,4,1}(m,n,k,t)$ under Condition 4.1(i). By (4.4), there is a constant $C$ so that for all $x\in\mathbb{R}$ and $y\in[-1,1]$,
\[
J(x+y) \le Ce^{-|x+y|} \le Ce^{-|x|+|y|} \le Ce\,e^{-|x|} \le CJ(x).
\]
By Taylor's formula,
\[
\begin{aligned}
M_{m,n}(s,x,z) &:= \phi_n'(v^m_s(x))\big[v^m_s(x-z)-v^m_s(x)+z\nabla_x v^m_s(x)\big]\\
&= \phi_n(v^m_s(x-z)) - \phi_n(v^m_s(x)) + z\nabla_x\big[\phi_n(v^m_s(x))\big]\\
&\quad - \Big[\phi_n(v^m_s(x-z)) - \phi_n(v^m_s(x)) - \phi_n'(v^m_s(x))\big(v^m_s(x-z)-v^m_s(x)\big)\Big]\\
&= z^2\int_0^1(1-h)\Delta_x\big[\phi_n(v^m_s(x-zh))\big]dh - \int_0^1(1-h)\big(v^m_s(x-z)-v^m_s(x)\big)^2\phi_n''\big(v^m_s(x)+h(v^m_s(x-z)-v^m_s(x))\big)dh\\
&\le z^2\int_0^1(1-h)\Delta_x\big[\phi_n(v^m_s(x-zh))\big]dh, \tag{4.15}
\end{aligned}
\]
where $\phi_n''(u)\ge0$ for all $u\in\mathbb{R}$ was used in the last inequality. Thus by (4.5), (4.14)–(4.15) and Lemma 4.4,
\[
\begin{aligned}
&\int_{\mathbb{R}}\phi_n'(v^m_s(x))J(x)\Big[\int_{\mathbb{R}}Y_s(y)\big[\Phi_m(x-y-z)-\Phi_m(x-y)-\nabla_y\Phi_m(x-y)z\big]dy\Big]dx\\
&= \int_{\mathbb{R}}M_{m,n}(s,x,z)J(x)\,dx
\le z^2\int_0^1(1-h)\,dh\int_{\mathbb{R}}\Delta_x\big[\phi_n(v^m_s(x-zh))\big]J(x)\,dx\\
&= z^2\int_0^1(1-h)\,dh\int_{\mathbb{R}}\phi_n(v^m_s(x-zh))J''(x)\,dx
\le C_0 z^2\int_0^1 dh\int_{\mathbb{R}}\phi_n(v^m_s(x))J(x+zh)\,dx
\le C_0 z^2\int_{\mathbb{R}}|v^m_s(x)|J(x)\,dx,
\end{aligned}
\]
where $\phi_n(u)\le|u|$ for each $u\in\mathbb{R}$ was used in the last inequality. This implies that (4.13) holds. Therefore, one completes the proof by (4.12)–(4.13), Lemmas 4.5–4.6 and dominated convergence.

Lemma 4.11. For $n,k\ge1$ and $t\ge0$, we have
\[
\limsup_{m\to\infty} I_6(m,n,k,t) \le kC_0\,\mathbb{E}\Big\{\int_0^{t\wedge\tau_k}\langle|Y_s|,J\rangle\,ds\Big\}.
\]
Proof. For each $m,n,k\ge1$ and $t\ge0$ define
\[
\begin{aligned}
I_{6,1}(m,n,k,t) &= \mathbb{E}\Big\{\int_0^{t\wedge\tau_k}ds\int_{\mathbb{R}}\phi_n'(v^m_s(x))\big\langle[\beta_0(X^{(1)}_s,\cdot)-\beta_0(X^{(2)}_s,\cdot)]Y^{(2)}_s,\Phi^x_m\big\rangle J(x)\,dx\Big\},\\
I_{6,2}(m,n,k,t) &= \mathbb{E}\Big\{\int_0^{t\wedge\tau_k}\big[\beta_1(X^{(1)}_s)-\beta_1(X^{(2)}_s)\big]ds\int_{\mathbb{R}}\phi_n'(v^m_s(x))\langle Y^{(2)}_s,\nabla_\cdot\Phi^x_m\rangle J(x)\,dx\Big\},\\
I_{6,3}(m,n,k,t) &= \mathbb{E}\Big\{\int_0^{t\wedge\tau_k}ds\int_{\mathbb{R}}\phi_n'(v^m_s(x))\langle Y^{(2)}_s,F_m(s,x-\cdot)\rangle J(x)\,dx\Big\},
\end{aligned}
\]
where
\[
F_m(s,u) := \int_{\check{\mathbb{R}}}f_m(u,z)\,\nu_1(X^{(1)}_s,dz) - \int_{\check{\mathbb{R}}}f_m(u,z)\,\nu_1(X^{(2)}_s,dz)
\]
with $f_m(u,z) := \Phi_m(u-z)-\Phi_m(u)$. Then $I_6(m,n,k,t) = \sum_{i=1,2,3}I_{6,i}(m,n,k,t)$, so it is enough to prove the following estimate to finish the proof:
\[
\sup_{m,n\ge1,\,i=1,2,3} I_{6,i}(m,n,k,t) \le kC_0\,\mathbb{E}\Big\{\int_0^{t\wedge\tau_k}\langle|Y_s|,J\rangle\,ds\Big\}. \tag{4.16}
\]
By the definition of $\tau_k$,
\[
\sup_{x\in\mathbb{R}} Y^{(2)}_s(x) \le k \quad\text{on }\{s\le\tau_k\}. \tag{4.17}
\]
Then by Condition 4.1(i) and $\|\phi_n'\|\le1$,
\[
\begin{aligned}
I_{6,1}(m,n,k,t) &\le C_0\,\mathbb{E}\Big\{\int_0^{t\wedge\tau_k}\rho(X^{(1)}_s,X^{(2)}_s)\,ds\int_{\mathbb{R}}|\phi_n'(v^m_s(x))|\,\bar v^m_s(x)J(x)\,dx\Big\}\\
&\le kC_0\,\mathbb{E}\Big\{\int_0^{t\wedge\tau_k}\rho(X^{(1)}_s,X^{(2)}_s)\,ds\Big\} \le kC_0\,\mathbb{E}\Big\{\int_0^{t\wedge\tau_k}\langle|Y_s|,J\rangle\,ds\Big\},
\end{aligned}
\]
where $\bar v^m_s(x) := \langle Y^{(2)}_s,\Phi^x_m\rangle$. Since $x\mapsto Y^{(2)}_s(x)$ is non-decreasing, it is easy to check that $\nabla_x(\bar v^m_s(x))\ge0$ for each $s\ge0$ and $x\in\mathbb{R}$. Then by (4.14) and (4.5),
\[
\Big|\int_{\mathbb{R}}\phi_n'(v^m_s(x))\langle Y^{(2)}_s,\nabla_\cdot\Phi^x_m\rangle J(x)\,dx\Big| \le \int_{\mathbb{R}}\nabla_x(\bar v^m_s(x))J(x)\,dx = -\langle\bar v^m_s,J'\rangle \le C_0\langle\bar v^m_s,J\rangle.
\]
Combining this with (4.17) one gets the estimation (4.16) for $I_{6,2}(m,n,k,t)$. By Condition 4.1(i),
\[
|F_m(s,x)| \le \rho(X^{(1)}_s,X^{(2)}_s)\int_{\check{\mathbb{R}}}|f_m(x,z)|\,\nu_{11}(dz)
\le \rho(X^{(1)}_s,X^{(2)}_s)\int_{\check{\mathbb{R}}}\big[\Phi_m(x-z)+\Phi_m(x)\big]\nu_{11}(dz).
\]
Then by (4.17),
\[
|\langle Y^{(2)}_s,F_m(s,x-\cdot)\rangle| \le 2k\nu_{11}(\check{\mathbb{R}})\,\rho(X^{(1)}_s,X^{(2)}_s) \quad\text{on }\{s\le\tau_k\},
\]
which implies the estimation (4.16) for $I_{6,3}(m,n,k,t)$.
Proof of Proposition 4.2. By Lemma 4.5 and dominated convergence,
\[
\lim_{m\to\infty}\int_{\mathbb{R}}\mathbb{E}\{\phi_n(v^m_{t\wedge\tau_k}(x))\}J(x)\,dx = \int_{\mathbb{R}}\mathbb{E}\{\phi_n(Y_{t\wedge\tau_k}(x))\}J(x)\,dx.
\]
Combining (4.6) and Lemmas 4.7–4.11, one can see that
\[
\int_{\mathbb{R}}\mathbb{E}\{\phi_n(Y_{t\wedge\tau_k}(x))\}J(x)\,dx \le kC_0\,\mathbb{E}\Big\{\int_0^{t\wedge\tau_k}\langle|Y_s|,J_1\rangle\,ds\Big\} + J(n,k,t) + C_0/n, \tag{4.18}
\]
where $J_1(x) := J(x) + \int_{\check{\mathbb{R}}}J(x+z)\,\nu_{11}(dz)$ and
\[
J(n,k,t) := \mathbb{E}\Big\{\int_0^{t\wedge\tau_k}ds\int_{E_0}\pi_0(du)\int_{\mathbb{R}}V_n\big(Y_s(x),\bar H_0(s,x,u)\big)J(x)\,dx\Big\}.
\]
By [9, Lemma 3.1] and Condition 4.1(ii),
\[
J(n,k,t) \le \frac{2}{n}\,\mathbb{E}\Big\{\int_0^{t\wedge\tau_k}ds\int_{E_0}\pi_0(du)\int_{\mathbb{R}}1_{\{Y_s(x)\ne0\}}\bar H_0(s,x,u)^2/|Y_s(x)|\,J(x)\,dx\Big\} \le C_0/n.
\]
Since $\lim_{n\to\infty}\phi_n(x)=|x|$ for each $x\in\mathbb{R}$, letting $n\to\infty$ in (4.18) yields
\[
\mathbb{E}\{\langle|Y_{t\wedge\tau_k}|,J\rangle\} \le kC_0\int_0^t\mathbb{E}\{\langle|Y_{s\wedge\tau_k}|,J_1\rangle\}\,ds. \tag{4.19}
\]
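The proof continues by iterating (4.19); the conclusion rests on the factorial decay of $[kC_0(1+v)t]^n/n!$ as $n\to\infty$. A quick numerical check of this decay, with a hypothetical value standing in for $kC_0(1+v)t$:

```python
# Factorial decay: c^n / n! -> 0 for any fixed c.
# c below is a hypothetical stand-in for k*C0*(1+v)*t.
c = 14.6
term, terms = 1.0, [1.0]
for n in range(1, 200):
    term *= c / n               # term = c^n / n!, computed iteratively
    terms.append(term)
assert terms[-1] < 1e-100       # the bound is eventually astronomically small
assert max(terms) == max(terms[:40])  # terms grow only while n < c, then decay
```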
Similar to (4.19), one sees that for each $n\ge1$,
\[
\begin{aligned}
\mathbb{E}\{\langle|Y_{t\wedge\tau_k}|,J\rangle\}
&\le (kC_0)^2\int_0^t ds_1\int_0^{s_1}\mathbb{E}\{\langle|Y_{s_2\wedge\tau_k}|,J_2\rangle\}\,ds_2\\
&\le (kC_0)^3\int_0^t ds_1\int_0^{s_1}ds_2\int_0^{s_2}\mathbb{E}\{\langle|Y_{s_3\wedge\tau_k}|,J_3\rangle\}\,ds_3\\
&\;\;\cdots\\
&\le (kC_0)^n\int_0^t ds_1\int_0^{s_1}ds_2\cdots\int_0^{s_{n-1}}\mathbb{E}\{\langle|Y_{s_n\wedge\tau_k}|,J_n\rangle\}\,ds_n,
\end{aligned}
\]
where $J_n(x) := J_{n-1}(x) + \int_{\check{\mathbb{R}}}J_{n-1}(x+z)\,\nu_{11}(dz)$. Observe that $\langle1,J_n\rangle \le \langle1,J\rangle[1+v]^n$ with $v := \nu_{11}(\check{\mathbb{R}})$. Then
\[
\mathbb{E}\{\langle|Y_{t\wedge\tau_k}|,J\rangle\} \le \frac{k\langle1,J\rangle\,[kC_0(1+v)t]^n}{n!}, \qquad n\ge1.
\]
Letting $n\to\infty$ we obtain $\mathbb{E}\{\langle|Y_{t\wedge\tau_k}|,J\rangle\} = 0$ for each $t\in[0,T]$. By (4.2) and Fatou's lemma, we then have
\[
\mathbb{E}\{\langle|Y_t|,J\rangle\} = \mathbb{E}\Big\{\lim_{k\to\infty}\langle|Y_{t\wedge\tau_k}|,J\rangle\Big\} \le \liminf_{k\to\infty}\mathbb{E}\{\langle|Y_{t\wedge\tau_k}|,J\rangle\} = 0
\]
for each $t\in[0,T]$, which implies that $\mathbb{P}\{Y^{(1)}_t(x) = Y^{(2)}_t(x)\text{ for all }x\in\mathbb{R}\} = 1$ for all $t\ge0$. It follows that $\langle Y^{(1)}_t,f\rangle = \langle Y^{(2)}_t,f\rangle$ almost surely for all $t>0$ and $f\in\mathscr{S}(\mathbb{R})$ (the Schwartz space of rapidly decreasing functions on $\mathbb{R}$). By the right-continuity of $t\mapsto\langle Y^{(1)}_t,f\rangle$ and $t\mapsto\langle Y^{(2)}_t,f\rangle$ we can conclude that $\mathbb{P}\{\langle Y^{(1)}_t,f\rangle = \langle Y^{(2)}_t,f\rangle\text{ for all }t>0\} = 1$ for all $f\in\mathscr{S}(\mathbb{R})$. Considering a suitable sequence $\{f_1,f_2,\ldots\}\subset\mathscr{S}(\mathbb{R})$, we finish the proof.

5. Uniqueness of SIBIM

In this section we study the martingale problem $P(A,\Phi,\Psi)$. In the following we always assume that
\[
A(\mu)f(x) = \beta(\mu)f'(x) + \frac12\sigma^2 f''(x) + \int_{\check{\mathbb{R}}}\big[f(x+z)-f(x)\big]\nu(\mu,dz) + \int_{\hat{\mathbb{R}}}\big[f(x+z)-f(x)-f'(x)z\big]\nu(dz), \qquad f\in C^2_c(\mathbb{R}),
\]
where $\beta\in B(M(\mathbb{R}))$, $\sigma\in\mathbb{R}$, $\nu(\mu,dz)$ is a bounded kernel from $M(\mathbb{R})$ to $\check{\mathbb{R}}$, and $z^2\nu(dz)$ is a finite Borel measure on $\hat{\mathbb{R}}$. Recall that $\Phi$ and $\Psi$ were defined in Section 1. We also assume that for each $\mu',\mu''\in M(\mathbb{R})$ and $f\in B(\check{\mathbb{R}})^+$,
\[
\begin{aligned}
&\sup_{\mu\in M(\mathbb{R})}\big[|\beta(\mu)| + \nu(\mu,\check{\mathbb{R}})\big] < \infty, \qquad |\beta(\mu')-\beta(\mu'')| \le C_0\rho(\mu',\mu''),\\
&\sup_{x\in\mathbb{R}}\big|\eta(\mu',(-\infty,x]) - \eta(\mu'',(-\infty,x])\big| \le C_0\rho(\mu',\mu''),\\
&\big|\langle b(\mu')-b(\mu''),1_{(-\infty,x]}\rangle\big| \le C_0\big|\mu'(-\infty,x] - \mu''(-\infty,x]\big|,\\
&\Big|\int_{\check{\mathbb{R}}} f(z)\,\nu(\mu',dz) - \int_{\check{\mathbb{R}}} f(z)\,\nu(\mu'',dz)\Big| \le \rho(\mu',\mu'')\int_{\check{\mathbb{R}}} f(z)\,\nu_0(dz)
\end{aligned}
\]
for some bounded Borel measure $\nu_0$ on $\check{\mathbb{R}}$.
Theorem 5.1. Suppose that Condition 2.2(ii) holds. Then the uniqueness of the solution to the martingale problem $P(A,\Phi,\Psi)$ holds. If, in addition, $\mu\mapsto\tilde b_\mu$ (defined in Condition 3.2) and $\mu\mapsto\eta(\mu,\cdot)$ are continuous, then the martingale problem $P(A,\Phi,\Psi)$ is well-posed.

Proof. The first assertion follows from Theorems 2.3 and 4.2. It is easy to check that Condition 3.2 is satisfied. Thus the second assertion follows from Theorem 3.3.

Acknowledgements

The first author was supported by the Macao Science and Technology Development Fund FDCT 076/2012/A3 and Multi-Year Research Grants of the University of Macau Nos. MYRG2014-00015-FST and MYRG2014-00034-FST. The second author was supported by NSFC (No. 11401012) and the Natural Science Foundation of Ningxia (No. NZ15095).

Appendix

Proposition A.1. Suppose that $\mathbb{P}$ is a solution of the martingale problem $P(A,\lambda,g,\alpha L,h,\varepsilon)$ defined in Section 3 with $(\lambda_n,g_n,\alpha_n,h_n,\varepsilon_n)$ replaced by $(\lambda,g,\alpha,h,\varepsilon)$, and that
\[
Q := \sup_{x\in\mathbb{R},\,\mu\in M(\mathbb{R})}\Big[\lambda(\mu,x)\sum_{k=0}^\infty(k-1)p_k(\mu,x) + \alpha\varepsilon\sum_{k=0}^\infty kq_k(\mu,x)\Big] < \infty. \tag{A.20}
\]
Then
\[
\mathbb{E}\Big\{\sup_{t\in[0,T]}\langle X_t,1\rangle\Big\} \le C(Q,T),
\]
where $C(Q,T)\in B((0,\infty)\times(0,\infty))^+$.

Proof. Let $F(x)=e^{-x}$ and $f(x)=\theta$ for $x\in\mathbb{R}$ and $\theta>0$ in (3.1). Then
\[
\begin{aligned}
&\exp(-\theta\langle X_t,1\rangle) - \exp(-\theta\langle X_0,1\rangle)
- \varepsilon^{-1}\int_0^t\exp(-\theta\langle X_s,1\rangle)\,ds\int_{\mathbb{R}}\lambda(X_s,x)\sum_{k=0}^\infty p_k(X_s,x)\big[e^{-(k-1)\varepsilon\theta}-1\big]X_s(dx)\\
&\quad - \alpha\int_0^t\exp(-\theta\langle X_s,1\rangle)\,ds\int_{\mathbb{R}}\sum_{k=0}^\infty q_k(X_s,x)\big[e^{-k\varepsilon\theta}-1\big]L(dx)
\end{aligned}
\]
is a $\mathbb{P}$-local martingale. Thus there is a stopping time $\tilde\tau_n$ with $\lim_{n\to\infty}\tilde\tau_n=\infty$ almost surely so that for each $n\ge1$,
\[
\begin{aligned}
&\mathbb{E}\big\{\exp(-\theta\langle X_{t\wedge\tilde\tau_n},1\rangle) - \exp(-\theta\langle X_0,1\rangle)\big\}\\
&= \mathbb{E}\Big\{\varepsilon^{-1}\int_0^{t\wedge\tilde\tau_n}\exp(-\theta\langle X_s,1\rangle)\,ds\int_{\mathbb{R}}\lambda(X_s,x)\sum_{k=0}^\infty p_k(X_s,x)\big[e^{-(k-1)\varepsilon\theta}-1\big]X_s(dx)\\
&\quad + \alpha\int_0^{t\wedge\tilde\tau_n}\exp(-\theta\langle X_s,1\rangle)\,ds\int_{\mathbb{R}}\sum_{k=0}^\infty q_k(X_s,x)\big[e^{-k\varepsilon\theta}-1\big]L(dx)\Big\}.
\end{aligned}
\]
Letting $n\to\infty$ we derive
\[
\begin{aligned}
&\mathbb{E}\big\{\exp(-\theta\langle X_t,1\rangle)\big\} - \exp(-\theta\langle X_0,1\rangle)\\
&= \mathbb{E}\Big\{\varepsilon^{-1}\int_0^t\exp(-\theta\langle X_s,1\rangle)\,ds\int_{\mathbb{R}}\lambda(X_s,x)\sum_{k=0}^\infty p_k(X_s,x)\big[e^{-(k-1)\varepsilon\theta}-1\big]X_s(dx)\\
&\quad + \alpha\int_0^t\exp(-\theta\langle X_s,1\rangle)\,ds\int_{\mathbb{R}}\sum_{k=0}^\infty q_k(X_s,x)\big[e^{-k\varepsilon\theta}-1\big]L(dx)\Big\}.
\end{aligned}
\]
Differentiating both sides of the above equation with respect to $\theta$,
\[
\begin{aligned}
&\mathbb{E}\big\{\langle X_t,1\rangle\exp(-\theta\langle X_t,1\rangle)\big\} - \langle X_0,1\rangle\exp(-\theta\langle X_0,1\rangle)\\
&= \mathbb{E}\Big\{\int_0^t\exp(-\theta\langle X_s,1\rangle)\,ds\int_{\mathbb{R}}\lambda(X_s,x)\sum_{k=0}^\infty(k-1)p_k(X_s,x)e^{-(k-1)\varepsilon\theta}X_s(dx)\Big\}\\
&\quad + \varepsilon^{-1}\,\mathbb{E}\Big\{\int_0^t\langle X_s,1\rangle\exp(-\theta\langle X_s,1\rangle)\,ds\int_{\mathbb{R}}\lambda(X_s,x)\sum_{k=0}^\infty p_k(X_s,x)\big[e^{-(k-1)\varepsilon\theta}-1\big]X_s(dx)\Big\}\\
&\quad + \alpha\varepsilon\,\mathbb{E}\Big\{\int_0^t\exp(-\theta\langle X_s,1\rangle)\,ds\int_{\mathbb{R}}\sum_{k=0}^\infty kq_k(X_s,x)e^{-k\varepsilon\theta}L(dx)\Big\}\\
&\quad + \alpha\,\mathbb{E}\Big\{\int_0^t\langle X_s,1\rangle\exp(-\theta\langle X_s,1\rangle)\,ds\int_{\mathbb{R}}\sum_{k=0}^\infty q_k(X_s,x)\big[e^{-k\varepsilon\theta}-1\big]L(dx)\Big\}.
\end{aligned}
\]
It then follows from (A.20) that
\[
\mathbb{E}\big\{\langle X_t,1\rangle\exp(-\theta\langle X_t,1\rangle)\big\} \le \langle X_0,1\rangle + Q\langle L,1\rangle + Q\int_0^t\mathbb{E}\big\{\langle X_s,1\rangle\exp(-\theta\langle X_s,1\rangle)\big\}\,ds.
\]
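Before applying Gronwall's inequality, it may help to recall why an integral inequality of this shape forces an exponential bound. The sketch below iterates a discretized worst case of $u(t)\le a + Q\int_0^t u(s)\,ds$ and checks it against $ae^{Qt}$; the constants $a$, $Q$, $T$ are hypothetical:

```python
import math

# Discrete Gronwall: if u_j = a + Q*dt*sum_{i<j} u_i (the worst case allowed
# by the inequality), then u_j = a*(1 + Q*dt)^j <= a*exp(Q*j*dt).
a, Q, T, N = 2.0, 1.5, 1.0, 100000   # hypothetical constants
dt = T / N
running_sum, u_last = 0.0, a
for j in range(N + 1):
    u_last = a + Q * dt * running_sum
    running_sum += u_last
assert u_last <= a * math.exp(Q * T) + 1e-9
```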
By Gronwall's inequality one derives
\[
\mathbb{E}\big\{\langle X_t,1\rangle\exp(-\theta\langle X_t,1\rangle)\big\} \le \big[\langle X_0,1\rangle + Q\langle L,1\rangle\big]e^{Qt}.
\]
Letting $\theta\to0$, by Fatou's lemma we conclude that
\[
\mathbb{E}\{\langle X_t,1\rangle\} \le \big[\langle X_0,1\rangle + Q\langle L,1\rangle\big]e^{Qt}. \tag{A.21}
\]
Define
\[
\Phi(\mu,f) = \varepsilon^{-1}\sum_{k=0}^\infty\int_{\mathbb{R}}\lambda(\mu,x)p_k(\mu,x)\big[e^{-(k-1)\varepsilon f(x)} - 1 + (k-1)\varepsilon f(x)\big]\mu(dx) + \frac{\varepsilon}{2}\big\langle\mu,\mathcal{A}(\mu)f^2 - 2f\mathcal{A}(\mu)f\big\rangle
\]
and
\[
b(\mu,f) = \sum_{k=0}^\infty\int_{\mathbb{R}}\lambda(\mu,x)(k-1)p_k(\mu,x)f(x)\,\mu(dx), \qquad
\Gamma(X_s,f) = \varepsilon\alpha\sum_{k=0}^\infty\int_{\mathbb{R}}kq_k(X_s,x)f(x)\,L(dx).
\]
By the same argument as in Theorem 2.1, one can see that
\[
\langle X_t,f\rangle = \langle X_0,f\rangle + \int_0^t\big[\langle X_s,\mathcal{A}(X_s)f\rangle + \Phi(X_s,f) - b(X_s,f) + \Gamma(X_s,f)\big]ds + \bar M^c_t(f) + \bar M^d_t(f),
\]
where $\{\bar M^c_t(f)\}$ is a continuous local martingale with quadratic variation process $\int_0^t\varepsilon\langle X_s,\mathcal{A}(X_s)f^2 - 2f\mathcal{A}(X_s)f\rangle\,ds$ and
\[
\bar M^d_t(f) = \int_0^t\int_{M(\mathbb{R})^\circ}\nu(f)\,\tilde{\bar N}(ds,d\nu)
\]
is a purely discontinuous local martingale with predictable compensator $\hat{\bar N}(ds,d\nu)$ defined by
\[
\hat{\bar N}(ds,d\nu) = \varepsilon^{-1}ds\int_{\mathbb{R}}X_s(dx)\sum_{k=0}^\infty\lambda(X_s,x)p_k(X_s,x)\delta_{\varepsilon(k-1)\delta_x}(d\nu) + \alpha\,ds\int_{\mathbb{R}}L(dx)\sum_{k=0}^\infty q_k(X_s,x)\delta_{k\varepsilon\delta_x}(d\nu).
\]
Then $\langle X_t,1\rangle = \langle X_0,1\rangle + \int_0^t\big[\Phi(X_s,1) - b(X_s,1) + \Gamma(X_s,1)\big]ds + \bar M^d_t(1)$. It is easy to see that
\[
\begin{aligned}
\mathbb{E}\Big\{\sup_{t\in[0,T]}\Big|\int_0^t\int_{M(\mathbb{R})^\circ}\nu(1)\,\tilde{\bar N}(ds,d\nu)\Big|\Big\}
&\le 2\,\mathbb{E}\Big\{\int_0^T\int_{M(\mathbb{R})^\circ}|\nu(1)|\,\hat{\bar N}(ds,d\nu)\Big\}\\
&\le 2\,\mathbb{E}\Big\{\int_0^T ds\Big[\int_{\mathbb{R}}\lambda(X_s,x)\Big|\sum_{k=0}^\infty(k-1)p_k(X_s,x)\Big|X_s(dx) + \alpha\varepsilon\int_{\mathbb{R}}\sum_{k=0}^\infty kq_k(X_s,x)\,L(dx)\Big]\Big\}\\
&\le 2Q\int_0^T\mathbb{E}\{\langle X_s,1\rangle + \langle L,1\rangle\}\,ds,
\end{aligned}
\]
which implies the assertion by (A.21).
References

[1] D.A. Dawson, Measure-valued Markov Processes, in: Lecture Notes in Math., vol. 1541, Springer, Berlin, 1993, pp. 1–260.
[2] D.A. Dawson, Z. Li, Stochastic equations, flows and measure-valued processes, Ann. Probab. 40 (2012) 813–857.
[3] S.N. Ethier, T.G. Kurtz, Markov Processes: Characterization and Convergence, Wiley, New York, 1986.
[4] H. He, Discontinuous superprocesses with dependent spatial motion, Stochastic Process. Appl. 119 (2009) 130–166.
[5] H. He, Z. Li, X. Yang, Stochastic equations of super-Lévy processes with general branching mechanism, Stochastic Process. Appl. 124 (2014) 1519–1565.
[6] Z. Li, Measure-valued branching processes with immigration, Stochastic Process. Appl. 43 (1992) 249–264.
[7] Z. Li, Measure-Valued Branching Markov Processes, Springer, Berlin, 2011.
[8] Z. Li, H. Liu, J. Xiong, X. Zhou, Some properties of the generalized Fleming–Viot processes, Stochastic Process. Appl. 123 (2013) 4129–4155.
[9] Z. Li, F. Pu, Strong solutions of jump-type stochastic equations, Electron. Commun. Probab. 17 (2012) 1–13.
[10] Z. Li, T. Shiga, Measure-valued branching diffusions: immigrations, excursions and limit theorems, J. Math. Kyoto Univ. 35 (1995) 233–274.
[11] S. Méléard, S. Roelly, Interacting measure branching processes. Some bounds for the support, Stoch. Stoch. Rep. 44 (1993) 103–121.
[12] I. Mitoma, An ∞-dimensional inhomogeneous Langevin equation, J. Funct. Anal. 61 (1985) 342–359.
[13] L. Mytnik, J. Xiong, Well-posedness of the martingale problem for superprocess with interaction, Illinois J. Math. 59 (2015) 485–497.
[14] P.E. Protter, Stochastic Integration and Differential Equations, second ed., Springer, Berlin, 2005.
[15] S. Roelly, A criterion of convergence of measure-valued processes: application to measure branching processes, Stochastics 17 (1986) 43–65.
[16] J. Xiong, SBM as the unique strong solution to an SPDE, Ann. Probab. 41 (2013) 1030–1054.
[17] J. Xiong, Three Classes of Nonlinear Stochastic Partial Differential Equations, World Scientific, Singapore, 2013.