Compensated θ-Milstein methods for stochastic differential equations with Poisson jumps

Quanwei Ren a,∗,1, Hongjiong Tian b,2

a College of Science, Henan University of Technology, Zhengzhou, Henan 450001, P.R. China
b Department of Mathematics, Shanghai Normal University, 100 Guilin Road, Shanghai 200234, P.R. China

∗ Corresponding author. E-mail address: [email protected] (Q. Ren).
1 The work of this author is supported by the High-Level Personal Foundation of Henan University of Technology (No. 2017BS023), the youth support project for basic research of Henan University of Technology (2018QNJH17), the Foundation of Henan Educational Committee (No. 19A110013) and the NSF of China (No. 11801146).
2 The work of this author is supported in part by E-Institutes of Shanghai Municipal Education Commission under Grant No. E03004, the National Natural Science Foundation of China under Grant Nos. 11671266 and 11871343, and the Natural Science Foundation of Shanghai under Grant No. 16ZR1424900.
Article history: Received 23 June 2019; Received in revised form 30 August 2019; Accepted 9 September 2019.

Keywords: Compensated θ-Milstein methods; Mean-square convergence; Asymptotic mean-square stability; Poisson jump
Abstract. In this paper, we are concerned with numerical methods for solving stochastic differential equations with Poisson-driven jumps. We construct a class of compensated θ-Milstein methods and study their mean-square convergence and asymptotic mean-square stability. Sufficient and necessary conditions for the asymptotic mean-square stability of the compensated θ-Milstein methods when applied to a scalar linear test equation are derived. We compare the asymptotic mean-square stability region of the linear test equation with that of the compensated θ-Milstein methods with different θ values. Numerical results are given to verify our theoretical results.

© 2019 IMACS. Published by Elsevier B.V. All rights reserved.
1. Introduction

Consider jump-diffusion Itô stochastic differential equations (SDEs) of the form

dX(t) = f(t, X(t⁻)) dt + g(t, X(t⁻)) dW(t) + h(t, X(t⁻)) dN_λ(t),   t ∈ [0, T],
X(0⁻) = X_0 ∈ ℝ,   (1.1)
where f(t, x) : [0, T] × ℝ → ℝ is the drift coefficient, g(t, x) : [0, T] × ℝ → ℝ is the diffusion coefficient, h(t, x) : [0, T] × ℝ → ℝ is the jump coefficient, and X(t⁻) = lim_{s→t⁻} X(s). For t ≥ 0, W(t) is a standard Wiener process and N_λ(t) is a homogeneous Poisson process (independent of W(t)) with jump intensity λ, both defined on an appropriate complete probability space (Ω, F, P) with a filtration (F_t)_{t∈[0,T]} satisfying the usual conditions (i.e., it is increasing and right-continuous, and F_0 contains all P-null sets). Let X_0 be F_0-measurable, independent of W(t) and N_λ(t) for all t ∈ [0, T], and satisfy

E|X_0|² < ∞.   (1.2)

We assume that the functions f(t, x), g(t, x) and h(t, x) satisfy the global Lipschitz condition
|φ(t, x) − φ(t, y)| ≤ L_φ |x − y|,   for φ(t, x) = f(t, x), g(t, x), or h(t, x),   (1.3)
and the linear growth condition
|φ(t, x)|² ≤ K_φ (1 + |x|²),   for φ(t, x) = f(t, x), g(t, x), or h(t, x),   (1.4)
where L_φ and K_φ are positive constants independent of x, y ∈ ℝ. Then a unique global solution of (1.1) exists (see, e.g., [16]). Stochastic models of this form arise in a range of scientific, engineering and financial applications (see, e.g., [2,6,16]).

There is a growing number of works on numerical methods for the solution of jump-diffusion Itô SDEs (1.1). Maghsoodi [12] used Taylor expansions in the semi-group of conditional expectations to derive first-order mean-square convergent schemes for the numerical solution of jump-diffusion SDEs. Maghsoodi [13] presented first-order strongly and weakly convergent numerical schemes for general nonlinear jump-diffusion SDEs. Gardoń proposed numerical approximations of strong order 1 and order 1.5 in [4] and [5], respectively, for solving (1.1). Higham and Kloeden [8] presented and studied the strong convergence and asymptotic mean-square stability of the split-step backward Euler and compensated split-step backward Euler methods under a one-sided Lipschitz condition on the drift coefficient. Higham and Kloeden [9] constructed a class of θ-methods and analyzed their strong convergence and asymptotic mean-square stability. In [10], Higham and Kloeden proved that three variants of the backward Euler method converge with strong order one half under one-sided Lipschitz and polynomial growth conditions on the drift coefficient and global Lipschitz conditions on the diffusion and jump coefficients. Chalmers and Higham [1] studied the asymptotic stability of jump-diffusion Itô SDEs and of their numerical approximations by θ-Euler discretizations. Wang and Gan [17] investigated the strong convergence and asymptotic mean-square stability of the compensated θ-Euler method for jump-diffusion Itô SDEs. Ma, Ding and Ding [11] discussed the mean-square dissipativity of some numerical methods for jump-diffusion Itô SDEs.

Our aim in this paper is to propose a class of compensated θ-Milstein methods for the jump-diffusion Itô SDEs (1.1) and to establish their mean-square convergence and asymptotic mean-square stability. We present the compensated θ-Milstein methods in Section 2 and discuss their mean-square convergence in Section 3. In Section 4, sufficient and necessary conditions for the asymptotic mean-square stability of the compensated θ-Milstein methods when applied to a linear test equation are provided. Numerical results are given in Section 5 to confirm our theoretical results. Finally, some concluding remarks are included in Section 6.

2. Compensated θ-Milstein methods

We first introduce the compensated Itô-Taylor expansion for the solution of the jump-diffusion Itô SDEs (1.1). This stochastic expansion generalizes the deterministic Taylor formula and the Itô-Taylor expansion [14] for diffusions to the case of SDEs with Poisson jumps, and will be used to construct the compensated θ-Milstein methods. Let u(t, x) be twice continuously differentiable in x and once in t, and let X(t) be the solution of (1.1). It follows from the stochastic jump-diffusion chain rule [7,15] that
u(t, X(t)) = u(t₀, X(t₀)) + ∫_{t₀}^{t} L^{(0)} u(s, X(s⁻)) ds + ∫_{t₀}^{t} L^{(1)} u(s, X(s⁻)) dW(s) + ∫_{t₀}^{t} L^{(−1)} u(s, X(s⁻)) dN_λ(s),   0 ≤ t₀ ≤ t ≤ T,   (2.1)
where
L^{(0)} u(t, x) = u_t(t, x) + f(t, x) u_x(t, x) + ½ g²(t, x) u_{xx}(t, x),
L^{(1)} u(t, x) = g(t, x) u_x(t, x),
L^{(−1)} u(t, x) = u(t, x + h(t, x)) − u(t, x).
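For instance, for the linear coefficients f(t, x) = σx, g(t, x) = μx, h(t, x) = νx of the scalar test equation studied in Section 4, these operators give

L^{(1)} g = μ² x,   L^{(−1)} g = μν x,   L^{(1)} h = μν x,   L^{(−1)} h = ν² x,

which are precisely the factors that multiply the multiple stochastic integrals when the compensated θ-Milstein method is applied to that equation.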
Introduce the compensated Poisson process Ñ_λ(t) := N_λ(t) − λt. Note that the stochastic process Ñ_λ(t) is a martingale and satisfies
E[Ñ_λ(t) − Ñ_λ(s)] = 0,   E[|Ñ_λ(t) − Ñ_λ(s)|²] = λ(t − s),   0 ≤ s ≤ t ≤ T,   (2.2)
and the isometry
E[(∫_{t₀}^{t} u(s, X(s)) dÑ_λ(s))²] = λ ∫_{t₀}^{t} E[u²(s, X(s))] ds,   0 ≤ t₀ ≤ t ≤ T.   (2.3)

Defining
f_λ(t, x) := f(t, x) + λ h(t, x) and

L̃^{(0)} u(t, x) := L^{(0)} u(t, x) + λ [u(t, x + h(t, x)) − u(t, x)],
we may rewrite the jump-diffusion system (1.1) and Itô's formula (2.1) as
dX(t) = f_λ(t, X(t⁻)) dt + g(t, X(t⁻)) dW(t) + h(t, X(t⁻)) dÑ_λ(t),   (2.4)
and
u(t, X(t)) = u(t₀, X(t₀)) + ∫_{t₀}^{t} L̃^{(0)} u(s, X(s⁻)) ds + ∫_{t₀}^{t} L^{(1)} u(s, X(s⁻)) dW(s) + ∫_{t₀}^{t} L^{(−1)} u(s, X(s⁻)) dÑ_λ(s),   (2.5)
respectively. To obtain the Itô-Taylor expansion for u(t, X(t)), we need some additional definitions and notation. A row vector α = (j₁, j₂, ..., j_l), where j_i ∈ {0, 1, −1} for i ∈ {1, 2, ..., l}, is called a multi-index of length l = l(α) ∈ ℕ. The multi-index of length zero is denoted by v. The set of all multi-indices α is denoted by

M = {v} ∪ {(j₁, j₂, ..., j_l) : j_i ∈ {0, 1, −1}, i ∈ {1, 2, ..., l}, l ∈ ℕ}.

Given α ∈ M with l(α) ≥ 1, we write α− for the multi-index obtained by deleting the last component of α and −α for the multi-index obtained by deleting the first component of α. A nonempty set A ⊂ M of multi-indices is called a hierarchical set if sup_{α∈A} l(α) is finite and −α ∈ A for each α ∈ A \ {v}. The remainder set B(A) of a given hierarchical set A is defined as B(A) = {α ∈ M \ A : −α ∈ A}. Let ρ and τ be two stopping times with 0 ≤ ρ ≤ τ ≤ T almost surely. For a multi-index α ∈ M, we denote by I_α^{ρ,τ}[u(·)] a multiple stochastic integral of a process u whose trajectories are almost surely left-continuous with right-hand limits on [0, T], defined recursively in the following way:
I_α^{ρ,τ}[u(·)] = u(τ)   when l = 0, α = v,
I_α^{ρ,τ}[u(·)] = ∫_ρ^τ I_{α−}^{ρ,z}[u(·)] dz   when l ≥ 1, j_l = 0,
I_α^{ρ,τ}[u(·)] = ∫_ρ^τ I_{α−}^{ρ,z}[u(·)] dW(z)   when l ≥ 1, j_l = 1,
I_α^{ρ,τ}[u(·)] = ∫_ρ^τ I_{α−}^{ρ,z}[u(·)] dÑ_λ(z)   when l ≥ 1, j_l = −1.   (2.6)
If the integrand is u ≡ 1, we abbreviate I_α^{t₀,t}[u] as I_α^{t₀,t}. For a sufficiently smooth function u = u(t, x), the Itô-Taylor expansion of u(t, X(t)) for a hierarchical set A of multi-indices is given by

u(t, X(t)) = Σ_{α∈A} (L_α u)(t₀, X(t₀)) I_α^{t₀,t} + Σ_{α∈B(A)} I_α^{t₀,t}[L_α u],   (2.7)
where X(t) is the solution of (1.1) and L_α denotes the composition L^{(j₁)} ∘ · · · ∘ L^{(j_l)} for α = (j₁, ..., j_l), l > 0. Applying this expansion to the coefficients f, g, h, we may replace X by its stochastic Itô-Taylor expansion, which allows higher-order approximations if the coefficients are expanded sufficiently many times.

Consider an equidistant discretization 0 = t₀ < t₁ < · · · < t_N = T of the time interval [0, T] with a uniform step size Δt = T/N for a fixed natural number N. Denote t_n = nΔt and X_n ≈ X(t_n), n = 0, 1, 2, ..., N. We propose the following class of compensated θ-Milstein methods to compute numerical approximations {X_n} to the solution of (1.1):
X_{n+1} = X_n + [(1 − θ) f_λ(t_n, X_n) + θ f_λ(t_{n+1}, X_{n+1})] Δt + g(t_n, X_n) ΔW_n + h(t_n, X_n) ΔÑ_{λ,n}
        + (L^{(1)}g)(t_n, X_n) I_{(1,1)}^{t_n,t_{n+1}} + (L^{(−1)}g)(t_n, X_n) I_{(−1,1)}^{t_n,t_{n+1}}
        + (L^{(1)}h)(t_n, X_n) I_{(1,−1)}^{t_n,t_{n+1}} + (L^{(−1)}h)(t_n, X_n) I_{(−1,−1)}^{t_n,t_{n+1}},   n = 0, 1, ..., N − 1,   (2.8)

where θ ∈ [0, 1] is a parameter, ΔW_n = W(t_{n+1}) − W(t_n), ΔÑ_{λ,n} = Ñ_λ(t_{n+1}) − Ñ_λ(t_n), and I_{(j₁,j₂)}^{t_n,t_{n+1}}, j₁, j₂ ∈ {1, −1}, are the multiple stochastic integrals defined in (2.6). When θ = 0, the method (2.8) reduces to the compensated (forward) Milstein scheme and is explicit in the drift, diffusion and jump terms. For θ > 0, the method (2.8) is implicit in the drift. The scheme (2.8) corresponds to the compensated trapezoidal rule when θ = 1/2 and to the compensated backward Milstein method when θ = 1.
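To make the structure of (2.8) concrete, the following minimal Python sketch (our own illustration; function and variable names are ours) assembles one step of the scheme for a scalar equation. The coefficient functions, their x-derivatives, the increments ΔW_n, ΔÑ_{λ,n} and the four multiple integrals are supplied by the caller and must be generated consistently with one realization of the driving processes (for the linear test equation of Section 4 they admit the closed forms recalled there). The drift-implicitness for θ > 0 is resolved here by a simple fixed-point iteration, which is one possible choice and is not prescribed by the method itself.

```python
def theta_milstein_step(xn, tn, dt, theta, lam,
                        f, g, h, gx, hx,
                        dW, dNtilde, I11, Im11, I1m1, Im1m1,
                        fp_iters=50, tol=1e-12):
    """One step of the compensated theta-Milstein scheme (2.8) for a scalar SDE.

    f, g, h  : coefficient functions (t, x) -> float
    gx, hx   : partial derivatives of g and h with respect to x
    dW       : Brownian increment over [tn, tn + dt]
    dNtilde  : compensated Poisson increment over the same interval
    I11, Im11, I1m1, Im1m1 : the multiple integrals I_(1,1), I_(-1,1),
                             I_(1,-1), I_(-1,-1) of (2.6) on [tn, tn + dt]
    """
    flam = lambda t, x: f(t, x) + lam * h(t, x)          # compensated drift f_lambda
    L1g  = lambda t, x: g(t, x) * gx(t, x)               # L^(1) g
    Lm1g = lambda t, x: g(t, x + h(t, x)) - g(t, x)      # L^(-1) g
    L1h  = lambda t, x: g(t, x) * hx(t, x)               # L^(1) h
    Lm1h = lambda t, x: h(t, x + h(t, x)) - h(t, x)      # L^(-1) h

    # explicit part of (2.8): everything evaluated at (t_n, X_n)
    expl = (xn + (1.0 - theta) * flam(tn, xn) * dt
            + g(tn, xn) * dW + h(tn, xn) * dNtilde
            + L1g(tn, xn) * I11 + Lm1g(tn, xn) * Im11
            + L1h(tn, xn) * I1m1 + Lm1h(tn, xn) * Im1m1)

    # drift-implicit part: fixed-point iteration for X_{n+1} (our choice of solver)
    x_new = xn
    for _ in range(fp_iters):
        x_next = expl + theta * flam(tn + dt, x_new) * dt
        if abs(x_next - x_new) < tol:
            x_new = x_next
            break
        x_new = x_next
    return x_new
```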
3. Mean-square convergence

To study the mean-square convergence order of the compensated θ-Milstein methods, we need some definitions and existing results. Let Y ∈ L²(Ω, ℝ) be a square-integrable random variable. The mean-square norm of Y, with E the expectation with respect to P, is defined by ‖Y‖_{L²} = [E|Y|²]^{1/2}.

Definition 3.1. The compensated θ-Milstein method (2.8) for solving the SDE (1.1) is called mean-square convergent with order γ if the global error of the numerical approximation satisfies

max_{k=1,2,...,N} ‖X(t_k) − X_k‖_{L²} ≤ C Δt^γ,
where the constant C > 0 is independent of the step size Δt.

Lemma 3.1. [4] Suppose that Conditions (1.2)-(1.4) hold for the SDE (1.1). Then its solution X(t) satisfies

E[|X(t)|²] ≤ e^{Ct}(1 + E[|X(0)|²]) − 1,   ∀ t ∈ [0, T],

where the constant C is independent of t.

Lemma 3.2 (Gronwall's inequality [3]). Let a_n, n = 1, ..., N, and C₁, C₂ be nonnegative numbers. Assume that

a_n ≤ C₁ + (C₂/N) Σ_{i=1}^{n−1} a_i,   n = 1, ..., N,

hold. Then

max_{n=1,...,N} a_n ≤ C₁ e^{C₂}.
Before estimating the global error of the numerical approximation, we consider its local error, which is understood as the defect obtained when the exact solution is inserted into (2.8). The local error, denoted by L_{n+1}, is defined as

L_{n+1} = X(t_{n+1}) − X(t_n) − [(1 − θ) f_λ(t_n, X(t_n)) + θ f_λ(t_{n+1}, X(t_{n+1}))] Δt
        − g(t_n, X(t_n)) ΔW_n − h(t_n, X(t_n)) ΔÑ_{λ,n}
        − (L^{(1)}g)(t_n, X(t_n)) I_{(1,1)}^{t_n,t_{n+1}} − (L^{(−1)}g)(t_n, X(t_n)) I_{(−1,1)}^{t_n,t_{n+1}}
        − (L^{(1)}h)(t_n, X(t_n)) I_{(1,−1)}^{t_n,t_{n+1}} − (L^{(−1)}h)(t_n, X(t_n)) I_{(−1,−1)}^{t_n,t_{n+1}},   n = 0, 1, ..., N − 1.   (3.1)
Lemma 3.3. Assume that the coefficients f_λ(t, x), g(t, x), h(t, x) are twice continuously differentiable in x and once in t, and that L^{(j₁)}(φ), L^{(j₁,j₂)}(φ), j₁, j₂ = 0, 1, −1, φ = f_λ(t, x), g(t, x), h(t, x), satisfy the linear growth condition (1.4). Then the local error (3.1) of the compensated θ-Milstein scheme (2.8) admits the representation
L_{n+1} = R̂_2^{t_n,t_{n+1}} + Ŝ_{1.5}^{t_n,t_{n+1}},   n = 0, 1, ..., N − 1,   (3.2)

with

‖R̂_2^{t_n,t_{n+1}}‖_{L²} = O(Δt²),   ‖Ŝ_{1.5}^{t_n,t_{n+1}}‖_{L²} = O(Δt^{3/2}).   (3.3)
Proof. Applying the Itô-Taylor expansion (2.7) with A = {v} on the interval [t_n, t_{n+1}] to the function f_λ(t, X(t)), we have

f_λ(t_{n+1}, X(t_{n+1})) = f_λ(t_n, X(t_n)) + R_1^{t_n,t_{n+1}}(f_λ) + S_{0.5}^{t_n,t_{n+1}}(f_λ),   (3.4)

where

R_1^{t_n,t_{n+1}}(f_λ) = I_{(0)}^{t_n,t_{n+1}}(L̃^{(0)} f_λ),   S_{0.5}^{t_n,t_{n+1}}(f_λ) = I_{(1)}^{t_n,t_{n+1}}(L^{(1)} f_λ) + I_{(−1)}^{t_n,t_{n+1}}(L^{(−1)} f_λ).
1}, we obtain Using the Itô-Taylor expansion (2.7) to u (t , x) = x with A = { v , (k), (0), (k, j ) : k, j = 1, −
X(t_{n+1}) − X(t_n) = f_λ(t_n, X(t_n)) Δt + g(t_n, X(t_n)) ΔW_n + h(t_n, X(t_n)) ΔÑ_{λ,n}
        + (L^{(1)}g)(t_n, X(t_n)) I_{(1,1)}^{t_n,t_{n+1}} + (L^{(−1)}g)(t_n, X(t_n)) I_{(−1,1)}^{t_n,t_{n+1}}
        + (L^{(1)}h)(t_n, X(t_n)) I_{(1,−1)}^{t_n,t_{n+1}} + (L^{(−1)}h)(t_n, X(t_n)) I_{(−1,−1)}^{t_n,t_{n+1}}
        + R_2^{t_n,t_{n+1}} + S_{1.5}^{t_n,t_{n+1}},   (3.5)

where

R_2^{t_n,t_{n+1}} = I_{(0,0)}^{t_n,t_{n+1}}(L̃^{(0)} f_λ),

S_{1.5}^{t_n,t_{n+1}} = I_{(1,0)}^{t_n,t_{n+1}}(L^{(1)} f_λ) + I_{(−1,0)}^{t_n,t_{n+1}}(L^{(−1)} f_λ) + I_{(0,1)}^{t_n,t_{n+1}}(L̃^{(0)} g) + I_{(0,−1)}^{t_n,t_{n+1}}(L̃^{(0)} h)
        + I_{(0,1,1)}^{t_n,t_{n+1}}(L^{(0,1)} g) + I_{(0,−1,1)}^{t_n,t_{n+1}}(L^{(0,−1)} g) + I_{(1,−1,1)}^{t_n,t_{n+1}}(L^{(1,−1)} g) + I_{(−1,−1,1)}^{t_n,t_{n+1}}(L^{(−1,−1)} g)
        + I_{(1,1,1)}^{t_n,t_{n+1}}(L^{(1,1)} g) + I_{(−1,1,1)}^{t_n,t_{n+1}}(L^{(−1,1)} g)
        + I_{(0,1,−1)}^{t_n,t_{n+1}}(L^{(0,1)} h) + I_{(1,1,−1)}^{t_n,t_{n+1}}(L^{(1,1)} h) + I_{(−1,1,−1)}^{t_n,t_{n+1}}(L^{(−1,1)} h)
        + I_{(0,−1,−1)}^{t_n,t_{n+1}}(L^{(0,−1)} h) + I_{(1,−1,−1)}^{t_n,t_{n+1}}(L^{(1,−1)} h) + I_{(−1,−1,−1)}^{t_n,t_{n+1}}(L^{(−1,−1)} h).
By using (3.4)-(3.5), we may write the local error (3.1) in the form (3.2) with
R̂_2^{t_n,t_{n+1}} = R_2^{t_n,t_{n+1}} − θ I_{(0)}^{t_n,t_{n+1}}(L̃^{(0)} f_λ) Δt,   Ŝ_{1.5}^{t_n,t_{n+1}} = S_{1.5}^{t_n,t_{n+1}} − θ [I_{(1)}^{t_n,t_{n+1}}(L^{(1)} f_λ) + I_{(−1)}^{t_n,t_{n+1}}(L^{(−1)} f_λ)] Δt.
The estimates (3.3) follow from Lemma 3.1 and the properties of the multiple stochastic integrals. This completes the proof. □
Theorem 3.1. Assume that the coefficients f_λ(t, x), g(t, x), h(t, x) are twice continuously differentiable in x and once in t, and that L^{(j₁)}(φ), L^{(j₁,j₂)}(φ), j₁, j₂ = 0, 1, −1, φ = f_λ(t, x), g(t, x), h(t, x), satisfy the linear growth condition (1.4) and the Lipschitz condition (1.3). Then the compensated θ-Milstein scheme (2.8) is mean-square convergent with order γ = 1, and the global error of the numerical approximation satisfies

max_{n=1,2,...,N} ‖X(t_n) − X_n‖_{L²} ≤ C Δt,   (3.6)

where C is independent of Δt.

Proof. Subtracting (2.8) from (3.1), we have
X(t_{n+1}) − X_{n+1} = X(t_n) − X_n + Δt {(1 − θ)[f_λ(t_n, X(t_n)) − f_λ(t_n, X_n)] + θ[f_λ(t_{n+1}, X(t_{n+1})) − f_λ(t_{n+1}, X_{n+1})]}
        + [g(t_n, X(t_n)) − g(t_n, X_n)] ΔW_n + [h(t_n, X(t_n)) − h(t_n, X_n)] ΔÑ_{λ,n}
        + [(L^{(1)}g)(t_n, X(t_n)) − (L^{(1)}g)(t_n, X_n)] I_{(1,1)}^{t_n,t_{n+1}}
        + [(L^{(−1)}g)(t_n, X(t_n)) − (L^{(−1)}g)(t_n, X_n)] I_{(−1,1)}^{t_n,t_{n+1}}
        + [(L^{(1)}h)(t_n, X(t_n)) − (L^{(1)}h)(t_n, X_n)] I_{(1,−1)}^{t_n,t_{n+1}}
        + [(L^{(−1)}h)(t_n, X(t_n)) − (L^{(−1)}h)(t_n, X_n)] I_{(−1,−1)}^{t_n,t_{n+1}} + L_{n+1},   n = 0, 1, ..., N − 1.

Denote ε_n = X(t_n) − X_n. Then we obtain the recursive relation
ε_{n+1} = ε_n + Δt Δf_n + ΔI_n^{t_n,t_{n+1}} + L_{n+1} =: ε_n + φ_n + ψ_n + L_{n+1},   (3.7)
where
Δf_n := (1 − θ)[f_λ(t_n, X(t_n)) − f_λ(t_n, X_n)] + θ[f_λ(t_{n+1}, X(t_{n+1})) − f_λ(t_{n+1}, X_{n+1})],

ΔI_n^{t_n,t_{n+1}} := [g(t_n, X(t_n)) − g(t_n, X_n)] ΔW_n + [h(t_n, X(t_n)) − h(t_n, X_n)] ΔÑ_{λ,n}
        + [(L^{(1)}g)(t_n, X(t_n)) − (L^{(1)}g)(t_n, X_n)] I_{(1,1)}^{t_n,t_{n+1}}
        + [(L^{(−1)}g)(t_n, X(t_n)) − (L^{(−1)}g)(t_n, X_n)] I_{(−1,1)}^{t_n,t_{n+1}}
        + [(L^{(1)}h)(t_n, X(t_n)) − (L^{(1)}h)(t_n, X_n)] I_{(1,−1)}^{t_n,t_{n+1}}
        + [(L^{(−1)}h)(t_n, X(t_n)) − (L^{(−1)}h)(t_n, X_n)] I_{(−1,−1)}^{t_n,t_{n+1}}.
It follows from (3.7) that
ε_{n+1} = ε_0 + Σ_{i=0}^{n} φ_i + Σ_{i=0}^{n} ψ_i + Σ_{i=0}^{n} L_{i+1},

which implies

|ε_{n+1}|² ≤ 4 [ |ε_0|² + |Σ_{i=0}^{n} φ_i|² + |Σ_{i=0}^{n} ψ_i|² + |Σ_{i=0}^{n} L_{i+1}|² ].   (3.8)
We first estimate the expectation of the second term in (3.8) as follows
E|Σ_{i=0}^{n} φ_i|² ≤ N Δt² Σ_{i=0}^{n} E|Δf_i|² ≤ 2T Δt L_{f_λ}² Σ_{i=0}^{n} [(1 − θ)² E|ε_i|² + θ² E|ε_{i+1}|²]
        ≤ 2T Δt L_{f_λ}² [θ² E|ε_{n+1}|² + C_θ Σ_{i=0}^{n} E|ε_i|²],   (3.9)

where L_{f_λ} is the Lipschitz constant of f_λ and C_θ = 2 max{θ², (1 − θ)²}. Since every ψ_i is F_{t_{i+1}}-measurable and E(ψ_i | F_{t_i}) = 0, the expectations of products of ψ-terms from different subintervals vanish. Thus, there exists a positive constant L such that
E|Σ_{i=0}^{n} ψ_i|² ≤ Σ_{i=0}^{n} E|ΔI_i^{t_i,t_{i+1}}|² ≤ 6Δt L² Σ_{i=0}^{n} E|ε_i|².   (3.10)
Using (3.9)-(3.10) and setting C₁ := 2T L_{f_λ}² θ², C₂ := 2T L_{f_λ}² C_θ + 6L², we arrive at
E|ε_{n+1}|² ≤ 4 [ E|ε_0|² + C₁ Δt E|ε_{n+1}|² + C₂ Δt Σ_{i=0}^{n} E|ε_i|² + E|Σ_{i=0}^{n} L_{i+1}|² ],   n = 0, 1, ..., N − 1.

If necessary, we may choose some value Δt₀ such that 4C₁ Δt < 1/2 for all Δt < Δt₀ and conclude that

E|ε_{n+1}|² ≤ 8 E|ε_0|² + 8C₂ T (1/N) Σ_{i=0}^{n} E|ε_i|² + 8 E|Σ_{i=0}^{n} L_{i+1}|².   (3.11)
Similarly to the derivation of (3.10), we obtain

E|Σ_{i=0}^{n} L_{i+1}|² = E|Σ_{i=0}^{n} (R̂_2^{t_i,t_{i+1}} + Ŝ_{1.5}^{t_i,t_{i+1}})|² ≤ 2N Σ_{i=0}^{n} E|R̂_2^{t_i,t_{i+1}}|² + 2 Σ_{i=0}^{n} E|Ŝ_{1.5}^{t_i,t_{i+1}}|² ≤ O(Δt²).   (3.12)
It follows from (3.11)-(3.12) and Lemma 3.2 that

max_{n=0,1,...,N−1} E|ε_{n+1}|² ≤ Ĉ [ E|ε_0|² + O(Δt²) ],

where Ĉ = 8 exp(8C₂T). Taking the square root of both sides of the above inequality yields the desired estimate (3.6). This completes the proof. □

4. Asymptotic mean-square stability

To study the asymptotic mean-square stability of a compensated θ-Milstein method, we apply it to the scalar linear test equation
dX(t) = σ X(t⁻) dt + μ X(t⁻) dW(t) + ν X(t⁻) dN_λ(t),   (4.1)
where σ, μ and ν are real values. The test equation (4.1) is said to be asymptotically mean-square stable if any solution of (4.1) satisfies lim_{t→∞} E[X²(t)] = 0. It is known [9] that (4.1) is asymptotically mean-square stable if and only if

2σ + μ² + λν(2 + ν) < 0.   (4.2)

The set
S_SDE = {(σ, μ, ν) ∈ ℝ³ : lim_{t→∞} E[X²(t)] = 0} = {(σ, μ, ν) ∈ ℝ³ : 2σ + μ² + λν(2 + ν) < 0},   (4.3)
is called the asymptotic mean-square stability region of the test equation (4.1). By analogy with the asymptotic mean-square stability of the test equation, we introduce the following definition for the compensated θ-Milstein method (2.8).

Definition 4.1. The compensated θ-Milstein method (2.8) applied to the linear test equation (4.1) is said to be asymptotically mean-square stable if its numerical solution X_n satisfies

lim_{n→∞} E[X_n²] = 0.
Note that

I_{(1,1)}^{t_n,t_{n+1}} = ½ (ΔW_n² − Δt),
I_{(−1,1)}^{t_n,t_{n+1}} = Σ_{i=N_λ(t_n)+1}^{N_λ(t_{n+1})} [W(τ(i)) − W(t_n)] − λ ∫_{t_n}^{t_{n+1}} W(s) ds + λ W(t_n) Δt,
I_{(1,−1)}^{t_n,t_{n+1}} = ΔW_n ΔÑ_{λ,n} − I_{(−1,1)}^{t_n,t_{n+1}},
I_{(−1,−1)}^{t_n,t_{n+1}} = ½ (ΔÑ_{λ,n}² − ΔN_{λ,n}),

where τ(i) is the instant of the i-th jump of the Poisson process N_λ(t). Applying the compensated θ-Milstein method (2.8) to the test equation (4.1), we obtain
X_{n+1} = [1 / (1 − θ(σ + λν)Δt)] [1 + (1 − θ)(σ + λν)Δt + μ ΔW_n + ν ΔÑ_{λ,n}
        + ½ μ² (ΔW_n² − Δt) + μν ΔW_n ΔÑ_{λ,n} + ½ ν² (ΔÑ_{λ,n}² − ΔÑ_{λ,n} − λΔt)] X_n,

whenever 1 − θ(σ + λν)Δt ≠ 0. We write the above recursion in the form
X_{n+1} = (a + b ΔW_n + c ΔÑ_{λ,n} + d ΔW_n² + p ΔW_n ΔÑ_{λ,n} + q ΔÑ_{λ,n}²) X_n,   (4.4)
where

a = [1 + (1 − θ)(σ + λν)Δt − ½ μ² Δt − ½ λν² Δt] / [1 − θ(σ + λν)Δt],   b = μ / [1 − θ(σ + λν)Δt],
c = (ν − ½ ν²) / [1 − θ(σ + λν)Δt],   d = ½ μ² / [1 − θ(σ + λν)Δt],
p = μν / [1 − θ(σ + λν)Δt],   q = ½ ν² / [1 − θ(σ + λν)Δt].
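These coefficients are straightforward to evaluate numerically. The following Python helper (our own; the function name is ours) is reused in the stability and simulation sketches below.

```python
def test_eq_coeffs(sigma, mu, nu, lam, theta, dt):
    """Coefficients a, b, c, d, p, q of the one-step recursion (4.4)."""
    D = 1.0 - theta * (sigma + lam * nu) * dt          # common denominator
    a = (1.0 + (1.0 - theta) * (sigma + lam * nu) * dt
         - 0.5 * mu**2 * dt - 0.5 * lam * nu**2 * dt) / D
    b = mu / D
    c = (nu - 0.5 * nu**2) / D
    d = 0.5 * mu**2 / D
    p = mu * nu / D
    q = 0.5 * nu**2 / D
    return a, b, c, d, p, q
```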
Notice that ΔW_i, ΔÑ_{λ,i}, i = 0, 1, ..., N − 1, are mutually independent and

E[ΔW_n] = 0,   E[ΔW_n²] = Δt,   E[ΔW_n³] = 0,   E[ΔW_n⁴] = 3Δt²,
E[ΔÑ_{λ,n}] = 0,   E[ΔÑ_{λ,n}²] = λΔt,   E[ΔÑ_{λ,n}³] = λΔt,   E[ΔÑ_{λ,n}⁴] = λΔt(1 + 3λΔt).
Squaring both sides of (4.4) and then taking the expectation, we get

E[X_{n+1}²] = {a² + Δt [b² + λc² + 3d²Δt + λp²Δt + λ(1 + 3λΔt)q² + 2ad + 2λq(a + c + dΔt)]} E[X_n²].   (4.5)
According to Definition 4.1, we obtain the following theorem.

Theorem 4.1. The compensated θ-Milstein method (2.8) is asymptotically mean-square stable if and only if

C* := a² + Δt [b² + λc² + 3d²Δt + λp²Δt + λ(1 + 3λΔt)q² + 2ad + 2λq(a + c + dΔt)] < 1.   (4.6)

The stability condition (4.6) can be expressed in terms of the parameters σ, μ, ν, λ, θ and Δt.
Corollary 4.1. The compensated θ-Milstein method (2.8) is asymptotically mean-square stable if and only if

2σ + μ² + λν(2 + ν) + Δt [(1 − 2θ)(σ + λν)² + ½ (μ² + λν²)²] < 0.   (4.7)
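Both characterizations are easy to evaluate; the sketch below (our own helpers, building on test_eq_coeffs above) computes C* of (4.6) and the left-hand side of (4.7), which by Corollary 4.1 deliver the same verdict: C* < 1 exactly when the left-hand side of (4.7) is negative.

```python
def cstar(sigma, mu, nu, lam, theta, dt):
    """Stability quantity C* of Theorem 4.1, computed from the coefficients of (4.4)."""
    a, b, c, d, p, q = test_eq_coeffs(sigma, mu, nu, lam, theta, dt)
    return a**2 + dt * (b**2 + lam * c**2 + 3.0 * d**2 * dt + lam * p**2 * dt
                        + lam * (1.0 + 3.0 * lam * dt) * q**2
                        + 2.0 * a * d + 2.0 * lam * q * (a + c + d * dt))

def lhs_47(sigma, mu, nu, lam, theta, dt):
    """Left-hand side of the equivalent stability condition (4.7)."""
    return (2.0 * sigma + mu**2 + lam * nu * (2.0 + nu)
            + dt * ((1.0 - 2.0 * theta) * (sigma + lam * nu)**2
                    + 0.5 * (mu**2 + lam * nu**2)**2))

# Case (B) of Section 5 (sigma=-2, mu=1, nu=0.1, lam=10, theta=0, dt=0.5):
# cstar(...) is about 0.9513 < 1 and lhs_47(...) is negative, so both report
# stability, in agreement with Table 1.
```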
Fig. 1. Comparison of asymptotic mean-square stability regions: θ = 0 (left) and θ = 0.25 (right). (For interpretation of the colors in the figure(s), the reader is referred to the web version of this article.)
Analogous to the set S_SDE, we define the asymptotic mean-square stability region of the compensated θ-Milstein method for given (θ, Δt) as

S_{θ-Mil}(θ, Δt) = {(σ, μ, ν) ∈ ℝ³ : 2σ + μ² + λν(2 + ν) + Δt [(1 − 2θ)(σ + λν)² + ½ (μ² + λν²)²] < 0}.
Comparing (4.7) with (4.2), it is easy to see that the stability condition (4.7) is satisfied for arbitrary Δt > 0 if the stability condition (4.2) and (1 − 2θ)(σ + λν)² + ½ (μ² + λν²)² ≤ 0 hold. So, we define

θ̄ = 1/2 + (μ² + λν²)² / [4(σ + λν)²].   (4.8)
Corollary 4.2. For all Δt > 0,

• S_{θ-Mil}(θ, Δt) ⊂ S_SDE for 0 ≤ θ < θ̄;
• S_{θ-Mil}(θ, Δt) = S_SDE for θ = θ̄;
• S_{θ-Mil}(θ, Δt) ⊃ S_SDE for θ > θ̄.

For 0 ≤ θ < θ̄ and (σ, μ, ν) ∈ S_SDE, the stability condition (4.7) is satisfied if and only if

Δt < − [2σ + μ² + λν(2 + ν)] / [(1 − 2θ)(σ + λν)² + ½ (μ² + λν²)²].   (4.9)
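For given parameters with 0 ≤ θ < θ̄, the bound (4.9) can be evaluated directly; a small helper in the same spirit as the sketches above (our own):

```python
def dt_max(sigma, mu, nu, lam, theta):
    """Step-size bound (4.9); assumes 0 <= theta < theta_bar, so the denominator is positive."""
    num = 2.0 * sigma + mu**2 + lam * nu * (2.0 + nu)
    den = (1.0 - 2.0 * theta) * (sigma + lam * nu)**2 + 0.5 * (mu**2 + lam * nu**2)**2
    return -num / den
```

For (σ, μ, ν) ∈ S_SDE the numerator is negative by (4.2), so the returned bound is positive.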
Using the sufficient and necessary conditions given in (4.2) and Corollary 4.1, we may plot the asymptotic mean-square stability regions geometrically. Set
y := (σ + λν)Δt,   z := (μ² + λν²)Δt.

The stability conditions (4.3) and (4.7) become

2y + z < 0,   (4.10)

2y + z + (1 − 2θ) y² + ½ z² < 0.   (4.11)
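Figures of the kind discussed below can be reproduced, for instance, with the following short script (our own sketch using NumPy and Matplotlib; the grid ranges are arbitrary), which shades the set defined by (4.11) and draws the boundary of (4.10):

```python
import numpy as np
import matplotlib.pyplot as plt

theta = 0.25                                      # parameter of the method
y, z = np.meshgrid(np.linspace(-6.0, 2.0, 401), np.linspace(0.0, 6.0, 401))
sde_stable    = 2.0 * y + z < 0.0                                             # (4.10)
method_stable = 2.0 * y + z + (1.0 - 2.0 * theta) * y**2 + 0.5 * z**2 < 0.0   # (4.11)

plt.contourf(y, z, method_stable.astype(float), levels=[0.5, 1.5], colors=['lightgrey'])
plt.contour(y, z, sde_stable.astype(float), levels=[0.5], colors='black')
plt.xlabel('y = (sigma + lambda nu) dt')
plt.ylabel('z = (mu^2 + lambda nu^2) dt')
plt.title('theta = 0.25')
plt.show()
```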
The stability regions S_{θ-Mil}(θ, Δt) and S_SDE are plotted in the (y, z)-plane. For example, Fig. 1 shows the asymptotic mean-square stability regions of the SDE (4.1) (white area with black boundary) and of the compensated θ-Milstein method (light-grey area) with θ = 0 (left) and θ = 0.25 (right), while Fig. 2 shows those with θ = 0.5 (left) and θ = 1 (right).

5. Numerical results

In this section we present some numerical results to confirm the mean-square convergence and the asymptotic mean-square stability of the proposed compensated θ-Milstein methods. The second moments E[X_n²] are approximated by averaging over 2000 sample paths, that is, E[X_n²] ≈ (1/2000) Σ_{i=1}^{2000} |X_n^i|², where X_n^i is the i-th path simulation.
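For the linear test equation, this estimator can be driven directly by the recursion (4.4); a minimal sketch (our own, reusing test_eq_coeffs from Section 4):

```python
import numpy as np

def mc_second_moment(sigma, mu, nu, lam, theta, dt, x0, n_steps, n_paths=2000, seed=0):
    """Monte Carlo estimate of E[X_n^2], n = 1, ..., n_steps, via the recursion (4.4)."""
    rng = np.random.default_rng(seed)
    a, b, c, d, p, q = test_eq_coeffs(sigma, mu, nu, lam, theta, dt)
    x = np.full(n_paths, float(x0))
    moments = []
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)         # Brownian increments
        dN = rng.poisson(lam * dt, size=n_paths) - lam * dt     # compensated Poisson increments
        x = (a + b * dW + c * dN + d * dW**2 + p * dW * dN + q * dN**2) * x
        moments.append(np.mean(x**2))
    return np.array(moments)
```

For case (B) below (σ = −2, μ = 1, ν = 1/10, λ = 10, Δt = 0.5, X_0 = 10), the estimated second moments decay geometrically for every θ considered, consistent with C* < 1 in Table 1.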
Fig. 2. Comparison of asymptotic mean-square stability regions: θ = 0.5 (left) and θ = 1 (right).
Fig. 3. Mean-square convergence rate of the compensated θ-Milstein method with θ = 0, 0.25, 0.5 and 1.

Table 1
Values of C* of the compensated θ-Milstein method for (B) and (C).

Method                     θ = 0     θ = 0.25   θ = 0.5   θ = 1
Value of C* for (B)        0.9513    0.8627     0.8088    0.7561
Value of C* for (C)        6.4012    1.8620     0.8324    0.4205
Example 5.1. Consider the following scalar linear test problem

dX(t) = σ X(t⁻) dt + μ X(t⁻) dW(t) + ν X(t⁻) dN_λ(t),   X(0) = X_0,   t ∈ [0, T].
The solution to this problem is

X(t) = X(0) exp[(σ − ½ μ²)t + μ W(t)] (1 + ν)^{N_λ(t)}.
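Since for this problem the compensated θ-Milstein step coincides with the recursion (4.4), numerical and exact solutions can be coupled through the same Brownian and Poisson increments, which gives a simple way to estimate the mean-square error; a sketch (our own, again reusing test_eq_coeffs):

```python
import numpy as np

def strong_error(sigma, mu, nu, lam, theta, dt, x0, T, n_paths=2000, seed=1):
    """Root-mean-square error at time T between the exact solution of Example 5.1
    and the compensated theta-Milstein approximation, driven by the same increments."""
    rng = np.random.default_rng(seed)
    n_steps = int(round(T / dt))
    a, b, c, d, p, q = test_eq_coeffs(sigma, mu, nu, lam, theta, dt)
    x_num = np.full(n_paths, float(x0))
    W = np.zeros(n_paths)                          # W(t_n), accumulated
    N = np.zeros(n_paths)                          # N_lambda(t_n), accumulated
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        dN = rng.poisson(lam * dt, size=n_paths)
        dNc = dN - lam * dt                        # compensated increment
        x_num = (a + b * dW + c * dNc + d * dW**2 + p * dW * dNc + q * dNc**2) * x_num
        W += dW
        N += dN
    x_exact = x0 * np.exp((sigma - 0.5 * mu**2) * T + mu * W) * (1.0 + nu)**N
    return np.sqrt(np.mean((x_exact - x_num)**2))
```

With the parameters of case (A) below, halving Δt should roughly halve this error, in line with the mean-square order 1 reported in Fig. 3.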
(A) Take σ = 1, μ = 1, ν = 0.5, λ = 1, X_0 = 1 and T = 1. We solve this problem by the compensated θ-Milstein method with θ = 0, 0.25, 0.5, 1 and Δt = 2⁻⁵, 2⁻⁶, 2⁻⁷, 2⁻⁸, 2⁻⁹. In order to demonstrate the convergence rate, the maximum average sample errors are recorded in Fig. 3. From the numerical results, we observe that the compensated θ-Milstein method is mean-square convergent of order 1 for each of these values of θ.

(B) Take σ = −2, μ = 1, ν = 1/10, λ = 10, X_0 = 10. We choose θ = 0, 0.25, 0.5, 1 and Δt = 0.5 and record the values of C* from Theorem 4.1 in the first row of Table 1, which shows that the corresponding compensated θ-Milstein methods are asymptotically mean-square stable. The values of log(E|X_n|²) against t plotted in Fig. 4 confirm the theoretical result.

(C) Take σ = −6, μ = 2, ν = 1/10, λ = 10, X_0 = 10. We use the compensated θ-Milstein method with θ = 0, 0.25, 0.5, 1 and step size Δt = 0.5 to solve this problem. The values of C* from Theorem 4.1 are recorded in the second row of Table 1. The values of log(E|X_n|²) plotted in Fig. 5 verify that the method with θ = 0.5 or θ = 1 is asymptotically mean-square stable,
Fig. 4. Values of log(E[| X n |2 ]) against t of the compensated θ -Milstein method with θ = 0, 0.25, 0.5 and 1 for (B).
Fig. 5. Values of log(E[| X n |2 ]) against t of the compensated θ -Milstein method with θ = 0, 0.25, 0.5 and 1 for (C).
while the method with θ = 0 or θ = 0.25 is unstable. This also confirms our theoretical result.

6. Concluding remarks

We proposed a class of compensated θ-Milstein methods for SDEs with Poisson jumps and studied the mean-square convergence of the proposed methods. Sufficient and necessary conditions for the asymptotic mean-square stability of the compensated θ-Milstein methods are obtained. Numerical results confirm our theoretical results.

References

[1] G.D. Chalmers, D.J. Higham, Asymptotic stability of a jump-diffusion equation and its numerical approximation, SIAM J. Sci. Comput. 31 (2) (2008) 1141–1155.
[2] R. Cont, P. Tankov, Financial Modelling with Jump Processes, Chapman & Hall/CRC Press, Boca Raton, FL, 2004.
[3] J. Dixon, S. McKee, Weakly singular discrete Gronwall inequalities, Z. Angew. Math. Mech. 66 (11) (1986) 535–544.
[4] A. Gardoń, The order of approximations for solutions of Itô-type stochastic differential equations with jumps, Stoch. Anal. Appl. 22 (3) (2004) 679–699.
[5] A. Gardoń, The order 1.5 approximation for solutions of jump-diffusion equations, Stoch. Anal. Appl. 24 (6) (2006) 1147–1168.
[6] P. Glasserman, Monte Carlo Methods in Financial Engineering, Springer, Berlin, 2003.
[7] F.B. Hanson, Applied Stochastic Processes and Control for Jump-Diffusions: Modeling, Analysis, and Computation, SIAM, Philadelphia, 2007.
[8] D.J. Higham, P.E. Kloeden, Numerical methods for nonlinear stochastic differential equations with jumps, Numer. Math. 101 (1) (2005) 101–119.
[9] D.J. Higham, P.E. Kloeden, Convergence and stability of implicit methods for jump-diffusion, Int. J. Numer. Anal. Model. 3 (2) (2006) 125–140.
[10] D.J. Higham, P.E. Kloeden, Strong convergence rates for backward Euler on a class of nonlinear jump-diffusion problems, J. Comput. Appl. Math. 205 (2) (2007) 949–956.
[11] Q. Ma, D. Ding, X. Ding, Mean-square dissipativity of several numerical methods for stochastic differential equations with jumps, Appl. Numer. Math. 82 (4) (2014) 44–50.
[12] Y. Maghsoodi, Mean square efficient numerical solution of jump-diffusion stochastic differential equations, Sankhya 58 (1) (1996) 25–47.
[13] Y. Maghsoodi, Exact solutions and doubly efficient approximations of jump-diffusion Itô equations, Stoch. Anal. Appl. 16 (6) (1998) 1049–1072.
[14] G.N. Milstein, Numerical Integration of Stochastic Differential Equations, vol. 313, Springer, Dordrecht, 1995.
[15] P.E. Protter, Stochastic Integration and Differential Equations: A New Approach, Springer, Berlin, 1990.
[16] K. Sobczyk, Stochastic Differential Equations with Applications to Physics and Engineering, Springer, Dordrecht, 1991.
[17] X. Wang, S. Gan, Compensated stochastic theta methods for stochastic differential equations with jumps, Appl. Numer. Math. 60 (9) (2010) 877–887.