Statistics and Probability Letters 156 (2020) 108599

Solvability of anticipated backward stochastic Volterra integral equations✩

Jiaqiang Wen a,b, Yufeng Shi a,∗

a Institute for Financial Studies and School of Mathematics, Shandong University, Jinan 250100, Shandong, China
b Department of Mathematics, Southern University of Science and Technology, Shenzhen 518055, Guangdong, China

Article history: Received 17 December 2018; Received in revised form 20 May 2019; Accepted 25 August 2019; Available online 12 September 2019.
MSC: 60H10; 60H20.
Keywords: Backward stochastic Volterra integral equation; Backward stochastic differential equation; Anticipated generator; Time-advanced.

Abstract. In this paper, we focus on a new class of equations called anticipated backward stochastic Volterra integral equations (ABSVIEs, for short), in which the generator involves not only the present information of the solution but also its future information. Under the Lipschitz condition, we obtain the existence and uniqueness of adapted M-solutions. Meanwhile, a comparison theorem is also proved. © 2019 Elsevier B.V. All rights reserved.
1. Introduction

Throughout this paper, let (Ω, F, F, P) be a complete filtered probability space on which a d-dimensional standard Brownian motion W = {W(t); t ⩾ 0} is defined, where F = {F_t; t ⩾ 0} is the natural filtration of W augmented by all the P-null sets in F. Let T > 0 be a time horizon and K ⩾ 0 a constant. Consider the following integral equation on the finite horizon [0, T]:

Y(t) = ψ(t) + ∫_t^T g(t, s, Y(s), Z(t, s), Z(s, t)) ds − ∫_t^T Z(t, s) dW(s),   (1.1)
where ψ(·) and g(·) are given maps. Such an equation is referred to as a backward stochastic Volterra integral equation (BSVIE, for short). The unknowns are the pair (Y(·), Z(·,·)) of F-adapted processes taking values in R^m × R^{m×d}. For convenience, we call ψ(·) the terminal value and g(·) the generator of BSVIE (1.1). By anticipated BSVIEs, also known as BSVIEs with anticipated generators, we mean equations of the form (1.1) in which the generator g(·) involves not only the present information of the solution (Y(·), Z(·,·)) but also its future information.

✩ This work is supported by the National Key R&D Program of China (Grant No. 2018YFA0703900), the National Natural Science Foundation of China (Grant Nos. 11871309, 11371226, 11671229, 11071145, 11526205, 11626247 and 11231005), the Foundation for Innovative Research Groups of National Natural Science Foundation of China (Grant No. 11221061) and the 111 Project, China (Grant No. B12023).
∗ Corresponding author. E-mail addresses: [email protected] (J. Wen), [email protected] (Y. Shi).
https://doi.org/10.1016/j.spl.2019.108599
0167-7152/© 2019 Elsevier B.V. All rights reserved.

Recently, as a natural extension of backward stochastic differential equations (BSDEs, for short; see below), Yong initiated the study of BSVIE (1.1); see Yong (2006, 2008), where the notion of adapted M-solutions was introduced and the well-posedness of BSVIE (1.1) was established. Since then, BSVIEs have been used extensively in risk management, capital allocation, stochastic optimal control, and other fields. For example, in Yong (2007), a class of dynamic convex and coherent risk measures was identified as a component of the adapted M-solutions of BSVIEs. Kromer and Overbeck (2017) proved a differentiability result for BSVIEs and applied it to derive continuous-time dynamic capital allocations. Shi et al. (2015) and Wang and Zhang (2017) investigated optimal control problems for BSVIEs. Some other recent developments of BSVIEs, in both theory and applications, can be found in Agram and Øksendal (2015), Djordjević and Janković (2013), Shi et al. (0000), Wang et al. (0000), Wang (0000), Wen and Shi (0000), Wang and Yong (2015), Yong (2017), etc. Now, let us recall the following BSDE:
−dY_t = f(t, Y_t, Z_t) dt − Z_t dW_t,  t ∈ [0, T];
Y_T = ξ_T.   (1.2)
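For intuition, consider (1.2) in one dimension with the linear generator f(t, y, z) = ay and terminal value ξ_T = W_T (an illustrative choice, not from the paper); Itô's formula shows that Y_t = e^{a(T−t)}W_t, Z_t = e^{a(T−t)} is the adapted solution. The sketch below checks the BSDE relation pathwise on an Euler grid:

```python
import numpy as np

# Toy check of BSDE (1.2) with f(t, y, z) = a*y and xi = W_T (illustrative
# 1-d example, not from the paper). The explicit solution is
#   Y_t = exp(a*(T - t)) * W_t,   Z_t = exp(a*(T - t)),
# and pathwise Y_0 = xi + int_0^T a*Y_s ds - int_0^T Z_s dW_s.
rng = np.random.default_rng(0)
a, T, n, paths = 0.5, 1.0, 1000, 2000
dt = T / n
t = np.linspace(0.0, T, n + 1)

dW = rng.normal(0.0, np.sqrt(dt), size=(paths, n))       # Brownian increments
W = np.concatenate([np.zeros((paths, 1)), np.cumsum(dW, axis=1)], axis=1)

Y = np.exp(a * (T - t)) * W                              # explicit solution on the grid
Z = np.exp(a * (T - t[:-1]))                             # Z at left endpoints

xi = W[:, -1]
rhs = xi + np.sum(a * Y[:, :-1] * dt, axis=1) - np.sum(Z * dW, axis=1)
residual = rhs - Y[:, 0]                                 # here Y_0 = 0
print("mean |residual| =", np.mean(np.abs(residual)))
```

The residual is of the order of the time step, consistent with the Euler discretization of both integrals.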
Since Pardoux and Peng (1990) first investigated the general nonlinear case of (1.2), BSDEs have attracted the interest of many researchers, with applications in mathematical finance (El Karoui et al., 1997), stochastic control (Yong and Zhou, 1999), and partial differential equations (Pardoux and Peng, 1992). At the same time, with a view toward applications, BSDE theory itself has developed into many different branches. For example, besides the BSVIEs presented above, Peng and Yang (2009) recently initiated the study of BSDEs with anticipated generators, also known as anticipated backward stochastic differential equations (ABSDEs, for short), which can be regarded as a new duality type of stochastic differential delay equations (SDDEs, for short). They obtained the existence and uniqueness of adapted solutions under the condition that f(·) is uniformly Lipschitz. Shortly afterwards, by regarding ABSDEs as a duality type of SDDEs, Chen and Wu (2010) obtained a maximum principle for stochastic optimal control problems with delay. Along this line, Huang and Shi (2012) obtained a maximum principle for optimal control of fully coupled forward-backward stochastic differential delayed equations. On the theoretical side, the existence and uniqueness of anticipated BSDEs with non-Lipschitz coefficients was established by Wu et al. (2012), while Wen and Shi (2017a,b) and Douissi et al. (2019) analyzed the solvability of ABSDEs driven by fractional Brownian motion and their applications to optimal control problems.

As an important development of BSDEs, beyond the mathematical extension itself, anticipated BSVIEs also have important applications in stochastic optimal control problems with delay. However, to the best of our knowledge, no study of anticipated BSVIEs is available up to now. Therefore, in this paper, we focus on the solvability of this class of equations. In detail, we study the following anticipated BSVIEs:
Y(t) = ψ(t) + ∫_t^T g(t, s, Y(s), Z(t, s), Z(s, t), Y(s + δ_s), Z(t, s + ζ_s), Z(s + ζ_s, t)) ds − ∫_t^T Z(t, s) dW(s),  t ∈ [0, T];
Y(t) = ψ(t),  t ∈ [T, T + K];
Z(t, s) = η(t, s),  (t, s) ∈ [0, T + K]² \ [0, T]²,   (1.3)
where δ(·) and ζ(·) are two deterministic R⁺-valued continuous functions defined on [0, T], and ψ(·) and η(·) are given processes. Compared with (1.1), the distinctive feature of (1.3) is that the generator g(·) involves both the present and the future values of the solution (Y(·), Z(·,·)); the main difficulty in solving Eq. (1.3) also comes from these anticipated terms. We overcome this difficulty by adapting the technique of Peng and Yang (2009). In detail, under the Lipschitz condition, we first prove the existence and uniqueness of the adapted M-solution of (1.3). Then, since the comparison theorem is an important tool in the theory and applications of anticipated BSVIEs, a comparison theorem is also presented. In future work, we will focus on the application of anticipated BSVIEs to stochastic optimal control problems with delay.

This article is organized as follows. In Section 2, some preliminaries are introduced. The solvability of anticipated BSVIE (1.3) is established in Section 3, and a comparison theorem for anticipated BSVIEs is obtained in Section 4.

2. Preliminaries
The Euclidean norm of a vector x ∈ R^m will be denoted by |x|, and for an m × d matrix A, we define |A| = √(Tr(AA*)). As usual, we understand equalities and inequalities between random variables in the P-almost sure sense. Let T > 0 be a time horizon and K ⩾ 0 a constant. We denote
∆ = {(t , s) ∈ [0, T ]2 | 0 ⩽ t ⩽ s ⩽ T },
∆c = {(t , s) ∈ [0, T ]2 | 0 ⩽ s < t ⩽ T }.
Moreover, for any t ∈ [0, T + K ] and Euclidean space H, we introduce the following spaces:
L²_{F_t}(Ω; H) = {ξ : Ω → H | ξ is F_t-measurable, ∥ξ∥ ≜ (E|ξ|²)^{1/2} < ∞},
L²_{F_T}(0, T + K; H) = {ψ : Ω × [0, T + K] → H | ψ(t) is F_{T∨t}-measurable, ∥ψ∥_{L²_{F_T}(0,T+K)} ≜ (E ∫_0^{T+K} |ψ(t)|² dt)^{1/2} < ∞},
L²_F(0, T; H) = {Y : Ω × [0, T] → H | Y(·) is F-adapted, ∥Y∥_{L²_F(0,T)} ≜ (E ∫_0^T |Y(s)|² ds)^{1/2} < ∞},

L²_F(Δ; H) = {Z : Ω × Δ → H | for t ∈ [0, T], Z(t, ·) is F-adapted, ∥Z∥_{L²_F(Δ)} ≜ (E ∫_0^T ∫_t^T |Z(t, s)|² ds dt)^{1/2} < ∞},

L²_F([0, T]²; H) = {Z : Ω × [0, T]² → H | for t ∈ [0, T], Z(t, ·) is F-adapted, ∥Z∥_{L²_F([0,T]²)} ≜ (E ∫_0^T ∫_0^T |Z(t, s)|² ds dt)^{1/2} < ∞}.
For any β ⩾ 0, let H²_Δ be the space of all pairs (Y, Z) ∈ L²_F(0, T; R^m) × L²_F(Δ; R^{m×d}) under the following norm:

∥(Y(·), Z(·,·))∥_{H²_Δ} ≡ [E ∫_0^T (e^{βt}|Y(t)|² + ∫_t^T e^{βs}|Z(t, s)|² ds) dt]^{1/2} < ∞.
Clearly, H²_Δ is a Hilbert space. Similarly, we can define L²_{F_T}(0, T; H), L²_F(0, T + K; H), L²_F([0, T + K]²; H), etc. For convenience, we rewrite BSVIE (1.1) as follows:
Y(t) = ψ(t) + ∫_t^T g(t, s, Y(s), Z(t, s), Z(s, t)) ds − ∫_t^T Z(t, s) dW(s),  t ∈ [0, T],   (2.1)
where ψ(·) ∈ L²_{F_T}(0, T), and g : Δ × R^m × R^{m×d} × R^{m×d} × Ω → R^m is B(Δ × R^m × R^{m×d} × R^{m×d}) ⊗ F_T-measurable such that s ↦ g(t, s, y, z, ϑ) is F-progressively measurable for all (t, y, z, ϑ) ∈ [0, s] × R^m × R^{m×d} × R^{m×d} with s ∈ [0, T].

(H1) There exists a constant L ⩾ 0 such that, for all (t, s) ∈ Δ, y, y′ ∈ R^m, z, z′, ϑ, ϑ′ ∈ R^{m×d},
|g(t, s, y, z, ϑ) − g(t, s, y′, z′, ϑ′)| ⩽ L(|y − y′| + |z − z′| + |ϑ − ϑ′|),

and E ∫_0^T ∫_t^T |g₀(t, s)|² ds dt < ∞, where g₀(t, s) = g(t, s, 0, 0, 0).
Definition 2.1. A pair (Y(·), Z(·,·)) ∈ L²_F(0, T; R^m) × L²_F([0, T]²; R^{m×d}) is called an adapted solution of BSVIE (2.1) if it satisfies (2.1) in the usual Itô sense. Moreover, an adapted solution (Y(·), Z(·,·)) of BSVIE (2.1) is called an adapted M-solution if the following relation holds:

Y(t) = E[Y(t)] + ∫_0^t Z(t, s) dW(s),  0 ⩽ t ⩽ T.
Proposition 2.2 (Yong, 2008, Theorem 3.7). Under (H1), for any ψ(·) ∈ L²_{F_T}(0, T; R^m), BSVIE (2.1) admits a unique adapted M-solution.

Proposition 2.3 (Shi and Wang, 2012, Lemma 3.1). Consider the following simple BSVIE:

Y(t) = ψ(t) + ∫_t^T g(t, s) ds − ∫_t^T Z(t, s) dW(s),  t ∈ [0, T],
where ψ(·) ∈ L²_{F_T}(0, T; R^m) and g(·,·) ∈ L²_F(Δ; R^m). Then the above equation has a unique adapted solution (Y, Z) ∈ H²_Δ, and the following estimate holds:

E ∫_0^T (e^{βt}|Y(t)|² + ∫_t^T e^{βs}|Z(t, s)|² ds) dt ⩽ C e^{βT} E ∫_0^T |ψ(t)|² dt + (C/β) E ∫_0^T ∫_t^T e^{βs}|g(t, s)|² ds dt.   (2.2)

Hereafter, C denotes a positive constant which may differ from line to line.
Proposition 2.4 (Wang and Yong, 2015, Theorem 3.4). For i = 0, 1, assume g^i = g^i(t, s, y, z) satisfies (H1), and let (Y^i, Z^i) ∈ H²_Δ be the adapted solution of the BSVIE

Y^i(t) = ψ^i(t) + ∫_t^T g^i(t, s, Y^i(s), Z^i(t, s)) ds − ∫_t^T Z^i(t, s) dW(s),  t ∈ [0, T].

Suppose g(t, s, y, z) satisfies (H1), y ↦ g(t, s, y, z) is nondecreasing, and

g⁰(t, s, y, z) ⩽ g(t, s, y, z) ⩽ g¹(t, s, y, z),  ∀(t, y, z) ∈ [0, s] × R^m × R^{m×d}, s ∈ [0, T].

Moreover, suppose the partial derivative g_z(t, s, y, z) exists with

g_{z₁}(t, s, y, z), …, g_{z_d}(t, s, y, z) ∈ R^{m×m}_d,  ∀(t, y, z) ∈ [0, s] × R^m × R^{m×d}, s ∈ [0, T],

where R^{m×m}_d is the set of all (m × m) diagonal matrices. Then for any ψ^i(·) ∈ L²_{F_T}(0, T) satisfying ψ⁰(t) ⩽ ψ¹(t), t ∈ [0, T], the corresponding unique adapted solutions (Y^i, Z^i) ∈ H²_Δ satisfy

Y⁰(t) ⩽ Y¹(t),  a.s., a.e. t ∈ [0, T].
3. Existence and uniqueness

In this section, we study the existence and uniqueness of adapted M-solutions of anticipated BSVIE (1.3). First, instead of (1.3), we consider the following class of anticipated BSVIEs:

Y(t) = ψ(t) + ∫_t^T g(t, s, Y(s + δ_s), Z(t, s + ζ_s), Z(s + ζ_s, t)) ds − ∫_t^T Z(t, s) dW(s),  t ∈ [0, T];
Y(t) = ψ(t),  t ∈ [T, T + K];
Z(t, s) = η(t, s),  (t, s) ∈ [0, T + K]² \ [0, T]²,   (3.1)
where δ. and ζ. are two given R⁺-valued continuous functions such that s + δ_s ⩽ T + K and s + ζ_s ⩽ T + K for all s ∈ [0, T]. Moreover, there exists a constant M ⩾ 0 such that, for all non-negative and integrable g₁(·) and g₂(·,·), and all t ∈ [0, T],

∫_t^T g₁(s + δ_s) ds ⩽ M ∫_t^{T+K} g₁(s) ds,
∫_t^T g₂(t, s + ζ_s) ds ⩽ M ∫_t^{T+K} g₂(t, s) ds,
∫_t^T g₂(s + ζ_s, t) ds ⩽ M ∫_t^{T+K} g₂(s, t) ds.   (3.2)
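Condition (3.2) is easy to verify in the simplest case of a constant delay: if δ_s ≡ δ with 0 ⩽ δ ⩽ K, the change of variables u = s + δ gives ∫_t^T g₁(s + δ) ds = ∫_{t+δ}^{T+δ} g₁(u) du ⩽ ∫_t^{T+K} g₁(u) du, so the first line of (3.2) holds with M = 1. A quick numeric sanity check of this case (the concrete g₁, T, K, δ below are illustrative assumptions):

```python
import numpy as np

# Sanity check of condition (3.2) for a constant delay delta_s = delta
# (illustrative assumption): by the change of variables u = s + delta,
#   int_t^T g1(s + delta) ds = int_{t+delta}^{T+delta} g1(u) du
#                           <= int_t^{T+K} g1(u) du,
# so (3.2) holds with M = 1, provided 0 <= delta <= K and g1 >= 0.
T, K, delta, t0 = 1.0, 0.5, 0.3, 0.2
g1 = lambda s: np.exp(-s) * (1.0 + np.sin(s)) ** 2    # arbitrary non-negative integrand

def integral(f, a, b, n=100_000):
    # midpoint rule on [a, b]
    s = np.linspace(a, b, n, endpoint=False) + (b - a) / (2 * n)
    return float(np.sum(f(s)) * (b - a) / n)

lhs = integral(lambda s: g1(s + delta), t0, T)        # int_t^T g1(s + delta_s) ds
rhs = integral(g1, t0, T + K)                         # M * int_t^{T+K} g1(s) ds, M = 1
print(f"lhs = {lhs:.6f} <= rhs = {rhs:.6f}")
```

For a genuinely time-varying continuous δ., M absorbs the (finite) distortion of the change of variables.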
We introduce the following assumption on the generator g of (3.1):

(H2) Assume that for all (t, s) ∈ Δ, g(t, s, ξ, η, ς, ω) : L²_{F_{r₁}}(Ω; R^m) × L²_{F_{r₂}}(Ω; R^{m×d}) × L²_{F_{r₃}}(Ω; R^{m×d}) × Ω → L²_{F_s}(Ω; R^m), where r₁, r₂, r₃ ∈ [s, T + K]. Moreover, there exists a constant L ⩾ 0 such that for all (t, s) ∈ Δ, ξ(·), ξ̄(·) ∈ L²_F(s, T + K; R^m), η(t, ·), η̄(t, ·), ς(·, t), ς̄(·, t) ∈ L²_F(s, T + K; R^{m×d}), we have

|g(t, s, ξ(r₁), η(t, r₂), ς(r₃, t)) − g(t, s, ξ̄(r₁), η̄(t, r₂), ς̄(r₃, t))| ⩽ L E^{F_s}[|ξ(r₁) − ξ̄(r₁)| + |η(t, r₂) − η̄(t, r₂)| + |ς(r₃, t) − ς̄(r₃, t)|].
Remark 3.1. Note that for all (t, s) ∈ Δ, g(t, s, ·) is F_s-measurable, which ensures that the solution of anticipated BSVIE (3.1) is F-adapted.

In the following, for simplicity of presentation, let H²[0, T + K] be the space of all pairs (Y(·), Z(·,·)) ∈ L²_F(0, T + K; R^m) × L²_F([0, T + K]²; R^{m×d}) under the following norm:

∥(Y(·), Z(·,·))∥_{H²[0,T+K]} ≜ [E ∫_0^{T+K} (e^{βt}|Y(t)|² + ∫_0^{T+K} e^{βs}|Z(t, s)|² ds) dt]^{1/2} < ∞.
Furthermore, let M²[0, T + K] be the set of all pairs (Y(·), Z(·,·)) ∈ H²[0, T + K] such that the following relation holds:

Y(t) = E[Y(t)] + ∫_0^t Z(t, s) dW(s),  0 ⩽ t ⩽ T + K.
Then for any (Y(·), Z(·,·)) ∈ M²[0, T + K], one can show that

E ∫_0^{T+K} (e^{βt}|Y(t)|² + ∫_0^{T+K} e^{βs}|Z(t, s)|² ds) dt ⩽ 2 E ∫_0^{T+K} (e^{βt}|Y(t)|² + ∫_t^{T+K} e^{βs}|Z(t, s)|² ds) dt.   (3.3)
In fact, from the relation Y(t) = E[Y(t)] + ∫_0^t Z(t, s) dW(s), it is easy to obtain

E ∫_0^{T+K} ∫_0^t e^{βs}|Z(t, s)|² ds dt ⩽ E ∫_0^{T+K} ∫_0^t e^{βt}|Z(t, s)|² ds dt ⩽ E ∫_0^{T+K} e^{βt}|Y(t)|² dt,   (3.4)

which in turn yields (3.3).
This means that we can use the following as an equivalent norm on M²[0, T + K]:

∥(Y(·), Z(·,·))∥_{M²[0,T+K]} ≜ [E ∫_0^{T+K} (e^{βt}|Y(t)|² + ∫_t^{T+K} e^{βs}|Z(t, s)|² ds) dt]^{1/2}.
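For a concrete feel for (3.4), one can take the toy M-solution Y(t) = μ(t) + σW(t) with Z(t, s) ≡ σ, for a deterministic function μ and a constant σ (illustrative choices). Then E|Y(t)|² = μ(t)² + σ²t, both sides of (3.4) become ordinary integrals, and the inequality reduces to ∫_0^t e^{βs}σ² ds ⩽ e^{βt}(μ(t)² + σ²t):

```python
import numpy as np

# Numeric check of (3.4) for the toy M-solution Y(t) = mu(t) + sigma*W(t),
# Z(t, s) = sigma (deterministic mu, constant sigma -- illustrative choices).
# Then E|Y(t)|^2 = mu(t)^2 + sigma^2 * t, and both sides of (3.4) are
# ordinary integrals; the inner one is int_0^t e^{beta*s} sigma^2 ds.
T, K, beta, sigma = 1.0, 0.5, 2.0, 0.7
mu = lambda t: 1.0 + 0.3 * t

t = np.linspace(0.0, T + K, 20001)
dt = t[1] - t[0]
inner = sigma**2 * (np.exp(beta * t) - 1.0) / beta    # int_0^t e^{beta*s} sigma^2 ds
lhs = np.sum(inner) * dt                              # E int_0^{T+K} int_0^t e^{bs}|Z|^2 ds dt
rhs = np.sum(np.exp(beta * t) * (mu(t) ** 2 + sigma**2 * t)) * dt
print(f"lhs = {lhs:.4f} <= rhs = {rhs:.4f}")
```

The gap between the two sides is exactly what makes the M²-norm above equivalent to the full H²-norm.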
We now state and prove the existence and uniqueness of adapted M-solutions to anticipated BSVIE (3.1).

Theorem 3.2. Under (H2), for any (ψ(·), η(·,·)) ∈ M²[0, T + K], the anticipated BSVIE (3.1) admits a unique adapted M-solution (Y(·), Z(·,·)) ∈ M²[0, T + K].

Proof. For any (y(·), z(·,·)) ∈ M²[0, T + K], consider the following equation:

Y(t) = ψ(t) + ∫_t^T ḡ(t, s) ds − ∫_t^T Z(t, s) dW(s),  t ∈ [0, T];
Y(t) = ψ(t),  t ∈ [T, T + K];
Z(t, s) = η(t, s),  (t, s) ∈ [0, T + K]² \ [0, T]²,   (3.5)
where ḡ(t, s) = g(t, s, y(s + δ_s), z(t, s + ζ_s), z(s + ζ_s, t)). From Propositions 2.2 and 2.3, we see that BSVIE (3.5) has a unique adapted solution (Y(·), Z(·,·)) ∈ H²_Δ. Now we define Z(·,·) on Δ^c through

Y(t) = E[Y(t)] + ∫_0^t Z(t, s) dW(s),  t ∈ [0, T].

Then, combining Y(t) = ψ(t) for t ∈ [T, T + K] and Z(t, s) = η(t, s) for (t, s) ∈ [0, T + K]² \ [0, T]² with (ψ(·), η(·,·)) ∈ M²[0, T + K], we conclude that (Y(·), Z(·,·)) ∈ M²[0, T + K] is an adapted M-solution of (3.5). Thus, the mapping I(y(·), z(·,·)) = (Y(·), Z(·,·)) defined by (3.5) is well-defined.

In the following, we shall prove that I is a contraction on M²[0, T + K]. For two arbitrary elements (y(·), z(·,·)) and (y′(·), z′(·,·)) ∈ M²[0, T + K], set

(Y(·), Z(·,·)) = I(y(·), z(·,·)),  (Y′(·), Z′(·,·)) = I(y′(·), z′(·,·)),

and denote their differences by

(ŷ(·), ẑ(·,·)) = (y(·) − y′(·), z(·,·) − z′(·,·)),  (Ŷ(·), Ẑ(·,·)) = (Y(·) − Y′(·), Z(·,·) − Z′(·,·)).

By the estimate (2.2), one has
E ∫_0^T (e^{βt}|Ŷ(t)|² + ∫_t^T e^{βs}|Ẑ(t, s)|² ds) dt
⩽ (C/β) E ∫_0^T ∫_t^T e^{βs} |g(t, s, y(s + δ_s), z(t, s + ζ_s), z(s + ζ_s, t)) − g(t, s, y′(s + δ_s), z′(t, s + ζ_s), z′(s + ζ_s, t))|² ds dt.
For the right-hand side of the above inequality, from (H2), (3.2) and (3.4), we have

E ∫_0^T ∫_t^T e^{βs} |g(t, s, y(s + δ_s), z(t, s + ζ_s), z(s + ζ_s, t)) − g(t, s, y′(s + δ_s), z′(t, s + ζ_s), z′(s + ζ_s, t))|² ds dt
⩽ 3L² E ∫_0^T ∫_t^T e^{βs} (|ŷ(s + δ_s)|² + |ẑ(t, s + ζ_s)|² + |ẑ(s + ζ_s, t)|²) ds dt
⩽ 3L²MT E ∫_0^{T+K} e^{βs}|ŷ(s)|² ds + 3L²M E ∫_0^{T+K} ∫_t^{T+K} e^{βs} (|ẑ(t, s)|² + |ẑ(s, t)|²) ds dt
⩽ 3L²M(T + 1) E[∫_0^{T+K} e^{βs}|ŷ(s)|² ds + ∫_0^{T+K} ∫_t^{T+K} e^{βs}|ẑ(t, s)|² ds dt + ∫_0^{T+K} ∫_0^t e^{βs}|ẑ(t, s)|² ds dt]
⩽ 6L²M(T + 1) E ∫_0^{T+K} e^{βs}|ŷ(s)|² ds + 3L²M(T + 1) E ∫_0^{T+K} ∫_t^{T+K} e^{βs}|ẑ(t, s)|² ds dt
⩽ 6L²M(T + 1) E ∫_0^{T+K} (e^{βt}|ŷ(t)|² + ∫_t^{T+K} e^{βs}|ẑ(t, s)|² ds) dt.   (3.6)
Hence, noting that C may differ from line to line, we deduce

E ∫_0^T (e^{βt}|Ŷ(t)|² + ∫_t^T e^{βs}|Ẑ(t, s)|² ds) dt ⩽ (C/β) E ∫_0^{T+K} (e^{βt}|ŷ(t)|² + ∫_t^{T+K} e^{βs}|ẑ(t, s)|² ds) dt.
Since Ŷ(t) = 0 for t ∈ [T, T + K] and Ẑ(t, s) = 0 for (t, s) ∈ [0, T + K]² \ [0, T]², we then have

E ∫_0^{T+K} (e^{βt}|Ŷ(t)|² + ∫_t^{T+K} e^{βs}|Ẑ(t, s)|² ds) dt ⩽ (C/β) E ∫_0^{T+K} (e^{βt}|ŷ(t)|² + ∫_t^{T+K} e^{βs}|ẑ(t, s)|² ds) dt.
Let β = 2C + 1. Then the mapping I is a contraction on M²[0, T + K], which implies that anticipated BSVIE (3.1) admits a unique adapted M-solution (Y(·), Z(·,·)) ∈ M²[0, T + K]. This completes the proof. ■

Next, we come back to the general anticipated BSVIE (1.3), for which we need the following assumption:

(H3) Assume that for all (t, s) ∈ Δ, g(t, s, y, z, ζ, ξ, η, ς, ω) : R^m × R^{m×d} × R^{m×d} × L²_{F_{r₁}}(Ω; R^m) × L²_{F_{r₂}}(Ω; R^{m×d}) × L²_{F_{r₃}}(Ω; R^{m×d}) × Ω → L²_{F_s}(Ω; R^m), where r₁, r₂, r₃ ∈ [s, T + K]. Moreover, there exists a constant L ⩾ 0 such that for all (t, s) ∈ Δ, y, ȳ ∈ R^m, z, z̄, ζ, ζ̄ ∈ R^{m×d}, ξ(·), ξ̄(·) ∈ L²_F(s, T + K; R^m), η(t, ·), η̄(t, ·), ς(·, t), ς̄(·, t) ∈ L²_F(s, T + K; R^{m×d}), we have

|g(t, s, y, z, ζ, ξ(r₁), η(t, r₂), ς(r₃, t)) − g(t, s, ȳ, z̄, ζ̄, ξ̄(r₁), η̄(t, r₂), ς̄(r₃, t))| ⩽ L(|y − ȳ| + |z − z̄| + |ζ − ζ̄| + E^{F_s}[|ξ(r₁) − ξ̄(r₁)| + |η(t, r₂) − η̄(t, r₂)| + |ς(r₃, t) − ς̄(r₃, t)|]).

We have the following existence and uniqueness result for anticipated BSVIE (1.3). Since its proof is similar to that of Theorem 3.2 without substantial difficulty, we state the result without a detailed proof.

Theorem 3.3. Under (H3), for any (ψ(·), η(·,·)) ∈ M²[0, T + K], Eq. (1.3) admits a unique adapted M-solution (Y(·), Z(·,·)) ∈ M²[0, T + K].

Remark 3.4. In Theorems 3.2 and 3.3, for t ∈ [0, T + K], the terminal coefficient ψ(t) is taken to be F_t-adapted rather than F_{T∨t}-measurable. In fact, we let ψ(t) be F_t-adapted just for simplicity of presentation, and Theorems 3.2 and 3.3 also hold when ψ(t) is F_{T∨t}-measurable.
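The fixed-point scheme behind Theorems 3.2 and 3.3 is easiest to visualize on a deterministic toy problem, where the stochastic integral and the component Z are dropped: iterate the map I for y(t) = ψ(t) + ∫_t^T g(t, s, y(s), y(s + δ)) ds with y = ψ on [T, T + K]. The sketch below (all concrete choices of ψ, g, δ are illustrative assumptions) exhibits the decay of successive Picard differences that drives the contraction argument:

```python
import numpy as np

# Deterministic toy version of the Picard/contraction argument of Theorem 3.2
# (stochastic integral and Z dropped; psi, g, delta are illustrative choices):
#   y(t) = psi(t) + int_t^T g(t, s, y(s), y(s + delta)) ds,  t in [0, T];
#   y(t) = psi(t),                                            t in [T, T + K].
T, K, delta = 1.0, 0.5, 0.5
N = 151                                          # grid on [0, T + K]
s = np.linspace(0.0, T + K, N)
ds = s[1] - s[0]
shift = int(round(delta / ds))                   # s -> s + delta as an index shift
iT = int(round(T / ds))                          # grid index of t = T
psi = np.cos(s)
g = lambda t, u, y, ya: 0.4 * np.sin(y) + 0.2 * np.cos(ya) + 0.1 * t * u

y, diffs = psi.copy(), []
for _ in range(40):                              # Picard iteration y_{k+1} = I(y_k)
    y_new = psi.copy()
    for i in range(iT + 1):
        js = np.arange(i, iT + 1)                # s-grid points in [t, T]
        y_new[i] = psi[i] + np.sum(g(s[i], s[js], y[js], y[js + shift])) * ds
    diffs.append(np.max(np.abs(y_new - y)))
    y = y_new

print("first few sup-norm differences:", [f"{d:.2e}" for d in diffs[:5]])
```

Here the contraction comes from the Lipschitz constant of g times the horizon being less than one; in the stochastic proof, the exponential weight e^{βt} with β large plays that role instead.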
Example 3.5. Consider the following linear anticipated BSVIE:
Y(t) = ψ(t) + ∫_t^T (Y(s) + Z(t, s) + Z(s, t) + E^{F_s}[Y(s + δ_s) + Z(t, s + ζ_s) + Z(s + ζ_s, t)]) ds − ∫_t^T Z(t, s) dW(s),  t ∈ [0, T];
Y(t) = ψ(t),  t ∈ [T, T + K];
Z(t, s) = η(t, s),  (t, s) ∈ [0, T + K]² \ [0, T]².

From Theorem 3.3 we see that, for any (ψ(·), η(·,·)) ∈ M²[0, T + K], the above equation admits a unique adapted M-solution (Y(·), Z(·,·)) ∈ M²[0, T + K].

4. Comparison theorem

Since the comparison theorem is an important tool in the theory and applications of anticipated BSVIEs, in this section we investigate one. In detail, we consider the following class of anticipated BSVIEs: for i = 0, 1,
Y^i(t) = ψ^i(t) + ∫_t^T g^i(t, s, Y^i(s), Z^i(t, s), Y^i(s + δ_s)) ds − ∫_t^T Z^i(t, s) dW(s),  t ∈ [0, T];
Y^i(t) = ψ^i(t),  t ∈ [T, T + K].   (4.1)
For the above anticipated BSVIEs, we only need the values Z^i(t, s) of Z^i(·,·) for (t, s) ∈ Δ, and the notion of M-solution is not necessary. It is easy to see that under (H3), for any ψ^i(·) ∈ L²_{F_T}(0, T + K; R^m), Eq. (4.1) admits a unique adapted solution (Y^i(·), Z^i(·,·)) ∈ L²_F(0, T + K; R^m) × L²_F(Δ; R^{m×d}).

Theorem 4.1. For i = 0, 1, let g^i satisfy (H3). Suppose g = g(t, s, y, z, ξ) satisfies (H3), y ↦ g(t, s, y, z, ξ) is nondecreasing, and for all (t, s, y, z) ∈ Δ × R^m × R^{m×d}, g(t, s, y, z, ·) is increasing, i.e., g(t, s, y, z, ξ₁(r)) ⩽ g(t, s, y, z, ξ₂(r)) if ξ₁(r) ⩽ ξ₂(r), with ξ₁(·), ξ₂(·) ∈ L²_F(s, T + K; R^m), r ∈ [s, T + K]. Moreover,

g⁰(t, s, y, z, ξ) ⩽ g(t, s, y, z, ξ) ⩽ g¹(t, s, y, z, ξ),  ∀(t, s, y, z, ξ) ∈ Δ × R^m × R^{m×d} × L²_{F_r}(Ω; R^m),

and the partial derivative g_z(t, s, y, z, ξ) exists with g_{z₁}(t, s, y, z, ξ), …, g_{z_d}(t, s, y, z, ξ) ∈ R^{m×m}_d. Then for any ψ^i(·) ∈ L²_{F_T}(0, T + K) satisfying ψ⁰(t) ⩽ ψ¹(t), t ∈ [0, T + K], we have

Y⁰(t) ⩽ Y¹(t),  a.s., a.e. t ∈ [0, T + K].
Proof. Choose ψ(·) ∈ L²_{F_T}(0, T + K; R^m) such that

ψ⁰(t) ⩽ ψ(t) ⩽ ψ¹(t),  t ∈ [0, T + K].

Let (Y(·), Z(·,·)) ∈ L²_F(0, T + K; R^m) × L²_F(Δ; R^{m×d}) be the unique adapted solution of the following anticipated BSVIE:

Y(t) = ψ(t) + ∫_t^T g(t, s, Y(s), Z(t, s), Y(s + δ_s)) ds − ∫_t^T Z(t, s) dW(s),  t ∈ [0, T];
Y(t) = ψ(t),  t ∈ [T, T + K].
We set Ỹ₀(·) = Y¹(·) and consider the following equation:

Ỹ₁(t) = ψ(t) + ∫_t^T g(t, s, Ỹ₁(s), Z̃₁(t, s), Ỹ₀(s + δ_s)) ds − ∫_t^T Z̃₁(t, s) dW(s),  t ∈ [0, T];
Ỹ₁(t) = ψ(t),  t ∈ [T, T + K],   (4.2)

and let (Ỹ₁(·), Z̃₁(·,·)) ∈ L²_F(0, T + K; R^m) × L²_F(Δ; R^{m×d}) be its unique adapted solution. Since

g(t, s, y, z, Ỹ₀(s + δ_s)) ⩽ g¹(t, s, y, z, Ỹ₀(s + δ_s)),  (t, s, y, z) ∈ Δ × R^m × R^{m×d};
ψ(t) ⩽ ψ¹(t),  t ∈ [0, T + K],

by Proposition 2.4 we obtain

Ỹ₁(t) ⩽ Ỹ₀(t),  a.s., a.e. t ∈ [0, T + K].   (4.3)
Next, we consider the following equation:

Ỹ₂(t) = ψ(t) + ∫_t^T g(t, s, Ỹ₂(s), Z̃₂(t, s), Ỹ₁(s + δ_s)) ds − ∫_t^T Z̃₂(t, s) dW(s),  t ∈ [0, T];
Ỹ₂(t) = ψ(t),  t ∈ [T, T + K].   (4.4)
Let (Ỹ₂(·), Z̃₂(·,·)) ∈ L²_F(0, T + K; R^m) × L²_F(Δ; R^{m×d}) be the unique adapted solution of (4.4). Since ξ ↦ g(t, s, y, z, ξ) is increasing, by (4.3) we have

g(t, s, y, z, Ỹ₁(s + δ_s)) ⩽ g(t, s, y, z, Ỹ₀(s + δ_s)),  (t, s, y, z) ∈ Δ × R^m × R^{m×d}.

Therefore, similar to the above discussion, we obtain

Ỹ₂(t) ⩽ Ỹ₁(t),  a.s., a.e. t ∈ [0, T + K].

By induction, we can construct a sequence {(Ỹ_k(·), Z̃_k(·,·))}_{k⩾1} ⊂ L²_F(0, T + K; R^m) × L²_F(Δ; R^{m×d}) such that

Ỹ_k(t) = ψ(t) + ∫_t^T g(t, s, Ỹ_k(s), Z̃_k(t, s), Ỹ_{k−1}(s + δ_s)) ds − ∫_t^T Z̃_k(t, s) dW(s),  t ∈ [0, T];
Ỹ_k(t) = ψ(t),  t ∈ [T, T + K].

Similarly, we deduce

Y¹(t) = Ỹ₀(t) ⩾ Ỹ₁(t) ⩾ ··· ⩾ Ỹ_k(t) ⩾ ···,  a.s., a.e. t ∈ [0, T + K].
In the following, we show that {(Ỹ_k(·), Z̃_k(·,·))}_{k⩾2} is a Cauchy sequence. By the estimate (2.2),

E ∫_0^T (e^{βt}|Ỹ_k(t) − Ỹ_{k−1}(t)|² + ∫_t^T e^{βs}|Z̃_k(t, s) − Z̃_{k−1}(t, s)|² ds) dt
⩽ (C/β) E ∫_0^T ∫_t^T e^{βs} |g(t, s, Ỹ_k(s), Z̃_k(t, s), Ỹ_{k−1}(s + δ_s)) − g(t, s, Ỹ_{k−1}(s), Z̃_{k−1}(t, s), Ỹ_{k−2}(s + δ_s))|² ds dt
⩽ (C/β) E ∫_0^T ∫_t^T e^{βs} (|Ỹ_k(s) − Ỹ_{k−1}(s)|² + |Z̃_k(t, s) − Z̃_{k−1}(t, s)|² + |Ỹ_{k−1}(s + δ_s) − Ỹ_{k−2}(s + δ_s)|²) ds dt
⩽ (C/β) E ∫_0^T (e^{βt}|Ỹ_k(t) − Ỹ_{k−1}(t)|² + ∫_t^T e^{βs}|Z̃_k(t, s) − Z̃_{k−1}(t, s)|² ds) dt + (C/β) E ∫_0^{T+K} e^{βt}|Ỹ_{k−1}(t) − Ỹ_{k−2}(t)|² dt.

In other words,

(1 − C/β) E ∫_0^T (e^{βt}|Ỹ_k(t) − Ỹ_{k−1}(t)|² + ∫_t^T e^{βs}|Z̃_k(t, s) − Z̃_{k−1}(t, s)|² ds) dt ⩽ (C/β) E ∫_0^{T+K} e^{βt}|Ỹ_{k−1}(t) − Ỹ_{k−2}(t)|² dt.

Note that the constant C > 0 above can be chosen independent of β ⩾ 0, and that Ỹ_k(t) − Ỹ_{k−1}(t) = 0 for t ∈ [T, T + K] and every k ⩾ 2. Thus, choosing β = 3C, we obtain

E[∫_0^{T+K} e^{βt}|Ỹ_k(t) − Ỹ_{k−1}(t)|² dt + ∫_0^T ∫_t^T e^{βs}|Z̃_k(t, s) − Z̃_{k−1}(t, s)|² ds dt]
⩽ (1/2) E ∫_0^{T+K} e^{βt}|Ỹ_{k−1}(t) − Ỹ_{k−2}(t)|² dt
⩽ (1/2) E[∫_0^{T+K} e^{βt}|Ỹ_{k−1}(t) − Ỹ_{k−2}(t)|² dt + ∫_0^T ∫_t^T e^{βs}|Z̃_{k−1}(t, s) − Z̃_{k−2}(t, s)|² ds dt]
⩽ (1/2)^{k−2} E[∫_0^{T+K} e^{βt}|Ỹ₂(t) − Ỹ₁(t)|² dt + ∫_0^T ∫_t^T e^{βs}|Z̃₂(t, s) − Z̃₁(t, s)|² ds dt].

It follows that {Ỹ_k(·)}_{k⩾2} and {Z̃_k(·,·)}_{k⩾2} are Cauchy sequences in L²_F(0, T + K; R^m) and L²_F(Δ; R^{m×d}), respectively. Denote their limits by Ỹ(·) and Z̃(·,·). Finally, from Theorem 3.3, we have

Y(t) = Ỹ(t),  a.s., a.e. t ∈ [0, T + K].

Hence we obtain

Y(t) ⩽ Y¹(t),  a.s., a.e. t ∈ [0, T + K].
Similarly, we can prove that

Y⁰(t) ⩽ Y(t),  a.s., a.e. t ∈ [0, T + K].

Therefore, our conclusion follows. ■
Remark 4.2. One may note that the generator g(·) of (4.1) involves the anticipated information of Y(·) but is independent of the anticipated information of Z(·,·). Since some essential difficulties arise when g(·) involves the anticipated information of Z(·,·), we leave this topic for a future paper.

Example 4.3. Let g⁰(t, s, ξ(r)) = −E^{F_s}[|ξ(r)|] − 1 and g¹(t, s, ξ(r)) = E^{F_s}[|ξ(r)|] + 1, and choose g(t, s, ξ(r)) = E^{F_s}[ξ(r)]. It is easy to check that g⁰, g¹ and g satisfy the assumptions of Theorem 4.1. Then, if the terminal conditions satisfy ψ⁰(t) ⩽ ψ¹(t), a.s., t ∈ [0, T + K], from Theorem 4.1 we derive

Y⁰(t) ⩽ Y¹(t),  a.s., a.e. t ∈ [0, T + K].
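The monotone scheme in the proof of Theorem 4.1 can likewise be illustrated on a deterministic toy problem (Z and the stochastic integral dropped; all concrete choices below are assumptions): start from an upper solution y₀ driven by the larger generator g¹ = g + 1, then repeatedly solve with g, freezing the anticipated argument at the previous iterate, and observe that the iterates decrease:

```python
import numpy as np

# Deterministic toy version of the monotone scheme in the proof of Theorem 4.1
# (Z and the stochastic integral dropped; all concrete choices illustrative).
# g is nondecreasing in the present value y and in the anticipated value
# ya = y(s + delta), and g1 = g + 1 dominates g.
T, K, delta = 1.0, 0.5, 0.5
N = 151
s = np.linspace(0.0, T + K, N)
ds = s[1] - s[0]
shift, iT = int(round(delta / ds)), int(round(T / ds))
psi = np.cos(s)
g = lambda t, u, y, ya: 0.3 * np.tanh(y) + 0.3 * np.tanh(ya) + 0.1 * t * u
g1 = lambda t, u, y, ya: g(t, u, y, ya) + 1.0

def solve(gen, y_antic):
    """Picard-solve y(t) = psi(t) + int_t^T gen(t, s, y(s), y_antic(s+delta)) ds."""
    y = psi.copy()
    for _ in range(40):
        y_new = psi.copy()
        for i in range(iT + 1):
            js = np.arange(i, iT + 1)
            y_new[i] = psi[i] + np.sum(gen(s[i], s[js], y[js], y_antic[js + shift])) * ds
        y = y_new
    return y

y0 = psi.copy()                  # upper solution: full equation with g1
for _ in range(10):
    y0 = solve(g1, y0)

y1 = solve(g, y0)                # first iterate: anticipated argument frozen at y0
y2 = solve(g, y1)                # second iterate: anticipated argument frozen at y1
print("max(y1 - y0) =", float(np.max(y1 - y0)),
      " max(y2 - y1) =", float(np.max(y2 - y1)))
```

As in the proof, the iterates decrease: y1 ⩽ y0 because g ⩽ g1, and y2 ⩽ y1 because g is nondecreasing in the anticipated argument.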
Conclusion

In this paper, we studied a new class of integral equations called anticipated backward stochastic Volterra integral equations, whose generator involves both the present and the future values of the solution. We proved the existence and uniqueness of adapted M-solutions of anticipated BSVIEs, and a comparison theorem was also established. In future work, we will focus on the application of this class of equations to stochastic optimal control problems with delay.

References

Agram, N., Øksendal, B., 2015. Malliavin calculus and optimal control of stochastic Volterra equations. J. Optim. Theory Appl. 167, 1070–1094.
Chen, L., Wu, Z., 2010. Maximum principle for the stochastic optimal control problem with delay and application. Automatica 46, 1074–1080.
Djordjević, J., Janković, S., 2013. On a class of backward stochastic Volterra integral equations. Appl. Math. Lett. 26, 1192–1197.
Douissi, S., Wen, J., Shi, Y., 2019. Mean-field anticipated BSDEs driven by fractional Brownian motion and related stochastic control problem. Appl. Math. Comput. 355, 282–298.
El Karoui, N., Peng, S., Quenez, M.C., 1997. Backward stochastic differential equations in finance. Math. Finance 7, 1–71.
Huang, J., Shi, J., 2012. Maximum principle for optimal control of fully coupled forward-backward stochastic differential delayed equations. ESAIM Control Optim. Calc. Var. 18 (4), 1073–1096.
Kromer, E., Overbeck, L., 2017. Classical differentiability of BSVIEs and dynamic capital allocations. Int. J. Theor. Appl. Finance 20, 1–26.
Pardoux, E., Peng, S., 1990. Adapted solution of a backward stochastic differential equation. Systems Control Lett. 14, 55–61.
Pardoux, E., Peng, S., 1992. Backward SDEs and quasi-linear PDEs. Lecture Notes in Control and Inform. Sci. 176, 200–217.
Peng, S., Yang, Z., 2009. Anticipated backward stochastic differential equations. Ann. Probab. 37, 877–902.
Shi, Y., Wang, T., 2012. Solvability of general backward stochastic Volterra integral equations. J. Korean Math. Soc. 49, 1301–1321.
Shi, Y., Wang, T., Yong, J., 2015. Optimal control problems of forward-backward stochastic Volterra integral equations. Math. Control Relat. Fields 5, 613–649.
Shi, Y., Wen, J., Xiong, J. Backward doubly stochastic Volterra integral equations and applications to optimal control problems. arXiv:1906.10582.
Wang, H. Extended backward stochastic Volterra integral equations, quasilinear parabolic equations, and Feynman–Kac formula. arXiv:1908.07168.
Wang, H., Sun, J., Yong, J. Quadratic backward stochastic Volterra integral equations. arXiv:1810.10149.
Wang, T., Yong, J., 2015. Comparison theorems for some backward stochastic Volterra integral equations. Stochastic Process. Appl. 125, 1756–1798.
Wang, T., Zhang, H., 2017. Optimal control problems of forward-backward stochastic Volterra integral equations with closed control regions. SIAM J. Control Optim. 55, 2574–2602.
Wen, J., Shi, Y. Symmetrical martingale solutions of backward doubly stochastic Volterra integral equations. arXiv:1909.04292.
Wen, J., Shi, Y., 2017a. Anticipative backward stochastic differential equations driven by fractional Brownian motion. Statist. Probab. Lett. 122, 118–127.
Wen, J., Shi, Y., 2017b. Maximum principle for a stochastic delayed system involving terminal state constraints. J. Inequal. Appl. 2017 (1), 103.
Wu, H., Wang, W., Ren, J., 2012. Anticipated backward stochastic differential equations with non-Lipschitz coefficients. Statist. Probab. Lett. 82, 672–682.
Yong, J., 2006. Backward stochastic Volterra integral equations and some related problems. Stochastic Process. Appl. 116, 779–795.
Yong, J., 2007. Continuous-time dynamic risk measures by backward stochastic Volterra integral equations. Appl. Anal. 86, 1429–1442.
Yong, J., 2008. Well-posedness and regularity of backward stochastic Volterra integral equations. Probab. Theory Related Fields 142, 21–77.
Yong, J., 2017. The representation of adapted solutions of backward stochastic Volterra integral equations. Sci. China Math. 37, 1355–1366 (Chinese).
Yong, J., Zhou, X., 1999. Stochastic Controls: Hamiltonian Systems and HJB Equations. Springer, New York.