Applied Mathematics and Computation 349 (2019) 408–420
A new smoothness result for Caputo-type fractional ordinary differential equations

Binjie Li, Xiaoping Xie*, Shiquan Zhang

School of Mathematics, Sichuan University, Chengdu 610064, China
Abstract

We present a new smoothness result for Caputo-type fractional ordinary differential equations. It reveals that, after subtracting a non-smooth function that can be obtained from the available data, a non-smooth solution belongs to $C^m$ for some positive integer $m$.

Keywords: Caputo-type; Fractional differential equation; Smoothness

© 2018 Published by Elsevier Inc.
1. Introduction

Let us consider the following model problem: seek $h \in (0, a]$ and
$$y \in \big\{ v \in C[0,h] : \| v - c_0 \|_{C[0,h]} \le b \big\}$$
such that
$$D_*^\alpha y = f(x, y), \quad 0 \le x \le h, \qquad y(0) = c_0, \tag{1.1}$$
where $a > 0$, $b > 0$, $0 < \alpha < 1$, $c_0 \in \mathbb{R}$, and $f \in C\big([0,a] \times [c_0 - b, c_0 + b]\big)$. Above, the Caputo-type fractional differential operator $D_*^\alpha : C[0,h] \to C_0^\infty(0,h)'$ is given by
$$D_*^\alpha z := D J^{1-\alpha} \big( z - z(0) \big) \tag{1.2}$$
for all $z \in C[0,h]$, where $D$ denotes the well-known first-order generalized differential operator, and the Riemann–Liouville fractional integral operator $J^{1-\alpha} : C[0,h] \to C[0,h]$ is defined by
$$J^{1-\alpha} z(x) := \frac{1}{\Gamma(1-\alpha)} \int_0^x (x-t)^{-\alpha} z(t)\, dt, \quad 0 \le x \le h,$$
for all $z \in C[0,h]$. By [2, Lemma 2.1], the above problem is equivalent to seeking solutions of the following Volterra integral equation:
$$y(x) = c_0 + \frac{1}{\Gamma(\alpha)} \int_0^x (x-t)^{\alpha-1} f\big(t, y(t)\big)\, dt. \tag{1.3}$$
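For later reference, it is convenient to record how $D_*^\alpha$ acts on power functions; these identities are standard and follow directly from (1.2) and the definition of $J^{1-\alpha}$ (they are collected here for convenience): for $\gamma > 0$,
$$J^{1-\alpha} x^{\gamma} = \frac{\Gamma(\gamma+1)}{\Gamma(\gamma+2-\alpha)}\, x^{\gamma+1-\alpha}, \qquad D_*^{\alpha} x^{\gamma} = \frac{\Gamma(\gamma+1)}{\Gamma(\gamma+1-\alpha)}\, x^{\gamma-\alpha}.$$
In particular, $D_*^{\alpha} x^{\alpha} = \Gamma(\alpha+1)$ and $D_*^{\alpha} x^{1+\alpha} = \Gamma(\alpha+2)\, x$, which are the identities behind the examples of Section 3.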
This work was supported in part by National Natural Science Foundation of China (11771312, 11401407).
* Corresponding author. E-mail addresses: [email protected] (B. Li), [email protected] (X. Xie), [email protected] (S. Zhang).
https://doi.org/10.1016/j.amc.2018.12.052
Diethelm and Ford [2] proved that, if $f$ is continuous, then (1.3) has a solution $y \in C[0,h]$ for some $h \in (0,a]$, and that this solution is unique if $f$ is Lipschitz continuous. A natural question is whether $y$ can be smoother than merely continuous. Miller and Feldstein [6] gave the first answer to this question: if $f$ is analytic, then $y$ is analytic in $(0,h)$ for some $0 < h \le a$. Lubich [5] then considered the behavior of the solution near $0$. He showed that, if $f$ is analytic at the origin, then there exists a function $Y$ of two variables, analytic at the origin, such that
$$y(x) = Y(x, x^\alpha), \quad 0 \le x \le h,$$
for some $0 < h \le a$. This work suggests that non-smoothness of the solution to (1.1) is generally unavoidable. Diethelm [1] established a necessary and sufficient condition under which $y$ is analytic on $[0,h]$ for some $0 < h \le a$. Recently, Deng [3] proposed two conditions: under the first condition the solution belongs to $C^m$ for some positive integer $m$; under the second one the solution is a polynomial. It should be noted that the second condition is precisely the one proposed in [1].

The main result of this paper is that, although the solution $y$ of (1.1) does not in general belong to $C^m$ for a positive integer $m$, we can still construct a non-smooth function of the form
$$S(x) := c_0 + \sum_{j=1}^{n} c_j x^{\gamma_j},$$
such that
$$y - S \in C^m,$$
provided $f$ is sufficiently smooth. Most importantly, given $c_0$ and $f$, we can obtain $S$ by a simple computation. This may be helpful in the development of numerical methods for (1.1). In addition, we obtain a necessary and sufficient condition under which $y \in C^m$. We note that this condition is essentially the same as the first condition mentioned in [3, Theorem 2.8], but the necessity was not considered there. The key ingredients of our analysis are the introduction of a compact operator (cf. (4.17)) and the use of the Schauder fixed-point theorem.

The rest of this paper is organized as follows. In Section 2 we introduce some basic notation and preliminaries. In Section 3 we state the main results of this paper, and in Section 4 we present their proofs.

2. Notation and preliminaries

Let $0 < h < \infty$. We use $C[0,h]$ to denote the space of all continuous real-valued functions defined on $[0,h]$, and $\|\cdot\|_{C[0,h]}$ to denote the standard maximum norm on $C[0,h]$. For any $k \in \mathbb{N}$ and $0 < \gamma \le 1$, define
$$C^k[0,h] := \big\{ v \in C[0,h] : v^{(j)} \in C[0,h] \text{ and } v^{(j)}(0) = 0 \text{ for } j = 0, 1, \ldots, k \big\}, \tag{2.1}$$
$$C^{k,\gamma}[0,h] := \Big\{ v \in C^k[0,h] : \max_{0 \le x < y \le h} \frac{|v^{(k)}(y) - v^{(k)}(x)|}{(y-x)^\gamma} < \infty \Big\}, \tag{2.2}$$
and endow these two spaces with the norms
$$\|v\|_{C^k[0,h]} := \|v^{(k)}\|_{C[0,h]} \quad \text{for all } v \in C^k[0,h], \tag{2.3}$$
$$\|v\|_{C^{k,\gamma}[0,h]} := \max_{0 \le x < y \le h} \frac{|v^{(k)}(y) - v^{(k)}(x)|}{(y-x)^\gamma} \quad \text{for all } v \in C^{k,\gamma}[0,h]. \tag{2.4}$$
For convenience, we use $C[0,h]$ to abbreviate $C^0[0,h]$, and, in some cases, use $C^{k,0}[0,h]$ and $\|\cdot\|_{C^{k,0}[0,h]}$ to denote $C^k[0,h]$ and $\|\cdot\|_{C^k[0,h]}$, respectively. In addition, by the notation $\|v\|_{C^{k,\gamma}[0,h]}$ we also mean that $v \in C^{k,\gamma}[0,h]$.

For any $s \in \mathbb{N}_{>0}$, define
$$\Lambda_s := \big\{ \beta = (\beta_1, \beta_2, \ldots, \beta_s) \in \{1,2\}^s \big\},$$
and, for any $\beta \in \Lambda_s$, we use the notation
$$\partial_\beta g := \frac{\partial}{\partial x_{\beta_s}} \frac{\partial}{\partial x_{\beta_{s-1}}} \cdots \frac{\partial}{\partial x_{\beta_1}}\, g(x_1, x_2),$$
where $g$ is a real-valued function of two variables. Besides, we define $\Lambda_0 := \{\emptyset\}$ and denote by $\partial_\emptyset$ the identity mapping.
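For instance (a small worked illustration of this notation), for $s = 2$ the definition gives
$$\partial_{(1,1)} g = \frac{\partial^2 g}{\partial x_1^2}, \qquad \partial_{(1,2)} g = \frac{\partial}{\partial x_2}\frac{\partial}{\partial x_1}\, g, \qquad \partial_{(2,2)} g = \frac{\partial^2 g}{\partial x_2^2},$$
all evaluated at $(x_1, x_2)$.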
3. Main results

Let us first make the following assumption on $f$.

Assumption 1. There exist a positive integer $n$ and a positive constant $M$ such that $n\alpha > 1$, $f \in C^n\big([0,a] \times [c_0-b, c_0+b]\big)$, and
$$\max_{(x,y) \in [0,a] \times [c_0-b,\, c_0+b]}\ \max_{\substack{0 \le i \le n,\ 0 \le j \le n \\ i+j \le n}} \left| \frac{\partial^{i+j} f}{\partial x^i\, \partial y^j}(x, y) \right| \le M.$$
Define $J \in \mathbb{N}$ and a strictly increasing sequence $\{\gamma_i\}_{i=1}^{J}$ by
$$\big\{ \gamma_j : 1 \le j \le J \big\} = \big\{ i + j\alpha : i, j \in \mathbb{N},\ 0 < i + j\alpha \le m \big\}, \tag{3.1}$$
where
$$m := \max\{ j \in \mathbb{N} : j < n\alpha \}. \tag{3.2}$$
Define $c_1, c_2, \ldots, c_J \in \mathbb{R}$ by
$$Q(x) - S(x) + c_0 \in \operatorname{span}\big\{ x^{i+j\alpha} : i, j \in \mathbb{N},\ i + j\alpha > m \big\}, \tag{3.3}$$
where
$$Q(x) := \sum_{s=0}^{n-1} \sum_{\beta \in \Lambda_s} \frac{\partial_\beta f(0,c_0)}{\Gamma(\alpha)} \int_0^x (x-t_0)^{\alpha-1}\, dt_0 \, \prod_{k=1}^{s} \int_0^{t_{k-1}} \left( \frac{1+(-1)^{\beta_k+1}}{2} + \frac{1+(-1)^{\beta_k}}{2} \sum_{j=1}^{J} \gamma_j c_j\, t_k^{\gamma_j-1} \right) dt_k \tag{3.4}$$
and
$$S(x) := c_0 + \sum_{j=1}^{J} c_j x^{\gamma_j}. \tag{3.5}$$
Above and throughout, a product of a sequence of integrals should be understood in expanded (nested) form. For example, (3.4) is just an abbreviation for
$$Q(x) := \sum_{s=0}^{n-1} \sum_{\beta \in \Lambda_s} \frac{\partial_\beta f(0,c_0)}{\Gamma(\alpha)} \int_0^x (x-t_0)^{\alpha-1} \int_0^{t_0} \left( \frac{1+(-1)^{\beta_1+1}}{2} + \frac{1+(-1)^{\beta_1}}{2} \sum_{j=1}^{J} \gamma_j c_j\, t_1^{\gamma_j-1} \right) \cdots \int_0^{t_{s-1}} \left( \frac{1+(-1)^{\beta_s+1}}{2} + \frac{1+(-1)^{\beta_s}}{2} \sum_{j=1}^{J} \gamma_j c_j\, t_s^{\gamma_j-1} \right) dt_s \cdots dt_1\, dt_0.$$
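To make (3.4) more concrete, consider the case $n = 2$ (a direct evaluation of (3.4), added here for illustration): since $\Lambda_0 = \{\emptyset\}$ and $\Lambda_1 = \{(1), (2)\}$, a short calculation gives
$$Q(x) = \frac{f(0,c_0)}{\Gamma(\alpha+1)}\, x^{\alpha} + \frac{\partial_1 f(0,c_0)}{\Gamma(\alpha+2)}\, x^{1+\alpha} + \partial_2 f(0,c_0) \sum_{j=1}^{J} \frac{\Gamma(\gamma_j+1)}{\Gamma(\gamma_j+1+\alpha)}\, c_j\, x^{\gamma_j+\alpha},$$
which is of the form described in Remark 3.1 below.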
Remark 3.1. It is easy to see that we can express $Q$ in the form
$$Q(x) = \sum_{j=1}^{L} d_j x^{\gamma_j},$$
where $\{\gamma_j\}_{j=J+1}^{L}$ is a strictly increasing sequence such that $\gamma_J < \gamma_{J+1}$ and
$$\big\{ \gamma_j : 1 \le j \le L \big\} = \big\{ i + j\alpha : i, j \in \mathbb{N},\ 0 < i + j\alpha \le (n-1)\max\{1, \gamma_J\} + \alpha \big\}.$$
Moreover, for $1 \le j \le J$, the value of $d_j$ depends only on $c_0, c_1, \ldots, c_{j-1}$ and $f$ (more precisely, on $\partial_\beta f(0,c_0)$, $\beta \in \Lambda_s$, $1 \le s \le n-1$).

Remark 3.2. Obviously, there exist unique $c_1, c_2, \ldots, c_J$ such that (3.3) holds, and hence $c_1, c_2, \ldots, c_J$ are well-defined. In fact, by (3.3) the coefficients of the terms $x^{\gamma_j}$ ($j = 1, 2, \ldots, J$) in $Q(x) - S(x) + c_0$ are zero. We thus obtain $J$ equations for the $c_j$ ($j = 1, 2, \ldots, J$), the $j$th of which has the form $d_j(c_0, c_1, \ldots, c_{j-1}, f) - c_j = 0$. For example, a simple calculation shows that $c_1 = d_1 = \frac{f(0, c_0)}{\alpha \Gamma(\alpha)}$.

Remark 3.3. It is easy to see that
$$Q - S + c_0 \in C^{m,\gamma}[0,a],$$
where
$$\gamma := \min\big\{ i + j\alpha : i, j \in \mathbb{N},\ i + j\alpha > m \big\}.$$

Remark 3.4. Note that $S$ depends only on $c_0$ and
$$\big\{ \partial_\beta f(0, c_0) : \beta \in \Lambda_s,\ 0 \le s \le n-1 \big\}.$$
Since $c_0$ and $f$ are already available, we can obtain $S$ by a simple calculation, as illustrated by the sketch below.
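The following short Python sketch (not part of the paper; the helper names exponent_grid and leading_coefficient are ours) illustrates this computation for the data of Example 3.1 below: it lists the exponents $\gamma_1 < \cdots < \gamma_J$ of (3.1) and evaluates the leading coefficient $c_1 = f(0,c_0)/(\alpha\Gamma(\alpha))$ of Remark 3.2. The remaining coefficients $c_2, \ldots, c_J$ would follow from the triangular system of Remark 3.2, which is not implemented here.

```python
from math import gamma

def exponent_grid(alpha, m):
    # The exponents {i + j*alpha : i, j in N, 0 < i + j*alpha <= m} of (3.1), sorted increasingly.
    exps = set()
    i = 0
    while i <= m:
        j = 0
        while i + j * alpha <= m + 1e-12:
            if i + j * alpha > 1e-12:
                exps.add(round(i + j * alpha, 12))
            j += 1
        i += 1
    return sorted(exps)

def leading_coefficient(f, alpha, c0):
    # c_1 = f(0, c0) / (alpha * Gamma(alpha)) = f(0, c0) / Gamma(alpha + 1); see Remark 3.2.
    return f(0.0, c0) / gamma(alpha + 1)

# Data of Example 3.1 (with alpha = 0.6): n = 2, hence m = 1 and S(x) = x^alpha.
alpha, c0 = 0.6, 0.0
f = lambda x, y: gamma(alpha + 1) + gamma(alpha + 2) * x + x**(2 + alpha) + x**(3 + alpha) - x**2 * y
print(exponent_grid(alpha, 1))            # [0.6, 1.0], i.e. gamma_1 = alpha, gamma_2 = 1
print(leading_coefficient(f, alpha, c0))  # 1.0, so S(x) = x^alpha
```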
Define
$$h^* := \min\left\{ a,\ \left( \frac{b\,\Gamma(1+\alpha)}{M} \right)^{1/\alpha} \right\}.$$
By [2, Theorem 2.2] we know that there exists a unique solution $y^* \in C[0, h^*]$ to (1.1). For this solution, we have the following important results.

Theorem 3.1. There exist two positive constants $C_0$ and $C_1$, depending only on $a$, $\alpha$ and $M$, such that, for any $h \in (0, h^*]$ and $K > 0$ satisfying
$$\| Q - S + c_0 \|_{C^m[0,h]} + C_1 h^\alpha + C_0 h^\alpha \sum_{j=1}^{m} K^j \le K, \tag{3.6}$$
we have
$$\| y^* - S \|_{C^m[0,h]} \le K. \tag{3.7}$$
Corollary 3.1. There exists $h \in (0, h^*]$ such that $(y^*)^{(m)} \in C[0,h]$ if, and only if,
$$\frac{\partial^i f}{\partial x^i}(0, c_0) = 0 \quad \text{for all } 0 \le i \le m-1. \tag{3.8}$$
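As a quick illustration (not from the paper), condition (3.8) can be checked symbolically. The snippet below evaluates $\partial^i f/\partial x^i(0, c_0)$ for $0 \le i \le m-1$ for the right-hand side of Example 3.1 below (where $m = 1$) and finds a nonzero value, so (3.8) fails there; by Corollary 3.1, $(y^*)'$ is then not continuous at $0$, in agreement with the exact solution $x^\alpha + x^{1+\alpha}$.

```python
import sympy as sp

x, y, alpha = sp.symbols('x y alpha', positive=True)
c0, m = 0, 1
# Right-hand side f of Example 3.1
f = sp.gamma(alpha + 1) + sp.gamma(alpha + 2) * x + x**(2 + alpha) + x**(3 + alpha) - x**2 * y
# Condition (3.8): d^i f / dx^i (0, c0) = 0 for all 0 <= i <= m - 1
values = [sp.diff(f, x, i).subs({x: 0, y: c0}) for i in range(m)]
print(values)  # [gamma(alpha + 1)] -- nonzero, so (3.8) fails
```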
Remark 3.5. Put
$$\Theta := \big\{ 1 \le j \le J : \gamma_j \notin \mathbb{N} \big\}.$$
Obviously,
$$\sum_{j \in \Theta} c_j x^{\gamma_j}$$
is the singular part of $S$, and thus the singular part of $y^*$. Corollary 3.1 essentially claims that (3.8) holds if and only if $c_j = 0$ for all $j \in \Theta$. Since (3.8) is rarely satisfied, we may regard singularity as an intrinsic property of solutions to fractional differential equations. In addition, we have the following result: $c_j = 0$ for all $1 \le j \le J$ is equivalent to $c_j = 0$ for all $j \in \Theta$. This result is contained in the proof of Corollary 3.1 in Section 4.3.

In what follows we give two examples for Theorem 3.1.

Example 3.1 (A linear case). Consider the model problem (1.1) with $1/2 < \alpha < 1$, $c_0 = 0$ and
$$f(x, y) = \Gamma(\alpha+1) + \Gamma(\alpha+2)\, x + x^{2+\alpha} + x^{3+\alpha} - x^2 y.$$
It is easy to see that Assumption 1 holds with $n = 2$. Then, from (3.2), (3.5) and Remark 3.2 it follows that $m = 1$ and $S(x) = c_1 x^\alpha$ with
$$c_1 = \frac{f(0, c_0)}{\alpha \Gamma(\alpha)} = \frac{\Gamma(\alpha+1)}{\alpha \Gamma(\alpha)} = 1.$$
As a result, by Theorem 3.1 we have
$$y^* - S = y^* - x^\alpha \in C^1[0, h].$$
This indeed holds true because the exact solution to (1.1) is
$$y^* = x^\alpha + x^{1+\alpha}.$$

Example 3.2 (A nonlinear case). Consider (1.1) with $1/3 < \alpha < 1/2$, $c_0 = 0$ and
$$f(x, y) = \Gamma(\alpha+1) + \Gamma(\alpha+2)\, x + x^3 (x^\alpha + x^{1+\alpha})^2 - x^3 y^2.$$
In this case, Assumption 1 holds with $n = 3$, and we easily see that $m = 1$ and $S(x) = x^\alpha$. Thus, by Theorem 3.1 we have
$$y^* - S = y^* - x^\alpha \in C^1[0, h].$$
This is consistent with the exact solution $y^* = x^\alpha + x^{1+\alpha}$.
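To make the examples concrete, here is a minimal numerical sketch (not from the paper): it discretizes the Volterra form (1.3) with a product-rectangle rule and compares the result with the exact solution $y^* = x^\alpha + x^{1+\alpha}$ of Example 3.1. The function name fractional_euler, the step count, and the choice $\alpha = 0.6$ are ours; the low-order scheme serves only as a sanity check.

```python
import numpy as np
from math import gamma

def fractional_euler(f, alpha, c0, T, N):
    # Product-rectangle (left-endpoint) discretization of the Volterra equation (1.3):
    #   y(x) = c0 + 1/Gamma(alpha) * int_0^x (x - t)^(alpha - 1) f(t, y(t)) dt.
    x = np.linspace(0.0, T, N + 1)
    y = np.empty(N + 1)
    y[0] = c0
    for n in range(1, N + 1):
        acc = 0.0
        for k in range(n):
            # exact integral of (x_n - t)^(alpha - 1) over [x_k, x_{k+1}]
            w = ((x[n] - x[k])**alpha - (x[n] - x[k + 1])**alpha) / alpha
            acc += w * f(x[k], y[k])
        y[n] = c0 + acc / gamma(alpha)
    return x, y

# Example 3.1 with alpha = 0.6; the exact solution is y*(x) = x^alpha + x^(1+alpha).
alpha = 0.6
f = lambda x, y: gamma(alpha + 1) + gamma(alpha + 2) * x + x**(2 + alpha) + x**(3 + alpha) - x**2 * y
x, y = fractional_euler(f, alpha, 0.0, 1.0, 400)
print(abs(y[-1] - 2.0))  # error at x = 1, where y*(1) = 2
```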
4. Proofs

For $0 < h \le a$, define
$$C_m[0,h] := \big\{ v \in C^m[0,h] : \| v + S - c_0 \|_{C[0,h]} \le b \big\}.$$
In the remainder of this paper, unless otherwise specified, we use $C$ to denote a positive constant that depends only on $\alpha$, $a$, $n$ and $M$, and whose value may differ at each occurrence. By the definitions of $c_1, c_2, \ldots, c_J$, it is easy to see that $|c_j| \le C$ for all $1 \le j \le J$; we use this implicitly in the forthcoming analysis.

4.1. Some auxiliary results

We start by introducing some operators. Let $0 < h \le a$. For $l = 1, 2, 3$, define $P_{l,h} : C_m[0,h] \to C[0,h]$ by
$$P_{l,h} z(x) := \frac{1}{\Gamma(\alpha)} \int_0^x (x-t)^{\alpha-1} G_{l,h} z(t)\, dt \tag{4.1}$$
for all $z \in C_m[0,h]$, where $G_{l,h} z \in C[0,h]$ ($l = 1, 2, 3$) are given respectively as follows: for all $0 \le t_0 \le h$,
$$G_{1,h} z(t_0) := \sum_{s=1}^{n} \sum_{\substack{\beta \in \Lambda_s \\ \beta_s = 2}} \prod_{k=1}^{s-1} \int_0^{t_{k-1}} \left( \frac{1+(-1)^{\beta_k+1}}{2} + \frac{1+(-1)^{\beta_k}}{2} \sum_{j=1}^{J} \gamma_j c_j t_k^{\gamma_j-1} \right) dt_k \int_0^{t_{s-1}} z'(t_s)\, \partial_\beta f\big(t_s, z(t_s) + S(t_s)\big)\, dt_s, \tag{4.2}$$
$$G_{2,h} z(t_0) := \sum_{\substack{\beta \in \Lambda_n \\ \beta_n = 2}} \prod_{k=1}^{n-1} \int_0^{t_{k-1}} \left( \frac{1+(-1)^{\beta_k+1}}{2} + \frac{1+(-1)^{\beta_k}}{2} \sum_{j=1}^{J} \gamma_j c_j t_k^{\gamma_j-1} \right) dt_k \int_0^{t_{n-1}} \partial_\beta f\big(t_n, z(t_n) + S(t_n)\big) \sum_{j=1}^{J} \gamma_j c_j t_n^{\gamma_j-1}\, dt_n, \tag{4.3}$$
$$G_{3,h} z(t_0) := \sum_{\substack{\beta \in \Lambda_n \\ \beta_n = 1}} \prod_{k=1}^{n-1} \int_0^{t_{k-1}} \left( \frac{1+(-1)^{\beta_k+1}}{2} + \frac{1+(-1)^{\beta_k}}{2} \sum_{j=1}^{J} \gamma_j c_j t_k^{\gamma_j-1} \right) dt_k \int_0^{t_{n-1}} \partial_\beta f\big(t_n, z(t_n) + S(t_n)\big)\, dt_n. \tag{4.4}$$
Lemmas 4.1–4.3 collect some important properties of the operators $P_{l,h}$.

Lemma 4.1. For any $z \in C_m[0,h]$ and $0 \le x \le h$, we have
$$\frac{1}{\Gamma(\alpha)} \int_0^x (x-t)^{\alpha-1} f\big(t, z(t) + S(t)\big)\, dt = Q(x) + P_{1,h} z(x) + P_{2,h} z(x) + P_{3,h} z(x). \tag{4.5}$$
Proof. Let $\beta \in \Lambda_s$ with $1 \le s \le n-1$. For any $0 < \varepsilon \le t_s \le h$, applying the fundamental theorem of calculus yields
$$\partial_\beta f\big(t_s, z(t_s) + S(t_s)\big) = \partial_\beta f\big(\varepsilon, z(\varepsilon) + S(\varepsilon)\big) + \int_\varepsilon^{t_s} \partial_{\widehat{\beta}} f\big(t_{s+1}, z(t_{s+1}) + S(t_{s+1})\big)\, dt_{s+1} + \int_\varepsilon^{t_s} \Big( z'(t_{s+1}) + \sum_{j=1}^{J} \gamma_j c_j t_{s+1}^{\gamma_j-1} \Big) \partial_{\widetilde{\beta}} f\big(t_{s+1}, z(t_{s+1}) + S(t_{s+1})\big)\, dt_{s+1},$$
where $\widehat{\beta} := (\beta_1, \beta_2, \ldots, \beta_s, 1)$ and $\widetilde{\beta} := (\beta_1, \beta_2, \ldots, \beta_s, 2)$. Taking limits on both sides of the above equation as $\varepsilon$ approaches $0{+}$, we obtain
$$\partial_\beta f\big(t_s, z(t_s) + S(t_s)\big) = \partial_\beta f(0, c_0) + \int_0^{t_s} \partial_{\widehat{\beta}} f\big(t_{s+1}, z(t_{s+1}) + S(t_{s+1})\big)\, dt_{s+1} + \int_0^{t_s} \Big( z'(t_{s+1}) + \sum_{j=1}^{J} \gamma_j c_j t_{s+1}^{\gamma_j-1} \Big) \partial_{\widetilde{\beta}} f\big(t_{s+1}, z(t_{s+1}) + S(t_{s+1})\big)\, dt_{s+1}.$$
Using this equality repeatedly, we easily obtain (4.5). This completes the proof.
Lemma 4.2. For any $z \in C_m[0,h]$, we have
$$\| P_{1,h} z \|_{C^m[0,h]} \le C h^\alpha \sum_{j=1}^{m} \| z \|_{C^m[0,h]}^j, \tag{4.6}$$
$$\| P_{1,h} z \|_{C^{m,\alpha}[0,h]} \le C \sum_{j=1}^{m} \| z \|_{C^m[0,h]}^j. \tag{4.7}$$

Lemma 4.3. For any $z \in C_m[0,h]$, we have
$$\| P_{2,h} z \|_{C^m[0,h]} + \| P_{3,h} z \|_{C^m[0,h]} \le C h^\alpha, \tag{4.8}$$
$$\| P_{2,h} z \|_{C^{m,\alpha}[0,h]} + \| P_{3,h} z \|_{C^{m,\alpha}[0,h]} \le C. \tag{4.9}$$
To prove the above two lemmas, we need the following three lemmas.

Lemma 4.4. For any $g \in C^m[0,h]$, we have
$$\| w \|_{C^m[0,h]} \le C h^\alpha \| g \|_{C^m[0,h]}, \tag{4.10}$$
$$\| w \|_{C^{m,\alpha}[0,h]} \le C \| g \|_{C^m[0,h]}, \tag{4.11}$$
where
$$w(x) := \int_0^x (x-t)^{\alpha-1} g(t)\, dt, \quad 0 \le x \le h.$$

Proof. Since $g \in C^m[0,h]$, we easily obtain
$$w^{(i)}(x) = \int_0^x (x-t)^{\alpha-1} g^{(i)}(t)\, dt, \quad 1 \le i \le m.$$
Then (4.10) follows, and (4.11) follows from [7, Theorem 3.1].
Lemma 4.5. Let $\gamma \in \{\gamma_1, \gamma_2, \ldots, \gamma_J\}$ and $l \in \mathbb{N}$ satisfy $l\alpha \le 1$. For any $g \in C^{k,l\alpha}[0,h]$ with $0 \le k \le m$, define
$$w(x) := \int_0^x t^{\gamma-1} g(t)\, dt, \quad 0 < x \le h.$$
Then we have the following results:
• If $(l+1)\alpha \le 1$, then
$$\| w \|_{C^{k,(l+1)\alpha}[0,h]} \le C \| g \|_{C^{k,l\alpha}[0,h]}.$$
• If $(l+1)\alpha > 1$, then
$$\| w \|_{C^{k+1,(l+1)\alpha-1}[0,h]} \le C \| g \|_{C^{k,l\alpha}[0,h]}.$$

Lemma 4.6. For any $z \in C_m[0,h]$ and $\beta \in \Lambda_s$ with $1 \le s \le n$, we have
$$\| w \|_{C^{\min\{m-1,\, n-s\}}[0,h]} \le C \sum_{j=1}^{\min\{m,\, n-s+1\}} \| z \|_{C^m[0,h]}^j,$$
where
$$w(x) := z'(x)\, \partial_\beta f\big(x, z(x) + S(x)\big), \quad 0 \le x \le h.$$
The proofs of Lemmas 4.5 and 4.6 are presented in Appendix A. In the rest of this subsection, we give the proofs of Lemmas 4.2 and 4.3.

Proof of Lemma 4.2. By (4.1), (4.2) and Lemma 4.4, it suffices to show that, for each $\beta \in \Lambda_s$ with $\beta_s = 2$, we have
$$\| g_0 \|_{C^m[0,h]} \le C \sum_{j=1}^{\min\{m,\, n-s+1\}} \| z \|_{C^m[0,h]}^j, \tag{4.12}$$
where, if $s = 1$, then
$$g_0(x) := \int_0^x z'(t)\, \partial_2 f\big(t, z(t) + S(t)\big)\, dt;$$
and if $2 \le s \le n$, then
$$g_0(x) := \int_0^x \left( \frac{1+(-1)^{\beta_1+1}}{2} + \frac{1+(-1)^{\beta_1}}{2} \sum_{j=1}^{J} \gamma_j c_j t^{\gamma_j-1} \right) g_1(t)\, dt,$$
$$g_1(x) := \int_0^x \left( \frac{1+(-1)^{\beta_2+1}}{2} + \frac{1+(-1)^{\beta_2}}{2} \sum_{j=1}^{J} \gamma_j c_j t^{\gamma_j-1} \right) g_2(t)\, dt,$$
$$\vdots$$
$$g_{s-2}(x) := \int_0^x \left( \frac{1+(-1)^{\beta_{s-1}+1}}{2} + \frac{1+(-1)^{\beta_{s-1}}}{2} \sum_{j=1}^{J} \gamma_j c_j t^{\gamma_j-1} \right) g_{s-1}(t)\, dt,$$
$$g_{s-1}(x) := \int_0^x z'(t)\, \partial_\beta f\big(t, z(t) + S(t)\big)\, dt.$$
To do so, we proceed as follows. If $s = 1$, then by Lemma 4.6 we obtain (4.12). Let us suppose that $2 \le s \le n$. From Lemma 4.6 it follows that
$$\| g_{s-1} \|_{C^{\min\{m,\, n-s+1\}}[0,h]} \le C \sum_{j=1}^{\min\{m,\, n-s+1\}} \| z \|_{C^m[0,h]}^j.$$
Then, by the fact that $\gamma_j \ge \alpha$ for all $1 \le j \le J$ and the simple estimate
$$\min\{m, n-s+1\} + (s-1)\alpha > m,$$
applying Lemma 4.5 to $g_{s-2}, g_{s-3}, \ldots, g_0$ successively yields (4.12). This completes the proof of Lemma 4.2.
Proof of Lemma 4.3. Let us first show that
$$\| G_{2,h} z \|_{C^m[0,h]} \le C. \tag{4.13}$$
By (4.3) it suffices to show that, for any $\beta \in \Lambda_n$ with $\beta_n = 2$, we have
$$\| g_0 \|_{C^m[0,h]} \le C, \tag{4.14}$$
where, for all $0 \le x \le h$,
$$g_0(x) := \int_0^x \left( \frac{1+(-1)^{\beta_1+1}}{2} + \frac{1+(-1)^{\beta_1}}{2} \sum_{j=1}^{J} \gamma_j c_j t^{\gamma_j-1} \right) g_1(t)\, dt,$$
$$g_1(x) := \int_0^x \left( \frac{1+(-1)^{\beta_2+1}}{2} + \frac{1+(-1)^{\beta_2}}{2} \sum_{j=1}^{J} \gamma_j c_j t^{\gamma_j-1} \right) g_2(t)\, dt,$$
$$\vdots$$
$$g_{n-2}(x) := \int_0^x \left( \frac{1+(-1)^{\beta_{n-1}+1}}{2} + \frac{1+(-1)^{\beta_{n-1}}}{2} \sum_{j=1}^{J} \gamma_j c_j t^{\gamma_j-1} \right) g_{n-1}(t)\, dt,$$
$$g_{n-1}(x) := \int_0^x \partial_\beta f\big(t, z(t) + S(t)\big) \sum_{j=1}^{J} \gamma_j c_j t^{\gamma_j-1}\, dt.$$
Since
$$\partial_\beta f\big(\cdot, z(\cdot) + S(\cdot)\big) \in C[0,h]$$
and $\gamma_j \ge \alpha$ for all $1 \le j \le J$, we easily obtain
$$\| g_{n-1} \|_{C^{0,\alpha}[0,h]} \le C.$$
Then, applying Lemma 4.5 to $g_{n-2}, g_{n-3}, \ldots, g_0$ successively and using the fact $n\alpha > m$, we obtain (4.14), which implies (4.13). Similarly, we can show that $\| G_{3,h} z \|_{C^m[0,h]} \le C$. Consequently, by (4.1) and Lemma 4.4, we obtain (4.8) and (4.9). This completes the proof.
4.2. Proof of Theorem 3.1

By Lemmas 4.2 and 4.3 there exist two positive constants $C_0$ and $C_1$, depending only on $a$, $\alpha$ and $M$, such that, for all $0 < h \le a$ and $z \in C_m[0,h]$,
$$\| P_{1,h} z \|_{C^m[0,h]} \le C_0 h^\alpha \sum_{j=1}^{m} \| z \|_{C^m[0,h]}^j, \tag{4.15}$$
$$\| P_{2,h} z \|_{C^m[0,h]} + \| P_{3,h} z \|_{C^m[0,h]} \le C_1 h^\alpha. \tag{4.16}$$
In what follows, let $0 < h \le h^*$ and $K > 0$ satisfy (3.6). Define an operator $\mathcal{J} : V \to C[0,h]$ by
$$\mathcal{J} z(x) := c_0 - S(x) + \frac{1}{\Gamma(\alpha)} \int_0^x (x-t)^{\alpha-1} f\big(t, z(t) + S(t)\big)\, dt \quad \text{for all } z \in V,\ x \in [0,h], \tag{4.17}$$
where
$$V := \big\{ v \in C_m[0,h] : \| v \|_{C^m[0,h]} \le K \big\}. \tag{4.18}$$
It is clear that $V$ is a bounded, closed, convex subset of $C^m[0,h]$. The outline of the proof of Theorem 3.1 is as follows: (1) prove that $\mathcal{J}$ maps $V$ to $V$ and that $\mathcal{J} : V \to V$ is a compact operator; (2) apply the famous Schauder fixed-point theorem to prove that $\mathcal{J}$ has a fixed point $z \in V$; (3) verify that $z + S$ is a solution to problem (1.1).

Remark 4.1. Let $\delta > 0$. If we put
$$K := \| Q - S + c_0 \|_{C^m[0,h]} + C_1 a^\alpha + \delta, \qquad h := \min\left\{ h^*,\ \left( \delta^{-1} C_0 \sum_{j=1}^{m} K^j \right)^{-1/\alpha} \right\},$$
then (3.6) holds.

For the operator $\mathcal{J}$, we have the following key result.

Lemma 4.7. For each $z \in V$, we have $\mathcal{J} z \in V$ and
$$\| \mathcal{J} z \|_{C^{m,\gamma}[0,h]} \le \| Q - S + c_0 \|_{C^{m,\gamma}[0,h]} + C \sum_{j=0}^{m} K^j, \tag{4.19}$$
where
$$\gamma := \min\big\{ \alpha,\ \min\{ i + j\alpha : i, j \in \mathbb{N},\ i + j\alpha > m \} \big\}.$$

Proof. Let us first show $\mathcal{J} z \in V$. Using (4.17) and the fact $h \le \big( b\,\Gamma(1+\alpha)/M \big)^{1/\alpha}$, we obtain
$$| \mathcal{J} z(x) + S(x) - c_0 | = \left| \frac{1}{\Gamma(\alpha)} \int_0^x (x-t)^{\alpha-1} f\big(t, z(t) + S(t)\big)\, dt \right| \le \frac{M h^\alpha}{\Gamma(1+\alpha)} \le b$$
for all $x \in [0,h]$, and so
$$\| \mathcal{J} z + S - c_0 \|_{C[0,h]} \le b.$$
By Lemma 4.1 we get
$$\mathcal{J} z(x) = c_0 - S(x) + Q(x) + P_{1,h} z(x) + P_{2,h} z(x) + P_{3,h} z(x), \tag{4.20}$$
and then, by Lemmas 4.2 and 4.3 and the fact $c_0 - S + Q \in C^m[0,h]$, we obtain $\mathcal{J} z \in C_m[0,h]$. It remains therefore to show that
$$\| \mathcal{J} z \|_{C^m[0,h]} \le K. \tag{4.21}$$
Note that (4.20), (4.15) and (4.16) imply
$$\| \mathcal{J} z \|_{C^m[0,h]} \le \| Q - S + c_0 \|_{C^m[0,h]} + C_1 h^\alpha + C_0 h^\alpha \sum_{j=1}^{m} K^j,$$
and then (4.21) follows from (3.6). We have thus shown $\mathcal{J} z \in V$. It remains to show (4.19). By Lemmas 4.2 and 4.3 we have
$$\| P_{1,h} z \|_{C^{m,\alpha}[0,h]} + \| P_{2,h} z \|_{C^{m,\alpha}[0,h]} + \| P_{3,h} z \|_{C^{m,\alpha}[0,h]} \le C \sum_{j=0}^{m} \| z \|_{C^m[0,h]}^j \le C \sum_{j=0}^{m} K^j.$$
From the fact $\gamma \le \alpha$ it follows that
$$\| P_{1,h} z \|_{C^{m,\gamma}[0,h]} + \| P_{2,h} z \|_{C^{m,\gamma}[0,h]} + \| P_{3,h} z \|_{C^{m,\gamma}[0,h]} \le C \sum_{j=0}^{m} K^j.$$
In light of this estimate and the fact that $Q - S + c_0 \in C^{m,\gamma}[0,h]$ by the definitions of $Q$ and $S$, the desired estimate (4.19) follows from (4.20). This completes the proof.

By the famous Arzelà–Ascoli theorem and Lemma 4.7, it is evident that $\mathcal{J} : V \to V$ is a compact operator, where $V$ is endowed with the norm $\|\cdot\|_{C^m[0,h]}$. Moreover, a routine computation yields that $\mathcal{J} : V \to V$ is continuous. Since $V$ is a bounded, closed, convex subset of $C^m[0,h]$, from the Schauder fixed-point theorem [4, Theorem 2.9] we know that there exists $z \in V$ such that
$$\mathcal{J} z = z.$$
Putting
$$y(x) := z(x) + S(x), \quad 0 \le x \le h,$$
we obtain
$$y(x) = c_0 + \frac{1}{\Gamma(\alpha)} \int_0^x (x-t)^{\alpha-1} f\big(t, y(t)\big)\, dt, \quad 0 \le x \le h.$$
By [2, Lemma 2.1], the above $y$ is a solution of (1.1), and then, since $y^*$ is the unique solution of (1.1) on $[0, h^*]$, we have $y^* = y$ on $[0,h]$. Therefore, it is obvious that (3.7) holds. This completes the proof of Theorem 3.1.

4.3. Proof of Corollary 3.1

Let us first state the following fact. For each $1 \le j \le J$, by the definition of $c_j$, a straightforward calculation yields
$$c_j = \sum_{\substack{s \in \mathbb{N} \\ s+\alpha = \gamma_j}} \frac{\partial_1^s f(0, c_0)}{\Gamma(1+s+\alpha)} + r_j, \tag{4.22}$$
where $r_j$ is a constant that depends only on
$$\{ c_i : 1 \le i < j \} \quad \text{and} \quad \big\{ \partial_\beta f(0, c_0) : \beta \in \Lambda_s,\ 1 \le s < n \big\}.$$
Moreover, for $1 \le j \le J$, it is easy to see that $r_j = 0$ if $c_i = 0$ for all $1 \le i < j$.

To prove Corollary 3.1, by Theorem 3.1 it suffices to show that (3.8) is equivalent to
$$c_j = 0 \quad \text{for all } j \in \Theta := \big\{ 1 \le j \le J : \gamma_j \notin \mathbb{N} \big\}. \tag{4.23}$$
Note that, if (3.8) holds, then (4.22) and the fact $\gamma_J \le m$ indicate that
$$c_j = 0 \quad \text{for all } 1 \le j \le J. \tag{4.24}$$
It remains to show that (4.23) implies (3.8). We claim that in this case (4.24) also holds. In fact, if this were false, let
$$j_0 := \min\big\{ 1 \le j \le J : c_j \ne 0 \big\}.$$
Obviously, we have $j_0 > 1$ and $\gamma_{j_0} \in \mathbb{N}$, and, in this case,
$$\sum_{\substack{s \in \mathbb{N} \\ s+\alpha = \gamma_{j_0}}} \frac{\partial_1^s f(0, c_0)}{\Gamma(1+s+\alpha)} = 0.$$
From (4.22) it follows that $c_{j_0} = r_{j_0}$. Since $c_i = 0$ for all $1 \le i < j_0$, we have $r_{j_0} = 0$, and so $c_{j_0} = 0$. This contradicts the definition of $j_0$. Therefore, (4.24) indeed holds. Then, from (4.24) and (4.22) (using (4.24) once more to see that $r_j = 0$) it follows that
$$\sum_{\substack{s \in \mathbb{N} \\ s+\alpha = \gamma_j}} \frac{\partial_1^s f(0, c_0)}{\Gamma(1+s+\alpha)} = 0 \quad \text{for all } 1 \le j \le J,$$
and then, noting the fact
$$\{ i + \alpha : 0 \le i < m \} \subset \{ \gamma_j : 1 \le j \le J \},$$
we obtain (3.8). This completes the proof of Corollary 3.1.
Appendix A. Proofs of Lemmas 4.5 and 4.6

Let $0 < h \le a$. For $k \in \mathbb{N}$ and $0 \le \gamma \le 1$, denote
$$\widetilde{C}^{k,\gamma}[0,h] := \big\{ v \in C^{k,\gamma}[0,h] : v(0) = 0 \big\}.$$
Lemma A.1. For any $g \in C^k[0,h]$ with $1 \le k \le m$, we have
$$\| w \|_{C^{k-1}[0,h]} \le C \| g^{(k)} \|_{C[0,h]},$$
where, for $\gamma \in \{\gamma_1, \gamma_2, \ldots, \gamma_J\}$,
$$w(x) := g(x)\, x^{\gamma-1}, \quad 0 < x \le h.$$
Proof. If $k = 1$, then, by the Mean Value Theorem and the fact $g(0) = 0$, the conclusion is evident; therefore, below we assume that $2 \le k \le m$. Let us first show that, for $0 \le i < k$,
$$\| w_i \|_{C[0,h]} \le C \| g^{(k)} \|_{C[0,h]}, \tag{A.1}$$
where
$$w_i(x) := w^{(i)}(x), \quad 0 < x \le h. \tag{A.2}$$
In fact, it is easy to see that
$$w_i(x) = \sum_{j=0}^{i} c_{ij}\, g^{(j)}(x)\, x^{\gamma-1-i+j}, \quad 0 < x \le h, \tag{A.3}$$
where $c_{ij}$ is a constant that depends only on $\gamma$, $i$ and $j$ for all $0 \le j \le i$. Since $g \in C^k[0,h]$, we have $g^{(j)} \in C^{k-j}[0,h]$, and then applying Taylor's formula with integral remainder yields
$$g^{(j)}(x) = \frac{1}{(k-j-1)!} \int_0^x (x-t)^{k-j-1} g^{(k)}(t)\, dt, \quad 0 \le x \le h.$$
It follows that
$$\big| g^{(j)}(x)\, x^{\gamma-1-i+j} \big| \le \frac{\| g^{(k)} \|_{C[0,h]}}{(k-j)!}\, x^{\gamma+k-(i+1)}, \quad 0 < x \le h.$$
Since $\gamma + k - (i+1) \ge \gamma > 0$, the above estimate implies
$$\big\| g^{(j)}(\cdot)\, (\cdot)^{\gamma-i-1+j} \big\|_{C[0,h]} \le C \| g \|_{C^k[0,h]}.$$
Therefore, by (A.3) we obtain (A.1).

Next, let $i < k-1$. Note that by (A.2) we have
$$w_i'(x) = w_{i+1}(x), \quad 0 < x \le h.$$
Since we have already proved that $w_i, w_{i+1} \in C[0,h]$, by the Mean Value Theorem it is evident that $w_i \in C^1[0,h]$ and
$$w_i'(x) = w_{i+1}(x), \quad 0 \le x \le h.$$
Consequently, we obtain $w_0 \in C^{k-1}[0,h]$ and
$$w_0^{(i)} = w_i, \quad 0 \le i < k,$$
and hence, by (A.1), we have
$$\| w_0 \|_{C^{k-1}[0,h]} \le C \| g \|_{C^k[0,h]}.$$
By noting the fact $w = w_0$, this completes the proof.
Lemma A.2. Let $l \in \mathbb{N}$ be such that $(l+1)\alpha \le 1$. For any $g \in C^{k,l\alpha}[0,h]$ with $1 \le k \le m$, we have
$$\| w \|_{C^{k-1,(l+1)\alpha}[0,h]} \le C \| g \|_{C^{k,l\alpha}[0,h]}, \tag{A.4}$$
where, for $\gamma \in \{\gamma_1, \gamma_2, \ldots, \gamma_J\}$,
$$w(x) := x^{\gamma-1} g(x), \quad 0 < x \le h.$$
Proof. By Lemma A.1 we have $w \in C^{k-1}[0,h]$, and from the proof of Lemma A.1 we see that
$$w^{(k-1)}(x) = \sum_{j=0}^{k-1} e_j v_j(x),$$
where, for each $0 \le j \le k-1$,
$$v_j(x) := x^{\gamma-1-j} g^{(k-1-j)}(x), \quad 0 < x \le h,$$
and $e_j$ is a constant that depends only on $\alpha$, $\gamma$ and $j$. It remains, therefore, to show that
$$\| v_j \|_{C^{0,(l+1)\alpha}[0,h]} \le C \| g \|_{C^{k,l\alpha}[0,h]}, \quad 0 \le j \le k-1. \tag{A.5}$$
To this end, let $0 < x < y \le h$. A simple calculation gives
$$| v_j(y) - v_j(x) | \le I_1 + I_2,$$
where
$$I_1 := y^{\gamma-1-j} \big| g^{(k-1-j)}(y) - g^{(k-1-j)}(x) \big|, \qquad I_2 := \big| g^{(k-1-j)}(x) \big|\, \big| y^{\gamma-1-j} - x^{\gamma-1-j} \big|.$$
Let us first estimate $I_1$. Using the fact $g \in C^{k,l\alpha}[0,h]$, we obtain
$$\big| g^{(k-1-j)}(y) - g^{(k-1-j)}(x) \big| \le C \| g \|_{C^{k,l\alpha}[0,h]}\, (y-x)\, y^{\,j+l\alpha}.$$
It follows that
$$I_1 \le C \| g \|_{C^{k,l\alpha}[0,h]}\, (y-x)\, y^{\,l\alpha+\gamma-1} \le C \| g \|_{C^{k,l\alpha}[0,h]}\, (y-x)^{(l+1)\alpha} \left( \frac{y-x}{y} \right)^{1-(l+1)\alpha} y^{\,\gamma-\alpha}.$$
Then, from the facts $(l+1)\alpha \le 1$ and $\alpha \le \gamma$, we obtain
$$I_1 \le C \| g \|_{C^{k,l\alpha}[0,h]}\, (y-x)^{(l+1)\alpha}. \tag{A.6}$$
Next we turn to estimating $I_2$; we claim that
$$I_2 \le C \| g \|_{C^{k,l\alpha}[0,h]}\, (y-x)^{(l+1)\alpha}. \tag{A.7}$$
In fact, if $\gamma - 1 - j \ge 1$, then (A.7) follows immediately from the estimate
$$\big| y^{\gamma-1-j} - x^{\gamma-1-j} \big| \le C (y-x) \le C (y-x)^{(l+1)\alpha}.$$
And, if $(l+1)\alpha \le \gamma - 1 - j < 1$, then (A.7) follows from the estimate
$$\big| y^{\gamma-1-j} - x^{\gamma-1-j} \big| < (y-x)^{\gamma-1-j} \le C (y-x)^{(l+1)\alpha}.$$
Thus, it remains to prove the case $\gamma - 1 - j < (l+1)\alpha$. Using the fact $g \in C^{k,l\alpha}[0,h]$, we obtain
$$\big| g^{(k-1-j)}(x) \big| \le C \| g \|_{C^{k,l\alpha}[0,h]}\, x^{1+j+l\alpha},$$
and hence
$$I_2 \le C \| g \|_{C^{k,l\alpha}[0,h]}\, x^{1+j+l\alpha} \big| y^{\gamma-1-j} - x^{\gamma-1-j} \big| \le C \| g \|_{C^{k,l\alpha}[0,h]}\, x^{\gamma-\alpha} x^{(l+1)\alpha+1+j-\gamma} \big| y^{\gamma-1-j} - x^{\gamma-1-j} \big| \le C \| g \|_{C^{k,l\alpha}[0,h]}\, x^{\gamma-\alpha} (y-x)^{(l+1)\alpha} (A-1)^{(l+1)\alpha+1+j-\gamma} \big| A^{\gamma-1-j} - (A-1)^{\gamma-1-j} \big|,$$
where $A := \frac{y}{y-x}$. From the fact $\alpha \le \gamma$ it follows that
$$I_2 \le C \| g \|_{C^{k,l\alpha}[0,h]}\, (y-x)^{(l+1)\alpha} (A-1)^{(l+1)\alpha+1+j-\gamma} \big| A^{\gamma-1-j} - (A-1)^{\gamma-1-j} \big|. \tag{A.8}$$
If $\gamma - 1 - j > 0$, then, by the facts $(l+1)\alpha + 1 + j - \gamma > 0$, $(l+1)\alpha - 1 \le 0$, and $1 < A < \infty$, we easily obtain
$$(A-1)^{(l+1)\alpha+1+j-\gamma} \big( A^{\gamma-1-j} - (A-1)^{\gamma-1-j} \big) \le C,$$
and then (A.7) follows from (A.8) directly. If $\gamma - 1 - j = 0$, then it is evident that (A.7) holds. If $\gamma - 1 - j < 0$, then, by the facts $(l+1)\alpha > 0$, $(l+1)\alpha - 1 \le 0$, and $1 < A < \infty$, we easily obtain
$$(A-1)^{(l+1)\alpha+1+j-\gamma} \big( (A-1)^{\gamma-1-j} - A^{\gamma-1-j} \big) \le C,$$
and then (A.7) follows from (A.8).
Finally, by (A.6) and (A.7) we readily obtain
$$\big| v_j(y) - v_j(x) \big| \le C \| g \|_{C^{k,l\alpha}[0,h]}\, (y-x)^{(l+1)\alpha}.$$
This implies (A.5), and thus completes the proof.
Using the same technique as in the proofs of Lemmas A.1 and A.2, we can easily prove the following lemma.

Lemma A.3. Let $l \in \mathbb{N}$ satisfy $l\alpha \le 1 < (l+1)\alpha$. For any $g \in C^{k,l\alpha}[0,h]$ with $0 \le k \le m$, we have
$$\| w \|_{C^{k,(l+1)\alpha-1}[0,h]} \le C \| g \|_{C^{k,l\alpha}[0,h]},$$
where, for $\gamma \in \{\gamma_1, \gamma_2, \ldots, \gamma_J\}$,
$$w(x) := x^{\gamma-1} g(x), \quad 0 < x \le h.$$
Proof of Lemma 4.5. In light of Lemmas A.2 and A.3, it remains to show that, for any $g \in C^{0,l\alpha}[0,h]$ with $l \in \mathbb{N}$ and $(l+1)\alpha \le 1$, we have
$$\| w \|_{C^{0,(l+1)\alpha}[0,h]} \le C \| g \|_{C^{0,l\alpha}[0,h]}. \tag{A.9}$$
It is easy to see that $w \in C[0,h]$. Let $0 < x < y \le h$. By the fact $g \in C^{0,l\alpha}[0,h]$, we obtain
$$| w(y) - w(x) | \le C \| g \|_{C^{0,l\alpha}[0,h]} \big( y^{\gamma+l\alpha} - x^{\gamma+l\alpha} \big).$$
If $\gamma + l\alpha \ge 1$, then we get
$$| w(y) - w(x) | \le C \| g \|_{C^{0,l\alpha}[0,h]} (y-x) \le C \| g \|_{C^{0,l\alpha}[0,h]} (y-x)^{(l+1)\alpha}.$$
If $\gamma + l\alpha < 1$, then, by the fact $\alpha \le \gamma$, we have
$$| w(y) - w(x) | \le C \| g \|_{C^{0,l\alpha}[0,h]} (y-x)^{\gamma+l\alpha} \le C \| g \|_{C^{0,l\alpha}[0,h]} (y-x)^{(l+1)\alpha}.$$
Consequently, we obtain (A.9), and thus complete the proof of Lemma 4.5.
Finally, we present a result more general than Lemma 4.6. For any $w \in C[0,h]$ and $\beta \in \Lambda_s$ with $1 \le s \le n$, define $T_{w,\beta,h} : C_m[0,h] \to C[0,h]$ by
$$T_{w,\beta,h} z(x) := w(x)\, \partial_\beta f\big(x, z(x) + S(x)\big) \quad \text{for all } 0 \le x \le h,\ z \in C_m[0,h].$$
Lemma A.4. Let $0 \le k \le m-1$, $w \in C^k[0,h]$, and $\beta \in \Lambda_s$ with $1 \le s \le n$. For any $z \in C_m[0,h]$, we have
$$\| T_{w,\beta,h} z \|_{C^{\min\{k,\, n-s\}}[0,h]} \le C \| w \|_{C^k[0,h]} \sum_{j=0}^{\min\{k,\, n-s\}} \| z \|_{C^m[0,h]}^j. \tag{A.10}$$
Proof. We prove the lemma by mathematical induction on $k$. Firstly, it is clear that (A.10) holds in the case $k = 0$. Secondly, assuming that (A.10) holds for $k = l$ with $0 \le l < m-1$, let us prove that (A.10) holds for $k = l+1$. To this end, a straightforward calculation gives, for all $0 < x \le h$,
$$(T_{w,\beta,h} z)'(x) = T_{w',\beta,h} z(x) + T_{w,\widehat{\beta},h} z(x) + T_{\widetilde{w},\widetilde{\beta},h} z(x), \tag{A.11}$$
where $\widehat{\beta} := (\beta_1, \beta_2, \ldots, \beta_s, 1)$, $\widetilde{\beta} := (\beta_1, \beta_2, \ldots, \beta_s, 2)$, and
$$\widetilde{w}(x) := w(x) \Big( z'(x) + \sum_{j=1}^{J} \gamma_j c_j x^{\gamma_j-1} \Big).$$
Since $w \in C^k[0,h]$, we have $w' \in C^{k-1}[0,h]$, and by Lemma A.1 we get $\widetilde{w} \in C^{k-1}[0,h]$; consequently, $T_{w',\beta,h} z$ and $T_{\widetilde{w},\widetilde{\beta},h} z$ are well-defined, and they both belong to $C[0,h]$. Therefore, from the Mean Value Theorem and the fact $T_{w,\widehat{\beta},h} z \in C[0,h]$, it follows that $T_{w,\beta,h} z \in C^1[0,h]$ and that (A.11) holds for all $0 \le x \le h$. By our induction assumption, we have the following results:
$$\| T_{w',\beta,h} z \|_{C^{\min\{k-1,\, n-s\}}[0,h]} \le C \| w' \|_{C^{k-1}[0,h]} \sum_{j=0}^{\min\{k-1,\, n-s\}} \| z \|_{C^m[0,h]}^j,$$
$$\| T_{w,\widehat{\beta},h} z \|_{C^{\min\{k-1,\, n-s-1\}}[0,h]} \le C \| w \|_{C^{k-1}[0,h]} \sum_{j=0}^{\min\{k-1,\, n-s-1\}} \| z \|_{C^m[0,h]}^j,$$
$$\| T_{\widetilde{w},\widetilde{\beta},h} z \|_{C^{\min\{k-1,\, n-s-1\}}[0,h]} \le C \| \widetilde{w} \|_{C^{k-1}[0,h]} \sum_{j=0}^{\min\{k-1,\, n-s-1\}} \| z \|_{C^m[0,h]}^j.$$
In addition, by Lemma A.1 we easily obtain
$$\| \widetilde{w} \|_{C^{k-1}[0,h]} \le C \| w \|_{C^k[0,h]} \big( 1 + \| z \|_{C^{k-1}[0,h]} \big).$$
As a consequence, we obtain
$$\| (T_{w,\beta,h} z)' \|_{C^{\min\{k-1,\, n-s-1\}}[0,h]} \le C \| w \|_{C^k[0,h]} \sum_{j=0}^{\min\{k,\, n-s\}} \| z \|_{C^m[0,h]}^j,$$
i.e. (A.10) holds. This completes the proof.
References

[1] K. Diethelm, Smoothness properties of solutions of Caputo-type fractional differential equations, Fract. Calc. Appl. Anal. 10 (2007) 151–160.
[2] K. Diethelm, N.J. Ford, Analysis of fractional differential equations, J. Math. Anal. Appl. 265 (2002) 229–249.
[3] W. Deng, Smoothness and stability of the solutions for nonlinear fractional differential equations, Nonlinear Anal. 72 (2010) 1768–1777.
[4] H. Le Dret, Nonlinear Elliptic Partial Differential Equations, Springer, Cham, 2018.
[5] C. Lubich, Runge–Kutta theory for Volterra and Abel integral equations of the second kind, Math. Comput. 41 (1983) 87–102.
[6] R.K. Miller, A. Feldstein, Smoothness of solutions of Volterra integral equations with weakly singular kernels, SIAM J. Math. Anal. 2 (1971) 242–258.
[7] S.G. Samko, A.A. Kilbas, O.I. Marichev, Fractional Integrals and Derivatives: Theory and Applications, Gordon and Breach, Yverdon, 1993.