Nonlinear Analysis: Real World Applications 51 (2020) 103002
Analytical and numerical solutions of a class of nonlinear integro-differential equations with L1 kernels✩

Da Xu
Department of Mathematics, Hunan Normal University, Changsha 410081, Hunan, People's Republic of China
Article history: Received 20 February 2019; Accepted 5 August 2019; Available online 16 August 2019.
Keywords: Nonlinear and nonlocal hyperbolic integro-differential equations; L1 weakly singular kernel; Global existence and uniqueness; Galerkin methods; Euler methods; Optimal error estimates.
Abstract. We study finite element approximations to a general class of nonlinear and nonlocal hyperbolic integro-differential equations with L^1 convolution kernels. Continuous-time Galerkin procedures are defined, and global existence of a unique discrete solution is derived. Moreover, optimal error estimates are shown in the L^∞(H_0^1(Ω)) norm. For the completely discrete scheme, a linearized backward Euler method is defined, and error estimates in the l^∞(H_0^1(Ω)) norm are proved. Several numerical experiments are reported to confirm our theoretical findings.
1. Introduction

Let Ω be a bounded domain in R^d (d ≥ 1) with smooth boundary ∂Ω, and consider the following nonlinear and nonlocal hyperbolic integro-differential equation

u_tt(x, t) + M(∫_0^t ∫_Ω |∇u(x, s)|^2 dx ds) u_t(x, t) − △u(x, t) + ∫_0^t β(t − s)△u(x, s) ds = f(x, t),   (1.1)

in Ω × (0, ∞), taken together with the Dirichlet boundary and initial conditions

u(x, t) = 0, on ∂Ω × (0, ∞),   (1.2)

u(x, 0) = u_0(x), u_t(x, 0) = u_1(x), in Ω.   (1.3)

Here ∇ is the gradient operator in R^d and △ = Σ_{j=1}^d ∂^2/∂x_j^2 the Laplacian, M(w) is a function from R^+ into
(0, ∞); f(x, t), u_0(x) and u_1(x) are given functions on their respective domains of definition, and the kernel in the memory term is weakly singular at the origin,

β(t) = e^{−at} t^{−α}/Γ(1 − α), a > 1, 0 < α < 1.   (1.4)

✩ This work was supported in part by the National Natural Science Foundation of China, grant numbers 11271123 and 11671131.
E-mail address: [email protected].
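For the kernel (1.4) one can verify its basic properties numerically; in particular, ∫_0^∞ e^{−at} t^{−α} dt = Γ(1 − α) a^{α−1}, so k(0) = ∫_0^∞ β(t) dt = a^{α−1}, which is less than 1 precisely when a > 1. The following sketch (our own illustration, not part of the paper; the function name is hypothetical) checks this in pure Python, using the substitution s = t^{1−α} to remove the singularity at t = 0:

```python
import math

def k0(a, alpha, n=4000, T=40.0):
    """Approximate k(0) = ∫_0^∞ β(t) dt for β(t) = e^{-a t} t^{-α} / Γ(1-α).

    The substitution t = s^{1/(1-α)} turns ∫_0^1 t^{-α} e^{-a t} dt into a
    smooth integral, evaluated by the composite midpoint rule; the tail
    ∫_1^T is smooth, and the remainder beyond T = 40 is negligible here.
    """
    g = math.gamma(1.0 - alpha)
    # ∫_0^1 t^{-α} e^{-a t} dt = (1/(1-α)) ∫_0^1 e^{-a s^{1/(1-α)}} ds
    h = 1.0 / n
    core = sum(math.exp(-a * ((i + 0.5) * h) ** (1.0 / (1.0 - alpha)))
               for i in range(n)) * h / (1.0 - alpha)
    # smooth tail ∫_1^T t^{-α} e^{-a t} dt by the midpoint rule
    ht = (T - 1.0) / n
    tail = sum(((1.0 + (i + 0.5) * ht) ** (-alpha)) * math.exp(-a * (1.0 + (i + 0.5) * ht))
               for i in range(n)) * ht
    return (core + tail) / g

a, alpha = 2.0, 0.5
approx = k0(a, alpha)
exact = a ** (alpha - 1.0)  # closed form of ∫_0^∞ β(t) dt
print(approx, exact)        # both below 1, consistent with k(0) < 1 for a > 1
```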
Our interest is mainly motivated by the fact that the above equation may be regarded as a model problem for some elastic systems with memory; see [1–7] and the references therein. It is noteworthy that the kernels β(t) and k(t) = ∫_t^∞ β(τ) dτ are completely monotonic on (0, ∞), belong to L^1(0, 1), and satisfy β(∞) = k(∞) = 0, β(0^+) = ∞ and k(0) < 1. The hyperbolic integro-differential equation (1.1) is believed to provide a powerful tool for modeling a wide range of natural phenomena in physics [1,7], biology [6] and chemistry [4]. In [7], Cannarsa and Sforza presented maximal regularity and the asymptotic behavior at infinity of solutions of the nonlinear integro-differential equation (1.1).

Due to these potential applications, the numerical simulation and analysis of hyperbolic integro-differential equations have received much attention. For example, Larsson and Saedpanah [8] discussed a Galerkin method for solving the linear hyperbolic integro-differential equation with weakly singular kernels. Saedpanah [9] studied stability and convergence of a continuous space–time finite element scheme. Cannon et al. [10] considered accuracy and stability of a Galerkin method for a hyperbolic integro-differential equation. Allegretto and Lin [11] presented an error analysis of a fully discrete finite element method for hyperbolic integro-differential equations. For further developments in numerical methods and analysis for hyperbolic integro-differential equations with fractional kernels, we refer the readers to [12–18] and the references therein.

The works above are interesting and instructive, but most of them focus on the analysis of numerical schemes for linear problems. In this paper we consider numerical solutions of the nonlinear hyperbolic integro-differential equation (1.1), and propose a Galerkin finite element scheme. In order to obtain global existence and well-posedness of the nonlinear problem (1.1)–(1.3), we use the following assumptions on M(w) (see [7]).
(M1) There exist positive constants m_0 and m_1 such that for 0 ≤ w ≤ Q,

m_0 ≤ M(w) ≤ m_1,   (1.5)

where the constant m_1 depends on Q.

(M2) The function M(w) is continuously differentiable with 0 ≤ M′(w) ≤ L for w ≥ 0, where L is a positive constant.

As a consequence of (M2), the function M(w) is Lipschitz continuous with Lipschitz constant L; that is, for any w_1, w_2 > 0,

|M(w_1) − M(w_2)| ≤ L |w_1 − w_2|.   (1.6)
As examples, consider the following two functions, both satisfying (M1) and (M2): M_1(w) = 1 + w and M_2(w) = (1 + w)^{1/2}.

A weak (generalized) solution of (1.1)–(1.3) is a function

u(x, t) ∈ C^1([0, T]; L^2(Ω)) ∩ C([0, T]; H_0^1(Ω))

such that, for any v(x) ∈ H_0^1(Ω), (u(·, t), v) ∈ C([0, T]) and, for any t ∈ (0, T], one has

(u_tt(t), v) + M(∫_0^t ∫_Ω |∇u(x, s)|^2 dx ds)(u_t(t), v) + (∇u(t), ∇v) − ∫_0^t β(t − s)(∇u(s), ∇v) ds = (f(t), v), ∀v ∈ H_0^1(Ω).   (1.7)
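Returning briefly to assumptions (M1)–(M2): for M_2(w) = (1 + w)^{1/2} one has M_2′(w) = 1/(2√(1+w)) ≤ 1/2, so (M2) holds with L = 1/2. A quick numerical sanity check of the resulting Lipschitz bound (1.6) (our own illustration, not from the paper):

```python
# Check |M2(w1) - M2(w2)| <= L |w1 - w2| with L = 1/2 on a grid of points w >= 0.
def M2(w):
    return (1.0 + w) ** 0.5

L = 0.5
pts = [i * 0.37 for i in range(200)]   # arbitrary sample points in [0, ~74]
worst = max(abs(M2(a) - M2(b)) / abs(a - b)
            for a in pts for b in pts if a != b)
print(worst)  # the largest difference quotient stays below L = 0.5
```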
Here L^2(Ω), H^m(Ω) and H_0^m(Ω), m ∈ Z^+, denote the usual Lebesgue and Sobolev spaces on Ω. (·, ·) denotes the L^2(Ω) inner product and ∥u∥^2 = (u, u) its induced norm. For a normed linear space X with norm ∥·∥_X, C^m([0, T]; X), m = 0, 1, stands for the space of continuous functions from [0, T] to X having continuous derivatives up to order m on [0, T]. In particular, we write C([0, T]; X) for C^0([0, T]; X).

Cannarsa and Sforza [7] established global existence and well-posedness for the weak solution of problem (1.1)–(1.3); we state their result in the following theorem.

Theorem 1.1 (See [7, Corollary 4.7]). Under assumption (M2), for any u_0 ∈ H_0^1(Ω), u_1 ∈ L^2(Ω) and f ∈ L^1(0, ∞; L^2(Ω)), problem (1.1)–(1.4) possesses a unique weak solution u(t) on [0, ∞) such that

∥u_t(t)∥^2 + ∥∇u(t)∥^2 ≤ C(∥u_1∥^2 + ∥∇u_0∥^2 + ∥f∥_1^2),   (1.8)

for any t ≥ 0 and some constant C > 0, where L^1(0, ∞; L^2(Ω)) denotes the usual space of measurable functions f(x, t): [0, ∞) → L^2(Ω) such that

∥f∥_1 = ∫_0^∞ ∥f(t)∥ dt < ∞.   (1.9)
Let S_h be a finite-dimensional subspace of H_0^1(Ω) with the approximation property

inf_{χ∈S_h} {∥v − χ∥ + h∥∇(v − χ)∥} ≤ C h^2 ∥v∥_{H^2(Ω)},   (1.10)

where C is a positive constant independent of h and v ∈ H^2(Ω) ∩ H_0^1(Ω). In applications, h is typically the maximum diameter of the triangles in the triangulation underlying the definition of the finite element space S_h.

The continuous Galerkin approximation to the solution u of (1.1)–(1.3) is defined to be a solution u_h(t) ∈ S_h such that

(u_{h,tt}(t), χ) + M(∫_0^t ∥∇u_h(s)∥^2 ds)(u_{h,t}(t), χ) + (∇u_h(t), ∇χ) − ∫_0^t β(t − s)(∇u_h(s), ∇χ) ds = (f(t), χ), ∀χ ∈ S_h, t > 0,   (1.11)

u_h(0) = u_{0h}, u_{h,t}(0) = u_{1h},   (1.12)

where u_{0h} and u_{1h}, belonging to S_h, are suitable projections of u_0 and u_1 into S_h, respectively. Note that (1.11)–(1.12) is actually a system of nonlinear integro-differential equations. The existence of a unique solution to (1.11)–(1.12) on some finite interval [0, t_h), t_h > 0, follows from Picard's theorem. For global existence of the solution for all t > 0, it is enough to show the boundedness of u_h(t) for t > 0; we shall prove the following bound (see Lemma 2.1),

∥u_{h,t}(t)∥^2 + ∥∇u_h(t)∥^2 ≤ C(∥u_{1h}∥^2 + ∥∇u_{0h}∥^2 + ∥f∥_1^2).   (1.13)
We then turn to the discretization of (1.11)–(1.12) in time. We introduce a uniform partition 0 = t_0 < t_1 < t_2 < · · · < t_N = T of the time interval [0, T], with t_n = n∆t and time step ∆t = t_n − t_{n−1}, 1 ≤ n ≤ N, and let U^n be the approximation of u(t_n). We define the first- and second-order backward difference quotients by ∂t U^n = (U^n − U^{n−1})/∆t and ∂t^2 U^n = (U^n − 2U^{n−1} + U^{n−2})/(∆t)^2, respectively. The discrete time Galerkin approximation to (1.1)–(1.3) is defined to be a family {U^n}_{n=0}^N in S_h such that

(∂t^2 U^n, χ) + M(∆t Σ_{j=0}^{n−1} ∥∇U^j∥^2)(∂t U^n, χ) + (∇U^n, ∇χ) − Σ_{j=1}^n β_{n−j}(∇U^j, ∇χ) = (f^n, χ), ∀χ ∈ S_h, n ≥ 2,   (1.14)

U^1 = U^0 + ∆t u_{1h}, U^0 = u_{0h},   (1.15)
where

β_j = ∫_{t_j}^{t_{j+1}} β(s) ds.   (1.16)

The discrete problem (1.14)–(1.15) yields a system of linear algebraic equations at each t = t_n. Thus, using an a priori bound for U^n, it is easy to show that this system has a unique solution at each time level t = t_n. As we shall see, we may derive an error estimate of the form

∥∇(U^n − u(t_n))∥ ≤ C(u)(h + ∆t).   (1.17)
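The weights (1.16) can be computed by quadrature; only β_0 involves the t^{−α} singularity, which the substitution s = t^{1−α} removes. The sketch below (our own illustration; the helper name is hypothetical) also confirms that the weights are positive and decreasing, with total mass below k(0) = a^{α−1}:

```python
import math

def beta_weight(j, dt, a, alpha, n=2000):
    """β_j = ∫_{t_j}^{t_{j+1}} e^{-a s} s^{-α} / Γ(1-α) ds via the midpoint rule.

    For j = 0 we substitute s = t^{1-α}, which makes the integrand smooth.
    """
    g = math.gamma(1.0 - alpha)
    if j == 0:
        # ∫_0^{dt} t^{-α} e^{-a t} dt = (1/(1-α)) ∫_0^{dt^{1-α}} e^{-a v^{1/(1-α)}} dv
        b = dt ** (1.0 - alpha)
        h = b / n
        total = sum(math.exp(-a * ((i + 0.5) * h) ** (1.0 / (1.0 - alpha)))
                    for i in range(n)) * h / (1.0 - alpha)
    else:
        h = dt / n
        t0 = j * dt
        total = sum(((t0 + (i + 0.5) * h) ** (-alpha)) * math.exp(-a * (t0 + (i + 0.5) * h))
                    for i in range(n)) * h
    return total / g

a, alpha, dt = 2.0, 0.5, 0.1
w = [beta_weight(j, dt, a, alpha) for j in range(20)]
print(w[:3])
print(all(x > y > 0 for x, y in zip(w, w[1:])), sum(w) < a ** (alpha - 1.0))
```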
This error estimate is optimal in the H_0^1(Ω) norm under condition (1.10). Under an additional appropriate condition on the triangulation, we believe that the optimal L^2(Ω)-norm error estimate O(h^2) can also be proved; this remains an open problem.

Earlier work on the numerical solution of the linear version of (1.1), with M(s) = 0, s ≥ 0, was done by, e.g., Cannon et al. [10] and Larsson et al. [19]; we comment on these papers below. Larsson and Saedpanah [8] considered continuous Galerkin methods in both space and time; see also Saedpanah [9]. Xu [12] studied Crank–Nicolson type time stepping methods based on the Galerkin finite element method in space and, moreover, derived optimal error estimates in the long-time uniform norm. Xu [13] studied the orthogonal spline collocation (OSC) method for the linear hyperbolic integro-differential equation. Fairweather [16] analyzed spline collocation methods for hyperbolic partial integro-differential equations; he established existence and uniqueness of the OSC solution for sufficiently small h and derived optimal-order L^2 error estimates. OSC methods for partial integro-differential equations were first considered by Yanik and Fairweather [15], who formulated and analyzed discrete-time OSC methods for hyperbolic partial integro-differential equations in one space variable.

The rest of the paper is organized as follows. In Section 2, we deal with the analysis of the semidiscrete solution. Section 3 is concerned with the fully discrete schemes and their error analysis. Finally, Section 4 describes some simple numerical experiments, the results of which confirm our theoretical findings.

2. Discretization in space

In this section we study the discretization in space of the nonlinear hyperbolic integro-differential equation (1.1)–(1.3).
The numerical solution is sought in a family {S_h} ⊂ H_0^1(Ω), depending on a small parameter h, which we assume to have property (1.10), uniformly in h. The semidiscrete Galerkin finite element solution u_h(t): [0, ∞) → S_h is defined by

(u_{h,tt}(t), χ) + M(∫_0^t ∥∇u_h(s)∥^2 ds)(u_{h,t}(t), χ) + (∇u_h(t), ∇χ) − ∫_0^t β(t − s)(∇u_h(s), ∇χ) ds = (f(t), χ), ∀χ ∈ S_h, t > 0,   (2.1a)

u_h(0) = u_{0h}, u_{h,t}(0) = u_{1h},   (2.1b)

where u_{0h} and u_{1h} are appropriate approximations of u_0 and u_1 in S_h, respectively. By an argument closely analogous to that in [7, Corollary 4.7], to obtain global existence of the solution u_h(t) for all t > 0 it is enough to show the boundedness of u_h(t) for t > 0, which is stated below.

Lemma 2.1. Assume that f ∈ L^1(0, ∞; L^2(Ω)), and that u_{0h}, u_{1h} are the approximations of u_0 and u_1 in S_h, respectively. Then we have, for the solution of (2.1),

∥u_{h,t}(t)∥^2 + ∥∇u_h(t)∥^2 ≤ C(∥u_{1h}∥^2 + ∥∇u_{0h}∥^2 + ∥f∥_1^2),   (2.2)

where C is a positive constant depending on k(0).
Proof. Since u_{h,t}(t) ∈ S_h we may choose χ = u_{h,t}(t) in (2.1a) to obtain

(1/2)(d/dt)∥u_{h,t}(t)∥^2 + M(∫_0^t ∥∇u_h(s)∥^2 ds)∥u_{h,t}(t)∥^2 + (1/2)(d/dt)∥∇u_h(t)∥^2 − ∫_0^t β(t − s)(∇u_h(s), ∇u_{h,t}(t)) ds = (f(t), u_{h,t}(t)), t > 0.   (2.3)

Integrating the above identity from 0 to s, 0 < s < t, and using (1.5) and the fact that k_t(t) = −β(t),

(1/2)∥u_{h,t}(s)∥^2 + m_0 ∫_0^s ∥u_{h,t}(τ)∥^2 dτ + (1/2)∥∇u_h(s)∥^2 + ∫_0^s ∫_0^t k_t(t − τ)(∇u_h(τ), ∇u_{h,t}(t)) dτ dt ≤ (1/2)∥u_{1h}∥^2 + (1/2)∥∇u_{0h}∥^2 + ∫_0^s ∥f(t)∥∥u_{h,t}(t)∥ dt.   (2.4)

Using integration by parts, the convolution term is written

∫_0^t k_t(t − τ)∇u_h(τ) dτ = k(t)∇u_{0h} − k(0)∇u_h(t) + ∫_0^t k(t − τ)∇u_{h,t}(τ) dτ.   (2.5)

Using the above identity to evaluate the fourth term on the left-hand side of (2.4), together with

∫_0^s ∫_0^t k(t − τ)(∇u_{h,t}(τ), ∇u_{h,t}(t)) dτ dt ≥ 0,

we conclude that

∫_0^s ∫_0^t k_t(t − τ)(∇u_h(τ), ∇u_{h,t}(t)) dτ dt = (∇u_{0h}, ∫_0^s k(t)∇u_{h,t}(t) dt) − (k(0)/2)∥∇u_h(s)∥^2 + (k(0)/2)∥∇u_{0h}∥^2 + ∫_0^s ∫_0^t k(t − τ)(∇u_{h,t}(τ), ∇u_{h,t}(t)) dτ dt ≥ (∇u_{0h}, ∫_0^s k(t)∇u_{h,t}(t) dt) − (k(0)/2)∥∇u_h(s)∥^2 + (k(0)/2)∥∇u_{0h}∥^2.   (2.6)

Again we use integration by parts. This gives

∫_0^s k(t)∇u_{h,t}(t) dt = k(s)∇u_h(s) − k(0)∇u_{0h} + ∫_0^s β(t)∇u_h(t) dt.   (2.7)

Then (2.6) and (2.7) imply

∫_0^s ∫_0^t k_t(t − τ)(∇u_h(τ), ∇u_{h,t}(t)) dτ dt ≥ k(s)(∇u_{0h}, ∇u_h(s)) + ∫_0^s β(τ)(∇u_{0h}, ∇u_h(τ)) dτ − (k(0)/2)∥∇u_h(s)∥^2 − (k(0)/2)∥∇u_{0h}∥^2.   (2.8)

Using (2.4) and (2.8) we obtain

(1/2)∥u_{h,t}(s)∥^2 + m_0 ∫_0^s ∥u_{h,t}(τ)∥^2 dτ + (ν_0/2)∥∇u_h(s)∥^2 ≤ (1/2)∥u_{1h}∥^2 + (1/2)∥∇u_{0h}∥^2 (1 + ∫_0^∞ β(τ) dτ) − k(s)(∇u_{0h}, ∇u_h(s)) − ∫_0^s β(τ)(∇u_{0h}, ∇u_h(τ)) dτ + ∫_0^s ∥f(t)∥∥u_{h,t}(t)∥ dt,   (2.9)

where ν_0 = 1 − k(0) = 1 − ∫_0^∞ β(t) dt > 0.
From (2.9) we get

(1/2)∥u_{h,t}(s)∥^2 + m_0 ∫_0^s ∥u_{h,t}(τ)∥^2 dτ + (ν_0/2)∥∇u_h(s)∥^2 ≤ (1/2)∥u_{1h}∥^2 + (1/2)∥∇u_{0h}∥^2 (1 + ∫_0^∞ β(τ) dτ) + k(0)∥∇u_{0h}∥∥∇u_h(s)∥ + ∫_0^s β(τ)∥∇u_{0h}∥∥∇u_h(τ)∥ dτ + ∫_0^s ∥f(t)∥∥u_{h,t}(t)∥ dt.   (2.10)

The analysis of the right-hand side of (2.10) can be completed by observing that

k(0)∥∇u_{0h}∥∥∇u_h(s)∥ ≤ (ν_0/4)∥∇u_h(s)∥^2 + ((k(0))^2/ν_0)∥∇u_{0h}∥^2,

and

∫_0^s β(τ)∥∇u_{0h}∥∥∇u_h(τ)∥ dτ ≤ ∫_0^s β(τ) dτ ∥∇u_{0h}∥^2 + (1/4) ∫_0^s β(τ)∥∇u_h(τ)∥^2 dτ.

Furthermore, we have that

∫_0^s ∥f(t)∥∥u_{h,t}(t)∥ dt ≤ sup_{0≤t≤s} ∥u_{h,t}(t)∥ ∫_0^s ∥f(t)∥ dt ≤ (∫_0^s ∥f(t)∥ dt)^2 + (1/4) sup_{0≤t≤s} ∥u_{h,t}(t)∥^2.

Thus, inequality (2.10), along with the above three inequalities, gives

(1/2)∥u_{h,t}(s)∥^2 + m_0 ∫_0^s ∥u_{h,t}(τ)∥^2 dτ + (ν_0/4)∥∇u_h(s)∥^2 ≤ (1/2)∥u_{1h}∥^2 + (1/2)∥∇u_{0h}∥^2 (1 + 3∫_0^∞ β(τ) dτ + 2(k(0))^2/ν_0) + (1/4)∫_0^s β(τ)∥∇u_h(τ)∥^2 dτ + ∥f∥_1^2 + (1/4) sup_{0≤t≤s} ∥u_{h,t}(t)∥^2.   (2.11)

For a given s, letting t_s be such that ∥u_{h,t}(t_s)∥^2 = sup_{0≤t≤s} ∥u_{h,t}(t)∥^2, we conclude from (2.11) that

(1/4) sup_{0≤τ≤s} ∥u_{h,t}(τ)∥^2 ≤ (1/2)∥u_{1h}∥^2 + (1/2)∥∇u_{0h}∥^2 (1 + 3k(0) + 2(k(0))^2/ν_0) + (1/4)∫_0^s β(τ)∥∇u_h(τ)∥^2 dτ + ∥f∥_1^2.   (2.12)

Now (2.11) and (2.12) imply that

(1/2)∥u_{h,t}(s)∥^2 + m_0 ∫_0^s ∥u_{h,t}(τ)∥^2 dτ + (ν_0/4)∥∇u_h(s)∥^2 ≤ ∥u_{1h}∥^2 + ∥∇u_{0h}∥^2 (1 + 3k(0) + 2(k(0))^2/ν_0) + (1/2)∫_0^s β(τ)∥∇u_h(τ)∥^2 dτ + 2∥f∥_1^2.   (2.13)

Since ∫_0^∞ β(τ) dτ = k(0) < ∞, by Gronwall's lemma we obtain

(1/2)∥u_{h,t}(s)∥^2 + m_0 ∫_0^s ∥u_{h,t}(τ)∥^2 dτ + (ν_0/4)∥∇u_h(s)∥^2 ≤ e^{2k(0)/ν_0} [∥u_{1h}∥^2 + ∥∇u_{0h}∥^2 (1 + 3k(0) + 2(k(0))^2/ν_0) + 2∥f∥_1^2].

This completes the proof of Lemma 2.1. □
We shall now show error estimates for (2.1). For this purpose we introduce the Ritz projection ũ_h(t): H_0^1(Ω) → S_h defined by

(∇ũ_h(t), ∇χ) = (∇u(t), ∇χ), ∀χ ∈ S_h.   (2.14)

Let ρ(t) = u(t) − ũ_h(t). From a well-known error estimate for the elliptic problem (see [20] or [21]), we have at once

∥(d^r/dt^r)ρ(t)∥ + h∥∇(d^r/dt^r)ρ(t)∥ ≤ C h^j ∥(d^r/dt^r)u(t)∥_{H^j(Ω)}, j = 1, 2; r = 0, 1, 2.   (2.15)

To derive error estimates for the semidiscrete scheme (2.1), we split the error as

u_h(t) − u(t) = (u_h(t) − ũ_h(t)) − (u(t) − ũ_h(t)) = θ(t) − ρ(t).

Having the error estimates (2.15) for ρ, it remains to estimate θ(t). Subtracting (1.7) from (2.1), we derive an equation for θ(t):

(θ_tt(t), χ) + (M(∫_0^t ∥∇u_h(s)∥^2 ds) u_{h,t}(t) − M(∫_0^t ∥∇ũ_h(s)∥^2 ds) ũ_{h,t}(t), χ) + (∇θ(t), ∇χ) − ∫_0^t β(t − s)(∇θ(s), ∇χ) ds = (ρ_tt(t), χ) + (M(∫_0^t ∥∇u(s)∥^2 ds) u_t(t) − M(∫_0^t ∥∇ũ_h(s)∥^2 ds) ũ_{h,t}(t), χ), ∀χ ∈ S_h, t > 0.   (2.16)
Theorem 2.1. Assume that β(t) is of the type (1.4) and that assumptions (M1)–(M2) hold, and let u_{0h} and u_{1h} be chosen as u_{0h} = ũ_h(0) and u_{1h} = ũ_{h,t}(0), respectively. Then for the solutions of (2.1) and (1.7) we have, for 0 < s ≤ T,

(1/2)∥u_{h,t}(s) − u_t(s)∥^2 + m_0 ∫_0^s ∥u_{h,t}(τ) − u_t(τ)∥^2 dτ + (ν_0/4)∥∇(u_h(s) − u(s))∥^2 ≤ C h^2 [∫_0^s ∥u_tt(τ)∥^2_{H_0^1(Ω)} dτ + ∫_0^s ∥u_t(τ)∥^2_{H_0^1(Ω)} dτ + ∫_0^s ∥u(τ)∥^2_{H^2(Ω)} dτ + ∥u(s)∥^2_{H^2(Ω)∩H_0^1(Ω)} + ∥u_t(s)∥^2_{H_0^1(Ω)}],   (2.17)

where C is a positive constant depending on T, L, ∥u_0∥_{H^2(Ω)∩H_0^1(Ω)}, ∥u_1∥_{H_0^1(Ω)}, ∥f(0)∥, ∥f_t∥_{L^1(0,T;L^2(Ω))} and ∥f∥_{L^1(0,∞;L^2(Ω))}.
Proof. Since θ(t) ∈ S_h we may choose χ = θ_t(t) in (2.16) to obtain, for t > 0,

(1/2)(d/dt)∥θ_t(t)∥^2 + (M(∫_0^t ∥∇u_h(s)∥^2 ds) u_{h,t}(t) − M(∫_0^t ∥∇ũ_h(s)∥^2 ds) ũ_{h,t}(t), θ_t(t)) + (1/2)(d/dt)∥∇θ(t)∥^2 − ∫_0^t β(t − τ)(∇θ(τ), ∇θ_t(t)) dτ = (ρ_tt(t), θ_t(t)) + (M(∫_0^t ∥∇u(s)∥^2 ds) u_t(t) − M(∫_0^t ∥∇ũ_h(s)∥^2 ds) ũ_{h,t}(t), θ_t(t)).   (2.18)
To establish the estimate of the second term on the left-hand side of (2.18), we rewrite it as

(M(∫_0^t ∥∇u_h(s)∥^2 ds) u_{h,t}(t) − M(∫_0^t ∥∇ũ_h(s)∥^2 ds) ũ_{h,t}(t), θ_t(t))
= (∫_0^1 (d/dσ)[M(∫_0^t ∥σ∇θ(τ) + ∇ũ_h(τ)∥^2 dτ)(σθ_t(t) + ũ_{h,t}(t))] dσ, θ_t(t))
= (∫_0^1 M′(∫_0^t ∥σ∇θ(τ) + ∇ũ_h(τ)∥^2 dτ) (d/dσ)(∫_0^t ∥σ∇θ(τ) + ∇ũ_h(τ)∥^2 dτ)(σθ_t(t) + ũ_{h,t}(t)) dσ, θ_t(t)) + ∫_0^1 M(∫_0^t ∥σ∇θ(τ) + ∇ũ_h(τ)∥^2 dτ) dσ ∥θ_t(t)∥^2
= 2∫_0^1 M′(∫_0^t ∥σ∇θ(τ) + ∇ũ_h(τ)∥^2 dτ) ∫_0^t (σ∥∇θ(τ)∥^2 + (∇ũ_h(τ), ∇θ(τ))) dτ (σ∥θ_t(t)∥^2 + (ũ_{h,t}(t), θ_t(t))) dσ + ∫_0^1 M(∫_0^t ∥σ∇θ(τ) + ∇ũ_h(τ)∥^2 dτ) dσ ∥θ_t(t)∥^2.   (2.19)

For the second term on the right-hand side of (2.19), we obtain from (2.2), the fact that ∥∇ũ_h(t)∥ ≤ ∥∇u(t)∥, (1.8) and (1.5) that

∫_0^1 M(∫_0^t ∥σ∇θ(τ) + ∇ũ_h(τ)∥^2 dτ) dσ ∥θ_t(t)∥^2 ≥ m_0 ∥θ_t(t)∥^2.   (2.20)

Again, the first term on the right-hand side of (2.19) can be written as

2∫_0^1 M′(∫_0^t ∥σ∇θ(τ) + ∇ũ_h(τ)∥^2 dτ) ∫_0^t (σ∥∇θ(τ)∥^2 + (∇ũ_h(τ), ∇θ(τ))) dτ (σ∥θ_t(t)∥^2 + (ũ_{h,t}(t), θ_t(t))) dσ = 2∫_0^1 M′(∫_0^t ∥σ∇θ(τ) + ∇ũ_h(τ)∥^2 dτ) ∫_0^t σ∥∇θ(τ)∥^2 dτ σ∥θ_t(t)∥^2 dσ + J_1(t; θ, ũ_h) + J_2(t; θ, ũ_h) + J_3(t; θ, ũ_h),   (2.21)

where

J_1(t; θ, ũ_h) = 2∫_0^1 M′(∫_0^t ∥σ∇θ(τ) + ∇ũ_h(τ)∥^2 dτ) ∫_0^t σ∥∇θ(τ)∥^2 dτ dσ (ũ_{h,t}(t), θ_t(t)),   (2.22a)

J_2(t; θ, ũ_h) = 2∫_0^1 M′(∫_0^t ∥σ∇θ(τ) + ∇ũ_h(τ)∥^2 dτ) ∫_0^t (∇ũ_h(τ), ∇θ(τ)) dτ σ∥θ_t(t)∥^2 dσ,   (2.22b)

J_3(t; θ, ũ_h) = 2∫_0^1 M′(∫_0^t ∥σ∇θ(τ) + ∇ũ_h(τ)∥^2 dτ) ∫_0^t (∇ũ_h(τ), ∇θ(τ)) dτ dσ (ũ_{h,t}(t), θ_t(t)).   (2.22c)

From assumption (M2), we see that the first term on the right-hand side of (2.21) is nonnegative, i.e.

2∫_0^1 M′(∫_0^t ∥σ∇θ(τ) + ∇ũ_h(τ)∥^2 dτ) ∫_0^t σ∥∇θ(τ)∥^2 dτ σ∥θ_t(t)∥^2 dσ ≥ 0, for 0 ≤ t ≤ T.
We use (2.18)–(2.22) and the above inequality to get

(1/2)(d/dt)∥θ_t(t)∥^2 + m_0∥θ_t(t)∥^2 + (1/2)(d/dt)∥∇θ(t)∥^2 − ∫_0^t β(t − τ)(∇θ(τ), ∇θ_t(t)) dτ ≤ |(ρ_tt(t), θ_t(t))| + Σ_{k=1}^4 |J_k(t; θ, ũ_h)|,   (2.23)

where J_4(t; θ, ũ_h) denotes the second term on the right-hand side of (2.18).

We now bound each of the terms on the right-hand side of (2.23) separately. First, using the Cauchy–Schwarz and Young inequalities, we have

|(ρ_tt(t), θ_t(t))| ≤ (2/m_0)∥ρ_tt(t)∥^2 + (m_0/8)∥θ_t(t)∥^2.   (2.24)

From the Cauchy–Schwarz inequality and assumption (M2) we find the estimate

|J_1(t; θ, ũ_h)| ≤ L ∫_0^t ∥∇θ(τ)∥^2 dτ ∥θ_t(t)∥∥ũ_{h,t}(t)∥.   (2.25)

By (1.8) it is clear that

∥u_t(t)∥ ≤ C(∥u_1∥ + ∥u_0∥_{H_0^1(Ω)} + ∥f∥_1),

and from [7, Proposition 4.4] and (1.8) we find that

∥∇u_t(t)∥ ≤ C(T)(∥u_1∥_{H_0^1(Ω)} + ∥u_0∥_{H^2(Ω)} + ∥f(0)∥ + ∥f_t∥_{L^1(0,T;L^2(Ω))} + ∥f∥_1).

Combining the above two estimates and (2.15) we have, for h appropriately small and 0 < t ≤ T,

∥ũ_{h,t}(t)∥ = ∥ũ_{h,t}(t) − u_t(t) + u_t(t)∥ ≤ ∥u_t(t)∥ + Ch∥∇u_t(t)∥ ≤ C(T)(∥u_1∥_{H_0^1(Ω)} + ∥u_0∥_{H^2(Ω)} + ∥f(0)∥ + ∥f_t∥_{L^1(0,T;L^2(Ω))} + ∥f∥_1).   (2.26)

From (2.2), the fact that ∥∇ũ_h(t)∥ ≤ ∥∇u(t)∥, and the Poincaré inequality

λ_1∥v∥^2 ≤ ∥∇v∥^2, for v ∈ H_0^1(Ω),   (2.27)

where λ_1 is the minimum eigenvalue of the Laplace operator with homogeneous Dirichlet boundary condition, we obtain

∥u_{h,t}(t)∥ ≤ C(∥u_{1h}∥ + ∥∇u_{0h}∥ + ∥f∥_1) = C(∥ũ_{h,t}(0)∥ + ∥∇ũ_h(0)∥ + ∥f∥_1) ≤ C(∥∇ũ_{h,t}(0)∥ + ∥∇u_0∥ + ∥f∥_1) ≤ C(∥∇u_1∥ + ∥∇u_0∥ + ∥f∥_1).

From (2.26) and the estimate just derived we obtain

∥θ_t(t)∥ = ∥u_{h,t}(t) − ũ_{h,t}(t)∥ ≤ ∥u_{h,t}(t)∥ + ∥ũ_{h,t}(t)∥ ≤ C(∥u_1∥_{H_0^1(Ω)} + ∥u_0∥_{H^2(Ω)} + ∥f(0)∥ + ∥f_t∥_{L^1(0,T;L^2(Ω))} + ∥f∥_1).   (2.28)

Combining (2.25), (2.26) and (2.28) we have now established that

|J_1(t; θ, ũ_h)| ≤ C ∫_0^t ∥∇θ(τ)∥^2 dτ,   (2.29)
where C is a constant depending on L, ∥u_0∥_{H^2(Ω)∩H_0^1(Ω)}, ∥u_1∥_{H_0^1(Ω)}, ∥f(0)∥, ∥f_t∥_{L^1(0,T;L^2(Ω))} and ∥f∥_{L^1(0,∞;L^2(Ω))}, and independent of h.

We know from (2.22b), the Cauchy–Schwarz inequality and assumption (M2) that

|J_2(t; θ, ũ_h)| ≤ L∥θ_t(t)∥^2 ∫_0^t ∥∇θ(τ)∥∥∇ũ_h(τ)∥ dτ,   (2.30)

and using (2.30), (2.28), Young's inequality and the fact that

∥∇ũ_h(t)∥ ≤ ∥∇u(t)∥ ≤ C(∥u_1∥_{L^2(Ω)} + ∥∇u_0∥_{L^2(Ω)} + ∥f∥_{L^1(0,∞;L^2(Ω))}),   (2.31)

we observe that, for 0 ≤ t ≤ T,

|J_2(t; θ, ũ_h)| ≤ C∥θ_t(t)∥ ∫_0^t ∥∇θ(τ)∥∥∇ũ_h(τ)∥ dτ ≤ (m_0/8)∥θ_t(t)∥^2 + C ∫_0^t ∥∇θ(τ)∥^2 dτ,   (2.32)

where C is a constant depending on T, L, ∥u_0∥_{H^2(Ω)∩H_0^1(Ω)}, ∥u_1∥_{H_0^1(Ω)} and ∥f∥_{L^1(0,∞;L^2(Ω))}.

To obtain a similar estimate for J_3(t; θ, ũ_h) we use the Cauchy–Schwarz inequality, assumption (M2), (2.26) and (2.31) to get

|J_3(t; θ, ũ_h)| ≤ 2L∥θ_t(t)∥∥ũ_{h,t}(t)∥ ∫_0^t ∥∇θ(τ)∥∥∇ũ_h(τ)∥ dτ ≤ C∥θ_t(t)∥ ∫_0^t ∥∇θ(τ)∥∥∇ũ_h(τ)∥ dτ ≤ (m_0/8)∥θ_t(t)∥^2 + C ∫_0^t ∥∇θ(τ)∥^2 dτ.   (2.33)
Returning to (2.18), we estimate the second term on its right-hand side using the Cauchy–Schwarz inequality, Young's inequality, (1.8) and assumptions (M1)–(M2) as follows:

J_4(t; θ, ũ_h) = (M(∫_0^t ∥∇u(s)∥^2 ds) u_t(t) − M(∫_0^t ∥∇ũ_h(s)∥^2 ds) ũ_{h,t}(t), θ_t(t))
= M(∫_0^t ∥∇u(s)∥^2 ds)(u_t(t) − ũ_{h,t}(t), θ_t(t)) + ((M(∫_0^t ∥∇u(s)∥^2 ds) − M(∫_0^t ∥∇ũ_h(s)∥^2 ds)) ũ_{h,t}(t), θ_t(t))
≤ m_1∥ρ_t(t)∥∥θ_t(t)∥ + L |∫_0^t (∥∇u(τ)∥^2 − ∥∇ũ_h(τ)∥^2) dτ| |(ũ_{h,t}(t), θ_t(t))|
≤ (m_1∥ρ_t(t)∥ + √2 L (∫_0^t (∥∇u(s)∥^2 + ∥∇ũ_h(s)∥^2) ds)^{1/2} (∫_0^t ∥∇ρ(s)∥^2 ds)^{1/2} ∥ũ_{h,t}(t)∥) ∥θ_t(t)∥
≤ (m_0/8)∥θ_t(t)∥^2 + (2m_1^2/m_0)∥ρ_t(t)∥^2 + (4L^2/m_0)∥ũ_{h,t}(t)∥^2 ∫_0^t (∥∇u(τ)∥^2 + ∥∇ũ_h(τ)∥^2) dτ ∫_0^t ∥∇ρ(τ)∥^2 dτ,   (2.34)

where m_1 depends on T, ∥u_0∥_{H_0^1(Ω)}, ∥u_1∥_{L^2(Ω)} and ∥f∥_{L^1(0,∞;L^2(Ω))}.
From (2.34), (2.26) and (2.31) we obtain

J_4(t; θ, ũ_h) ≤ (m_0/8)∥θ_t(t)∥^2 + (2m_1^2/m_0)∥ρ_t(t)∥^2 + C ∫_0^t ∥∇ρ(τ)∥^2 dτ,   (2.35)

where C is a constant depending on T, L, ∥u_0∥_{H^2(Ω)∩H_0^1(Ω)}, ∥u_1∥_{H_0^1(Ω)}, ∥f(0)∥, ∥f_t∥_{L^1(0,T;L^2(Ω))} and ∥f∥_{L^1(0,∞;L^2(Ω))}. By (2.23), (2.24), (2.29) and (2.32)–(2.35) it is clear that

(1/2)(d/dt)∥θ_t(t)∥^2 + (m_0/2)∥θ_t(t)∥^2 + (1/2)(d/dt)∥∇θ(t)∥^2 − ∫_0^t β(t − τ)(∇θ(τ), ∇θ_t(t)) dτ ≤ (2/m_0)∥ρ_tt(t)∥^2 + (2m_1^2/m_0)∥ρ_t(t)∥^2 + C ∫_0^t ∥∇ρ(τ)∥^2 dτ + C ∫_0^t ∥∇θ(τ)∥^2 dτ.   (2.36)
Integrating the above inequality from 0 to s, 0 < s ≤ T, and using the assumptions of Theorem 2.1, we have

(1/2)∥θ_t(s)∥^2 + (m_0/2) ∫_0^s ∥θ_t(τ)∥^2 dτ + (1/2)∥∇θ(s)∥^2 + ∫_0^s ∫_0^τ k_t(τ − r)(∇θ(r), ∇θ_t(τ)) dr dτ ≤ (1/2)∥θ_t(0)∥^2 + (1/2)∥∇θ(0)∥^2 + (2/m_0) ∫_0^s ∥ρ_tt(t)∥^2 dt + (2m_1^2/m_0) ∫_0^s ∥ρ_t(t)∥^2 dt + C(T)(∫_0^s ∥∇ρ(τ)∥^2 dτ + ∫_0^s ∥∇θ(τ)∥^2 dτ).   (2.37)
Similarly to the derivation of (2.8), we can show that

∫_0^s ∫_0^τ k_t(τ − r)(∇θ(r), ∇θ_t(τ)) dr dτ ≥ k(s)(∇θ(0), ∇θ(s)) + ∫_0^s β(τ)(∇θ(0), ∇θ(τ)) dτ − (k(0)/2)∥∇θ(s)∥^2 − (k(0)/2)∥∇θ(0)∥^2.   (2.38)
Combining (2.37) and (2.38), along with θ(0) = 0, θ_t(0) = 0, Gronwall's lemma and (2.15), shows that

(1/2)∥θ_t(s)∥^2 + (m_0/2) ∫_0^s ∥θ_t(τ)∥^2 dτ + (ν_0/2)∥∇θ(s)∥^2
≤ (2/m_0) ∫_0^s ∥ρ_tt(τ)∥^2 dτ + (2m_1^2/m_0) ∫_0^s ∥ρ_t(τ)∥^2 dτ + C ∫_0^s ∥∇ρ(τ)∥^2 dτ
≤ C h^2 ∫_0^s ∥u(τ)∥^2_{H^2(Ω)∩H_0^1(Ω)} dτ + (2/m_0) h^2 ∫_0^s ∥u_tt(τ)∥^2_{H_0^1(Ω)} dτ + (2m_1^2/m_0) h^2 ∫_0^s ∥u_t(τ)∥^2_{H_0^1(Ω)} dτ,   (2.39)

for 0 < s ≤ T. An application of (2.15) now completes the proof. □
3. Fully discrete methods

In this section we consider the discretization in time of the spatially semidiscrete problem (2.1) studied above. Let N be a positive integer, ∆t the time step, t_n = n∆t, and U^n ∈ S_h the approximation of u_h(t_n). We denote the backward difference quotient in time by ∂t U^n = (U^n − U^{n−1})/∆t. We use the quadrature formula

q_n(φ) = Σ_{j=1}^n ∫_{t_{j−1}}^{t_j} β(t_n − r) φ(t_j) dr = Σ_{j=1}^n β_{n−j} φ(t_j) ≈ ∫_0^{t_n} β(t_n − r) φ(r) dr,   (3.1)

where

β_j = ∫_{t_j}^{t_{j+1}} β(r) dr.   (3.2)
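The rule (3.1) integrates the kernel exactly on each subinterval while freezing φ at the right endpoint, so for smooth φ it is first-order accurate in ∆t. The following check is our own illustration (hypothetical helper names; all kernel integrals are evaluated after the singularity-removing substitution s = v^{1/(1−α)}):

```python
import math

A, ALPHA = 2.0, 0.5
G = math.gamma(1.0 - ALPHA)

def smooth_int(f, lo, hi, n=3000):
    """Composite midpoint rule for a smooth integrand."""
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

def beta_int(lo, hi, phi=lambda s: 1.0):
    """∫_lo^hi β(s) φ(s) ds with β(s) = e^{-A s} s^{-α}/Γ(1-α),
    computed after the substitution s = v^{1/(1-α)}."""
    p = 1.0 / (1.0 - ALPHA)
    g = lambda v: math.exp(-A * v ** p) * phi(v ** p)
    return smooth_int(g, lo ** (1.0 - ALPHA), hi ** (1.0 - ALPHA)) / ((1.0 - ALPHA) * G)

def quad_error(nsteps, tn=1.0, phi=lambda r: math.cos(r)):
    """Error of q_n(φ) = Σ β_{n-j} φ(t_j) against ∫_0^{t_n} β(t_n - r) φ(r) dr."""
    dt = tn / nsteps
    qn = sum(beta_int((nsteps - j) * dt, (nsteps - j + 1) * dt) * phi(j * dt)
             for j in range(1, nsteps + 1))
    exact = beta_int(0.0, tn, phi=lambda s: phi(tn - s))
    return abs(qn - exact)

e_coarse, e_fine = quad_error(8), quad_error(16)
print(e_coarse, e_fine)   # the error shrinks when ∆t is halved
```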
Thus we define the fully discrete linearized scheme, based on the backward Euler method, in the following form:

(∂t^2 U^n, χ) + M(∆t Σ_{j=0}^{n−1} ∥∇U^j∥^2)(∂t U^n, χ) + (∇U^n, ∇χ) − Σ_{j=1}^n β_{n−j}(∇U^j, ∇χ) = (f^n, χ), ∀χ ∈ S_h, n ≥ 2,   (3.3)

with U^0 = u_{0h} and U^1 = u_{0h} + ∆t u_{1h} given. The discrete problem (3.3) yields a system of linear algebraic equations at each t = t_n; using an a priori bound for U^n, it is easy to check that this system has a unique solution at each time level. Below we show an a priori estimate for the fully discrete solution. We put φ^n = φ(t_n) and consider the following discrete norms in our subsequent analysis:

∥φ∥^2_{l^2(0,t_N;L^2(Ω))} = ∆t Σ_{n=1}^N ∥φ^n∥^2_{L^2(Ω)} and ∥φ∥^2_{l^∞(0,t_N;L^2(Ω))} = max_{1≤n≤N} ∥φ^n∥^2_{L^2(Ω)}.
Lemma 3.1. Assume that f ∈ l^2(0, t_N; L^2(Ω)) and u_{0h}, u_{1h} ∈ S_h. Then the solution of (3.3) satisfies, for N ≥ 2,

(1/2)∥∂t U^N∥^2 + (ν_0/4)∥∇U^N∥^2 + (m_0/2)∆t Σ_{n=2}^N ∥∂t U^n∥^2 ≤ C(∥u_{1h}∥^2_{L^2(Ω)} + ∆t∥∇u_{1h}∥^2_{L^2(Ω)} + ∥∇u_{0h}∥^2_{L^2(Ω)} + ∆t Σ_{n=2}^N ∥f^n∥^2_{L^2(Ω)}),   (3.4)

where C is a positive constant depending on ν_0 and m_0.

Proof. Setting χ = ∂t U^n in (3.3) and noting that k_t(t) = −β(t), we have

(∂t^2 U^n, ∂t U^n) + M(∆t Σ_{j=0}^{n−1} ∥∇U^j∥^2)(∂t U^n, ∂t U^n) + (∇U^n, ∇∂t U^n) + Σ_{j=1}^n k̇_{n−j}(∇U^j, ∇∂t U^n) = (f^n, ∂t U^n), n ≥ 2.   (3.5)
Now, note that

(∂t^2 U^n, ∂t U^n) = (1/2)∂t∥∂t U^n∥^2 + (∆t/2)∥∂t^2 U^n∥^2,
(∇U^n, ∇∂t U^n) = (1/2)∂t∥∇U^n∥^2 + (∆t/2)∥∂t∇U^n∥^2,

and use the representation

k̇_{n−j} = ∫_{t_{n−j}}^{t_{n−j+1}} k̇(τ) dτ = k(t_{n−j+1}) − k(t_{n−j}), 1 ≤ j ≤ n, n ≥ 2,

to obtain

(1/2)∂t∥∂t U^n∥^2 + (∆t/2)∥∂t^2 U^n∥^2 + M(∆t Σ_{j=0}^{n−1} ∥∇U^j∥^2)∥∂t U^n∥^2 + (1/2)∂t∥∇U^n∥^2 + (∆t/2)∥∂t∇U^n∥^2 + Σ_{j=1}^n (k(t_{n−j+1}) − k(t_{n−j}))(∇U^j, ∇∂t U^n) = (f^n, ∂t U^n), n ≥ 2.   (3.6)

The sum can be computed by summation by parts:

Σ_{j=1}^n (k(t_{n−j+1}) − k(t_{n−j}))∇U^j = Σ_{j=1}^n k(t_{n−j+1})∇(U^j − U^{j−1}) + k(t_n)∇U^0 − k(t_0)∇U^n.

Using the above identity and (3.6) we obtain

(1/2)∂t∥∂t U^n∥^2 + (∆t/2)∥∂t^2 U^n∥^2 + M(∆t Σ_{j=0}^{n−1} ∥∇U^j∥^2)∥∂t U^n∥^2 + ν_0[(1/2)∂t∥∇U^n∥^2 + (∆t/2)∥∂t∇U^n∥^2] + k(t_n)(∇U^0, ∇∂t U^n) + ∆t Σ_{j=1}^n k(t_{n−j+1})(∇∂t U^j, ∇∂t U^n) = (f^n, ∂t U^n), n ≥ 2.   (3.7)
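The summation-by-parts identity used above is purely algebraic, so it can be verified numerically on scalar stand-ins for ∇U^j (our own sanity check; k(t) = e^{−t} is a stand-in decreasing kernel):

```python
import math, random

random.seed(0)
dt, n = 0.1, 12
k = lambda t: math.exp(-t)                       # stand-in kernel
t = lambda j: j * dt
U = [random.uniform(-1, 1) for _ in range(n + 1)]  # scalar stand-ins for ∇U^j

lhs = sum((k(t(n - j + 1)) - k(t(n - j))) * U[j] for j in range(1, n + 1))
rhs = (sum(k(t(n - j + 1)) * (U[j] - U[j - 1]) for j in range(1, n + 1))
       + k(t(n)) * U[0] - k(t(0)) * U[n])
print(abs(lhs - rhs))  # agrees to round-off
```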
After summation of (3.7) from 2 to N and using assumption (M1) we have

(1/2) Σ_{n=2}^N ∂t∥∂t U^n∥^2 + m_0 Σ_{n=2}^N ∥∂t U^n∥^2 + (ν_0/2) Σ_{n=2}^N ∂t∥∇U^n∥^2 + Σ_{n=2}^N k(t_n)(∇U^0, ∇∂t U^n) + ∆t Σ_{n=1}^N Σ_{j=1}^n k(t_{n−j+1})(∇∂t U^j, ∇∂t U^n) ≤ Σ_{n=2}^N (f^n, ∂t U^n) + ∆t k(t_1)(∇∂t U^1, ∇∂t U^1).   (3.8)

The sequence {k(t_{j+1})}_{j=0}^∞ is convex, as a result of the convexity of k(t):

k(t_{j+3}) − 2k(t_{j+2}) + k(t_{j+1}) = (∆t)^2 k″(ζ) > 0, ζ ∈ [t_{j+1}, t_{j+3}].

We use Lemma 4.3 in [22] to get

∆t Σ_{n=1}^N Σ_{j=1}^n k(t_{n−j+1})(∇∂t U^j, ∇∂t U^n) ≥ 0, N ≥ 1.   (3.9)
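The nonnegativity (3.9) reflects the positive semidefiniteness of lower-triangular convolution forms generated by convex, decreasing (here completely monotonic) kernels. A numerical spot check (our own illustration, again with the stand-in kernel k(t) = e^{−t}):

```python
import math, random

random.seed(1)
dt, N = 0.05, 40
k = lambda t: math.exp(-t)   # stand-in completely monotonic kernel

def form(x):
    """Q(x) = Δt Σ_{n=1}^N Σ_{j=1}^n k(t_{n-j+1}) x_j x_n, cf. (3.9)."""
    return dt * sum(k((n - j + 1) * dt) * x[j] * x[n]
                    for n in range(1, N + 1) for j in range(1, n + 1))

vals = [form([random.uniform(-1, 1) for _ in range(N + 1)]) for _ in range(50)]
print(min(vals))  # nonnegative for every sample, as (3.9) asserts
```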
Another summation by parts yields

Σ_{n=2}^N k(t_n)∇((U^n − U^{n−1})/∆t) = (k(t_N)/∆t)∇U^N − (k(t_2)/∆t)∇U^1 + Σ_{n=2}^{N−1} ((k(t_n) − k(t_{n+1}))/∆t)∇U^n, N ≥ 2.   (3.10)

Thus, from (3.8), (3.9), (3.10) and the Cauchy–Schwarz inequality, we get

(1/2)∥∂t U^N∥^2 + m_0 ∆t Σ_{n=2}^N ∥∂t U^n∥^2 + (ν_0/2)∥∇U^N∥^2 ≤ (1/2)∥∂t U^1∥^2 + (ν_0/2)∥∇U^1∥^2 + k(t_N)∥∇U^0∥∥∇U^N∥ + k(t_2)∥∇U^0∥∥∇U^1∥ + Σ_{n=2}^{N−1} (k(t_n) − k(t_{n+1}))∥∇U^0∥∥∇U^n∥ + ∆t Σ_{n=2}^N ∥f^n∥∥∂t U^n∥ + ∆t k(t_1)∥∇∂t U^1∥^2, N ≥ 2.   (3.11)

Using Young's inequality and the assumptions on U^0 and U^1, this yields

(1/2)∥∂t U^N∥^2 + (m_0/2)∆t Σ_{n=2}^N ∥∂t U^n∥^2 + (ν_0/4)∥∇U^N∥^2 ≤ (1/2)∥u_{1h}∥^2 + (2 + ν_0)(∆t)^2∥∇u_{1h}∥^2 + (2 + ν_0 + 1/ν_0)∥∇u_{0h}∥^2 + (1/(2m_0))∆t Σ_{n=2}^N ∥f^n∥^2 + (1/2) Σ_{j=2}^{N−1} ∫_{t_j}^{t_{j+1}} β(r) dr ∥∇U^j∥^2, N ≥ 2.   (3.12)

By the discrete Gronwall lemma (see [23, p. 1055, Lemma]) we conclude that

(1/2)∥∂t U^N∥^2 + (m_0/2)∆t Σ_{n=2}^N ∥∂t U^n∥^2 + (ν_0/4)∥∇U^N∥^2 ≤ e^{2/ν_0} [(1/2)∥u_{1h}∥^2 + (2 + ν_0)(∆t)^2∥∇u_{1h}∥^2 + (2 + ν_0 + 1/ν_0)∥∇u_{0h}∥^2 + (1/(2m_0))∆t Σ_{n=2}^N ∥f^n∥^2], N ≥ 2.   (3.13)
The proof of Lemma 3.1 is now complete. □

Our next result yields error estimates for the fully discrete problem (3.3). Using the elliptic projection ũ_h(t): H_0^1(Ω) → S_h defined in (2.14), we set

U^n − u(t_n) = (U^n − ũ_h(t_n)) − (u(t_n) − ũ_h(t_n)) = θ^n − ρ(t_n), for n ≥ 2.

As the estimate of ∥ρ(t_n)∥ is known from (2.15) at t = t_n, it is enough to estimate ∥θ^n∥. At t = t_n, we subtract (1.7) from (3.3) to obtain an equation for θ^n:

(∂t^2 θ^n, χ) + (M(∆t Σ_{j=0}^{n−1} ∥∇U^j∥^2)∂t U^n − M(∆t Σ_{j=0}^{n−1} ∥∇ũ_h(t_j)∥^2)∂t ũ_h(t_n), χ) + (∇θ^n, ∇χ) + Σ_{j=1}^n k̇_{n−j}(∇θ^j, ∇χ)
= (∂t^2 ρ^n, χ) + (u_tt(t_n) − ∂t^2 u(t_n), χ)
+ ((M(∆t Σ_{j=0}^{n−1} ∥∇u^j∥^2)∂t u^n − M(∆t Σ_{j=0}^{n−1} ∥∇ũ_h^j∥^2)∂t ũ_h^n), χ)
+ ((M(∫_0^{t_n} ∥∇u(s)∥^2 ds)∂t u^n − M(∆t Σ_{j=0}^{n−1} ∥∇u^j∥^2)∂t u^n), χ)
+ M(∫_0^{t_n} ∥∇u(s)∥^2 ds)(u_t^n − ∂t u^n, χ)
+ (∫_0^{t_n} k̇(t_n − s)∇u(s) ds − Σ_{j=1}^n k̇_{n−j}∇u^j, ∇χ), ∀χ ∈ S_h, n ≥ 2.   (3.14)
Theorem 3.1. Let U^n satisfy (3.3), and let u_{0h} = ũ_h(0) and u_{1h} = ũ_{h,t}(0), respectively. Then there holds, for N ≥ 2,

∥∂t(U^N − u(t_N))∥^2 + ν_0∥∇(U^N − u(t_N))∥^2 + (m_0∆t/2) Σ_{n=2}^N ∥∂t(U^n − u(t_n))∥^2
≤ C(∆t)^2 {(max_{0≤r≤∆t} ∥∇u_tt(r)∥)^2 + ∫_0^{t_1} ∥∇u_t(r)∥^2 dr + (∫_0^{t_N} ∥∇u_tt(r)∥ dr)^2 + ∫_0^{t_N} ∥u_tt(r)∥^2 dr + (max_{0≤r≤T} ∥△u_t(r)∥)^2 + ∫_0^{t_N} ∥u_ttt(r)∥^2 dr}
+ C h^2 {∆t Σ_{j=0}^{N−1} ∥u(t_j)∥^2_{H^2(Ω)∩H_0^1(Ω)} + ∫_0^{t_N} ∥u_t(r)∥^2_{H_0^1(Ω)} dr + (max_{0≤r≤T} ∥u_tt(r)∥_{H_0^1(Ω)})^2 + (max_{0≤r≤T} ∥u_t(r)∥_{H_0^1(Ω)})^2 + ∥u(t_N)∥^2_{H^2(Ω)∩H_0^1(Ω)}},   (3.15)

where C is a positive constant depending on T, L, ∥u_0∥_{H^2(Ω)∩H_0^1(Ω)}, ∥u_1∥_{H_0^1(Ω)}, ∥f(0)∥, ∥f_t∥_{L^1(0,T;L^2(Ω))} and ∥f∥_1.
Proof. Set χ = ∂t θ^n in (3.14). To estimate the second term on the left-hand side of the resulting equation, we rewrite it as

(M(∆t Σ_{j=0}^{n−1} ∥∇U^j∥^2)∂t U^n − M(∆t Σ_{j=0}^{n−1} ∥∇ũ_h(t_j)∥^2)∂t ũ_h(t_n), ∂t θ^n)
= (∫_0^1 (d/dσ)[M(∆t Σ_{j=0}^{n−1} ∥σ∇θ^j + ∇ũ_h(t_j)∥^2)(σ∂t θ^n + ∂t ũ_h^n)] dσ, ∂t θ^n)
= 2∆t ∫_0^1 M′(∆t Σ_{j=0}^{n−1} ∥σ∇θ^j + ∇ũ_h(t_j)∥^2) Σ_{j=0}^{n−1} (σ∥∇θ^j∥^2 + (∇ũ_h(t_j), ∇θ^j)) (σ∥∂t θ^n∥^2 + (∂t ũ_h^n, ∂t θ^n)) dσ + ∫_0^1 M(∆t Σ_{j=0}^{n−1} ∥σ∇θ^j + ∇ũ_h(t_j)∥^2) ∥∂t θ^n∥^2 dσ
= 2∆t ∫_0^1 M′(∆t Σ_{j=0}^{n−1} ∥σ∇θ^j + ∇ũ_h(t_j)∥^2) Σ_{j=0}^{n−1} σ∥∇θ^j∥^2 σ∥∂t θ^n∥^2 dσ + ∫_0^1 M(∆t Σ_{j=0}^{n−1} ∥σ∇θ^j + ∇ũ_h(t_j)∥^2) ∥∂t θ^n∥^2 dσ + J_1(n, ∆t, θ, ũ_h) + J_2(n, ∆t, θ, ũ_h) + J_3(n, ∆t, θ, ũ_h),   (3.16)

where

J_1(n, ∆t, θ, ũ_h) = 2∆t ∫_0^1 M′(∆t Σ_{j=0}^{n−1} ∥σ∇θ^j + ∇ũ_h(t_j)∥^2) Σ_{j=0}^{n−1} σ∥∇θ^j∥^2 (∂t ũ_h^n, ∂t θ^n) dσ,

J_2(n, ∆t, θ, ũ_h) = 2∆t ∫_0^1 M′(∆t Σ_{j=0}^{n−1} ∥σ∇θ^j + ∇ũ_h(t_j)∥^2) Σ_{j=0}^{n−1} (∇ũ_h^j, ∇θ^j) σ∥∂t θ^n∥^2 dσ,

J_3(n, ∆t, θ, ũ_h) = 2∆t ∫_0^1 M′(∆t Σ_{j=0}^{n−1} ∥σ∇θ^j + ∇ũ_h(t_j)∥^2) Σ_{j=0}^{n−1} (∇ũ_h^j, ∇θ^j) dσ (∂t ũ_h^n, ∂t θ^n).
By the Cauchy–Schwarz inequality and assumption (M2) we obtain, for $n\ge 2$,
$$
J_1(n,\Delta t,\theta,\tilde u_h)\le 2L\Delta t\sum_{j=0}^{n-1}\|\nabla\theta^j\|^2\,\|\partial_t\tilde u_h^n\|\,\|\partial_t\theta^n\|.
\tag{3.17}
$$
Since we take the initial data $u_{0h}=\tilde u_h(0)$ and $u_{1h}=\tilde u_{h,t}(0)$, respectively, from (3.4), (2.27) and the fact that
$$
\|\nabla\tilde u_h(t)\|\le\|\nabla u(t)\|,
\tag{3.18}
$$
we have that
$$
\|\partial_t U^n\|\le C,
\tag{3.19}
$$
where $C$ is a positive constant depending on $\|u_0\|_{H^1_0(\Omega)}$, $\|u_1\|_{H^1_0(\Omega)}$, and $\|f\|_{l^2(0,t_n;L^2(\Omega))}$, respectively. Using (2.26) we find that
$$
\|\partial_t\tilde u_h^n\|=\Bigl\|\frac{1}{\Delta t}\int_{t_{n-1}}^{t_n}\tilde u_{h,t}(r)\,dr\Bigr\|\le\frac{C}{\Delta t}\int_{t_{n-1}}^{t_n}\|\nabla u_t(r)\|\,dr\le C,
\tag{3.20}
$$
where $C$ is a positive constant depending on $\|u_0\|_{H^2(\Omega)\cap H^1_0(\Omega)}$, $\|u_1\|_{H^1_0(\Omega)}$, $\|f(0)\|$, $\|f_t\|_{L^1(0,T;L^2(\Omega))}$, and $\|f\|_1$, respectively. Combining (3.19) and (3.20) yields
$$
\|\partial_t\theta^n\|\le C,
\tag{3.21}
$$
with $C$ depending on the same quantities as above. From (3.17), (3.20) and (3.21) we obtain, for $n\ge2$,
$$
J_1(n,\Delta t,\theta,\tilde u_h)\le C\Delta t\sum_{j=0}^{n-1}\|\nabla\theta^j\|^2,
\tag{3.22}
$$
where $C$ additionally depends on $L$. Similarly,
$$
J_2(n,\Delta t,\theta,\tilde u_h)\le 2L\,\|\partial_t\theta^n\|^2\,\Delta t\sum_{j=0}^{n-1}\|\nabla\tilde u_h^j\|\,\|\nabla\theta^j\|,
\tag{3.23}
$$
and by (3.21), (3.23) and (2.30) it follows that
$$
J_2(n,\Delta t,\theta,\tilde u_h)\le C\|\partial_t\theta^n\|\,\Delta t\sum_{j=0}^{n-1}\|\nabla\tilde u_h^j\|\,\|\nabla\theta^j\|
\le\frac{m_0}{16}\|\partial_t\theta^n\|^2+C\Delta t\sum_{j=0}^{n-1}\|\nabla\theta^j\|^2,
\tag{3.24}
$$
where $C$ is a positive constant depending on $T$, $L$, $\|u_0\|_{H^2(\Omega)\cap H^1_0(\Omega)}$, $\|u_1\|_{H^1_0(\Omega)}$, $\|f(0)\|$, $\|f_t\|_{L^1(0,T;L^2(\Omega))}$, and $\|f\|_1$, respectively. From (3.16) it is clear that, for $n\ge2$,
$$
J_3(n,\Delta t,\theta,\tilde u_h)\le 2L\|\partial_t\theta^n\|\,\|\partial_t\tilde u_h^n\|\,\Delta t\sum_{j=0}^{n-1}\|\nabla\tilde u_h^j\|\,\|\nabla\theta^j\|
\le C\|\partial_t\theta^n\|\,\Delta t\sum_{j=0}^{n-1}\|\nabla\theta^j\|
\le\frac{m_0}{16}\|\partial_t\theta^n\|^2+C\Delta t\sum_{j=0}^{n-1}\|\nabla\theta^j\|^2,
\tag{3.25}
$$
with $C$ depending on the same quantities as above. For the third term on the right-hand side of (3.14), use the Cauchy–Schwarz inequality, (1.8) and assumptions (M1)–(M2) to obtain
$$
|R_3(n,\Delta t,\theta,\tilde u_h)|
=\Bigl|\Bigl(M\Bigl(\Delta t\sum_{j=0}^{n-1}\|\nabla u^j\|^2\Bigr)\partial_t u^n-M\Bigl(\Delta t\sum_{j=0}^{n-1}\|\nabla\tilde u_h^j\|^2\Bigr)\partial_t\tilde u_h^n,\ \partial_t\theta^n\Bigr)\Bigr|
\tag{3.26}
$$
$$
=\Bigl|\Bigl(\Bigl(M\Bigl(\Delta t\sum_{j=0}^{n-1}\|\nabla u^j\|^2\Bigr)-M\Bigl(\Delta t\sum_{j=0}^{n-1}\|\nabla\tilde u_h^j\|^2\Bigr)\Bigr)\partial_t\tilde u_h^n,\ \partial_t\theta^n\Bigr)
+\Bigl(M\Bigl(\Delta t\sum_{j=0}^{n-1}\|\nabla u^j\|^2\Bigr)\bigl(\partial_t u^n-\partial_t\tilde u_h^n\bigr),\ \partial_t\theta^n\Bigr)\Bigr|
$$
$$
\le L\Delta t\sum_{j=0}^{n-1}\bigl|\|\nabla u^j\|^2-\|\nabla\tilde u_h^j\|^2\bigr|\,\|\partial_t\tilde u_h^n\|\,\|\partial_t\theta^n\|+m_1\|\partial_t\rho^n\|\,\|\partial_t\theta^n\|
$$
$$
\le L\Delta t\sum_{j=0}^{n-1}\bigl(\|\nabla u^j\|+\|\nabla\tilde u_h^j\|\bigr)\|\nabla\rho^j\|\,\|\partial_t\tilde u_h^n\|\,\|\partial_t\theta^n\|
+m_1\Bigl\|\frac{1}{\Delta t}\int_{t_{n-1}}^{t_n}\rho_t(r)\,dr\Bigr\|\,\|\partial_t\theta^n\|,
$$
where $m_1$ depends on $T$, $\|u_0\|_{H^1_0(\Omega)}$, $\|u_1\|_{L^2(\Omega)}$, and $\|f\|_{L^1(0,\infty;L^2(\Omega))}$. Then (3.20), (3.18) and (1.8), together with (3.26), show that
$$
|R_3(n,\Delta t,\theta,\tilde u_h)|\le C\Delta t\sum_{j=0}^{n-1}\|\nabla\rho^j\|\,\|\partial_t\theta^n\|+m_1\frac{1}{\Delta t}\int_{t_{n-1}}^{t_n}\|\rho_t(r)\|\,dr\,\|\partial_t\theta^n\|
\le\frac{m_0}{16}\|\partial_t\theta^n\|^2+C\Delta t\sum_{j=0}^{n-1}\|\nabla\rho^j\|^2+\frac{C}{\Delta t}\int_{t_{n-1}}^{t_n}\|\rho_t(r)\|^2\,dr.
\tag{3.27}
$$
For the fourth through sixth terms on the right-hand side of (3.14) we apply the same arguments as in the proof of (3.27). In view of the rectangle quadrature rule, the fourth term can be estimated as follows:
$$
|R_4(n,\Delta t,\theta,u)|
=\Bigl|\Bigl(\Bigl(M\Bigl(\int_0^{t_n}\|\nabla u(s)\|^2\,ds\Bigr)-M\Bigl(\Delta t\sum_{j=0}^{n-1}\|\nabla u^j\|^2\Bigr)\Bigr)\partial_t u^n,\ \partial_t\theta^n\Bigr)\Bigr|
$$
$$
\le L\Bigl|\int_0^{t_n}\|\nabla u(s)\|^2\,ds-\Delta t\sum_{j=0}^{n-1}\|\nabla u^j\|^2\Bigr|\,\|\partial_t u^n\|\,\|\partial_t\theta^n\|
\le L\Bigl|\sum_{j=0}^{n-1}\int_{t_j}^{t_{j+1}}\|\nabla u(s)\|^2\,ds-\Delta t\sum_{j=0}^{n-1}\|\nabla u^j\|^2\Bigr|\,\Bigl\|\frac{1}{\Delta t}\int_{t_{n-1}}^{t_n}u_t(r)\,dr\Bigr\|\,\|\partial_t\theta^n\|
$$
$$
\le C\|\partial_t\theta^n\|\,\Bigl|\sum_{j=0}^{n-1}\int_{t_j}^{t_{j+1}}(r-t_j)\|\nabla u_t(r)\|^2\,dr\Bigr|
\le C\|\partial_t\theta^n\|\,\Delta t\,\|\nabla u_t\|^2_{L^2(0,t_n;L^2(\Omega))}
\le C\|\partial_t\theta^n\|\,\Delta t\int_0^{t_n}\|\nabla u_t(r)\|^2\,dr,
$$
where $C$ is a positive constant depending on $T$, $L$, $\|u_0\|_{H^2(\Omega)\cap H^1_0(\Omega)}$, $\|u_1\|_{H^1_0(\Omega)}$, $\|f(0)\|$, $\|f_t\|_{L^1(0,T;L^2(\Omega))}$, and $\|f\|_1$, respectively. Furthermore, it follows that
$$
2\Delta t\sum_{n=2}^{N}|R_4(n,\Delta t,\theta,u)|
\le 2L\Delta t\sum_{n=2}^{N}\Bigl|\int_0^{t_n}\|\nabla u(s)\|^2\,ds-\Delta t\sum_{j=0}^{n-1}\|\nabla u^j\|^2\Bigr|\,\|\partial_t u^n\|\,\|\partial_t\theta^n\|
\tag{3.28}
$$
$$
\le C\Delta t\sum_{n=2}^{N}\Delta t\int_0^{t_n}\|\nabla u_t(r)\|^2\,dr\,\|\partial_t\theta^n\|
\le C\Delta t\Bigl(\sum_{n=2}^{N}(\Delta t)^2\Bigl(\int_0^{t_n}\|\nabla u_t(r)\|^2\,dr\Bigr)^2\Bigr)^{1/2}\Bigl(\sum_{n=2}^{N}\|\partial_t\theta^n\|^2\Bigr)^{1/2}
$$
$$
\le\frac{m_0\Delta t}{16}\sum_{n=2}^{N}\|\partial_t\theta^n\|^2+C\Delta t\sum_{n=2}^{N}(\Delta t)^2\Bigl(\int_0^{t_n}\|\nabla u_t(r)\|^2\,dr\Bigr)^2
\le\frac{m_0\Delta t}{16}\sum_{n=2}^{N}\|\partial_t\theta^n\|^2+C(\Delta t)^2\Bigl(\int_0^{t_N}\|\nabla u_t(r)\|^2\,dr\Bigr)^2.
$$
By Taylor expansion the fifth term is estimated as
$$
|R_5(n,\Delta t,\theta,u)|
=\Bigl|\Bigl(M\Bigl(\int_0^{t_n}\|\nabla u(s)\|^2\,ds\Bigr)\bigl(u_t^n-\partial_t u^n\bigr),\ \partial_t\theta^n\Bigr)\Bigr|
\le m_1\|u_t^n-\partial_t u^n\|\,\|\partial_t\theta^n\|
$$
$$
=m_1\Bigl\|\frac{1}{\Delta t}\int_{t_{n-1}}^{t_n}(s-t_{n-1})u_{tt}(s)\,ds\Bigr\|\,\|\partial_t\theta^n\|
\le m_1\int_{t_{n-1}}^{t_n}\|u_{tt}(s)\|\,ds\,\|\partial_t\theta^n\|
\le\frac{m_0}{16}\|\partial_t\theta^n\|^2+C\Delta t\int_{t_{n-1}}^{t_n}\|u_{tt}(s)\|^2\,ds,
$$
where $m_1$ depends on $T$, $\|u_0\|_{H^1_0(\Omega)}$, $\|u_1\|_{L^2(\Omega)}$, and $\|f\|_{L^1(0,\infty;L^2(\Omega))}$, and $C$ depends on $m_0$ and $m_1$. Also, we have that
$$
\Delta t\sum_{n=2}^{N}|R_5(n,\Delta t,\theta,u)|
\le m_1\Delta t\sum_{n=2}^{N}\|u_t^n-\partial_t u^n\|\,\|\partial_t\theta^n\|
\le m_1\Delta t\Bigl(\sum_{n=2}^{N}\|u_t^n-\partial_t u^n\|^2\Bigr)^{1/2}\Bigl(\sum_{n=2}^{N}\|\partial_t\theta^n\|^2\Bigr)^{1/2}
\tag{3.29}
$$
$$
\le\frac{m_0\Delta t}{16}\sum_{n=2}^{N}\|\partial_t\theta^n\|^2+C\Delta t\sum_{n=2}^{N}\|u_t^n-\partial_t u^n\|^2
\le\frac{m_0\Delta t}{16}\sum_{n=2}^{N}\|\partial_t\theta^n\|^2+C(\Delta t)^2\int_0^{t_N}\|u_{tt}(r)\|^2\,dr,
$$
where $C$ depends on $m_0$ and $m_1$.
For the sixth term we use Green's formula and Lemma 4.7 in [22] to obtain
$$
|R_6(n,\Delta t,\theta,u)|
=\Bigl|\Bigl(\int_0^{t_n}k_t(t_n-s)\nabla u(s)\,ds-\sum_{j=1}^{n}k_t(t_{n-j})\nabla u^j,\ \nabla\partial_t\theta^n\Bigr)\Bigr|
=\Bigl|\Bigl(\int_0^{t_n}k_t(t_n-s)\triangle u(s)\,ds-\sum_{j=1}^{n}k_t(t_{n-j})\triangle u^j,\ \partial_t\theta^n\Bigr)\Bigr|
$$
$$
\le\Bigl\|\int_0^{t_n}k_t(t_n-s)\triangle u(s)\,ds-\sum_{j=1}^{n}k_t(t_{n-j})\triangle u^j\Bigr\|\,\|\partial_t\theta^n\|
=\Bigl\|\sum_{j=1}^{n}\int_{t_{j-1}}^{t_j}k_t(t_n-s)\bigl(\triangle u(s)-\triangle u(t_j)\bigr)\,ds\Bigr\|\,\|\partial_t\theta^n\|
$$
$$
\le\sum_{j=1}^{n}\int_{t_{j-1}}^{t_j}|k_t(t_n-s)|\int_{t_{j-1}}^{t_j}\|\triangle u_t(r)\|\,dr\,ds\,\|\partial_t\theta^n\|
\le\sum_{j=1}^{n}|k_t(t_{n-j})|\int_{t_{j-1}}^{t_j}\|\triangle u_t(r)\|\,dr\,\|\partial_t\theta^n\|
\le C\sum_{j=1}^{n}|k_t(t_{n-j})|\int_{t_{j-1}}^{t_j}\|\triangle u_t(r)\|^2\,dr+\frac{m_0}{16}\|\partial_t\theta^n\|^2,
$$
where $C$ depends on $m_0$. Also, it follows that
$$
\Delta t\sum_{n=2}^{N}|R_6(n,\Delta t,\theta,u)|
\le\Delta t\sum_{n=2}^{N}\sum_{j=1}^{n}|k_t(t_{n-j})|\int_{t_{j-1}}^{t_j}\|\triangle u_t(r)\|\,dr\,\|\partial_t\theta^n\|
\tag{3.30}
$$
$$
\le\Delta t\Bigl[\sum_{n=2}^{N}\Bigl(\sum_{j=1}^{n}|k_t(t_{n-j})|\int_{t_{j-1}}^{t_j}\|\triangle u_t(r)\|\,dr\Bigr)^2\Bigr]^{1/2}\Bigl[\sum_{n=2}^{N}\|\partial_t\theta^n\|^2\Bigr]^{1/2}
$$
$$
\le\frac{m_0\Delta t}{16}\sum_{n=2}^{N}\|\partial_t\theta^n\|^2+C\Delta t\sum_{n=2}^{N}\Bigl(\sum_{j=1}^{n}|k_t(t_{n-j})|\int_{t_{j-1}}^{t_j}\|\triangle u_t(r)\|\,dr\Bigr)^2
\le\frac{m_0\Delta t}{16}\sum_{n=2}^{N}\|\partial_t\theta^n\|^2+C(\Delta t)^2\Bigl(\max_{0\le r\le T}\|\triangle u_t(r)\|\Bigr)^2.
$$
It follows from (M2) that the first term on the right-hand side of (3.16) is nonnegative, that is,
$$
2\Delta t\int_0^1 M'\Bigl(\Delta t\sum_{j=0}^{n-1}\|\sigma\nabla\theta^j+\nabla\tilde u_h(t_j)\|^2\Bigr)\Bigl(\sum_{j=0}^{n-1}\sigma\|\nabla\theta^j\|^2\Bigr)\sigma\|\partial_t\theta^n\|^2\,d\sigma\ge0,\qquad\text{for }n\ge1.
$$
Also, from (2.2), the fact that $\|\nabla\tilde u_h(t)\|\le\|\nabla u(t)\|$, and (1.5), the second term on the right-hand side of (3.16) satisfies
$$
\int_0^1 M\Bigl(\Delta t\sum_{j=0}^{n-1}\|\sigma\nabla\theta^j+\nabla\tilde u_h(t_j)\|^2\Bigr)\|\partial_t\theta^n\|^2\,d\sigma\ge m_0\|\partial_t\theta^n\|^2,\qquad\text{for }n\ge1.
$$
Using the above two estimates, combining the estimates (3.22), (3.24), (3.25), (3.27)–(3.30), and substituting into (3.14), we get, for $n\ge2$,
$$
\frac12\partial_t\|\partial_t\theta^n\|^2+m_0\|\partial_t\theta^n\|^2+\frac{\nu_0}{2}\partial_t\|\nabla\theta^n\|^2
+k(t_n)\bigl(\nabla\theta^0,\nabla\partial_t\theta^n\bigr)+\Delta t\sum_{j=1}^{n}k(t_{n-j+1})\bigl(\nabla\partial_t\theta^j,\nabla\partial_t\theta^n\bigr)
\tag{3.31}
$$
$$
\le C\Delta t\sum_{j=0}^{n-1}\|\nabla\theta^j\|^2+\frac{m_0}{2}\|\partial_t\theta^n\|^2+C\Delta t\sum_{j=0}^{n-1}\|\nabla\rho^j\|^2+\frac{C}{\Delta t}\int_{t_{n-1}}^{t_n}\|\rho_t(r)\|^2\,dr+C\Delta t\int_0^{t_n}\|\nabla u_t(r)\|^2\,dr
$$
$$
+\,C\Delta t\int_{t_{n-1}}^{t_n}\|u_{tt}(r)\|^2\,dr+C\sum_{j=1}^{n}|k_t(t_{n-j})|\int_{t_{j-1}}^{t_j}\|\triangle u_t(r)\|^2\,dr+C\|\partial_t^2\rho^n\|^2+C\|\tau^n\|^2,
$$
where $\tau^n=\partial_t^2u(t_n)-u_{tt}(t_n)$. Then, multiplying by $2\Delta t$, summing from $n=2$ to $N$, and using (3.9), we find that, for $N\ge2$,
$$
\|\partial_t\theta^N\|^2+\nu_0\|\nabla\theta^N\|^2+\frac{m_0\Delta t}{2}\sum_{n=2}^{N}\|\partial_t\theta^n\|^2
\tag{3.32}
$$
$$
\le\|\partial_t\theta^1\|^2+\nu_0\|\nabla\theta^1\|^2+k(t_N)\|\nabla\theta^0\|\,\|\nabla\theta^N\|+k(t_2)\|\nabla\theta^0\|\,\|\nabla\theta^1\|
+2\Delta t\sum_{n=2}^{N}\frac{k(t_n)-k(t_{n+1})}{\Delta t}\|\nabla\theta^0\|\,\|\nabla\theta^n\|+2(\Delta t)^2k(t_1)\|\nabla\partial_t\theta^1\|^2
$$
$$
+\,C\Delta t\sum_{j=0}^{N-1}\|\nabla\theta^j\|^2+C\Delta t\sum_{j=0}^{N-1}\|\nabla\rho^j\|^2+C\int_0^{t_N}\|\rho_t(r)\|^2\,dr+C(\Delta t)^2\Bigl(\int_0^{t_N}\|\nabla u_t(r)\|^2\,dr\Bigr)^2
$$
$$
+\,C(\Delta t)^2\int_0^{t_N}\|u_{tt}(r)\|^2\,dr+C(\Delta t)^2\Bigl(\max_{0\le r\le T}\|\triangle u_t(r)\|\Bigr)^2+C\Delta t\sum_{n=2}^{N}\|\partial_t^2\rho^n\|^2+C\Delta t\sum_{n=2}^{N}\|\tau^n\|^2.
$$
By (2.15) it is clear that
$$
\Delta t\sum_{n=2}^{N}\|\partial_t^2\rho^n\|^2\le C\Delta t\sum_{n=2}^{N}\|\rho_{tt}(\zeta_n)\|^2\le C\Delta t\,h^2\sum_{n=2}^{N}\|u_{tt}(\zeta_n)\|^2_{H^1_0(\Omega)}\le Ch^2\Bigl(\max_{0\le r\le T}\|u_{tt}(r)\|_{H^1_0(\Omega)}\Bigr)^2,
\tag{3.33}
$$
where $\zeta_n\in[t_{n-1},t_n]$. Since
$$
\partial_t\rho^n=\frac{1}{\Delta t}\int_{t_{n-1}}^{t_n}\rho_t(r)\,dr,
$$
from (2.15) we have that
$$
\|\partial_t\rho^N\|\le Ch\max_{0\le r\le T}\|u_t(r)\|_{H^1_0(\Omega)},
\tag{3.34}
$$
$$
\|\nabla\rho^N\|\le Ch\,\|u(t_N)\|_{H^2(\Omega)\cap H^1_0(\Omega)},
\tag{3.35}
$$
and it follows that
$$
\|\partial_t\rho^n\|\le\frac{1}{\Delta t}\int_{t_{n-1}}^{t_n}\|\rho_t(r)\|\,dr\le C\frac{h}{\Delta t}\int_{t_{n-1}}^{t_n}\|u_t(r)\|_{H^1_0(\Omega)}\,dr\le C\frac{h}{(\Delta t)^{1/2}}\Bigl(\int_{t_{n-1}}^{t_n}\|u_t(r)\|^2_{H^1_0(\Omega)}\,dr\Bigr)^{1/2},
$$
so that
$$
\|\partial_t\rho^n\|^2\le C\frac{h^2}{\Delta t}\int_{t_{n-1}}^{t_n}\|u_t(r)\|^2_{H^1_0(\Omega)}\,dr;
$$
thus,
$$
\Delta t\sum_{n=2}^{N}\|\partial_t\rho^n\|^2\le Ch^2\int_0^{t_N}\|u_t(r)\|^2_{H^1_0(\Omega)}\,dr.
\tag{3.36}
$$
Also,
$$
\Delta t\sum_{n=2}^{N}\|\tau^n\|^2\le C\Delta t\sum_{n=2}^{N}\Bigl(\int_{t_{n-2}}^{t_n}\|u_{ttt}(r)\|\,dr\Bigr)^2\le C(\Delta t)^2\int_0^{t_N}\|u_{ttt}(r)\|^2\,dr.
\tag{3.37}
$$
Furthermore,
$$
\theta^1=U^1-\tilde u_h(t_1)=U^0+\Delta t\,u_{1h}-\tilde u_h(t_1)=\tilde u_h(t_0)+\Delta t\,\tilde u_{h,t}(0)-\tilde u_h(t_1)
=\Delta t\Bigl[\tilde u_{h,t}(0)-\frac{\tilde u_h(t_1)-\tilde u_h(t_0)}{\Delta t}\Bigr]
=\Delta t\bigl[\tilde u_{h,t}(0)-\tilde u_{h,t}(\zeta_1)\bigr]=-\Delta t\int_0^{\zeta_1}\tilde u_{h,tt}(r)\,dr,
$$
where $\zeta_1\in[0,t_1]$, which implies that
$$
\|\nabla\theta^1\|\le\Delta t\int_0^{t_1}\|\nabla\tilde u_{h,tt}(r)\|\,dr\le C\Delta t\int_0^{t_1}\|\nabla u_{tt}(r)\|\,dr.
\tag{3.38}
$$
Similarly,
$$
\partial_t\theta^1=\frac{U^1-\tilde u_h(t_1)}{\Delta t}=-\int_0^{\zeta_1}\tilde u_{h,tt}(r)\,dr,
\qquad
\|\partial_t\theta^1\|\le\int_0^{t_1}\|\tilde u_{h,tt}(r)\|\,dr\le C\int_0^{t_1}\|\nabla u_{tt}(r)\|\,dr\le C\Delta t\max_{0\le r\le\Delta t}\|\nabla u_{tt}(r)\|,
\tag{3.39}
$$
and
$$
\|\nabla\partial_t\theta^1\|\le C\int_0^{t_1}\|\nabla u_{tt}(r)\|\,dr\le C\Delta t\max_{0\le r\le\Delta t}\|\nabla u_{tt}(r)\|.
\tag{3.40}
$$
From (3.32)–(3.40) and the discrete Gronwall lemma we have that
$$
\|\partial_t\theta^N\|^2+\nu_0\|\nabla\theta^N\|^2+\frac{m_0\Delta t}{2}\sum_{n=2}^{N}\|\partial_t\theta^n\|^2
\tag{3.41}
$$
$$
\le C(\Delta t)^4\Bigl(\max_{0\le r\le\Delta t}\|\nabla u_{tt}(r)\|\Bigr)^2+C(\Delta t)^2\Bigl(\int_0^{t_1}\|\nabla u_{tt}(r)\|\,dr\Bigr)^2+\nu_0(\Delta t)^2\Bigl(\max_{0\le r\le\Delta t}\|\nabla u_{tt}(r)\|\Bigr)^2
+Ch^2\Delta t\sum_{j=0}^{N-1}\|u(t_j)\|^2_{H^2(\Omega)\cap H^1_0(\Omega)}
$$
$$
+\,Ch^2\int_0^{t_N}\|u_t(r)\|^2_{H^1_0(\Omega)}\,dr+C(\Delta t)^2\Bigl(\int_0^{t_N}\|\nabla u_t(r)\|^2\,dr\Bigr)^2+C(\Delta t)^2\int_0^{t_N}\|u_{tt}(r)\|^2\,dr
$$
$$
+\,Ch^2\Bigl(\max_{0\le r\le T}\|u_{tt}(r)\|_{H^1_0(\Omega)}\Bigr)^2+C(\Delta t)^2\Bigl(\max_{0\le r\le T}\|\triangle u_t(r)\|\Bigr)^2+C(\Delta t)^2\int_0^{t_N}\|u_{ttt}(r)\|^2\,dr.
$$
The desired estimate (3.15) now follows from (3.41), (3.34), (3.35) and the triangle inequality.
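For reference, the discrete Grönwall inequality invoked in the last step can be stated, in one standard form (the exact variant used in the paper may differ in constants), as follows:

```latex
% Discrete Gronwall lemma (one standard form).
% If a_n, b_n >= 0 satisfy the first inequality for 0 <= n <= N with C >= 0,
% then the second inequality holds with t_n = n*\Delta t.
\[
  a_n \;\le\; b_n + C\,\Delta t \sum_{j=0}^{n-1} a_j
  \quad\Longrightarrow\quad
  a_n \;\le\; \Bigl(\max_{0\le j\le n} b_j\Bigr)\, e^{C t_n},
  \qquad t_n = n\,\Delta t .
\]
```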
The proof of Theorem 3.1 is completed. □

Remark 3.1. By an argument analogous to the proof of Theorem 3.1, we can obtain that, under the conditions of Theorem 3.1, for $N\ge2$,
$$
\|U^N-u(t_N)\|\le C(h+\Delta t),
\tag{3.42}
$$
where $C$ is a positive constant depending on $T$, $L$, $\|u_0\|_{H^2(\Omega)\cap H^1_0(\Omega)}$, $\|u_1\|_{H^1_0(\Omega)}$, $\|f(0)\|$, $\|f_t\|_{L^1(0,T;L^2(\Omega))}$, and $\|f\|_1$, respectively. We shall not pursue this line of investigation in further detail. Under an additional appropriate condition on the triangulation (cf. [14, Sections 2 and 4]), we believe that the optimal $L^2(\Omega)$-norm error estimate $O(h^2+\Delta t)$ can be obtained.

4. Numerical experiments

In this section we present numerical experiments that illustrate our theoretical findings. We consider a nonlinear and nonlocal hyperbolic integro-differential equation that models the vibrations of viscoelastic beams. On the bounded interval $(0,1)$, we consider the viscoelastic Euler–Bernoulli beam equation (see [7, (6.1)])
$$
u_{tt}(x,t)+M\Bigl(\int_0^t\int_0^1|u_x(x,s)|^2\,dx\,ds\Bigr)u_t(x,t)-u_{xx}(x,t)+\int_0^t\beta(t-s)u_{xx}(x,s)\,ds=0,
\tag{4.1a}
$$
in $(0,1)\times(0,1]$, taken together with the Dirichlet boundary and initial conditions
$$
u(0,t)=u(1,t)=0,\qquad 0<t\le1,
\tag{4.1b}
$$
$$
u(x,0)=x(1-x)e^{-x},\qquad
u_t(x,0)=x(1-x)+x\bigl(e^{-x}-e^{-1}\cos(24\pi x)\bigr),\qquad 0<x<1,
\tag{4.1c}
$$
with the kernel
$$
\beta(t)=e^{-2t}t^{-1/2}/\Gamma(\tfrac12),\qquad 0<t\le1,
\tag{4.2}
$$
and two types of nonlinear functions, $M(s)=1+s$ and $M(s)=(1+s)^{1/2}$.

We apply the numerical method (3.3) to problem (4.1). We use piecewise linear finite elements for the spatial discretization, with mesh size $h=1/(J+1)$, and the linearized backward Euler scheme for the temporal discretization, with step size $\Delta t(N)=1/N$ and $t_N=N\Delta t(N)=1$. For the initial data we choose the Ritz projections of $u(x,0)$ and $u_t(x,0)$ onto $S_h$. We note that the Ritz projection $R_h$, defined by
$$
(\nabla(R_hu-u),\nabla\chi)=0,\qquad\text{for all }\chi\in S_h,
$$
(see (2.14)), is particularly simple in our case: $R_hu$ is just the piecewise linear interpolant of $u$, that is,
$$
R_hu(x_j)=u(x_j),\qquad\text{for }j=0,1,2,\ldots,J+1,
$$
with $x_j=jh$ (see [22, p. 65]).
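To make the spatial setup concrete, the following is a minimal sketch (our own, with hypothetical function names, not the author's code) of the standard piecewise linear mass and stiffness matrices on a uniform mesh of $(0,1)$ with homogeneous Dirichlet conditions, together with the nodal interpolation of the initial data (4.1c); on a uniform mesh these matrices are $\frac{h}{6}\,\mathrm{tridiag}(1,4,1)$ and $\frac1h\,\mathrm{tridiag}(-1,2,-1)$:

```python
import numpy as np

def p1_matrices(J):
    """Mass matrix X and stiffness matrix C for piecewise linear elements
    on a uniform mesh of (0, 1) with J interior nodes, h = 1/(J + 1),
    and homogeneous Dirichlet boundary conditions."""
    h = 1.0 / (J + 1)
    I, up, lo = np.eye(J), np.eye(J, k=1), np.eye(J, k=-1)
    X = (h / 6.0) * (4.0 * I + up + lo)   # mass:      h/6 * tridiag(1, 4, 1)
    C = (1.0 / h) * (2.0 * I - up - lo)   # stiffness: 1/h * tridiag(-1, 2, -1)
    return X, C

def interpolate_initial_data(J):
    """Nodal interpolant (= Ritz projection here) of u(., 0) and u_t(., 0)
    from (4.1c) at the interior nodes x_1, ..., x_J."""
    h = 1.0 / (J + 1)
    x = np.linspace(h, 1.0 - h, J)
    U0 = x * (1.0 - x) * np.exp(-x)
    V0 = x * (1.0 - x) + x * (np.exp(-x) - np.exp(-1.0) * np.cos(24.0 * np.pi * x))
    return x, U0, V0
```

Both matrices are symmetric tridiagonal, so in practice a banded solver could replace the dense arrays used here for brevity.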
The finite element semi-discretization of (4.1) can be written as
$$
X\,U''(t)+M\Bigl(\frac{2}{h}\sum_{j=1}^{J}\int_0^t u_j(s)\bigl(u_j(s)-u_{j+1}(s)\bigr)\,ds\Bigr)X\,U'(t)+C\,U(t)-\int_0^t\beta(t-s)\,C\,U(s)\,ds=0,\qquad 0<t\le1,
\tag{4.3a}
$$
$$
U(0)=\bigl(u(x_1,0),u(x_2,0),\ldots,u(x_J,0)\bigr)^T,\qquad
U'(0)=\bigl(u_t(x_1,0),u_t(x_2,0),\ldots,u_t(x_J,0)\bigr)^T,
\tag{4.3b}
$$
where $U(t)=(u_1(t),u_2(t),\ldots,u_J(t))^T$ (with $u_{J+1}(t)=0$ by the boundary condition). We apply the linearized backward Euler method to (4.3) to obtain
$$
X\,\frac{U^n-2U^{n-1}+U^{n-2}}{(\Delta t)^2}+M\Bigl(\Delta t\sum_{l=0}^{n-1}\Bigl[\frac{2}{h}\sum_{j=1}^{J}u_j^l\bigl(u_j^l-u_{j+1}^l\bigr)\Bigr]\Bigr)X\,\frac{U^n-U^{n-1}}{\Delta t}+C\,U^n-\sum_{j=1}^{n}\beta_{n-j}\,C\,U^j=0,\qquad n\ge2,
\tag{4.4a}
$$
$$
U^0=\bigl(u(x_1,0),u(x_2,0),\ldots,u(x_J,0)\bigr)^T,\qquad
U^1=U^0+\Delta t\,\bigl(u_t(x_1,0),u_t(x_2,0),\ldots,u_t(x_J,0)\bigr)^T,
\tag{4.4b}
$$
where $U^l=(u_1^l,u_2^l,\ldots,u_J^l)^T$, $l\ge0$, and
$$
\beta_j=\int_{j\Delta t}^{(j+1)\Delta t}\beta(r)\,dr,\qquad j=0,1,2,\ldots,n.
\tag{4.5}
$$
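As an illustration, the fully discrete scheme (4.4) can be implemented in a few dozen lines. The sketch below is our own minimal reading of the scheme, not the author's code. It uses the fact that, for piecewise linear elements with Dirichlet conditions, the nonlocal argument $\frac2h\sum_j u_j(u_j-u_{j+1})$ equals $U^{T}CU$, and that the substitution $t=s^2$ in (4.2) gives the kernel weights (4.5) in closed form, $\beta_j=\bigl[\operatorname{erf}(\sqrt{2(j+1)\Delta t})-\operatorname{erf}(\sqrt{2j\Delta t})\bigr]/\sqrt2$; one linear system is solved per time step:

```python
import numpy as np
from math import erf, sqrt

def solve(J=100, N=50, T=1.0, M=lambda s: 1.0 + s):
    """Linearized backward Euler scheme (4.4) for problem (4.1)."""
    h, dt = 1.0 / (J + 1), T / N
    x = np.linspace(h, 1.0 - h, J)
    # P1 mass and stiffness matrices on a uniform mesh (Dirichlet BC).
    X = (h / 6.0) * (4.0 * np.eye(J) + np.eye(J, k=1) + np.eye(J, k=-1))
    C = (1.0 / h) * (2.0 * np.eye(J) - np.eye(J, k=1) - np.eye(J, k=-1))
    # Weights beta_j of (4.5), in closed form via the error function.
    beta = np.array([(erf(sqrt(2.0 * (j + 1) * dt)) - erf(sqrt(2.0 * j * dt)))
                     / sqrt(2.0) for j in range(N + 1)])
    # Initial data (4.4b): nodal interpolants of u(.,0) and u_t(.,0).
    U0 = x * (1.0 - x) * np.exp(-x)
    V0 = x * (1.0 - x) + x * (np.exp(-x) - np.exp(-1.0) * np.cos(24.0 * np.pi * x))
    hist = [U0, U0 + dt * V0]                 # U^0 and U^1
    for n in range(2, N + 1):
        # Nonlocal coefficient: dt * sum over past levels of int |u_x|^2 dx,
        # which equals U^T C U for piecewise linear u with Dirichlet BC.
        m = M(dt * sum(U @ (C @ U) for U in hist))
        A = X / dt**2 + m * X / dt + (1.0 - beta[0]) * C
        rhs = X @ (2.0 * hist[-1] - hist[-2]) / dt**2 + m * (X @ hist[-1]) / dt \
              + C @ sum(beta[n - j] * hist[j] for j in range(1, n))
        hist.append(np.linalg.solve(A, rhs))
    return x, hist[-1]
```

Storing the full history is unavoidable here, since the memory term of (4.4a) involves all previous time levels; for long-time runs one would compress or truncate the history.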
We present spatial and temporal errors in the $L^2(0,1)$ norm at $t_N=1$, and the corresponding convergence rates determined by
$$
\text{time convergence rate}\simeq\frac{\log(e_N/e_{N+1})}{\log(\Delta t(N)/\Delta t(N+1))},
\tag{4.6}
$$
where
$$
e_N=\Bigl(\frac{1}{J+1}\sum_{j=1}^{J}\bigl|u^N_{j,\Delta t(N)}-u^{2N}_{j,\Delta t(N)/2}\bigr|^2\Bigr)^{1/2},\qquad J\gg N,
\tag{4.7}
$$
is the norm of the corresponding error and $\Delta t(N)$ is the step size associated with $e_N$; also,
$$
\text{space convergence rate}\simeq\frac{\log(e_J/e_{J+1})}{\log(h_J/h_{J+1})},
\tag{4.8}
$$
in which
$$
e_J=\Bigl(\frac{1}{J+1}\sum_{j=1}^{J}\bigl|u^N_{j,h_J}-u^N_{2j,h_J/2}\bigr|^2\Bigr)^{1/2},\qquad N\gg J,
\tag{4.9}
$$
is the norm of the corresponding discrete error and $h_J$ is the mesh size associated with $e_J$.
D. Xu / Nonlinear Analysis: Real World Applications 51 (2020) 103002
25
Fig. 1. Norms of approximated solution: h = 1/101, k = 1/50.
Fig. 2. Error with time, h = 1/101, k = 1/50.
Figs. 1 and 2 are plotted to exhibit the behavior of the solutions and of the errors in time with $M(u)=1+u$. For $M(u)=(1+u)^{1/2}$ the corresponding figures are similar to Figs. 1–2, and we omit the details. Note that the exact solution is not known; therefore, the errors are calculated using the double mesh principle (cf. (4.7)). Fig. 1 shows the exponential decay of the $L^2(\Omega)$-norm of the numerical solutions $U^n$ on the time interval $[0.5,4]$ with $M(u)=1+u$, and Fig. 2 numerically illustrates the exponential decay of the error $e_N$ on the same interval. This shows numerically that the scheme (4.3) for problem (4.1) preserves the exponential decay of the solutions of problem (4.1) (see [7, Theorem 5.1]).

The numerical results for the scheme (4.4) applied to problem (4.1) with $M(s)=1+s$ and $M(s)=(1+s)^{1/2}$ are presented in Tables 1–4, respectively. Fig. 3 shows that the error in the $L^2$-norm in time attains first-order accuracy for $M(s)=1+s$ and $t_N=1$, and Fig. 4 shows that the error in space attains second-order accuracy for $M(s)=1+s$ and $t_N=1$. Tables 1 and 3 clearly show the first order of convergence of (4.4) in the time direction. Further, in Tables 2 and 4 it is observed that the order of convergence in the $L^2(0,1)$ norm at $t_N=1$ is two, which suggests an optimal order of convergence. However, our present analysis does not prove this; it will form a part of our future work.

Fig. 3. Convergence rate in time with M(s) = 1 + s.
Fig. 4. Convergence rate in space with M(s) = 1 + s.

Table 1
The L2 errors and convergence rates in time with M(s) = 1 + s, Δt = 1/N, N = 10, 20, 30, 40, 50, J = 101, h = 1/101, and tN = 1.

N         eN         Rate
10        0.0074     –
20        0.0037     1.0000
30        0.0025     0.9669
40        0.0019     0.9540
50        0.0015     1.0594
Theory               1.0000
Table 2
The L2 errors and convergence rates in space with M(s) = 1 + s, h = 1/J, J = 5, 10, 15, 20, N = 16000, Δt = 1/16000, and tN = 1.

J         eJ         Rate
5         0.0171     –
10        0.0042     2.0255
15        0.0019     1.9563
20        0.0011     1.8998

Table 3
The L2 errors and convergence rates in time with M(s) = (1 + s)^{1/2}, Δt = 1/N, N = 10, 20, 30, 40, 50, J = 101, h = 1/101, and tN = 1.

N         eN         Rate
10        0.0075     –
20        0.0036     1.0589
30        0.0025     0.8993
40        0.0018     1.1419
50        0.0014     1.1262
Theory               1.0000

Table 4
The L2 errors and convergence rates in space with M(s) = (1 + s)^{1/2}, h = 1/J, J = 5, 10, 15, 20, N = 16000, Δt = 1/16000, and tN = 1.

J         eJ         Rate
5         0.0172     –
10        0.0058     1.5683
15        0.0027     1.8858
20        0.0012     2.8188
References

[1] M.M. Cavalcanti, V.N. Domingos Cavalcanti, T.F. Ma, Exponential decay of the viscoelastic Euler–Bernoulli equation with a nonlocal dissipation in general domains, Differential Integral Equations 17 (2004) 495–510.
[2] C.M. Dafermos, Asymptotic stability in viscoelasticity, Arch. Ration. Mech. Anal. 37 (1970) 297–308.
[3] C.M. Dafermos, An abstract Volterra equation with applications to linear viscoelasticity, J. Differential Equations 7 (1970) 554–569.
[4] G. Lebon, C. Perez-Garcia, J. Casas-Vazquez, On the thermodynamic foundations of viscoelasticity, J. Chem. Phys. 88 (1988) 5068–5075.
[5] G. Leugering, Boundary controllability of a viscoelastic string, in: G. Da Prato, M. Iannelli (Eds.), Volterra Integro-Differential Equations in Banach Spaces and Applications, Longman Sci. Tech., Harlow, Essex, 1989, pp. 258–270.
[6] M. Renardy, W.J. Hrusa, J.A. Nohel, Mathematical Problems in Viscoelasticity, in: Pitman Monographs Pure Appl. Math., vol. 35, Longman Sci. Tech., Harlow, Essex, 1988.
[7] P. Cannarsa, D. Sforza, A stability result for a class of nonlinear integro-differential equations with L1 kernels, Appl. Math. 35 (4) (2008) 395–430.
[8] S. Larsson, F. Saedpanah, The continuous Galerkin method for an integro-differential equation modeling dynamic fractional order viscoelasticity, IMA J. Numer. Anal. 30 (2010) 964–986.
[9] F. Saedpanah, Continuous Galerkin finite element methods for hyperbolic integro-differential equations, IMA J. Numer. Anal. 35 (2015) 885–908.
[10] J.R. Cannon, Yanping Lin, C.Y. Xie, Galerkin methods and L2-error estimates for hyperbolic integro-differential equations, Calcolo 26 (1989) 197–207.
[11] W. Allegretto, Yanping Lin, Numerical solutions for a class of differential equations in linear viscoelasticity, Calcolo 30 (1993) 69–88.
[12] Da Xu, Decay properties for the numerical solutions of a partial differential equation with memory, J. Sci. Comput. 62 (2015) 146–178.
[13] Da Xu, Boundary observability of semi-discrete second-order integro-differential equations derived from piecewise Hermite cubic orthogonal spline collocation method, Appl. Math. Optim. 77 (2018) 73–97.
[14] Da Xu, Finite element methods of the two nonlinear integro-differential equations, Appl. Math. Comput. 58 (1993) 241–273.
[15] E.G. Yanik, G. Fairweather, Finite element methods for parabolic and hyperbolic partial integro-differential equations, Nonlinear Anal. 12 (1988) 785–809.
[16] G. Fairweather, Spline collocation methods for a class of hyperbolic partial integro-differential equations, SIAM J. Numer. Anal. 31 (2) (1994) 444–460.
[17] Yi Yan, G. Fairweather, Orthogonal spline collocation methods for some partial integro-differential equations, SIAM J. Numer. Anal. 29 (3) (1992) 755–768.
[18] B. Bialecki, G. Fairweather, Orthogonal spline collocation methods for partial differential equations, J. Comput. Appl. Math. 128 (2001) 55–82.
[19] S. Larsson, V. Thomée, Nai-Ying Zhang, Interpolation of coefficients and transformation of the dependent variable in finite element methods for the nonlinear heat equation, Math. Methods Appl. Sci. 11 (1989) 105–124.
[20] P.G. Ciarlet, The Finite Element Method for Elliptic Problems, North-Holland, Amsterdam, 1978.
[21] V. Thomée, Galerkin Finite Element Methods for Parabolic Problems, Springer, 1997.
[22] W. McLean, V. Thomée, Numerical solution of an evolution equation with a positive-type memory term, J. Aust. Math. Soc. Ser. B 35 (1993) 23–70.
[23] I.H. Sloan, V. Thomée, Time discretization of an integro-differential equation of parabolic type, SIAM J. Numer. Anal. 23 (5) (1986) 1052–1061.