
Optimal error analysis of Crank–Nicolson schemes for a coupled nonlinear Schrödinger system in 3D

Weiwei Sun (a), Jilu Wang (b,*)

(a) Department of Mathematics, City University of Hong Kong, Kowloon, Hong Kong
(b) Department of Scientific Computing, Florida State University, Tallahassee, FL 32306, USA

* Corresponding author. E-mail addresses: [email protected] (W. Sun), [email protected] (J. Wang).

Article history: Received 1 January 2015; received in revised form 5 October 2016.

Abstract. The paper is concerned with the time-step condition of the commonly used semi-implicit Crank–Nicolson finite difference schemes for a coupled nonlinear Schrödinger system in three-dimensional space. We present the optimal L2 error estimate without any restriction on the time step, while all previous works require certain time-step conditions. Our approach is based on a rigorous analysis of both the real and imaginary parts of the energy estimate (inequality) of the error function. Numerical examples for both two-dimensional and three-dimensional models are investigated, and the numerical results illustrate our theoretical analysis.

Keywords: Unconditionally optimal error estimate; Coupled nonlinear Schrödinger system; Semi-implicit Crank–Nicolson schemes

1. Introduction

In this paper, we consider the initial–boundary value problem of a coupled nonlinear Schrödinger (CNLS) system in three-dimensional space:

    i u_t + ∆u + (|u|^2 + β|v|^2) u = 0,    x ∈ Ω, 0 < t ≤ T,    (1.1)
    i v_t + ∆v + (|v|^2 + β|u|^2) v = 0,    x ∈ Ω, 0 < t ≤ T,    (1.2)
    u(x, 0) = u_0(x),  v(x, 0) = v_0(x),    x ∈ Ω,               (1.3)
    u|_{x ∈ ∂Ω} = 0,   v|_{x ∈ ∂Ω} = 0,                          (1.4)

where i = √(−1), β is a given constant, and Ω = [0, L_1] × [0, L_2] × [0, L_3] ⊂ R^3. Here u(x, t) and v(x, t) are complex unknown functions defined in Ω × [0, T]. When β = 1 and u_0(x) = v_0(x), the system reduces to the single equation

    i u_t + ∆u + 2|u|^2 u = 0,    x ∈ Ω, 0 < t ≤ T,              (1.5)
    u(x, 0) = u_0(x),             x ∈ Ω,                          (1.6)
    u|_{x ∈ ∂Ω} = 0.                                              (1.7)

Schrödinger equations describe many physical phenomena in optics, mechanics, and plasma physics. Here, we are particularly interested in the coupled nonlinear Schrödinger system because of its important applications and physical significance [1–3]. We refer the readers to [4–6] for an overview of various properties of the system, such as existence, uniqueness


and stability results of solutions. Numerical methods and analysis for such equations have been investigated extensively; see, e.g., [7–14] for finite difference methods, [15,16] for finite element methods, and [17–22] for other approaches.

Fully implicit schemes are usually unconditionally stable; however, at each time step one has to solve a system of nonlinear equations. An explicit scheme is much easier to implement, but it suffers from a severely restricted time stepsize imposed by the convergence requirement. A popular and widely used compromise is a semi-implicit scheme, such as the linearized Crank–Nicolson scheme, which at each time step only requires the solution of a linear system. The time-step restriction is always a key issue for these schemes. For the Schrödinger equations, several modified explicit schemes were studied in [23,24], in which an extra dissipative term was added to improve the stability condition. Analysis of fully implicit nonlinear schemes can be found in [25,26]. Chang et al. [27] presented a systematic review and numerical comparison of several commonly used finite difference schemes for the generalized nonlinear Schrödinger equations; based on their numerical results, the linearized Crank–Nicolson scheme showed the best performance.

Numerical analysis of the linearized Crank–Nicolson scheme has been carried out by several authors. Wang et al. [28] analyzed this scheme for the CNLS equations in one-dimensional space and provided an optimal L2 error estimate under the time-step restriction τ = o(h^{1/4}), where τ and h are the temporal and spatial stepsizes, respectively. Applying their approach to the equations in three-dimensional space requires the stronger condition τ = o(h^{3/4}). A similar result for the Kuramoto–Tsuzuki equation can be found in [29] with the same time-step condition. The approach is based on the analysis of the imaginary part of the classical energy estimate (inequality), since the real part alone is too weak. Such an approach was also used for Galerkin finite element methods, e.g., in [15], where optimal error estimates were obtained under similar time-step conditions. More recently, Bao and Cai [30] studied a class of Schrödinger equations with an extra ε-perturbed term, which reduces to the classical nonlinear Schrödinger equation when ε = 0. An optimal L2 error estimate of a linearized leap-frog scheme for the equations in one-dimensional space was obtained when τ, h ≤ s_0 for a certain small positive constant s_0. As they pointed out, their analysis can be extended to the equations in three-dimensional space under the time-step condition τ = o(h), and the results remain valid for the classical Schrödinger equations (ε = 0). In that analysis, however, the numerical solution was bounded in the L∞ norm through the classical inverse inequality

    ∥e_h∥_{L∞} ≤ C γ_d(h) ∥e_h∥_{H^1}

for the error function e_h, where d is the dimension, γ_2 = |ln h| and γ_3 = h^{−1/2}. The H^1 error bound ∥e_h∥_{H^1} was estimated by an H^2-estimate technique, which requires higher regularity of the solution; moreover, the H^2-estimate technique may not be applicable directly to Galerkin finite element methods. To obtain an L2 error estimate directly by the classical energy estimate method [28,30], one has to use the inverse inequality

    ∥e_h∥_{L∞} ≤ C h^{−d/2} ∥e_h∥_{L^2}

to bound the numerical solution in the L∞ norm, which results in an even stronger time-step restriction. In practical computations, time-step restrictions may force the use of an unnecessarily small time step and make the simulation extremely time-consuming; the problem becomes more serious when a non-uniform mesh is used.

This paper focuses on unconditionally optimal error analysis of two popular linearized semi-implicit Crank–Nicolson finite difference schemes for the CNLS system in three-dimensional space. In these two schemes, the nonlinear term is treated by a linearized semi-implicit approximation and by an explicit approximation, respectively. The optimal L2 error estimate is obtained without any time-step condition, i.e., for 0 < h_r ≤ L_r and 0 < τ ≤ T, for the first scheme, and under the condition h < s_0 for some small constant s_0 > 0 for the second scheme. The approach is based on a rigorous analysis of both the real and imaginary parts of the energy estimate (inequality), together with a simple inequality by which the error function at a given time level is bounded in terms of its averages at consecutive time levels when τ ≥ h. Numerical results presented in this paper confirm that the first scheme is unconditionally convergent. More importantly, our approach can be extended easily to the leap-frog finite difference scheme and to Galerkin finite element methods to obtain optimal error estimates without any time-step condition, whereas previous works always require certain time-step restrictions.

The paper is organized as follows. In Section 2, we present two linearized Crank–Nicolson schemes and our main results. After introducing some notations and lemmas, we prove the optimal error estimates of the finite difference schemes in Section 3. Numerical results are given in Section 4: two artificial examples in 2D and 3D are presented to show that the linearized Crank–Nicolson scheme provides second-order accuracy in both time and space without any time-step condition, and an example of the interaction of two solitons is also presented.

2. Linearized Crank–Nicolson schemes and main results

In this section, we present the linearized Crank–Nicolson schemes. Let

    Ω_h = {(x_{1,j_1}, x_{2,j_2}, x_{3,j_3}) | x_{r,j_r} = j_r h_r, 0 ≤ j_r ≤ M_r, r = 1, 2, 3}

be a partition of Ω with mesh sizes h_r = L_r / M_r, where

    α_1 ≤ h_i / h_j ≤ α_2,    1 ≤ i, j ≤ 3,

for some positive constants α_1 and α_2, and denote h = max{h_1, h_2, h_3}. Let Ω_τ = {t_n | t_n = nτ, 0 ≤ n ≤ N} be a uniform partition of [0, T] with time step τ = T/N, set Ω_{hτ} = Ω_h × Ω_τ, and denote the index sets

    J_h  = {(j_1, j_2, j_3) | 0 ≤ j_r ≤ M_r, r = 1, 2, 3},
    J_h′ = {(j_1, j_2, j_3) | 1 ≤ j_r ≤ M_r − 1, r = 1, 2, 3},
    J_h″ = {(j_1, j_2, j_3) | 0 ≤ j_r ≤ M_r − 1, r = 1, 2, 3}.


Let {U_j^n, V_j^n | 0 ≤ j_r ≤ M_r, 0 ≤ n ≤ N, r = 1, 2, 3} be two mesh functions defined on Ω_{hτ}, where j = (j_1, j_2, j_3) denotes a multi-index. We introduce the notations

    U_j^{n−1/2} = (U_j^n + U_j^{n−1}) / 2,
    D_τ U_j^n = (U_j^n − U_j^{n−1}) / τ,
    ∇_h U_j^n = ( (U_{j_1+1,j_2,j_3}^n − U_{j_1,j_2,j_3}^n)/h_1, (U_{j_1,j_2+1,j_3}^n − U_{j_1,j_2,j_3}^n)/h_2, (U_{j_1,j_2,j_3+1}^n − U_{j_1,j_2,j_3}^n)/h_3 ),
    ∆_h U_j^n = ∇_h · (∇_h U_{j_1−1,j_2−1,j_3−1}^n).

Moreover, we define the discrete inner product and norms by

    (U^n, V^n) = h_∆ Σ_{j ∈ J_h′} U_j^n V̄_j^n,      ∥U^n∥^2 = (U^n, U^n),
    ∥U^n∥_{L^p} = ( h_∆ Σ_{j ∈ J_h′} |U_j^n|^p )^{1/p},    ∥U^n∥_{L^∞} = max_{j ∈ J_h′} |U_j^n|,
    ∥∇_h U^n∥ = ( h_∆ Σ_{j ∈ J_h″} |∇_h U_j^n|^2 )^{1/2},

where V̄_j^n denotes the complex conjugate of V_j^n and h_∆ = h_1 h_2 h_3.
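For readers who wish to experiment with the schemes below, the following minimal Python/NumPy sketch (an illustration under assumed array shapes and names, not part of the original paper) shows one way to realize the discrete operators and norms just defined on a 3D grid.

```python
import numpy as np

# U has shape (M1+1, M2+1, M3+1) and stores U_{j1,j2,j3}; h = (h1, h2, h3).

def grad_h(U, h):
    """Forward-difference gradient nabla_h U on the cells j_r = 0..M_r-1 (index set J_h'')."""
    return ((U[1:, :-1, :-1] - U[:-1, :-1, :-1]) / h[0],
            (U[:-1, 1:, :-1] - U[:-1, :-1, :-1]) / h[1],
            (U[:-1, :-1, 1:] - U[:-1, :-1, :-1]) / h[2])

def laplace_h(U, h):
    """Standard 7-point discrete Laplacian Delta_h U at the interior points j in J_h'."""
    return ((U[2:, 1:-1, 1:-1] - 2*U[1:-1, 1:-1, 1:-1] + U[:-2, 1:-1, 1:-1]) / h[0]**2
          + (U[1:-1, 2:, 1:-1] - 2*U[1:-1, 1:-1, 1:-1] + U[1:-1, :-2, 1:-1]) / h[1]**2
          + (U[1:-1, 1:-1, 2:] - 2*U[1:-1, 1:-1, 1:-1] + U[1:-1, 1:-1, :-2]) / h[2]**2)

def inner(U, V, h):
    """Discrete inner product (U, V) = h_Delta * sum over interior points of U * conj(V)."""
    hD = h[0] * h[1] * h[2]
    return hD * np.vdot(V[1:-1, 1:-1, 1:-1], U[1:-1, 1:-1, 1:-1])

def norm_L2(U, h):
    """Discrete L2 norm ||U||."""
    return np.sqrt(inner(U, U, h).real)
```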

With these notations, we define a linearized Crank–Nicolson scheme for Eqs. (1.1)–(1.4) by

    i D_τ U_j^n + ∆_h U_j^{n−1/2} + G^{n−1/2}(U, V) U_j^{n−1/2} = 0,    j ∈ J_h′, 2 ≤ n ≤ N,    (2.1)
    i D_τ V_j^n + ∆_h V_j^{n−1/2} + G^{n−1/2}(V, U) V_j^{n−1/2} = 0,    j ∈ J_h′, 2 ≤ n ≤ N,    (2.2)

subject to

    U_j^n |_{j ∈ ∂J_h} = V_j^n |_{j ∈ ∂J_h} = 0,    2 ≤ n ≤ N,                                   (2.3)

where

    G^{n−1/2}(U, V) = (1/2) [ 3 ( |U_j^{n−1}|^2 + β|V_j^{n−1}|^2 ) − ( |U_j^{n−2}|^2 + β|V_j^{n−2}|^2 ) ]    (2.4)

is a standard extrapolation approximation [31]. At the initial step n = 1, we use a linearized Euler scheme defined by

    i D_τ U_j^1 + ∆_h U_j^1 + ( |U_j^0|^2 + β|V_j^0|^2 ) U_j^1 = 0,    j ∈ J_h′,                 (2.5)
    i D_τ V_j^1 + ∆_h V_j^1 + ( |V_j^0|^2 + β|U_j^0|^2 ) V_j^1 = 0,    j ∈ J_h′,                 (2.6)

with

    U_j^0 = u_0(x_j),    V_j^0 = v_0(x_j),    j ∈ J_h,                                           (2.7)
    U_j^1 |_{j ∈ ∂J_h} = V_j^1 |_{j ∈ ∂J_h} = 0.                                                  (2.8)

At each time step, the solution {U_j^n, V_j^n | j ∈ J_h′, 2 ≤ n ≤ N} can be calculated by solving the two uncoupled linear systems (2.1) and (2.2), and the solution {U_j^1, V_j^1 | j ∈ J_h′} can be obtained by solving the systems (2.5) and (2.6). It is easy to see that the numerical solutions satisfy the discrete mass conservation [28], i.e.,

    ∥U^n∥ ≡ ∥U^0∥,    ∥V^n∥ ≡ ∥V^0∥,    2 ≤ n ≤ N.                                              (2.9)

When n = 1, we also have

    ∥U^1∥ ≤ ∥U^0∥,    ∥V^1∥ ≤ ∥V^0∥.                                                             (2.10)
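To make the time-stepping concrete, here is a minimal sketch of one step of scheme (2.1), written in Python with SciPy sparse matrices for a 1D analogue (the 3D case differs only in the Laplacian matrix). This is an illustration under assumed names, not the authors' code; the V update is obtained from the same function with the roles of U and V interchanged, and the discrete mass ∥U^n∥ in (2.9) can be monitored as a sanity check.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def lap1d(M, h):
    """Sparse 1D discrete Laplacian Delta_h on the M-1 interior points (Dirichlet BCs)."""
    e = np.ones(M - 1)
    return sp.diags([e[:-1], -2.0 * e, e[:-1]], offsets=[-1, 0, 1], format="csc") / h**2

def cn_step(U1, U2, V1, V2, A, tau, beta):
    """One step of (2.1): returns U^n from U^{n-1}=U1, U^{n-2}=U2 (and V1=V^{n-1}, V2=V^{n-2})."""
    G = 0.5 * (3.0 * (np.abs(U1)**2 + beta * np.abs(V1)**2)
               - (np.abs(U2)**2 + beta * np.abs(V2)**2))        # extrapolation (2.4)
    B = A + sp.diags(G)                                          # Delta_h + G^{n-1/2}
    I = sp.identity(A.shape[0], format="csc")
    lhs = (1j / tau) * I + 0.5 * B                               # from i D_tau U^n + B U^{n-1/2} = 0
    rhs = ((1j / tau) * I - 0.5 * B) @ U1
    return spla.spsolve(lhs.tocsc(), rhs)
```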

A simpler semi-implicit scheme is defined by

    i D_τ U_j^n + ∆_h U_j^{n−1/2} + G̃^{n−1/2}(U, V) = 0,    j ∈ J_h′,    (2.11)
    i D_τ V_j^n + ∆_h V_j^{n−1/2} + G̃^{n−1/2}(V, U) = 0,    j ∈ J_h′,    (2.12)


for n = 2, 3, …, N, with

    i D_τ U_j^1 + ∆_h U_j^1 + ( |U_j^0|^2 + β|V_j^0|^2 ) U_j^0 = 0,    j ∈ J_h′,    (2.13)
    i D_τ V_j^1 + ∆_h V_j^1 + ( |V_j^0|^2 + β|U_j^0|^2 ) V_j^0 = 0,    j ∈ J_h′,    (2.14)

where

    G̃^{n−1/2}(U, V) = (1/2) [ 3 ( |U_j^{n−1}|^2 + β|V_j^{n−1}|^2 ) U_j^{n−1} − ( |U_j^{n−2}|^2 + β|V_j^{n−2}|^2 ) U_j^{n−2} ].    (2.15)

In this scheme, the nonlinear term is treated explicitly, so that the coefficient matrices are the same at all time steps.
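A practical consequence of the explicit treatment is that the left-hand-side matrix of (2.11)–(2.12) never changes, so a sparse factorization can be computed once and reused at every step. A hedged, self-contained sketch (SciPy-based; the 1D setting, sizes and names are assumptions, not the authors' implementation):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

M, h, tau, beta = 200, 0.1, 0.01, 1.0                                      # assumed illustrative values
e = np.ones(M - 1)
A = sp.diags([e[:-1], -2.0 * e, e[:-1]], [-1, 0, 1], format="csc") / h**2  # Delta_h (1D, Dirichlet)

I = sp.identity(M - 1, format="csc")
solve = spla.splu(((1j / tau) * I + 0.5 * A).tocsc()).solve                # factor LHS once, reuse every step

def cn_step_explicit(U1, U2, V1, V2):
    """One step of (2.11) for U; the V update is identical with U and V interchanged."""
    Gt = 0.5 * (3.0 * (np.abs(U1)**2 + beta * np.abs(V1)**2) * U1
                - (np.abs(U2)**2 + beta * np.abs(V2)**2) * U2)             # explicit term (2.15)
    return solve(((1j / tau) * I - 0.5 * A) @ U1 - Gt)
```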

Let (u, v) be the solution of the coupled nonlinear Schrödinger system (1.1)–(1.4), and denote u_j^n = u(x_j, t_n), v_j^n = v(x_j, t_n). We define the error functions by

    e_j^n = u_j^n − U_j^n,    η_j^n = v_j^n − V_j^n,    j ∈ J_h, 0 ≤ n ≤ N.

We present our main results in the following two theorems; the proofs are given in the next section.

Theorem 2.1. Suppose that the system (1.1)–(1.4) has a unique and smooth solution (u, v). Then the finite difference scheme defined in (2.1)–(2.2) and (2.5)–(2.6) has a unique solution and

    ∥e^n∥^2 + ∥η^n∥^2 ≤ C_0 (τ^2 + h^2)^2,    1 ≤ n ≤ N,    (2.16)

where C_0 is a positive constant independent of τ and h.

Theorem 2.2. Suppose that the system (1.1)–(1.4) has a unique and smooth solution (u, v). Then there exists a positive constant s_0 such that when h ≤ s_0, the finite difference scheme defined in (2.11)–(2.14) has a unique solution and

    ∥e^n∥^2 + ∥η^n∥^2 ≤ C_0 (τ^2 + h^2)^2,    1 ≤ n ≤ N,    (2.17)

where C_0 is a positive constant independent of τ and h.

3. Optimal error analysis

The existence and uniqueness of the solution of the first finite difference scheme (2.1)–(2.2) can be obtained from the mass conservation (2.9). The coefficient matrices of the second difference scheme (2.11)–(2.12) are strictly diagonally dominant at each time step, so existence and uniqueness follow immediately. In this section, we derive the error estimates (2.16)–(2.17) for both difference schemes by the induction method, with emphasis on the unconditionally optimal convergence rate. For simplicity of notation, we denote by C a generic positive constant and by ε a generic small positive constant, both independent of n, h, τ and C_0.

3.1. Preliminaries

Let (u, v) be the solution of the system (1.1)–(1.4). Then (u, v) satisfies the finite difference equations

    i D_τ u_j^n + ∆_h u_j^{n−1/2} + G^{n−1/2}(u, v) u_j^{n−1/2} = P_j^{n−1/2},    j ∈ J_h′, 2 ≤ n ≤ N,    (3.1)
    i D_τ v_j^n + ∆_h v_j^{n−1/2} + G^{n−1/2}(v, u) v_j^{n−1/2} = Q_j^{n−1/2},    j ∈ J_h′, 2 ≤ n ≤ N,    (3.2)

and

    i D_τ u_j^1 + ∆_h u_j^1 + ( |u_j^0|^2 + β|v_j^0|^2 ) u_j^1 = P_j^1,    j ∈ J_h′,                       (3.3)
    i D_τ v_j^1 + ∆_h v_j^1 + ( |v_j^0|^2 + β|u_j^0|^2 ) v_j^1 = Q_j^1,    j ∈ J_h′,                       (3.4)

where

    P_j^{n−1/2} = i ( D_τ u_j^n − u_t(x_j, t_{n−1/2}) ) + ( ∆_h u_j^{n−1/2} − ∆u(x_j, t_{n−1/2}) )
                  + G^{n−1/2}(u, v) u_j^{n−1/2} − ( |u(x_j, t_{n−1/2})|^2 + β|v(x_j, t_{n−1/2})|^2 ) u(x_j, t_{n−1/2}),    j ∈ J_h′, 2 ≤ n ≤ N,

    Q_j^{n−1/2} = i ( D_τ v_j^n − v_t(x_j, t_{n−1/2}) ) + ( ∆_h v_j^{n−1/2} − ∆v(x_j, t_{n−1/2}) )
                  + G^{n−1/2}(v, u) v_j^{n−1/2} − ( |v(x_j, t_{n−1/2})|^2 + β|u(x_j, t_{n−1/2})|^2 ) v(x_j, t_{n−1/2}),    j ∈ J_h′, 2 ≤ n ≤ N,

    P_j^1 = i ( D_τ u_j^1 − u_t(x_j, t_1) ) + ( ∆_h u_j^1 − ∆u(x_j, t_1) ) + ( |u_j^0|^2 + β|v_j^0|^2 ) u_j^1
            − ( |u(x_j, t_1)|^2 + β|v(x_j, t_1)|^2 ) u(x_j, t_1),    j ∈ J_h′,

    Q_j^1 = i ( D_τ v_j^1 − v_t(x_j, t_1) ) + ( ∆_h v_j^1 − ∆v(x_j, t_1) ) + ( |v_j^0|^2 + β|u_j^0|^2 ) v_j^1
            − ( |v(x_j, t_1)|^2 + β|u(x_j, t_1)|^2 ) v(x_j, t_1),    j ∈ J_h′,

define the truncation errors. By the Taylor formula, we can see that

    ∥P^{n−1/2}∥^2 ≤ C (τ^2 + h^2)^2,    ∥Q^{n−1/2}∥^2 ≤ C (τ^2 + h^2)^2,    2 ≤ n ≤ N,    (3.5)

and

    ∥P^1∥^2 ≤ C (τ + h^2)^2,    ∥Q^1∥^2 ≤ C (τ + h^2)^2.                                    (3.6)

Subtracting (2.1)–(2.2) and (2.5)–(2.6) from (3.1)–(3.2) and (3.3)–(3.4), respectively, we obtain the error equations

    i D_τ e_j^n + ∆_h e_j^{n−1/2} + ( G^{n−1/2}(u, v) u_j^{n−1/2} − G^{n−1/2}(U, V) U_j^{n−1/2} ) = P_j^{n−1/2},    j ∈ J_h′, 2 ≤ n ≤ N,    (3.7)
    i D_τ e_j^1 + ∆_h e_j^1 + ( |u_j^0|^2 + β|v_j^0|^2 ) e_j^1 = P_j^1,    j ∈ J_h′,                                                          (3.8)
    i D_τ η_j^n + ∆_h η_j^{n−1/2} + ( G^{n−1/2}(v, u) v_j^{n−1/2} − G^{n−1/2}(V, U) V_j^{n−1/2} ) = Q_j^{n−1/2},    j ∈ J_h′, 2 ≤ n ≤ N,    (3.9)
    i D_τ η_j^1 + ∆_h η_j^1 + ( |v_j^0|^2 + β|u_j^0|^2 ) η_j^1 = Q_j^1,    j ∈ J_h′,                                                          (3.10)
    e_j^0 = η_j^0 = 0,    j ∈ J_h,                                                                                                             (3.11)
    e_j^n |_{j ∈ ∂J_h} = η_j^n |_{j ∈ ∂J_h} = 0,    0 ≤ n ≤ N.                                                                                (3.12)

In our proof, the following summation-by-parts formula will be used frequently:

    h_∆ Σ_{j ∈ J_h′} U_j^n ∆_h V̄_j^n = − h_∆ Σ_{j ∈ J_h″} ∇_h U_j^n · ∇_h V̄_j^n.    (3.13)

We also collect several discrete Sobolev interpolation inequalities and inverse inequalities in the following lemma; the proof can be found in [32].

Lemma 3.1. Let {ω_j} be a mesh function defined on Ω_h. Then

    ∥ω∥_{L^p} ≤ K ∥ω∥_{L^2}^{3/p − 1/2} ( ∥∇_h ω∥_{L^2} + ∥ω∥_{L^2} )^{3/2 − 3/p},    2 ≤ p ≤ 6,    (3.14)

and

    ∥ω∥_{L^p} ≤ K h^{3/p − 3/2} ∥ω∥_{L^2},    2 ≤ p ≤ ∞,                                            (3.15)

where K is a constant independent of h and the mesh function ω.

Lemma 3.2. Let {ω_j^n} be a mesh function on Ω_{hτ}. Then

    ∥ω^m∥ ≤ 2 Σ_{k=1}^{m} ∥ω^{k−1/2}∥ + ∥ω^0∥,    1 ≤ m ≤ N.                                         (3.16)

Proof. Since ω^k = 2ω^{k−1/2} − ω^{k−1}, the triangle inequality gives ∥ω^k∥ ≤ 2∥ω^{k−1/2}∥ + ∥ω^{k−1}∥; iterating from k = m down to k = 1 yields (3.16). □


Lemma 3.3 (Gronwall's inequality). Suppose that the discrete function {U^n | n = 0, 1, 2, …, N; Nτ = T} satisfies the inequality

    U^n ≤ A + τ Σ_{m=0}^{n} B_m U^m,    (3.17)

where A and B_m (m = 0, 1, 2, …, N) are nonnegative constants. Then

    max_{0 ≤ n ≤ N} |U^n| ≤ A exp( 2τ Σ_{m=0}^{N} B_m ),    (3.18)

provided τ is sufficiently small, such that τ · (max_{0 ≤ m ≤ N} B_m) ≤ 1/2.

Lemma 3.4. Under the assumptions of Theorem 2.1, the difference scheme (2.5)–(2.8) has a unique solution and

    ∥e^1∥^2 + ∥η^1∥^2 + τ ( ∥∇_h e^{1/2}∥^2 + ∥∇_h η^{1/2}∥^2 ) ≤ C (τ^2 + h^2)^2.    (3.19)

Proof. We multiply (3.8) by ē_j^1 and sum over j ∈ J_h′ to get

    i ∥e^1∥^2 / τ − ∥∇_h e^1∥^2 + h_∆ Σ_{j ∈ J_h′} ( |u_j^0|^2 + β|v_j^0|^2 ) |e_j^1|^2 = (P^1, e^1).    (3.20)

The imaginary part of the above equation implies

    (1/τ) ∥e^1∥^2 = Im(P^1, e^1),

and from the real part we get

    ∥∇_h e^{1/2}∥^2 ≤ max_{x ∈ Ω} { |u_0(x)|^2 + β|v_0(x)|^2 } ∥e^1∥^2 + |Re(P^1, e^1)|.

By (3.6), we can see that

    ∥e^1∥^2 ≤ C (τ^2 + h^2)^2,    τ ∥∇_h e^{1/2}∥^2 ≤ C (τ^2 + h^2)^2.    (3.21)

Similarly, we can prove that

    ∥η^1∥^2 ≤ C (τ^2 + h^2)^2,    τ ∥∇_h η^{1/2}∥^2 ≤ C (τ^2 + h^2)^2.    (3.22)

The proof is complete. □

3.2. The proof of Theorem 2.1

In this subsection, we prove the slightly stronger result

    ∥e^m∥^2 + ∥η^m∥^2 + τ ( ∥∇_h e^{m−1/2}∥^2 + ∥∇_h η^{m−1/2}∥^2 ) ≤ C̄_0 (τ^2 + h^2)^2    (3.23)

for 1 ≤ m ≤ N. By Lemma 3.4, (3.23) holds for m = 1. By mathematical induction, we assume that (3.23) holds for the first n − 1 steps, i.e., for m ≤ n − 1, and we need to find a constant C̄_0, independent of n, τ and h, such that (3.23) also holds for m = n.

We multiply (3.7) by ē_j^{n−1/2} and sum over j ∈ J_h′ to arrive at

    i (∥e^n∥^2 − ∥e^{n−1}∥^2) / (2τ) − (1/τ) Im(e^n, e^{n−1}) − ∥∇_h e^{n−1/2}∥^2 + R_1^{n−1/2} = (P^{n−1/2}, e^{n−1/2}),    (3.24)

where

    R_1^{n−1/2} = h_∆ Σ_{j ∈ J_h′} ( G^{n−1/2}(u, v) u_j^{n−1/2} − G^{n−1/2}(U, V) U_j^{n−1/2} ) ē_j^{n−1/2}.


Again, taking the imaginary part of the above equation gives

    (1/(2τ)) ( ∥e^n∥^2 − ∥e^{n−1}∥^2 ) + Im( R_1^{n−1/2} ) = Im( P^{n−1/2}, e^{n−1/2} ).    (3.25)

We rewrite R_1^{n−1/2} as

    R_1^{n−1/2} = h_∆ Σ_{j ∈ J_h′} G^{n−1/2}(u, v) |e_j^{n−1/2}|^2
                + h_∆ Σ_{j ∈ J_h′} ( G^{n−1/2}(u, v) − G^{n−1/2}(U, V) ) ( u_j^{n−1/2} − e_j^{n−1/2} ) ē_j^{n−1/2}
                := R_{11}^{n−1/2} + R_{12}^{n−1/2}.    (3.26)

By noting (2.4), it is easy to see that

    Im( R_{11}^{n−1/2} ) = 0,

and

    |R_{12}^{n−1/2}| ≤ h_∆ Σ_{j ∈ J_h′} (1/2) [ 2 ( 3|u_j^{n−1}||e_j^{n−1}| + |u_j^{n−2}||e_j^{n−2}| ) + 2β ( 3|v_j^{n−1}||η_j^{n−1}| + |v_j^{n−2}||η_j^{n−2}| )
                        + 3|e_j^{n−1}|^2 + |e_j^{n−2}|^2 + β ( 3|η_j^{n−1}|^2 + |η_j^{n−2}|^2 ) ] ( |u_j^{n−1/2}| + |e_j^{n−1/2}| ) |e_j^{n−1/2}|
        ≤ C ( ∥e^{n−1}∥^2 + ∥e^{n−2}∥^2 + ∥η^{n−1}∥^2 + ∥η^{n−2}∥^2 ) + C ∥e^{n−1/2}∥^2
          + C ( ∥e^{n−1}∥_{L^3}^2 + ∥e^{n−2}∥_{L^3}^2 + ∥η^{n−1}∥_{L^3}^2 + ∥η^{n−2}∥_{L^3}^2 ) ∥e^{n−1/2}∥_{L^6}^2.    (3.27)

With the above inequalities and (3.5), (3.25) reduces to

    (1/τ) ( ∥e^n∥^2 − ∥e^{n−1}∥^2 ) ≤ C ( ∥e^{n−1}∥^2 + ∥e^{n−2}∥^2 + ∥η^{n−1}∥^2 + ∥η^{n−2}∥^2 ) + C ∥e^{n−1/2}∥^2
        + C ( ∥e^{n−1}∥_{L^3}^2 + ∥e^{n−2}∥_{L^3}^2 + ∥η^{n−1}∥_{L^3}^2 + ∥η^{n−2}∥_{L^3}^2 ) ∥e^{n−1/2}∥_{L^6}^2 + C (τ^2 + h^2)^2.    (3.28)

Similarly, multiplying (3.9) by η̄_j^{n−1/2} and summing over j ∈ J_h′ gives

    i (∥η^n∥^2 − ∥η^{n−1}∥^2) / (2τ) − (1/τ) Im(η^n, η^{n−1}) − ∥∇_h η^{n−1/2}∥^2 + R_2^{n−1/2} = (Q^{n−1/2}, η^{n−1/2}),    (3.29)

where

    R_2^{n−1/2} = h_∆ Σ_{j ∈ J_h′} ( G^{n−1/2}(v, u) v_j^{n−1/2} − G^{n−1/2}(V, U) V_j^{n−1/2} ) η̄_j^{n−1/2}.

Taking the imaginary part of Eq. (3.29), we obtain

    (1/(2τ)) ( ∥η^n∥^2 − ∥η^{n−1}∥^2 ) + Im( R_2^{n−1/2} ) = Im( Q^{n−1/2}, η^{n−1/2} ).    (3.30)

By noting (2.4),

    R_2^{n−1/2} = h_∆ Σ_{j ∈ J_h′} G^{n−1/2}(v, u) |η_j^{n−1/2}|^2
                + h_∆ Σ_{j ∈ J_h′} ( G^{n−1/2}(v, u) − G^{n−1/2}(V, U) ) ( v_j^{n−1/2} − η_j^{n−1/2} ) η̄_j^{n−1/2}
                := R_{21}^{n−1/2} + R_{22}^{n−1/2},

from which we can see that

    Im( R_{21}^{n−1/2} ) = 0,


and

    |R_{22}^{n−1/2}| ≤ h_∆ Σ_{j ∈ J_h′} (1/2) [ 2 ( 3|v_j^{n−1}||η_j^{n−1}| + |v_j^{n−2}||η_j^{n−2}| ) + 2β ( 3|u_j^{n−1}||e_j^{n−1}| + |u_j^{n−2}||e_j^{n−2}| )
                        + 3|η_j^{n−1}|^2 + |η_j^{n−2}|^2 + β ( 3|e_j^{n−1}|^2 + |e_j^{n−2}|^2 ) ] ( |v_j^{n−1/2}| + |η_j^{n−1/2}| ) |η_j^{n−1/2}|
        ≤ C ( ∥η^{n−1}∥^2 + ∥η^{n−2}∥^2 + ∥e^{n−1}∥^2 + ∥e^{n−2}∥^2 ) + C ∥η^{n−1/2}∥^2
          + C ( ∥η^{n−1}∥_{L^3}^2 + ∥η^{n−2}∥_{L^3}^2 + ∥e^{n−1}∥_{L^3}^2 + ∥e^{n−2}∥_{L^3}^2 ) ∥η^{n−1/2}∥_{L^6}^2.

Then, (3.30) reduces to

    (1/τ) ( ∥η^n∥^2 − ∥η^{n−1}∥^2 ) ≤ C ( ∥η^{n−1}∥^2 + ∥η^{n−2}∥^2 + ∥e^{n−1}∥^2 + ∥e^{n−2}∥^2 ) + C ∥η^{n−1/2}∥^2
        + C ( ∥η^{n−1}∥_{L^3}^2 + ∥η^{n−2}∥_{L^3}^2 + ∥e^{n−1}∥_{L^3}^2 + ∥e^{n−2}∥_{L^3}^2 ) ∥η^{n−1/2}∥_{L^6}^2 + C (τ^2 + h^2)^2.    (3.31)

In many previous works, the error functions were bounded in the L∞ norm by the classical inverse inequality, which, however, leads to a time-step restriction. Here we proceed differently. Taking the real part of Eq. (3.24), we have

    ∥∇_h e^{n−1/2}∥^2 + (1/τ) Im(e^n, e^{n−1}) − Re( R_1^{n−1/2} ) = −Re( P^{n−1/2}, e^{n−1/2} ).    (3.32)

By (3.26), we have further

    |R_{11}^{n−1/2}| ≤ C ∥e^{n−1/2}∥^2,

which, together with (3.27), shows that (3.32) reduces to

    ∥∇_h e^{n−1/2}∥^2 ≤ C ( ∥e^{n−1}∥^2 + ∥e^{n−2}∥^2 + ∥η^{n−1}∥^2 + ∥η^{n−2}∥^2 ) + C ∥e^{n−1/2}∥^2
        + C ( ∥e^{n−1}∥_{L^3}^2 + ∥e^{n−2}∥_{L^3}^2 + ∥η^{n−1}∥_{L^3}^2 + ∥η^{n−2}∥_{L^3}^2 ) ∥e^{n−1/2}∥_{L^6}^2
        + (1/(2τ)) ( ∥e^n∥^2 + ∥e^{n−1}∥^2 ) + C (τ^2 + h^2)^2.    (3.33)

Applying the same approach to (3.29), we get

    ∥∇_h η^{n−1/2}∥^2 ≤ C ( ∥e^{n−1}∥^2 + ∥e^{n−2}∥^2 + ∥η^{n−1}∥^2 + ∥η^{n−2}∥^2 ) + C ∥η^{n−1/2}∥^2
        + C ( ∥e^{n−1}∥_{L^3}^2 + ∥e^{n−2}∥_{L^3}^2 + ∥η^{n−1}∥_{L^3}^2 + ∥η^{n−2}∥_{L^3}^2 ) ∥η^{n−1/2}∥_{L^6}^2
        + (1/(2τ)) ( ∥η^n∥^2 + ∥η^{n−1}∥^2 ) + C (τ^2 + h^2)^2.    (3.34)

To control the nonlinear terms in the above inequalities, we now prove the estimates

    ( ∥e^{n−1}∥_{L^3}^2 + ∥e^{n−2}∥_{L^3}^2 + ∥η^{n−1}∥_{L^3}^2 + ∥η^{n−2}∥_{L^3}^2 ) ∥e^{n−1/2}∥_{L^6}^2
        ≤ C ( ∥e^{n−1}∥^2 + ∥e^{n−2}∥^2 + ∥η^{n−1}∥^2 + ∥η^{n−2}∥^2 ) + C ∥e^{n−1/2}∥^2 + C (τ^2 + h^2)^2,    (3.35)

and

    ( ∥η^{n−1}∥_{L^3}^2 + ∥η^{n−2}∥_{L^3}^2 + ∥e^{n−1}∥_{L^3}^2 + ∥e^{n−2}∥_{L^3}^2 ) ∥η^{n−1/2}∥_{L^6}^2
        ≤ C ( ∥η^{n−1}∥^2 + ∥η^{n−2}∥^2 + ∥e^{n−1}∥^2 + ∥e^{n−2}∥^2 ) + C ∥η^{n−1/2}∥^2 + C (τ^2 + h^2)^2,    (3.36)

by distinguishing two cases.


First, we consider the case τ ≤ h. We use Lemma 3.1 and (3.23) (for m ≤ n − 1) to get

    ∥e^{n−1}∥_{L^3} ≤ C h^{−1/2} ∥e^{n−1}∥ ≤ C C̄_0^{1/2} h^{3/2},    ∥e^{n−1/2}∥_{L^6} ≤ C h^{−1} ∥e^{n−1/2}∥,
    ∥η^{n−1}∥_{L^3} ≤ C h^{−1/2} ∥η^{n−1}∥ ≤ C C̄_0^{1/2} h^{3/2},    ∥η^{n−1/2}∥_{L^6} ≤ C h^{−1} ∥η^{n−1/2}∥.    (3.37)

When C C̄_0 h ≤ 1,

    ( ∥e^{n−1}∥_{L^3}^2 + ∥e^{n−2}∥_{L^3}^2 + ∥η^{n−1}∥_{L^3}^2 + ∥η^{n−2}∥_{L^3}^2 ) ∥e^{n−1/2}∥_{L^6}^2 ≤ C ∥e^{n−1/2}∥^2,

and

    ( ∥η^{n−1}∥_{L^3}^2 + ∥η^{n−2}∥_{L^3}^2 + ∥e^{n−1}∥_{L^3}^2 + ∥e^{n−2}∥_{L^3}^2 ) ∥η^{n−1/2}∥_{L^6}^2 ≤ C ∥η^{n−1/2}∥^2.

Secondly, we consider the case τ ≥ h. By Lemma 3.1, we get

    ∥e^m∥_{L^6}^2 ≤ C ( ∥∇_h e^m∥ + ∥e^m∥ )^2,

which, together with (3.23) and Lemma 3.2, further implies

    ∥e^m∥_{L^3}^2 ≤ ∥e^m∥ ∥e^m∥_{L^6} ≤ C ∥e^m∥ ( ∥∇_h e^m∥ + ∥e^m∥ )
                 ≤ C ∥e^m∥ ( (2T/τ) max_{1 ≤ k ≤ m} ∥∇_h e^{k−1/2}∥ + ∥e^m∥ )
                 ≤ C C̄_0 τ^{5/2},

for 1 ≤ m ≤ n − 1. Similarly, we obtain

    ∥η^m∥_{L^3}^2 ≤ C C̄_0 τ^{5/2},    1 ≤ m ≤ n − 1.

It follows that

    ( ∥e^{n−1}∥_{L^3}^2 + ∥e^{n−2}∥_{L^3}^2 + ∥η^{n−1}∥_{L^3}^2 + ∥η^{n−2}∥_{L^3}^2 ) ∥e^{n−1/2}∥_{L^6}^2 ≤ ε τ ( ∥∇_h e^{n−1/2}∥^2 + ∥e^{n−1/2}∥^2 ),

and

    ( ∥η^{n−1}∥_{L^3}^2 + ∥η^{n−2}∥_{L^3}^2 + ∥e^{n−1}∥_{L^3}^2 + ∥e^{n−2}∥_{L^3}^2 ) ∥η^{n−1/2}∥_{L^6}^2 ≤ ε τ ( ∥∇_h η^{n−1/2}∥^2 + ∥η^{n−1/2}∥^2 ).

By noting (3.33) and (3.34), the estimates (3.35) and (3.36) also hold in this case, provided C C̄_0 τ^{3/2} ≤ ε. With (3.35) and (3.36), the inequalities (3.28) and (3.31) reduce to

    (1/τ) ( ∥e^n∥^2 − ∥e^{n−1}∥^2 ) ≤ C ( ∥e^n∥^2 + ∥e^{n−1}∥^2 + ∥e^{n−2}∥^2 + ∥η^{n−1}∥^2 + ∥η^{n−2}∥^2 ) + C (τ^2 + h^2)^2,

and

    (1/τ) ( ∥η^n∥^2 − ∥η^{n−1}∥^2 ) ≤ C ( ∥η^n∥^2 + ∥η^{n−1}∥^2 + ∥η^{n−2}∥^2 + ∥e^{n−1}∥^2 + ∥e^{n−2}∥^2 ) + C (τ^2 + h^2)^2,

respectively. Adding the above two inequalities and applying Lemma 3.3, we obtain

    ∥e^n∥^2 + ∥η^n∥^2 ≤ C T e^{2CT} (τ^2 + h^2)^2,    (3.38)

where we assume τ ≤ 1/(2C). Moreover, from (3.33) and (3.34), we see that

    τ ( ∥∇_h e^{n−1/2}∥^2 + ∥∇_h η^{n−1/2}∥^2 ) ≤ C T e^{2CT} (τ^2 + h^2)^2.    (3.39)

Thus, (3.23) holds for m = n if we take C̄_0 = 2CTe^{2CT}, and the induction is complete.

Up to now, we have proved that (2.16) holds with C_0 ≥ C̄_0 when τ, h ≤ s_0 for some constant s_0 > 0. For τ and h with τ + h > s_0, the finite difference system at each time step has a unique solution by (2.9)–(2.10), and

    ∥U^n − u^n∥ + ∥V^n − v^n∥ ≤ 2∥u^0∥ + 2∥v^0∥ ≤ √C_0 (h^2 + τ^2)    (3.40)

when C_0 ≥ 16 (∥u^0∥ + ∥v^0∥)^2 / s_0^4. The proof is complete. □


3.3. The proof of Theorem 2.2

In this subsection, we prove that the inequality (2.17) holds for the scheme (2.11)–(2.14), again by mathematical induction. As before, we need to establish the estimate (3.23) for 1 ≤ m ≤ N. From (1.1)–(1.4) and (2.11)–(2.14), we obtain the error equations

    i D_τ e_j^n + ∆_h e_j^{n−1/2} + ( G̃^{n−1/2}(u, v) − G̃^{n−1/2}(U, V) ) = P̃_j^{n−1/2},    j ∈ J_h′, 2 ≤ n ≤ N,    (3.41)
    i D_τ e_j^1 + ∆_h e_j^1 = P̃_j^1,    j ∈ J_h′,                                                                      (3.42)
    i D_τ η_j^n + ∆_h η_j^{n−1/2} + ( G̃^{n−1/2}(v, u) − G̃^{n−1/2}(V, U) ) = Q̃_j^{n−1/2},    j ∈ J_h′, 2 ≤ n ≤ N,    (3.43)
    i D_τ η_j^1 + ∆_h η_j^1 = Q̃_j^1,    j ∈ J_h′,                                                                      (3.44)
    e_j^0 = η_j^0 = 0,    j ∈ J_h,                                                                                       (3.45)
    e_j^n |_{j ∈ ∂J_h} = η_j^n |_{j ∈ ∂J_h} = 0,    0 ≤ n ≤ N,                                                          (3.46)

where P̃_j^{n−1/2}, Q̃_j^{n−1/2}, P̃_j^1 and Q̃_j^1 denote the corresponding truncation errors.

By the Taylor formula, the truncation errors satisfy

    ∥P̃^{n−1/2}∥^2 ≤ C (τ^2 + h^2)^2,    ∥Q̃^{n−1/2}∥^2 ≤ C (τ^2 + h^2)^2,    2 ≤ n ≤ N,    (3.47)

and

    ∥P̃^1∥^2 ≤ C (τ + h^2)^2,    ∥Q̃^1∥^2 ≤ C (τ + h^2)^2.                                    (3.48)

From (3.42) and (3.44), we can see that

    ∥e^1∥^2 + ∥η^1∥^2 + τ ( ∥∇_h e^{1/2}∥^2 + ∥∇_h η^{1/2}∥^2 ) ≤ C (τ^2 + h^2)^2.            (3.49)

The above inequality clearly shows that (3.23) holds for m = 1. By mathematical induction, we assume that the estimate (3.23) holds for m ≤ n − 1; we need to prove that (3.23) also holds for m = n.

We multiply (3.41) by ē_j^{n−1/2} and sum over j ∈ J_h′ to arrive at

    i (∥e^n∥^2 − ∥e^{n−1}∥^2) / (2τ) − (1/τ) Im(e^n, e^{n−1}) − ∥∇_h e^{n−1/2}∥^2 + R̃_1^{n−1/2} = (P̃^{n−1/2}, e^{n−1/2}),    (3.50)

where

    R̃_1^{n−1/2} = h_∆ Σ_{j ∈ J_h′} ( G̃^{n−1/2}(u, v) − G̃^{n−1/2}(U, V) ) ē_j^{n−1/2}.

Taking the imaginary part of the above equation gives

    (1/τ) ( ∥e^n∥^2 − ∥e^{n−1}∥^2 ) ≤ C ( ∥e^{n−1}∥^2 + ∥e^{n−2}∥^2 + ∥η^{n−1}∥^2 + ∥η^{n−2}∥^2 ) + C ∥e^{n−1/2}∥^2
        + C ( ∥e^{n−1}∥_{L^4}^3 + ∥e^{n−2}∥_{L^4}^3 + ∥η^{n−1}∥_{L^4}^3 + ∥η^{n−2}∥_{L^4}^3 ) ∥e^{n−1/2}∥_{L^6} + C (τ^2 + h^2)^2.    (3.51)

Similarly, multiplying (3.43) by η̄_j^{n−1/2} and summing over j ∈ J_h′ leads to

    i (∥η^n∥^2 − ∥η^{n−1}∥^2) / (2τ) − (1/τ) Im(η^n, η^{n−1}) − ∥∇_h η^{n−1/2}∥^2 + R̃_2^{n−1/2} = (Q̃^{n−1/2}, η^{n−1/2}),    (3.52)

where

    R̃_2^{n−1/2} = h_∆ Σ_{j ∈ J_h′} ( G̃^{n−1/2}(v, u) − G̃^{n−1/2}(V, U) ) η̄_j^{n−1/2}.

From the imaginary part of (3.52), we get

    (1/τ) ( ∥η^n∥^2 − ∥η^{n−1}∥^2 ) ≤ C ( ∥η^{n−1}∥^2 + ∥η^{n−2}∥^2 + ∥e^{n−1}∥^2 + ∥e^{n−2}∥^2 ) + C ∥η^{n−1/2}∥^2
        + C ( ∥η^{n−1}∥_{L^4}^3 + ∥η^{n−2}∥_{L^4}^3 + ∥e^{n−1}∥_{L^4}^3 + ∥e^{n−2}∥_{L^4}^3 ) ∥η^{n−1/2}∥_{L^6} + C (τ^2 + h^2)^2.    (3.53)


Moreover, taking the real parts of (3.50) and (3.52), respectively, we have

    ∥∇_h e^{n−1/2}∥^2 ≤ C ( ∥e^{n−1}∥^2 + ∥e^{n−2}∥^2 + ∥η^{n−1}∥^2 + ∥η^{n−2}∥^2 ) + C ∥e^{n−1/2}∥^2
        + C ( ∥e^{n−1}∥_{L^4}^3 + ∥e^{n−2}∥_{L^4}^3 + ∥η^{n−1}∥_{L^4}^3 + ∥η^{n−2}∥_{L^4}^3 ) ∥e^{n−1/2}∥_{L^6}
        + (1/(2τ)) ( ∥e^n∥^2 + ∥e^{n−1}∥^2 ) + C (τ^2 + h^2)^2,    (3.54)

and

    ∥∇_h η^{n−1/2}∥^2 ≤ C ( ∥e^{n−1}∥^2 + ∥e^{n−2}∥^2 + ∥η^{n−1}∥^2 + ∥η^{n−2}∥^2 ) + C ∥η^{n−1/2}∥^2
        + C ( ∥e^{n−1}∥_{L^4}^3 + ∥e^{n−2}∥_{L^4}^3 + ∥η^{n−1}∥_{L^4}^3 + ∥η^{n−2}∥_{L^4}^3 ) ∥η^{n−1/2}∥_{L^6}
        + (1/(2τ)) ( ∥η^n∥^2 + ∥η^{n−1}∥^2 ) + C (τ^2 + h^2)^2.    (3.55)

With the same approach as used in Section 3.2 for (3.35), we can prove the estimates

    ( ∥e^{n−1}∥_{L^4}^3 + ∥e^{n−2}∥_{L^4}^3 + ∥η^{n−1}∥_{L^4}^3 + ∥η^{n−2}∥_{L^4}^3 ) ∥e^{n−1/2}∥_{L^6}
        ≤ C ( ∥e^{n−1}∥^2 + ∥e^{n−2}∥^2 + ∥η^{n−1}∥^2 + ∥η^{n−2}∥^2 ) + C ∥e^{n−1/2}∥^2 + C (τ^2 + h^2)^2,    (3.56)

and

    ( ∥η^{n−1}∥_{L^4}^3 + ∥η^{n−2}∥_{L^4}^3 + ∥e^{n−1}∥_{L^4}^3 + ∥e^{n−2}∥_{L^4}^3 ) ∥η^{n−1/2}∥_{L^6}
        ≤ C ( ∥η^{n−1}∥^2 + ∥η^{n−2}∥^2 + ∥e^{n−1}∥^2 + ∥e^{n−2}∥^2 ) + C ∥η^{n−1/2}∥^2 + C (τ^2 + h^2)^2,    (3.57)

when τ, h ≤ min{ (1/(C C̄_0))^{4/3}, (ε/(C C̄_0))^{8/3} }. Therefore, (3.51) and (3.53) reduce to

    (1/τ) ( ∥e^n∥^2 − ∥e^{n−1}∥^2 ) ≤ C ( ∥e^n∥^2 + ∥e^{n−1}∥^2 + ∥e^{n−2}∥^2 + ∥η^{n−1}∥^2 + ∥η^{n−2}∥^2 ) + C (τ^2 + h^2)^2,

and

    (1/τ) ( ∥η^n∥^2 − ∥η^{n−1}∥^2 ) ≤ C ( ∥η^n∥^2 + ∥η^{n−1}∥^2 + ∥η^{n−2}∥^2 + ∥e^{n−1}∥^2 + ∥e^{n−2}∥^2 ) + C (τ^2 + h^2)^2.

Adding the above two inequalities together and using Lemma 3.3 again, we get

    ∥e^n∥^2 + ∥η^n∥^2 ≤ C T e^{2CT} (τ^2 + h^2)^2,    (3.58)

where we have assumed that τ ≤ 1/(2C). Then, adding (3.54) and (3.55) shows

    τ ( ∥∇_h e^{n−1/2}∥^2 + ∥∇_h η^{n−1/2}∥^2 ) ≤ C T e^{2CT} (τ^2 + h^2)^2.    (3.59)

Thus, (3.23) holds for the scheme (2.11)–(2.14) for m = n if we take C̄_0 = 2CTe^{2CT}, and the induction is complete.

We have proved that there exists a small constant s_0 such that (2.17) holds when C_0 ≥ C̄_0 and τ, h ≤ s_0. For τ > s_0, at each time step of the scheme (2.11)–(2.14) one needs to solve two uncoupled linear systems. Since the coefficient matrices of these two systems are diagonally dominant, the system (2.11)–(2.14) has a unique solution and we have

    ∥U^n∥_{L∞} ≤ C ∥G̃^{n−1/2}(U, V)∥_{L∞} + ∥U^{n−1}∥_{L∞},

and

    ∥V^n∥_{L∞} ≤ C ∥G̃^{n−1/2}(V, U)∥_{L∞} + ∥V^{n−1}∥_{L∞}.

Moreover, by noting (2.15),

    E_n := ∥U^n∥_{L∞} + ∥V^n∥_{L∞}
         ≤ C ( ∥G̃^{n−1/2}(U, V)∥_{L∞} + ∥G̃^{n−1/2}(V, U)∥_{L∞} ) + ( ∥U^{n−1}∥_{L∞} + ∥V^{n−1}∥_{L∞} )
         ≤ C ( E_{n−1} + E_{n−2} ) + C ( E_{n−1}^3 + E_{n−2}^3 ).


By noting the fact that N ≤ T/s_0 + 1, the above inequality implies

    ∥U^n∥_{L∞} + ∥V^n∥_{L∞} ≤ C_{s_0},    (3.60)

where C_{s_0} is a positive constant depending on s_0. Thus (2.17) holds in this case when we take

    C_0 ≥ ( C_{s_0} + ∥u∥_{L∞} + ∥v∥_{L∞} )^2 / s_0^4.

The proof of Theorem 2.2 is complete. □

4. Numerical results

In this section, we present three numerical examples to confirm our theoretical analysis. For simplicity, the numerical simulations are performed only with the first linearized Crank–Nicolson scheme, defined in (2.1)–(2.2). All computations are performed in Matlab.

Example 4.1. First we consider the coupled nonlinear Schrödinger equations in one-dimensional space, defined as in (1.1)–(1.4) with Ω = [−40, 40] and

    u_0(x) = √2 sech(x + 10) e^{iγx/4},     (4.1)
    v_0(x) = √2 sech(x − 10) e^{−iγx/4}.    (4.2)
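A minimal Python sketch of how this experiment could be set up with the first scheme (2.1)–(2.2) and the initial Euler step (2.5)–(2.6); it reuses lap1d and cn_step from the sketch in Section 2. The authors' computations were done in Matlab, and the final time T and all names below are assumptions made for illustration only.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Reuses lap1d() and cn_step() from the sketch in Section 2; T is an assumed final time.
L, h, tau, T = 40.0, 0.2, 0.1, 10.0
beta, gamma = 3.0, 1.2                                   # parameters of Fig. 1
M = int(round(2 * L / h))                                # number of subintervals of [-40, 40]
x = np.linspace(-L + h, L - h, M - 1)                    # interior grid points
A = lap1d(M, h)

U = [np.sqrt(2.0) / np.cosh(x + 10.0) * np.exp(1j * gamma * x / 4.0)]    # u_0, Eq. (4.1)
V = [np.sqrt(2.0) / np.cosh(x - 10.0) * np.exp(-1j * gamma * x / 4.0)]   # v_0, Eq. (4.2)

# Initial linearized Euler step (2.5)-(2.6).
I = sp.identity(M - 1, format="csc")
for W, W0, Z0 in ((U, U[0], V[0]), (V, V[0], U[0])):
    lhs = (1j / tau) * I + A + sp.diags(np.abs(W0)**2 + beta * np.abs(Z0)**2)
    W.append(spla.spsolve(lhs.tocsc(), (1j / tau) * W0))

# Linearized Crank-Nicolson steps (2.1)-(2.2) up to t = T.
for n in range(2, int(round(T / tau)) + 1):
    Un = cn_step(U[-1], U[-2], V[-1], V[-2], A, tau, beta)
    Vn = cn_step(V[-1], V[-2], U[-1], U[-2], A, tau, beta)
    U.append(Un)
    V.append(Vn)
# abs(U[n]) and abs(V[n]) correspond to the |U^n| and |V^n| plotted in Figs. 1 and 2.
```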

We apply the proposed Crank–Nicolson scheme to this system with h = 0.2 and τ = 0.1. The numerical results |U^n| and |V^n| are shown in Figs. 1 and 2 to illustrate the interaction of two solitons: in Fig. 1, the two solitons develop into three solitons after the collision, while in Fig. 2, the two solitons fuse after the collision.

[Fig. 1. Numerical results with β = 3, γ = 1.2.]
[Fig. 2. Numerical results with β = 0.3, γ = 0.4.]

Example 4.2. Next we consider the coupled nonlinear Schrödinger equations in two-dimensional space

    i u_t + ∆u + (|u|^2 + β|v|^2) u = f_1,    x ∈ Ω,    (4.3)
    i v_t + ∆v + (|v|^2 + β|u|^2) v = f_2,    x ∈ Ω,    (4.4)
    u(x, 0) = u_0(x),  v(x, 0) = v_0(x),      x ∈ Ω,    (4.5)
    u|_{x ∈ ∂Ω} = 0,   v|_{x ∈ ∂Ω} = 0,                 (4.6)

where β = 1, Ω = [0, 1] × [0, 1], and (f_1, f_2) and (u_0, v_0) are chosen corresponding to the exact solutions given by

    u(x, t) = 15 e^{it} ( 1 + (1/2) t^2 ) (1 − x_1)(1 − x_2) sin x_1 sin x_2,    (4.7)
    v(x, t) = e^{(i + 1/2) t} sin(π x_1) sin(π x_2).                             (4.8)


Table 1. Numerical results of Example 4.2 with β = 1 and τ = h.

    ∥e^n∥        h = 1/40      h = 1/80      h = 1/160     Order(α)
    t = 0.5      2.3767e−04    6.8067e−05    1.7683e−05    1.874
    t = 1.0      4.1345e−04    1.0415e−04    2.6180e−05    1.991
    t = 1.5      3.0662e−04    5.5136e−05    1.2600e−05    2.303
    t = 2.0      3.3445e−03    9.5886e−04    2.4906e−04    1.874

    ∥η^n∥        h = 1/40      h = 1/80      h = 1/160     Order(α)
    t = 0.5      6.2413e−04    1.6473e−04    4.1818e−05    1.950
    t = 1.0      5.3471e−04    1.2263e−04    2.9968e−05    2.079
    t = 1.5      1.0614e−03    2.4376e−04    5.9221e−05    2.082
    t = 2.0      4.4253e−03    1.2063e−03    3.0930e−04    1.920
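As a quick consistency check, the Order(α) column of Table 1 is consistent with computing the slope between the coarsest and finest meshes, α = log2( err_{h=1/40} / err_{h=1/160} ) / 2, since h is halved twice. A short Python check using the ∥e^n∥ values copied from Table 1 (an illustration only):

```python
import math

# L2 errors of e^n from Table 1 at the coarsest and finest mesh sizes.
err_h40  = {0.5: 2.3767e-04, 1.0: 4.1345e-04, 1.5: 3.0662e-04, 2.0: 3.3445e-03}
err_h160 = {0.5: 1.7683e-05, 1.0: 2.6180e-05, 1.5: 1.2600e-05, 2.0: 2.4906e-04}

for t in err_h40:
    alpha = math.log2(err_h40[t] / err_h160[t]) / 2      # h halved twice between 1/40 and 1/160
    print(f"t = {t}: alpha = {alpha:.3f}")                # ~1.874, 1.991, 2.303, 1.874 (cf. Table 1)
```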

Since the Crank–Nicolson scheme is second-order accurate in both the temporal and spatial directions, we take τ = h so that the error bound in the L2 norm is proportional to h^2. Table 1 reports the errors and the order of convergence (h^α) of the proposed method with h = 1/40, 1/80, 1/160 at several times t. We can see clearly that the numerical errors in the L2 norm are proportional to h^2, which confirms the theoretical analysis of the previous section. We also test the scheme with h = 1/80 and the large time steps τ = h, 5h, 10h, and report the errors in Table 2. The numerical results show that the proposed scheme is stable for large time steps.

Example 4.3. Finally, we solve the coupled nonlinear Schrödinger equations (4.3)–(4.6) in three-dimensional space, where β = 1.5 and Ω = [0, 1] × [0, 1] × [0, 1]. Here (f_1, f_2) and (u_0, v_0) are chosen corresponding to the exact solutions given by

    u(x, t) = 60 e^{it} ( 1 + (1/2) t^2 ) (1 − x_1)(1 − x_2)(1 − x_3) sin x_1 sin x_2 sin x_3,    (4.9)
    v(x, t) = e^{(i + 1/2) t} sin(π x_1) sin(π x_2) sin(π x_3).                                    (4.10)

Previous analyses require stronger time-step restrictions for equations in three-dimensional space. To confirm our analysis, we test the scheme with h = 1/80 and the large time steps τ = h, 5h, 10h, and report the errors in Table 3. The numerical results show that the scheme is unconditionally convergent.


Table 2. Numerical results of Example 4.2 with β = 1 and h = 1/80.

    ∥e^n∥        τ = h         τ = 5h        τ = 10h
    t = 0.5      6.8067e−05    7.5752e−04    1.5203e−03
    t = 1.0      1.0415e−04    3.1864e−04    2.6741e−03
    t = 1.5      5.5136e−05    1.7196e−03    7.5234e−03
    t = 2.0      9.5886e−04    2.2461e−02    6.1880e−02

    ∥η^n∥        τ = h         τ = 5h        τ = 10h
    t = 0.5      1.6473e−04    1.7036e−04    1.1269e−03
    t = 1.0      1.2263e−04    5.3041e−04    1.5304e−03
    t = 1.5      2.4376e−04    1.8598e−03    6.3792e−03
    t = 2.0      1.2063e−03    2.2228e−02    6.1057e−02

Table 3. Numerical results of Example 4.3 with β = 1.5 and h = 1/80.

    ∥e^n∥        τ = h         τ = 5h        τ = 10h
    t = 0.5      3.8903e−05    3.6437e−04    9.2588e−04
    t = 1.0      6.7215e−05    2.4382e−04    3.1475e−04
    t = 1.5      5.5041e−05    3.3105e−04    2.5328e−03
    t = 2.0      1.9276e−04    2.7208e−03    9.3270e−03

    ∥η^n∥        τ = h         τ = 5h        τ = 10h
    t = 0.5      8.2791e−05    1.5886e−04    2.4117e−04
    t = 1.0      9.5557e−05    2.8200e−04    1.1241e−03
    t = 1.5      7.0137e−05    8.2271e−04    1.7951e−03
    t = 2.0      2.5047e−04    2.5564e−03    9.4453e−03

5. Conclusion

We have presented optimal L2 error estimates of linearized Crank–Nicolson schemes for the coupled nonlinear Schrödinger equations without any restriction on the time step, while all previous works require certain time-step conditions. Numerical results illustrate our theoretical analysis. Since our analysis is based on the classical energy estimate method, the analysis presented in this paper can be extended easily to Galerkin finite element methods to obtain optimal L2 error estimates, as in Theorem 2.1, with a continuous norm. Our analysis can also be extended to some other time-discretization schemes and to many other equations [25,29,30,33–38] for which previous analyses were carried out under certain time-step conditions.

Acknowledgments

The work of the authors was supported in part by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. CityU 11302514).

References

[1] D.J. Griffiths, Introduction to Quantum Mechanics, Pearson Prentice Hall, 2005.
[2] A. Hasegawa, Optical Solitons in Fibers, Springer-Verlag, Berlin, 1989.
[3] C.R. Menyuk, Stability of solitons in birefringent optical fibers. I: Equal propagation amplitudes, Opt. Lett. 12 (1987) 614–616.
[4] L.C.F. Ferreira, E.J. Villamizar-Roa, Self-similarity and asymptotic stability for coupled nonlinear Schrödinger equations in high dimensions, Physica D 241 (2012) 534–542.
[5] M. Ohta, Stability of solitary waves for coupled nonlinear Schrödinger equations, Nonlinear Anal. 26 (1996) 933–939.
[6] A.C. Yew, Stability analysis of multipulses in nonlinearly-coupled Schrödinger equations, Indiana Univ. Math. J. 49 (2000) 1079–1124.
[7] A.G. Bratsos, A modified numerical scheme for the cubic Schrödinger equation, Numer. Methods Partial Differential Equations 27 (2011) 608–620.
[8] M.S. Ismail, T.R. Taha, Numerical simulation of coupled nonlinear Schrödinger equation, Math. Comput. Simulation 56 (2001) 547–562.
[9] M.S. Ismail, T.R. Taha, A linearly implicit conservative scheme for the coupled nonlinear Schrödinger equation, Math. Comput. Simulation 74 (2007) 302–311.
[10] F. Ivanauskas, M. Radžiūnas, On convergence and stability of the explicit difference method for solution of nonlinear Schrödinger equations, SIAM J. Numer. Anal. 36 (1999) 1466–1481.
[11] H. Liao, Z. Sun, H. Shi, Error estimate of fourth-order compact scheme for linear Schrödinger equations, SIAM J. Numer. Anal. 47 (2010) 4381–4401.
[12] B. Reichel, S. Leble, On convergence and stability of a numerical scheme of coupled nonlinear Schrödinger equations, Comput. Math. Appl. 55 (2008) 745–759.
[13] J.M. Sanz-Serna, Methods for the numerical solution of nonlinear Schrödinger equation, Math. Comp. 43 (1984) 21–27.
[14] L. Wu, Dufort–Frankel-type methods for linear and nonlinear Schrödinger equations, SIAM J. Numer. Anal. 33 (1996) 1526–1533.

[15] Y. Tourigny, Optimal H1 estimates for two time-discrete Galerkin approximations of a nonlinear Schrödinger equation, IMA J. Numer. Anal. 11 (1991) 509–523.
[16] G.E. Zouraris, On the convergence of a linear two-step finite element method for the nonlinear Schrödinger equation, M2AN Math. Model. Numer. Anal. 35 (2001) 389–405.
[17] X. Antoine, C. Besse, P. Klein, Absorbing boundary conditions for general nonlinear Schrödinger equations, SIAM J. Sci. Comput. 33 (2011) 1008–1033.
[18] A. Borzi, E. Decker, Analysis of a leap-frog pseudospectral scheme for the Schrödinger equation, J. Comput. Appl. Math. 193 (2006) 65–88.
[19] H.S. Chen, C.S. Chien, Multilevel spectral-Galerkin and continuation methods for nonlinear Schrödinger equations, Multiscale Model. Simul. 8 (2009–2010) 370–392.
[20] M. Dehghan, A. Taleei, Numerical solution of nonlinear Schrödinger equation by using time-space pseudo-spectral method, Numer. Methods Partial Differential Equations 26 (2010) 979–992.


[21] B.K. Li, G. Fairweather, B. Bialecki, Discrete-time orthogonal spline collocation methods for Schrödinger equations in two space variables, SIAM J. Numer. Anal. 35 (1998) 453–477.
[22] Z. Sun, X. Wu, The stability and convergence of a difference scheme for the Schrödinger equation on an infinite domain by using artificial boundary conditions, J. Comput. Phys. 214 (2006) 209–223.
[23] T.F. Chan, L. Shen, Stability analysis of difference schemes for variable coefficient Schrödinger equations, SIAM J. Numer. Anal. 24 (1987) 336–349.
[24] W. Dai, An unconditionally stable three-level explicit difference scheme for the Schrödinger equation with a variable coefficient, SIAM J. Numer. Anal. 29 (1992) 174–181.
[25] Q. Chang, B. Guo, H. Jiang, Finite difference method for generalized Zakharov equations, Math. Comp. 64 (1995) 537–553.
[26] Z. Sun, D. Zhao, On the L∞ convergence of a difference scheme for coupled nonlinear Schrödinger equations, Comput. Math. Appl. 59 (2010) 3286–3300.
[27] Q. Chang, E. Jia, W. Sun, Difference schemes for solving the generalized nonlinear Schrödinger equation, J. Comput. Phys. 148 (1999) 397–415.
[28] T. Wang, B. Guo, L. Zhang, New conservative difference schemes for a coupled nonlinear Schrödinger system, Appl. Math. Comput. 217 (2010) 1604–1619.
[29] T. Wang, B. Guo, A robust semi-explicit difference scheme for the Kuramoto–Tsuzuki equation, J. Comput. Appl. Math. 233 (2009) 878–888.
[30] W. Bao, Y. Cai, Uniform error estimates of finite difference methods for the nonlinear Schrödinger equation with wave operator.
[31] T. Dupont, G. Fairweather, J.P. Johnson, Three-level Galerkin methods for parabolic equations, SIAM J. Numer. Anal. 11 (1974) 392–410.
[32] Y. Zhou, Applications of Discrete Functional Analysis to the Finite Difference Method, International Academic Publishers, Beijing, 1991.
[33] R.T. Glassey, Convergence of an energy-preserving scheme for the Zakharov equations in one space dimension, Math. Comp. 58 (1992) 83–102.
[34] Y. He, Y. Lin, Samuel S.P. Shen, W. Sun, R. Tait, Finite element approximation for the viscoelastic fluid motion problem, J. Comput. Appl. Math. 155 (2003) 201–222.
[35] Y. Hou, B. Li, W. Sun, Error estimates of splitting Galerkin methods for heat and sweat transport in textile materials, SIAM J. Numer. Anal. 51 (2013) 88–111.
[36] J. Li, H. Ma, W. Sun, Error analysis for solving the Korteweg–de Vries equation by a Legendre pseudo-spectral method, Numer. Methods Partial Differential Equations 16 (2000) 513–534.
[37] W. Sun, Z. Sun, Finite difference methods for a nonlinear and strongly coupled heat and moisture transport system in textile materials, Numer. Math. 120 (2012) 153–187.
[38] S. Zhu, G. Yuan, W. Sun, Convergence and stability of explicit/implicit schemes for parabolic equations with discontinuous coefficients, Int. J. Numer. Anal. Model. 1 (2004) 131–145.