A windowing waveform relaxation method for time-fractional differential equations


Commun Nonlinear Sci Numer Simulat 30 (2016) 139–150


Xiao-Li Ding a,∗, Yao-Lin Jiang b

a Department of Mathematics, Xi’an Polytechnic University, Xi’an, Shaanxi 710048, China
b Department of Mathematical Sciences, Xi’an Jiaotong University, Xi’an, Shaanxi 710049, China

Article history:
Received 3 October 2014
Revised 12 May 2015
Accepted 17 June 2015
Available online 26 June 2015

Keywords: Time-fractional differential equations; Waveform relaxation; Windowing technique

Abstract

This paper presents a new windowing waveform relaxation (WR) method for time-fractional differential equations. First, unlike the classical case, the proposed windowing method uses the history part of the solution on each window. Second, for the first time a multi-domain finite difference scheme is combined with a windowing method for time-fractional differential equations, which makes the numerical scheme very efficient. Third, the paper provides an effective estimate of the window length. Numerical results are given to illustrate the theoretical analysis. © 2015 Elsevier B.V. All rights reserved.

1. Introduction

Fractional calculus has been used widely to deal with problems in fluid and continuum mechanics [1,2], viscoelastic and viscoplastic flow [3] and anomalous diffusion [4–6]. The main advantage of fractional derivatives is that they are more suitable than classical integer-order derivatives for describing the memory and hereditary properties of various materials and processes. In recent years, numerical solutions of fractional differential equations have been investigated extensively (see, for example, [3,7–11]).

The waveform relaxation (WR) method is an iterative method for solving ordinary differential equations. It was originally proposed for the simulation of large circuits in [12]. The method has two main advantages: it can decouple a given large differential system into a set of weakly coupled small subsystems, and it can also decouple a complicated differential system into a set of simplified subsystems. The WR method has been successfully applied to ordinary differential equations [12–15], differential-algebraic equations [16], functional differential equations [17] and fractional differential equations [18–20].

Although the WR method for fractional differential equations is eventually convergent, the convergence rate is very slow during the iterations on long finite intervals. One reason for the slow convergence is that the solution of a fractional differential equation at a given time depends on the whole history of the solution at previous times. For the design of numerical schemes, an immediate consequence is that the entire past trajectory of the numerical solution must be carried forward and used in the current time step. This affects both the storage and the cost of the numerical method, and both may increase substantially over time. Therefore, it is natural to accelerate the WR method in terms of both storage and computation.
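The decoupling idea behind WR can be made concrete with a small sketch. The following Jacobi-type WR iteration for a linear system dx/dt = Ax freezes the off-diagonal coupling at the previous iterate and advances each component independently; the backward-Euler inner solver, the 2×2 test matrix, and all grid sizes are our own illustrative assumptions, not taken from the paper.

```python
import math

def jacobi_wr(A, x0, T, sweeps, steps):
    """Jacobi waveform relaxation for dx/dt = A x on [0, T] (sketch)."""
    n = len(x0)
    dt = T / steps
    # waveforms on the time grid: x[k][j] = component j at t_k
    x = [list(x0) for _ in range(steps + 1)]      # initial guess: constant
    for _ in range(sweeps):
        x_new = [list(x0)] + [[0.0] * n for _ in range(steps)]
        for k in range(steps):
            for j in range(n):
                # off-diagonal coupling frozen at the previous iterate
                coup = sum(A[j][p] * x[k + 1][p] for p in range(n) if p != j)
                # backward Euler for the decoupled scalar equation
                x_new[k + 1][j] = (x_new[k][j] + dt * coup) / (1.0 - dt * A[j][j])
        x = x_new
    return x
```

On a finite interval this iteration converges superlinearly to the underlying time-discrete solution, which is what makes the later question of choosing window lengths meaningful.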
✩ This work was supported by the Science and Technology Planning Project (2014JQ1041) of Shaanxi Province and the Scientific Research Program funded by the Shaanxi Provincial Education Department (14JK1300). The work was also partially supported by the Natural Science Foundation of China (11371287). ∗ Corresponding author. Tel.: +86 15829680246. E-mail address: [email protected] (X.-L. Ding).

http://dx.doi.org/10.1016/j.cnsns.2015.06.017 1007-5704/© 2015 Elsevier B.V. All rights reserved.


In this paper, we develop the windowing technique following the idea of the windowing WR method proposed by Leimkuhler and Ruehli for RC circuit simulation [15]. The basic idea of windowing WR is that the interval of integration is split into a series of subintervals, called windows, with iteration taking place on each window successively. In recent years there has been considerable research on the analysis of the windowing WR method (for example, see [21,22]). However, there is currently no windowing WR method for fractional differential equations. This paper is an effort to meet the high computational demand of time-fractional differential equations (TFDEs). The original windowing WR method for classic ODEs must be modified to account for the memory effect of the TFDEs, and we propose to include the history part of the solution in the iteration step. On the one hand, the proposed windowing WR method retains the attractive property of the original windowing WR method, namely that the convergence of the WR method can be accelerated by a suitable choice of window. On the other hand, it has a particular advantage for TFDEs in that the numerical solution on each subinterval is updated using the history values after k iterations, so the memory restriction is relaxed. In this paper, we provide a convergence analysis of the new method and give an effective estimate of the window length. To implement the windowing method, one also needs to be aware of the challenge posed by the serial time-stepping scheme for the TFDEs. We adopt a multi-domain finite difference method: the whole time domain is decomposed into subintervals, on each of which a finite difference scheme is established.

The paper is organized as follows. In the next section, a brief background on the windowing WR method and the TFDEs is provided.
In Section 3, we describe the new windowing WR method for the TFDEs and derive an effective estimate of the window length. Numerical results are shown in Section 4.

2. Preliminaries

In this section, we provide background on the windowing WR method for classic ODEs and introduce some concepts from fractional calculus.

2.1. The windowing WR method

Consider the following ODE:



$$\frac{dx(t)}{dt}=f(x(t),t),\quad t\in\Omega=[0,T],\qquad x(0)=x_0,\tag{2.1}$$

where f (x(t ), t ) : Rn × [0, T ] → Rn is a given function, x0 is a known initial value, and x(t) is to be computed. In the following, we describe the basic settings of the original windowing WR method (cf. [21]), aiming to solve (2.1). First, the time domain is decomposed into N elements:

$$0=T_0<T_1<\cdots<T_N=T,\qquad \Omega_i=[T_i,T_{i+1}],$$

where each subinterval $\Omega_i=[T_i,T_{i+1}]$ is called a window, and the window lengths are $H_i=T_{i+1}-T_i$ for $i=0,1,\ldots,N-1$. Next, we introduce a splitting function $F(\cdot,\cdot,t):\mathbb{R}^n\times\mathbb{R}^n\times[0,T]\to\mathbb{R}^n$ satisfying

$$F(x(t),x(t),t)=f(x(t),t)\quad\text{for any } x(t)\in\mathbb{R}^n.$$
The splitting function is chosen so as to decouple the original system into easily solvable independent subsystems; as an example, one can use the classic Jacobi or Gauss–Seidel splitting for $F$. The computation of the WR solutions starts at $T_0$ and proceeds from one window to the next until $T_N=T$ is reached. That is,

$$\begin{cases}\dfrac{dx_i^{(k+1)}(t)}{dt}=F\big(x_i^{(k+1)}(t),x_i^{(k)}(t),t\big), & t\in[T_i,T_{i+1}],\\[4pt] x_i^{(k+1)}(T_i)=x_{i-1}^{(k_{i-1})}(T_i), & k=0,1,\ldots,k_i-1,\ i=0,1,\ldots,N-1,\end{cases}\tag{2.2}$$

where $x_{-1}^{(k_{-1})}(0)=x_0$ and the initial guesses satisfy $x_i^{(0)}(t)\equiv x_{i-1}^{(k_{i-1})}(T_i)$ on $[T_i,T_{i+1}]$ for all $i$. By the existing convergence analysis (cf. [21]), $\lim_{k_i\to+\infty}x_i^{(k_i)}(t)=x_i(t)$ for all $i$, where $x_i(t)$ denotes the exact solution of Eq. (2.1) on the subinterval $\Omega_i$. The optimal window length is of practical importance for the implementation of the windowing WR method.

2.2. Fractional differential equations

Let $t\in(a,b)$. The fractional integral of order $\alpha$ of a given function $x(t)$ is defined as

$$({}_aI_t^{\alpha}x)(t)=\frac{1}{\Gamma(\alpha)}\int_a^t (t-\tau)^{\alpha-1}x(\tau)\,d\tau,\quad a<t\le b.$$
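Numerically, the fractional integral above can be approximated by product integration: freeze $x$ at the midpoint of each subinterval and integrate the weakly singular kernel exactly. The helper below is our own illustrative sketch, not a scheme from the paper; for $x(\tau)=\tau$ it can be checked against the closed form $({}_0I_t^{\alpha}\tau)(t)=t^{1+\alpha}/\Gamma(2+\alpha)$.

```python
import math

def frac_integral(x, alpha, a, t, n=2000):
    """Product-integration approximation of (aIt^alpha x)(t) (sketch):
    x is frozen at the midpoint of each cell and the singular kernel
    (t - tau)^(alpha-1) is integrated exactly on each cell."""
    h = (t - a) / n
    total = 0.0
    for j in range(n):
        left, right = a + j * h, a + (j + 1) * h
        # exact integral of (t - tau)^(alpha-1) over [left, right]
        w = ((t - left) ** alpha - (t - right) ** alpha) / alpha
        total += w * x(left + 0.5 * h)
    return total / math.gamma(alpha)
```

Because the kernel is integrated in closed form, the singularity at $\tau=t$ causes no difficulty, which is the same device used by L1-type schemes later in the paper.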


Let $x(t)\in C^m([a,b])$. The Caputo fractional derivative of order $\alpha$ ($m-1<\alpha\le m$) is defined as
$$({}_aD_t^{\alpha}x)(t)=\frac{1}{\Gamma(m-\alpha)}\int_a^t (t-\tau)^{m-\alpha-1}x^{(m)}(\tau)\,d\tau,\quad a<t\le b,$$
which is preferred over alternative definitions because it accommodates standard initial conditions. Here $x^{(m)}(t)$ stands for the integer-order derivative of order $m$ of $x(t)$; in particular, when $m=1$ we write $x^{(1)}(t)=x'(t)$. For the concepts of fractional calculus in detail, one can refer to [24,25].

In the investigation of fractional differential equations, Mittag–Leffler functions play a key role. The three-parameter Mittag–Leffler function is defined as

$$E_{\alpha,\beta}^{\gamma}(z)=\sum_{n=0}^{+\infty}\frac{(\gamma)_n}{\Gamma(n\alpha+\beta)}\frac{z^n}{n!},$$
where $\alpha,\beta,\gamma\in\mathbb{C}$, $\Re(\alpha)>0$, $\Re(\beta)>0$, $\Re(\gamma)>0$, and
$$(\gamma)_0=1,\qquad (\gamma)_n=\frac{\Gamma(\gamma+n)}{\Gamma(\gamma)}=\gamma(\gamma+1)\cdots(\gamma+n-1),\quad n=1,2,\ldots.$$

In particular, when $\gamma=1$, $E_{\alpha,\beta}^{1}(z)$ becomes the two-parameter Mittag–Leffler function $E_{\alpha,\beta}(z)$, i.e.,
$$E_{\alpha,\beta}^{1}(z)=E_{\alpha,\beta}(z)=\sum_{n=0}^{+\infty}\frac{z^n}{\Gamma(n\alpha+\beta)},$$
and when $\beta=\gamma=1$ it becomes the one-parameter Mittag–Leffler function $E_{\alpha}(z)$, i.e.,
$$E_{\alpha,1}^{1}(z)=E_{\alpha}(z)=\sum_{n=0}^{+\infty}\frac{z^n}{\Gamma(n\alpha+1)}.$$
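For moderate $|z|$, the series above can be evaluated by direct truncation. The sketch below is our own helper (not from the paper): it accumulates the Pochhammer coefficient $(\gamma)_n/n!$ incrementally and stops when terms become negligible or when $\Gamma$ would overflow. For $\gamma=\alpha=\beta=1$ it reproduces $e^z$, and $E_{2,1}(z^2)=\cosh z$.

```python
import math

def mittag_leffler(alpha, beta, z, gamma=1.0, terms=200):
    """Three-parameter Mittag-Leffler function by series truncation (sketch).
    Reliable only for moderate |z|."""
    total = 0.0
    coef = 1.0            # (gamma)_n / n!, starting from (gamma)_0 / 0! = 1
    power = 1.0           # z^n
    for n in range(terms):
        arg = n * alpha + beta
        if arg > 170.0:   # math.gamma overflows past ~171
            break
        term = coef * power / math.gamma(arg)
        total += term
        if abs(term) < 1e-16 * max(1.0, abs(total)):
            break
        coef *= (gamma + n) / (n + 1)   # (gamma)_{n+1}/(n+1)! from (gamma)_n/n!
        power *= z
    return total
```

For large arguments or $\alpha$ close to 0 one would need more careful algorithms, but plain truncation suffices for the window-length estimates considered later.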

3. Windowing WR method for TFDEs

In this section, we first describe how to generalize the original windowing WR method to TFDEs and then give an effective estimate of the window length.

3.1. Description of the method

We consider the windowing WR method for the following TFDE:



$$({}_0D_t^{\alpha}x)(t)=f(x(t),t),\quad t\in[0,T],\qquad x(0)=x_0,\tag{3.1}$$

where $0<\alpha<1$, ${}_0D_t^{\alpha}$ is the fractional differential operator in the Caputo sense, $x_0\in\mathbb{R}^n$ is an initial value, $T$ is finite, $f(x(t),t):\mathbb{R}^n\times[0,T]\to\mathbb{R}^n$ is a given function, and $x(t)$ is to be computed. Without loss of generality, the time domain is decomposed into $N$ elements:

$$0=T_0<T_1<\cdots<T_N=T,\qquad \Omega_i=[T_i,T_{i+1}].$$
The window lengths are $H_i=T_{i+1}-T_i$ ($i=0,1,\ldots,N-1$), and the maximum window length is $H=\max_{0\le i\le N-1}\{H_i\}$.

To update $x_i^{(k+1)}$, the relaxation scheme requires the entire history part of the solution as input. Hence, the windowing WR method for (3.1) is given as
$$\begin{cases}({}_{T_i}D_t^{\alpha}x_i^{(k+1)})(t)+\displaystyle\sum_{j=1}^{i}\big({}_{T_{j-1}}\widetilde{D}_{T_j}^{\alpha}x_j^{(k_{j-1})}\big)(t)=F\big(x_i^{(k+1)}(t),x_i^{(k)}(t),t\big), & t\in[T_i,T_{i+1}],\\ x_i^{(k+1)}(T_i)=x_{i-1}^{(k_{i-1})}(T_i), & k=0,1,\ldots,k_i-1,\ i=0,1,\ldots,N-1,\end{cases}\tag{3.2}$$
where $x_{-1}^{(k_{-1})}(0)=x_0$, the initial guesses satisfy $x_i^{(0)}(t)\equiv x_{i-1}^{(k_{i-1})}(T_i)$ on $[T_i,T_{i+1}]$ for all $i$, and the operator ${}_a\widetilde{D}_b^{\alpha}$ ($0<\alpha<1$) is defined as
$$({}_a\widetilde{D}_b^{\alpha}x)(t)=\frac{1}{\Gamma(1-\alpha)}\int_a^b (t-\tau)^{-\alpha}x'(\tau)\,d\tau,\quad t>b.\tag{3.3}$$

Like the windowing WR method for classic ODEs in (2.2), the new windowing WR method in (3.2) goes forward in a sequential way. Unlike the classic case, however, the new windowing WR method uses the history part of the solution at each step (Fig. 1). This is natural since the solution at the current time depends on solutions at all previous times in the TFDE case.
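To make the window-by-window sweep with a frozen history concrete, the sketch below applies the idea of (3.2) to the scalar test problem $({}_0D_t^{\alpha}x)(t)=\lambda x(t)$ through the equivalent integral form (Lemma 3.1 below): the history contribution from finished windows is frozen, and only the current window is iterated. The product-integration discretization, grid sizes, and sweep counts are our own illustrative choices, not the paper's scheme.

```python
import math

def windowed_wr(alpha, lam, x0, T, windows, m, sweeps):
    """Windowed waveform iteration (sketch) for (0Dt^alpha x)(t) = lam*x(t),
    via the integral form x(t) = x0 + lam/Gamma(alpha) * int (t-s)^(a-1) x(s) ds."""
    n = windows * m
    h = T / n
    ga = math.gamma(alpha)
    X = [x0] * (n + 1)                       # accepted solution values

    def kernel_sum(t, vals, lo, hi):
        # int over [lo*h, hi*h] of (t - tau)^(alpha-1) x(tau) d tau,
        # x piecewise linear, singular kernel integrated exactly per cell
        s = 0.0
        for j in range(lo, hi):
            a, b = j * h, (j + 1) * h
            wgt = ((t - a) ** alpha - (t - b) ** alpha) / alpha
            s += wgt * 0.5 * (vals[j] + vals[j + 1])
        return s

    for i in range(windows):                 # windows processed sequentially
        base = i * m
        cur = list(X)
        for g in range(base + 1, base + m + 1):
            cur[g] = cur[base]               # initial guess: constant x(T_i)
        for _ in range(sweeps):              # sweeps with the history frozen
            new = list(cur)
            for g in range(base + 1, base + m + 1):
                t = g * h
                hist = kernel_sum(t, X, 0, base)       # frozen history part
                curr = kernel_sum(t, cur, base, g)     # previous sweep
                new[g] = x0 + lam * (hist + curr) / ga
            cur = new
        X[: base + m + 1] = cur[: base + m + 1]        # accept the window
    return X
```

For $\alpha=1/2$ and $\lambda=-1$ the exact solution is $E_{1/2}(-\sqrt{t})=e^{t}\,\mathrm{erfc}(\sqrt{t})$, which the sketch reproduces to a few digits; the point is the loop structure, where the history integral is evaluated once from accepted values while the current window is relaxed.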


Fig. 1. At each iteration, both methods go sequentially element by element. In the fractional case, the history part is also needed.

3.2. Analysis of the method

In this section, we analyze the convergence of (3.2) and give an error bound for the windowing WR method. From the error bound, a suggested window length can be estimated theoretically.

3.2.1. Nonlinear case

An important tool is the generalized Gronwall inequality for integral equations.

Lemma 3.1 ([24]). Let $0<\alpha<1$ and let $f:\mathbb{R}^n\times(0,T]\to\mathbb{R}^n$ be a function such that $f(x,t)\in C([0,T])$ for any $x\in\mathbb{R}^n$. If $x(t)\in C([0,T])$, then $x(t)$ satisfies Eq. (3.1) if and only if $x(t)$ satisfies the integral equation
$$x(t)=x(0)+\frac{1}{\Gamma(\alpha)}\int_0^t (t-\tau)^{\alpha-1}f(x(\tau),\tau)\,d\tau,\quad 0<t\le T.$$

Lemma 3.2. Let $0<\alpha<1$, $C,L_1,L_2\in\mathbb{R}^+$, and let $g(t),h(t):[a,b]\to\mathbb{R}^+$ be continuous functions. Moreover, assume that $g(t)$ satisfies the inequality
$$g(t)\le C+\frac{L_1}{\Gamma(\alpha)}\int_a^t (t-\tau)^{\alpha-1}g(\tau)\,d\tau+\frac{L_2}{\Gamma(\alpha)}\int_a^t (t-\tau)^{\alpha-1}h(\tau)\,d\tau\tag{3.4}$$
for all $t\in[a,b]$. Then
$$g(t)\le C\,E_{\alpha}\big(L_1(t-a)^{\alpha}\big)+L_2\int_a^t (t-\tau)^{\alpha-1}E_{\alpha,\alpha}\big(L_1(t-\tau)^{\alpha}\big)h(\tau)\,d\tau.\tag{3.5}$$

Proof. Let $\epsilon>0$ and construct the auxiliary problem
$$({}_aD_t^{\alpha}\phi)(t)-L_1\phi(t)=L_2h(t),\quad t\in[a,b],\qquad \phi(a)=C+\epsilon.\tag{3.6}$$
By Example 4.9 in [24], the solution of (3.6) is
$$\phi(t)=(C+\epsilon)E_{\alpha}\big(L_1(t-a)^{\alpha}\big)+L_2\int_a^t (t-\tau)^{\alpha-1}E_{\alpha,\alpha}\big(L_1(t-\tau)^{\alpha}\big)h(\tau)\,d\tau.\tag{3.7}$$
On the other hand, by Lemma 3.1, Eq. (3.6) is equivalent to the integral equation
$$\phi(t)=C+\epsilon+\frac{L_1}{\Gamma(\alpha)}\int_a^t (t-\tau)^{\alpha-1}\phi(\tau)\,d\tau+\frac{L_2}{\Gamma(\alpha)}\int_a^t (t-\tau)^{\alpha-1}h(\tau)\,d\tau.\tag{3.8}$$

So, (3.7) is also the solution of (3.8). By (3.4), we have
$$g(a)\le C<C+\epsilon=\phi(a).$$
By the continuity of $g(t)$ and $\phi(t)$, there exists $\eta>0$ such that $g(t)<\phi(t)$ for any $t\in[a,a+\eta]$. Next we prove by contradiction that $g(t)<\phi(t)$ holds on all of $[a,b]$. Suppose that $t_0\in[a,b]$ is the smallest number such that $g(t_0)=\phi(t_0)$; then $g(t)<\phi(t)$ for all $a\le t<t_0$. Combining this with (3.4), we have
$$\begin{aligned}g(t_0)&\le C+\frac{L_1}{\Gamma(\alpha)}\int_a^{t_0}(t_0-\tau)^{\alpha-1}g(\tau)\,d\tau+\frac{L_2}{\Gamma(\alpha)}\int_a^{t_0}(t_0-\tau)^{\alpha-1}h(\tau)\,d\tau\\ &<C+\epsilon+\frac{L_1}{\Gamma(\alpha)}\int_a^{t_0}(t_0-\tau)^{\alpha-1}\phi(\tau)\,d\tau+\frac{L_2}{\Gamma(\alpha)}\int_a^{t_0}(t_0-\tau)^{\alpha-1}h(\tau)\,d\tau\\ &=\phi(t_0),\end{aligned}$$
which contradicts $g(t_0)=\phi(t_0)$. In other words, $g(t)<\phi(t)$ for all $t\in[a,b]$. Since $\epsilon>0$ is arbitrary, relation (3.5) holds. The proof is completed. □

Based on the above lemmas, we give the error bound on each window. For the splitting function $F$, we make the following assumption throughout this paper.


Assumption. The splitting function $F$ satisfies the Lipschitz condition
$$\|F(x_1,y_1,t)-F(x_2,y_2,t)\|\le L_1\|x_1-x_2\|+L_2\|y_1-y_2\|,\quad x_i,y_i\in\mathbb{R}^n,\ i=1,2,\tag{3.9}$$
where $L_1$ and $L_2$ are two positive constants. For convenience, we introduce the following notation, used throughout the paper:
$$\begin{aligned}
&e_i^{(0)}(t)=x_{i-1}^{(k_{i-1})}(T_i)-x_i(t),\qquad e_i^{(k)}(t)=x_i^{(k)}(t)-x_i(t),\\
&\Delta_i=\big\|x_i^{(k_i)}(T_{i+1})-x_i(T_{i+1})\big\|,\qquad \big\|e_i^{(0)}\big\|_{[T_i,T_{i+1}]}=\max_{t\in[T_i,T_{i+1}]}\big\|e_i^{(0)}(t)\big\|,\\
&\Theta_j^{(k_{j-1})}(t)=\big({}_{T_{j-1}}\widetilde{D}_{T_j}^{\alpha}\big(x_j^{(k_{j-1})}-x_j\big)\big)(t),\\
&\Upsilon_{(k,i)}(t)=\sum_{j=1}^{i}\sum_{l=0}^{k}L_2^{\,l}\int_{T_i}^{t}(t-\tau)^{(l+1)\alpha-1}E_{\alpha,(l+1)\alpha}^{l+1}\big(L_1(t-\tau)^{\alpha}\big)\big\|\Theta_j^{(k_{j-1})}(\tau)\big\|\,d\tau,\\
&P_i(t)=L_2^{\,k_i}\int_{T_i}^{t}(t-\tau)^{k_i\alpha-1}E_{\alpha,k_i\alpha}^{k_i}\big(L_1(t-\tau)^{\alpha}\big)\,d\tau,\quad i=0,1,\ldots,N-1.
\end{aligned}$$

Theorem 3.1. For $i\in\{0,1,\ldots,N-1\}$, we have
$$\big\|e_i^{(k_i)}(t)\big\|\le \Delta_{i-1}E_{\alpha}\big(L(t-T_i)^{\alpha}\big)+\Upsilon_{(k_i-1,i)}(t)+P_i(t)\big\|e_i^{(0)}\big\|_{[T_i,T_{i+1}]},\tag{3.10}$$
where $\Delta_{-1}=0$ and $L=L_1+L_2$.

Proof. Noting that the second sum on the left-hand side of Eq. (3.2) can be treated as a known function, we rewrite Eq. (3.2) as

$$\begin{cases}({}_{T_i}D_t^{\alpha}x_i^{(k+1)})(t)+\displaystyle\sum_{j=1}^{i}y_j^{(k_{j-1})}(t)=F\big(x_i^{(k+1)}(t),x_i^{(k)}(t),t\big), & t\in[T_i,T_{i+1}],\\ x_i^{(k+1)}(T_i)=x_{i-1}^{(k_{i-1})}(T_i), & k=0,1,\ldots,k_i-1,\ i=0,1,\ldots,N-1,\end{cases}\tag{3.11}$$
where
$$y_j^{(k_{j-1})}(t)=\big({}_{T_{j-1}}\widetilde{D}_{T_j}^{\alpha}x_j^{(k_{j-1})}\big)(t),\quad j=1,\ldots,i.$$

On the other hand, the exact solution $x_i(t)$ on $[T_i,T_{i+1}]$ ($i=0,1,\ldots,N-1$) satisfies
$$\begin{cases}({}_{T_i}D_t^{\alpha}x_i)(t)+\displaystyle\sum_{j=1}^{i}y_j(t)=F\big(x_i(t),x_i(t),t\big), & t\in[T_i,T_{i+1}],\\ x_i(T_i)=x_{i-1}(T_i), & i=0,1,\ldots,N-1,\end{cases}\tag{3.12}$$
where
$$y_j(t)=\big({}_{T_{j-1}}\widetilde{D}_{T_j}^{\alpha}x_j\big)(t),\quad j=1,\ldots,i,$$
so that $\Theta_j^{(k_{j-1})}(t)=y_j^{(k_{j-1})}(t)-y_j(t)$. By Lemma 3.1, Eqs. (3.11) and (3.12) are equivalent to the integral equations, respectively,

$$\begin{cases}x_i^{(k+1)}(t)=x_i^{(k+1)}(T_i)+\dfrac{1}{\Gamma(\alpha)}\displaystyle\int_{T_i}^{t}(t-\tau)^{\alpha-1}F\big(x_i^{(k+1)}(\tau),x_i^{(k)}(\tau),\tau\big)\,d\tau\\ \qquad\qquad\qquad -\dfrac{1}{\Gamma(\alpha)}\displaystyle\sum_{j=1}^{i}\int_{T_i}^{t}(t-\tau)^{\alpha-1}y_j^{(k_{j-1})}(\tau)\,d\tau,\quad t\in[T_i,T_{i+1}],\\ x_i^{(k+1)}(T_i)=x_{i-1}^{(k_{i-1})}(T_i),\quad k=0,1,\ldots,k_i-1,\ i=0,1,\ldots,N-1,\end{cases}\tag{3.13}$$
and
$$\begin{cases}x_i(t)=x_i(T_i)+\dfrac{1}{\Gamma(\alpha)}\displaystyle\int_{T_i}^{t}(t-\tau)^{\alpha-1}F\big(x_i(\tau),x_i(\tau),\tau\big)\,d\tau-\dfrac{1}{\Gamma(\alpha)}\displaystyle\sum_{j=1}^{i}\int_{T_i}^{t}(t-\tau)^{\alpha-1}y_j(\tau)\,d\tau,\quad t\in[T_i,T_{i+1}],\\ x_i(T_i)=x_{i-1}(T_i),\quad i=0,1,\ldots,N-1.\end{cases}\tag{3.14}$$


Subtracting (3.14) from (3.13) and using (3.9), we obtain
$$\begin{aligned}\big\|e_i^{(k+1)}(t)\big\|&\le \Delta_{i-1}+\frac{L_1}{\Gamma(\alpha)}\int_{T_i}^{t}(t-\tau)^{\alpha-1}\big\|e_i^{(k+1)}(\tau)\big\|\,d\tau+\frac{L_2}{\Gamma(\alpha)}\int_{T_i}^{t}(t-\tau)^{\alpha-1}\big\|e_i^{(k)}(\tau)\big\|\,d\tau\\ &\quad+\sum_{j=1}^{i}\frac{1}{\Gamma(\alpha)}\int_{T_i}^{t}(t-\tau)^{\alpha-1}\big\|\Theta_j^{(k_{j-1})}(\tau)\big\|\,d\tau.\end{aligned}\tag{3.15}$$
Applying Lemma 3.2 to (3.15) gives
$$\begin{aligned}\big\|e_i^{(k+1)}(t)\big\|&\le \Delta_{i-1}E_{\alpha}\big(L_1(t-T_i)^{\alpha}\big)+L_2\int_{T_i}^{t}(t-\tau)^{\alpha-1}E_{\alpha,\alpha}\big(L_1(t-\tau)^{\alpha}\big)\big\|e_i^{(k)}(\tau)\big\|\,d\tau\\ &\quad+\sum_{j=1}^{i}\int_{T_i}^{t}(t-\tau)^{\alpha-1}E_{\alpha,\alpha}\big(L_1(t-\tau)^{\alpha}\big)\big\|\Theta_j^{(k_{j-1})}(\tau)\big\|\,d\tau.\end{aligned}\tag{3.16}$$

We define an operator $V_i$ by
$$(V_i\varphi)(t)=\int_{T_i}^{t}(t-\tau)^{\alpha-1}E_{\alpha,\alpha}\big(L_1(t-\tau)^{\alpha}\big)\varphi(\tau)\,d\tau.$$
Then (3.16) can be rewritten compactly as
$$\big\|e_i^{(k+1)}(t)\big\|\le \Delta_{i-1}E_{\alpha}\big(L_1(t-T_i)^{\alpha}\big)+L_2\big(V_i\big\|e_i^{(k)}\big\|\big)(t)+\sum_{j=1}^{i}\big(V_i\big\|\Theta_j^{(k_{j-1})}\big\|\big)(t).\tag{3.17}$$

Since the operator $V_i$ is nondecreasing (see Property 2.2 in [23]), by recurrence we obtain
$$\big\|e_i^{(k+1)}(t)\big\|\le \sum_{l=0}^{k}L_2^{\,l}\big(V_i^{\,l}\,\Delta_{i-1}E_{\alpha}(L_1(\cdot-T_i)^{\alpha})\big)(t)+L_2^{\,k+1}\big(V_i^{\,k+1}\big\|e_i^{(0)}\big\|\big)(t)+\sum_{j=1}^{i}\sum_{l=0}^{k}L_2^{\,l}\big(V_i^{\,l+1}\big\|\Theta_j^{(k_{j-1})}\big\|\big)(t).\tag{3.18}$$

Furthermore, by Property 2.3 in [23],
$$\begin{aligned}
\big(V_i^{\,l}\,\Delta_{i-1}E_{\alpha}(L_1(\cdot-T_i)^{\alpha})\big)(t)
&=\Delta_{i-1}\int_{T_i}^{t}(t-\tau)^{l\alpha-1}E_{\alpha,l\alpha}^{l}\big(L_1(t-\tau)^{\alpha}\big)E_{\alpha}\big(L_1(\tau-T_i)^{\alpha}\big)\,d\tau\\
&=\Delta_{i-1}\int_{T_i}^{t}(t-\tau)^{l\alpha-1}\sum_{m=0}^{+\infty}\frac{(l)_m L_1^m (t-\tau)^{m\alpha}}{\Gamma(m\alpha+l\alpha)\,m!}\sum_{n=0}^{+\infty}\frac{(1)_n L_1^n (\tau-T_i)^{n\alpha}}{\Gamma(n\alpha+1)\,n!}\,d\tau\\
&=\Delta_{i-1}\sum_{m=0}^{+\infty}\sum_{n=0}^{+\infty}\frac{(l)_m(1)_n L_1^{m+n}}{\Gamma(m\alpha+l\alpha)\,m!\,\Gamma(n\alpha+1)\,n!}\int_{T_i}^{t}(t-\tau)^{l\alpha+m\alpha-1}(\tau-T_i)^{n\alpha}\,d\tau\\
&=\Delta_{i-1}\sum_{m=0}^{+\infty}\sum_{n=0}^{+\infty}\frac{(l)_m(1)_n L_1^{m+n}(t-T_i)^{l\alpha+m\alpha+n\alpha}}{\Gamma(l\alpha+m\alpha+n\alpha+1)\,m!\,n!}\\
&=\Delta_{i-1}\sum_{m=0}^{+\infty}\sum_{k=m}^{+\infty}\frac{(l)_m(1)_{k-m} L_1^{k}(t-T_i)^{l\alpha+k\alpha}}{\Gamma(l\alpha+k\alpha+1)\,m!\,(k-m)!}\\
&=\Delta_{i-1}\sum_{k=0}^{+\infty}\sum_{m=0}^{k}\frac{(l)_m(1)_{k-m} L_1^{k}(t-T_i)^{l\alpha+k\alpha}}{\Gamma(l\alpha+k\alpha+1)\,m!\,(k-m)!}\\
&=\Delta_{i-1}\sum_{k=0}^{+\infty}\frac{L_1^{k}(t-T_i)^{l\alpha+k\alpha}}{\Gamma(l\alpha+k\alpha+1)}\sum_{m=0}^{k}\frac{(l)_m(1)_{k-m}\,k!}{m!\,(k-m)!\,k!}\\
&=\Delta_{i-1}\sum_{k=0}^{+\infty}\frac{(l+1)_k L_1^{k}(t-T_i)^{l\alpha+k\alpha}}{k!\,\Gamma(l\alpha+k\alpha+1)}\\
&=\Delta_{i-1}(t-T_i)^{l\alpha}E_{\alpha,l\alpha+1}^{l+1}\big(L_1(t-T_i)^{\alpha}\big).
\end{aligned}$$
Using the same arguments, we have
$$\big(V_i^{\,l+1}\big\|\Theta_j^{(k_{j-1})}\big\|\big)(t)=\int_{T_i}^{t}(t-\tau)^{(l+1)\alpha-1}E_{\alpha,(l+1)\alpha}^{l+1}\big(L_1(t-\tau)^{\alpha}\big)\big\|\Theta_j^{(k_{j-1})}(\tau)\big\|\,d\tau.\tag{3.19}$$


Thus, (3.18) becomes
$$\begin{aligned}
\big\|e_i^{(k+1)}(t)\big\|&\le \Delta_{i-1}\sum_{l=0}^{+\infty}L_2^{\,l}(t-T_i)^{l\alpha}E_{\alpha,l\alpha+1}^{l+1}\big(L_1(t-T_i)^{\alpha}\big)+L_2^{\,k+1}\big(V_i^{\,k+1}\big\|e_i^{(0)}\big\|\big)(t)+\Upsilon_{(k,i)}(t)\\
&=\Delta_{i-1}\sum_{l=0}^{+\infty}L_2^{\,l}(t-T_i)^{l\alpha}\sum_{n=0}^{+\infty}\frac{(l+1)_n L_1^{n}(t-T_i)^{n\alpha}}{\Gamma(n\alpha+l\alpha+1)\,n!}+L_2^{\,k+1}\big(V_i^{\,k+1}\big\|e_i^{(0)}\big\|\big)(t)+\Upsilon_{(k,i)}(t)\\
&=\Delta_{i-1}\sum_{l=0}^{+\infty}\sum_{m=l}^{+\infty}\frac{L_2^{\,l}L_1^{m-l}(l+1)_{m-l}(t-T_i)^{m\alpha}}{\Gamma(m\alpha+1)\,(m-l)!}+L_2^{\,k+1}\big(V_i^{\,k+1}\big\|e_i^{(0)}\big\|\big)(t)+\Upsilon_{(k,i)}(t)\\
&=\Delta_{i-1}\sum_{m=0}^{+\infty}\sum_{l=0}^{m}\frac{L_2^{\,l}L_1^{m-l}\,m!\,(t-T_i)^{m\alpha}}{\Gamma(m\alpha+1)\,(m-l)!\,l!}+L_2^{\,k+1}\big(V_i^{\,k+1}\big\|e_i^{(0)}\big\|\big)(t)+\Upsilon_{(k,i)}(t)\\
&=\Delta_{i-1}\sum_{m=0}^{+\infty}\frac{(L_1+L_2)^{m}(t-T_i)^{m\alpha}}{\Gamma(m\alpha+1)}+L_2^{\,k+1}\big(V_i^{\,k+1}\big\|e_i^{(0)}\big\|\big)(t)+\Upsilon_{(k,i)}(t)\\
&=\Delta_{i-1}E_{\alpha}\big(L(t-T_i)^{\alpha}\big)+L_2^{\,k+1}\big(V_i^{\,k+1}\big\|e_i^{(0)}\big\|\big)(t)+\Upsilon_{(k,i)}(t).
\end{aligned}\tag{3.20}$$
Setting $k=k_i-1$, we obtain
$$\begin{aligned}\big\|e_i^{(k_i)}(t)\big\|&\le \Delta_{i-1}E_{\alpha}\big(L(t-T_i)^{\alpha}\big)+\Upsilon_{(k_i-1,i)}(t)+L_2^{\,k_i}\int_{T_i}^{t}(t-\tau)^{k_i\alpha-1}E_{\alpha,k_i\alpha}^{k_i}\big(L_1(t-\tau)^{\alpha}\big)\big\|e_i^{(0)}(\tau)\big\|\,d\tau\\ &\le \Delta_{i-1}E_{\alpha}\big(L(t-T_i)^{\alpha}\big)+\Upsilon_{(k_i-1,i)}(t)+P_i(t)\big\|e_i^{(0)}\big\|_{[T_i,T_{i+1}]}.\end{aligned}$$
The proof of Theorem 3.1 is completed. □

Furthermore, we can establish the following corollary.

Corollary 3.1. Let $H_i\equiv H$ for $i=0,1,\ldots,N-1$. Then it holds that

$$\Delta_i\le \Big[\max_{0\le l\le i}\big\{P_l(T_{l+1})\big\|e_l^{(0)}\big\|_{[T_l,T_{l+1}]}\big\}+\max_{0\le l\le i}\big\{\Upsilon_{(k_l-1,l)}(T_{l+1})\big\}\Big]\,\frac{1-\big(E_{\alpha}(LH^{\alpha})\big)^{i+1}}{1-E_{\alpha}(LH^{\alpha})}.$$

Proof. Let $t=T_{i+1}$ in (3.10) for any fixed $i$. We have
$$\Delta_i\le \Delta_{i-1}E_{\alpha}(LH^{\alpha})+\Upsilon_{(k_i-1,i)}(T_{i+1})+P_i(T_{i+1})\big\|e_i^{(0)}\big\|_{[T_i,T_{i+1}]},\tag{3.21}$$
where $\Delta_{-1}=0$. By recurrence, we obtain
$$\begin{aligned}\Delta_i&\le \max_{0\le l\le i}\big\{P_l(T_{l+1})\big\|e_l^{(0)}\big\|_{[T_l,T_{l+1}]}\big\}\big\{1+E_{\alpha}(LH^{\alpha})+\cdots+(E_{\alpha}(LH^{\alpha}))^{i}\big\}\\ &\quad+\max_{0\le l\le i}\big\{\Upsilon_{(k_l-1,l)}(T_{l+1})\big\}\big\{1+E_{\alpha}(LH^{\alpha})+\cdots+(E_{\alpha}(LH^{\alpha}))^{i}\big\}\\ &\le \Big[\max_{0\le l\le i}\big\{P_l(T_{l+1})\big\|e_l^{(0)}\big\|_{[T_l,T_{l+1}]}\big\}+\max_{0\le l\le i}\big\{\Upsilon_{(k_l-1,l)}(T_{l+1})\big\}\Big]\frac{1-\big(E_{\alpha}(LH^{\alpha})\big)^{i+1}}{1-E_{\alpha}(LH^{\alpha})}.\end{aligned}$$

This completes the proof of Corollary 3.1. □

According to Theorem 3.1 and Corollary 3.1, the WR method (3.2) is convergent on a finite interval if $\min_{0\le l\le N-1}\{k_l\}\to+\infty$. This is because $\max_{0\le l\le N-1}\{P_l(T_{l+1})\|e_l^{(0)}\|_{[T_l,T_{l+1}]}\}\to 0$ and $\max_{0\le l\le N-1}\{\Upsilon_{(k_l-1,l)}(T_{l+1})\}\to 0$ as $\min_{0\le l\le N-1}\{k_l\}\to+\infty$ for any finite $H$, $L_1$, $L_2$, by Lemma 2.1 in [23]. Further, for a given convergence tolerance $\varepsilon$ ($\varepsilon>0$) and an iteration number $k_s$, the suggested window length for $\varepsilon$-convergence of WR is estimated by
$$T_{\varepsilon}=\max\Big\{t:\ L_2^{\,k_s}\int_a^t (t-\tau)^{k_s\alpha-1}E_{\alpha,k_s\alpha}^{k_s}\big(L_1(t-\tau)^{\alpha}\big)\,d\tau\le \varepsilon,\ t>a\Big\}.\tag{3.22}$$
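The estimate (3.22) can be evaluated numerically. Integrating the defining series of $E_{\alpha,k_s\alpha}^{k_s}$ term by term gives $L_2^{k_s}\int_a^t(t-\tau)^{k_s\alpha-1}E_{\alpha,k_s\alpha}^{k_s}(L_1(t-\tau)^{\alpha})\,d\tau=L_2^{k_s}(t-a)^{k_s\alpha}E_{\alpha,k_s\alpha+1}^{k_s}(L_1(t-a)^{\alpha})$, an increasing function of $t$, so $T_{\varepsilon}$ can be located by bisection. The helper below is our own sketch under these assumptions, not code from the paper.

```python
import math

def ml3(alpha, beta, gamma, z, terms=200):
    """Three-parameter Mittag-Leffler function by series truncation (sketch)."""
    total, coef, power = 0.0, 1.0, 1.0      # coef = (gamma)_n / n!
    for n in range(terms):
        arg = n * alpha + beta
        if arg > 170.0:                      # avoid math.gamma overflow
            break
        term = coef * power / math.gamma(arg)
        total += term
        if abs(term) < 1e-16 * max(1.0, abs(total)):
            break
        coef *= (gamma + n) / (n + 1)
        power *= z
    return total

def window_length(alpha, L1, L2, ks, eps, a=0.0, t_max=10.0):
    """Largest t <= t_max with the bound of (3.22) below eps, by bisection."""
    def bound(t):
        H = t - a
        # closed form of the integral in (3.22), via term-by-term integration
        return (L2 ** ks) * H ** (ks * alpha) * ml3(alpha, ks * alpha + 1.0,
                                                    float(ks), L1 * H ** alpha)
    lo, hi = a, t_max
    if bound(hi) <= eps:
        return hi
    for _ in range(80):                      # bisection on a monotone bound
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if bound(mid) <= eps else (lo, mid)
    return lo
```

A tighter tolerance yields a shorter admissible window, consistent with the trade-off between window length and iteration count discussed in Section 4.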

Based on (3.22), a suitable window length for (3.2) can be chosen in advance.

Remark 3.1. When $\alpha=1$, Theorem 3.1 reduces to Theorem 1 in [21]. This is because $\Upsilon_{(k,i)}(t)\equiv 0$ and
$$P_i(t)=L_2^{\,k_i}\int_{T_i}^{t}e^{(t-\tau)L_1}\frac{(t-\tau)^{k_i-1}}{(k_i-1)!}\,d\tau,\quad i=0,1,\ldots,N-1.$$


3.2.2. Linear case

In this section, we consider the windowing WR method for initial value problems of linear fractional differential equations of the form
$$({}_0D_t^{\alpha}x)(t)=Ax(t)+b(t),\quad t\in[0,T],\qquad x(0)=x_0,\tag{3.23}$$
where $0<\alpha<1$, $A\in\mathbb{R}^{n\times n}$, $b(t)\in\mathbb{R}^n$ is a known function, $x_0\in\mathbb{R}^n$ is an initial value, and $x(t)$ is to be computed. For typical model problems, such as spatially discretized forms of time-fractional partial differential equations, $A$ can be taken as
$$A=\begin{pmatrix}-a & b & & &\\ b & -a & b & &\\ & b & -a & \ddots &\\ & & \ddots & \ddots & b\\ & & & b & -a\end{pmatrix},$$
where $a$ and $b$ are two positive constants. The damped Jacobi WR method with windowing for (3.23) is
$$\begin{cases}({}_{T_i}D_t^{\alpha}x_i^{(k+1)})(t)+\displaystyle\sum_{j=1}^{i}\big({}_{T_{j-1}}\widetilde{D}_{T_j}^{\alpha}x_j^{(k_{j-1})}\big)(t)=M_A x_i^{(k+1)}(t)-N_A x_i^{(k)}(t)+b(t), & t\in[T_i,T_{i+1}],\\ x_i^{(k+1)}(T_i)=x_{i-1}^{(k_{i-1})}(T_i), & k=0,1,\ldots,k_i-1,\ i=0,1,\ldots,N-1,\end{cases}\tag{3.24}$$
where $M_A-N_A=A$ and $M_A=\frac{1}{\omega}\,\mathrm{diag}\{-a,\ldots,-a\}$ with $0<\omega\le 1$, $x_{-1}^{(k_{-1})}(0)=x_0$, and the initial guesses satisfy $x_i^{(0)}(t)\equiv x_{i-1}^{(k_{i-1})}(T_i)$ on $[T_i,T_{i+1}]$ for all $i$. We denote $c=-a/\omega$. As before, the notation of this section has the same meaning as in the foregoing.

Theorem 3.2. For $i\in\{0,1,\ldots,N-1\}$ and $t\in[T_i,T_{i+1}]$, it holds that
$$\big\|e_i^{(k_i)}(t)\big\|\le \Delta_{i-1}E_{\alpha}\big((c+\|N_A\|)(t-T_i)^{\alpha}\big)+p_i(t)\big\|e_i^{(0)}\big\|_{[T_i,T_{i+1}]}+\sum_{j=1}^{i}\sum_{l=0}^{k_i-1}\int_{T_i}^{t}(t-\tau)^{(l+1)\alpha-1}E_{\alpha,(l+1)\alpha}^{l+1}\big(c(t-\tau)^{\alpha}\big)\|N_A\|^{l}\big\|\Theta_j^{(k_{j-1})}(\tau)\big\|\,d\tau,\tag{3.25}$$
where
$$p_i(t)=\|N_A\|^{k_i}\int_{T_i}^{t}(t-\tau)^{k_i\alpha-1}E_{\alpha,k_i\alpha}^{k_i}\big(c(t-\tau)^{\alpha}\big)\,d\tau.$$

Proof. Using the same arguments as in Theorem 3.1, on $[T_i,T_{i+1}]$ we have
$$\begin{cases}({}_{T_i}D_t^{\alpha}e_i^{(k+1)})(t)+\displaystyle\sum_{j=1}^{i}\Theta_j^{(k_{j-1})}(t)=M_A e_i^{(k+1)}(t)-N_A e_i^{(k)}(t), & t\in[T_i,T_{i+1}],\\ e_i^{(k+1)}(T_i)=x_{i-1}^{(k_{i-1})}(T_i)-x_{i-1}(T_i), & k=0,1,\ldots,k_i-1,\ i=0,1,\ldots,N-1.\end{cases}\tag{3.26}$$
Then, by Example 4.9 in [24], the solution to (3.26) is given by
$$e_i^{(k+1)}(t)=E_{\alpha}\big(c(t-T_i)^{\alpha}\big)Q_{i-1}-\big(U_i N_A e_i^{(k)}\big)(t)-\sum_{j=1}^{i}\big(U_i\Theta_j^{(k_{j-1})}\big)(t),\tag{3.27}$$
where $Q_{i-1}=e_i^{(k+1)}(T_i)$ and
$$(U_i\varphi)(t)=\int_{T_i}^{t}(t-\tau)^{\alpha-1}E_{\alpha,\alpha}\big(c(t-\tau)^{\alpha}\big)\varphi(\tau)\,d\tau.\tag{3.28}$$


From (3.27), mathematical induction on $k$ gives
$$\begin{aligned}e_i^{(k+1)}(t)&=\sum_{l=0}^{k}(-1)^l\big(U_i^{\,l}E_{\alpha}(c(\cdot-T_i)^{\alpha})N_A^{\,l}\big)Q_{i-1}+(-1)^{k+1}\big(U_i^{\,k+1}N_A^{\,k+1}e_i^{(0)}\big)(t)\\ &\quad+\sum_{j=1}^{i}\sum_{l=0}^{k}(-1)^{l+1}\big(U_i^{\,l+1}N_A^{\,l}\Theta_j^{(k_{j-1})}\big)(t).\end{aligned}\tag{3.29}$$
Using the same procedure as in (3.20), we have
$$U_i^{\,l}E_{\alpha}\big(c(t-T_i)^{\alpha}\big)N_A^{\,l}Q_{i-1}=(t-T_i)^{l\alpha}E_{\alpha,l\alpha+1}^{l+1}\big(c(t-T_i)^{\alpha}\big)N_A^{\,l}Q_{i-1}\tag{3.30}$$
and
$$\big(U_i^{\,l+1}N_A^{\,l}\Theta_j^{(k_{j-1})}\big)(t)=\int_{T_i}^{t}(t-\tau)^{(l+1)\alpha-1}E_{\alpha,(l+1)\alpha}^{l+1}\big(c(t-\tau)^{\alpha}\big)N_A^{\,l}\Theta_j^{(k_{j-1})}(\tau)\,d\tau.\tag{3.31}$$
Based on (3.30) and (3.31), (3.29) can be rewritten as
$$\begin{aligned}e_i^{(k+1)}(t)&=\sum_{l=0}^{k}(-1)^l(t-T_i)^{l\alpha}E_{\alpha,l\alpha+1}^{l+1}\big(c(t-T_i)^{\alpha}\big)N_A^{\,l}Q_{i-1}\\ &\quad+(-1)^{k+1}\int_{T_i}^{t}(t-\tau)^{(k+1)\alpha-1}E_{\alpha,(k+1)\alpha}^{k+1}\big(c(t-\tau)^{\alpha}\big)N_A^{\,k+1}e_i^{(0)}(\tau)\,d\tau\\ &\quad+\sum_{j=1}^{i}\sum_{l=0}^{k}(-1)^{l+1}\int_{T_i}^{t}(t-\tau)^{(l+1)\alpha-1}E_{\alpha,(l+1)\alpha}^{l+1}\big(c(t-\tau)^{\alpha}\big)N_A^{\,l}\Theta_j^{(k_{j-1})}(\tau)\,d\tau.\end{aligned}\tag{3.32}$$
Since $\|Q_{i-1}\|=\Delta_{i-1}$, using the same arguments as in (3.20), (3.32) becomes
$$\begin{aligned}\big\|e_i^{(k+1)}(t)\big\|&\le \Delta_{i-1}E_{\alpha}\big((c+\|N_A\|)(t-T_i)^{\alpha}\big)\\ &\quad+\int_{T_i}^{t}(t-\tau)^{(k+1)\alpha-1}E_{\alpha,(k+1)\alpha}^{k+1}\big(c(t-\tau)^{\alpha}\big)\|N_A\|^{k+1}\,d\tau\;\big\|e_i^{(0)}\big\|_{[T_i,T_{i+1}]}\\ &\quad+\sum_{j=1}^{i}\sum_{l=0}^{k}\int_{T_i}^{t}(t-\tau)^{(l+1)\alpha-1}E_{\alpha,(l+1)\alpha}^{l+1}\big(c(t-\tau)^{\alpha}\big)\|N_A\|^{l}\big\|\Theta_j^{(k_{j-1})}(\tau)\big\|\,d\tau.\end{aligned}$$
Setting $k=k_i-1$ yields
$$\big\|e_i^{(k_i)}(t)\big\|\le \Delta_{i-1}E_{\alpha}\big((c+\|N_A\|)(t-T_i)^{\alpha}\big)+p_i(t)\big\|e_i^{(0)}\big\|_{[T_i,T_{i+1}]}+\sum_{j=1}^{i}\sum_{l=0}^{k_i-1}\int_{T_i}^{t}(t-\tau)^{(l+1)\alpha-1}E_{\alpha,(l+1)\alpha}^{l+1}\big(c(t-\tau)^{\alpha}\big)\|N_A\|^{l}\big\|\Theta_j^{(k_{j-1})}(\tau)\big\|\,d\tau.$$

This completes the proof of Theorem 3.2. □

Using the same arguments as in Corollary 3.1, we can derive the following corollary.

Corollary 3.2. Let $H_i\equiv H$ and $\|N_A\|+c\ge 0$ for $i\in\{0,1,\ldots,N-1\}$. It holds that
$$\Delta_i\le \Big[\max_{0\le l\le i}\big\{p_l(T_{l+1})\big\|e_l^{(0)}\big\|_{[T_l,T_{l+1}]}\big\}+\max_{0\le l\le i}\big\{\Xi_{(k_l-1,l)}(T_{l+1})\big\}\Big]\frac{1-\big(E_{\alpha}((\|N_A\|+c)H^{\alpha})\big)^{i+1}}{1-E_{\alpha}((\|N_A\|+c)H^{\alpha})},$$
where
$$\Xi_{(k,i)}(t)=\sum_{j=1}^{i}\sum_{l=0}^{k}\int_{T_i}^{t}(t-\tau)^{(l+1)\alpha-1}E_{\alpha,(l+1)\alpha}^{l+1}\big(c(t-\tau)^{\alpha}\big)\|N_A\|^{l}\big\|\Theta_j^{(k_{j-1})}(\tau)\big\|\,d\tau.$$

For the linear system (3.23), the suggested window length for $\varepsilon$-convergence of WR with $k_s$ iterations on each window is estimated by
$$T_{\varepsilon}=\max\Big\{t:\ \|N_A\|^{k_s}\int_a^t (t-\tau)^{k_s\alpha-1}E_{\alpha,k_s\alpha}^{k_s}\big(c(t-\tau)^{\alpha}\big)\,d\tau\le \varepsilon,\ t>a\Big\}.\tag{3.33}$$

Remark 3.2. When $\alpha=1$, Theorem 3.2 reduces to Theorem 3 in [21].


4. Numerical experiments

We first give a multi-domain finite difference method for TFDEs, which serves as the serial numerical propagator in the windowing WR method. In the second subsection, we provide the numerical results.

4.1. Description of the serial solver

Consider the following initial value problem for a TFDE:
$$({}_0D_t^{\alpha}x)(t)=\lambda x(t)+f(t),\quad t\in[0,T],\qquad x(0)=x_0.\tag{4.1}$$
First, decompose the time domain $[0,T]$ into $N$ elements:
$$0=T_0<T_1<\cdots<T_N=T,\qquad \Omega_i=[T_i,T_{i+1}],\quad i=0,1,\ldots,N-1.$$
Without loss of generality, we suppose $H\equiv H_i=T_{i+1}-T_i$ for all $i$, and each subinterval $[T_i,T_{i+1}]$ is divided into $m$ parts with constant mesh size $h=H/m$. The model equation (4.1) then takes the form
$$\begin{cases}({}_{T_i}D_t^{\alpha}x)(t)+\displaystyle\sum_{j=1}^{i}\big({}_{T_{j-1}}\widetilde{D}_{T_j}^{\alpha}x\big)(t)=\lambda x(t)+f(t),\quad t\in[T_i,T_{i+1}],\\ x(0)=x_0,\end{cases}\tag{4.2}$$
where ${}_a\widetilde{D}_b^{\alpha}$ is defined as in (3.3). Next, we give the numerical solution of (4.2). For $t_l\in[T_{i-1},T_i]$ ($i=1,2,\ldots,N$), we have

$$\begin{aligned}\frac{1}{\Gamma(1-\alpha)}\int_0^{t_l}(t_l-\tau)^{-\alpha}x'(\tau)\,d\tau&=\frac{1}{\Gamma(1-\alpha)}\int_0^{(i-1)H}(t_l-\tau)^{-\alpha}x'(\tau)\,d\tau+\frac{1}{\Gamma(1-\alpha)}\int_{(i-1)H}^{t_l}(t_l-\tau)^{-\alpha}x'(\tau)\,d\tau\\ &=\sum_{k=1}^{i-1}\frac{1}{\Gamma(1-\alpha)}\int_{(k-1)H}^{kH}(t_l-\tau)^{-\alpha}x'(\tau)\,d\tau+\frac{1}{\Gamma(1-\alpha)}\int_{(i-1)H}^{t_l}(t_l-\tau)^{-\alpha}x'(\tau)\,d\tau.\end{aligned}\tag{4.3}$$

For the first part of (4.3) (cf. [26]), we have
$$\begin{aligned}\sum_{k=1}^{i-1}\frac{1}{\Gamma(1-\alpha)}\int_{(k-1)H}^{kH}(t_l-\tau)^{-\alpha}x'(\tau)\,d\tau&=\frac{1}{\Gamma(1-\alpha)}\sum_{k=1}^{i-1}\sum_{j=1}^{m}\int_{(k-1)H+(j-1)h}^{(k-1)H+jh}\big((i-1)H+lh-\tau\big)^{-\alpha}\Big(\frac{x_{(k-1)m+j}-x_{(k-1)m+j-1}}{h}+O(h)\Big)\,d\tau\\ &=\sigma_{\alpha,h}\sum_{k=1}^{i-1}\sum_{j=1}^{m}w^{(\alpha)}_{(i-1)m+l-(k-1)m-j+1}\big(x_{(k-1)m+j}-x_{(k-1)m+j-1}\big)+O(h),\end{aligned}\tag{4.4}$$
where
$$\sigma_{\alpha,h}=\frac{1}{\Gamma(2-\alpha)h^{\alpha}},\qquad w^{(\alpha)}_p=p^{1-\alpha}-(p-1)^{1-\alpha},\tag{4.5}$$
and $x_i$ stands for $x(t_i)$. Using the same arguments, we can calculate the second part of (4.3):

$$\frac{1}{\Gamma(1-\alpha)}\int_{(i-1)H}^{t_l}(t_l-\tau)^{-\alpha}x'(\tau)\,d\tau=\sigma_{\alpha,h}\sum_{j=1}^{l}w^{(\alpha)}_{l-j+1}\big(x_{(i-1)m+j}-x_{(i-1)m+j-1}\big)+O(h).\tag{4.6}$$
So we have
$$\big({}_{(i-1)H}D_{t_l}^{\alpha}x\big)(t_l)=\lambda x_l+f_l-\big({}_{0}\widetilde{D}_{(i-1)H}^{\alpha}x\big)(t_l),$$
i.e., the numerical scheme for (4.2) is
$$\sum_{j=1}^{l}w^{(\alpha)}_{l-j+1}\big(x_{(i-1)m+j}-x_{(i-1)m+j-1}\big)=\frac{\lambda x_l+f_l}{\sigma_{\alpha,h}}-\sum_{k=1}^{i-1}\sum_{j=1}^{m}w^{(\alpha)}_{(i-1)m+l-(k-1)m-j+1}\big(x_{(k-1)m+j}-x_{(k-1)m+j-1}\big),\tag{4.7}$$
where $t_l\in[T_{i-1},T_i]$, $i=1,2,\ldots,N$.
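A minimal scalar implementation of the scheme (4.7) may look as follows; with a single unknown the window/history split is pure bookkeeping (it reduces to the global L1 sum), but the loop structure mirrors the multi-domain form. Function and variable names are our own.

```python
import math

def solve_tfde(alpha, lam, f, x0, T, N, m):
    """Multi-domain L1 scheme (sketch) for (0Dt^alpha x)(t) = lam*x(t) + f(t)."""
    h = T / (N * m)
    sigma = 1.0 / (math.gamma(2.0 - alpha) * h ** alpha)
    # w[p] = p^(1-alpha) - (p-1)^(1-alpha), p = 1, ..., N*m, as in (4.5)
    w = [0.0] + [p ** (1.0 - alpha) - (p - 1) ** (1.0 - alpha)
                 for p in range(1, N * m + 1)]
    x = [x0]
    for i in range(N):                       # window index
        for l in range(1, m + 1):            # local step inside the window
            g = i * m + l                    # global node index, t = g*h
            # all already-computed increments: frozen history + current window
            known = sum(w[g - p + 1] * (x[p] - x[p - 1]) for p in range(1, g))
            # sigma * (w_1*(x_g - x_{g-1}) + known) = lam*x_g + f(t), w_1 = 1
            x.append((sigma * (x[g - 1] - known) + f(g * h)) / (sigma - lam))
    return x
```

For $\lambda=0$ and $f(t)=t^{1-\alpha}/\Gamma(2-\alpha)$ the exact solution $x(t)=t$ is piecewise linear, so this scheme reproduces it up to rounding error, which is a convenient correctness check.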


4.2. Numerical results

In this subsection, we give the main numerical results.

Example 4.1. Consider the following time-fractional partial differential equation:
$$\begin{cases}({}_0D_t^{0.5}u)(x,t)=\mu\dfrac{\partial^2 u(x,t)}{\partial x^2}, & (x,t)\in[0,1]\times[0,1],\\ u(x,0)=\sin x, & x\in[0,1],\\ u(0,t)=u(1,t)=0, & t\in[0,1].\end{cases}\tag{4.8}$$
If the equation is spatially approximated on a grid $\Omega_h=\{x_i=ih\mid i=0,1,\ldots,N+1\}$ with constant mesh size $h=1/(N+1)$, then the resulting discrete equation has the same form as (3.23). We let $\mu=0.1$ and $h=0.2$, and use the standard Jacobi WR method, that is, $M_A=\mathrm{diag}\{-5,\ldots,-5\}$ and $N_A=M_A-A$; then $\|N_A\|_{\infty}=5$. We define the computed error as
$$\mathrm{error}=\frac{\big\|x^{(k_s)}-x^{(k_s-1)}\big\|_{[a,b]}}{\big\|x^{(k_s)}\big\|_{[a,b]}},$$
where $\|x^{(l)}\|_{[a,b]}=\max_{a\le t\le b}\|x^{(l)}(t)\|_{\infty}$ and $x^{(l)}(t)$ is an iterative waveform on a window $[a,b]$.

Fig. 2 shows the iterative errors of the WR method without windows on the whole interval $[0,1]$. In the following we demonstrate the numerical results of the windowing WR method for (4.8). For the cases $k_s=20,25,30$, we use (3.33) with $a=0$ to compute the values $T_{\varepsilon}$ for $\varepsilon\in[0,1]$ (Fig. 3). From Fig. 3, one sees that the window length may be taken as $0.2$ in all three cases. Fig. 4 shows the iterative errors over the windows for $k_s=20$, $k_s=25$, and $k_s=30$. Comparing Figs. 2 and 4, one sees that the windowing technique can accelerate the convergence of the WR method.
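The splitting data quoted in the example can be reproduced directly: with $\mu=0.1$ and $h=0.2$ there are four interior nodes, $A=(\mu/h^2)\,\mathrm{tridiag}(1,-2,1)$, and the Jacobi splitting gives $\|N_A\|_{\infty}=5$. The snippet below is our own check of this bookkeeping.

```python
mu, h = 0.1, 0.2
n = 4                                   # interior grid points for h = 0.2
c = mu / h ** 2                         # approximately 2.5
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    A[i][i] = -2.0 * c                  # diagonal entries: -5
    if i > 0:
        A[i][i - 1] = c                 # sub-diagonal entries: 2.5
    if i < n - 1:
        A[i][i + 1] = c                 # super-diagonal entries: 2.5
# Jacobi splitting: M = diagonal part of A, N = M - A
M = [[A[i][i] if i == j else 0.0 for j in range(n)] for i in range(n)]
N = [[M[i][j] - A[i][j] for j in range(n)] for i in range(n)]
# infinity norm = maximum absolute row sum; interior rows give about 5
norm_inf = max(sum(abs(v) for v in row) for row in N)
```

This confirms that the off-diagonal coupling frozen at the old iterate carries the quoted norm $\|N_A\|_\infty=5$ used in the window-length estimate (3.33).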

Fig. 2. Iterative errors of WR method without windows on the whole interval [0, 1] (error versus number of iterations, semilogarithmic scale).

Fig. 3. Lengths of windows by a theoretical estimation for ks = 20, ks = 25, and ks = 30.

Fig. 4. Iterative errors of windows for ks = 20, ks = 25, and ks = 30 (error versus number of windows, semilogarithmic scale).

References

[1] Mainardi F. Fractals and fractional calculus in continuum mechanics. Springer Verlag; 1997.
[2] Malinowska AB, Torres DFM. Towards a combined fractional mechanics and quantization. Fract Calculus Appl Anal 2012;15:407–17.
[3] Liu F, Anh V, Turner I. Numerical solution of the space fractional Fokker–Planck equation. J Comput Appl Math 2004;166:209–19.
[4] Hall MG, Barrick TR. From diffusion-weighted MRI to anomalous diffusion imaging. Magn Reson Med 2008;59:447–55.
[5] Metzler R, Jeon J-H, Cherstvy AG, Barkai E. Anomalous diffusion models and their properties: non-stationarity, non-ergodicity, and ageing at the centenary of single particle tracking. Phys Chem Chem Phys 2014;16:24128–64.
[6] Pagnini G. Short note on the emergence of fractional kinetics. Physica A 2014;409:29–34.
[7] Eslahchi MR, Dehghan M, Parvizi M. Application of the collocation method for solving nonlinear fractional integro-differential equations. J Comput Appl Math 2014;257:105–28.
[8] Zhao JJ, Xiao JY, Ford NJ. Collocation methods for fractional integro-differential equations with weakly singular kernels. Numer Algorithms 2014;65:723–43.
[9] Gong CY, Bao WM, Tang GJ, Yang B, Liu J. An efficient parallel solution for Caputo fractional reaction–diffusion equation. J Supercomput 2014;68:1521–37.
[10] Stokes PW, Philippa B, Read W, White RD. Efficient numerical solution of the time fractional diffusion equation by mapping from its Brownian counterpart. J Comput Phys 2015;1:334–44.
[11] Xu QW, Hesthaven JS, Chen F. A parareal method for time-fractional differential equations. J Comput Phys 2014. http://dx.doi.org/10.1016/j.jcp.2014.11.034.
[12] Lelarasmee E, Ruehli A, Sangiovanni-Vincentelli A. The waveform relaxation method for time domain analysis of large scale integrated circuits. IEEE Trans Comput Aided Des 1982;1:131–45.
[13] Geiser J. An iterative splitting method via waveform relaxation. Int J Comput Math 2011;17:3646–65.
[14] Hassanzadeh Z, Salkuyeh DK. Two-stage waveform relaxation method for the initial value problems with non-constant coefficients. Comput Appl Math 2014;3:641–54.
[15] Leimkuhler B, Ruehli AE. Rapid convergence of waveform relaxation. Appl Numer Math 1993;11:211–24.
[16] Jiang YL. A general approach to waveform relaxation solutions of nonlinear differential-algebraic equations: the continuous-time and discrete-time cases. IEEE Trans Circuits Syst I 2004;51:1770–80.
[17] Zubik-Kowal B, Vandewalle S. Waveform relaxation for functional differential equations. SIAM J Sci Comput 1999;21:207–26.
[18] Jiang YL, Ding XL. Waveform relaxation methods for fractional differential equations with the Caputo derivatives. J Comput Appl Math 2013;238:51–67.
[19] Ding XL, Jiang YL. Waveform relaxation methods for fractional functional differential equations. Fract Calculus Appl Anal 2013;16:573–94.
[20] Ding XL, Jiang YL. Waveform relaxation methods for fractional differential-algebraic equations. Fract Calculus Appl Anal 2014;17:585–604.
[21] Jiang YL. Windowing waveform relaxation of initial value problem. Acta Math Appl Sin Engl Ser 2006;22:575–88.
[22] Zhang H. A note on windowing for the waveform relaxation method. Appl Math Comput 1996;76:49–63.
[23] Ding XL, Jiang YL. Semilinear fractional differential equations based on a new integral operator approach. Commun Nonlinear Sci Numer Simulat 2012;17:5143–50.
[24] Kilbas AA, Srivastava HM, Trujillo JJ. Theory and applications of fractional differential equations. North-Holland Mathematics Studies, vol. 204. Amsterdam: Elsevier Science B.V.; 2006.
[25] Podlubny I. Fractional differential equations. New York: Academic Press; 1999.
[26] Murio DA. Implicit finite difference approximation for time fractional diffusion equations. Comput Math Appl 2008;56:1138–45.