Absorbing boundary conditions and geometric integration: A case study for the wave equation

I. Alonso-Mallo (a,*), A.M. Portillo (b)

a IMUVA, Instituto de Matemáticas, Departamento de Matemática Aplicada, Facultad de Ciencias, Universidad de Valladolid, Spain
b IMUVA, Instituto de Matemáticas, Departamento de Matemática Aplicada, Escuela de Ingenierías Industriales, Universidad de Valladolid, Spain

Mathematics and Computers in Simulation 111 (2015) 1–16. Received 15 March 2013; received in revised form 23 July 2014; accepted 26 November 2014; available online 11 December 2014. http://dx.doi.org/10.1016/j.matcom.2014.11.021

Abstract

This paper is concerned with the confluence of two subjects in the numerical solution of time evolution PDEs: numerical methods that preserve geometric properties of the flow, and the use of absorbing boundary conditions to reduce the computation to a finite domain. This confluence is studied with special attention to the time stability of the resulting full discretization. For this, the stability regions of the time integrators are revisited. Since geometric methods are not always A-stable, a suitable behavior of the real parts of the eigenvalues of the spatially discretized problem is necessary to avoid time instability in practice. A detailed study is carried out for the one dimensional wave equation discretized with finite differences, showing that this suitable behavior indeed occurs. Numerical experiments confirming the previous results are included.

Keywords: Wave equation; Absorbing boundary conditions; Geometric time integration; Time stability

1. Introduction

The use of geometric time integrators [14,20] not only leads to better qualitative properties of the numerical solution, but also to better accuracy when a long time computation is made. There are many numerical integrators with some of these properties: symplectic integrators for Hamiltonian systems, symmetric integrators for reversible systems, and methods designed to preserve first integrals. It is quite common that a problem whose geometric properties we want to preserve is defined in an unbounded domain. In this case, it is also necessary to carry out the numerical computation in a finite domain with suitable artificial boundary conditions [4,5,10–13,22]. There are many possibilities to define these conditions at the boundary, but we are interested in the so called absorbing boundary conditions (ABCs), which are designed to achieve small reflections inside the computational domain along with other good properties such as their local character and easy implementation. An apparent paradox emerges when geometric integration and ABCs are simultaneously considered.



On the one hand, the original problem must be conservative in order to use geometric integrators and, on the other hand, the incorporation of ABCs converts the original problem into a strongly dissipative system, because every part of the solution that arrives at the boundary is absorbed. It could then be thought that it is absurd to use geometric methods in this case. However, the geometric properties of the solution are still relevant inside the computational domain. In this situation, the stability of the time integration must be addressed. When the ABCs are incorporated, the eigenvalues of the discretized differential operator are complex numbers with, in general, non vanishing real part and with an increasing size when the spatial discretization is refined. Therefore, the stability analysis already carried out in the literature for the conservative case, where the eigenvalues are purely imaginary (see for example [17]), is not suitable for the case in which we are interested. We address this question by considering as a case study the one dimensional linear wave equation

u_{tt} = u_{xx}, \qquad x \in \mathbb{R},\ t \ge 0.    (1)

The application of symplectic methods to the wave equation with absorbing boundary conditions, and the resulting near preservation of the energy, is not new; it was studied in [7,18]. In these works the authors consider Eq. (1) in the interval [0, L] with the transparent boundary conditions u_t(0, t) = u_x(0, t), u_t(L, t) = -u_x(L, t). This problem is discretized in time and space using the Preissman box scheme. Our approach is different, since we start by considering a spatial discretization of (1). Let h > 0 be a given spatial step and, for a real and fixed a, take the nodes x_j = a + jh for j \in \mathbb{Z}. Using second order finite differences to approximate the second derivative in space, we have

\frac{d^2 U_j}{dt^2} = \frac{U_{j+1} - 2U_j + U_{j-1}}{h^2}, \qquad j \in \mathbb{Z},\ t \ge 0,    (2)

where U_j ≈ u(x_j). We denote the computational window by [a, b] = [x_0, x_N] = [a, a + Nh], where h = (b − a)/N and N ∈ \mathbb{N}. Nonlocal transparent boundary conditions (TBCs), associated with the space discretized wave equation (2), were obtained in Proposition 3.2 of [16]. In Fourier variables, and for small enough ωh, they are given by

\hat{u}_0(\omega) = r_1(\omega h)\, \hat{u}_1(\omega), \qquad \hat{u}_N(\omega) = r_1(\omega h)\, \hat{u}_{N-1}(\omega),    (3)

at x_0 and x_N respectively, where

r_1(z) = 1 - \frac{z^2}{2} - i z \left(1 - \frac{z^2}{4}\right)^{1/2}.    (4)

The ABCs are deduced by approximating the function r_1(ωh) by Taylor or Padé expansions and taking the inverse Fourier transform in order to obtain the ABCs in the original variables. We will use the notation ABC(p, q) for the ABCs obtained with the Padé expansion given by a rational function p_1(ωh)/p_2(ωh), where p_1 and p_2 are polynomial functions with degrees p and q respectively. In this case, we define the order of absorption as the number p + q + 1 (this definition corresponds to the one used in [4,5] and is slightly different from the one used in [16], where the order of absorption is defined as p + q). This paper is written considering ABC(2, 2), which has fifth order of absorption. In this way, we obtain an ordinary differential system which may be written in the form

\frac{d^2 u_h}{dt^2} = A_h u_h + B_h \frac{d u_h}{dt}.    (5)

Here, A_h and B_h, given in Section 4, are matrices whose dimension depends on the parameter h. The following step is to rewrite (5) as a first order ordinary differential system

\frac{d}{dt} \begin{bmatrix} u_h \\ v_h \end{bmatrix} = \mathcal{A}_h \begin{bmatrix} u_h \\ v_h \end{bmatrix},    (6)


where v_h = du_h/dt, and

\mathcal{A}_h = \begin{bmatrix} 0 & I \\ A_h & B_h \end{bmatrix}.    (7)
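As an illustration only (this helper is ours, not from the paper), the block matrix (7) of the first order form (6) can be assembled from given A_h and B_h as follows, assuming NumPy:

```python
import numpy as np

def first_order_matrix(Ah, Bh):
    """Block matrix (7) of the first order system (6), with v_h = du_h/dt."""
    n = Ah.shape[0]
    return np.block([[np.zeros((n, n)), np.eye(n)],
                     [Ah, Bh]])
```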

The well posedness of (6) has been proved in [1]. We have also considered other ABCs, ABC(1, 0), ABC(2, 0) and ABC(1, 2), with lower orders of absorption. In these cases, we obtain second order in time ordinary differential systems, similar to (6), whose well posedness has been proved in [2]. The time stability of a full discretization with a concrete time integrator depends on the relation between the increasing sizes of the eigenvalues when the spatial discretization is refined and the size of the stability region. To obtain stability when the time integration of (6) is introduced, it is necessary that the eigenvalues multiplied by the time step size are small enough to be inside the absolute stability region R_A of the time integrator (cf. [6], where Eq. (1) is spatially discretized with a Chebyshev pseudospectral method and a non geometric Runge–Kutta method is used for the time integration). Useful symplectic methods with internal stages are not A-stable. We study in Section 2 the stability regions of some symplectic Runge–Kutta (RK) and Partitioned Runge–Kutta (PRK) methods when they are applied to dissipative problems. It is well known that a PRK method is explicit when it is used for separable problems. We show that a PRK method is also explicit when it is applied to a system like (6), and we develop a technique to obtain estimates of its absolute stability region, which is finite. As an example, we apply this technique to a concrete PRK method of order 4. Since, in general, we do not have A-stability, we need to study the behavior of the eigenvalues when the spatial discretization is refined. We prove in Section 4 that the imaginary parts grow in a similar way to the conservative case, whereas the growth of the real parts is considerably slower. We can deduce that a symplectic time integrator applied to our case study can be useful even though it is not A-stable. We only prove here the case which corresponds to ABC(2, 2), but the cases with ABC(1, 0), ABC(2, 0) and ABC(1, 2), with lower orders of absorption, can be proved in a similar way. We end the paper with a section devoted to numerical experiments that agree with our previous results, showing that the advantages of symplectic methods are preserved. We also compare explicit and implicit symplectic time integrators, showing that their efficiency is similar to the well known conservative case.

2. Stability regions of some symplectic methods applied to dissipative problems

For a Hamiltonian problem (and also for a general non dissipative problem), the relevant stability region R_A is reduced to an interval included in the imaginary axis, because the eigenvalues of the problem are purely imaginary. When the problem is highly oscillatory, as is the case for partial differential equations after being discretized in space, this imaginary interval of stability must be unbounded. When the ABCs are incorporated, the eigenvalues of the spatially discretized operator are complex numbers with non vanishing real part. It is convenient that the stability region R_A is unbounded when the sizes of the real and imaginary parts of the eigenvalues are large, and the suitable geometry of R_A depends on the location of these eigenvalues. We now study the use of some symplectic time integration methods, which are a good choice for problems with a Hamiltonian structure. The stability of these methods in the conservative case has been studied in depth in [17]. As a first possibility, we could consider Gauss RK methods. They are A-stable, i.e., R_A = \mathbb{C}^-, and they have the highest classical order. However, they are not suitable for spatially discretized PDEs because the matrix of coefficients for the internal stages is full and, as a consequence, these methods are very expensive when they are applied to high dimensional systems. Another possible choice is to use symplectic diagonally implicit RK (DIRK) methods. These methods are given by the concatenation of several implicit midpoint rule steps of different lengths [14,20]. The stability function of the implicit midpoint rule is r(z) = (1 + z/2)/(1 − z/2) and its absolute stability region is the half plane Re(z) ≤ 0. Suppose that a DIRK method is the concatenation of several implicit midpoint rule steps b_1Δt, ..., b_sΔt. Then, its stability function is r(b_s z) ⋯ r(b_1 z). Hence, the imaginary axis is always included in its absolute stability region and, since r(z) has a singularity at z = 2, there is an island in the left half plane outside this region whenever some b_i in the concatenation is negative.


Therefore, there exists a vertical strip inside the left half plane included in the region of absolute stability. To obtain stability, the key is the behavior of the real parts of the eigenvalues, which must not increase too quickly when the spatial discretization is refined. We consider this question for our case study in Section 3. As a third option, we consider symplectic PRK methods. These methods are designed to integrate problems with a separable Hamiltonian and, as a significant advantage, they are explicit in practice. Their stability when they are applied to a linear separable problem has been well studied and can be reduced to the application of the numerical method to a linear oscillator. See for example [17], where the stability intervals of several symplectic methods are obtained. However, the problems we are considering are not separable and, therefore, we have to revise their usefulness in these cases. In our case, the linear oscillator is not a good model because the use of ABCs results in the appearance of strongly dissipative terms. As an alternative, we consider the test equation ÿ = −ω²y + τẏ, where ω represents a frequency of the original problem and τ < 0 accounts for the dissipative term. If we introduce the variables q = y and p = ẏ, the test equation is transformed into the first order vectorial problem

\begin{bmatrix} \dot q \\ \dot p \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -\omega^2 & \tau \end{bmatrix} \begin{bmatrix} q \\ p \end{bmatrix}.    (8)

The case τ = 0 corresponds to a linear oscillator and therefore to a separable problem. We denote a symplectic PRK by the array (b_1, ..., b_s)[B_1, ..., B_s], as in [20]. If we apply this PRK method to the test problem (8) with time step length Δt we obtain

Y_0 = q_n, \quad Z_1 = p_n,
\text{for } i = 1, \dots, s: \quad Y_i = Y_{i-1} + \Delta t\, b_i Z_i, \quad Z_{i+1} = Z_i + \Delta t\, B_i(-\omega^2 Y_i + \tau Z_i),
q_{n+1} = Y_s, \quad p_{n+1} = Z_{s+1}.    (9)
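For concreteness, a minimal NumPy sketch (ours, not from the paper) of one step of the recursion (9) for the test equation (8); every stage uses only values already computed, which is the explicitness discussed next:

```python
import numpy as np

def prk_step(q, p, dt, omega, tau, b, B):
    """One step of the PRK scheme (9) applied to y'' = -omega**2 * y + tau * y'.
    b and B are the coefficient arrays (b_1, ..., b_s) and (B_1, ..., B_s)."""
    Y, Z = q, p
    for bi, Bi in zip(b, B):
        Y = Y + dt * bi * Z                            # Y_i = Y_{i-1} + dt*b_i*Z_i
        Z = Z + dt * Bi * (-omega**2 * Y + tau * Z)    # Z_{i+1}
    return Y, Z
```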

Then, it is plain that the method continues to be explicit when it is applied to the problem (8). A similar conclusion holds when it is applied to the problem given by (6). We also remark that a symplectic PRK applied to the problem (8) with τ ≠ 0 loses its classical order and has, in general, only order 1. Nonetheless, when it is applied to the problems with absorbing boundary conditions (6), it retains its classical order inside the spatial interval and the order 1 shows up only when the solution is being absorbed at the boundary. Since we do not need to compute the reflection with accuracy, this loss of order is less important. We now write the equations of the stages in (9) in a vectorial way,

\begin{bmatrix} \omega Y_i \\ Z_{i+1} \end{bmatrix}
= \begin{bmatrix} \omega Y_{i-1} + \omega\Delta t\, b_i Z_i \\ Z_i + \Delta t\, B_i \tau Z_i - \omega^2 \Delta t\, B_i (Y_{i-1} + \Delta t\, b_i Z_i) \end{bmatrix}
= M_i(\omega\Delta t, \tau\Delta t) \begin{bmatrix} \omega Y_{i-1} \\ Z_i \end{bmatrix},

where

M_i(z_1, z_2) = \begin{bmatrix} 1 & z_1 b_i \\ -z_1 B_i & 1 - z_1^2 B_i b_i + z_2 B_i \end{bmatrix}.

When the s stages of the PRK method are applied, the stability function is R(z_1, z_2) = M_s(z_1, z_2) M_{s-1}(z_1, z_2) \cdots M_1(z_1, z_2). The growth of the errors is related to the boundedness of the powers of the matrix R(ωΔt, τΔt). A suitable definition of a stability region is not straightforward but, in order to bring the powers of the matrices R(ωΔt, τΔt) under control, we need that (ωΔt, τΔt) ∈ R_B, an auxiliary region given by

R_B = \{ (z_1, z_2) \in \mathbb{R} \times \mathbb{R}^- : \rho(R(z_1, z_2)) \le 1 \},

with ρ being the spectral radius. Now, we want to express this stability condition in terms of the eigenvalues of the matrix in (8), which are given by

\lambda = \frac{\tau \pm \sqrt{\tau^2 - 4\omega^2}}{2}.

Therefore, we define the absolute stability region of the PRK method as

R_A = \left\{ z \in \mathbb{C} : z = \frac{z_2 \pm \sqrt{z_2^2 - 4 z_1^2}}{2},\ (z_1, z_2) \in R_B \right\}.

In this way, if λΔt ∈ R_A for all the eigenvalues of the coefficient matrix in (8), then (ωΔt, τΔt) ∈ R_B, i.e. ρ(R(ωΔt, τΔt)) ≤ 1. Notice that, since R_B ⊂ \mathbb{R} \times \mathbb{R}^-, then R_A ⊂ \mathbb{C}^-. This stability condition can be easily tested for the eigenvalues of the problem (6), but we remark that this study is not sufficient to prove stability. Nevertheless, we think that the information obtained in this way may be relevant. In contrast to the conservative case, we are also interested in the intersection of R_A with the real axis, because the real parts of the eigenvalues of the spatially discretized differential operator are non vanishing and they could be large when the spatial discretization is refined. To obtain in practice a stability region R_A for a concrete PRK method, we first draw its corresponding region R_B and then deduce R_A as the image of R_B under the transformation z = a + ib = (z_2 \pm \sqrt{z_2^2 - 4z_1^2})/2, (z_1, z_2) ∈ R_B. For this, we analyze how the half lines passing through the origin in the half plane \{(z_1, z_2) : z_2 \le 0\} are transformed into half lines included in the complex half plane with negative real parts. It suffices to consider half lines with positive slope because of the symmetry. Every point in the intersection of the boundary of R_B with one of these half lines gives rise to two symmetric points on the boundary of R_A. When the slope of the half line in \{(z_1, z_2) : z_2 \le 0\} is greater than or equal to 2, the transformed half line is real and, otherwise, the transformed half lines are symmetric with respect to the negative real half line. As an example, we consider the fourth order PRK method, see [20], given by

(\eta, \nu, \eta, 0)\left[\frac{\eta}{2}, \frac{\eta+\nu}{2}, \frac{\eta+\nu}{2}, \frac{\eta}{2}\right],    (10)

where η = (2 + 2^{1/3} + 2^{-1/3})/3 and ν = 1 − 2η.
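The auxiliary region R_B can be estimated numerically by scanning the spectral radius of R(z_1, z_2) on a grid. The following sketch (ours; the grid ranges are arbitrary choices) does this for the method (10) and is essentially how a picture like the left panel of Fig. 1 can be produced:

```python
import numpy as np

# Coefficients of the fourth order PRK method (10).
eta = (2 + 2**(1/3) + 2**(-1/3)) / 3
nu = 1 - 2 * eta
b = [eta, nu, eta, 0.0]
B = [eta / 2, (eta + nu) / 2, (eta + nu) / 2, eta / 2]

def M(z1, z2, bi, Bi):
    """Stage matrix M_i(z1, z2)."""
    return np.array([[1.0, z1 * bi],
                     [-z1 * Bi, 1.0 - z1**2 * Bi * bi + z2 * Bi]])

def spectral_radius(z1, z2):
    """Spectral radius of R(z1, z2) = M_s ... M_1."""
    R = np.eye(2)
    for bi, Bi in zip(b, B):
        R = M(z1, z2, bi, Bi) @ R
    return max(abs(np.linalg.eigvals(R)))

# Points (z1, z2) = (omega*dt, tau*dt), z2 <= 0, with rho <= 1 belong to R_B.
z1_grid = np.linspace(0.0, 4.0, 200)
z2_grid = np.linspace(-4.0, 0.0, 200)
in_RB = np.array([[spectral_radius(z1, z2) <= 1.0 + 1e-12 for z1 in z1_grid]
                  for z2 in z2_grid])
```

Points of R_B can then be mapped through z = (z_2 ± \sqrt{z_2^2 - 4z_1^2})/2 to obtain the corresponding estimate of R_A.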

In Fig. 1 we display the regions R_B and R_A for the method (10) (we have obtained these regions for other PRK methods of smaller order and their sizes are similar). The dotted line in R_B is transformed into the dotted real line in R_A. Similarly, the dashed line in R_B is transformed into the two symmetric dashed lines in R_A. Notice that the intersection with the negative real axis has a size similar to that of the intersection with the imaginary axis; if the real parts of the eigenvalues grow more slowly than the imaginary parts, the stability depends only on the imaginary parts. That is, the situation is similar to the conservative case.

3. Time stability of the full discretization

We consider the discrete Euclidean norm in order to bound the full discretization error. Because of the well posedness of the original problem in the continuous Euclidean norm [9], this choice is natural. The error of the full discrete problem with ABCs (6) is formed by the contribution of three types of errors. A first error depends only on the accuracy of the spatial discretization. A second one is the error of absorption committed by the incorporation of the ABCs, whose behavior could be studied using the reflection coefficient in a way similar to [16], but using the Euclidean norm in space and the maximum norm in time, as is done in [5] for the case of Schrödinger-type equations. Finally, the third error accounts for the influence of the numerical time integration. Let us take a time step Δt > 0 and denote t_n = nΔt for n ∈ \mathbb{N}.


Fig. 1. Regions R B (left) and R A (right) of the method (10).

We will use the notation

\tilde{e}_{h,n} = [(u_h(t_n) - u_{h,n})^T, (v_h(t_n) - v_{h,n})^T]^T,

for the global errors which arise when the solution of (6) is approximated by the values [u_{h,n}, v_{h,n}]^T obtained using a time integrator, for example an RK method. The matrix \mathcal{A}_h given in (7) is not normal and, as a consequence, it is not possible to use some of the usual techniques to deduce time stability. Suppose that we can write

\mathcal{A}_h = P_h D_h P_h^{-1},    (11)

where D_h is the diagonal matrix of eigenvalues and P_h is the corresponding matrix of eigenvectors. If the condition

\mu \Delta t \in R_A, \qquad \mu \in \sigma(\mathcal{A}_h),    (12)

holds, we can deduce the following bound for the error due to the time integration (see [2], or [4,5] for the case of a Schrödinger-type equation),

\| \tilde{e}_{h,n} \| \le \kappa_h\, O(\Delta t^{p}),    (13)

where κ_h is

\kappa_h = \| P_h \| \, \| P_h^{-1} \|,    (14)

the Euclidean condition number of the eigenvector matrix P_h, and p is the classical order of the RK method. Since the matrix \mathcal{A}_h is not normal, the value κ_h can grow when the spatial discretization is refined. The bound (13) indicates that the non normality of the matrix \mathcal{A}_h can be compensated by a suitable time integration. The possible bad behavior of the matrix \mathcal{A}_h has been evaluated numerically in [2] by means of the condition number κ_h of the eigenvector matrix and the pseudospectrum [21]. These numerical results show that this weak instability is very small, even in the worst case that we have studied. In conclusion, we need to guarantee that the condition (12) is satisfied. From our previous study of absolute stability regions in Section 2, it is necessary that the two following conditions are satisfied: (a) Re(μ) ≤ 0 for all μ ∈ σ(\mathcal{A}_h); (b) if the time integrator is not A-stable, the sizes of Re(μ), μ ∈ σ(\mathcal{A}_h), must not increase too fast when the spatial discretization is refined. Notice that condition (a) is sufficient for time stability when the integrator is A-stable. We have proved that this condition is satisfied in [2,1]. Since in this work we consider geometric integrators, which are not A-stable in general, we also need to prove that condition (b) is satisfied. We prove in Theorem 7 that the minimum real part of the eigenvalues of (7) behaves as O(N^{1-α}), for every α ∈ (0, 1), whereas the maximum imaginary part behaves as O(N), as N → ∞. For small enough values of N, it is possible to compute the eigenvalues of the matrix (7) numerically. For example, for the interval [−1, 1] as computational domain, the eigenvalues of the matrix (7), for ABC(1, 0), ABC(2, 0), ABC(1, 2) and ABC(2, 2), are displayed in Fig. 2.


Fig. 2. Eigenvalues µ, for ABC(1, 0), ABC(2, 0), ABC(1, 2) and ABC(2, 2). N = 256 (inner), N = 512 (middle), N = 1024 (outer).

Table 1. Minimum of the real parts of the eigenvalues, calculated numerically.

              N = 256    N = 512    N = 1024
ABC(1, 0)     −2.65      −2.94      −3.23
ABC(2, 0)     −4.72      −5.29      −5.87
ABC(1, 2)     −6.55      −7.40      −8.27
ABC(2, 2)     −8.28      −9.40      −10.55

It is clear that the eigenvalues shown in this figure satisfy the previous condition (a). We have marked with a square, a diamond and a circle, for N = 256, 512, 1024 respectively, the minimum of the real parts. It can be seen that, when N increases, the sizes of the real and imaginary parts of the eigenvalues grow with N. In fact, the behavior of the imaginary parts is almost identical to that observed in the conservative case. In order to see the minimum of the real parts more clearly, we display it in Table 1 for the four studied ABCs. Notice that the real parts also grow when the order of absorption increases. On the other hand, the sizes of the real parts grow more slowly than those of the imaginary parts when the spatial discretization is refined. We remark that this good behavior is not necessarily observed for other discretizations of Eq. (1) with absorbing boundary conditions; for example, it seems from Fig. 1 in [6] that the real and imaginary parts of the eigenvalues grow at a similar rate when a spatial pseudospectral discretization is refined.

4. Size of the real and imaginary parts of the eigenvalues

To achieve ABC(2, 2) we consider the fourth order Padé expansion of r_1 in (4), given by a rational function p_1(z)/p_2(z) where p_1 and p_2 are polynomial functions of degree 2. That is,

r_1(z) = \frac{1 - \frac{i}{2}z - \frac{1}{8}z^2}{1 + \frac{i}{2}z - \frac{1}{8}z^2} + O(z^5).
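The O(z^5) agreement between r_1 and this Padé approximant is easy to check numerically; a small sketch (ours, for illustration only):

```python
import numpy as np

def r1(z):
    """Exact symbol (4): r1(z) = 1 - z**2/2 - 1j*z*sqrt(1 - (z/2)**2)."""
    return 1 - z**2 / 2 - 1j * z * np.sqrt(1 - (z / 2)**2)

def r1_pade(z):
    """Fourth order Pade approximant used for ABC(2, 2)."""
    return (1 - 0.5j * z - z**2 / 8) / (1 + 0.5j * z - z**2 / 8)

for z in (0.2, 0.1, 0.05):
    err = abs(r1(z) - r1_pade(z))
    print(f"z = {z:5.2f}   |r1 - Pade| = {err:.3e}   err/z**5 = {err / z**5:.4f}")
# The ratio err/z**5 is approximately constant for small z, consistent with O(z**5).
```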


Replacing r_1 in (3) by this Padé expansion, and taking the inverse Fourier transform, we obtain

u_0 + \frac{h}{2} u_0' + \frac{h^2}{8} u_0'' = u_1 - \frac{h}{2} u_1' + \frac{h^2}{8} u_1'',
u_N + \frac{h}{2} u_N' + \frac{h^2}{8} u_N'' = u_{N-1} - \frac{h}{2} u_{N-1}' + \frac{h^2}{8} u_{N-1}''.

We now replace u_1'' and u_{N-1}'' by their values given by the interior equations of (2) and we obtain

u_0'' = \frac{1}{h^2} (-7 u_0 + 6 u_1 + u_2) - \frac{4}{h} (u_0' + u_1'),
u_N'' = \frac{1}{h^2} (-7 u_N + 6 u_{N-1} + u_{N-2}) - \frac{4}{h} (u_N' + u_{N-1}').

We reach the ordinary differential system (5) for u_h = [u_0, \dots, u_N]^T, with A_h = \frac{1}{h^2} A_{N+1} and B_h = \frac{1}{h} B_{N+1}, where

A_{N+1} = \begin{bmatrix}
-7 & 6 & 1 & & & \\
1 & -2 & 1 & & & \\
 & \ddots & \ddots & \ddots & & \\
 & & 1 & -2 & 1 & \\
 & & & 1 & 6 & -7
\end{bmatrix},
\qquad
B_{N+1} = \begin{bmatrix}
-4 & -4 & & & \\
0 & 0 & & & \\
 & & \ddots & & \\
 & & & 0 & 0 \\
 & & & -4 & -4
\end{bmatrix},

so that only the first and last rows of B_{N+1} are nonzero.
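As a numerical illustration (ours, not from the paper), the matrices A_{N+1} and B_{N+1} and the block matrix (7) for ABC(2, 2) can be assembled directly, and the eigenvalue behavior reported in Fig. 2 and Table 1 can be examined on the interval [−1, 1]:

```python
import numpy as np

def abc22_matrices(N, a=-1.0, b=1.0):
    """A_h = A_{N+1}/h**2 and B_h = B_{N+1}/h for ABC(2, 2) on [a, b]."""
    h = (b - a) / N
    A = np.zeros((N + 1, N + 1))
    for j in range(1, N):                    # interior rows: (1, -2, 1)
        A[j, j - 1:j + 2] = [1.0, -2.0, 1.0]
    A[0, :3] = [-7.0, 6.0, 1.0]              # boundary rows from the ABC(2, 2) derivation
    A[N, N - 2:] = [1.0, 6.0, -7.0]
    B = np.zeros((N + 1, N + 1))
    B[0, :2] = [-4.0, -4.0]
    B[N, N - 1:] = [-4.0, -4.0]
    return A / h**2, B / h

def block_matrix(N):
    """Matrix (7): [[0, I], [A_h, B_h]]."""
    Ah, Bh = abc22_matrices(N)
    n = Ah.shape[0]
    return np.block([[np.zeros((n, n)), np.eye(n)], [Ah, Bh]])

for N in (256, 512, 1024):
    mu = np.linalg.eigvals(block_matrix(N))
    print(N, mu.real.min(), abs(mu.imag).max())
# Expected trend: max |Im(mu)| grows like O(N), while |min Re(mu)| grows much more slowly.
```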

We begin with an elementary result which relates the eigenvalues of the matrix (7) to the solutions of a quadratic eigenvalue problem associated with the original second order problem (5).

Lemma 1. The following conditions are equivalent:
1. Re(μ) ≤ 0 for all μ ∈ σ(\mathcal{A}_h).
2. All the solutions of the quadratic eigenvalue problem

A_{N+1} x + \lambda B_{N+1} x = \lambda^2 x,    (15)

satisfy Re(λ) ≤ 0. Furthermore, if the conditions 1 and 2 are satisfied, the solutions of (15) satisfy the equation

x^* A_{N+1} x + \lambda\, x^* B_{N+1} x = \lambda^2 \|x\|^2,    (16)

where x^* is the conjugate transpose of x.

The proof is given in [1].

Remark 2. Notice that 0 is an eigenvalue with multiplicity 1 of the matrix \mathcal{A}_h, with associated eigenvector [1, ..., 1, 0, ..., 0]^T.

Lemma 3. Let μ ≠ 0, μ ∈ σ(\mathcal{A}_h) and λ = hμ. Then λ^2 = -2 + r + r^{-1}, with r being a root of the polynomial functions

f_{1,N}(r) = r^{2N+2}(r-1)^4 - 2r^{N+3} - 56r^{N+2} - 140r^{N+1} - 56r^{N} - 2r^{N-1} + (r-1)^4,    (17)

f_{2,N}(r) = r^{2N+2}(r-1)^4 + 2r^{N+3} + 56r^{N+2} + 140r^{N+1} + 56r^{N} + 2r^{N-1} + (r-1)^4.    (18)

The proof is given in [1]. In the two following lemmas we deduce that all roots of the previous polynomial functions are included in an annulus which contains the unit circle and whose width decreases as N → ∞. These lemmas are a stronger version of Lemma 3 in [1], where a fixed annulus is considered.

Lemma 4. For every α ∈ (0, 1) there exists K > 1 such that, for big enough N,

|f_{j,N}(z)| > 0 \quad \text{for } |z| < 1 - \frac{K}{N^{\alpha}}, \quad j = 1, 2.    (19)


Proof. Given α ∈ (0, 1), for some N_0, we choose K such that 1 < K < N_0^{\alpha}. In this way K/N^{\alpha} < 1 for N ≥ N_0. For j = 1, 2, N ≥ N_0 and |z| < 1 − K/N^{\alpha} we have the following bound

|f_{j,N}(z)| \ge |z-1|^4 - 2|z|^{N-1} - 56|z|^{N} - 40|z|^{N+1} - 56|z|^{N+2} - 2|z|^{N+3} - |z-1|^4 |z|^{2N+2}
> \left(\frac{K}{N^{\alpha}}\right)^4 - 2\left(1-\frac{K}{N^{\alpha}}\right)^{N-1} - 56\left(1-\frac{K}{N^{\alpha}}\right)^{N} - 40\left(1-\frac{K}{N^{\alpha}}\right)^{N+1} - 56\left(1-\frac{K}{N^{\alpha}}\right)^{N+2} - 2\left(1-\frac{K}{N^{\alpha}}\right)^{N+3} - \left(\frac{K}{N^{\alpha}}\right)^4\left(1-\frac{K}{N^{\alpha}}\right)^{2N+2}
> \left(\frac{K}{N^{\alpha}}\right)^4 - 156\left(1-\frac{K}{N^{\alpha}}\right)^{N-1} - \left(\frac{K}{N^{\alpha}}\right)^4\left(1-\frac{K}{N^{\alpha}}\right)^{2N+2}.

Let δ be a small positive number. Taking a greater value of N_0 if necessary, we have

\left(1-\frac{K}{N^{\alpha}}\right)^{N+3} < \delta \quad \text{for } N \ge N_0,    (20)

and we deduce that

\left(1-\frac{K}{N^{\alpha}}\right)^{2N+2} < \delta \left(1-\frac{K}{N^{\alpha}}\right)^{N-1} \quad \text{for } N \ge N_0.

Consequently,

\left(\frac{K}{N^{\alpha}}\right)^4 - 156\left(1-\frac{K}{N^{\alpha}}\right)^{N-1} - \left(\frac{K}{N^{\alpha}}\right)^4\left(1-\frac{K}{N^{\alpha}}\right)^{2N+2} > \left(\frac{K}{N^{\alpha}}\right)^4 - (156+\delta)\left(1-\frac{K}{N^{\alpha}}\right)^{N-1}.

If we prove that

\left(\frac{K}{N^{\alpha}}\right)^4 > (156+\delta)\left(1-\frac{K}{N^{\alpha}}\right)^{N-1}

for big enough N, the lemma would be proved. Since both terms of this inequality are positive, it suffices to prove that

4\log(K) - 4\alpha\log(N) > \log(156+\delta) + (N-1)\log\left(1-\frac{K}{N^{\alpha}}\right).

Shifting all the addends of the right hand side to the left hand side and using the bound

-\log\left(1-\frac{K}{N^{\alpha}}\right) > \frac{K}{N^{\alpha}},

it suffices to prove that

4\log(K) - 4\alpha\log(N) - \log(156+\delta) + (N-1)\frac{K}{N^{\alpha}} > 0.    (21)

But this inequality is plain for big enough N.

Lemma 5. For every α ∈ (0, 1), there exists K > 1 such that, for big enough N, all roots r of f_{j,N}(z), for j = 1, 2, satisfy 1 − K/N^{\alpha} ≤ |r| ≤ (1 − K/N^{\alpha})^{-1}.

Proof. If r is a root of f_{j,N}(z) for j = 1, 2, from Lemma 4 we deduce that 1 − K/N^{\alpha} ≤ |r| for big enough N. On the other hand, if r is a root of f_{j,N}(z) then r^{-1} is a root too, and we deduce the other bound.

In Theorem 3.6 in [1] it has been proved that Re(μ) ≤ 0. In the following results, we only consider the case μ ≠ 0. We deduce the behavior of the eigenvalues of (7) as N → ∞.
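The root location described in Lemmas 4 and 5 can be inspected numerically for moderate N; a small sketch (ours) that builds the coefficients of f_{1,N} in (17) and computes its roots:

```python
import numpy as np

def f1_coeffs(N):
    """Coefficients (highest degree first) of f_{1,N}(r) in (17)."""
    c = np.zeros(2 * N + 7)                      # degrees 2N+6, ..., 0
    q = np.array([1.0, -4.0, 6.0, -4.0, 1.0])    # (r - 1)**4
    c[:5] += q                                   # r**(2N+2) * (r - 1)**4
    deg = 2 * N + 6
    for d, a in zip((N + 3, N + 2, N + 1, N, N - 1), (2, 56, 140, 56, 2)):
        c[deg - d] -= a
    c[-5:] += q                                  # + (r - 1)**4
    return c

N = 32
r = np.roots(f1_coeffs(N))
print(abs(r).min(), abs(r).max())
# According to Lemma 5, all moduli should approach 1 as N grows.
```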


Lemma 6. The solutions of the quadratic eigenvalue problem (15) satisfy Re(λ) = O(N^{-α}), Im(λ) = 2 + O(N^{-2α}) as N → ∞, for every α ∈ (0, 1).

Proof. From Lemma 3 we have that λ^2 = (r-1)^2/r and, if we consider r = ρe^{iθ} with ρ < 1, it is satisfied that θ ∈ (−π, π) and, when we take the square root with positive real part, we have

\lambda = \frac{r-1}{r^{1/2}} = \frac{\rho e^{i\theta} - 1}{\rho^{1/2} e^{i\theta/2}} = (\rho^{1/2} - \rho^{-1/2})\cos(\theta/2) + i(\rho^{1/2} + \rho^{-1/2})\sin(\theta/2).

Therefore, Re(λ) = (ρ^{1/2} − ρ^{−1/2})\cos(θ/2) and Im(λ) = (ρ^{1/2} + ρ^{−1/2})\sin(θ/2). From Lemma 5, it can be deduced that

|{\rm Re}(\lambda)| \le \left(1-\frac{K}{N^{\alpha}}\right)^{-1/2} - \left(1-\frac{K}{N^{\alpha}}\right)^{1/2},
\qquad
|{\rm Im}(\lambda)| \le \left(1-\frac{K}{N^{\alpha}}\right)^{1/2} + \left(1-\frac{K}{N^{\alpha}}\right)^{-1/2}.

The result is achieved considering the Taylor expansions, for |z| < 1,

f(z) = (1-z)^{1/2} = 1 - \tfrac{1}{2}z - \tfrac{1}{8}z^2 + O(z^3),
\qquad
g(z) = (1-z)^{-1/2} = 1 + \tfrac{1}{2}z + \tfrac{3}{8}z^2 + O(z^3).

The result of Lemma 6 may be numerically corroborated by computing the solutions λ of (15). In Fig. 3 we show these solutions for N = 256, 512 and 1024 for the four ABCs that we have studied in [2]. It can be observed that, when N increases, the minimum of the real part of the eigenvalues λ goes to zero. On the other hand, the maximum and the minimum of the imaginary parts of the eigenvalues λ tend to 2 and −2 respectively when N increases.

Theorem 7. The eigenvalues μ of the matrix \mathcal{A}_h satisfy Re(μ) = O(N^{1-α}), Im(μ) = O(N), as N → ∞, for every α ∈ (0, 1).

Proof. It is plain from Lemma 6 because μ = λ/h.



The minimum real part of the eigenvalues μ is negative, as deduced from Theorem 3.6 in [1], and, from Theorem 7, its size increases more slowly than the maximum of the imaginary parts of the eigenvalues as N increases, which agrees with the behavior displayed in Table 1 and Fig. 2. Although (19) is satisfied for every α ∈ (0, 1) if we take big enough N and a suitable constant K, we are only interested in the values of α which will be observed for the values of N used in practice. For this, we take the values N = 128, 256, 512, 1024, which are in a useful range. We have numerically computed the corresponding values of the minimum of the real parts of the numbers λ which are solutions of (15) for ABC(1, 0), ABC(2, 0), ABC(1, 2) and ABC(2, 2), and we display them in Fig. 4. The slopes of the lines are very similar for all the ABCs and correspond approximately to the value α = 0.8. If we take α = 0.8, δ = 0.05 and N ≥ 20, it is possible to verify that, when the constant K = 7.5 is used, (20) and (21) hold. This behavior allows us, in practice, to use the symplectic integrators considered in the previous sections without stability troubles for reasonable values of the time step size Δt, although these integrators are not A-stable.
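The last claim is easy to check directly; a small sketch (ours) that evaluates conditions (20) and (21) for α = 0.8, δ = 0.05, K = 7.5 and a range of N:

```python
import numpy as np

alpha, delta, K = 0.8, 0.05, 7.5

for N in (20, 128, 256, 512, 1024):
    q = K / N**alpha                          # must be < 1 for the bounds to apply
    lhs20 = (1.0 - q)**(N + 3)                # condition (20): should be < delta
    lhs21 = (4 * np.log(K) - 4 * alpha * np.log(N)
             - np.log(156 + delta) + (N - 1) * q)   # condition (21): should be > 0
    print(N, q < 1, lhs20 < delta, lhs21 > 0)
```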


Fig. 3. Evolution of the solutions λ of the quadratic eigenvalue problem (15), for ABC(1, 0), ABC(2, 0), ABC(1, 2) and ABC(2, 2). N = 256 (inner), N = 512 (middle), N = 1024 (outer).

Fig. 4. Minimum of the real part of the eigenvalues calculated numerically for N = 128, N = 256, N = 512 and N = 1024.

5. Numerical experiments

5.1. The test problem

We consider the problem (1) and the computational window [−L, L] with absorbing boundary conditions. We take L = 64 and t ∈ [0, T]. The problem is subject to the initial conditions u(x, 0) = f(x) = e^{-B(x+32)^2} and u_t(x, 0) = 0. We choose B = (22 \log(10))/(L/3)^2 to achieve e^{-B(L/3)^2} < 10^{-20} and, in practice, we can consider the support of u(x, 0) to be contained in [−32 − L/3, −32 + L/3].
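A minimal setup of this test problem in NumPy (ours; the number of grid points is an arbitrary choice) could read:

```python
import numpy as np

L = 64.0
B = 22.0 * np.log(10.0) / (L / 3.0)**2     # exp(-B*(L/3)**2) = 1e-22 < 1e-20

def f(x):
    """Initial datum u(x, 0): a Gaussian centered at x = -32."""
    return np.exp(-B * (x + 32.0)**2)

N = 1000
x = np.linspace(-L, L, N + 1)
u0, v0 = f(x), np.zeros(N + 1)             # u(x, 0) and u_t(x, 0) = 0
```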


Fig. 5. Solution of the test problem for t = 0, t = 28, t = 50 and t = 92.

The solution is initially a Gaussian wave located at x = −32. From the D'Alembert formula u(x, t) = (f(x + t) + f(x − t))/2, we deduce that two waves are formed that travel with velocity 1 to the left and to the right respectively. The former arrives at the boundary first and has to be absorbed while the other one continues traveling. Then, it seems clear that the absorption must be acting while the other part of the solution is still inside the interval and, therefore, the geometric integration remains necessary. This behavior of the solution, displayed in Fig. 5, justifies the joint use of a symplectic method and ABCs.

5.2. Symplectic versus non symplectic time integration

The first issue we are interested in is checking whether a geometric integrator has an advantage over a non geometric integrator. Symplectic methods do, in general, nearly preserve the energy, i.e. a modified energy is preserved up to an exponentially small error term, or there is a spatially discrete energy conservation law [7,14,20]. As spatially discrete energy inside the computational interval, we consider

E_h(t) = \frac{h}{2}\left( v_h^T v_h - u_h^T \tilde{A}_h u_h \right),    (22)

where

\tilde{A}_h = \frac{1}{h^2} \begin{bmatrix}
-1 & 1 & & & \\
1 & -2 & 1 & & \\
 & \ddots & \ddots & \ddots & \\
 & & 1 & -2 & 1 \\
 & & & 1 & -1
\end{bmatrix}.    (23)

In [2], it was proved that the ordinary differential system similar to (5) corresponding to ABC(1, 0) is well posed in the sense that E_h(t) is decreasing for t ≥ 0. In this paper, we also use E_h(t) to measure the spatially discrete energy of (5) inside the computational interval. It is expected that the symplectic method preserves the spatially discrete energy inside the spatial interval even for long time integrations.
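For reference, a small sketch (ours) of the spatially discrete energy (22)-(23), applying the matrix (23) in a matrix-free way:

```python
import numpy as np

def discrete_energy(u, v, h):
    """Spatially discrete energy (22): E_h = (h/2) * (v^T v - u^T A~ u)."""
    Au = np.empty_like(u)
    Au[0] = (-u[0] + u[1]) / h**2            # first row of (23)
    Au[1:-1] = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2
    Au[-1] = (u[-2] - u[-1]) / h**2          # last row of (23)
    return 0.5 * h * (v @ v - u @ Au)
```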


Fig. 6. Energy versus time. N = 1000, Δt = 0.5.

We select as symplectic method the 3-stage fourth order symmetric DIRK method introduced by Sanz-Serna and Abia in [19]. For this method, a time step of size Δt is the result of the concatenation of three implicit midpoint rule steps, with time steps b_1Δt, b_2Δt and b_1Δt, where

b_1 = \frac{1}{3}\left(2 + 2^{1/3} + 2^{-1/3}\right), \qquad b_2 = 1 - 2 b_1.    (24)
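For the linear system (6), each implicit midpoint substep amounts to solving one linear system, so one step of this DIRK can be sketched as follows (ours, for illustration; in practice the factorizations of the three matrices would be computed once and reused):

```python
import numpy as np

def midpoint_step(Acal, w, dt):
    """Implicit midpoint rule for w' = Acal w:
    w_next = (I - dt/2 * Acal)^(-1) (I + dt/2 * Acal) w."""
    I = np.eye(Acal.shape[0])
    return np.linalg.solve(I - 0.5 * dt * Acal, (I + 0.5 * dt * Acal) @ w)

def dirk_step(Acal, w, dt):
    """One step of the symplectic DIRK (24): midpoint substeps b1*dt, b2*dt, b1*dt."""
    b1 = (2 + 2**(1/3) + 2**(-1/3)) / 3
    b2 = 1 - 2 * b1
    for bi in (b1, b2, b1):
        w = midpoint_step(Acal, w, bi * dt)
    return w
```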

The stability region was studied in [8] and it was deduced that, although the method is not A-stable, its stability region includes the left half plane Re(z) ≤ 0, except for a little island which intersects the real axis at −1.1344… and −1.2006…. This island appears because b_2 < 0. Obviously, this region is suitable for Hamiltonian problems and, from our results in Section 3, it is also suitable for the problems with ABCs given in (6). On the other hand, we choose as non symplectic method the 3-stage singly DIRK method of fourth order given by the Butcher tableau

\begin{array}{c|ccc}
\frac{1+\nu}{2} & \frac{1+\nu}{2} & & \\
\frac{1}{2} & -\frac{\nu}{2} & \frac{1+\nu}{2} & \\
\frac{1-\nu}{2} & 1+\nu & -1-2\nu & \frac{1+\nu}{2} \\
\hline
 & \frac{1}{6\nu^2} & 1-\frac{1}{3\nu^2} & \frac{1}{6\nu^2}
\end{array}    (25)

where ν = (2/\sqrt{3}) \cos(\pi/18). This method is A-stable [15]. We have calculated the energy along the time interval with both methods for N = 1000, Δt = 0.5 and final time T = 120. In Fig. 6 we can see that the symplectic method preserves the energy (22) inside the spatial interval. When part of the numerical solution reaches the boundary and the ABCs are acting, the energy decreases. After this absorption, there is another time interval where the spatially discrete energy remains constant, corresponding to the time when the other part of the solution is still inside the spatial interval. Finally, the spatially discrete energy vanishes when the remaining solution reaches the other boundary. On the contrary, the non symplectic method does not preserve the spatially discrete energy inside the spatial interval. In Fig. 7, we show the absolute error of the energy for the two considered DIRK methods. The size of the energy error of the symplectic method cannot be distinguished in Fig. 7 when the solution is not being absorbed. In fact, in the first part it is approximately 4 × 10^{-5} and, between t = 40 and t = 90, it is roughly 2.4 × 10^{-5}.


Fig. 7. Energy error versus time. N = 1000, Δt = 0.5.

Fig. 8. Relative errors in Euclidean norm versus time. N = 1000, Δt = 0.5.

Since the remaining energy after the first absorption is one half of the initial energy, the absorption error is not transmitted inside the computational window. The D'Alembert formula can be used to obtain the exact solution of the test problem, and thus the error of the numerical solution can be computed. In Fig. 8, we show the relative errors in Euclidean norm. It can be observed that, as a consequence of its better behavior inside the spatial interval, the errors produced by the geometric method are lower than the errors of the non geometric one, and the difference between both errors grows with time. This experiment clearly shows the advantage of using geometric methods simultaneously with ABCs.

5.3. Implicit symplectic versus explicit symplectic time integration

Our concern is now to compare how two symplectic methods work, one of them implicit and the other one explicit. We consider the symplectic DIRK (24) and the PRK method (10) studied in Section 2. The PRK method has the advantage of being explicit and, as a consequence, the disadvantage of a bounded stability region, which forces us to take a small time step size. In Fig. 9 we display the relative errors in discrete Euclidean norm for the DIRK method and the PRK method. We notice that the time step size is smaller than the ones used in the previous subsection because of the restriction imposed by the finite stability region of the PRK method.


Fig. 9. Relative errors in Euclidean norm versus time. N = 1000, Δt = 0.025.

Fig. 10. Maximum relative error in Euclidean norm versus cpu time for T = 110 with N = 500 (left) and N = 2000 (right).

Since the symplectic DIRK and the PRK have different costs and need different time step sizes to stay inside their stability regions, it is more appropriate to compare these two methods in terms of computational cost. For different step sizes for both methods, we calculate the maximum relative error in discrete Euclidean norm over the time interval. In Fig. 10 we display the maximum relative error in Euclidean norm, for T = 110, versus CPU time, for N = 500 and N = 2000. For the DIRK method, the maximum time step size which we used was Δt = 0.5 for both values of N; for the PRK method, we used Δt = 0.07 for N = 500 and Δt = 0.018 for N = 2000. It can be seen in Fig. 10 that, for N = 500, the implicit method is more efficient when not too much precision is requested, because larger step sizes can be used. But when high precision is requested, the explicit method has a lower cost. On the other hand, for N = 2000, the implicit method is always more efficient than the explicit one. The conclusion, as we predicted in the previous sections, is that the implicit method does not have stability problems.


On the contrary, the explicit method needs to take a small step size in time. This is a situation very similar to the conservative case.

6. Conclusions

We have shown that, when geometric integration and absorbing boundary conditions are used simultaneously for a partial differential equation, the stability analysis of the full discretization must be revisited. In fact, we cannot expect the geometric time integrator to be A-stable, even when its imaginary stability interval is the whole imaginary axis. We deduce that, apart from being negative, the size of the real parts of the eigenvalues of the discretized differential operator must be kept under control to avoid possible instabilities. The previous results on absolute stability are valid for a wide range of problems and discretizations. For a case study, the 1D wave equation discretized by means of finite differences, we are able to prove that the eigenvalues of the semidiscrete operator behave properly when the spatial discretization is refined and the order of absorption increases. In other work, we have also considered related problems of more practical interest. Specifically, we have obtained ABCs for a coupled wave problem along with a fourth order spatial discretization in [3].

Acknowledgment

This work has obtained financial support from Ministerio de Economía y Competitividad, project MTM2011-23417, cofinanced by FEDER funds.

References

[1] I. Alonso-Mallo, A.M. Portillo, A proof of the well posedness of discretized wave equation with an absorbing boundary condition, J. Numer. Math. 22 (4) (2014) 271–287.
[2] I. Alonso-Mallo, A.M. Portillo, On the well posedness of discretized wave equations with local high order absorbing boundary conditions, unpublished report, 2010.
[3] I. Alonso-Mallo, A.M. Portillo, High order full discretizations of coupled wave equations with absorbing boundary conditions and geometric integration, J. Comput. Phys. 265 (2014) 16–33.
[4] I. Alonso-Mallo, N. Reguera, Weak ill-posedness of spatial discretizations of absorbing boundary conditions for Schrödinger-type equations, SIAM J. Numer. Anal. 40 (2002) 134–158.
[5] I. Alonso-Mallo, N. Reguera, Discrete absorbing boundary conditions for Schrödinger-type equations. Construction and error analysis, SIAM J. Numer. Anal. 41 (2003) 1824–1850.
[6] F.S.V. Bazán, Chebyshev pseudospectral method for wave equation with absorbing boundary conditions that does not use a first order hyperbolic system, Math. Comput. Simul. 80 (2010) 2124–2133.
[7] T.J. Bridges, S. Reich, Numerical methods for Hamiltonian PDEs, J. Phys. A: Math. Gen. 39 (2006) 5288–5289.
[8] J. de Frutos, J.M. Sanz-Serna, An easily implementable fourth-order method for the time integration of wave problems, J. Comput. Phys. 103 (1992) 160–168.
[9] H.O. Fattorini, Second Order Linear Differential Equations in Banach Spaces, North-Holland Mathematics Studies, vol. 108, Notas de Matemática 99, North-Holland Publishing Co., Amsterdam, 1985.
[10] D. Givoli, High-order local non-reflecting boundary conditions: a review, Wave Motion 39 (4) (2004) 319–326.
[11] T. Hagstrom, Radiation boundary conditions for the numerical simulation of waves, Acta Numer. 8 (1999) 47–106.
[12] T. Hagstrom, New results on absorbing layers and radiation boundary conditions, in: Topics in Computational Wave Propagation, Lect. Notes Comput. Sci. Eng., vol. 31, Springer, Berlin, 2003, pp. 1–42.
[13] T. Hagstrom, A. Mar-Or, D. Givoli, High-order local absorbing conditions for the wave equation: extensions and improvements, J. Comput. Phys. 227 (6) (2008) 3322–3357.
[14] E. Hairer, C. Lubich, G. Wanner, Geometric Numerical Integration. Structure-Preserving Algorithms for Ordinary Differential Equations, second ed., Springer, Berlin, 2006.
[15] E. Hairer, G. Wanner, Solving Ordinary Differential Equations II. Stiff and Differential-Algebraic Problems, Springer, Berlin, 2002.
[16] L. Halpern, Absorbing boundary conditions for the discretization schemes of the one-dimensional wave equation, Math. Comp. 38 (1982) 415–429.
[17] R. McLachlan, Symplectic integration of Hamiltonian wave equations, Numer. Math. 66 (1994) 465–492.
[18] B.E. Moore, A modified equations approach for multi-symplectic integration methods (Ph.D. thesis), University of Surrey, 2003, pp. 34–36.
[19] J.M. Sanz-Serna, L. Abia, Order conditions for canonical Runge–Kutta schemes, SIAM J. Numer. Anal. 28 (1991) 1081–1096.
[20] J.M. Sanz-Serna, M.P. Calvo, Numerical Hamiltonian Problems, Chapman and Hall, London, 1994.
[21] L.N. Trefethen, Pseudospectra of linear operators, SIAM Rev. 39 (1997) 383–406.
[22] S.V. Tsynkov, Numerical solution of problems on unbounded domains. A review, Appl. Numer. Math. 27 (4) (1998) 465–532.