Journal of Computational and Applied Mathematics 299 (2016) 194–206


SERK2v3: Solving mildly stiff nonlinear partial differential equations

B. Kleefeld [a], J. Martín-Vaquero [b,∗]

[a] Brandenburgische Technische Universität Cottbus, Institut für Mathematik, D-03044 Cottbus, Germany
[b] ETS Ingenieros industriales, Universidad de Salamanca, E37700, Bejar, Spain

Article info

Article history: Received 19 May 2015; received in revised form 14 November 2015.

Keywords: Higher-order discretizations in space; Multi-dimensional partial differential equations; Non-smooth problems; Stabilized explicit Runge–Kutta methods; Variable-step length

Abstract

Multidimensional nonlinear parabolic partial differential equations (PDEs) appear in a large variety of disciplines. The scientific literature usually advises against explicit ODE solvers for the integration of stiff problems. In this manuscript, however, we show that some explicit methods can be very efficient for specific problems. Stabilized explicit Runge–Kutta (SERK) methods are a class of explicit methods with extended stability domains along the negative real axis. The function must be evaluated s times per step, but the stability region grows as O(s²); hence the computational cost is O(s) times lower than for traditional explicit algorithms, and moderately stiff problems can be integrated with simple explicit evaluations. SERK schemes can easily be applied to many different classes of problems with large dimensions, and they have low memory demand. Since these methods are explicit, they do not require algebra routines to solve the large nonlinear systems associated with the ODEs, and they are therefore especially well suited for method of lines (MOL) discretizations of parabolic multi-dimensional PDEs. Additionally, the stability domain is adapted precisely to the spectrum of the problem at the current time of integration in an optimal way, that is, with a minimal number of additional stages; hence the step length can be adapted at practically no extra cost. In this work, we study several strategies for changing the time step for non-smooth functions and propose a new technique. This procedure improves the numerical results, especially when the data are non-smooth. We also derive two algorithms to estimate the spectral radius in the new code: a nonlinear power method and a procedure based on the Gershgorin theorem.

Although SERK algorithms are only second-order schemes, in this manuscript we propose to use them after higher-order discretizations in space to obtain more accurate solutions efficiently and usually faster (since these higher-order schemes allow the use of a smaller number of nodes). Numerical experiments shown in the paper support these conclusions.

© 2015 Elsevier B.V. All rights reserved.

∗ Corresponding author. E-mail addresses: [email protected] (B. Kleefeld), [email protected] (J. Martín-Vaquero).
http://dx.doi.org/10.1016/j.cam.2015.11.045

1. Introduction

In recent years, many numerical methods have been proposed to solve multidimensional nonlinear parabolic partial differential equations (PDEs). Equations of this kind appear in a large variety of chemical reactions, physical models, industrial processes and financial problems, for example. In many cases, they are spatially discretized using the method of lines and transformed into very large systems of stiff initial-value ordinary differential equations (ODEs). The numerical integration of these problems can be computationally demanding, hence it is desirable to employ integrators efficiently. A widely used approach to solve time-dependent and multi-dimensional PDEs is to first discretize the space variables to obtain a system of ODEs of the form

    y′ = f(t, y),   y(t_0) = y_0,   (1)

where y, y_0 ∈ R^n, t ≥ t_0, and f(t, y) takes values in R^n.

Implicit schemes have frequently been proposed to solve many of these problems (including the VODE [1], LSODE [2], DASSL [3] and MEBDF [4] algorithms, or the RADAU [5], STRIDE [6], DIRK and SDIRK codes [7]). They have better stability properties than traditional explicit algorithms. However, such methods require the solution of large systems of nonlinear equations in every iteration. This is computationally very expensive, and very efficient algebra routines and machines are necessary. For complex PDEs discretized with many spatial nodes, the memory demand makes it impossible to use simple computers, so traditional implicit schemes may require parallelization on workstation clusters. On the other hand, traditional explicit schemes require the temporal step size to be very small to ensure numerical stability. Although they do not require very complex algebra routines, the number of function evaluations can become large, and the whole process of integration then gets very time-consuming.

Recently, exponential integrators, frequently called exponential time differencing (ETD) schemes, have emerged as a potential alternative class of methods to solve large stiff problems efficiently. When first introduced, exponential methods were considered computationally unattractive due to the high cost of approximating exponential functions of very large matrices. However, these methods started to draw attention when the use of Padé approximations and/or Krylov projection techniques allowed these matrix exponential terms to be evaluated efficiently. Since then, a number of exponential integrators have been proposed for general stiff systems; see [8–12], for example. In this work, we show numerical comparisons between some well-known traditional explicit and implicit methods and ETD schemes.
We also propose another type of ODE integrator to solve some multidimensional parabolic nonlinear PDEs, namely stabilized explicit Runge–Kutta (SERK) methods. These schemes are explicit methods with extended stability domains, usually along the negative real axis. As explained previously, they require more function evaluations per step than a traditional explicit method: two of them are needed to obtain second-order convergence, and the rest are used to quadratically extend the region where they are stable. But they can easily be applied to large problem classes, have low memory demand and are especially suited for method of lines (MOL) discretizations of two- and three-dimensional parabolic PDEs ([13–17] and references therein). Therefore, the problems of interest for SERK methods are those for which the eigenvalues of the Jacobian matrix ∂f/∂y are known to lie in a long narrow strip along the negative real axis. This situation typically arises when discretizing parabolic equations or hyperbolic–parabolic equations such as advection–diffusion–reaction equations (with dominating diffusion or reaction).

In this paper, we propose the efficient use of SERK methods to solve this kind of problem. Usually, second-order finite differences have been considered as spatial discretizations to explain the usefulness of these schemes. In this work, we show the behavior of these algorithms when different procedures, also with faster convergence, are employed. Additionally, these algorithms allow adaptation of the time step at practically no extra cost. We also consider the case of non-smooth functions, for which the mechanisms that choose the step length are very sensitive. Hence, we propose different procedures to choose both the step length in time and then the number of stages, and we compare the numerical results.

This paper is organized as follows.
In Section 2 we briefly describe the procedure to obtain polynomials with large stability regions and derive Runge–Kutta methods with these polynomials as stability functions. These methods have been implemented in a variable-step, variable-number-of-stages code presented in Section 3. Several procedures for choosing the step length are also described in that section. Finally, Section 4 contains numerical experiments with different spatial discretizations.

2. SERK methods

For many problems of large dimension, usually not extremely stiff, whose eigenvalues are known to lie in a certain region (in general along the negative real line, although in some cases they are close to the imaginary axis or in cross-shaped domains; see [5], for example), explicit methods with large stability domains can be very efficient (see the papers on DUMKA, ROCK or RKC [13–17] and references therein). Their advantage over fully or partly implicit methods is that they do not require the solution of large systems of nonlinear equations when solving multi-space-dimensional problems. They are much simpler, can easily be applied to large problem classes and have low memory demand. Additionally, they are very fast when all eigenvalues of the Jacobian lie in the region described above, and they are especially well suited for PDEs that are discretized in space. We use these methods for multi-dimensional parabolic partial differential equations; hence, in this paper we are only interested in stabilized explicit Runge–Kutta methods with extended stability domains along the negative real axis.

Fig. 1. Comparison of the absolute stability regions (in gray) of Heun, DOPRI5 and SERK methods with 10 and 20 stages.

For the construction of this kind of algorithm, two problems need to be solved: (i) finding stability functions with extended stability domains along the negative real axis; (ii) finding explicit Runge–Kutta methods with those polynomials as stability functions.

2.1. Stability functions with extended stability domains along the negative real axis

Several strategies for approximating the optimal stability polynomials have been proposed. DUMKA methods use optimal polynomials without a recurrence relation [15], RKC methods are based on non-optimal polynomials with a recurrence relation [18], and ROCK methods have near-optimal polynomials with a recurrence relation [19,13]. We derived SERK2v2 in [20] by first obtaining polynomials with large stability regions using a modified Remes-type algorithm (as explained in [21]). The linear systems are badly conditioned for large values of s, which we solved by using Mathematica with very high working precision. In this way, we calculated second-order polynomials R_s(z) of degree 10, 20, 30, 40, 50, 60, 80, 100, 150, 200 and 250 with damping η = 0.975 (the length of the stability region along the real axis decreases proportionally to this parameter, but on the other hand the stability region is wider; see [5]). Two-stage second-order Runge–Kutta methods are stable on the real axis only in the interval [−2, 0], and other classical and more sophisticated algorithms have similar regions; however, the SERK methods studied in [20,14] are stable in [−0.8s², 0]. In Fig. 1 we compare the stability regions of the popular Heun and DOPRI5 (an explicit 6-stage method proposed by Dormand and Prince) methods with our SERK methods with 10 and 20 stages.

2.2. Derivation of the Runge–Kutta method

Next, we need to find explicit Runge–Kutta methods with these stability regions. A first idea was mentioned by V.K. Saul'ev [22]: the procedure is to write

    R_s(z) = ∏_{i=1}^{s} (1 + α_i z),   α_i = −1/z_i,

where the z_i are the roots of the polynomial, and to represent the Runge–Kutta method as a composition of explicit Euler steps

    g_0 = y_0,   g_i = g_{i−1} + h α_i f(g_{i−1}),   i = 1, …, s,   y_1 = g_s.
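Saul'ev's factorization can be sketched in a few lines. The toy below is our own illustration (not code from the paper): it factors the undamped first-order shifted Chebyshev polynomial R_s(z) = T_s(1 + z/s²), whose stability interval is [−2s², 0], and composes the corresponding Euler substeps for the linear test equation y′ = λy.

```python
import numpy as np

def sauliev_step(f, y0, h, s):
    """One step as a composition of s explicit Euler substeps whose
    factors reproduce R_s(z) = T_s(1 + z/s^2) (first-order, undamped)."""
    i = np.arange(1, s + 1)
    # roots z_i of T_s(1 + z/s^2); all lie in [-2 s^2, 0)
    z = s**2 * (np.cos((2 * i - 1) * np.pi / (2 * s)) - 1.0)
    g = y0
    for zi in z:
        g = g + h * (-1.0 / zi) * f(g)   # Euler substep with alpha_i = -1/z_i
    return g

lam = -1000.0                 # stiff linear test problem y' = lam * y
s, h = 25, 1.0                # h*lam = -1000 lies inside [-2 s^2, 0] = [-1250, 0]
y1 = sauliev_step(lambda y: lam * y, 1.0, h, s)
print(abs(y1) <= 1.0)         # stable although h is 500x Euler's limit 2/|lam|
```

For a scalar linear problem the product of the substep factors equals R_s(hλ) exactly, whatever the ordering; for systems, the substep associated with the root of smallest magnitude amplifies intermediate values strongly, which is precisely the internal-stability defect discussed next.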


Obviously, there is a major problem with this technique: for the smallest root we obtain a very large Euler step, which produces oscillations and stability problems. Lebedev improved these algorithms by grouping the roots symmetrically two-by-two and representing each quadratic factor by a two-stage scheme, but his methods still have problems with some propagation of rounding errors in the last stages [13,15–17]. An elegant idea by van der Houwen and Sommeijer [23], based on shifted Chebyshev polynomials and a three-term recursion formula for the internal stages, reduced these difficulties with internal stability and propagation of rounding errors.

When first introduced, stabilized explicit Runge–Kutta methods were first order; however, some second- (and higher-) order methods have been proposed lately, and in these cases the interval of absolute stability is necessarily shorter (see [18,19,21], for example). Most of these new schemes also incorporate damping, that is, the width of the stability domain is not 0. However, all the methods with order higher than two have (so far) difficulties with internal stability and propagation of rounding errors.

In [20], the authors proposed a new algorithm based on these schemes for parabolic equations. We calculated second-order polynomials with damping whose stability interval has length 0.8s² (instead of 0.65s² as in [18]). Later, we write the polynomials R_s(z) as linear combinations of the shifted Chebyshev polynomials and their products. Then, we use the three-term recurrence relation among Chebyshev polynomials to construct the stabilized explicit Runge–Kutta methods with these shifted Chebyshev polynomials as stability functions. These new methods have better stability properties than the DUMKA [24] or ROCK methods [19], which have difficulties when some of the eigenvalues of the Jacobian are smaller than −10^5 or −10^6.
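The three-term recursion idea is easy to sketch. Again this is only an illustration under simplifying assumptions (the undamped first-order polynomial T_s(1 + z/s²) instead of the damped second-order polynomials the SERK codes actually use): the Chebyshev recurrence T_j(x) = 2x T_{j−1}(x) − T_{j−2}(x) becomes a recursion over internal stages.

```python
import numpy as np

def chebyshev_recursion_step(f, y0, h, s):
    """One step via the three-term recurrence; for linear f this returns
    exactly T_s(1 + h*lam/s^2) * y0, with bounded internal stages."""
    g_prev, g = y0, y0 + (h / s**2) * f(y0)      # stages g_0 = T_0, g_1 = T_1
    for _ in range(2, s + 1):
        # nonlinear analogue of T_j = 2*(1 + z/s^2)*T_{j-1} - T_{j-2}
        g_prev, g = g, 2.0 * g + (2.0 * h / s**2) * f(g) - g_prev
    return g

lam = -1000.0
s, h = 20, 0.5                      # h*lam = -500 inside [-2 s^2, 0] = [-800, 0]
y1 = chebyshev_recursion_step(lambda y: lam * y, 1.0, h, s)
```

Unlike the root-by-root Euler composition, every intermediate stage satisfies |g_j| = |T_j(1 + hλ/s²)| ≤ 1 for the linear test problem, which is why the recursion is far less sensitive to propagation of rounding errors.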
Using similar techniques, together with extra stages and rooted-tree theory or Richardson extrapolation, methods of order higher than two can be derived (in a similar way as Bhatt and Khaliq did in [8]).

3. Variable-step and number of stages algorithm

One advantage of stabilized explicit Runge–Kutta methods is that a variable step length produces almost no additional cost (only one extra evaluation of the function in the whole interval of integration). Hence, in a variable-step code the step size can be adjusted in every iteration and, depending on the problem, can be increased, generating a faster code while controlling the error. As usual in other Chebyshev codes (or stabilized explicit methods), we use a family of second-order methods with different numbers of stages. In every iteration, we first select the step size in order to control the local error. Afterward, we select a stage number such that the stability properties are satisfied.

3.1. Step size estimation

An adaptive step size selection is an important feature of an efficient integrator for stiff ODEs. Many procedures have been considered to control the step size (and herewith the error). In this paper we compare two ways of choosing the step length. Additionally, we study how to choose the time step after a rejection. The two main formulae that we have considered to control the step size are the following:

(i) Control formula 1: a step size strategy with memory that we previously used in [20,14]:

    h_{n+1} = ϱ h_n (1/‖est_{n+1}‖_2)^{1/2} ((h_n ‖est_n‖_2)/(h_{n−1} ‖est_{n+1}‖_2))^{1/2}.   (2)

In this formula, ‖est_n‖_2 is a measure of the error after the step, which we calculate as

    ‖est_n‖_2 = sqrt( (1/n) Σ_{i=1}^{n} (err_i / tol)² ),   (3)

where err_i denotes the ith component of the vector y_1 − ỹ_1, with ỹ_1 = y_0 + h f(y_1). This strategy was first considered by H.A. Watts [25] and K. Gustafsson [26], and other well-known stabilized explicit Runge–Kutta methods such as ROCK2 or ROCK4 use Eqs. (2) and (3) to adapt the time step (see [19,13]).

(ii) Control formula 2: in this paper, we also propose another strategy with memory, appearing in [27]:

    h_{n+1} = ϱ h_n ‖est_{n+1}‖_2^{−α} ‖est_n‖_2^{β}.   (4)

We choose the same values for α and β as suggested in [27], namely α = 0.35 and β = 0.2, and in this case ỹ_1 = y_0 + h(γ_1 f(y_0) + γ_2 f(y_1)), with γ_1 = 0.6 and γ_2 = 0.4.

Since we want to prevent rejections, with both control formulae we also employ thresholds on the increase or reduction of the time step,

    0.1 ≤ h_{n+1}/h_n ≤ 10,

which still allow the step size to increase or decrease quickly.
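A hedged sketch of control formula 2 together with the error measure (3) and the thresholds above; the safety factor ϱ = 0.8 is our assumption here, since its value is not fixed in this excerpt.

```python
import numpy as np

def est_norm(err, tol):
    # error measure (3): ||est||_2 = sqrt( (1/n) * sum_i (err_i/tol)^2 )
    return np.sqrt(np.mean((np.asarray(err) / tol) ** 2))

def next_step(h_n, est_np1, est_n, rho=0.8, alpha=0.35, beta=0.2):
    # control formula 2, Eq. (4):
    #   h_{n+1} = rho * h_n * ||est_{n+1}||^(-alpha) * ||est_n||^beta
    ratio = rho * est_np1 ** (-alpha) * est_n ** beta
    ratio = min(10.0, max(0.1, ratio))   # threshold 0.1 <= h_{n+1}/h_n <= 10
    return h_n * ratio

# if both error measures sit exactly at the tolerance, the step is only
# reduced by the safety factor: rho * 0.01 = 0.008
h1 = next_step(0.01, 1.0, 1.0)
```

Note that the clipping makes the controller robust: even a near-zero error estimate cannot enlarge the step by more than a factor of 10 in a single iteration.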


Non-smooth functions: In some applications the PDEs are non-smooth, have mismatched initial and boundary conditions, or one (or more) of the functions even has jumps. Such a discontinuity typically produces a significant growth in the number of rejected steps, for the following reason. Suppose that after the nth time step one of the control formulae described above yielded h^(0)_{n+1} as step length, and that the interval [t_n, t^(0)_{n+1} = t_n + h^(0)_{n+1}] contains a jump at t = t̄. This will most likely produce a rejection, and the algorithm calculates a new step length h^(1)_{n+1}. Generally t̄ ∉ [t_n, t^(1)_{n+1} = t_n + h^(1)_{n+1}], but t̄ ∈ [t^(1)_{n+1}, t^(0)_{n+1}], which causes a problem: the (n+1)th step is accepted, but in the following step the algorithm considers that the previous step length was too short and hence calculates a larger time step h^(0)_{n+2}. This leads to a new rejection, since t̄ ∈ [t^(1)_{n+1}, t^(0)_{n+2} = t^(1)_{n+1} + h^(0)_{n+2}], and this can continue and therefore yield several rejections per jump of any of the functions.

To avoid the difficulties described above, we propose to limit the increase by

    h^(j)_{n+i+1} / h^(k)_{n+i} ≤ 1

in the two steps following the rejection (i = 0, 1), and by

    h^(j)_{n+i+1} / h^(k)_{n+i} ≤ 2.5

in the three steps after that (i = 2, 3, 4), unless the interval where there could be jumps (for example, [t_n, t^(0)_{n+1} = t_n + h^(0)_{n+1}]) has passed. When t^(0)_{n+1} has passed and the risk of rejections has decreased, we again allow

    h_{n+i+1} / h_{n+i} ≤ 10.
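The staged caps on the step-size ratio after a rejection can be sketched as a small helper (our own illustrative code, with the thresholds taken from the rule above):

```python
def max_step_ratio(steps_since_rejection):
    """Cap on h_{n+1}/h_n near a suspected jump: at most 1 for the two
    steps after a rejection, at most 2.5 for the following three steps,
    then the usual bound of 10."""
    if steps_since_rejection < 2:      # i = 0, 1
        return 1.0
    if steps_since_rejection < 5:      # i = 2, 3, 4
        return 2.5
    return 10.0                        # risk of further rejections has passed
```

A driver loop would reset steps_since_rejection to 0 whenever a step is rejected, and would also lift the caps early once the suspect interval [t_n, t^(0)_{n+1}] has been passed, as described above.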

In the last numerical example, we will check that these ideas clearly decrease the number of rejections and that the CPU time needed gets smaller.

3.2. Stage number selection

As usual in other Chebyshev codes (or stabilized explicit methods), we employ a family of second-order methods with different numbers of stages, and we choose the best number of stages according to the stability regions of the algorithm. In the code proposed in [20,14] we use SERK methods with 10, 20, 30, 40, 50, 60, 80, 100, 150, 200 and 250 stages, plus RKC methods with s = 2, …, 9 and 12, …, 18, taking η = 0.985. For the RKC algorithms incorporated, the bounds of the stability intervals are significantly smaller, which has to be taken into account when choosing the number of stages.

In each iteration, we first select the step size in order to control the local error as stated previously. Then we select a stage number such that the stability properties are satisfied:

    s > sqrt( h ρ(∂f/∂y) / c_s^* ),   (5)

where ρ(∂f/∂y) is a bound for the spectral radius (the largest eigenvalue in absolute value) of the Jacobian of the function f(y), and c_s^* s² is an estimate of the bound of the stability interval. For s ≥ 20 and s = 10 (SERK algorithms), c_s^* can be considered 0.8, while in many of the RKC methods incorporated in the code c_s^* = 0.66.

For the estimation of the spectral radius, several procedures have traditionally been considered. If it is not possible to obtain an estimate of the spectral radius easily, a nonlinear power method (see [18], for example) is usually employed. Another way is to use the Gershgorin theorem:

    ρ(∂f/∂y) ≤ max_{i=1,…,neqn} ( −a_{ii} + Σ_{j=1, j≠i}^{neqn} |a_{ij}| ).
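Both spectral-radius estimators can be sketched for a concrete Jacobian. The code below is our own illustration on the 1-D Laplacian (a linear power iteration on a fixed matrix, whereas the nonlinear variant in the codes acts on f directly):

```python
import numpy as np

def gershgorin_bound(A):
    # rho <= max_i ( -a_ii + sum_{j != i} |a_ij| ), valid here because the
    # diagonal entries are negative and the eigenvalues are real and negative
    d = np.diag(A)
    return np.max(-d + np.sum(np.abs(A), axis=1) - np.abs(d))

def power_bound(A, iters=50, seed=0):
    # plain power iteration: a cheap estimate of the dominant |eigenvalue|
    v = np.random.default_rng(seed).standard_normal(A.shape[0])
    for _ in range(iters):
        w = A @ v
        v = w / np.linalg.norm(w)
    return abs(v @ (A @ v))

# 1-D Laplacian with N interior nodes: spectrum lies in (-4*N**2, 0)
N = 100
A = N**2 * (np.diag(-2.0 * np.ones(N))
            + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1))
rho = gershgorin_bound(A)                       # exactly 4*N**2 = 40000
h, cs = 1e-3, 0.8                               # c_s* = 0.8 for SERK stages
s_min = int(np.ceil(np.sqrt(h * rho / cs)))     # stage condition (5): s_min = 8
```

The Gershgorin bound is exact here and costs one pass over the matrix, while the power iteration converges slowly on this clustered spectrum but still lands within a few percent of the true dominant eigenvalue, mirroring the roughly 4% gap reported for Example 2.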

4. Numerical experiments

So far, in most cases, stabilized explicit Runge–Kutta methods have been used with second-order finite difference approximations in space [21,20,14], the main reason being that these schemes are second-order in time. However, since a variable step length is used, one might expect that applying higher-order finite differences or spectral techniques can produce more accurate numerical results. That would imply that we can employ a smaller number of grid points in space to achieve approximately the same error, requiring less computing time for the simulations. In this paper, we study this possibility.


Example 1. The first example considered in this paper is the two-dimensional Brusselator reaction–diffusion problem

    ∂u/∂t = B + u²v − (A + 1)u + α(∂²u/∂x² + ∂²u/∂y²),
    ∂v/∂t = Au − u²v + α(∂²v/∂x² + ∂²v/∂y²),   (6)

subject to initial and boundary conditions such that u and v are

    u(x, y, t) = e^{−(x+y+0.5t)},   v(x, y, t) = e^{x+y+0.5t}.

We integrate this problem for 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, 0 ≤ t ≤ t_end = 2, taking A = 1, B = 0 and α = 0.25. In this example we use a Matlab version of the code, so that we are able to compare with the functions ODE15s and ODE45 (built-in functions of Matlab) and additionally with the method EETD-LOD [8] (also written in Matlab). In this fashion we can compare not only the error achieved but also the efficiency of the codes. In this paper we also want to show how using higher-order finite differences in space can produce more efficient solutions with a smaller computational cost. Hence, in this numerical example, we first approximate the second-order derivatives in space with second-order differences, that is,

    D_xx V_i = (V_{i+1} − 2V_i + V_{i−1}) / (Δx)²,   i = 1, …, N − 1,   (7)

and afterward with fourth-order approximations:

    D_xx V_i = (−V_{i+2} + 16V_{i+1} − 30V_i + 16V_{i−1} − V_{i−2}) / (12(Δx)²),   i = 2, …, N − 2,   (8)
    D_xx V_1 = (10V_0 − 15V_1 − 4V_2 + 14V_3 − 6V_4 + V_5) / (12(Δx)²),   (9)

(and similarly for i = N − 1). When we choose these finite differences, the matrices have only real and negative eigenvalues. The largest eigenvalue (in absolute value) for the second-order approximations is approximately 4αN², while for the fourth-order differences it is only about 4/3 times larger. But we can use a smaller number of nodes with fourth-order finite differences, and, although SERK schemes are only second-order in time, the errors can become smaller and the CPU times too (since the number of equations is smaller and the spectral radius is also smaller).

To show this, we first report the results using second-order centered differences (7) to approximate the spatial derivatives. In Table 1 we show the results obtained with our method SERK2v3 with N = 100 spatial grid points. The spectral radius is approximated with the nonlinear power method as about 21 364, needing 14 additional function evaluations. Another way of approximating the spectral radius is the Gershgorin theorem; in the second numerical example we will compare those two procedures for finding an upper bound on the spectrum. We also show results using the Matlab built-in functions ODE15s and ODE45 as well as the method EETD-LOD. Since the EETD-LOD method is not an RKC-type method, we cannot compare in terms of function evaluations and number of steps taken. Additionally, it is a fixed-step method, and we can only prescribe the spatial and temporal step sizes. Hence, we compare the error and the CPU time for h = 0.01, that is, N = 100 spatial nodes, and time step lengths varying between k = 10^-3 and k = 10^-7. The EETD-LOD algorithm is a very efficient method based on exponential time differencing schemes, with very good stability properties. Additionally, this novel scheme has the benefit of solving multidimensional problems in a locally one-dimensional fashion by applying sequences of tridiagonal matrix solvers instead of solving a band system.
It is second-order in time and space. In Table 1 we see that the error (measured in the L∞ norm) is comparable for all methods and that the error saturates (at some point it does not decrease any further when we reduce the tolerance). It can be checked that the CPU time does not change significantly for ODE45 when we lower the tolerance, since for ODE45 the limitation is given by numerical stability: except in the first iterations, the step length does not change much. However, SERK2v3, also explicit, is much faster than any of them.

Next, we approximate the spatial derivatives with the fourth-order approximations (8) and (9). Here we use just N = 50 spatial nodes and hence have a smaller spectral radius, which we calculated as about 6830 with the nonlinear power method and 14 additional function evaluations. We see in Table 2 that the fourth-order approximations yield smaller errors for small tolerances, while the CPU time is reduced drastically. In this table we compare only with the Matlab built-in functions ODE45 and ODE15s, since EETD-LOD is second order in time and space. The errors are comparable, but our method is considerably faster than the built-in functions of Matlab. ODE15s is approximately twice as fast as ODE45 for large tolerances; however, for these large tolerances, SERK2v3 is more than 10 times faster than ODE15s and the error is smaller.

Table 1. Numerical results with SERK2v3, EETD-LOD and the Matlab built-in functions ODE15s and ODE45 in Example 1, using second-order centered differences and the nonlinear power method to approximate the spectral radius.

tol     Method     Error          CPU
10^-3   SERK2v3    0.13 × 10^-3   3.613029
        EETD-LOD   0.12 × 10^-2   8.2145
        ODE15s     0.65 × 10^-4   467.367809
        ODE45      0.50 × 10^-3   425.086473
10^-4   SERK2v3    0.30 × 10^-4   5.912098
        EETD-LOD   0.94 × 10^-4   82.0414
        ODE15s     0.72 × 10^-4   598.837854
        ODE45      0.53 × 10^-3   432.121242
10^-5   SERK2v3    0.81 × 10^-5   7.46585
        EETD-LOD   0.88 × 10^-5   823.0193
        ODE15s     0.13 × 10^-4   690.056718
        ODE45      0.15 × 10^-4   394.117364
10^-6   SERK2v3    0.87 × 10^-5   13.616033
        EETD-LOD   0.88 × 10^-5   8203.6
        ODE15s     0.93 × 10^-5   912.432130
        ODE45      0.10 × 10^-4   423.938105

Table 2. Numerical results with ODE15s, ODE45 and SERK2v3 in Example 1, using fourth-order approximations and the nonlinear power method to approximate the spectral radius.

tol     Method    Error          CPU
10^-3   SERK2v3   0.21 × 10^-3   0.824559
        ODE15s    0.57 × 10^-4   9.446446
        ODE45     0.96 × 10^-3   21.755374
10^-4   SERK2v3   0.10 × 10^-4   0.992560
        ODE15s    0.63 × 10^-4   10.376378
        ODE45     0.32 × 10^-3   21.721774
10^-5   SERK2v3   0.77 × 10^-6   1.774271
        ODE15s    0.45 × 10^-5   11.462161
        ODE45     0.68 × 10^-5   21.947043
10^-6   SERK2v3   0.13 × 10^-6   3.311294
        ODE15s    0.52 × 10^-6   13.440426
        ODE45     0.79 × 10^-6   21.870266
10^-7   SERK2v3   0.11 × 10^-7   6.370979
        ODE15s    0.32 × 10^-6   14.785240
        ODE45     0.36 × 10^-7   22.109961

Example 2. We will consider the Robertson equation with one-dimensional diffusion (see [9,11]):

    ∂u/∂t = −0.04u + 10^4 vw + α ∂²u/∂x²,
    ∂v/∂t = 0.04u − 3·10^7 v² − 10^4 vw + β ∂²v/∂x²,   (10)
    ∂w/∂t = 3·10^7 v² + γ ∂²w/∂x²,

for 0 ≤ x ≤ 1, 0 ≤ t ≤ 1, with boundary and initial conditions

    u(x, 0) = sin(πx),   v(x, 0) = 0,   w(x, 0) = 1 − sin(πx),
    u(0, t) = u(1, t) = 0,   v(0, t) = v(1, t) = 0,   w(0, t) = w(1, t) = 1.

We have chosen α = 26/(25π²), β = 1/(25π²), γ = 1/π², N = 1000 and t_end = 1. In this case, the solution is u(x, t) = e^{−t} sin(πx), v(x, t) = 0 and w(x, t) = 1 − e^{−t} sin(πx).

In this numerical example we want to compare the two different ways of controlling the error, as well as the two different ways of computing the spectral radius: once using the Gershgorin theorem to obtain an estimate of the upper bound


Table 3. Numerical results with SERK2v2 and SERK2v3 in Example 2, using centered differences and the Gershgorin theorem to approximate the spectral radius.

tol     Method    MaxStg   Nfe      Nstp   rej   Error          CPU
10^-3   SERK2v2   250      3 341    18     0     0.13 × 10^-3   0.9375
        SERK2v3   250      3 351    20     0     0.18 × 10^-3   0.890625
10^-4   SERK2v2   150      6 689    51     0     0.13 × 10^-4   1.75
        SERK2v3   200      5 109    41     0     0.30 × 10^-4   1.3125
10^-5   SERK2v2   80       10 526   153    0     0.16 × 10^-5   2.6875
        SERK2v3   100      8 092    100    0     0.39 × 10^-5   2.0625
10^-6   SERK2v2   40       18 820   475    0     0.43 × 10^-6   4.71875
        SERK2v3   60       13 880   285    0     0.68 × 10^-6   3.5
10^-7   SERK2v2   30       34 489   1491   0     0.32 × 10^-6   8.609375
        SERK2v3   30       25 652   866    0     0.34 × 10^-6   6.40625
10^-8   SERK2v2   14       57 385   4705   0     0.30 × 10^-6   14.96875
        SERK2v3   18       43 724   2704   0     0.30 × 10^-6   11.421875

Table 4. Numerical results with SERK2v2 and SERK2v3 in Example 2, using centered differences and the nonlinear power method to approximate the spectral radius.

tol     Method    MaxStg   Nfe      Nstp   rej   Error          CPU
10^-3   SERK2v2   250      3 441    18     0     0.13 × 10^-3   0.9375
        SERK2v3   250      3 371    20     0     0.18 × 10^-3   0.890625
10^-4   SERK2v2   150      6 939    51     0     0.13 × 10^-4   1.78125
        SERK2v3   200      5 259    41     0     0.30 × 10^-4   1.375
10^-5   SERK2v2   80       10 786   153    0     0.16 × 10^-5   2.75
        SERK2v3   100      8 232    100    0     0.39 × 10^-5   2.109375
10^-6   SERK2v2   40       18 821   475    0     0.43 × 10^-6   4.65625
        SERK2v3   60       14 102   285    0     0.68 × 10^-6   3.578125
10^-7   SERK2v2   30       35 619   1491   0     0.32 × 10^-6   8.84375
        SERK2v3   30       25 655   866    0     0.34 × 10^-6   6.4375
10^-8   SERK2v2   14       58 994   4705   0     0.30 × 10^-6   15.34375
        SERK2v3   20       44 669   2704   0     0.30 × 10^-6   11.734375

on the spectrum, and once using a nonlinear power method to calculate it. In Table 3 we show the numerical results for our methods with control formula 1 (SERK2v2) and control formula 2 (SERK2v3), with the second-order finite differences (7) and N = 1000, using the Gershgorin theorem, which yields an estimate of ρ = 442 571. The number of steps taken as well as the number of function evaluations is smaller using control formula 2, while the error stays approximately the same. The maximum number of stages used is higher for control formula 2, and it seems to be slightly faster than control formula 1.

In many cases it is not easy to find an upper bound of the spectral radius. That is the reason why we implemented a nonlinear power method, so that the methods can be used easily also on more difficult problems. In Table 4 we show results analogous to those in the previous table, just using the nonlinear power method to approximate the spectral radius. It yields an estimate of ρ = 460 674 (about 4% higher than the Gershgorin estimate) and needs an additional 7 function evaluations to compute this number. Comparing these results with the results in Table 3 with the Gershgorin theorem, we see that the number of steps, the maximum number of stages used and the error remain the same. The number of function evaluations is slightly higher, and thus the CPU time is also slightly higher, but it does not increase significantly. Hence, if the spectral radius is not easily found, the nonlinear power method is a good and cheap way of calculating this estimate.

In this example, as well as in the next one, we use a Fortran version of the code to compute the numerical results, since we want to compare with the well-known codes RKC and ROCK2, which are both written in Fortran. For all methods we use the same estimate of the spectral radius, calculated using the Gershgorin theorem. Table 5 shows that the errors for ROCK2 are comparable to the ones we obtain.
RKC is faster, but its error is of lower order (and, for small tolerances, not below the prescribed tolerance) than the errors the SERK schemes and ROCK2 obtain.

Lastly, also in this numerical example, we show the results using the fourth-order approximations (8) and (9) for the spatial derivative. Instead of N = 1000 as with the second-order approximations, we use just N = 500 points in space. Here too we compare the Gershgorin theorem and the nonlinear power method for approximating the spectral radius. Table 6 shows the results using the Gershgorin estimate of ρ = 147 524. As mentioned before, the spectral radius is smaller since we can use fewer spatial nodes. It can be seen that the maximum number of stages used is lower, but also the number of steps, and therefore the number of function evaluations, is much smaller. Thus the code is faster. The error in this case is far better for low tolerances: with the second-order approximations we had limitations that we do not detect with the fourth-order ones. Hence, we recommend higher-order approximations for SERK methods when the functions in the equation are smooth enough.


B. Kleefeld, J. Martín-Vaquero / Journal of Computational and Applied Mathematics 299 (2016) 194–206

Table 5
Numerical results with ROCK2 and RKC in Example 2. Second-order approximations in space and the Gershgorin theorem were used.

tol    Method   MaxStg   Nfe      Nstp   rej   Error          CPU
10−3   ROCK2    200      3 580    21     0     0.86 × 10−4    0.9375
       RKC      434      1 848    6      0     0.15 × 10−2    0.546875
10−4   ROCK2    182      3 951    30     0     0.49 × 10−4    1.0
       RKC      308      2 553    11     0     0.37 × 10−3    0.734375
10−5   ROCK2    104      6 729    82     0     0.54 × 10−5    1.6875
       RKC      211      3 580    20     0     0.79 × 10−4    0.953125
10−6   ROCK2    58       11 824   243    0     0.81 × 10−6    2.921875
       RKC      145      5 206    41     0     0.17 × 10−4    1.390625
10−7   ROCK2    32       20 842   749    0     0.35 × 10−6    5.078125
       RKC      99       7 609    86     0     0.4 × 10−5     1.953125
10−8   ROCK2    18       36 944   2349   0     0.31 × 10−6    8.96875
       RKC      68       11 131   181    0     0.11 × 10−5    2.875

Table 6
Numerical results with SERK2v2 and SERK2v3 in Example 2 using fourth-order approximations and the Gershgorin theorem to approximate the spectral radius.

tol    Method    MaxStg   Nfe      Nstp   rej   Error          CPU
10−3   SERK2v2   150      2 231    18     0     0.13 × 10−3    0.390625
       SERK2v3   150      2 071    20     0     0.18 × 10−3    0.359375
10−4   SERK2v2   80       3 476    51     0     0.13 × 10−4    0.53125
       SERK2v3   100      2 886    41     0     0.30 × 10−4    0.453125
10−5   SERK2v2   50       5 985    153    0     0.13 × 10−5    0.859375
       SERK2v3   60       4 616    100    0     0.36 × 10−5    0.6875
10−6   SERK2v2   30       11 399   475    0     0.13 × 10−6    1.578125
       SERK2v3   30       8 236    285    0     0.39 × 10−6    1.15625
10−7   SERK2v2   14       18 827   1491   0     0.17 × 10−7    2.625
       SERK2v3   20       14 280   867    0     0.39 × 10−7    2.015625
10−8   SERK2v2   8        34 907   4706   0     0.64 × 10−8    4.84375
       SERK2v3   10       25 646   2705   0     0.88 × 10−8    3.53125

Table 7
Numerical results with SERK2v2 and SERK2v3 in Example 2 using fourth-order approximations and the nonlinear power method to approximate the spectral radius.

tol    Method    MaxStg   Nfe      Nstp   rej   Error          CPU
10−3   SERK2v2   150      2 181    18     0     0.13 × 10−3    0.359375
       SERK2v3   150      2 021    20     0     0.18 × 10−3    0.34375
10−4   SERK2v2   80       3 416    51     0     0.13 × 10−4    0.53125
       SERK2v3   100      2 866    41     0     0.30 × 10−4    0.4375
10−5   SERK2v2   40       5 975    153    0     0.13 × 10−5    0.859375
       SERK2v3   60       4 556    100    0     0.36 × 10−5    0.65625
10−6   SERK2v2   30       11 129   475    0     0.13 × 10−6    1.53125
       SERK2v3   30       8 235    285    0     0.39 × 10−6    1.15625
10−7   SERK2v2   14       18 454   1491   0     0.17 × 10−7    2.578125
       SERK2v3   20       14 047   867    0     0.38 × 10−7    1.984375
10−8   SERK2v2   8        34 644   4706   0     0.64 × 10−8    4.8125
       SERK2v3   10       25 477   2705   0     0.88 × 10−8    3.484375

If we use the nonlinear power method to calculate the spectral radius, we need an additional 11 function evaluations and obtain the estimate ρ = 143 319, about 3% lower than the Gershgorin estimate. As can be seen in Table 7, the number of function evaluations and the CPU time are marginally higher, but the maximum number of stages used, the number of steps and the error remain the same as with the Gershgorin estimate.

Example 3. SERK schemes are especially useful for the numerical solution of multidimensional PDEs or systems of partial differential equations. However, in this work we would like to show that these algorithms can also be very efficient for approximating one-dimensional PDEs with non-smooth data. The third example is the non-homogeneous parabolic problem [20] given by

∂u/∂t = d ∂²u/∂x² + f(x, t)


Table 8
Numerical results with SERK2v2 and SERK2v3 in Example 3 with control. Second-order approximations and the nonlinear power method (to estimate the spectral radius) were used.

tol    Method    MaxStg   Nfe       Nstp     rej   Error          CPU
10−3   SERK2v2   250      44 813    753      74    0.26 × 10−3    0.375
       SERK2v3   250      44 418    516      43    0.48 × 10−4    0.359375
10−4   SERK2v2   250      56 874    1 312    83    0.18 × 10−4    0.421875
       SERK2v3   250      55 674    1 074    72    0.10 × 10−4    0.4375
10−5   SERK2v2   250      76 558    2 997    123   0.19 × 10−5    0.5625
       SERK2v3   250      73 425    2 088    96    0.19 × 10−5    0.546875
10−6   SERK2v2   250      126 715   7 354    149   0.17 × 10−6    0.890625
       SERK2v3   250      107 713   4 890    133   0.30 × 10−6    0.765625
10−7   SERK2v2   250      205 208   20 776   180   0.16 × 10−7    1.484375
       SERK2v3   250      166 588   12 726   159   0.41 × 10−7    1.1875

Table 9
Numerical results with SERK2v2 and SERK2v3 in Example 3 without control. Second-order approximations and the nonlinear power method (to estimate the spectral radius) were used.

tol    Method    MaxStg   Nfe       Nstp     rej   Error          CPU
10−3   SERK2v2   250      41 744    528      75    0.10 × 10−3    0.328125
       SERK2v3   250      43 787    482      63    0.52 × 10−4    0.34375
10−4   SERK2v2   250      61 488    1 164    117   0.18 × 10−4    0.46875
       SERK2v3   250      56 683    979      115   0.10 × 10−4    0.4375
10−5   SERK2v2   250      80 265    2 811    179   0.19 × 10−5    0.578125
       SERK2v3   250      70 445    1 811    98    0.20 × 10−5    0.53125
10−6   SERK2v2   250      117 267   6 936    180   0.17 × 10−6    0.828125
       SERK2v3   250      105 575   4 405    127   0.32 × 10−6    0.75
10−7   SERK2v2   250      221 605   20 585   279   0.16 × 10−7    1.5625
       SERK2v3   250      167 099   12 295   194   0.41 × 10−7    1.203125

for 0 ≤ x ≤ 1, 0 ≤ t ≤ 10.3, with boundary and initial conditions

u(x, 0) = g(x),    u(0, t) = u(1, t) = 0,

where d is a positive diffusion constant and f is a possibly discontinuous heat source. Let f(x, t) = f1(x)f2(t), where

f1(x) = 0 for 0 ≤ x ≤ 0.25;  1 for 0.25 < x < 0.75;  0 for 0.75 ≤ x ≤ 1,

and f2(t) represents a time switch, which at t = 0 is switched on (=1) and then switches on or off (=0) every second. We take g(x) to be identically one (which yields smooth initial data that does not, however, match the specified boundary conditions smoothly) and the diffusion constant d = 2.

In this numerical example we want to compare the two control formulae 1 (SERK2v2) and 2 (SERK2v3) given in the previous section, and to investigate how the results change if we limit the increase of the step size after a rejection. First, we do control the increase of the step size after a rejection. In Table 8 we show the maximum number of stages (MaxStg), the number of function evaluations (Nfe), the total number of steps (Nstp), the number of rejected steps (rej), the error and the CPU time (in seconds) for each method and prescribed tolerance (tol), with N = 200 spatial points. The errors are comparable in both cases, but the number of rejections and the overall number of steps are (in this example) smaller with control formula 2. This code also seems to be faster, at least in this numerical example with the control mechanism described above.

Next, we study what happens if we do not control the increase of the step length after a rejection. Table 9 shows that the errors of the two methods are again comparable, whereas the number of rejections is clearly smaller with control formula 2. Comparing the runs with and without the step-size control after a rejection, we see that in most cases the control reduces the number of rejections considerably. Here we use the nonlinear power method, applied once at the initial condition, to estimate the spectral radius; the Gershgorin theorem could be used instead, but the results change only slightly. Again, we want to compare our schemes with the two well-known second-order schemes RKC and ROCK2.
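The idea of limiting the step-size increase after a rejection can be illustrated with a generic elementary controller. This is a simplified Python sketch; the paper's control formulae 1 and 2 are defined in the previous section and are not reproduced by this code.

```python
# Generic elementary step-size controller for a method of order p
# (an illustrative sketch, NOT the paper's control formulae 1 or 2).
def new_step(h, err, tol, order=2, just_rejected=False,
             safety=0.8, fac_min=0.1, fac_max=10.0):
    """Propose the next step size from the estimated local error `err`
    of the step just computed with step size `h`."""
    fac = safety * (tol / err) ** (1.0 / (order + 1))
    if just_rejected:
        # Limit the increase after a rejection: the step may shrink
        # further but is not allowed to grow again immediately.
        fac = min(fac, 1.0)
    return h * min(fac_max, max(fac_min, fac))

h_grow = new_step(0.1, err=1e-8, tol=1e-4)                           # error far below tol: grow (capped)
h_after_rej = new_step(0.1, err=1e-8, tol=1e-4, just_rejected=True)  # growth suppressed after rejection
h_shrink = new_step(0.1, err=1e-2, tol=1e-4)                         # error too large: shrink
```

The exponent 1/(p + 1) is the classical choice for a method of order p; the post-rejection cap prevents the oscillating accept/reject cycles that the tables quantify through the rej column.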
Table 10 shows the results using those two methods. It can be seen that ROCK2 and RKC are very efficient schemes, although for both methods the error is not always below the specified tolerance. For RKC, the maximum number of stages used is much higher than with the SERK algorithms, and the number of rejections is also larger. The opposite is true for ROCK2: the maximum number


Table 10
Numerical results with ROCK2 and RKC in Example 3. Second-order approximations and the nonlinear power method (to estimate the spectral radius) were used.

tol    Method   MaxStg   Nfe       Nstp     rej   Error          CPU
10−3   ROCK2    200      32 177    254      13    0.23 × 10−3    0.234375
       RKC      623      52 085    368      105   0.11 × 10−3    0.421875
10−4   ROCK2    200      40 309    514      25    0.89 × 10−4    0.296875
       RKC      842      58 568    562      153   0.34 × 10−4    0.484375
10−5   ROCK2    200      53 629    1 246    32    0.41 × 10−4    0.375
       RKC      473      66 077    925      193   0.91 × 10−5    0.515625
10−6   ROCK2    200      83 113    3 720    52    0.50 × 10−4    0.546875
       RKC      460      84 166    1 617    226   0.16 × 10−5    0.640625
10−7   ROCK2    200      138 494   11 191   56    0.34 × 10−5    0.859375
       RKC      388      111 690   3 045    259   0.36 × 10−6    0.8125

Table 11
Errors and rates of the proposed schemes with s = 100, 150 and s = 200 stages in Example 3, respectively, using N = 200 spatial nodes. Errors were calculated at tend = 10.3 in L∞.

Method            SERK100          SERK150          SERK200
dt1 = 0.02        0.4789 × 10−4    0.4519 × 10−4    0.4333 × 10−4
dt2 = 0.01        0.5357 × 10−4    0.5379 × 10−4    0.5388 × 10−4
dt3 = 0.005       0.4373 × 10−5    0.4477 × 10−5    0.4498 × 10−5
dt4 = 0.0025      0.1985 × 10−5    0.1983 × 10−5    0.1979 × 10−5
dt5 = 0.00125     0.3581 × 10−7    0.3541 × 10−7    0.3535 × 10−7
dt6 = 0.000625    0.1475 × 10−7    0.1222 × 10−7    0.1119 × 10−7
dt7 = 0.0003125   0.5322 × 10−8    0.4062 × 10−8    0.3549 × 10−8
Rate              2.1892           2.2403           2.2626

Fig. 2. Comparison of our method SERK2v3 with control with the two well-known methods RKC and ROCK2 in Example 3: (a) error versus tolerance; (b) CPU time versus error. Second-order approximations and the nonlinear power method (to estimate the spectral radius) were used.

of stages used is a little smaller, but so is the number of rejections. However, the errors with ROCK2 in this problem are also larger than the prescribed tolerance (for tol < 10−5). In Fig. 2 we display the results of Tables 8 and 10 graphically. We see that the error for a specified tolerance is smallest using SERK2v3, and comparing the error against the CPU time shows that our method is very efficient compared with RKC and ROCK2.

Lastly, we want to investigate the rate of convergence of the proposed scheme. We again choose N = 200 spatial nodes, yielding a spectral radius close to 320 000 for the diffusion constant d = 2; hence the methods with s = 100, 150 and 200 stages are stable. Table 11 shows the errors calculated at tend = 10.3 in L∞. Since no closed-form solution exists for this problem, we generated a reference solution using RKC with tolerance tol = 10−12. The temporal rate of convergence was then calculated as

rate = log(err1 / errlast) / log(dt1 / dtlast),

where dt1 and dtlast are the largest and smallest time steps in Table 11 and err1, errlast the corresponding errors. As expected, the rate is close to 2.
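As a quick check, applying this formula to the first and last rows of the SERK100 column of Table 11 reproduces the reported rate:

```python
import math

# Temporal convergence rate from Table 11 (SERK100 column):
# rate = log(err_1 / err_last) / log(dt_1 / dt_last).
dt_first, dt_last = 0.02, 0.0003125
err_first, err_last = 0.4789e-4, 0.5322e-8

rate = math.log(err_first / err_last) / math.log(dt_first / dt_last)
# rate is approximately 2.19, matching the 2.1892 reported for SERK100
```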


Fig. 3. Solution of the American Digital Call option at tend = 1 with parameters σ = 0.8, r = 0.1, E = 1.0 and sw = 0.2 using the variable step code SERK2v3 with N = 100 spatial nodes and tol = 10−3 .

Example 4. Lastly, we want to apply our scheme to a nonlinear PDE with non-smooth initial data. For this purpose, we choose an application from finance, namely the American Digital Call option [28], given by the model equation

∂u/∂t = (1/2)σ²S² ∂²u/∂S² + rS ∂u/∂S − ru

for 0 ≤ S ≤ 2, 0 ≤ t ≤ 1, with boundary conditions u(0, t) = 0, u(2, t) = 1 and initial condition

u(S, 0) = 0 for 0 ≤ S ≤ E;  1 for E < S ≤ E + sw;  0 for E + sw < S ≤ E + 2sw;  1 for E + 2sw < S ≤ 2.
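A minimal method-of-lines sketch of this model with standard central differences follows; the function name, the grid parameters and the discretization choice are our own illustrative assumptions, not necessarily the ones used in the paper.

```python
import numpy as np

def bs_rhs(u, S, h, sigma, r):
    """Illustrative MOL right-hand side for
    u_t = 0.5*sigma^2*S^2*u_SS + r*S*u_S - r*u on 0 <= S <= 2,
    with u(0,t) = 0 and u(2,t) = 1 (standard central differences)."""
    full = np.concatenate(([0.0], u, [1.0]))               # attach boundary values
    u_S = (full[2:] - full[:-2]) / (2 * h)                 # first derivative
    u_SS = (full[2:] - 2 * full[1:-1] + full[:-2]) / h**2  # second derivative
    return 0.5 * sigma**2 * S**2 * u_SS + r * S * u_S - r * u

# Tiny usage check with the parameters of Example 4.
sigma, r, E, sw = 0.8, 0.1, 1.0, 0.2
N = 100
S = np.linspace(0.0, 2.0, N + 1)[1:-1]                     # interior nodes
h = 2.0 / N
u0 = np.where((S > E) & (S <= E + sw), 1.0,
              np.where(S > E + 2 * sw, 1.0, 0.0))          # switch-type initial data
dudt = bs_rhs(u0, S, h, sigma, r)
```

Any time integrator (such as the SERK schemes discussed here) can then advance `dudt` in time; the sketch only shows the spatial semi-discretization.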

Here u(S, t) is the value of the option written on the asset S (the space variable in this case) at time t, with given volatility σ, interest rate r and exercise price E. The initial condition has three discontinuities, at E, E + sw and E + 2sw (the switch points of a multiple-strike model). For the numerical tests we choose the same parameters as [28], namely σ = 0.8, r = 0.1, E = 1.0 and sw = 0.2. Since no closed-form solution exists for this example either, we again use RKC (with N = 500 spatial nodes and tolerance tol = 10−8) to calculate a reference solution. In Fig. 3 we show the solution generated with our variable-step code SERK2v3 at tend = 1, using N = 100 spatial nodes and tolerance tol = 10−3. The code needed 10 528 function evaluations to produce the solution in 0.14 s with an L∞ error of 2.269 × 10−3 (the error is entirely due to the spatial discretization; using a smaller tol does not change the solution obtained). As we see, there are no spurious oscillations, unlike with the undamped fourth-order (2, 2)-Padé ETD scheme (see [28]).

Here, too, we want to show an empirical rate of convergence, using the constant-step methods with s = 100, 150 and 200 stages. In this case we choose N = 250 spatial nodes with the same parameters as before. The results in Table 12 support the fact that our methods are indeed second-order accurate.

5. Conclusions

In this manuscript we illustrate a new code, SERK2v3, which can easily be applied to many different multi-dimensional parabolic PDEs with large dimensions and has low memory demand. SERK algorithms are only second-order schemes; however, we propose to combine them with higher-order methods in space to obtain more accurate solutions efficiently, usually at a lower computational cost, since these higher-order schemes allow the use of a smaller number of nodes. These methods also allow adaptation of the step length at practically no extra cost.
In this work, we compare several strategies to change the time step in SERK algorithms. We also derive two algorithms to estimate the spectral radius: a nonlinear power method and another procedure based on the Gershgorin theorem. We develop algorithms in both,


Table 12
Errors and rates of the proposed schemes with s = 100, 150 and s = 200 stages in Example 4, respectively, using N = 250 spatial nodes. Errors were calculated at tend = 1 in L∞.

Method           SERK100          SERK150          SERK200
dt1 = 0.05       0.1581           0.1634           0.1680
dt2 = 0.025      0.5782 × 10−1    0.5677 × 10−1    0.5209 × 10−1
dt3 = 0.0125     0.1001 × 10−1    0.9607 × 10−2    0.9873 × 10−2
dt4 = 0.00625    0.2394 × 10−2    0.2602 × 10−2    0.2496 × 10−2
dt5 = 0.003125   0.5210 × 10−3    0.5216 × 10−3    0.5218 × 10−3
Rate             2.0613           2.0728           2.0827

Fortran and Matlab, to compare SERK schemes with other well-known codes, including those employed by Matlab to solve moderately stiff problems. Numerical results support the efficiency and accuracy of the new techniques.

In this paper, it is shown how some explicit and simple techniques can be very efficient in solving nonlinear partial differential equations. It can be seen that SERK schemes are suitable for multidimensional parabolic PDEs; however, they also perform very well with systems of PDEs and with one-dimensional PDEs with non-smooth data.

Acknowledgments

JM acknowledges support from the Spanish Ministry of Economía y Competitividad through the grant TIN2014-55325C2-2-R.

References

[1] P.N. Brown, G.D. Byrne, A.C. Hindmarsh, VODE: A variable-coefficient ODE solver, SIAM J. Sci. Stat. Comput. 10 (5) (1989) 1038–1051.
[2] A.C. Hindmarsh, LSODE and LSODI, two new initial value ordinary differential equation solvers, SIGNUM Newsl. 15 (4) (1980) 10–11.
[3] L. Petzold, A description of DASSL: a differential–algebraic system solver, IMACS Trans. Scientific Computing (1993) 65–68.
[4] J. Cash, The integration of stiff initial value problems in ODEs using modified extended backward differentiation formulae, Comput. Math. Appl. 9 (5) (1983) 645–657.
[5] E. Hairer, G. Wanner, Solving Ordinary Differential Equations. II. Stiff and Differential–Algebraic Problems, Springer, Berlin, 1996.
[6] S. Nörsett, Semi explicit Runge–Kutta methods, Mathematics and Computing Rpt. No. 6/74.
[7] R. Alexander, Diagonally implicit Runge–Kutta methods for stiff ODEs, SIAM J. Numer. Anal. 14 (1977) 1006–1021.
[8] H. Bhatt, A. Khaliq, The locally extrapolated exponential time differencing LOD scheme for multidimensional reaction–diffusion systems, J. Comput. Appl. Math. 285 (2015) 256–278.
[9] M. Hochbruck, C. Lubich, H. Selhofer, Exponential integrators for large systems of differential equations, SIAM J. Sci. Comput. 19 (5) (1998) 1552–1574.
[10] L. Ixaru, G.V. Berghe, H.D. Meyer, Frequency evaluation in exponential fitting multistep algorithms for ODEs, J. Comput. Appl. Math. 140 (1–2) (2002) 423–434.
[11] A. Khaliq, J. Martín-Vaquero, B. Wade, M. Yousuf, Smoothing schemes for reaction–diffusion systems with nonsmooth data, J. Comput. Appl. Math. 223 (1) (2009) 374–386.
[12] J. Martín-Vaquero, J. Vigo-Aguiar, Exponential fitting BDF algorithms: Explicit and implicit 0-stable methods, J. Comput. Appl. Math. 192 (1) (2006) 100–113.
[13] A. Abdulle, Chebyshev methods based on orthogonal polynomials (Ph.D. thesis), University of Geneva, 2001.
[14] J. Martín-Vaquero, A. Khaliq, B. Kleefeld, Stabilized explicit Runge–Kutta methods for multi-asset American options, Comput. Math. Appl. 67 (6) (2014) 1293–1308.
[15] A. Medovikov, High order explicit methods for parabolic equations, BIT 38 (2) (1998) 372–390.
[16] L. Skvortsov, Explicit stabilized Runge–Kutta methods, Comput. Math. Math. Phys. 51 (7) (2011) 1153–1166.
[17] M. Torrilhon, R. Jeltsch, Essentially optimal explicit Runge–Kutta methods with application to hyperbolic–parabolic equations, Numer. Math. 106 (2) (2007) 303–334.
[18] B. Sommeijer, L. Shampine, J. Verwer, RKC: An explicit solver for parabolic PDEs, J. Comput. Appl. Math. 88 (2) (1998) 315–326.
[19] A. Abdulle, A.A. Medovikov, Second order Chebyshev methods based on orthogonal polynomials, Numer. Math. 90 (1) (2001) 1–18.
[20] B. Kleefeld, J. Martín-Vaquero, SERK2v2: A new second-order stabilized explicit Runge–Kutta method for stiff problems, Numer. Methods Partial Differential Equations 29 (1) (2013) 170–185.
[21] J. Martín-Vaquero, B. Janssen, Second-order stabilized explicit Runge–Kutta methods for stiff problems, Comput. Phys. Comm. 180 (10) (2009) 1802–1810.
[22] V. Saul'yev, Integration of Equations of Parabolic Type by the Method of Nets, in: International Series of Monographs in Pure and Applied Mathematics, Pergamon Press, Oxford, Paris, 1964.
[23] P.J. van der Houwen, B.P. Sommeijer, On the internal stability of explicit, m-stage Runge–Kutta methods for large m-values, ZAMM J. Appl. Math. Mech. 60 (10) (1980) 479–485.
[24] V.I. Lebedev, A new method for determining the roots of polynomials of least deviation on a segment with weight and subject to additional conditions. Part I, Russian J. Numer. Anal. Math. Modelling 8 (3) (1993) 195–222.
[25] H.A. Watts, Step size control in ordinary differential equation solvers, Trans. Soc. Comput. Simul. 1 (1984) 15–25.
[26] K. Gustafsson, Control-theoretic techniques for stepsize selection in implicit Runge–Kutta methods, ACM Trans. Math. Software 20 (4) (1994) 496–517.
[27] K. Gustafsson, Control theoretic techniques for stepsize selection in explicit Runge–Kutta methods, ACM Trans. Math. Software 17 (4) (1991) 533–554.
[28] B. Wade, A. Khaliq, M. Siddique, M. Yousuf, Smoothing with positivity-preserving Padé schemes for parabolic problems with nonsmooth data, Numer. Methods Partial Differential Equations 21 (3) (2005) 553–573.