Exponential fitting BDF algorithms and their properties




Applied Mathematics and Computation 190 (2007) 80–110 www.elsevier.com/locate/amc

J. Vigo-Aguiar a, J. Martín-Vaquero b,*

a Facultad de Ciencias, Universidad de Salamanca, Salamanca, Spain
b ETS Ingenieros Industriales, Universidad de Salamanca, Béjar, Spain

Abstract

We present two families of explicit and implicit BDF formulas capable of integrating exactly (with only round-off errors) differential equations whose solutions belong to the space generated by linear combinations of exponentials of matrices, products of those exponentials by polynomials, and ordinary polynomials. These methods are suitable for stiff and highly oscillatory problems, so we study their local truncation error and their stability properties. Plots of their 0-stability regions in terms of $Ah$ are provided, together with plots of their regions of absolute stability, which in the implicit case include the whole negative real axis. Exponential fitting algorithms depend on a parameter $Ah$; how to find this parameter is an important question, and here we show different ways to find a good value. Finally, numerical examples show the efficiency of the proposed codes, especially when integrating stiff problems where the Jacobian of the function has complex eigenvalues, or problems where the Jacobian has positive eigenvalues but the solutions contain no growing exponentials.
© 2007 Elsevier Inc. All rights reserved.

Keywords: BDF methods; Exponential fitting; Stiff problems; Local truncation error; Stability regions; Numerical results

1. Introduction

In this paper we derive explicit and implicit BDF methods that solve the IVP

$$y'(x) = G(x, y(x)), \qquad y(x_0) = y_0, \qquad (1)$$

where $y = [y^1, \ldots, y^m]^t$ and $G = [g^1, \ldots, g^m]^t$. The most important improvement of these new methods is that they are able to integrate problems where the Jacobian has positive eigenvalues but the solutions contain no growing exponentials. We will see that these problems are very hard, and that most classical methods (including classical methods for stiff problems) are not stable on them. We will show very good results on problems that appear in the recent bibliography (see [11] or [25]).

* Corresponding author.
E-mail addresses: [email protected] (J. Vigo-Aguiar), [email protected] (J. Martín-Vaquero).

0096-3003/$ - see front matter © 2007 Elsevier Inc. All rights reserved.
doi:10.1016/j.amc.2007.01.008

J. Vigo-Aguiar, J. Martín-Vaquero / Applied Mathematics and Computation 190 (2007) 80–110

The new methods are also very efficient on two other kinds of numerical examples: stiff and highly oscillatory problems. The most common definition of stiffness (see Definition 2.3 in [39] and, for example, [11] or [25]) is: stiffness occurs if, for most classical explicit methods, the largest step size $h_n$ guaranteeing numerical stability is much smaller than the largest step $\hat{h}_n$ for which the local discretization error is still sufficiently small (in norm), i.e., $h_n \ll \hat{h}_n$. With this definition stiff problems include highly oscillatory problems; however, many scientists do not consider highly oscillatory problems to be stiff. Stiff (and highly oscillatory) problems are very common in many fields of the applied sciences (see [1]): atmosphere, biology, combustion, control theory, dynamics of missile guidance, dispersed phases, electronic circuit theory, fluids, heat transfer, chemical kinetics (the most common area), lasers, mechanics, molecular dynamics, nuclear physics, process industries, process vessels, reactor kinetics, … Stiff computation is therefore a very active area, and it is impossible to cite all the contributions. Many methods have been proposed to integrate stiff problems, but implicit Runge–Kutta codes (Radau, see [17]; STRIDE, see [2], [10] or [32]; DIRK, [2] or [7]; SDIRK, Gauss, Rosenbrock, …) and implicit BDF codes (MEBDF, see [5]; DASSL, see [34] or [35]; LSODE, see [18], …) have frequently been used. The idea of using exponentially fitted/adapted formulae for the numerical integration of stiff systems has recently received considerable attention (see [6,22,26], …). The basic idea behind exponential fitting is to integrate (1) exactly when $y(x)$ belongs to a large space that includes linear combinations of exponentials of matrices; in this paper we denote exponential fitting algorithms with the capital letters EF.
We will use the name adapted methods for the methods that solve the IVP

$$y'(x) - Ay(x) = F(x, y(x)), \qquad y(x_0) = y_0 \qquad (2)$$

(where $y = [y^1, \ldots, y^m]^t$, $A$ is an $m \times m$ constant matrix and $F = [f^1, \ldots, f^m]^t$) exactly when $y(x)$ belongs to a space that includes linear combinations of exponentials of matrices; we denote those algorithms with the capital letter A. That is, exponentially fitted methods depend on $G(x, y(x))$, while adapted methods (see [27,42]) depend on $F(x, y(x)) = G(x, y(x)) - Ay(x)$. In this case, implicit adapted methods function as explicit methods on linear problems. Exponential fitting and adapted methods are very similar, and we will show a relation between them in the following sections.

In Section 2 we derive two families of exponential fitting methods; in Section 3 we provide their local truncation errors. In Section 4 we provide plots of the 0-stability regions of the methods, and in Section 5 the definition of absolute stability and some plots of those regions are given. In Sections 4 and 5 we will see that adapted and exponential fitting algorithms have different stability properties, and we will explain this apparent contradiction. Finally, in Section 6 we compare our algorithms with well-known ODE solvers on many different kinds of problems.

2. Derivation of the exponential fitting/adapted methods

Through the basic theory of difference equations (see [41]), we build two families of exponential fitting/adapted methods. The first family integrates problem (1) with no truncation error when $y(x)$ is a vector of functions belonging to the space generated by the linear combinations of $\langle e^{Ax}, 1, x, \ldots, x^{r-1} \rangle$ (we denote these methods EF-I-r or A-I-r), while the second family integrates problem (1) exactly when $y(x)$ belongs to the space generated by the linear combinations of $\langle e^{Ax}, \ldots, x^{k-1}e^{Ax}, 1 \rangle$ (we denote them EF-II-k or A-II-k). We denote $(1, \ldots, 1)^t$ by $1$, $(x, \ldots, x)^t$ by $x$, …, and $e^{Ax} \cdot (c_1, \ldots, c_m)^t$ by $e^{Ax}$.

2.1. Derivation of EF-I-r or A-I-r methods

We begin with the scalar case: we consider now $A = \lambda \in \mathbb{C}$ and we want the adapted methods that integrate problem (2) with no truncation error when $y(x)$ is a scalar function that belongs to


the space generated by the linear combinations of $\langle e^{\lambda x}, 1, \ldots, x^{r-1} \rangle$. We proceed symbolically by writing $hD = \ln(E)$ and letting $P(x)$ be the polynomial $x - \lambda h$ (see [27]):

$$hf(x, y(x)) = P(hD)y(x) = P(\ln(E)) \, \frac{(Id - \nabla)^{-1} \nabla_P}{L(Id/(Id - \nabla))} \, y(x), \qquad (3)$$

where $\nabla = Id - E^{-1}$, $\nabla_P u_p = \nabla_{D_\lambda} u_p = u_p - e^{\lambda h} u_{p-1}$ ($\lambda h$ being the parameter of the method), $Id$ is the identity, $E$ is the operator $E(u_p) = u_{p+1}$ and $L(E) = E - e^{\lambda h}$. Thus, we obtain

$$\nabla_P \, \frac{P(-\ln(Id - \nabla))}{(Id - \nabla) L(Id/(Id - \nabla))} \, y(x) = hf(x, y(x)). \qquad (4)$$

Consequently, the multistep implicit adapted method of $r + 1$ steps is

$$\nabla_P \sum_{j=0}^{r} b^0_j \nabla^j y_n = hf_n, \qquad (5)$$

where the coefficients $b^0_j$ are obtained through the Maclaurin series around $\xi = 0$ of the generating function

$$G_0(\xi, \lambda h) = \frac{-\ln(1 - \xi) - \lambda h}{1 - e^{\lambda h}(1 - \xi)} \qquad (6)$$

for implicit methods.

If we want to derive the multistep explicit adapted method, we merely remember that

$$y(x) = E^{-1} y(x + h) = (Id - \nabla) y(x + h), \qquad (7)$$

and hence

$$\nabla_P \, \frac{P(-\ln(Id - \nabla))}{L(Id/(Id - \nabla))} \, y(x + h) = hf(x, y(x)). \qquad (8)$$

Then, the multistep explicit adapted method is

$$\nabla_P \sum_{j=0}^{r} b^1_j \nabla^j y_n = hf_{n-1}, \qquad (9)$$

where the coefficients $b^1_j$ are obtained through the Maclaurin series around $\xi = 0$ of the generating function

$$G_1(\xi, \lambda h) = (1 - \xi) \, \frac{-\ln(1 - \xi) - \lambda h}{1 - e^{\lambda h}(1 - \xi)} \qquad (10)$$

for explicit methods.

Then, some weights $b^0_i$ of the implicit method, with $i = 0, \ldots, 5$, are (the coefficients $b^0_6$, $b^0_7$ and $b^0_8$ appear in [29]):

$$b^0_0 = \frac{-\lambda h}{1 - e^{\lambda h}},$$

$$b^0_1 = \frac{1 - e^{\lambda h}(1 - \lambda h)}{(1 - e^{\lambda h})^2},$$

$$b^0_2 = \frac{1 - 4e^{\lambda h} + e^{2\lambda h}(3 - 2\lambda h)}{2(1 - e^{\lambda h})^3},$$

$$b^0_3 = \frac{2 - 9e^{\lambda h} + 18e^{2\lambda h} - e^{3\lambda h}(11 - 6\lambda h)}{6(1 - e^{\lambda h})^4},$$

$$b^0_4 = \frac{3 - 16e^{\lambda h} + 36e^{2\lambda h} - 48e^{3\lambda h} + e^{4\lambda h}(25 - 12\lambda h)}{12(1 - e^{\lambda h})^5},$$

$$b^0_5 = \frac{12 - 75e^{\lambda h} + 200e^{2\lambda h} - 300e^{3\lambda h} + 300e^{4\lambda h} - e^{5\lambda h}(137 - 60\lambda h)}{60(1 - e^{\lambda h})^6}.$$


While some weights of the explicit methods are:

$$b^1_0 = \frac{-\lambda h}{1 - e^{\lambda h}},$$

$$b^1_1 = \frac{1 - e^{\lambda h} + \lambda h}{(1 - e^{\lambda h})^2},$$

$$b^1_2 = \frac{-1 - 2\lambda h e^{\lambda h} + e^{2\lambda h}}{2(1 - e^{\lambda h})^3},$$

$$b^1_3 = \frac{-1 + 6e^{\lambda h} - e^{2\lambda h}(3 - 6\lambda h) - 2e^{3\lambda h}}{6(1 - e^{\lambda h})^4}.$$

We can also find the weights of the explicit methods through the generating functions, considering $G_1(\xi, \lambda h) = (1 - \xi) G_0(\xi, \lambda h)$; then

$$\sum_{i=0}^{\infty} b^1_i \xi^i = G_1(\xi, \lambda h) = (1 - \xi) G_0(\xi, \lambda h) = (1 - \xi) \sum_{i=0}^{\infty} b^0_i \xi^i,$$

and

$$b^1_0 = b^0_0, \qquad b^1_i = b^0_i - b^0_{i-1} \quad \text{with } i \geq 1.$$

If we want the coefficients of the methods explicitly, we can use

$$\nabla^j = \binom{j}{0} Id^j (E^{-1})^0 - \binom{j}{1} Id^{j-1} (E^{-1})^1 + \cdots + (-1)^j \binom{j}{j} Id^0 (E^{-1})^j,$$

and we get the following theorem:

Theorem 1. Let $q_r(z) = b^0_0 + b^0_1 z + b^0_2 z^2 + \cdots + b^0_r z^r$. Then the implicit method (5) with a constant step length $h > 0$ of $r + 1$ steps can be written as

$$q_r(1) y_n = \sum_{j=1}^{r+1} \varrho_j(1) y_{n-j} + hf_n, \qquad (11)$$

where

$$\varrho_j(1) = (-1)^{j+1} \left( \frac{q_r^{(j)}(1)}{j!} + e^{\lambda h} \frac{q_r^{(j-1)}(1)}{(j-1)!} \right). \qquad (12)$$

Proof. We derive the coefficient of $y_n$ from

$$\sum_{j=0}^{r} b^0_j \binom{j}{0} (E^{-1})^0 y_n = \sum_{j=0}^{r} b^0_j y_n = q_r(1) y_n.$$

The coefficient of $y_{n-i}$ with $0 < i \leq r$ is derived from the terms $\binom{j}{i-1}(E^{-1})^{i-1} + \binom{j}{i}(E^{-1})^i$ of each $\nabla^j$ for $j \geq i$. If we apply the identity to

$$\sum_{j=i}^{r} b^0_j (-1)^i \binom{j}{i} (E^{-1})^i y_n = (-1)^i \sum_{j=i}^{r} b^0_j \binom{j}{i} y_{n-i} = (-1)^i \frac{q_r^{(i)}(1)}{i!} y_{n-i},$$

while if we apply $-e^{\lambda h} E^{-1}$ to

$$\sum_{j=i-1}^{r} b^0_j (-1)^{i-1} \binom{j}{i-1} (E^{-1})^{i-1} y_n = (-1)^{i-1} \frac{q_r^{(i-1)}(1)}{(i-1)!} y_{n-i+1},$$

we get the second part of the coefficient of $y_{n-i}$.


Finally we get the coefficient of $y_{n-r-1}$ from the application of $-e^{\lambda h} E^{-1}$ to

$$b^0_r (-1)^r (E^{-1})^r y_n = (-1)^r \frac{q_r^{(r)}(1)}{r!} y_{n-r},$$

and we conclude because $q_r^{(r+1)}(1) = 0$. □

The vectorial case is more interesting than the scalar one, because most real problems are written as vectorial problems. However, we can get the coefficients for the vectorial case from the scalar one by changing $1$ and $\lambda$ to the identity matrix and $A$, respectively, and interpreting each quotient $C/B$ as $CB^{-1}$ (the matrices involved are functions of $A$ and commute). Many ways to compute the matrix exponential $e^{Ah}$ can be found in the scientific literature (see [19,30] or [38]); however, this is still an open question at this moment. We have implemented the new methods in Mathematica, using its numerical computation of matrix exponentials (but we have also implemented several examples with lower-order methods in C, where we used the code exp4, see [19], to calculate the matrix exponential). Our conclusion is that the accuracy of our method is independent of the exponential code used, but the CPU time does depend on it. However, in this paper we have considered the step length $h$ constant, so the coefficients are calculated only once and the matrix exponential code used does not matter much for the CPU time.

In this way we have derived explicit and implicit adapted algorithms that integrate problem (2) with no truncation error when $y(x)$ belongs to the space generated by the linear combinations of $\langle e^{Ax}, 1, x, \ldots, x^{r-1} \rangle$. Now we would like to build the exponential fitting algorithms that integrate problem (1) with no truncation error when $y(x)$ belongs to the same space. Adapted and exponential fitting algorithms are very similar; the only difference is the presentation of the methods. Exponential fitting algorithms solve the IVP

$$y'(x) = G(x, y(x)) = Ay(x) + F(x, y(x)), \qquad y(x_0) = y_0, \qquad (13)$$

(where $y = [y^1, \ldots, y^m]$, $A$ is an $m \times m$ constant matrix and $G = [g^1, \ldots, g^m]$) exactly when $y(x)$ belongs to the space $\langle e^{Ax}, 1, x, \ldots, x^{r-1} \rangle$, while adapted algorithms solve the IVP

$$y'(x) - Ay(x) = F(x, y(x)), \qquad y(x_0) = y_0, \qquad (14)$$

(where $y = [y^1, \ldots, y^m]$, $A$ is an $m \times m$ constant matrix and $F = [f^1, \ldots, f^m]$) exactly when $y(x)$ belongs to the same space. Then, if we have an implicit exponential fitting algorithm

$$c_0 y_n + c_1 y_{n-1} + \cdots + c_m y_{n-m} = hg_n,$$

the implicit adapted algorithm that integrates the same space exactly will be

$$\bar{c}_0 y_n + c_1 y_{n-1} + \cdots + c_m y_{n-m} = hf_n,$$

where $c_0 - \bar{c}_0 = Ah$ and $f_n = F(x_n, y_n) = G(x_n, y_n) - Ay_n = g_n - Ay_n$. While if we have an explicit exponential fitting algorithm

$$c_0 y_n + c_1 y_{n-1} + \cdots + c_m y_{n-m} = hg_{n-1},$$

the explicit adapted algorithm that integrates the same space exactly will be

$$c_0 y_n + \bar{c}_1 y_{n-1} + \cdots + c_m y_{n-m} = hf_{n-1},$$

where $c_1 - \bar{c}_1 = Ah$.

2.2. Derivation of EF-II-k or A-II-k methods

Now we shall derive the exponential fitting algorithms of type II (we denote them EF-II-k): the methods that integrate problem (1) with no truncation error when $y(x)$ belongs to the space generated by the linear combinations of $\langle e^{Ax}, \ldots, x^{k-1}e^{Ax}, 1 \rangle$.
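Returning for a moment to the vectorial coefficients of Section 2.1: each scalar weight becomes a matrix function of $Ah$. A hedged sketch (the helper name and the use of SciPy's `expm` are our own choices, not the paper's implementation) for the first implicit weight, $b^0_0 = -\lambda h/(1 - e^{\lambda h})$, which becomes $-Ah\,(I - e^{Ah})^{-1}$:

```python
import numpy as np
from scipy.linalg import expm

def b00(Ah):
    """Matrix version of the first implicit weight: -Ah (I - e^{Ah})^{-1}."""
    n = Ah.shape[0]
    # solve X (I - e^{Ah}) = -Ah for X, i.e. X = -Ah (I - e^{Ah})^{-1}
    return -np.linalg.solve((np.eye(n) - expm(Ah)).T, Ah.T).T

A = np.diag([-1.0, -5.0])        # a toy diagonal matrix for checking
h = 0.1
B = b00(A * h)

# On a diagonal matrix the weight must reduce to the scalar formula
for i, lam in enumerate(np.diag(A)):
    w = lam * h
    assert abs(B[i, i] - (-w / (1 - np.exp(w)))) < 1e-12
```

As noted above, any matrix-exponential code can be substituted here; only the CPU time, not the accuracy, depends on that choice.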


Again we begin with the scalar case, but now we shall derive the exponential fitting methods first. We proceed:

$$hg(x) = hDy(x) = \nabla \, \frac{hD}{\nabla} \, y(x), \qquad (15)$$

where $\nabla = Id - E^{-1}$, $\nabla_P u_p = u_p - e^{\lambda h} u_{p-1}$ ($\lambda h$ being the parameter of the method), $Id$ is the identity and $E$ is the operator $E(u_p) = u_{p+1}$. If we write (15) in terms of differences of $g(x)$ at a point $x - sh$, with $s = 0$ (implicit case) or $s = 1$ (explicit case), it must be remembered that

$$hg(x - sh) = hE^{-s} g(x) = \nabla E^{-s} \, \frac{hD}{\nabla} \, y(x).$$

Now we proceed symbolically by writing $hD = \ln(E) = -\ln(Id - \nabla)$; then

$$hg(x - sh) = \nabla E^{-s} \, \frac{hD}{\nabla} \, y(x) = \nabla (Id - \nabla)^s \, \frac{-\ln(Id - \nabla)}{\nabla} \, y(x),$$

and we can write the methods as

$$hg_{n-s} = \nabla \sum_{j=0}^{k} b^s_j \nabla_P^j y_n, \qquad (16)$$

the coefficients $b^s_j$ being those obtained through the Maclaurin series about $\xi = 0$ of the generating functions

$$G_s(\xi, \lambda h) = \left( e^{-\lambda h}(1 - \xi) \right)^s \, \frac{-\ln\left( e^{-\lambda h}(1 - \xi) \right)}{1 - e^{-\lambda h}(1 - \xi)} \qquad (17)$$

(where $s = 0$ in the implicit case and $s = 1$ in the explicit one), because $\nabla = Id - E^{-1}$ and $\nabla_P = Id - e^{\lambda h} E^{-1}$, so $\nabla = Id - e^{-\lambda h}(Id - \nabla_P)$.

Then the weights $b^0_i$ of the implicit EF-II-k methods, with $i = 0, \ldots, 5$, are ($b^0_6$, $b^0_7$, $b^0_8$ and $b^0_9$ can be found in [29]):

$$b^0_0 = \frac{\lambda h}{1 - e^{-\lambda h}},$$

$$b^0_1 = \frac{e^{\lambda h}(-1 + e^{\lambda h} - \lambda h)}{(1 - e^{\lambda h})^2},$$

$$b^0_2 = \frac{e^{\lambda h}(3 - 4e^{\lambda h} + e^{2\lambda h} + 2\lambda h)}{2(e^{\lambda h} - 1)^3},$$

$$b^0_3 = \frac{e^{\lambda h}(-11 + 18e^{\lambda h} - 9e^{2\lambda h} + 2e^{3\lambda h} - 6\lambda h)}{6(e^{\lambda h} - 1)^4},$$

$$b^0_4 = \frac{e^{\lambda h}(25 - 48e^{\lambda h} + 36e^{2\lambda h} - 16e^{3\lambda h} + 3e^{4\lambda h} + 12\lambda h)}{12(e^{\lambda h} - 1)^5},$$

$$b^0_5 = \frac{e^{\lambda h}(-137 + 300e^{\lambda h} - 300e^{2\lambda h} + 200e^{3\lambda h} - 75e^{4\lambda h} + 12e^{5\lambda h} - 60\lambda h)}{60(e^{\lambda h} - 1)^6}.$$

While some weights of the explicit EF-II-k methods are:

$$b^1_0 = \frac{\lambda h}{e^{\lambda h} - 1},$$

$$b^1_1 = \frac{-1 + e^{\lambda h} - \lambda h e^{\lambda h}}{(1 - e^{\lambda h})^2},$$

$$b^1_2 = \frac{-1 - 2\lambda h e^{\lambda h} + e^{2\lambda h}}{2(1 - e^{\lambda h})^3},$$


$$b^1_3 = -\frac{2 + 3e^{\lambda h} - 6e^{2\lambda h} + e^{3\lambda h} + 6\lambda h e^{\lambda h}}{6(1 - e^{\lambda h})^4}.$$

Again, if we prefer, we can obtain the explicit weights from the implicit ones. If we want an explicit expression of the coefficients of the methods, we must remember that

$$\nabla_P^j = \binom{j}{0} Id^j (E^{-1}e^{\lambda h})^0 - \binom{j}{1} Id^{j-1} (E^{-1}e^{\lambda h})^1 + \cdots + (-1)^{j-1}\binom{j}{j-1} Id^1 (E^{-1}e^{\lambda h})^{j-1} + (-1)^j \binom{j}{j} Id^0 (E^{-1}e^{\lambda h})^j, \qquad (18)$$

and we get the following theorem:

Theorem 2. Let $q_k(z) = b^0_0 + b^0_1 z + b^0_2 z^2 + \cdots + b^0_k z^k$. Then the implicit method (16) with a constant step length $h > 0$ of $k + 1$ steps can be written as

$$q_k(1) y_n = \sum_{j=1}^{k+1} \varrho_j(1) y_{n-j} + hg_n, \qquad (19)$$

where

$$\varrho_j(1) = (-1)^{j+1} \left\{ \frac{q_k^{(j)}(1)}{j!} e^{j\lambda h} + \frac{q_k^{(j-1)}(1)}{(j-1)!} e^{(j-1)\lambda h} \right\}. \qquad (20)$$

Proof. From (18) we have

$$\nabla_P^j y_n = y_n - \binom{j}{1} e^{\lambda h} y_{n-1} + \cdots + (-1)^{j-1} \binom{j}{j-1} e^{(j-1)\lambda h} y_{n-j+1} + (-1)^j e^{j\lambda h} y_{n-j};$$

then the coefficient of $y_n$ is $b^0_0 + b^0_1 + \cdots + b^0_k = q_k(1)$.

The coefficient of $y_{n-i}$ with $0 < i \leq k$ is derived from the terms $\binom{j}{i-1}(E^{-1}e^{\lambda h})^{i-1} + \binom{j}{i}(E^{-1}e^{\lambda h})^i$ of each $\nabla_P^j$ for $j \geq i$. If we apply the identity to

$$\sum_{j=i}^{k} b^0_j (-1)^i \binom{j}{i} (E^{-1}e^{\lambda h})^i y_n = (-1)^i e^{i\lambda h} \sum_{j=i}^{k} b^0_j \binom{j}{i} y_{n-i} = (-1)^i e^{i\lambda h} \frac{q_k^{(i)}(1)}{i!} y_{n-i},$$

we get the first part of the coefficient of $y_{n-i}$, while if we apply $E^{-1}$ to

$$\sum_{j=i-1}^{k} b^0_j (-1)^{i-1} \binom{j}{i-1} (E^{-1}e^{\lambda h})^{i-1} y_n = (-1)^{i-1} e^{(i-1)\lambda h} \frac{q_k^{(i-1)}(1)}{(i-1)!} y_{n-i+1},$$

we get the second part of the coefficient of $y_{n-i}$. Finally we get the coefficient of $y_{n-k-1}$ from the application of $E^{-1}$ to

$$b^0_k (E^{-1}e^{\lambda h})^k y_n = (-1)^k e^{k\lambda h} \frac{q_k^{(k)}(1)}{k!} y_{n-k},$$

and we conclude because $q_k^{(k+1)}(1) = 0$. □

Obviously, we can derive adapted methods of type II that integrate problem (2) with no truncation error when $y(x)$ belongs to the space generated by the linear combinations of $\langle e^{Ax}, \ldots, x^{k-1}e^{Ax}, 1 \rangle$. Again, the coefficients of both types of methods are the same except that $\bar{c}_0 = c_0 + Ah$ for implicit methods and $\bar{c}_1 = c_1 + Ah$ for explicit ones. In the following sections we will compare the EF-I-r and EF-II-k exponential fitting methods, and we will draw conclusions about which exponential fitting (or adapted) methods work better.


3. Local truncation error

Theorem 3. The multistep implicit adapted method of type I

$$\nabla_P \sum_{j=0}^{r-1} b^0_j \nabla^j y_n = hf_n \qquad (21)$$

of $r$ steps is convergent of order $r$. Its local truncation error can be expressed in the form

$$h^{r+1} b^0_r (D - A) D^r y(x) + O(h^{r+2}).$$

Proof. As we have seen in [27], the operator providing the local truncation error of (21) can be written in the form

$$\nabla_P b^0_r \nabla^r y(x) + O(h^{r+2}), \qquad (22)$$

and the differences satisfy $\nabla^r y(x) = h^r D^r y(x) + O(h^{r+1})$ and $\nabla_P y(x) = h(D - A) y(x) + O(h^2)$; then the local truncation error of (21) can be written in the form

$$h^{r+1} b^0_r (D - A) D^r y(x) + O(h^{r+2}). \qquad (23)$$ □

Theorem 4. The multistep explicit adapted method of type I

$$\nabla_P \sum_{j=0}^{r-1} b^1_j \nabla^j y_n = hf_{n-1} \qquad (24)$$

of $r$ steps is convergent of order $r$. Its local truncation error can be expressed in the form

$$h^{r+1} b^1_r (D - A) D^r y(x + h) + O(h^{r+2}).$$

Proof. We only need to remember that

$$y(x) = E^{-1} y(x + h) = (Id - \nabla) y(x + h); \qquad (25)$$

then

$$\nabla_P \, \frac{P(-\ln(Id - \nabla))}{L(Id/(Id - \nabla))} \, y(x + h) = hf(x, y(x)). \qquad (26)$$

Then, as we have seen in [27], the operator providing the local truncation error of (24) can be written in the form

$$\nabla_P b^1_r \nabla^r y(x + h) + O(h^{r+2}), \qquad (27)$$

and we can derive the error in a similar way as in Theorem 3. □

Obviously, A-I-r and EF-I-r have the same error.

Theorem 5. The multistep implicit EF-II-k method

$$hg_n = \nabla \sum_{j=0}^{k-1} b^0_j \nabla_P^j y_n \qquad (28)$$

of $k$ steps is convergent of order $k$. Its local truncation error can be expressed in the form

$$h^{k+1} b^0_k (D - A)^k D y(x) + O(h^{k+2}).$$

Proof. In a similar way as for the implicit methods of type I, we can show that the operator providing the local truncation error of (28) can be written in the form

$$\nabla b^0_k \nabla_P^k y(x) + O(h^{k+2}),$$

and the differences satisfy $\nabla y(x) = hDy(x) + O(h^2)$ and $\nabla_P^k y(x) = h^k (D - A)^k y(x) + O(h^{k+1})$; then the local truncation error of (28) can be written in the form

$$h^{k+1} b^0_k (D - A)^k D y(x) + O(h^{k+2}). \qquad (29)$$ □

Theorem 6. The multistep explicit EF-II-k method

$$hg_{n-1} = \nabla \sum_{j=0}^{k-1} b^1_j \nabla_P^j y_n$$

of $k$ steps is convergent of order $k$. Its local truncation error can be expressed in the form

$$h^{k+1} b^1_k (D - A)^k D y(x) + O(h^{k+2}).$$

Proof. Again we only need to remember that

$$y(x) = E^{-1} y(x + h) = (Id - \nabla) y(x + h), \qquad (Id - \nabla) = e^{-Ah}(Id - \nabla_P),$$

and we can derive the error in a similar way as in Theorem 5. □

We now consider a parabolic partial differential equation in a Hilbert space $H$; we can solve the discretised problem as an ODE problem:

$$u'(t) - Au(t) = f(t), \qquad u(0) = u_0, \qquad (30)$$

where $A$ is a selfadjoint positive definite operator and $f$ is a function with values in $H$.

Theorem 7. Let $r \leq 6$. Then there is a constant $C$, independent of the operator $A$, such that if $u_n$ and $U_n$ are the exact solution of the parabolic problem (30) and the approximated solution from the A-I-r method, respectively, and $u$ is sufficiently smooth, then

$$\|U_n - u_n\| \leq C \sum_{j=0}^{r-1} \|U_j - u_j\| + Ch^r \int_0^{hn} \|f^{(r)}\| \, ds. \qquad (31)$$

Proof. See Theorem 5.4 in [43]. □

4. 0-Stability

It is common to assume that if a method is exponentially fitted or adapted then it is 0-stable, but this is not true in the case of BDF-type formulas. There are some regions where the method is not 0-stable, although it is really easy to select a step size that avoids those regions.

In attempting to set up zero-stability, we consider the linear constant homogeneous system

$$y'(x) = Ay(x), \qquad (32)$$

where the eigenvalues $\lambda_i$, $i = 1, \ldots, m$, of the $m \times m$ matrix $A$ satisfy $\mathrm{Re}(\lambda_i) < 0$. It follows that the solutions of the numerical method, when applied to (32), should satisfy $\|y(x)\| \to 0$ as $x \to \infty$. If the eigenvalues of $A$ are distinct, we only need to show that, for each eigenvalue $\lambda$, the roots $\rho_k$ of the characteristic polynomial

$$p(\rho) = \sum_{i=0}^{m} c_i \rho^{m-i} \qquad (33)$$

satisfy $|\rho_k| \leq 1$, and that those of modulus unity are simple. So, in what follows, we will consider the numerical integration of scalar problems.
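As a concrete check (our own sketch, for the two-step implicit A-I method with $r = 2$), the characteristic roots can be computed with NumPy. By construction $e^{\lambda h}$ is always a root of the homogeneous scheme, and for a parameter inside the 0-stability region all roots lie in the unit disc:

```python
import numpy as np

w = -1.0                      # parameter lambda*h, inside the 0-stability region
e = np.exp(w)

# Implicit A-I coefficients for r = 2 (from the list in Section 2.1)
b0 = -w / (1 - e)
b1 = (1 - e * (1 - w)) / (1 - e) ** 2

# Homogeneous scheme nabla_P (b0 + b1*nabla) y_n = 0 written as
# c0*y_n + c1*y_{n-1} + c2*y_{n-2} = 0
c = np.array([b0 + b1, -b1 - e * (b0 + b1), e * b1])

roots = np.roots(c)
# The fitted exponential survives exactly: e^{lambda h} is a characteristic root
assert min(abs(roots - e)) < 1e-12
# and for this parameter every root lies inside the unit disc (0-stability)
assert max(abs(roots)) < 1.0
```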


We show the 0-stability region of the explicit EF-I-2 method in Fig. 1; in general, this is the only explicit EF-I-r method with a good 0-stability region. However, the implicit methods have good 0-stability regions up to order 6 (it is even possible to find good results with EF-I-7 in some problems). We show the 0-stability regions of the implicit EF-I-r methods of second up to seventh order in Fig. 2.

As we can observe in Fig. 3, the 0-stability regions of the implicit EF-I-r and A-I-r algorithms of the same order (shown in [27,42]) are different. However, as we will demonstrate, both classes of algorithms (EF-I-r and A-I-r) are related.

In Fig. 4 we can see the 0-stability regions of the explicit EF-II-k and A-II-k methods (in grey). The 0-stability regions of the EF-II-k algorithms are bigger than those of the EF-I-r methods, and the 0-stability regions of the A-II-k algorithms are bigger than the A-I-r ones, but only the second-order method is 0-stable among the explicit methods (in [28] we show that the explicit EF-II-2 method behaves very well on stiff problems). Not only are the 0-stability regions of the explicit EF-II-k bigger than those of the explicit EF-I-r; the same holds for the implicit methods, as we can observe in Fig. 5. In Fig. 6 we show the 0-stability regions of the A-II-k algorithms; we can observe that these are clearly better than the A-I-r ones (shown in [27,42]).

5. Absolute stability regions

The classical definitions of regions of absolute stability and A-stability were designed for linear multistep methods with constant coefficients. In this section those definitions are modified so as to provide a basis for the linear stability analysis of exponential fitting methods for ordinary differential equations of type (1). The stability properties of the proposed methods are analyzed to demonstrate their relevance to stiff oscillatory problems.
The approach is very similar to the one used in [8] to extend those definitions. In [8], Coleman and Ixaru studied the stability of exponential fitting methods that integrate exactly the problem

$$y''(t) = f(t, y(t)) \qquad (34)$$

when $y(t) = \exp(\pm i\lambda t)$; to study the stability properties of the methods, they apply them to the test equation

$$y''(t) = -w^2 y(t), \qquad (35)$$

and then plot their regions of stability in the $\mu$–$\theta$ plane ($\mu = wh$ and $\theta = \lambda h$).

Fig. 1. 0-stability region of the explicit EF-I-2 method. The horizontal and vertical axes represent Re(hλ) and Im(hλ), respectively.



Fig. 2. 0-stability regions (in grey) of the implicit EF-I-r methods. The horizontal and vertical axes represent Re(hλ) and Im(hλ), respectively. (a) Second-order, (b) third-order, (c) fourth-order, (d) fifth-order, (e) sixth-order, (f) seventh-order.



Fig. 3. 0-stability regions (in grey) of the implicit EF-I-r and A-I-r algorithms. The horizontal and vertical axes represent Re(hλ) and Im(hλ), respectively. (a) EF-I-3, (b) A-I-3, (c) EF-I-5, (d) A-I-5.

In our case, we will consider as test problem the scalar equation

$$y'(x) = \mu y(x), \qquad (36)$$

where $\mathrm{Re}(\lambda) < 0$ and $\mathrm{Re}(\mu) < 0$. That is, we introduce the value $\lambda$ in the method, while the true solution depends on the exponential of $\mu$. When we apply the methods to this equation we obtain the following stability polynomial:

$$\sum_{i=0}^{m} c_i(\lambda h) \rho^{m-i} = \mu h \, \rho^{m-s}, \qquad (37)$$

with $s = 1$ in the explicit case and $s = 0$ in the implicit one.

Definition. The linear multistep method is said to be absolutely stable for a given pair $u = (u_1 = \lambda h, u_2 = \mu h)$ if, for that $u$, all the roots of the stability polynomial satisfy $|\rho_k| < 1$, and to be absolutely unstable for that $u$ otherwise.
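A minimal numerical sketch of this definition (our own illustration: the two-step implicit method with $r = 2$, $s = 0$, using the relation from Section 2 that the implicit EF coefficient $c_0$ equals the adapted one plus $\lambda h$):

```python
import numpy as np

w = -1.0                       # fixed first component u1 = lambda*h
e = np.exp(w)
b0 = -w / (1 - e)
b1 = (1 - e * (1 - w)) / (1 - e) ** 2

# y-coefficients of the implicit EF scheme (EF c0 = adapted c0 + lambda*h)
c0 = b0 + b1 + w
c1 = -b1 - e * (b0 + b1)
c2 = e * b1

def abs_stable(mu_h):
    """Are all roots of the stability polynomial (37), s = 0, in the unit disc?"""
    rho = np.roots([c0 - mu_h, c1, c2])
    return np.max(np.abs(rho)) < 1.0

# Absolute stability along the negative real mu*h axis (implicit case)
assert all(abs_stable(m) for m in [-0.01, -1.0, -10.0, -100.0])
```

When $\mu h = \lambda h$ the method integrates the test equation exactly, and one root of the polynomial equals $e^{\lambda h}$.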



Fig. 4. 0-stability regions (in grey) of the explicit EF-II-k and A-II-k algorithms. The horizontal and vertical axes represent Re(hλ) and Im(hλ), respectively. (a) EF-II-2, (b) A-II-2, (c) EF-II-3, (d) A-II-3, (e) EF-II-4, (f) A-II-4.



Fig. 5. 0-stability regions (in grey) of the implicit EF-II-k algorithms. The horizontal and vertical axes represent Re(hλ) and Im(hλ), respectively. (a) Second-order, (b) third-order, (c) fourth-order, (d) fifth-order, (e) sixth-order, (f) seventh-order.



Fig. 6. 0-stability regions (in grey) of the implicit A-II-k algorithms. The horizontal and vertical axes represent Re(hλ) and Im(hλ), respectively. (a) Second-order, (b) third-order, (c) fourth-order, (d) fifth-order, (e) sixth-order, (f) seventh-order.


Definition. The linear multistep method is said to have a region of absolute stability $R_A$, where $R_A$ is a region of the complex space $\mathbb{C} \times \mathbb{C}$, if it is absolutely stable for all $u \in R_A$.

Again, as we mentioned in [42], it is impossible to plot the full regions of absolute stability. However, we can fix the first component of $u$ (that is to say, a value of $u_1 = \lambda h$ that makes the method zero-stable) and plot in the complex plane the values of the second component $\mu h$ that make the method absolutely stable. This is a difference with respect to the study of Coleman and Ixaru: in [8] their parameters $\mu = wh$ and $\theta = \lambda h$ were real, so they could plot the regions depending on both parameters, but here the parameters are complex and we need to fix one of them.

This new definition is needed because we do not know the matrix $A$ with which we calculate the coefficients, since it can change as a function of the time variable or with the solution of the IVP. We therefore have to consider the possibility of differences between the matrix $A$ used in the method and the optimal one.

In [28] and [42] we have shown several plots of absolute stability regions of the A-I-r algorithms. Those figures showed:

1. The regions of instability can be very large if the parameter $\lambda h$ is very near the 0-instability region.
2. The region grows when $\lambda h \to -\infty$ with $\lambda h \in \mathbb{R}^-$; that is, when the parameter $|\lambda h|$ is bigger, the error can be bigger with the method remaining stable.
3. If $\lambda h \in \mathbb{C}^-$, the region of absolute stability grows when $|\lambda h| \to \infty$ (with $\lambda h$ in the 0-stability region).
4. If $\lambda_1 h = a + ib \in \mathbb{C}^-$, with $a, b \in \mathbb{R}$, and $\lambda_2 h = a - ib$, the corresponding regions of absolute stability are symmetric.

Now we are going to demonstrate the same properties for the EF-I-r and EF-II-k methods. First, in Fig. 7 we compare the absolute stability regions of the implicit adapted and EF-I-r methods with several parameters $\lambda h$ (the parameter being either real or complex, but inside the 0-stability region). We take the third-order method (we show the regions of absolute stability of the algorithms A-I-3 and EF-I-3). We can observe that the absolute stability regions of the EF-I-3 and A-I-3 algorithms are similar; the differences are regions translated with $\lambda h$. We reach the same conclusion with EF-I-r and A-I-r algorithms of other orders. Hence, the EF-I-r algorithms have properties 1 to 4.

Now we show the same properties for the EF-II-k algorithms. There are several methods, and we would have to show several plots of absolute stability regions (changing the parameter $\lambda h$) for each method in order to verify every conclusion; that would require a very large number of plots. This is the reason why we show only a few of them; we expect that the reader understands the principal ideas and how the other figures would look.

First, we compare the absolute stability regions of the EF-II-k algorithms with those of the EF-I-r algorithms. In Fig. 8 we show several plots of absolute stability regions of the implicit EF-II-3. We can observe that, in general, the absolute stability regions of the EF-II-3 method are bigger than those of the EF-I-3 method. We reach the same conclusion with EF-I-r and EF-II-k methods of other orders. In [28], plots of the (explicit and implicit) EF-II-2 methods can be observed, and properties 1 to 4 were proved there. Now we show in Fig. 9 plots of the absolute stability regions of the implicit EF-II-4 and EF-II-6 algorithms with $\lambda h$ (the parameter of the methods) real. With all those plots we can conclude that the region grows when $\lambda h \to -\infty$ with $\lambda h \in \mathbb{R}^-$. Finally, we show in Fig. 10 plots of the absolute stability regions of the EF-II-5 method with $\lambda h$ complex, to conclude that the new methods have properties 1–4.



Fig. 7. Absolute stability regions (in grey) of the EF-I-3 and A-I-3 implicit algorithms. The horizontal and vertical axes represent Re(hλ) and Im(hλ), respectively. (a) λh = −1, A-I-3 algorithm. (b) λh = −1, EF-I-3 algorithm. (c) λh = −3 − 10i, A-I-3 algorithm. (d) λh = −3 − 10i, EF-I-3 algorithm. (e) λh = −3 + 10i, A-I-3 algorithm. (f) λh = −3 + 10i, EF-I-3 algorithm.



Fig. 8. Absolute stability regions (in grey) of the implicit EF-II-3 algorithm. The horizontal and vertical axes represent Re(hλ) and Im(hλ), respectively. (a) λh = −1. (b) λh = −3 − 10i. (c) λh = −3 + 10i.

6. Numerical examples

We have explained that the improvement of these methods is not only the fact that one family is explicit, but also their behavior on problems where the Jacobian of the function has a positive eigenvalue (while the solution of the ODE contains no growing exponentials) and on stiff oscillatory problems. We say that an ODE $y' = f(x, y)$ is stiff oscillatory if the Jacobian $f_y$ has eigenvalues $\lambda_i, \lambda_j$, where $\mathrm{Re}(\lambda_i) \ll 0$ compared with the interval of integration and $|\mathrm{Im}(\lambda_j)/\mathrm{Re}(\lambda_j)| \gg 1$. On this kind of problem we have compared the new methods with other well-known methods for stiff problems, such as Radau5 and MEBDF (see [6] or [17]); these are variable-step, higher-order, implicit methods. We have also compared our results with other solvers from MATLAB (see [37]). But those are variable-step methods and our ODE solvers are presented with fixed step length, so we have also used a fixed-step-length RadauIIA. In many examples the results with exponential fitting methods are clearly better, so we have also compared with other fixed-step exponential fitting methods (see [22]).


Fig. 9. Absolute stability regions (in grey) of the EF-II-6 and EF-II-4 implicit algorithms. The horizontal and vertical axes represent Re(hλ) and Im(hλ), respectively. (a) λh = −1, EF-II-6 algorithm. (b) λh = −0.1, EF-II-4 algorithm. (c) λh = −5, EF-II-6 algorithm. (d) λh = −0.5, EF-II-4 algorithm.

We divide this section into several subsections:

(i) First, we study ways to calculate the parameter Ah of the method. Many papers (see [12,14,15,20,24,31,33,36], or [38], for example) take A to be the jacobian of the function; however, several works (see [25] or [11], for example) show that in many cases the solutions of the problem bear no good relation to the eigenvalues of the jacobian of the function (we will see several examples in this section). In those cases we need other ways to calculate the parameter of the method. In many stiff oscillatory problems we can choose a constant parameter Ah that allows us to integrate the solution during many iterations; in those cases the coefficients have to be recomputed only a small number of times.

(ii) Then we give several examples of each class of problem where the methods work well: problems where the jacobian of the function has a positive eigenvalue (while the solution of the ODE contains no positive exponentials), stiff oscillatory problems, ODEs derived from the discretization of PDEs, and other kinds of stiff examples.

6.1. How we calculate the parameter of the method

In this subsection we show four new procedures to perform this calculation:


Fig. 10. Absolute stability regions (in grey) of the EF-II-5 algorithm; the parameter of the method is complex. The horizontal and vertical axes represent Re(hλ) and Im(hλ), respectively. (a) λh = −1 + 10i, (b) λh = −1 − 10i, (c) λh = −5 + 50i, (d) λh = −5 − 50i.

6.1.1. First procedure

We consider an example of the kind y′(x) = B(x)y(x) (see [25]):

$$y_1'(x) = y_2(x), \qquad y_2'(x) = \frac{1-x}{x}\,y_1(x) + \frac{1-2x}{x}\,y_2(x), \qquad y_1(0.1) = e^{-0.1}, \quad y_2(0.1) = -e^{-0.1}, \tag{38}$$

with solution

$$y_1(x) = e^{-x}, \qquad y_2(x) = -e^{-x}. \tag{39}$$

The eigenvalues of B(x) are λ₁ = −1 and λ₂ = (1 − x)/x, so that λ₂ > 0 for x ∈ (0, 1), and this is a very hard problem on the interval [0.1, 3] (for example) for many ODE solvers, as we shall see below. However, we consider the adapted explicit Euler method

$$Ah\,(e^{Ah} - \mathrm{Id})^{-1} y_{n+1} - Ah\,(e^{Ah} - \mathrm{Id})^{-1} y_n = h f_n; \tag{40}$$

its local truncation error is

$$\frac{y''(x) - A y'(x)}{2}\,h^2 - \frac{A^2 y'(x) - 3A y''(x) + 2 y'''(x)}{12}\,h^3 + O(h^4).$$
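For a scalar problem, formula (40) reduces to y_{n+1} = y_n + λ⁻¹(e^{λh} − 1) f_n, which reproduces y′ = λy exactly up to round-off when the fitted parameter matches. A minimal sketch (the function name is ours):

```python
import math

def adapted_euler_scalar(lam, h, x0, y0, f, steps):
    """Adapted explicit Euler (40) for a scalar ODE y' = f(x, y):
    y_{n+1} = y_n + (exp(lam*h) - 1)/lam * f(x_n, y_n)."""
    phi = (math.exp(lam * h) - 1.0) / lam
    x, y = x0, y0
    for _ in range(steps):
        y = y + phi * f(x, y)
        x = x + h
    return y

# exact (up to round-off) on y' = -3y when the fitted parameter is lam = -3:
y = adapted_euler_scalar(-3.0, 0.1, 0.0, 1.0, lambda x, y: -3.0 * y, 10)
```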


If we consider

$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \tag{41}$$

and we try to solve (y''(x) − Ay'(x))|_{x_0} = 0, we reach (B'(x) + (B(x) − A)·B(x)) · y_0 = 0 (here y_0 = y(0.1) and x_0 = 0.1), and solving we get a = −1 + b and c = 1 + d; these parameters also make (A²y'(x) − 3Ay''(x) + 2y'''(x))|_{x_0} = 0. Then we can expect them to be good parameters. In the next subsection we will give the results with different parameters Ah satisfying a = −1 + b and c = 1 + d.

We also study another IVP of the same class from [25]:

$$y_1'(x) = y_2(x), \qquad y_2'(x) = \frac{\cos x - \sin x}{2 + \cos x + \sin x}\,y_1(x) - \frac{2(1 + \sin x)}{2 + \cos x + \sin x}\,y_2(x), \qquad y_1(0) = 2, \quad y_2(0) = 1, \tag{42}$$

with solution

$$y_1(x) = 2 + \sin x, \qquad y_2(x) = \cos x. \tag{43}$$

The eigenvalues of B(x) are now λ₁ = −1 and λ₂ = (cos x − sin x)/(2 + cos x + sin x), so there are many subintervals of [0, 100] where λ₂ > 0. However, the solutions of the problem are stable. If we take x = 0 and y_0 = (2, 1)ᵗ and solve y''(x) − Ay'(x) = (B'(x) + (B(x) − A)·B(x)) · y_0 = 0, we get c = −1, a = 0; with these values we solve (A²y'(x) − 3Ay''(x) + 2y'''(x))|_{x_0} = 0 and we get b = 1, d = 0. Notice that if y''(x_0) = Ay'(x_0), then (A²y'(x) − 3Ay''(x) + 2y'''(x))|_{x_0} = (−2Ay''(x) + 2y'''(x))|_{x_0}, so we only have to solve a linear problem.

We can find similar results if we choose algorithms of higher order, or if the dimension of the IVP is larger. In general we have:

Theorem 8. Let (2) be an IVP of dimension m. If we use EF-I-r to integrate this problem with a parameter Ah such that (at each step n) (y^{(r+1)}(x) − Ay^{(r)}(x)) = 0, …, (y^{(r+m)}(x) − Ay^{(r+m−1)}(x)) = 0 for all x ∈ [x_{n−1}, x_n], then the method is consistent of order at least r + m.

Proof. The first m + 1 terms of the local truncation error are

$$\nabla_P \sum_{j=r}^{r+m} \beta_{0j} \nabla^j y(x_n) + O(h^{r+m+2}), \tag{44}$$

and we know that $\nabla_P \nabla^j s(x) = \nabla^j \nabla_P s(x)$ (because $\nabla_P \nabla s(x) = \nabla \nabla_P s(x)$), so the local truncation error is

$$\sum_{j=r}^{r+m} \beta_{0j} \nabla^j \nabla_P y(x_n) + O(h^{r+m+2}); \tag{45}$$

but the terms $\nabla^j \nabla_P y(x_n)$ with j = r, …, r + m − 1 are

$$\nabla^j \nabla_P y(x_n) = \nabla^j y(x_n) - e^{Ah} \nabla^j y(x_{n-1}) = c_{j,j} h^j \big(y^{(j)}(x_n) - e^{Ah} y^{(j)}(x_{n-1})\big) + c_{j,j+1} h^{j+1} \big(y^{(j+1)}(x_n) - e^{Ah} y^{(j+1)}(x_{n-1})\big) + \cdots \tag{46}$$

Then we can write the local truncation error as

$$\nabla_P \sum_{j=r}^{r+m} \beta_{0j} \nabla^j y(x_n) + O(h^{r+m+2}) = C_r h^r \big(y^{(r)}(x_n) - e^{Ah} y^{(r)}(x_{n-1})\big) + \cdots + C_{r+m-1} h^{r+m-1} \big(y^{(r+m-1)}(x_n) - e^{Ah} y^{(r+m-1)}(x_{n-1})\big) + \Big(\beta_{0,r+m} \nabla_P \nabla^{r+m} y(x_n) + C_{r+m} h^{r+m} \big(y^{(r+m)}(x_n) - e^{Ah} y^{(r+m)}(x_{n-1})\big)\Big) + O(h^{r+m+2}). \tag{47}$$

If we could choose a matrix A of dimension m × m such that (y^{(r+1)}(x) − Ay^{(r)}(x)) = 0, …, (y^{(r+m)}(x) − Ay^{(r+m−1)}(x)) = 0 for all x ∈ [x_{n−1}, x_n], then on an interval D such that [x_{n−1}, x_n] ⊂ D we would have y^{(j)}(x) = e^{A(x−x_{n−1})} y^{(j)}(x_{n−1}) for j = r, …, r + m − 1.


In this case y^{(j)}(x_n) − e^{Ah} y^{(j)}(x_{n−1}) = 0 for j = r, …, r + m − 1, so the local truncation error would be

$$\Big(\beta_{0,r+m} \nabla_P \nabla^{r+m} y(x_n) + C_{r+m} h^{r+m} \big(y^{(r+m)}(x_n) - e^{Ah} y^{(r+m)}(x_{n-1})\big)\Big) + O(h^{r+m+2}), \tag{48}$$

and when h → 0 we have

$$y^{(r+m)}(x_n) - e^{Ah} y^{(r+m)}(x_{n-1}) \to h\big(y^{(r+m+1)}(x_n) - A y^{(r+m)}(x_n)\big) + O(h^2). \tag{49}$$

Then, if we could choose a parameter A (an m × m matrix) such that (y^{(r+1)}(x) − Ay^{(r)}(x)) = 0, …, (y^{(r+m)}(x) − Ay^{(r+m−1)}(x)) = 0 for all x ∈ [x_{n−1}, x_n], the local truncation error would be O(h^{r+m+1}). □

Of course we cannot impose this condition for all x ∈ [x_{n−1}, x_n] before knowing the solution of the IVP; we can only compute A (an m × m matrix) such that, at step n, (y^{(r+1)}(x) − Ay^{(r)}(x))|_{(x_{n−1}, y_{n−1})} = 0, …, (y^{(r+m)}(x) − Ay^{(r+m−1)}(x))|_{(x_{n−1}, y_{n−1})} = 0. In this way we can solve many IVPs of the class y′(x) = B(x)y(x) in which some eigenvalue of B(x) is positive but the solution is stable. We must only be careful about the interval on which the chosen parameter A remains good; in many cases it is necessary to check and change the parameter after several steps.

6.1.2. Second procedure

Another conclusion we can draw from these examples is the relation between the solution of the problem and the eigenvalues of the chosen parameter: in the first example the solutions of the IVP were a₁e^{−x} + a₂ and the eigenvalues of A were λ₁ = λ₂ = −1, while in the second example the solutions of the IVP were a₁ sin x + a₂ cos x + a₃ and the eigenvalues of A were λ₁ = i, λ₂ = −i. This idea gives another way to obtain a ''good'' parameter for the method. Again we consider the simplest case, an ODE of dimension 2, and suppose now that we have a rough idea of the behaviour of the solution of the IVP. We only need to know whether the solutions involve trigonometric functions, in which case we try to find a ''good''

$$A = \begin{pmatrix} 0 & k_1 \\ k_2 & 0 \end{pmatrix} \tag{50}$$

(with k₁k₂ < 0), or whether the solutions involve negative exponentials, in which case we try to find a parameter

$$A = \begin{pmatrix} k_1 & 0 \\ 0 & k_2 \end{pmatrix} \tag{51}$$

(with k₁ < 0, k₂ < 0). For example, we consider a problem that appears in [16]:

$$y_1'(x) = -1002\,y_1(x) + 1000\,y_2(x)^2, \qquad y_2'(x) = y_1(x) - y_2(x)\big(1 + y_2(x)\big), \qquad y_1(0) = 1, \quad y_2(0) = 1, \tag{52}$$

with solution

$$y_1(x) = e^{-2x}, \qquad y_2(x) = e^{-x}.$$

We suppose that the solution of the IVP depends on negative exponentials plus polynomials, so the parameter will be as in (51). Many papers describe Taylor series methods but, like most explicit schemes, the Taylor methods are not suitable for stiff problems; there are, however, modifications of the Taylor methods that deal with these situations (see [3,9,21] or [23]). In [16] a way is proposed to find that, near x = 0, y₁(x) ≈ 1 − 2x + 2x² − (4/3)x³ + (2/3)x⁴ and y₂(x) ≈ 1 − x + (1/2)x² − (1/6)x³ + (1/24)x⁴.

Then, if we are going to use for example EF-I-3, we look for k₁, k₂ such that y₁(x) ≈ κ_{1,0} + κ_{1,1}x + κ_{1,2}x² + r₁e^{k₁x} and y₂(x) ≈ κ_{2,0} + κ_{2,1}x + κ_{2,2}x² + r₂e^{k₂x}.


Now we use the standard Taylor series of the exponential function, so our conditions in the first variable are

$$y_1(x) \approx 1 - 2x + 2x^2 - \frac{4}{3}x^3 + \frac{2}{3}x^4 = \kappa_{1,0} + \kappa_{1,1}x + \kappa_{1,2}x^2 + r_1\Big(1 + k_1 x + \frac{k_1^2}{2}x^2 + \frac{k_1^3}{6}x^3 + \frac{k_1^4}{24}x^4\Big);$$

then we look for

$$-8 = r_1 k_1^3, \qquad 16 = r_1 k_1^4,$$

and we get k₁ = −2. In the same way we get k₂ = −1. Obviously, with this parameter the method integrates (52) exactly; even more, with this parameter all the methods derived here are capable of the exact integration.

In a similar way we can find good parameters when the solutions of the IVP depend on trigonometric functions plus polynomials. For example, if we consider Example 2 in [40],

$$y''(x) + 10^8\big(y(x) - \cos x\big)^3 = -\cos x, \qquad y(0) = 1, \quad y'(0) = 0,$$

in our case the problem becomes

$$y_1'(x) = y_2(x), \qquad y_2'(x) = -10^8\big(y_1(x) - \cos x\big)^3 - \cos x, \qquad y_1(0) = 1, \quad y_2(0) = 0, \tag{53}$$

with solution

$$y_1(x) = \cos x, \qquad y_2(x) = -\sin x.$$
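The coefficient matching used above for problem (52) reduces to a ratio of consecutive Taylor coefficients: from c₃ = rk³/6 and c₄ = rk⁴/24 we get k = 4c₄/c₃. A small sketch with exact rational arithmetic (`fit_exponent` is our name, not the paper's):

```python
from fractions import Fraction as F

def fit_exponent(c3, c4):
    """Second-procedure fit: match the x^3 and x^4 Taylor coefficients of
    kappa0 + kappa1*x + kappa2*x**2 + r*exp(k*x) against those of the
    solution.  Since c3 = r*k**3/6 and c4 = r*k**4/24, k = 4*c4/c3."""
    return 4 * c4 / c3

# Taylor coefficients of y1(x) = exp(-2x): ... - (4/3)x^3 + (2/3)x^4
k1 = fit_exponent(F(-4, 3), F(2, 3))   # -> -2
# Taylor coefficients of y2(x) = exp(-x): ... - (1/6)x^3 + (1/24)x^4
k2 = fit_exponent(F(-1, 6), F(1, 24))  # -> -1
```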

Now suppose that we are going to use EF-I-2. The first step, as before, is to compute the Taylor series of each variable, here of order 3 (this order is always the number of steps of the numerical method plus the dimension of the IVP minus 1, so we get a system with as many equations as unknowns). In our example we get (obviously)

$$y_1(x) \approx 1 - \frac{x^2}{2}, \qquad y_2(x) \approx -x + \frac{x^3}{6},$$

while if we take A as in (50) we suppose that the solutions are

$$y_1(x) \approx \kappa_{1,0} + \kappa_{1,1}x + r_1\cos\big(\sqrt{-k_1k_2}\,x\big) + r_2\sqrt{\tfrac{k_1}{k_2}}\,\sin\big(\sqrt{-k_1k_2}\,x\big), \qquad y_2(x) \approx \kappa_{2,0} + \kappa_{2,1}x + r_2\cos\big(\sqrt{-k_1k_2}\,x\big) - r_1\sqrt{\tfrac{k_2}{k_1}}\,\sin\big(\sqrt{-k_1k_2}\,x\big).$$

Now, using the Taylor series of the trigonometric functions, we get the following equations:

$$r_1 k_1 k_2 = -1, \qquad r_2\sqrt{k_1^4 k_2^2} = 0, \qquad r_2 k_1 k_2 = 0, \qquad r_1\sqrt{k_2^4 k_1^2} = 1.$$

From the first and third equations we conclude that r₂ = 0 and, because y₁′(x) = y₂(x), that k₁ = 1, r₁ = 1, k₂ = −1. Again, with this parameter any of the methods explained here integrates the problem exactly.

6.1.3. Third procedure

Another general equation on which most of the classical methods can have problems is

$$y''(x) + k^2\,y(x) + F\big(y(x), y'(x)\big) = 0, \qquad k \gg 0.$$

In [4] we can see a way to obtain the equivalent linearization of the nonlinear equation:

$$y''(x) - \frac{s'}{s}\,y'(x) + \big(k^2 + k\phi_0'\big)\,y(x) = 0,$$


where

$$s' = -\frac{s_0}{2k}, \qquad \phi_0' = -\frac{s_0'}{2sk},$$

and we get s₀, s₀′ from the Fourier series

$$F\big(s\sin\xi,\, sk\cos\xi\big)\cos\xi = \frac{s_0}{2} + \sum_{n=1}^{\infty}\big(s_n\cos(n\xi) + r_n\sin(n\xi)\big), \qquad F\big(s\sin\xi,\, sk\cos\xi\big)\sin\xi = \frac{s_0'}{2} + \sum_{n=1}^{\infty}\big(s_n'\cos(n\xi) + r_n'\sin(n\xi)\big).$$

Other ways to obtain linear equations from nonlinear ones have been proposed. The solutions of both IVPs are very similar, so we can choose the parameter Ah from the linear problem, and this Ah allows us to get good results.

6.1.4. Fourth procedure

Finally, we consider another special kind of problem, the following linear stiff problem (see [25] or [22]):

$$y_1'(x) = -2y_1(x) + y_2(x) + 2\sin x, \qquad y_2'(x) = 998\,y_1(x) - 999\,y_2(x) + 999(\cos x - \sin x), \qquad y_1(0) = 2, \quad y_2(0) = 3, \tag{54}$$

with solution

$$y_1(x) = 2e^{-x} + \sin x, \qquad y_2(x) = 2e^{-x} + \cos x,$$

and eigenvalues of the jacobian of the function λ₁ = −1, λ₂ = −1000. We add two new variables, y₃(x) = sin x and y₄(x) = cos x; then the problem is

$$y_1'(x) = -2y_1(x) + y_2(x) + 2y_3(x), \qquad y_2'(x) = 998\,y_1(x) - 999\,y_2(x) + 999\big(y_4(x) - y_3(x)\big), \qquad y_3'(x) = y_4(x), \qquad y_4'(x) = -y_3(x), \tag{55}$$

with y₁(0) = 2, y₂(0) = 3, y₃(0) = 0, y₄(0) = 1.
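A quick consistency check of the augmented system (55): its right-hand side, evaluated at the initial data, must reproduce the derivative of the exact solution y₁ = 2e^{−x} + sin x, y₂ = 2e^{−x} + cos x at x = 0, which is (−1, −2, 1, 0). A sketch (the function name is ours):

```python
def f_augmented(y):
    """RHS of the augmented system (55): y3 = sin(x) and y4 = cos(x) are
    carried as extra state variables, which makes the forcing autonomous."""
    y1, y2, y3, y4 = y
    return (-2 * y1 + y2 + 2 * y3,
            998 * y1 - 999 * y2 + 999 * (y4 - y3),
            y4,
            -y3)

# at the initial data (2, 3, 0, 1) this returns (-1, -2, 1, 0)
```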

We have introduced two variables and the dimension of the problem grows, but the results clearly improve. We can use this idea in many examples, especially in all the problems of the type

$$y'(x) = Ay(x) + F(x) + G(x) + P(x), \tag{56}$$

where F(x) = (f₁(x), …, f_m(x)) with each f_i(x) in the space generated by ⟨sin(k_{i,1}x), cos(k_{i,1}x)⟩, G(x) = (g₁(x), …, g_m(x)) with each g_i(x) in the space generated by ⟨e^{k_{i,2}x}⟩, and P(x) = (p₁(x), …, p_m(x)) with each p_i(x) in the space generated by ⟨1, x, …, x^{r−1}⟩. With these conclusions we are able to integrate many different problems with good results.

6.2. Positive eigenvalue examples

Problem 1. We have integrated the known example (38) on the interval [0.1, 3] with the ODE solvers of MATLAB, with MEBDF from J. Cash's web page, and with the adapted explicit Euler method with parameters:
(i) Example 1: h = 0.1, b = 1/3, d = −1/2.
(ii) Example 2: h = 0.1, b = 0, d = −1.


Errors are measured at x = 3, with Rtol = Atol = 10⁻⁸ in the MATLAB ODE solvers and MEBDF; N.f.e. means the number of function evaluations. Obviously, the results of most of the ODE solvers are not good, and the reason is the eigenvalue λ₂: most of the classical methods do not follow the solution (see Table 1).

Problem 2. We choose the problem (42), which appears in [25], and we have integrated it on the interval [0, 100]. Table 2 shows the results obtained with the ODE solvers of MATLAB and MEBDF (with Rtol = Atol = 10⁻⁸) and with the adapted explicit Euler method, using the jacobian of the function as the parameter (with h = 10⁻¹), in which case the method is named AdEulJacobian, and using the first procedure to calculate the parameter (with h = 10), in which case the method is named AdEul1pr.

Problem 3. A nonlinear problem where one of the eigenvalues of the jacobian of the function is positive:

$$y'(x) = \frac{5}{2}\big({-3} + 5\cos(20x) + \sin(20x)\big)\,y(x) + \big(5\cos(10x) - 5\sin(10x)\big)\,z(x), \qquad z'(x) = -5\sin(10x)\big(5\cos(10x) + \sin(10x)\big)\,y(x) - \frac{5}{2}\big(3 + 5\cos(20x) + \sin(20x)\big)\,z(x), \qquad y(0) = -\frac{13}{2}, \quad z(0) = -4, \tag{57}$$

whose solution is

$$y(x) = 5e^{-10x}\,\frac{3}{2}\big({-\cos(10x)} + \sin(10x)\big) + e^{-5x}\big(\cos(10x) - \sin(10x)\big), \qquad z(x) = 5e^{-10x}\Big({-\cos(10x)} - \frac{3}{2}\sin(10x)\Big) + e^{-5x}\big(\cos(10x) + \sin(10x)\big). \tag{58}$$

This problem appears in [11, p. 11]; it is example 1.1.5 there, with d₁ = 5, d₂ = −20, w = 10 and cot(η) = 1/5. The eigenvalues of the matrix B(x) are λ₁ = 5, λ₂ = −20. We have solved this problem on [0, 5], and in Fig. 11 we give the errors at x = 5 in ‖·‖₂ for different MATLAB ODE solvers. MATLAB ode45, MEBDF and Radau give bad results on this problem.

6.3. Stiff oscillatory (or highly oscillatory) nonlinear problems

Problem 4. The nonlinear stiff problem

$$y''(x) + 100^2\,y(x) = -10^{-3}\,y(x)^3, \qquad y(0) = 3, \quad y'(0) = 0, \tag{59}$$

Table 1. Error in the numerical integration of problem 1.

Method | Error y1 | Error y2 | Number of f.e.
MATLAB ode23s | 1.2192 · 10⁻⁴ | 1.2192 · 10⁻⁴ | 1802
MATLAB ode15s | 0.0013 | 0.0013 | 144
MATLAB ode45 | 0.4859 | 0.4859 | 169
MATLAB ode113 | 0.0019 | 0.0019 | 69
MEBDF | 0.011023 | 0.011023 | 67
Adapted Euler Ex. 1 | 1.94289 · 10⁻¹⁶ | 6.245 · 10⁻¹⁷ | 29
Adapted Euler Ex. 2 | <10⁻²⁰ | 2.35922 · 10⁻¹⁶ | 29
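The family a = −1 + b, c = 1 + d used in these runs can be verified numerically from the first-procedure condition (B′(x) + (B(x) − A)·B(x))·y₀ = 0 at x = 0.1. A small sketch, with helper names of our own:

```python
import math

def residual_norm(b, d, x=0.1):
    """First-procedure check for problem (38): with a = -1 + b and
    c = 1 + d, the matrix A should satisfy
    (B'(x) + (B(x) - A) B(x)) y0 = 0 at x = 0.1, y0 = (e^-x, -e^-x)."""
    a, c = -1.0 + b, 1.0 + d
    y0 = (math.exp(-x), -math.exp(-x))
    B  = ((0.0, 1.0), ((1 - x) / x, (1 - 2 * x) / x))
    Bp = ((0.0, 0.0), (-1 / x**2, -1 / x**2))       # elementwise B'(x)
    # w = B y0 (this equals y0'(x))
    w = (B[0][0] * y0[0] + B[0][1] * y0[1],
         B[1][0] * y0[0] + B[1][1] * y0[1])
    # residual r = B' y0 + B w - A w
    r1 = (Bp[0][0] * y0[0] + Bp[0][1] * y0[1]
          + B[0][0] * w[0] + B[0][1] * w[1] - (a * w[0] + b * w[1]))
    r2 = (Bp[1][0] * y0[0] + Bp[1][1] * y0[1]
          + B[1][0] * w[0] + B[1][1] * w[1] - (c * w[0] + d * w[1]))
    return abs(r1) + abs(r2)
```

Both parameter choices of Problem 1 (b = 1/3, d = −1/2 and b = 0, d = −1) give a residual at round-off level.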

Table 2. Error in the numerical integration of problem 2.

Method | Error y1 | Error y2 | Number of f.e.
ode23s | 6.2458 · 10⁻⁴ | 3.6604 · 10⁻⁴ | 117,062
ode15s | 0.0294 | 0.018 | 2539
ode45 | 0.3446 | 0.1245 | 6889
ode113 | 0.1173 | 0.0806 | 1131
MEBDF | 4.1092 · 10⁻⁷ | 2.38712 · 10⁻⁷ | 1296
AdEulJacobian | 0.0177 | 0.0144 | 1000
AdEul1pr | 6.21725 · 10⁻¹⁵ | 2.66454 · 10⁻¹⁵ | 10
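The AdEul1pr row can be reproduced up to round-off: with the fitted matrix A = [[0, 1], [−1, 0]] from the first procedure, the adapted explicit Euler method (40) integrates (42) exactly even with h = 10. A self-contained sketch (exp(Ah) is a plane rotation for this A; function names are ours):

```python
import math

def adapted_euler_step(y, fx, h):
    """One step of the adapted explicit Euler method (40) for the fitted
    matrix A = [[0, 1], [-1, 0]] (eigenvalues +-i): here
    M = A^{-1}(exp(Ah) - I) has the closed form below."""
    c, s = math.cos(h), math.sin(h)
    m11, m12 = s, 1.0 - c
    m21, m22 = c - 1.0, s
    return (y[0] + m11 * fx[0] + m12 * fx[1],
            y[1] + m21 * fx[0] + m22 * fx[1])

def B(x):
    """Coefficient matrix of problem (42)."""
    den = 2.0 + math.cos(x) + math.sin(x)
    return ((0.0, 1.0),
            ((math.cos(x) - math.sin(x)) / den,
             -2.0 * (1.0 + math.sin(x)) / den))

h, x, y = 10.0, 0.0, (2.0, 1.0)
for _ in range(10):            # ten steps of length h = 10 cover [0, 100]
    b = B(x)
    fx = (b[0][0] * y[0] + b[0][1] * y[1],
          b[1][0] * y[0] + b[1][1] * y[1])
    y = adapted_euler_step(y, fx, h)
    x += h
# exact solution at x = 100: (2 + sin(100), cos(100))
```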


Fig. 11. Error in the numerical integration of problem 3: error against number of function evaluations for MATLAB ode113, ode15s, ode23s and EF-I-3.

with x ∈ [0, 1]. Then our system is

$$y'(x) = z(x), \qquad z'(x) = -100^2\,y(x) - 10^{-3}\,y(x)^3, \qquad y(0) = 3, \quad z(0) = 0. \tag{60}$$

This system is conservative, so the solutions satisfy the energy integral

$$E = \frac{y'^2}{2} + \frac{10^4 y^2}{2} + \frac{y^4}{4000} = 45000.02025. \tag{61}$$
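The invariant (61) can be checked directly at the initial data; a one-line sketch:

```python
def energy(y, z):
    """Energy integral (61) of problem 4 (y'' + 100^2 y = -10^-3 y^3)."""
    return z ** 2 / 2.0 + 1e4 * y ** 2 / 2.0 + y ** 4 / 4000.0

# with the initial data y(0) = 3, y'(0) = 0 this gives 45000.02025
```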

At x = 0 the eigenvalues of the jacobian are complex, near 100i and −100i. We give the error in the energy at the point x = 1 in Fig. 12.

Problem 5. A nonlinear stiff problem:

$$y''(x) + 100^2\,y(x) = -10^{-1}\,y(x)^5\,y'(x)^2, \qquad y(0) = 1, \quad y'(0) = 0, \tag{62}$$

with x ∈ [0, 2]. Then our system is

$$y'(x) = z(x), \qquad z'(x) = -100^2\,y(x) - 10^{-1}\,y(x)^5\,z(x)^2, \qquad y(0) = 1, \quad z(0) = 0. \tag{63}$$

Fig. 12. Error in the numerical integration of problem 4: error against number of function evaluations for MEBDF, EF-II-5, EF-I-5, EF-II-4 and EF-I-4.


Fig. 13. Error in the numerical integration of problem 5: error against number of function evaluations for EF-II-6, EF-II-5, EF-I-5, EF-II-4, EF-I-4 and MEBDF.

Again, at x = 0 the eigenvalues of the jacobian are complex, near 100i and −100i. Later the eigenvalues stay very near the imaginary axis, and the methods need a very small step length. We give the error in the energy at the point x = 2 in Fig. 13.

6.4. Other classes of stiff problems

Problem 6. The linear stiff problem (54) (see [25] or [22]). The interval [0, 10] was considered and the step length used was h = 1/10. In Table 3 we give the absolute errors at x = 10 and compare with the third-order results of Vanden Berghe et al. (VB3, see [22]). Here we have used an explicit method of order two; the number of function evaluations is lower, yet the results are clearly better.

Problem 7. A linear stiff problem with eigenvalues λ₁ = −10⁶ + 10⁶i and λ₂ = −10⁶ − 10⁶i:

$$y'(x) = -10^6\,y(x) - 10^6\,z(x) + f_1(x), \qquad z'(x) = 10^6\,y(x) - 10^6\,z(x) + f_2(x), \qquad y(0) = 1, \quad z(0) = 2, \tag{64}$$

where f₁(x) = 5(399999e^{−5x} + 200010 cos(50x) + 200000 sin(50x)) and f₂(x) = −5e^{−5x} + 10⁶ cos(50x) − 1000050 sin(50x). We know that the true solution is y(x) = sin(50x) + e^{−5x}, z(x) = cos(50x) + e^{−5x}. Then we can compare the results obtained with the new methods of order three and with fixed-step RadauIIA at x = 10 and step length h = 0.2; we show the errors in Table 4.

Table 3. Error in the numerical integration of problem 6.

Method | Error y1 | Error y2
VB3 | 2.41 · 10⁻⁴ | 2.41 · 10⁻⁴
Explicit EF-II-2 | 3.88 · 10⁻¹³ | 5.66 · 10⁻¹³
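The right-hand side of problem 7 can be sanity-checked against the stated exact solution. The sketch below assumes the system in the form y′ = −10⁶y − 10⁶z + f₁, z′ = 10⁶y − 10⁶z + f₂, which is the sign pattern consistent with the given f₁, f₂ and eigenvalues:

```python
import math

def residual(x):
    """Residual of problem 7 at the exact solution
    y = sin(50x) + exp(-5x), z = cos(50x) + exp(-5x)."""
    e = math.exp(-5 * x)
    s, c = math.sin(50 * x), math.cos(50 * x)
    y, z = s + e, c + e
    yp, zp = 50 * c - 5 * e, -50 * s - 5 * e
    f1 = 5 * (399999 * e + 200010 * c + 200000 * s)
    f2 = -5 * e + 1e6 * c - 1000050 * s
    r1 = yp - (-1e6 * y - 1e6 * z + f1)
    r2 = zp - (1e6 * y - 1e6 * z + f2)
    return abs(r1) + abs(r2)

# the residual vanishes up to round-off in the 10^6-sized terms
```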

Table 4. Error in the numerical integration of problem 7.

Method | Error | Number of steps
RadauIIA | 2.817 · 10⁻⁵ | 50
Implicit EF-I-3 | 5.551 · 10⁻¹⁷ | 50
Implicit EF-II-3 | 5.551 · 10⁻¹⁷ | 50


Problem 8. A nonlinear stiff problem, well known as Van der Pol's problem (see [17], for example). The eigenvalues of the jacobian at x = 0 are λ₁ = −1491.15 and λ₂ = −8.85292:

$$y'(x) = z(x), \qquad z'(x) = 5000\big(1 - y(x)^2\big)z(x) - y(x), \qquad y(0) = 2, \quad z(0) = 0.66. \tag{65}$$

We have solved this problem on x ∈ [0, 1] with MEBDF and Radau5, with Rtol = Atol = 10⁻¹¹ and Rtol = Atol = 10⁻¹². We obtained the reference solution at x = 1 with MEBDF or Radau5 with Rtol = 10⁻¹⁴, Atol = 10⁻³⁰ (see Table 5).

Problem 9. We integrate the nonlinear IVP (52). The jacobian of the function of this problem at the initial point has the eigenvalues λ₁ = −1004, λ₂ = −1.00199, so we can consider this problem stiff. In Table 6 we compare the results obtained at the point x = 10 with two-stage RadauIIA (constant step length) and EF-I-3, with step length h = 1/10.

Problem 10. We integrate the IVP (53) on the interval [0, 10]. This is a nonlinear second-order ODE of Prothero and Robinson type; the two eigenvalues of the jacobian of the function are 0 (at x = 0), so using the jacobian of the function as parameter would simply give the classical BDF. However, we have shown in this section a good way to find an appropriate parameter. In Table 7 we compare the results obtained at x = 5 with two-stage RadauIIA (constant step length) and EF-I-2 with step length h = 1/20.

Problem 11. We integrate the well-known problem E5 (see [1] or [13]):

$$y_1'(x) = -A\,y_1(x) - B\,y_1(x)y_3(x), \qquad y_2'(x) = A\,y_1(x) - MC\,y_2(x)y_3(x), \qquad y_3'(x) = A\,y_1(x) - B\,y_1(x)y_3(x) - MC\,y_2(x)y_3(x) + C\,y_4(x), \qquad y_4'(x) = B\,y_1(x)y_3(x) - C\,y_4(x), \tag{66}$$
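Whatever the values of the constants A, B, C, M (they are given just after the system), the combination y₂ − y₃ − y₄ is a linear invariant of (66): its derivative cancels identically, which is what allows the reduction to a system of three equations mentioned in the text. A sketch (the function name is ours):

```python
def e5_rhs(y, A=7.89e-10, B=1.1e7, C=1.13e3, M=1e6):
    """Right-hand side of problem E5 (66)."""
    y1, y2, y3, y4 = y
    return (-A * y1 - B * y1 * y3,
            A * y1 - M * C * y2 * y3,
            A * y1 - B * y1 * y3 - M * C * y2 * y3 + C * y4,
            B * y1 * y3 - C * y4)

f = e5_rhs((1.76e-3, 1e-5, 2e-5, 3e-5))   # arbitrary positive state
# d/dx (y2 - y3 - y4) = f[1] - f[2] - f[3] vanishes up to round-off
```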

Table 5. Error in the numerical integration of problem 8.

Method | Error |y_n − y(1)| | Error |z_n − z(1)| | Number of f.e.
Implicit EF-I-3 | <10⁻¹⁴ | <10⁻¹⁸ | 200
MEBDF, tol = 10⁻¹¹ | 1.29 · 10⁻¹³ | 1.22 · 10⁻¹⁷ | 370
MEBDF, tol = 10⁻¹² | <10⁻¹⁴ | <10⁻¹⁸ | 497
Radau5, tol = 10⁻¹¹ | 7.91 · 10⁻¹⁴ | 3.12 · 10⁻¹³ | 945
Radau5, tol = 10⁻¹² | 7.40 · 10⁻¹⁴ | 3.12 · 10⁻¹³ | 1385

Table 6. Error in the numerical integration of problem 9.

Method | Error
EF-I-3 | 0
RadauIIA | 6.1476 · 10⁻⁹

Table 7. Error in the numerical integration of problem 10.

Method | Error
EF-I-2 | 0
RadauIIA | 1.7744 · 10⁻⁶


Fig. 14. Error in the numerical integration of problem 11: we compare error against CPU time (s) for fixed-step RadauIIA and EF-I-3.

with A = 7.89 · 10⁻¹⁰, B = 1.1 · 10⁷, C = 1.13 · 10³, M = 10⁶, and initial conditions (1.76 · 10⁻³, 0, 0, 0). Moreover, since y₂(x) − y₃(x) − y₄(x) = 0, we can reduce it to a system of three equations. This well-known problem is usually integrated on the interval [0, 1000]; however, Alexander (1997) discovered that the solutions possess interesting properties on a longer interval, and Hairer and Wanner (see [17]) suggest output values at x = 10, 10³, 10⁵, 10⁷, …, 10¹³. We integrate the IVP on [0, 10⁷]. The exact solution at x = 10⁷ can be found, for example, at the web page http://www.unige.ch/hairer/testset/testset.html. In this example we have compared the CPU time needed by two-stage RadauIIA (constant step length) and by EF-I-3: both methods were written in the same code, using the same goal precision, on the same machine, an Intel Pentium M 715 at 1.50 GHz. The results are shown in Fig. 14.

7. Conclusions

We have shown different types of exponential fitting methods that integrate (1) exactly when y(x), the solution of the problem, belongs to the space generated by ⟨e^{Ax}, …, x^{k−1}e^{Ax}, 1⟩ or to the space generated by ⟨e^{Ax}, 1, x, …, x^{r−1}⟩. In [28] the A-I-r algorithms are given. The A-I-r methods and the EF-I-r algorithms are really the same methods; the only difference is the presentation of the ODE solver. We have shown that the absolute stability regions of the EF-I-r and A-I-r algorithms are the same (up to a translation), and the numerical errors are the same. The difference between the two algorithms lies in the 0-stability regions, as we showed in the section on 0-stability: for example, R⁻ belongs to the region where the explicit EF-I-2 is 0-stable (see Fig. 1), but in [42] we demonstrated that the explicit A-I-2 algorithm is 0-unstable.
This means that, for example, the method is absolutely stable for the pair (λh, 0) with λh < 0, but it is not absolutely stable for the pair (λh, λh) with λh < 0. We show in Fig. 15 the absolute stability regions of the explicit EF-I-2 with λh = −0.01 and λh = −100, where this fact can be observed. We can conclude that the explicit EF-I-2 algorithm, taken with parameter λh ≠ 0, is not stable on IVPs where the solution y(x) of the perturbed problems

$$y'(x) = f\big(x, y(x)\big), \qquad y(x_0) = y_0 + \varepsilon,$$

depends on e^{λx}, while the method is stable if y(x) depends only on polynomials (this statement is explained in more detail, with more examples, in [29]). However, we have shown that the explicit EF-II-2 has good stability properties (see [28], too) and can obtain good results. Moreover, the implicit methods work very well up to order six (inclusive). We have seen that these exponential fitting methods are able to integrate IVPs where the jacobian of the function (f_y(x, y(x))) has a positive eigenvalue (while the solution of the ODE contains no positive exponentials), and stiff oscillatory problems,


Fig. 15. Absolute stability regions (in grey) of the explicit EF-I-2 method. The horizontal and vertical axes represent Re(hλ) and Im(hλ), respectively. (a) λh = −0.01, (b) λh = −100.

that most traditional methods have serious difficulties in solving. We only need to calculate Ah, the parameter of the method, with eigenvalues ''near'' those governing the solution of the problem.

References

[1] R.C. Aiken, Stiff Computation, Oxford University Press, New York, 1985.
[2] R. Alexander, Diagonally implicit Runge–Kutta methods for stiff ODEs, SIAM J. Numer. Anal. 14 (1977) 1006–1021.
[3] D. Barton, On Taylor series and stiff equations, ACM Trans. Math. Software 6 (1980) 280–294.
[4] N.N. Bogoliubov, Y.A. Mitropolsky, Asymptotic Methods in the Theory of Non-linear Oscillations, Gordon and Breach Science Publishers, Glasgow, 1985.
[5] J.R. Cash, S. Considine, An MEBDF code for stiff initial value problems, ACM Trans. Math. Software 18 (1992) 142–158.
[6] J.R. Cash, On the exponential fitting of composite multiderivative linear multistep methods, SIAM J. Numer. Anal. 18 (5) (1981) 808–821.
[7] J.R. Cash, Diagonally implicit Runge–Kutta formulae with error estimates, J. IMA 24 (1979) 293–301.
[8] J.P. Coleman, L.Gr. Ixaru, P-stability and exponential-fitting methods for y'' = f(x, y), IMA J. Numer. Anal. 16 (1996) 179–199.
[9] G.F. Corliss, A. Griewank, P. Henneberger, G. Kirlinger, F.A. Potra, H.J. Stetter, High-order ODE solvers via automatic differentiation and rational prediction, in: Numerical Analysis and its Applications (Rousse, 1996), Springer, Berlin, 1997, pp. 114–125.
[10] M. Crouzeix, Sur l'approximation des équations différentielles opérationnelles linéaires par des méthodes de Runge–Kutta, Ph.D. Thesis, University of Paris VI, Paris, 1975.
[11] K. Dekker, J.G. Verwer, Stability of Runge–Kutta Methods for Stiff Nonlinear Differential Equations, Elsevier Science Publishers, Amsterdam, 1984.
[12] W.S. Edwards, L.S. Tuckerman, R.A. Friesner, D.C. Sorensen, Krylov methods for the incompressible Navier–Stokes equations, J. Comput. Phys. 110 (1994) 82–102.
[13] W.H. Enright, T.E. Hull, B. Lindberg, Comparing numerical methods for stiff systems of ODEs, BIT 15 (1975) 10–48.
[14] R.A. Friesner, L.S. Tuckerman, B.C. Dornblaser, T.C. Russo, A method for exponential propagation of large systems of stiff nonlinear differential equations, J. Sci. Comput. 4 (1989) 327–354.
[15] E. Gallopoulos, Y. Saad, Efficient solution of parabolic equations by Krylov approximation methods, SIAM J. Sci. Stat. Comput. 13 (5) (1992) 1236–1264.
[16] N. Guzel, M. Bayram, On the numerical solution of stiff systems, Appl. Math. Comput. 170 (1) (2005) 230–236.
[17] E. Hairer, G. Wanner, Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems, Springer-Verlag, Berlin, 1991.
[18] A.C. Hindmarsh, LSODE and LSODI, two new initial value ordinary differential equation solvers, ACM-SIGNUM Newslett. 15 (1980) 10–11.
[19] M. Hochbruck, Ch. Lubich, H. Selhofer, Exponential integrators for large systems of differential equations, SIAM J. Sci. Comput. 19 (1998) 1552–1574.
[20] L.W. Jackson, S.K. Kenue, A fourth order exponentially fitted method, SIAM J. Numer. Anal. 11 (5) (1974) 965–978.
[21] F. Jalbert, R.V.M. Zahar, A highly precise Taylor series method for stiff ODEs, in: Proceedings of the Fourteenth Manitoba Conference on Numerical Mathematics and Computing (Winnipeg, Man.), vol. 46, 1985, pp. 347–358.
[22] L.Gr. Ixaru, G. Vanden Berghe, H. De Meyer, Frequency evaluation in exponential fitting multistep algorithms for ODEs, J. Comput. Appl. Math. 140 (2002) 423–434.
[23] G. Kirlinger, G.F. Corliss, On implicit Taylor series methods for stiff ODEs, in: Computer Arithmetic and Enclosure Methods (Oldenburg, 1991), North-Holland, Amsterdam, 1992, pp. 371–379.
[24] R. Kosloff, Propagation methods for quantum molecular dynamics, Annu. Rev. Phys. Chem. 45 (1994) 145–178.
[25] J.D. Lambert, Numerical Methods for Ordinary Differential Systems: The Initial Value Problem, Wiley, Chichester, 1991.
[26] W. Liniger, R.A. Willoughby, Efficient numerical integration of ordinary differential equations, Res. Rep. RC 1970, IBM, Yorktown Heights, NY, 1967.
[27] J. Martín-Vaquero, J. Vigo-Aguiar, Adapted BDF algorithms: higher-order methods and their stability, J. Sci. Comput., submitted for publication.
[28] J. Martín-Vaquero, J. Vigo-Aguiar, Exponential fitting BDF algorithms: explicit and implicit 0-stable methods, J. Comput. Appl. Math. 192 (1) (2006) 100–113.
[29] J. Martín-Vaquero, Métodos Exponential Fitting y Adaptados para problemas stiff, Ph.D. Thesis, University Rey Juan Carlos (in Spanish), 2006.
[30] C.B. Moler, C.F. Van Loan, Nineteen dubious ways to compute the exponential of a matrix, SIAM Rev. 20 (4) (1978) 801–836.
[31] A. Nauts, R.E. Wyatt, New approach to many-state quantum dynamics: the recursively residue-generation method, Phys. Rev. Lett. 51 (1983) 2238–2241.
[32] S.P. Nørsett, Semi explicit Runge–Kutta methods, Mathematics and Computing Report No. 6/74, University of Trondheim, 1974.
[33] T.J. Park, J.C. Light, Unitary quantum time evolution by iterative Lanczos reduction, J. Chem. Phys. 85 (1986) 5870–5876.
[34] L.R. Petzold, A description of DASSL: a differential/algebraic system solver, in: R.S. Stepleman et al. (Eds.), IMACS Transactions on Scientific Computing, North-Holland, Amsterdam, 1983, pp. 65–68.
[35] L.R. Petzold, An efficient numerical method for highly oscillatory ordinary differential equations, SIAM J. Numer. Anal. 18 (1981) 455–479.
[36] R.P. Ratowsky, J.A. Fleck Jr., Accurate numerical solution of the Helmholtz equation by iterative Lanczos reduction, Opt. Lett. 16 (1991) 787–789.
[37] L.F. Shampine, M.W. Reichelt, The MATLAB ODE Suite, SIAM J. Sci. Comput. 18 (1997) 1–22.
[38] R.B. Sidje, Expokit: a software package for computing matrix exponentials, ACM Trans. Math. Software 24 (1) (1998) 130–156.
[39] M.N. Spijker, Stiffness in numerical initial-value problems, J. Comput. Appl. Math. 72 (1996) 393–406.
[40] H. Van de Vyver, Frequency evaluation for exponentially fitted Runge–Kutta methods, J. Comput. Appl. Math. 184 (2) (2005) 442–463.
[41] J. Vigo-Aguiar, J.M. Ferrándiz, A general procedure for the adaptation of multistep algorithms to the integration of oscillatory problems, SIAM J. Numer. Anal. 35 (4) (1998) 1684–1708.
[42] J. Vigo-Aguiar, J. Martín-Vaquero, R. Criado, On the stability of exponential fitting BDF algorithms, J. Comput. Appl. Math. 175 (1) (2005) 183–194.
[43] J. Vigo-Aguiar, J. Martín-Vaquero, B. Wade, Adapted BDF algorithms applied to parabolic problems, Numer. Methods Partial Differential Equations 23 (1) (2007) 350–365.