Local Error and Variable Order Adams Codes*

APPLIED MATHEMATICS AND COMPUTATION 1, 47 (1975)

L. F. Shampine and M. K. Gordon
Sandia Laboratories, Albuquerque, New Mexico 87115

Transmitted by M. Scott

ABSTRACT

Error estimates needed for the selection of order in Adams codes are studied in the presence of propagated errors. A new derivation of the Nordsieck form of the Adams methods reveals that codes based on this form have changed order incorrectly and shows how to do it properly.

1. INTRODUCTION

Efficient codes for the solution of ordinary differential equations vary their step size and order to control their estimated errors at each step. Early studies of the error estimators presumed that the memorized values used by the Adams methods were "exact." Henrici [1, pp. 256-258] considers the estimation of truncation error when a constant step size and fixed order are used, while taking into account the propagated error. We are interested in what happens when the order is varied. The concept of local error will be defined and related to truncation error. To vary the order for the sake of efficiency, it is necessary that when working at order k one be able to estimate the error that would be made in a step if the step were to be taken at orders k-1, k, and k+1. We shall see that this can be done, though the results depend on whether or not one does local extrapolation. Two outstanding codes, Krogh's DVDQ [2] and Gear's DIFSUB [3], are based on extrapolated PECE methods and Adams-Moulton (iteration to completion) methods without extrapolation, respectively. We

*Work partially supported by the U. S. Atomic Energy Commission.
(c) American Elsevier Publishing Company, Inc., 1975


shall consider only these two representative ways of using the Adams methods. With extrapolation we find that the local error and truncation error are asymptotically equivalent at all three orders. We justify practical estimators of the local error, with the important, but hardly surprising, observation that the estimate needed for raising the order is the most delicate and requires special action. Without extrapolation the local error and truncation error are asymptotically equivalent for orders k-1 and k, but not for order k+1. We show Gear's heuristic estimators are correct, with the qualification that the estimator for order k+1 estimates truncation error.

The model situation for changing order that we treat ought to be correctly handled by any reasonable scheme, and it provides the illumination just described. Still, it leaves much to be desired in describing what happens in real codes, especially when its asymptotic basis cannot be presumed reasonably valid. Here, as in all phases of the study of real codes for solving differential equations, general theory, analysis of model problems, and careful experimentation must complement each other.

The forms of the Adams methods being used in DVDQ and DIFSUB were chosen to facilitate the estimation of errors and the changing of order (as well as other related tasks). For our study it is necessary to thoroughly understand these forms: the Newton (difference) form in DVDQ and the Taylor (Nordsieck) form in DIFSUB. The difference form is already familiar, so we need only review essential facts. Gear developed the Taylor form in his text by transforming from the Lagrangian (ordinate) form. We present a new, direct derivation which has some immediate practical benefits, as well as being of theoretical importance. Apparently Gear's code should allow higher order methods, and we describe a simple way to obtain suitable coefficients. The analysis reveals that Gear does not use an Adams method when raising or lowering the order in his code, and shows how to do this.

2. FORMS OF THE ADAMS METHODS

Using Adams methods in a variable order code depends on the availability of formulations making the change in order easy. Two such formulations being used today are the Taylor (Nordsieck) form and the Newton (difference) form. A good understanding of both formulations is essential for the study of error estimation in the next section. Gear developed the Taylor form in his text by transforming from the Lagrangian form. We present here a new, simple derivation from first principles which leads to a better understanding of the form. In addition to its application in the next section, our development shows an easy way to calculate the coefficients for all orders. We also observe that Gear does not use Adams methods when changing the order (the same is true of changing the step size); we discuss

what he does instead, and how to modify his code to implement Adams methods in this situation. The difference form is already familiar, so we have only to review those facts needed in the next section. The Newton and Taylor forms are quite similar as we view them, and we use this fact at one point to see how to proceed in our development of the latter. The Adams methods in any form are based on the identity

    y(x_n) = y(x_{n-1}) + \int_{x_{n-1}}^{x_n} y'(t) dt

for the solution y(x) of y'(x) = f(x, y(x)). The Adams-Bashforth method of order k is based on a polynomial P(t) of degree k-1 interpolating to the values f_{n-i} = f(x_{n-i}, y_{n-i}) for i = 1,...,k. We define the Adams-Bashforth polynomial of order k at x_{n-1} by

    P_{k,n-1}(x) = y_{n-1} + \int_{x_{n-1}}^{x} P(t) dt,                                  (2.1)

and the new computed value is y_n = P_{k,n-1}(x_n). The Adams-Moulton method of order k defines y_n implicitly by requiring that a polynomial P*(t) interpolate to f_{n-i} for i = 0,1,...,k-1. This means that if we define the Adams-Moulton polynomial of order k at x_{n-1} by

    𝒫_{k,n-1}(x) = y_{n-1} + \int_{x_{n-1}}^{x} P*(t) dt                                  (2.2)

and y_n = 𝒫_{k,n-1}(x_n), then we have the requirement that

    𝒫'_{k,n-1}(x_n) = f(x_n, y_n).

We are concerned with the Newton and Taylor forms for computation, but we recall a third form which will be convenient for the analysis of the next section. The Lagrangian form is based on the Lagrange or ordinate form of the interpolating polynomials, and leads to the familiar Adams-Bashforth formula

    y_n = y_{n-1} + h \sum_{i=1}^{k} \beta_{k,i} f_{n-i}                                  (2.3)

and the Adams-Moulton formula

    y_n = y_{n-1} + h \sum_{i=0}^{k-1} \beta*_{k,i} f_{n-i}.                              (2.4)


The Newton or difference form uses Newton's form of the interpolating polynomials. This leads to the Adams-Bashforth formula

    y_n = y_{n-1} + h \sum_{i=0}^{k-1} \gamma_i \nabla^i f_{n-1},                         (2.5)

where the backward differences \nabla^i f_{n-1} have their usual meaning and

    \gamma_i = (-1)^i \int_0^1 \binom{-s}{i} ds.

An important observation is that the coefficient \gamma_i does not depend on the order k. A similar treatment of the Adams-Moulton formula leads to

    y_n = y_{n-1} + h \sum_{i=0}^{k-1} \gamma*_i \nabla^i f_n,                            (2.6)

where

    \gamma*_i = (-1)^i \int_0^1 \binom{1-s}{i} ds.

It is easy to see that \gamma_0 = \gamma*_0 = 1 and, using an identity for the binomial coefficients, \gamma*_i = \gamma_i - \gamma_{i-1}, i >= 1. Either set of coefficients can be computed by a simple recurrence [1].

The Adams-Moulton value y_n is computed iteratively by first forming a predicted value y_n^P from the Adams-Bashforth formula (2.5). This value is corrected by forming f(x_n, y_n^P), using it and the differences \nabla^i f_{n-1} to compute approximate values \nabla^i_P f_n, and then using the formula (2.6). We shall use the notation \nabla^i_P f_n to mean the backward difference of order i formed from f(x_n, y_n^P) and f_{n-1}, f_{n-2}, .... One then computes f(x_n, y_n) with this corrected y_n; he could form new differences and repeat. Krogh's DVDQ is a PECE code, meaning that it accepts the first corrected value y_n and f_n = f(x_n, y_n). Gear's DIFSUB (in effect) iterates to completion, that is, until y_n satisfies an implicit equation like (2.6) exactly. Either way it is easy to prove by a little manipulation the relation between the Adams-Bashforth and Adams-Moulton values

    y_n = y_n^P + h \gamma_{k-1} \nabla^k_P f_n.                                          (2.7)
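The coefficients \gamma_i and \gamma*_i and the identity \gamma*_i = \gamma_i - \gamma_{i-1} are easy to check with exact rational arithmetic. The sketch below is ours, not from the paper (the helper names are invented); it evaluates the two defining integrals by expanding the binomial products as polynomials in s:

```python
# Exact computation of gamma_i and gamma*_i from their integral definitions,
# and a check of the identity gamma*_i = gamma_i - gamma_{i-1}.
# Illustrative sketch; all names are ours.
import math
from fractions import Fraction

def poly_mul(p, q):
    """multiply polynomials given as coefficient lists, lowest degree first"""
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for a, ca in enumerate(p):
        for b, cb in enumerate(q):
            r[a + b] += ca * cb
    return r

def int01(p):
    """integral over [0, 1] of a polynomial in s"""
    return sum(c / (j + 1) for j, c in enumerate(p))

def gamma(i):
    # (-1)^i C(-s, i) = s(s+1)...(s+i-1)/i!
    p = [Fraction(1)]
    for j in range(i):
        p = poly_mul(p, [Fraction(j), Fraction(1)])   # (s + j)
    return int01(p) / math.factorial(i)

def gamma_star(i):
    # (-1)^i C(1-s, i) = (s-1)s(s+1)...(s+i-2)/i!
    if i == 0:
        return Fraction(1)
    p = [Fraction(-1), Fraction(1)]                   # (s - 1)
    for j in range(i - 1):
        p = poly_mul(p, [Fraction(j), Fraction(1)])   # (s + j)
    return int01(p) / math.factorial(i)

g  = [gamma(i) for i in range(6)]        # 1, 1/2, 5/12, 3/8, 251/720, 95/288
gs = [gamma_star(i) for i in range(6)]   # 1, -1/2, -1/12, -1/24, -19/720, -3/160
assert all(gs[i] == g[i] - g[i - 1] for i in range(1, 6))
print([str(x) for x in g], [str(x) for x in gs])
```

The identity is what makes the relation (2.7) between the two formulas a one-term correction.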


In difference form one predicts using (2.5) and iterates with (2.7). Not only is (2.7) convenient for iteration, but it also facilitates error estimates. Notice the great simplicity of the coefficients and the ease of changing the order. After accepting y_n and f_n, one computes \nabla^i f_n, i = 0,1,...,k. To lower the order to k-1 or raise it to k+1 in the next step using (2.5) is trivial.

The Taylor form of the Adams methods uses Taylor series expansions of the polynomials. Thus the Adams-Bashforth polynomial is to be written

    P_{k,n-1}(x) = y_{n-1} + (x - x_{n-1}) y'_{n-1} + ... + (x - x_{n-1})^k y^{(k)}_{n-1}/k!   (2.8)

and

    y_n^P = P_{k,n-1}(x_n) = y_{n-1} + h y'_{n-1} + ... + h^k y^{(k)}_{n-1}/k!.                (2.9)

A provocative notation y^{(m)}_{n-1} is used which is consistent with the requirement that y'_{n-1} = f_{n-1}. A code of this type stores the reduced or scaled derivatives h^m y^{(m)}_{n-1}/m!, which are analogous to the differences h \nabla^m f_{n-1}. The formula (2.9) is the predictor. We want to obtain the Adams-Moulton polynomial in a form analogous to (2.7). It is easy to verify that the difference 𝒫_{k,n-1}(x) - P_{k,n-1}(x) vanishes at x_{n-1} and that its derivative vanishes at x_{n-1},...,x_{n-k+1},

and so we seek

    𝒫_{k,n-1}(x) = P_{k,n-1}(x) + Q_{k,n-1}(x) [h f(x_n, y_n) - h y'_n(P)],

where h y'_n(P) = h P'_{k,n-1}(x_n) is the predicted first scaled derivative, with a polynomial Q_{k,n-1}(x) of degree k. The interpolation conditions

    𝒫_{k,n-1}(x_{n-1}) = y_{n-1},   𝒫'_{k,n-1}(x_{n-i}) = f_{n-i},   i = 0,1,...,k-1,

uniquely determine Q_{k,n-1}(x) as the polynomial with

    Q'_{k,n-1}(x) = (x - x_{n-1})(x - x_{n-2}) ... (x - x_{n-k+1}) / ((k-1)! h^k),   Q_{k,n-1}(x_{n-1}) = 0.

Then the Taylor series expansion of 𝒫_{k,n-1}(x) about x_n follows from those of P_{k,n-1}(x) and Q_{k,n-1}(x). The predicted scaled derivatives

    h^m y_n^{(m)}(P)/m! = h^m P^{(m)}_{k,n-1}(x_n)/m!

can be evaluated efficiently by a device due to Gear [3, p. 149 ff.]. The coefficients

    c_{k,m} = h^m Q^{(m)}_{k,n-1}(x_n)/m! = (1/((k-1)! m!)) (d^{m-1}/ds^{m-1}) [s(s+1)...(s+k-2)] |_{s=1}

are constants. A very practical way to generate them is to use the initial values

    c_{2,2} = 1/2,   c_{2,m} = 0 for m = 3,4,...,

and the easily proved recursions

    c_{k,2} = c_{k-1,2} + 1/(2(k-1)),   k = 3,4,...,

    c_{k,m} = c_{k-1,m} + ((m-1)/(m(k-1))) c_{k-1,m-1},   m = 3,4,....
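The recursions can be checked against the derivative formula for c_{k,m} directly. The following sketch is ours, with invented helper names; it builds the table both ways in exact rational arithmetic:

```python
# The c_{k,m} recursion checked against the direct formula
# c_{k,m} = (1/((k-1)! m!)) d^{m-1}/ds^{m-1}[ s(s+1)...(s+k-2) ] at s = 1.
# Illustrative sketch; exact rational arithmetic, names ours.
import math
from fractions import Fraction

def poly_mul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for a, ca in enumerate(p):
        for b, cb in enumerate(q):
            r[a + b] += ca * cb
    return r

def c_direct(k, m):
    p = [Fraction(0), Fraction(1)]                       # s
    for j in range(1, k - 1):
        p = poly_mul(p, [Fraction(j), Fraction(1)])      # (s + j)
    for _ in range(m - 1):                               # differentiate m-1 times
        p = [Fraction(j) * p[j] for j in range(1, len(p))]
    return sum(p) / (math.factorial(k - 1) * math.factorial(m))  # value at s = 1

# build the table from the stated initial values and recursions
K = 6
c = {(2, 2): Fraction(1, 2)}
for k in range(3, K + 1):
    c[(k, 2)] = c[(k - 1, 2)] + Fraction(1, 2 * (k - 1))
    for m in range(3, k + 1):
        c[(k, m)] = c.get((k - 1, m), Fraction(0)) + \
            Fraction(m - 1, m * (k - 1)) * c[(k - 1, m - 1)]

for k in range(2, K + 1):
    for m in range(2, k + 1):
        assert c[(k, m)] == c_direct(k, m)
    assert c[(k, k)] == Fraction(1, math.factorial(k))   # leading coefficient
print("recursions match the direct formula up to k =", K)
```

The final assertion shows c_{k,k} = 1/k!, a fact used below when comparing Gear's order changes with the Adams method.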

The interpolation conditions require

    Q_{k,n-1}(x_n) = \gamma_{k-1},   h Q'_{k,n-1}(x_n) = 1,

so the actual iteration is carried out by

    y_n = y_n^P + \gamma_{k-1} [h f(x_n, y_n^P) - h y'_n(P)],
    h y'_n = h f(x_n, y_n),

etc., just as with differences. One never needs the scaled derivatives of order greater than one of the Adams-Moulton polynomial; what he needs are the scaled derivatives of the Adams-Bashforth polynomial for the next step,

P_{k,n}(x). The interpolation conditions make it obvious that

    P_{k,n}(x) = 𝒫_{k,n-1}(x);

hence the scaled derivatives for the Adams-Moulton polynomial expanded about x_n are exactly those of the Adams-Bashforth polynomial for the next step. As a consequence we need not distinguish between the scaled derivatives for the two polynomials if we do not change order, and Gear does not.

The preceding formula

    h^m y_n^{(m)}/m! = h^m y_n^{(m)}(P)/m! + c_{k,m} [h f_n - h y'_n(P)]

is just what is needed for advancing a step.

When the order is changed, the convenient relation between the Adams-Moulton and Adams-Bashforth polynomials is impossible because their degrees are different. Gear proceeds in his code as if differences were being used; however, the consequences are not the same in the Nordsieck form.

The Adams polynomials are uniquely determined by the interpolation requirements, and we use these requirements to see how to change the order. In the case of a failed step the order remains the same or is lowered by one. If the order is the same, the predictor P_{k,n-1}(x) is retained. If the order is lowered, we need the scaled derivatives of P_{k-1,n-1}(x). Gear simply drops the term h^k y^{(k)}_{n-1}/k!, but this is not enough, since other terms are also affected. Because of the interpolation conditions the values y_{n-1} and h y'_{n-1} must be the first two terms of the expansion of P_{k-1,n-1}(x), so we need concern ourselves only with m = 2,...,k-1. It is relatively straightforward to prove

    P_{k,n-1}(x) = P_{k-1,n-1}(x) + k! Q_{k,n-1}(x) (h^k y^{(k)}_{n-1}/k!)

and then

    h^m P^{(m)}_{k-1,n-1}(x_{n-1})/m! = h^m y^{(m)}_{n-1}/m! + k! (c_{k-1,m} - c_{k,m}) (h^k y^{(k)}_{n-1}/k!).   (2.10)

For a failed step in which the order is lowered, Gear uses just the first term, and so does not use an Adams method.
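The relation (2.10) is easy to confirm numerically: construct the two Adams-Bashforth polynomials from sample data and compare their scaled derivatives. The sketch below is ours (k, h, and the data are arbitrary choices, and the helper names are invented):

```python
# Numerical check of (2.10): lowering the order changes every scaled
# derivative of the Adams-Bashforth polynomial, not just the highest one.
# Illustrative sketch; k, h, and the sample data are arbitrary.
import math
import numpy as np

k, h, x0 = 3, 0.1, 0.0                       # x0 plays the role of x_{n-1}
nodes = x0 - h * np.arange(k)                # x_{n-1}, x_{n-2}, ..., x_{n-k+... }
fv = np.cos(nodes)                           # memorized f_{n-i} (arbitrary smooth data)
ynm1 = 1.7

def ab_poly(order):
    """P_{order,n-1}: P(x_{n-1}) = y_{n-1}; P' interpolates the newest `order` f's"""
    dcoef = np.polyfit(nodes[:order], fv[:order], order - 1)
    p = np.polyint(dcoef)
    p[-1] = ynm1 - np.polyval(p, x0)
    return p

def scaled(p, x):
    """h^m P^(m)(x)/m! for m = 0..deg"""
    out, q = [], np.poly1d(p)
    for m in range(len(p)):
        out.append(h**m * q(x) / math.factorial(m))
        q = q.deriv()
    return out

# c_{k,m} from the recursions in the text
c = {(2, 2): 0.5}
for kk in range(3, k + 1):
    c[(kk, 2)] = c[(kk - 1, 2)] + 1.0 / (2 * (kk - 1))
    for m in range(3, kk + 1):
        c[(kk, m)] = c.get((kk - 1, m), 0.0) + (m - 1) / (m * (kk - 1)) * c[(kk - 1, m - 1)]

dk, dk1 = scaled(ab_poly(k), x0), scaled(ab_poly(k - 1), x0)
top = dk[k]                                  # h^k y^{(k)}_{n-1}/k!
for m in range(2, k):
    rhs = dk[m] + math.factorial(k) * (c[(k - 1, m)] - c[(k, m)]) * top
    assert abs(dk1[m] - rhs) < 1e-12         # (2.10) holds
    assert abs(dk1[m] - dk[m]) > 1e-7        # dropping only `top` is not enough
print("(2.10) verified for m = 2,...,k-1")
```

The second assertion makes the point of the passage: dropping only the highest scaled derivative leaves the remaining ones unchanged, which is not the Adams result.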


Let us now suppose the step is a success. Again because of the interpolation conditions, the values y_n and h y'_n must be the first two terms of the expansion for the Adams-Bashforth polynomial for the next step, so we need concern ourselves only with m >= 2. The proper relations can be immediately verified from the interpolation conditions. They lead to

    h^m y_n^{(m)}/m! = h^m y_n^{(m)}(P)/m! + term,

where

    Order k:   m = 2,...,k,     term = c_{k,m} [h f_n - h y'_n(P)];                       (2.11)

    Order k+1: m = 2,...,k+1,   term = c_{k+1,m} [h f_n - h y'_n(P)];                     (2.12)

    Order k-1: m = 2,...,k-1,   term = c_{k-1,m} [h f_n - h y'_n(P)]
                                       + k! (c_{k-1,m} - c_{k,m}) (h^k y^{(k)}_{n-1}/k!). (2.13)

In his code Gear always computes the scaled derivatives as though he were continuing at order k, that is, using (2.11). If he lowers the order to k-1, he simply drops the term h^k y_n^{(k)}/k!. If he raises the order, he approximates the additional reduced derivative by a difference quotient:

    h^{k+1} y_n^{(k+1)}/(k+1)! ~ (1/(k+1)) [h^k y_n^{(k)}/k! - h^k y^{(k)}_{n-1}/k!].


This approximation happens to be the same value given by (2.12). Since P_{k,n-1}(x) is of degree k, y_n^{(k+1)}(P) = 0, and since h^{k+1} Q^{(k+1)}_{k+1,n-1}(x) = 1, we see the value from (2.12) is

    h^{k+1} y_n^{(k+1)}/(k+1)! = (1/(k+1)!) [h f_n - h y'_n(P)].

Because the kth derivative of P_{k,n-1}(x) is constant, y_n^{(k)}(P) = y^{(k)}_{n-1}. Then from (2.11), for m = k we find

    h^k y_n^{(k)}/k! - h^k y^{(k)}_{n-1}/k! = (1/k!) [h f_n - h y'_n(P)],

whence Gear's approach is seen to give the same scaled derivative as the Adams method in this case. On the other hand, the formulas (2.11), (2.12), (2.13) show the other scaled derivatives of orders m >= 2 are not those of the Adams method when either raising or lowering the order. What Gear does on changing order has the merit of simplicity,

but the effect of changing methods is unclear. It is known that there can be stability difficulties due to changing step size by interpolation, as he does, but not if one uses the appropriate Adams formula. Unconditional stability has been established for the Adams formulas when changing order, too [4]. We know of no examples of instability for Gear's code when changing order, but its stability has not been proven without assumptions about how often changes take place [5]. Gear iterates to completion, so the different predictor does not affect the new values for y and y' at all; but because he uses (2.11) to advance, the different values of the predictor propagate into the corrected values for the scaled derivatives with m >= 2.

Equations (2.11), (2.12), (2.13) are written to facilitate computation. One can delay the evaluation of the scaled derivatives of orders m >= 2 until he decides what order to use for the next step. In the event of a change in order one should bring in the coefficients c_{k,m} for the new order. If he then computes the scaled derivatives, he will have correctly dealt with orders k and k+1. To lower the order, he must make another pass over the scaled derivatives to subtract the extra term in (2.13), or use (2.10) for a failed step.

There are advantages to using an Adams-Moulton formula of order one higher than the Adams-Bashforth predictor. This is feasible because the extra value f_{n-k} for the corrector 𝒫_{k+1,n-1}(x) is already needed for the predictor P_{k,n-1}(x).

It is easy to show that

    y_n = y_n^P + h \gamma_k \nabla^k_P f_n,                                              (2.14)


which means that to work in this way with the difference form, one need only use a different constant than in the previously described way (2.7). For the Taylor form it is easy to prove that

    𝒫_{k+1,n-1}(x) = P_{k,n-1}(x) + Q_{k+1,n-1}(x) [h f(x_n, y_n) - h P'_{k,n-1}(x_n)].

The actual computation is carried out by

    y_n = y_n^P + \gamma_k [h f(x_n, y_n^P) - h y'_n(P)],
    h y'_n = h f(x_n, y_n),

etc., and so differs from not extrapolating only by using \gamma_k instead of \gamma_{k-1}. The formulas (2.10), (2.11), (2.12), (2.13) for the scaled derivatives have nothing to do with the choice of y_n and f_n, being merely statements about interpolation, so they apply in the same way.
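Since the Taylor form is only a re-expansion of the same polynomials, a PECE step carried out with the Taylor-form quantities must reproduce the difference-form step exactly. A sketch of this check (ours; the test data and function are arbitrary choices):

```python
# One PECE step computed in both the difference (Newton) form and the
# Taylor (Nordsieck) form; the two agree to rounding error.
# Illustrative sketch; k = 3, so gamma_{k-1} = gamma_2 = 5/12.
import numpy as np

k, h, xn = 3, 0.1, 0.5
nodes = xn - h * np.arange(1, k + 1)            # x_{n-1}, x_{n-2}, x_{n-3}
fvals = np.sin(nodes)                           # memorized f_{n-1}, f_{n-2}, f_{n-3}
ynm1, gam = 0.42, [1.0, 0.5, 5.0 / 12.0]        # y_{n-1}; gamma_0, gamma_1, gamma_2
f = lambda x, y: x - y                          # arbitrary smooth test function

# --- difference (Newton) form ---
d = [fvals[0],
     fvals[0] - fvals[1],
     fvals[0] - 2 * fvals[1] + fvals[2]]        # backward differences of f_{n-1}
ypD = ynm1 + h * sum(g * di for g, di in zip(gam, d))   # predictor (2.5)
fnP = f(xn, ypD)
gradkP = fnP - sum(d)                           # grad^k_P f_n = f^P - sum_i grad^i f_{n-1}
ynD = ypD + h * gam[k - 1] * gradkP             # correction (2.7)

# --- Taylor (Nordsieck) form ---
dcoef = np.polyfit(nodes, fvals, k - 1)         # P'(x), the interpolant to f
p = np.polyint(dcoef)
p[-1] = ynm1 - np.polyval(p, nodes[0])          # fix P(x_{n-1}) = y_{n-1}
ypT = np.polyval(p, xn)                         # predictor (2.9)
ynT = ypT + gam[k - 1] * (h * f(xn, ypT) - h * np.polyval(dcoef, xn))

assert abs(ypD - ypT) < 1e-12 and abs(ynD - ynT) < 1e-12
print("difference and Taylor forms agree:", ynD)
```

The key identity exercised here is h f(x_n, y_n^P) - h y'_n(P) = h \nabla^k_P f_n, which ties the Taylor-form correction to (2.7).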

3. LOCAL ERROR, TRUNCATION ERROR, AND THEIR ESTIMATION

Suppose we solve

    y'(x) = f(x, y(x)),                                                                   (3.1)
    y(a) = y_a,                                                                           (3.2)

on an interval [a, b] by a method which computes approximations y_j to y(x_j) for x_j = a + jh, j = 0,1,.... We define the local error of the method in stepping from x_{n-1} to x_n to be

    le = u_{n-1}(x_n) - y_n,                                                              (3.3)

where the functions u_{n-1}(x) are defined for each x_{n-1} by

    u'_{n-1}(x) = f(x, u_{n-1}(x)),   u_{n-1}(x_{n-1}) = y_{n-1}.                         (3.4)

The local error arises from two sources: the error in approximating u_{n-1}(x_n) by the Adams formulas, and the error in the memorized values y_{n-i}. We call the discrepancy in approximating the local solution by the discrete variable formulas the local discretization error. To be more specific, suppose the Adams-Bashforth formula of order k is used. Then the local discretization


error \tau_n^P is defined by

    u_{n-1}(x_n) = u_{n-1}(x_{n-1}) + h \sum_{i=1}^{k} \beta_{k,i} f(x_{n-i}, u_{n-1}(x_{n-i})) + \tau_n^P,   (3.6)

which, if u_{n-1}(x) is sufficiently smooth, has the form [1]

    \tau_n^P = \gamma_k h^{k+1} u^{(k+1)}_{n-1}(x_{n-1}) + O(h^{k+2}).                    (3.7)

The local error may now be written

    le = \tau_n^P + h \sum_{i=1}^{k} \beta_{k,i} [f(x_{n-i}, u_{n-1}(x_{n-i})) - f(x_{n-i}, y_{n-i})].

Similarly, the local discretization error \tau_n of the Adams-Moulton formula of order k is defined by

    u_{n-1}(x_n) = u_{n-1}(x_{n-1}) + h \sum_{i=0}^{k-1} \beta*_{k,i} f(x_{n-i}, u_{n-1}(x_{n-i})) + \tau_n,

where

    \tau_n = \gamma*_k h^{k+1} u^{(k+1)}_{n-1}(x_{n-1}) + O(h^{k+2}).                     (3.8)

The familiar (local) truncation error is the discretization error of the formula as applied to the global solution y(x). Thus truncation error refers to the solution of (3.1), (3.2), and not to a local solution at all. If there are no memorized values, as at order one, the local error is simply the local discretization error. More generally, if the memorized values are exact, u_{n-1}(x_{n-i}) = y_{n-i}, the same is true. Early heuristic discussions presumed exact memorized values and did not distinguish the local solution u_{n-1}(x) and the global solution y(x), in which case the local error is the truncation error. We shall show that one can normally ignore the propagated error and that the three concepts are then asymptotically equivalent, but that this is not always the case.

Choosing a step size to control the local error is natural in view of its definition. There is no reason equally obvious to control the truncation error; however, it is a fundamental quantity in convergence proofs, and controlling it does control the global error in the same way as controlling local error does. While we much prefer to control the local error because of its direct relation to the computed values, this is a question of taste, and anyway, the two quantities are usually asymptotically equivalent. We emphasize the


distinction, since Gear's code in one circumstance controls truncation error rather than local error.

The model situation we use to study error estimation is: We compute through x_{n-1} using a method of order k and a constant step size h. Of the next step of size h we ask two questions, one theoretical and one practical. If the step is made using the three different orders k-1, k, and k+1, can the local errors be estimated in spite of the propagated errors of order k in the preceding values? If only the computation of order k is actually made, can the errors that would have been made at k-1 and k+1 be estimated from the available data? This model is consistent with Henrici's earlier study [1, pp. 256-258] of truncation error at fixed order and step size. Its flaws as a model of real computation using variable step size and variable order, and for which asymptotic estimation results may be poorly applicable, are obvious. Still, any useful procedure ought to perform properly in this situation, and the model does expose points of theoretical and practical importance.

First we make some general remarks about local error. An approximation y_n to y(x_n) is generated using one of several possible formulas. The local error is u_{n-1}(x_n) - y_n. Since u_{n-1}(x) and y(x) are solutions of the same differential equation which differ by the small amount \delta = y_{n-1} - y(x_{n-1}) at x_{n-1}, the equation of first variation can be used to relate them. If y(x) is the solution of (3.1), (3.2), then u_{n-1}(x) and y(x) are related by

    u_{n-1}(x) = y(x) + \delta v(x) + O(\delta^2),

where v(x) is the solution of

    v'(x) = f_y(x, y(x)) v(x),   v(x_{n-1}) = 1.

An expansion of v(x) about x_{n-1} leads to the local relation

    u_{n-1}(x_{n-i}) = y(x_{n-i}) + \delta [1 + O(h)] + O(\delta^2)                       (3.9)

for, say, 0 <= i <= k. If f is smooth, as we always suppose, then [6, p. 40] it is even true that

    u^{(j)}_{n-1}(x_{n-1}) = y^{(j)}(x_{n-1}) + O(\delta).                                (3.10)

In view of this and the expressions (3.7), (3.8) for the local discretization errors, we see the local discretization errors and the truncation errors are asymptotically equivalent for our model problem.
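The equivalence is easy to see numerically. The sketch below is ours (the test equation y' = -y is an arbitrary choice); because the problem is linear, the local solution through (x_{n-1}, y_{n-1}) is available in closed form, so the local error (3.3) of an order-2 Adams-Bashforth step can be computed exactly and compared with the truncation error:

```python
# Local error (3.3) vs. truncation error for the order-2 Adams-Bashforth
# formula on the test problem y' = -y. Illustrative sketch; names ours.
import math

lam = -1.0
f = lambda x, y: lam * y
exact = lambda x: math.exp(lam * x)

def last_step(h, nsteps):
    xs, ys = [0.0, h], [exact(0.0), exact(h)]      # exact starting values
    for _ in range(nsteps - 1):                    # AB2: y_n = y_{n-1} + h(3/2 f_{n-1} - 1/2 f_{n-2})
        yn = ys[-1] + h * (1.5 * f(xs[-1], ys[-1]) - 0.5 * f(xs[-2], ys[-2]))
        xs.append(xs[-1] + h); ys.append(yn)
    # local error: local solution through (x_{n-1}, y_{n-1}), minus y_n
    le = ys[-2] * math.exp(lam * h) - ys[-1]
    # truncation error: the same formula applied to the global solution
    y1, y2 = exact(xs[-2]), exact(xs[-3])
    te = exact(xs[-1]) - (y1 + h * (1.5 * f(xs[-2], y1) - 0.5 * f(xs[-3], y2)))
    return le, te

for h in (0.1, 0.05, 0.025):
    le, te = last_step(h, round(1.0 / h))
    print(h, le / te)                              # ratio approaches 1 as h -> 0
```

The propagated error in the memorized values perturbs the ratio only at the next order in h, which is the content of the asymptotic equivalence.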


The idea of the analysis is simple. The computed values y_{n-i} are related to y(x_{n-i}) by standard asymptotic expansions of the global error for a method using fixed step size and order. Then the y_{n-i} are related to the u_{n-1}(x_{n-i}) through (3.9), and so we can discuss the local error. In all cases the local error involves derivatives of u_{n-1}(x), which because of (3.10) can be related to the truncation error. To proceed beyond this general outline we must be more specific about the asymptotic expansion. There are two cases, depending upon whether or not we extrapolate.

First we shall look at the case of not extrapolating, so that we use a predictor and corrector of the same order. The principal variable order code of this kind, Gear's DIFSUB, iterates to completion, so we shall do the same, although the analysis is somewhat easier for a predictor-corrector approach. Only the estimation of error at order k+1 will be detailed, since the estimation for other orders is similar and considerably simpler. With our assumptions the global error has an asymptotic expansion of the form [7]

    y_i = y(x_i) + h^k \phi(x_i) + h^{k+1} \psi(x_i) + O(h^{k+2}),                        (3.11)

where

    \phi'(x) = g(x) \phi(x) - \gamma*_k y^{(k+1)}(x)                                      (3.12)

and g(x) = f_y(x, y(x)). As in (3.9), but carrying an extra term in the expansion of v(x) about x_{n-1}, one finds

    u_{n-1}(x_{n-i}) = y(x_{n-i}) + \delta [1 - (i-1) h g(x_{n-1})] + O(h^2 \delta) + O(\delta^2),

where \delta = y_{n-1} - y(x_{n-1}) = O(h^k) and \phi(x_{n-i}) = \phi(x_{n-1}) + O(h). We shall write y_n(k+1) and the like to signify approximate values from the formula of order k+1, and le(k+1) for the local error at x_n when at order k+1. We also write f_{n-i} for f(x_{n-i}, y_{n-i}). In combination with (3.11) and (3.12) the last equation implies the local relation

    y_{n-i} - u_{n-1}(x_{n-i}) = (i-1) h^{k+1} \gamma*_k y^{(k+1)}(x_{n-1}) + O(h^{k+2}),   0 <= i <= k+1.   (3.13)

First one looks at the error associated with a predicted value of order k+1,

    y_n^P(k+1) = y_{n-1} + h \sum_{i=1}^{k+1} \beta_{k+1,i} f_{n-i}.

From (3.6), (3.7) one sees this may be written in terms of the local discretization error of the Adams-Bashforth method and the error in the preceding values:

    u_{n-1}(x_n) - y_n^P(k+1) = \tau_n^P + h \sum_{i=1}^{k+1} \beta_{k+1,i} [f(x_{n-i}, u_{n-1}(x_{n-i})) - f_{n-i}].

Since

    f(x_{n-i}, u_{n-1}(x_{n-i})) - f(x_{n-i}, y_{n-i}) = f_y(x_{n-i}, \xi_{n-i}) [u_{n-1}(x_{n-i}) - y_{n-i}]

for \xi_{n-i} between y_{n-i} and u_{n-1}(x_{n-i}), one can relate the partial derivative to g(x_{n-1}) and use (3.13) to see that

    f(x_{n-i}, u_{n-1}(x_{n-i})) - f(x_{n-i}, y_{n-i}) = -g(x_{n-1}) (i-1) h^{k+1} \gamma*_k y^{(k+1)}(x_{n-1}) + O(h^{k+2}).

With this relation and the identity

    \sum_{i=1}^{k+1} \beta_{k+1,i} (i-1) = -1/2,

which follows from the exactness of the formula when applied to y(x) = x^2/2 for x_n = 1 and h = 1, one arrives at the simple expression

    u_{n-1}(x_n) - y_n^P(k+1) = \gamma_{k+1} h^{k+2} y^{(k+2)}(x_{n-1})
                                + (1/2) \gamma*_k g(x_{n-1}) h^{k+2} y^{(k+1)}(x_{n-1}) + O(h^{k+3}).   (3.14)

The Adams-Moulton value y_n(k+1) is defined implicitly by

    y_n(k+1) = y_{n-1} + h \sum_{i=1}^{k} \beta*_{k+1,i} f_{n-i} + h \beta*_{k+1,0} f(x_n, y_n(k+1)).

The local error is given in terms of the local discretization error and the error in the computed solutions:

    le(k+1) = \gamma*_{k+1} h^{k+2} u^{(k+2)}_{n-1}(x_{n-1})
              + h \sum_{i=0}^{k} \beta*_{k+1,i} [f(x_{n-i}, u_{n-1}(x_{n-i})) - f(x_{n-i}, y_{n-i})],

where for i = 0 the computed value is y_n(k+1). In exactly the same way we investigated the error of the predicted value, we find the size of the sum. Here we use the identity

    \sum_{i=0}^{k} \beta*_{k+1,i} (i-1) = -1/2,

which again follows from exactness applied to y(x) = x^2/2. After first realizing these expressions for the local error imply that it is O(h^{k+2}), we can simplify them to

    le(k+1) = \gamma*_{k+1} h^{k+2} y^{(k+2)}(x_{n-1})
              + (1/2 - \beta*_{k+1,0}) \gamma*_k g(x_{n-1}) h^{k+2} y^{(k+1)}(x_{n-1}) + O(h^{k+3}).

The truncation error is the local discretization error (3.8) of the formula as applied to the global solution:

    te(k+1) = \gamma*_{k+1} h^{k+2} y^{(k+2)}(x_{n-1}) + O(h^{k+3}).

The expression for le(k+1), showing how the propagated error appears in the local error, is one of the principal results of this paper. Comparison with te(k+1) shows that in general local error and truncation error are not equivalent, which is hardly surprising, since the propagated error is of quite a bit lower order than the quantity being estimated. In view of the term g(x_{n-1}), it appears difficult to estimate the local error. If one chooses to use Adams methods with predictor and corrector of the same order without extrapolation, it appears that the concept of local error is inappropriate. Fortunately we can provide a convenient and simple alternative.

A practical estimator cannot use f(x_n, y_n(k+1)), since we cannot afford to do two separate computations just to consider a change in order. But if we use the computed value f(x_n, y_n) to define

    y^_n(k+1) = y_{n-1} + h \sum_{i=1}^{k} \beta*_{k+1,i} f_{n-i} + h \beta*_{k+1,0} f(x_n, y_n),

an argument exactly like the one for y_n^P(k+1) shows that

    u_{n-1}(x_n) - y^_n(k+1) = \gamma*_{k+1} h^{k+2} y^{(k+2)}(x_{n-1})
                               + (1/2) \gamma*_k g(x_{n-1}) h^{k+2} y^{(k+1)}(x_{n-1}) + O(h^{k+3}).

Combining this with (3.14) and using (3.10), we arrive at the practical estimator

    te(k+1) = (\gamma*_{k+1}/\gamma_k) [y^_n(k+1) - y_n^P(k+1)] + O(h^{k+3})
            = h \gamma*_{k+1} \nabla^{k+1} f_n + O(h^{k+3}).

This says that we can estimate truncation error, which then appears to be the appropriate concept when using Adams methods in this way. We shall now show this is what Gear does for heuristic reasons in his code. In his notation

    a_k = h^k y_n^{(k)}/k!.

Recall from Sec. 2 that because the predictor and corrector are of the same order, the scaled derivatives of the predictor polynomial for the next step are the same as those of the corrector for the step in progress. Also recalling that h^k y_n^{(k)} = h \nabla^{k-1} f_n, so that \nabla^2 a_k = h \nabla^{k+1} f_n/k!, we see the estimator is k! \gamma*_{k+1} \nabla^2 a_k, which is what Gear uses.
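The identity underlying Gear's estimator can be checked exactly in difference form. A sketch (ours; the history values and names are arbitrary):

```python
# Identity behind Gear's order-raising estimator: with a_k = h grad^{k-1} f_n / k!
# (equal to h^k y_n^{(k)}/k!), the quantity k! gamma*_{k+1} grad^2 a_k is exactly
# the difference-form estimate h gamma*_{k+1} grad^{k+1} f_n. Sketch, names ours.
import math
from fractions import Fraction

k, h = 3, Fraction(1, 10)
fact = math.factorial(k)
gs_k1 = Fraction(-19, 720)                   # gamma*_{k+1} for k = 3
fs = [Fraction(2) ** p / 3 for p in range(k + 3)]   # arbitrary history, newest last

def nabla(seq, i):
    """backward difference of order i at the newest (last) entry"""
    s = list(seq)
    for _ in range(i):
        s = [s[j] - s[j - 1] for j in range(1, len(s))]
    return s[-1]

# a_k at three consecutive steps, from the difference identity above
a = [h * nabla(fs[:m], k - 1) / fact for m in (len(fs) - 2, len(fs) - 1, len(fs))]
lhs = fact * gs_k1 * (a[2] - 2 * a[1] + a[0])       # k! gamma*_{k+1} grad^2 a_k
rhs = h * gs_k1 * nabla(fs, k + 1)                  # h gamma*_{k+1} grad^{k+1} f_n
assert lhs == rhs and rhs != 0
print("identity holds:", lhs)
```

Because the arithmetic is exact, the two forms agree identically, not just to rounding error.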

For orders k and k-1 the local error is asymptotically equivalent to the truncation error, and

    le(k) = te(k) + O(h^{k+2})
          = (\gamma*_k/\gamma_{k-1}) [y_n(k) - y_n^P(k)] + O(h^{k+2})
          = h \gamma*_k \nabla^k f_n + O(h^{k+2}),                                        (3.15)

which differs only in notation from the same quantity used in the Taylor form. The expression corresponding to (3.15) for order k-1 is not practical because it requires the evaluation of f(x_n, y_n(k-1)). If one introduces the auxiliary quantity y*_n defined by

    y*_n = y_{n-1} + h \sum_{i=1}^{k-2} \beta*_{k-1,i} f_{n-i} + h \beta*_{k-1,0} f(x_n, y_n),

which uses the value f(x_n, y_n), it is easy to see that y*_n = y_n(k-1) + O(h^{k+1}). Then the practical estimator is

    le(k-1) = h \gamma*_{k-1} \nabla^{k-1} f_n + O(h^{k+1}).

For scaled derivatives

we note

    h \nabla^{k-1} f_n = h \nabla^{k-1} f_{n-1} + h \nabla^k f_n,

but because P_{k,n-1} is of degree k,

    h \nabla^{k-1} f_{n-1} = h^k y^{(k)}_{n-1} = h^k y_n^{(k)}(P),

so

    le(k-1) = \gamma*_{k-1} [k! (h^k y_n^{(k)}(P)/k!) + h \nabla^k f_n] + O(h^{k+1}).

Since Gear always computes the scaled derivatives as though continuing at order k, the estimate appears as

    le(k-1) = \gamma*_{k-1} k! (h^k y_n^{(k)}/k!) + O(h^{k+1})

in his code, because the two expressions are identical in this case.

Let us summarize the conclusions for using Adams methods without extrapolation. The concept of local error is equivalent to that of truncation error for orders k and k-1, but not for order k+1. It is feasible to estimate the truncation error for order k+1. Practical estimators using only the computations at order k are justified for the model situation, but at order k+1 it is the truncation error which is estimated. These estimators are the same as those used by Gear. The viewpoint as to what is being controlled by order and step adjustments is quite different for local error and truncation error, though the action taken is the same. Because one is apparently unable to estimate local error for raising the order in this way of using the Adams methods, it is probably best to think of such codes as controlling truncation error in all instances.

The second principal way of using the Adams formulas involves extrapolation, for which a PECE approach is best. In this mode the local error of the procedure using predictor and corrector of order k is estimated, and the estimate is added in to improve the accuracy. In this context we let y_n mean the extrapolated value and use y_n(k) for the result of using a corrector of order k. The predicted and corrected values of order k are related by

    y_n(k) = y_n^P(k) + h \gamma_{k-1} \nabla^k_P f_n.

The error estimator to be used is h \gamma*_k \nabla^k_P f_n, which is added to y_n(k) to form y_n; written directly, y_n is

    y_n = y_n(k) + (\gamma*_k/\gamma_{k-1}) [y_n(k) - y_n^P(k)]
        = y_n^P(k) + h \gamma_k \nabla^k_P f_n.

As we noted in (2.14), this corresponds to using a corrector of order k+1 and a predictor of order k. Henrici [1, p. 261 ff.] shows that with a suitable start the PECE algorithm is asymptotically of order k+1. Krogh's DVDQ uses this approach, and we believe it to be one of the most effective ways to use Adams methods. The PECE formulation is more

efficient than iterating to completion, and about as efficient as any of the PE(CE)^m or PE(CE)^m E possibilities [8]. Extrapolation has important benefits. Krogh [9] was apparently the first to point out that extrapolation improves the stability of the PECE formulation of the Adams method. More corrections and evaluations than PECE are not advantageous as regards stability [10]. Thus for "large" h, extrapolation enhances absolute stability, and for "small" h it increases the accuracy.

When extrapolating, one regards the method as being of order k and adjusts the step size and order accordingly. In point of fact, having added in the estimate, one does not know what the error actually is; hopefully it is smaller. Asymptotically the error is less than that estimated, so one is being conservative in regarding the order as k, which is generally desirable in real computation. It will turn out that in this mode one can always estimate the local error, which we prefer for step size and order variation. Nonetheless, extrapolation will not necessarily improve the error on a non-asymptotic basis and can make it worse. Whether or not to extrapolate is rather a matter of taste, and experts do not agree.

The analysis is simple and leads to the conclusions that local error is asymptotically equivalent to truncation error in each case, and that it can be estimated by Milne's device of comparing predicted and corrected values. The practical estimators at orders k and k-1 are analogous to those for the case of not extrapolating, the exact analog being the use of the corrected value y_n(k) in place of the implicit solution. However, it is straightforward to show that the predicted

value y_n^P(k) is sufficiently accurate to replace the corrected value. Thus

    le(k-1) = h \gamma*_{k-1} \nabla^{k-1}_P f_n + O(h^{k+1})
            = \gamma*_{k-1} [k! (h^k y_n^{(k)}(P)/k!) + h \nabla^k_P f_n] + O(h^{k+1});

the estimators at orders k and k-1 only require the value f(x_n, y_n^P(k)). As a consequence, the decision as to whether or not a step is acceptable can be made after computing only through PEC. If the step is to be rejected, the final evaluation is not done, and one can still estimate a suitable step size at both orders k and k-1. This is more economical than iterating to completion. The local error estimate at order k+1 requires y_n(k+1). Because of local extrapolation, the corrected

value y_n is asymptotically equivalent and may be substituted. The practical estimate is

    le(k+1) = h \gamma*_{k+1} \nabla^{k+1}_P f_n + O(h^{k+3})
            = \gamma*_{k+1} [h \nabla^k_P f_n - h \nabla^k f_{n-1}] + O(h^{k+3}).

Thus it is necessary to do at least PECE with extrapolation to estimate le(k+1).

We have been fortunate to receive very detailed critical reviews of a preliminary manuscript by C. W. Gear, F. T. Krogh, and T. E. Hull.

REFERENCES

1. P. Henrici, Discrete Variable Methods in Ordinary Differential Equations, Wiley, New York, 1962.
2. F. T. Krogh, VODQ/SVDQ/DVDQ: variable order integrators for the numerical solution of ordinary differential equations, TU Doc. No. CP-2308, NPO-11643, Jet Propulsion Laboratory, Pasadena, Calif., May 1969.
3. C. W. Gear, Numerical Initial Value Problems in Ordinary Differential Equations, Prentice-Hall, Englewood Cliffs, N.J., 1971.
4. P. Piotrowski, Stability, consistency and convergence of variable k-step methods for numerical integration of large systems of ordinary differential equations, in Conference on the Numerical Solution of Differential Equations, Lecture Notes in Mathematics No. 109, Springer-Verlag, Berlin, 1969, pp. 221-227.
5. C. W. Gear and D. S. Watanabe, Stability and convergence of variable order multistep methods, Report No. 571, Dept. of Computer Science, Univ. of Illinois at Champaign-Urbana, 1972.
6. S. Lefschetz, Differential Equations: Geometric Theory, Interscience, New York, 1957.
7. W. B. Gragg, Repeated extrapolation to the limit in the numerical solution of ordinary differential equations, Thesis, UCLA, 1964.
8. T. E. Hull and A. L. Creemer, Efficiency of predictor-corrector procedures, J. ACM 10 (1963), 291-301.
9. F. T. Krogh, A variable step variable order multistep method for the numerical solution of ordinary differential equations, in Proceedings of the IFIP Congress 1968, Information Processing 68, North-Holland, Amsterdam, 1969, pp. 194-199.
10. G. Hall, Stability analysis of predictor-corrector algorithms of Adams type, SIAM J. Numer. Anal. 11 (1974), 494-505.