GENERAL LINEAR METHODS: A SURVEY

J.C. BUTCHER
Department of Computer Science, The University of Auckland, Auckland, New Zealand

Applied Numerical Mathematics 1 (1985) 273-284
North-Holland

0168-9274/85/$3.30 © 1985, Elsevier Science Publishers B.V. (North-Holland)
1. Introduction

Following the advice of Aristotle, we look for the greatest good as a mean between extremes. Of the various methods devised as generalizations of the classical method of Euler, two extreme approaches are typically followed. One is to generalize the Euler method through the use of multistep methods; the other is to increase the complexity of one-step methods as in the Runge-Kutta method. In the discussion of Section 2, general linear methods are introduced as a middle ground between these types of generalization, and examples of these methods are given. In Section 3, some of the elementary theoretical properties concerned with convergence of the method are considered and a discussion of truncation error follows in Section 4. Some of the difficulties of implementation that these methods share with Runge-Kutta methods are discussed in Section 5 and, in Section 6, properties concerned with A-stability and its non-autonomous and non-linear counterparts are defined and interrelated. Finally, in Section 7, we indulge in some speculation about the possible construction of efficient software using a suite of general linear methods.

2. The Euler method and generalizations

The classical method for solving initial value problems numerically, the method of Euler, is ideal as an object of theoretical study but unsatisfactory as a means of obtaining accurate results. Methods proposed since the time of Euler have been successive generalizations in various directions. To solve the differential equation

y'(x) = f(y(x))

by the Euler method, we compute y_0, y_1, y_2, ... as approximations to y(x_0), y(x_1), y(x_2), ... where x_1 = x_0 + h, x_2 = x_1 + h, ... and h is the so-called stepsize, using the formulae

y_0 = y(x_0),
y_n = y_{n-1} + h f(y_{n-1}),   n = 1, 2, ....

In the formulation of this method we note that:
(a) y_n depends explicitly on y_{n-1} but on none of the earlier values y_{n-2}, y_{n-3}, ...;
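The recursion above can be sketched in a few lines of Python (the sketch and the test problem y' = -y are ours, not the paper's):

```python
def euler(f, y0, h, n):
    """Compute an approximation to y(x_0 + n*h) by y_k = y_{k-1} + h*f(y_{k-1})."""
    y = y0
    for _ in range(n):
        y = y + h * f(y)
    return y

# Example problem (our choice): y' = -y, y(0) = 1, so y(1) = exp(-1) ~ 0.3679.
approx = euler(lambda y: -y, 1.0, 0.001, 1000)
```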


(b) the function f is evaluated only once in the step from the computation of y_{n-1} to the computation of y_n;
(c) only the function f itself is used, rather than f_2, f_3, ..., say, which yield values of y''(x), y'''(x), ... in terms of y(x), or f' (the Jacobian of f), f'', ....

The most important generalizations of the Euler method are:
(A) The use of formulae which violate (a). That is, y_n depends on y_{n-2}, ..., y_{n-k} (k >= 2) as well as on y_{n-1}. A simple example is

y_n = y_{n-2} + 2h f(y_{n-1}).

These methods are known as 'linear multistep methods'.
(B) The use of formulae which violate (b). That is, more than one evaluation of f is involved in a formula for y_n. An example of this type of method is

y_n = y_{n-1} + h f(y_{n-1} + (1/2)h f(y_{n-1})).

These methods are known as 'Runge-Kutta methods'.
(C) The use of formulae which violate (c). For example, expressions for y''(x), y'''(x), ..., may be used along with an expression for y'(x). The 'Taylor series method' is of this type. Alternatively, formulae for f', f'', ... as well as for f may be used. 'Rosenbrock methods' generalize the Runge-Kutta method by making use of f' in the procedure for computing y_n in terms of y_{n-1}.

It is convenient to represent the various types of generalization in a diagram (see Fig. 1).
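The two example formulae above can be written down directly; the following Python sketch (ours, with our own test problem y' = y) shows that example (A) needs one new f-evaluation but two previous values, while example (B) needs two f-evaluations but only one previous value:

```python
def multistep_step(f, h, y_nm1, y_nm2):
    # Example (A): y_n = y_{n-2} + 2h f(y_{n-1}).
    return y_nm2 + 2 * h * f(y_nm1)

def runge_kutta_step(f, h, y_nm1):
    # Example (B): y_n = y_{n-1} + h f(y_{n-1} + (1/2)h f(y_{n-1})).
    return y_nm1 + h * f(y_nm1 + 0.5 * h * f(y_nm1))

# Both are second-order accurate on y' = y (exact solution exp(x)).
rk = runge_kutta_step(lambda y: y, 0.1, 1.0)
ms = multistep_step(lambda y: y, 0.1, 2.718281828459045**0.1, 1.0)
```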

Fig. 1. (Diagram relating the Euler method to its generalizations along the directions A, B and C; image not reproduced.)

Fig. 2. (Diagram of special method types within the (A, B)-plane of general linear methods; image not reproduced.)


This enormous class of methods has hardly been explored at all. In our discussion we will now limit ourselves to the (A, B)-plane, which represents the class of all general linear methods. In the redrawn diagram (Fig. 2), in which we confine ourselves to this plane, a few special types of method are shown. Apart from Runge-Kutta methods, linear multistep methods and of course the Euler method itself, all the types of method shown here were proposed within a remarkably short period of time. Generalized multistep methods were introduced by Gragg and Stetter [25] in 1964. In 1965 the work by Gear [23] on hybrid methods was published and in the same year closely related ideas known as modified multistep methods were suggested by the present author [5]. In 1966 Byrne and Lambert [15] published their work on pseudo Runge-Kutta methods, the same year as the author's paper on the convergence of general linear methods [6] appeared.

The formulation adopted in the present survey, in which the quantities occurring in the method are partitioned into two classes according as they do or do not have a significance outside the current step, is the one introduced by Burrage and the author [2] in 1980. Let Y_1, Y_2, ..., Y_s denote approximations computed within step number n to advance the solution one unit of distance further forward and let y_1^(n), y_2^(n), ..., y_r^(n) denote the vectors of information available at the end of step number n for input to the next step. The way these quantities are computed from y_1^(n-1), y_2^(n-1), ..., y_r^(n-1) is by the formulae

Y_i = h sum_{j=1}^{s} c_{ij}^{11} f(Y_j) + sum_{j=1}^{r} c_{ij}^{12} y_j^(n-1),   i = 1, 2, ..., s,

y_i^(n) = h sum_{j=1}^{s} c_{ij}^{21} f(Y_j) + sum_{j=1}^{r} c_{ij}^{22} y_j^(n-1),   i = 1, 2, ..., r.

The matrices C_11, C_12, C_21, C_22, with elements c_{ij}^{11}, c_{ij}^{12}, c_{ij}^{21}, c_{ij}^{22}, will be regarded as blocks of a matrix C, partitioned as (s + r) x (s + r),

C = [ C_11 | C_12 ]
    [------+------]
    [ C_21 | C_22 ].
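In modern notation the two defining formulae translate directly into code. The following Python sketch (ours; it resolves any implicitness in C_11 by simple fixed-point iteration, which is adequate only for non-stiff problems and small h) advances the solution by one step:

```python
import numpy as np

def glm_step(f, y_in, h, C11, C12, C21, C22, iterations=50):
    """One step of a general linear method.

    Stage values:  Y_i     = h*sum_j C11[i,j]*f(Y_j) + sum_j C12[i,j]*y_in[j]
    Step outputs:  y_out_i = h*sum_j C21[i,j]*f(Y_j) + sum_j C22[i,j]*y_in[j]
    """
    Y = C12 @ y_in                       # initial guess for the stage values
    for _ in range(iterations):          # fixed-point iteration handles C11
        Y = h * (C11 @ f(Y)) + C12 @ y_in
    return h * (C21 @ f(Y)) + C22 @ y_in

# The Euler method is the trivial case s = r = 1 with C = [[0, 1], [1, 1]].
one = np.array([[1.0]])
out = glm_step(lambda Y: -Y, np.array([1.0]), 0.1,
               np.array([[0.0]]), one, one, one)
```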

To illustrate the range of possible methods that can be represented in this way, four examples will be given. These are, a particular Runge-Kutta method, a particular linear multistep method, a hybrid method and finally, a general linear method that cannot easily be motivated in any other way.

The Runge-Kutta method

 0  |
1/2 | 1/2
 1  | -1    2
----+---------------
    | 1/6  2/3  1/6

when written as a general linear method, takes the form

C = [  0    0    0  | 1 ]
    [ 1/2   0    0  | 1 ]
    [ -1    2    0  | 1 ]
    [-------------------]
    [ 1/6  2/3  1/6 | 1 ].

We note that s is the number of stages in the Runge-Kutta method, in this case 3, and that r = 1.

The linear multistep method

y_n = y_{n-1} + h( (3/2) f(y_{n-1}) - (1/2) f(y_{n-2}) ),

when written as a general linear method, takes the form

C = [ 0 | 1  3/2  -1/2 ]
    [------------------]
    [ 0 | 1  3/2  -1/2 ]
    [ 1 | 0   0    0   ]
    [ 0 | 0   1    0   ].

Note that s = 1 and that for an arbitrary linear k-step method r could be as high as 2k, although for Adams methods, as in this example, r = k + 1.

The hybrid method

ŷ_{n-1/2} = y_{n-1} + (1/8)h( f(y_{n-1}) + 4 f(ŷ_{n-1/2}) - f(ŷ_n) ),

ŷ_n = y_{n-1} + (1/2)h( -f(y_{n-1}) + 4 f(ŷ_{n-1/2}) - f(ŷ_n) ),

y_n = y_{n-1} + (1/6)h( f(y_{n-1}) + 4 f(ŷ_{n-1/2}) + f(ŷ_n) )

makes use of temporary approximations ŷ_{n-1/2} to y(x_{n-1/2}) and ŷ_n to y(x_n) in the construction of the 'corrected result' y_n. As a general linear method it has r = s = 2. This method is interesting in that the spectral radius of C_11 is zero, implying rapid convergence in the iterative computation of ŷ_{n-1/2} and ŷ_n for a smooth non-stiff problem.

The final example, derived by Dekker [20], is a general linear method with r = s = 2 which combines the properties of being diagonally implicit, allowing efficient implementation, and algebraic stability, guaranteeing satisfactory behaviour with a wide range of both linear and non-linear stiff problems. This method has order of accuracy 4.
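The linear multistep example can be checked numerically: with input quantities (y_{n-1}, h f(y_{n-1}), h f(y_{n-2})), one general-linear-method step must reproduce the two-step Adams-Bashforth formula exactly. A Python sketch (ours; the block matrices below are our own transcription of that example):

```python
import numpy as np

# Two-step Adams-Bashforth, y_n = y_{n-1} + h*(3/2 f(y_{n-1}) - 1/2 f(y_{n-2})),
# as a general linear method with s = 1 stage and r = 3 input quantities
# (y_{n-1}, h f(y_{n-1}), h f(y_{n-2})).
C11 = np.array([[0.0]])                  # the single stage is explicit
C12 = np.array([[1.0, 1.5, -0.5]])       # the stage value Y_1 equals y_n
C21 = np.array([[0.0], [1.0], [0.0]])
C22 = np.array([[1.0, 1.5, -0.5],
                [0.0, 0.0, 0.0],         # second output is h f(Y_1) = h f(y_n)
                [0.0, 1.0, 0.0]])        # third output recycles h f(y_{n-1})

def ab2_glm_step(f, y_in, h):
    Y = C12 @ y_in                       # stage value (no implicit part)
    return h * (C21 @ f(Y)) + C22 @ y_in

# One step on y' = -y from history y_{n-2} = 1.0, y_{n-1} = 0.9 with h = 0.1.
y_in = np.array([0.9, 0.1 * (-0.9), 0.1 * (-1.0)])
y_out = ab2_glm_step(lambda Y: -Y, y_in, 0.1)
direct = 0.9 + 0.1 * (1.5 * (-0.9) - 0.5 * (-1.0))
```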

As natural extensions of linear multistep methods and of Runge-Kutta methods, we expect general linear methods to share both the advantages and the disadvantages of those more traditional methods, though possibly in a milder form. Naturally, we hope to select particular methods that possess as many of the advantages and as few of the disadvantages as possible.

3. Consistency, stability and convergence

For a general linear method characterized by the four matrices C_11, C_12, C_21 and C_22, we describe it as 'preconsistent' if there exists a vector u such that

C_22 u = u   and   C_12 u = e,

where e is the vector with each of its s components equal to 1.


For a preconsistent method, and a suitably posed differential equation, the assumption that the vectors y_1^(n-1), y_2^(n-1), ..., y_r^(n-1) are given by

y_j^(n-1) = u_j y(x_{n-1}) + O(h),   j = 1, 2, ..., r,

leads to the estimates

Y_i = y(x_{n-1}) + O(h),   i = 1, 2, ..., s,

y_j^(n) = u_j y(x_n) + O(h),   j = 1, 2, ..., r.

A preconsistent method with preconsistency vector u is said to be 'consistent' if there exists a vector v such that

C_22 v + C_21 e = v + u.

For a consistent method, and a suitably posed problem, the assumption that

y_j^(n-1) = u_j y(x_{n-1}) + h v_j y'(x_{n-1}) + O(h^2),   j = 1, 2, ..., r,    (3.1)

leads to

y_j^(n) = u_j y(x_n) + h v_j y'(x_n) + O(h^2),   j = 1, 2, ..., r.

It is useful to introduce the vector ξ = C_11 e + C_12 v, which represents the relative spacing of the vectors Y_1, Y_2, ..., Y_s as approximations to values of the solution y. That is, on the assumption that (3.1) holds,

Y_i = y(x_{n-1} + h ξ_i) + O(h^2),   i = 1, 2, ..., s.

The method characterized by C_11, C_12, C_21 and C_22 is said to be 'stable' iff C_22 has bounded powers. This guarantees that, at least for the trivial differential equation y'(x) = 0, perturbations in the initial value cause bounded changes to the computed solution after an indefinite number of steps. It generalizes the so-called root-condition occurring in the theory of linear multistep methods.

'Convergence' can be defined informally as the condition that if y_j^(0) = u_j y(x_0) + O(h) then y_j^(n) = u_j y(x_0 + nh) + O(h), j = 1, 2, ..., r, for all n subject to bounded nh. This condition guarantees that given accurate enough initial approximations we can obtain arbitrarily accurate numerical solutions at any fixed point by taking h sufficiently small.

The main theorem concerning the concepts discussed here is that stability and consistency are together necessary and sufficient for convergence. This result, which satisfactorily generalizes the fundamental work on linear multistep methods of Dahlquist [18], was proved in [6].
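The bounded-powers condition on C_22 can be probed numerically; a necessary condition is that every eigenvalue lies in the closed unit disc. The following Python sketch (ours) supplements the eigenvalue test with a crude sample of the norms of successive powers:

```python
import numpy as np

def appears_power_bounded(M, n_max=200, tol=1e-8):
    """Heuristic check of the stability condition: the powers of M stay bounded.

    Necessary: spectral radius <= 1.  Eigenvalues of modulus one must also be
    free of defective Jordan blocks; rather than test that directly, we simply
    watch the norms of M, M^2, ..., M^n_max.
    """
    if np.max(np.abs(np.linalg.eigvals(M))) > 1 + tol:
        return False
    P = np.eye(M.shape[0])
    norms = []
    for _ in range(n_max):
        P = P @ M
        norms.append(np.linalg.norm(P))
    return max(norms) < 10 * max(1.0, norms[0])

# A Jordan block with eigenvalue 1 violates the condition (its powers grow like
# n), exactly as a repeated root on the unit circle does for multistep methods.
ok = appears_power_bounded(np.array([[0.5, 1.0], [0.0, 1.0]]))
bad = appears_power_bounded(np.array([[1.0, 1.0], [0.0, 1.0]]))
```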

4. Order of accuracy

Going beyond mere convergence to the question of obtaining accurate solutions efficiently, we naturally ask how the classical concept of 'order of accuracy' should be interpreted for general linear methods. The answer we propose is to allow for a rather general type of starting process S, which can be characterized by four matrices, S_11 (s' x s'), S_12 (s' x 1), S_21 (r x s'), S_22 (r x 1), where s' may or may not be equal to s, for the method under study. The starting process for the computation of y_1^(0), y_2^(0), ..., y_r^(0) from an initial value y_0 is to


make use of approximations Ȳ_1, Ȳ_2, ..., Ȳ_{s'} analogous to Y_1, Y_2, ..., Y_s as follows

Ȳ_i = h sum_{j=1}^{s'} s_{ij}^{11} f(Ȳ_j) + y_0,   i = 1, 2, ..., s',

y_i^(0) = h sum_{j=1}^{s'} s_{ij}^{21} f(Ȳ_j) + u_i y_0,   i = 1, 2, ..., r,

where S_12 = ē, the vector of ones in R^{s'}, and where, for consistency with the method being started, S_22 = u. Apart from the generalization that the factor u_i is inserted, y_i^(0) is then computed from y_0 by a Runge-Kutta method and the entire starting process can be looked at as being represented as a vector in K^r, where K is the set of these generalized Runge-Kutta methods.

If we now look at the combined method consisting of S followed by C, we have what amounts to the (s' + s)-stage method given by

[   S_11    |   0   | ē ]
[-----------+-------+---]
[ C_12 S_21 | C_11  | e ]
[-----------+-------+---]
[ C_22 S_21 | C_21  | u ]

which we can conveniently write as C ∘ S. Since C ∘ S is also representable as a vector in K^r, C can be looked on as a mapping K^r -> K^r. If E denotes the shift operator which takes y(x_0) to y(x_1) = y(x_0 + h), corresponding to the exact solution, we also write S ∘ E as the starting process applied after E. Order of accuracy can now be defined relative to S as the greatest p for which, for a sufficiently smoothly behaved differential equation,

(C ∘ S) y_0 - (S ∘ E) y_0 = O(h^{p+1}).

This is best illustrated with a diagram (see Fig. 3). The relationships between the elements of C and S inherent in this diagram can be written, as for Runge-Kutta methods, in such a way that the individual equations correspond to the set of all rooted trees with up to p vertices. It is also possible to eliminate the role of S and regard a method as being of order p if it is of this order for some starting method.

Given that we have this type of local truncation error result, we can deduce a global error approximation by repeating the approximate parallelogram figure (see Fig. 4) as many times as the solution has taken steps. To recover the final answer we can, for example, retrace one of the components of S. This finishing procedure F should be such that F ∘ S forms a Runge-Kutta method that leaves an initial value invariant to within O(h^p) (see Fig. 5).

Fig. 3.


Fig. 4.

Fig. 5.

For a stable method of order p relative to S, and with F defined as we have described, the application of F to y^(n) gives y(x_n) + O(h^p) as long as nh is bounded. This generalizes the global accuracy results of the classical methods in a satisfactory way.

5. Implementation costs

As for Runge-Kutta methods, the complexity of the implementation process in general linear methods depends on the structure of the C_11 matrix. We can borrow the words 'semi-implicit', 'diagonally-implicit', 'singly-implicit' from Runge-Kutta terminology. In a semi-implicit method, the matrix C_11 is lower triangular so that the implicitness of Y_1, Y_2, ..., Y_s is only a step by step implicitness. They can be computed one at a time rather than as a giant system of equations. In a diagonally-implicit method, C_11 is not only lower triangular but has all its diagonal elements equal. This means that in the process of evaluating Y_1, Y_2, ... in turn by modified Newton iterations, the same matrix of partial derivatives (and its prepared LU factorization) can be used at every stage. Finally, in a singly-implicit method, C_11 has a one point spectrum. This means that C_11 differs only by a similarity transformation from diagonal implicitness. Implementation in this case is more costly than for


diagonally-implicit methods in that the similarity transformation has to be incorporated into the iterations, but for large stiff systems of equations this is asymptotically less expensive than the iterations themselves.
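For the linear test problem y' = Jy the saving offered by a diagonally-implicit C_11 is easy to exhibit: each stage satisfies (I - h*lam*J) Y_i = (known terms), so one factorization of a single matrix serves every stage. A Python sketch (ours; np.linalg.solve stands in for the back-substitution that a stored LU factorization would provide in a production code):

```python
import numpy as np

def dirk_glm_stages(J, y_in, h, C11, C12):
    """Stage computation for a diagonally-implicit general linear method
    applied to the linear problem y' = J y.  All diagonal entries of the
    lower triangular C11 are assumed equal to lam, so the matrix
    A = I - h*lam*J is the same for every stage solve."""
    s, d = C11.shape[0], J.shape[0]
    lam = C11[0, 0]
    A = np.eye(d) - h * lam * J          # formed (and factorized) once
    Y = np.zeros((s, d))
    F = np.zeros((s, d))                 # F[j] holds f(Y_j) = J @ Y_j
    for i in range(s):
        rhs = C12[i] @ y_in + h * sum(C11[i, j] * F[j] for j in range(i))
        Y[i] = np.linalg.solve(A, rhs)   # reuse of the single factorization
        F[i] = J @ Y[i]
    return Y

# One implicit stage with lam = 1/2 on y' = -y: Y_1 = 1/(1 + h/2).
Y = dirk_glm_stages(np.array([[-1.0]]), np.array([[1.0]]), 0.1,
                    np.array([[0.5]]), np.array([[1.0]]))
```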

6. Linear and non-linear stability

We now consider the behaviour of general linear methods with so-called stiff problems. We introduce three types of problems.

Problem 1: y'(x) = q y(x), where hq = z ∈ C⁻.

Problem 2: y'(x) = q(x) y(x), where for all x ∈ R, hq(x) = z(x) ∈ C⁻.

Problem 3: y'(x) = φ(y(x)), where ⟨φ(u), u⟩ <= 0.

In Problems 1 and 2, the dimension of the differential system is 1 whereas in Problem 3, it is of arbitrary dimension. Note that ⟨·,·⟩ denotes an inner product with ||·|| the corresponding norm. These problems are successively more general and each has a bounded solution. In particular, for Problem 3,

(d/dx) ||y(x)||^2 = 2 ⟨y'(x), y(x)⟩ = 2 ⟨φ(y(x)), y(x)⟩ <= 0,

implying that ||y(x)|| is non-increasing. Our interest here is in determining which general linear methods mimic the stable behaviour of the theoretical solution for each type of problem.

In the case of Problem 1, it is found that

y^(n) = ( C_22 + z C_21 (I - z C_11)^{-1} C_12 ) y^(n-1) = M(z) y^(n-1),

say,

so that a method is stable for this problem if M(z) has bounded powers for all z ∈ C⁻. Such a method is A-stable, extending the well established terminology of Dahlquist.

In the case of Problem 2, let Z denote diag(z_1, z_2, ..., z_s) where z_i = hq(x_{n-1} + h ξ_i) for i = 1, 2, ..., s. We now find that

y^(n) = M(Z) y^(n-1),

where

M(Z) = C_22 + C_21 Z (I - C_11 Z)^{-1} C_12.    (6.1)

It is tempting to define a method for which M(Z) necessarily has bounded powers whenever z_1, z_2, ..., z_s ∈ C⁻ to be AN-stable (the N referring to 'Non-autonomous'), since this generalizes in a simple way that definition for Runge-Kutta methods. We resist this temptation in favour of 'weakly AN-stable'. Weak AN-stability is, of course, not enough to guarantee stable behaviour for Problem 2, since different values of Z may occur in every step. Instead, we want the set of products of matrices like M(Z) to be bounded or, equivalently, we want there to exist a norm ||·|| such that ||M(Z)|| <= 1 for all Z with (diagonal) elements in C⁻. This property will be called 'strong AN-stability'. In the special case that the norm ||·|| is subordinate to an inner product, we will further call the property 'Euclidean AN-stability'.

Coming now to Problem 3, it is known that stable behaviour is guaranteed if there exists a positive definite r x r matrix G and a positive diagonal s x s matrix D such that

[ G - C_22^T G C_22        C_12^T D - C_22^T G C_21          ]
[ D C_12 - C_21^T G C_22   D C_11 + C_11^T D - C_21^T G C_21 ]

is non-negative definite. This property, introduced by Burrage and Butcher [2], is known as 'algebraic stability'. The known relationships between the various stability definitions we have referred to are as follows:

algebraic stability <=> Euclidean AN-stability => strong AN-stability => weak AN-stability => A-stability.
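The weakest property in this chain can at least be probed numerically from the coefficient blocks. The Python sketch below (ours) evaluates M(z) = C_22 + z C_21 (I - z C_11)^{-1} C_12 and samples its spectral radius over the left half-plane; this is a necessary check only, not a proof of A-stability:

```python
import numpy as np

def stability_matrix(z, C11, C12, C21, C22):
    """M(z) = C22 + z*C21 (I - z*C11)^(-1) C12 for scalar complex z."""
    s = C11.shape[0]
    return C22 + z * (C21 @ np.linalg.solve(np.eye(s) - z * C11, C12))

def samples_A_stable(C11, C12, C21, C22, n=200):
    """Spectral radius of M(z) <= 1 at random z with Re z <= 0 (necessary only)."""
    rng = np.random.default_rng(1)
    zs = -rng.uniform(0.0, 50.0, n) + 1j * rng.uniform(-50.0, 50.0, n)
    rho = lambda M: np.max(np.abs(np.linalg.eigvals(M)))
    return all(rho(stability_matrix(z, C11, C12, C21, C22)) <= 1 + 1e-10
               for z in zs)

one = np.array([[1.0]])
implicit_euler = samples_A_stable(one, one, one, one)                # M(z) = 1/(1-z)
explicit_euler = samples_A_stable(np.array([[0.0]]), one, one, one)  # M(z) = 1+z
```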

It is not yet known whether or not strong AN-stability implies Euclidean AN-stability. Rather than attempt to explore all these details, which appear in a report by the author [14], we will content ourselves with an outline proof that for preconsistent general linear methods, Euclidean AN-stability implies algebraic stability. This is the closest result concerning general linear methods to Dahlquist's important theorem that, in the case of the one-leg formulation of linear multistep methods, G- and A-stability are equivalent [19].

If ⟨v, w⟩ = v*Gw is the inner product occurring in the definition of Euclidean AN-stability, define D = diag(d_1, d_2, ..., d_s) where [d_1 d_2 ... d_s] = u^T G C_21. We shall prove the algebraic stability property using this G and D. Because of the property of Euclidean AN-stability,

v*Gv - v*M(Z)* G M(Z) v >= 0,    (6.2)

for all Z and any v ∈ C^r.

To prove d_j >= 0, j = 1, 2, ..., s, let Z = diag(z_1, z_2, ..., z_s) where z_k = 0 (k ≠ j), z_j = -t, where t is a small positive number. Choose this Z in (6.1) and v = u (the preconsistency vector). It is found that

M(Z) v = u - t C_21 e_j e_j^T C_12 u + O(t^2) = u - t C_21 e_j + O(t^2),

where we have used the fact that C_12 u = e. Substitute into (6.2) so that this inequality becomes

2t u^T G C_21 e_j + O(t^2) >= 0,

or, since this must hold for arbitrarily small t, d_j = u^T G C_21 e_j >= 0.

Now choose Z = itY where Y = diag(y_1, y_2, ..., y_s), where y_1, y_2, ..., y_s are real and where t is a small real parameter. If x ∈ R^r, choose v = u + itx and again substitute into (6.2).


After some manipulation it is found that

t^2 [x^T y^T] [ G - C_22^T G C_22        C_12^T D - C_22^T G C_21          ] [x]
              [ D C_12 - C_21^T G C_22   D C_11 + C_11^T D - C_21^T G C_21 ] [y]  + O(t^3) >= 0.

Since this is to hold for arbitrarily small t and for any x, y, the matrix occurring in this inequality is non-negative definite.

7. Software prospects for general linear methods

To narrow the choice amongst possible general linear methods, we look to see if some of the following goals can be achieved:

(i) For i = 1, 2, ..., r, y_i^(n) should be a good approximation to h^{i-1} y^{(i-1)}(x_n)/(i-1)!, where y^{(i-1)} denotes a derivative. This use of Nordsieck vectors would mean that we have a cheap stepsize changing strategy.

(ii) For i = 1, 2, ..., s, Y_i should be a good approximation to y(x_{n-1} + h ξ_i). This seems to be desirable because of the danger that for typical stiff problems, the theoretical order is not realisable in a practical sense unless the stage order equals the usual order. The phenomenon leading to this caution has been studied in the case of Runge-Kutta methods from different points of view by Prothero and Robinson [30] and Frank, Schneid and Ueberhuber [22].

(iii) The matrix C_11 should have a one point spectrum and a convenient similarity transformation to Jordan form. This would allow the use of the relatively efficient implementation scheme available to singly-implicit Runge-Kutta methods.

(iv) The method should be as close as possible to A-stable.

(v) Subject to the other requirements the order should be as high as possible.

(vi) There should be a cheap method of error estimation.

Fortunately, there seem to be methods satisfying these requirements. If one aims for order r + s - 1, not only for the method as a whole but also for the offstep points and for all components of the Nordsieck vector, then single-implicitness is achievable if and only if ξ_1, ξ_2, ..., ξ_s are proportional to the zeros of a generalized Laguerre polynomial. Such methods are described in terms of their formal properties, including details of the transformations, in a recent paper by the author [13].
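Goal (i) is what makes stepsize change cheap: if y_i^(n) approximates h^{i-1} y^{(i-1)}(x_n)/(i-1)!, then replacing h by h_new merely multiplies the i-th component by (h_new/h)^{i-1}. A Python sketch (ours):

```python
def rescale_nordsieck(y, h_old, h_new):
    """Rescale a Nordsieck vector whose component y[i] approximates
    h_old**i * (d/dx)^i y(x_n) / i!  (0-based indexing), after the
    stepsize changes from h_old to h_new."""
    ratio = h_new / h_old
    return [component * ratio**i for i, component in enumerate(y)]

# Halving the stepsize halves the scaled first derivative, quarters the
# scaled second derivative, and so on.
rescaled = rescale_nordsieck([1.0, 2.0, 4.0], 0.1, 0.05)
```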
Further studies of the stability properties of such methods, beyond those begun in that paper, have recently been carried out by Burrage and Chipman [3], who identify methods with r - s = -2, -1, 0 or 1 up to order 14 with acceptable stability properties. Using an extension of the predictor-corrector approach to local error estimation, it seems likely that satisfactory variable order and variable stepsize controls can be provided for these methods. Furthermore, the approach of Nordsieck [29], which has been further developed by Gear [23], of rescaling y_2^(n), y_3^(n), ..., y_r^(n) when a stepsize change is made, works more satisfactorily with general linear methods than with linear multistep methods.

Although no systematic collection of general linear methods suitable for non-stiff problems stands out as superior to other possible choices, the simple device of using the stiff methods already referred to, but using functional iteration in place of Newton iterations, seems to give an acceptable type of method. Combining stiff and non-stiff options into essentially the same method has the advantage of making type-insensitivity easier to attain.

In summary, many of the problems associated with specifying and implementing general linear methods as the central components of differential equation software are solved and the remaining

seem to be solvable. The hope still exists of attaining the excellent stability properties of Runge-Kutta methods with the low computational costs of linear multistep methods.
8. Other aspects of general linear methods

In this brief survey, only some of the questions concerning these methods have been touched upon. In addition to the work cited in this paper, there is a growing literature on the subject which addresses many other issues. The bibliography which is given below includes references to some of this work in addition to the material surveyed in this paper.

References

[1] K. Burrage, Non-linear stability of multivalue multiderivative methods, BIT 20 (1980) 316-325.
[2] K. Burrage and J.C. Butcher, Non-linear stability of a general class of differential equation methods, BIT 20 (1980) 185-203.
[3] K. Burrage and F.H. Chipman, The stability properties of singly-implicit general linear methods, University of Auckland, Computer Science Report No. 30, 1983.
[4] K. Burrage and P. Moss, Simplifying assumptions for the order of partitioned multivalue methods, BIT 20 (1980) 452-465.
[5] J.C. Butcher, A modified multistep method for the numerical integration of ordinary differential equations, J. Assoc. Comput. Mach. 12 (1965) 124-135.
[6] J.C. Butcher, On the convergence of numerical solutions to ordinary differential equations, Math. Comp. 20 (1966) 1-10.
[7] J.C. Butcher, A multistep generalization of Runge-Kutta methods with four or five stages, J. Assoc. Comput. Mach. 14 (1967) 84-99.
[8] J.C. Butcher, The order of numerical methods for ordinary differential equations, Math. Comp. 27 (1973) 793-806.
[9] J.C. Butcher, Order conditions for a general class of numerical methods for ordinary differential equations, in: Topics in Numerical Analysis (Academic Press, London and New York, 1973) 35-40.
[10] J.C. Butcher, The order of differential equation methods, Lecture Notes in Math. 362 (Springer, New York, 1974) 72-75.
[11] J.C. Butcher, Order conditions for general linear methods for ordinary differential equations, Internat. Ser. Numer. Math. 19 (Birkhauser, Basel and Boston, 1974) 77-81.
[12] J.C. Butcher, Stability properties for a general class of methods for ordinary differential equations, SIAM J. Numer. Anal. 18 (1981) 37-44.
[13] J.C. Butcher, A generalization of singly-implicit methods, BIT 21 (1981) 175-189.
[14] J.C. Butcher, Linear and non-linear stability for general linear methods, University of Auckland, Computer Science Report No. 28, 1982.
[15] G.D. Byrne and R.J. Lambert, Pseudo-Runge-Kutta methods involving two points, J. Assoc. Comput. Mach. 13 (1966) 114-123.
[16] G.J. Cooper, The order of convergence of general linear methods for ordinary differential equations, SIAM J. Numer. Anal. 15 (1978) 643-661.
[17] G.J. Cooper, Error estimates for general linear methods for ordinary differential equations, SIAM J. Numer. Anal. 18 (1981) 65-82.
[18] G. Dahlquist, Convergence and stability in the numerical integration of ordinary differential equations, Math. Scand. 4 (1956) 33-53.
[19] G. Dahlquist, G-stability is equivalent to A-stability, BIT 18 (1978) 384-401.
[20] K. Dekker, Algebraic stability of general linear methods, University of Auckland, Computer Science Report No. 25, 1981.
[21] K. Dekker, Reducibility of algebraically stable general linear methods, Mathematisch Centrum Amsterdam, Numerical Mathematics preprint No. NW 131/82, 1982.


[22] R. Frank, J. Schneid and C.W. Ueberhuber, The concept of B-convergence, SIAM J. Numer. Anal. 18 (1981) 753-780.
[23] C.W. Gear, Hybrid methods for initial value problems in ordinary differential equations, SIAM J. Numer. Anal. 2 (1965) 69-86.
[24] C.W. Gear, The numerical integration of ordinary differential equations, Math. Comp. 21 (1967) 146-156.
[25] W.B. Gragg and H.J. Stetter, Generalized multistep predictor-corrector methods, J. Assoc. Comput. Mach. 11 (1964) 188-209.
[26] E. Hairer and G. Wanner, Multistep-multistage-multiderivative methods for ordinary differential equations, Computing 11 (1973) 287-303.
[27] E. Hairer and G. Wanner, On the Butcher group and general multi-value methods, Computing 13 (1974) 1-15.
[28] A.D. Heard, The solution of the order conditions for general linear methods, Thesis, University of Auckland, 1978.
[29] A. Nordsieck, On numerical integration of ordinary differential equations, Math. Comp. 16 (1962) 22-49.
[30] A. Prothero and A. Robinson, On the stability and accuracy of one-step methods for solving stiff systems of ordinary differential equations, Math. Comp. 28 (1974) 145-162.