Time symmetry and high-order Magnus methods


Applied Numerical Mathematics 39 (2001) 379–401 www.elsevier.com/locate/apnum

A. Iserles*, S.P. Nørsett, A.F. Rasmussen
Department of Applied Mathematics and Theoretical Physics, Silver Street, Cambridge CB3 9EW, UK
* Corresponding author. E-mail address: [email protected] (A. Iserles).

Abstract

The subject of this paper is the investigation of the Magnus expansion of a solution of the linear differential equation y′ = a(t)y, y(0) ∈ G, where G is a Lie group and a : ℝ₊ → g, g being the Lie algebra of G. We commence with a brief survey of recent work in this area. Next, building on earlier work of Iserles and Nørsett, we prove that an appropriate truncation of the expansion is time symmetric. Moreover, we develop a 6th-order Lie-group solver based on the Magnus expansion which requires just three function evaluations and the calculation of seven commutators. The paper is accompanied by a number of numerical experiments. © 2001 IMACS. Published by Elsevier Science B.V. All rights reserved.

1. Introduction

The conventional means to calculate a numerical solution of an ordinary differential equation is to use one of the classical methods, e.g., Runge–Kutta or linear multistep schemes, typically with variable step-size and error control. It is a matter of long-term experience, underpinned by a great deal of powerful analysis, that this approach is satisfactory for many problems and probably the method of choice for general equations.

The situation is different, however, if we are in possession of qualitative information with regard to the underlying equation and wish to recover it under discretization. In particular, if we wish to respect invariants, Iserles [8] proved that all linear multistep methods fail to respect nonlinear invariants. The situation is somewhat better for Runge–Kutta methods, some of which respect quadratic invariants [2], but a recent, as-yet-unpublished result of Iserles and Zanna restricts the range of invariants that can be conserved by any Runge–Kutta method to little more than quadratic. These observations underpin much of the recent effort to develop novel families of numerical integrators, designed to replicate exactly invariants, strong integrals and symmetries of the underlying differential system, an activity sometimes known as geometric integration.

Perhaps the most powerful general technique in geometric integration is the deployment of Lie-group solvers. The main reason is that many invariants (orthogonality, unitarity, volume conservation, ...) can



be formulated in terms of Lie groups. Moreover, let M be a homogeneous space: a smooth manifold which is subjected to a transitive action of a Lie group G [16]. Munthe-Kaas and Zanna [15] proved that, provided we can integrate numerically on G, we can amend our algorithm so as to stay on M. This extends the approach of Lie-group solvers to, among many others, spheres, tori and isospectral manifolds.

A number of Lie-group algorithms have been proposed and studied in the last few years: the rigid-frame technique of Crouch and Grossman [3,17], Runge–Kutta–Munthe-Kaas schemes [12,13], and Fer expansions [7,21]. Recently, Iserles and Nørsett [9] have introduced the very powerful technique of Magnus expansions. The approach itself dates to the seminal paper of Magnus [11] and Magnus expansions have received wide exposure in the mathematical physics literature as perturbative expansions of linear differential systems. Their systematic use as numerical integrators introduces a number of important issues which have been considered in [9] and, more recently, in [10,22].

This paper follows in the footsteps of [9]. We address ourselves to the linear matrix system

$$y' = a(t)y, \qquad t \ge 0, \quad y(0) = y_0, \eqno(1.1)$$

where y₀ ∈ G and a : ℝ₊ → g. Here G is a matrix Lie group, while g is its Lie algebra. While referring the reader to [16,20] for Lie-group terminology, we comment that most of this paper can be perfectly well understood by restricting attention to the case when G coincides with the orthogonal group O_n(ℝ) (the set of all n × n real orthogonal matrices), whereby g = so_n(ℝ) is the Lie algebra of n × n real skew-symmetric matrices.

The main idea behind the Magnus technique is to represent the solution in the form y(t) = e^{σ(t)} y₀, where the matrix function σ evolves on g, and to seek an expansion of σ in a form that involves the matrix function a in repeated commutation and integration. We defer a more comprehensive review of the method to Section 2. In Section 3 we investigate the order of magnitude of terms in the Magnus expansion. This leads to a proof of an important feature of such expansions: appropriately truncated, they are time-symmetric schemes. This immediately results in a number of benefits and reduced implementation costs. In Section 4 we describe in detail a Magnus method of order 6 which has been designed to reduce complexity and economize on the number of calculations. Finally, Section 5 displays a number of computational results which illustrate the power of the Magnus algorithm. We mention in passing that, although our exposition is restricted to the linear ordinary differential system (1.1), much of our analysis can be extended to the nonlinear case along the lines of [22].

2. Magnus series

A Lie group G is a smooth manifold endowed with a multiplicative group structure, consistent with the underlying topology of the manifold. The set GL_n(ℝ) of all nonsingular n × n real matrices is a Lie group (the general linear group), while a Lie group G ⊆ GL_n(ℝ) is called a matricial Lie group. Standard examples include T_n(ℝ) (the set of upper-triangular matrices), O_n(ℝ) (the orthogonal group), SL_n(ℝ) (the special linear group of matrices with a unit determinant), SO_n(ℝ) (the special orthogonal group, O_n(ℝ) ∩ SL_n(ℝ)), the symplectic group Sp_n(ℝ) and many more. It is possible to define Lie groups over other fields, but this greater generality does not help to elucidate the argument of this paper and we forthwith, without loss of generality, restrict ourselves to Lie groups over the reals.


The tangent space of a Lie group at the identity is a Lie algebra: a linear space equipped with a binary, linear, skew-symmetric operation often called commutation or a Lie bracket. For further reference we note that the commutator obeys the Jacobi identity

$$\bigl[x, [y, z]\bigr] + \bigl[y, [z, x]\bigr] + \bigl[z, [x, y]\bigr] = O, \qquad x, y, z \in \mathrm{g}. \eqno(2.1)$$

In the case of matricial Lie groups, this binary operation is just the standard additive commutator, [x, y] = xy − yx. The Lie algebra corresponding to T_n(ℝ) is the set of n × n real strictly upper-triangular matrices. Likewise, the Lie algebra so_n(ℝ) of both O_n(ℝ) and SO_n(ℝ) consists of n × n real skew-symmetric matrices. Further examples can be found in [16,20].

Given an element x ∈ g, we can return to the group by exponentiation, e^{tx} ∈ G, t ∈ ℝ, which in the case of matricial Lie groups corresponds to the standard matrix exponential. This is the motivation behind the idea of representing the solution of (1.1) in the form y(t) = e^{σ(t)} y₀, t ≥ 0: as long as we approximate σ by another element of g, we can be assured that the outcome evolves on G. Unlike the group G, its Lie algebra g is a linear space. It is considerably easier to restrict a numerical solution to the latter set, and this explains why so many Lie-group solvers commence by mapping the problem to the underlying algebra [9,13,21]. The linear differential equation (1.1) can be translated to the underlying algebra (and to the terminology of the function σ); this has already been described by Hausdorff [6]:

$$\sigma' = \sum_{k=0}^{\infty} \frac{B_k}{k!}\,\operatorname{ad}^k_{\sigma} a, \qquad t \ge 0, \quad \sigma(0) = 0, \eqno(2.2)$$

where {B_k}_{k=0}^∞ are the Bernoulli numbers [1], while the adjoint operator ad^k_x y is defined by iterated commutation:

$$\operatorname{ad}^0_x y = y, \qquad \operatorname{ad}^k_x y = \bigl[x, \operatorname{ad}^{k-1}_x y\bigr], \quad k \in \mathbb{N}.$$
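In present-day software this recursion is a few lines of code. The following Python sketch is ours rather than the paper's: ad and dexpinv are hypothetical names, the Bernoulli numbers come from sympy (whose convention B₁ = −1/2 matches (2.2)), and a crude Euler iteration on g merely illustrates how the algebra-side equation can be advanced.

```python
# Sketch only (not from the paper): iterated adjoints and a truncation of (2.2).
import math
import numpy as np
from scipy.linalg import expm
from sympy import bernoulli

def ad(x, y, k):
    """ad_x^k y, by iterated commutation as defined above."""
    for _ in range(k):
        y = x @ y - y @ x
    return y

def dexpinv(sigma, a, kmax=8):
    """Right-hand side of (2.2), truncated at k = kmax (odd B_k beyond B_1 vanish)."""
    return sum(float(bernoulli(k)) / math.factorial(k) * ad(sigma, a, k)
               for k in range(kmax + 1))

# Crude illustration: forward Euler on the Lie algebra, one exponentiation at the end.
a = lambda t: np.array([[0.0, 1.0], [-1.0 / (1.0 + t)**2, 0.0]])   # a(t) in sl(2)
sigma, t, h = np.zeros((2, 2)), 0.0, 1.0e-3
while t < 1.0 - 1e-12:
    sigma, t = sigma + h * dexpinv(sigma, a(t)), t + h
print(expm(sigma))    # approximates y(1) for y' = a(t)y, y(0) = I
```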

On the face of it, (2.2) is hardly an improvement over (1.1), being a nonlinear equation and involving an infinite sum. Fortunately, an observation of Magnus [11] demonstrates that (2.2) possesses a great deal of structure which can be harnessed to produce accurate and relatively affordable approximants. He showed that

$$\begin{aligned} \sigma(t) = {}& \int_0^t a(\xi_1)\,d\xi_1 - \frac12 \int_0^t\!\!\int_0^{\xi_1} \bigl[a(\xi_2), a(\xi_1)\bigr]\,d\xi_2\,d\xi_1 \\ &+ \frac14 \int_0^t\!\!\int_0^{\xi_1}\!\!\int_0^{\xi_2} \bigl[\bigl[a(\xi_3), a(\xi_2)\bigr], a(\xi_1)\bigr]\,d\xi_3\,d\xi_2\,d\xi_1 \\ &+ \frac{1}{12} \int_0^t\!\!\int_0^{\xi_1}\!\!\int_0^{\xi_1} \bigl[a(\xi_3), \bigl[a(\xi_2), a(\xi_1)\bigr]\bigr]\,d\xi_3\,d\xi_2\,d\xi_1 + \cdots \end{aligned} \eqno(2.3)$$

and this expression has been widely used, mainly in mathematical physics, as a means to derive a perturbation solution of (1.1).


The derivation of a Magnus expansion beyond the terms prescribed in (2.3), as well as its effective numerical implementation, were taken up by Iserles and Nørsett [9]. The general Magnus expansion of σ has the form

$$\sigma(t) = \sum_{m=0}^{\infty} \sum_{\tau\in\mathrm{T}_m} \alpha(\tau) \int_0^t H_\tau(\xi)\,d\xi, \qquad t \ge 0, \eqno(2.4)$$

where the set T_m will be defined soon, the α(τ) are scalars and the H_τ(t) ∈ g are functions of t that depend upon a and which, to be consistent with (2.3), are likely to involve integrals and nested commutators. The first main idea in [9] is to associate each term in (2.4) with a rooted tree, according to the following prescription: the trivial, single-vertex tree corresponds to the function itself,

$$\bullet \;\sim\; a(t), \eqno(2.5)$$

and, recursively, if the tree τ is composed by appending a root to τ₁ and joining the outcome with τ₂ (the tree diagram cannot be reproduced here), then

$$H_\tau(t) = \Bigl[\int_0^t H_{\tau_1}(\xi)\,d\xi,\; H_{\tau_2}(t)\Bigr]. \eqno(2.6)$$

Thus, appending a root to a tree corresponds to the integration of the underlying Magnus term, while joining two trees with a common root signifies commutation. The set T_m contains all trees with 3m + 1 vertices that can be obtained with the operation (2.6) (that is, 'integrate the first term and commute the outcome with the second term'). The coefficient α(τ) is set to one for the trivial tree (2.5). Otherwise, every τ ∈ T_m, m ≥ 1, can be written in a unique fashion in the form (2.7), assembled from s subtrees τ^[1], τ^[2], ..., τ^[s] (the tree diagram cannot be reproduced here), where the number s ≤ m is called the width of τ. In that case

$$\alpha(\tau) = \frac{B_s}{s!} \prod_{l=1}^{s} \alpha\bigl(\tau^{[l]}\bigr). \eqno(2.8)$$

The Magnus terms corresponding to (2.2) are thus

  H_τ(t)                                                         α(τ)
  --------------------------------------------------------------------
  a(t)                                                            1
  ∫₀ᵗ [a(ξ), a(t)] dξ                                            −1/2
  ∫₀ᵗ ∫₀^{ξ₁} [[a(ξ₂), a(ξ₁)], a(t)] dξ₂ dξ₁                      1/4
  ∫₀ᵗ ∫₀ᵗ [a(ξ₁), [a(ξ₂), a(t)]] dξ₂ dξ₁                          1/12

(the corresponding tree diagrams cannot be reproduced here)

and it is easy to generate further terms, whether by hand or by symbolic calculation, for as long as necessary [4].

Insofar as an efficient numerical procedure for the solution of (1.1) is concerned, the formal Magnus expansion is just the first step. As can be ascertained at once by inspection, numerical implementation of a Magnus expansion requires the computation of a large number of multivariate integrals over different convex polyhedra. A sine qua non of a viable numerical procedure is an effective and cheap evaluation of such integrals. Fortunately, this can be done by exploiting the fact that the integrand is always of the form L(a(ξ₁), a(ξ₂), ..., a(ξ_m)), where L is a multilinear function. Let c₁, c₂, ..., c_ν be distinct quadrature points in [0, 1], let λ_k ∈ P_{ν−1}[t] be the kth cardinal Lagrange polynomial, and assume that the univariate quadrature formula

$$\int_0^1 f(\xi)\,d\xi \approx \sum_{k=1}^{\nu} b_k f(c_k), \qquad b_k = \int_0^1 \lambda_k(\xi)\,d\xi, \quad k = 1, 2, \ldots, \nu, \eqno(2.9)$$

is of order p ∈ {ν, ν + 1, ..., 2ν}. Every integral in a Magnus expansion is over a convex polyhedron of the general form

$$S_h = \bigl\{(\xi_1, \xi_2, \ldots, \xi_m) : 0 < \xi_1 < h,\; 0 < \xi_j < \xi_{i_j},\; j = 2, 3, \ldots, m\bigr\},$$

where i_j ∈ {1, 2, ..., j − 1}. We approximate

$$\int_{S_h} L\bigl(a(\xi_1), \ldots, a(\xi_m)\bigr)\,d\xi_m \cdots d\xi_1 \approx h^m \sum_{l\in C_\nu^m} b_l\, L\bigl(a(c_{l_1}h), \ldots, a(c_{l_m}h)\bigr),$$

where C_ν^m is the set of all combinations l = (l₁, l₂, ..., l_m) from the set {1, 2, ..., ν} and

$$b_l := \int_{S_1} \prod_{j=1}^{m} \lambda_{l_j}(\xi_j)\,d\xi_m \cdots d\xi_1, \qquad l \in C_\nu^m. \eqno(2.10)$$
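Since the b_l depend only on the nodes and the polytope, they can be computed once and for all. The following Python sketch is ours, not the paper's (lagrange and b_weight are hypothetical names); it evaluates the weights of (2.10) for the simplest polytope, m = 2 with 0 < ξ₂ < ξ₁ < 1, at the ν = 3 Gauss–Legendre nodes.

```python
# Sketch only (not from the paper): the multivariate weights b_l of (2.10) for
# the polytope 0 < xi_2 < xi_1 < 1 (m = 2, i_2 = 1), nu = 3 Gauss-Legendre nodes.
import numpy as np
from numpy.polynomial.legendre import leggauss

nu = 3
x, _ = leggauss(nu)
c = (x + 1.0) / 2.0                       # quadrature points c_1, ..., c_nu on [0, 1]

def lagrange(k):
    """k-th cardinal Lagrange polynomial on the nodes c."""
    def lam(t):
        out = 1.0
        for j, cj in enumerate(c):
            if j != k:
                out = out * (t - cj) / (c[k] - cj)
        return out
    return lam

# b_l = int_0^1 lam_{l1}(xi1) int_0^{xi1} lam_{l2}(xi2) dxi2 dxi1, evaluated with
# a 10-point Gauss rule on each axis (exact here: the integrand is a polynomial).
X, W = leggauss(10)
def b_weight(l1, l2):
    lam1, lam2 = lagrange(l1), lagrange(l2)
    total = 0.0
    for xi1, w1 in zip((X + 1.0) / 2.0, W / 2.0):
        inner = sum(xi1 * wi / 2.0 * lam2(xi1 * (xj + 1.0) / 2.0)
                    for xj, wi in zip(X, W))
        total += w1 * lam1(xi1) * inner
    return total

B = np.array([[b_weight(l1, l2) for l2 in range(nu)] for l1 in range(nu)])
print(B)
print(B.sum())    # = vol(S_1) = 1/2, since the cardinal polynomials sum to 1
```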


An important result of [9] is that the multivariate quadrature formula (2.10) shares the order p of the univariate approximation (2.9). In other words, in terms of function evaluations the cost of all the multivariate quadratures needed to approximate the Magnus expansion to a given order is exactly the same as the cost of a single univariate quadrature formula. The apparent downside of (2.10) is that it requires the evaluation of a very large number of commutators: recall that each L is made out of commutators and, needless to say, the number of combinations increases exponentially with m.

Fortunately, two devices come to our rescue, both based on the rich range of symmetries of the commutator operation. Firstly, given a tree τ, let us designate by τ^c (henceforth called 'the C-tree of τ') the strictly binary tree which is obtained from τ by excising all the vertical edges and fusing their endpoint vertices. We say that two C-trees are equivalent if one can be converted to the other by rotation with respect to a vertical axis. For example, herewith the C-trees corresponding to the only two terms with two commutators:

(the two C-tree diagrams cannot be reproduced here).

Clearly, they can be converted to each other by a single vertical rotation, and this means that for each combination l ∈ C_ν^3 there exists another 3-tuple k ∈ C_ν^3 such that L₁(a(c_{l₁}h), a(c_{l₂}h), a(c_{l₃}h)) = −L₂(a(c_{k₁}h), a(c_{k₂}h), a(c_{k₃}h)), where L₁ and L₂ are the integrands of the two integrals, respectively. In general, it is not the number of trees that matters in numerical calculation but the number of equivalence classes of C-trees [9].

Even more important are the savings that follow by using the Jacobi identity, since many commutators can be expressed as linear combinations of other commutators. This has already been observed in [9], but the most significant result in this area is the important paper of Munthe-Kaas and Owren [14], who employ the theory of graded free Lie algebras to reduce the volume of computation in Magnus expansions and other Lie-group methods.

We conclude this section by describing the order-4 discretized Magnus expansion from [9]. Its detailed derivation draws upon the range of ideas and techniques that have been described above. Thus, to time-step from t_N to t_{N+1} = t_N + h we evaluate the matrix a at the Gaussian points,

$$a_1 = a\Bigl(t_N + \Bigl(\frac12 - \frac{\sqrt3}{6}\Bigr)h\Bigr), \qquad a_2 = a\Bigl(t_N + \Bigl(\frac12 + \frac{\sqrt3}{6}\Bigr)h\Bigr),$$

whence

$$\sigma_{N+1} = \frac12 h(a_1 + a_2) + \frac{\sqrt3}{12}h^2 [a_2, a_1] + \frac{1}{80}h^3 \bigl[a_2 - a_1, [a_2, a_1]\bigr] \eqno(2.11)$$

and y_{N+1} = e^{σ_{N+1}} y_N.
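In code, the step (2.11) amounts to two evaluations of a, one commutator chain and one matrix exponential. The following Python sketch is ours (magnus4_step and comm are hypothetical names); it assumes a(t) returns a matrix in g and borrows scipy's expm as the exponential map.

```python
# Sketch only (not from the paper): one step of the order-4 Magnus scheme (2.11).
import numpy as np
from scipy.linalg import expm

def comm(x, y):
    return x @ y - y @ x

def magnus4_step(a, tn, h, yn):
    c = np.sqrt(3.0) / 6.0
    a1 = a(tn + (0.5 - c) * h)            # Gaussian points of (2.11)
    a2 = a(tn + (0.5 + c) * h)
    sigma = (0.5 * h * (a1 + a2)
             + np.sqrt(3.0) / 12.0 * h**2 * comm(a2, a1)
             + h**3 / 80.0 * comm(a2 - a1, comm(a2, a1)))  # removable: see Section 3
    return expm(sigma) @ yn

# Usage on an SL2(R) flow (the equation later labelled (5.1)): since sigma is
# traceless, the discrete flow keeps det y_n = 1 exactly.
a = lambda t: np.array([[0.0, 1.0], [-1.0 / (1.0 + t)**2, 0.0]])
y, t, h = np.eye(2), 0.0, 0.1
for _ in range(100):
    y, t = magnus4_step(a, t, h, y), t + h
print(np.linalg.det(y))                   # 1 up to roundoff
```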


3. Trees, order and time symmetry

The first step in implementing the Magnus expansion (2.4) for either numerical or perturbative ends is a truncation of the infinite series. It has been proved in [9] that, except for m = 0, τ ∈ T_m implies ∫₀ᵗ H_τ(ξ) dξ = O(t^{β+1}), where β ≥ m + 1. Therefore, for every p ≥ 2, the function

$$\sigma_p(t) := \sum_{m=0}^{p-2} \sum_{\tau\in\mathrm{T}_m} \alpha(\tau) \int_0^t H_\tau(\xi)\,d\xi \eqno(3.1)$$

approximates σ to order p, i.e., σ_p(t) − σ(t) = O(t^{p+1}). Our claim is that a similar goal can be attained, in general, with far fewer terms.

Given τ ∈ T_m for some m ∈ Z₊, we define the power of τ, denoted by pwr τ, as the least integer k such that H_τ(t) = O(t^k) for every sufficiently smooth matrix function a. We designate by F_k the set of all the trees τ of power k ≥ 0. Clearly pwr(•) = 0, and it is easy to verify by direct calculation that the single commutator tree, with H_τ(t) = ∫₀ᵗ [a(ξ), a(t)] dξ, has power 2.

With little more effort, we can completely elucidate the power of every tree τ from the recursive rules that have been employed in its construction in the first place.

Lemma 1. Suppose that τ is obtained by joining the trees τ₁ and τ₂ via the operation (2.6). Then

$$\operatorname{pwr}\tau = \begin{cases} \operatorname{pwr}\tau_1 + \operatorname{pwr}\tau_2 + 1, & \tau_1 \neq \tau_2, \\ \operatorname{pwr}\tau_1 + \operatorname{pwr}\tau_2 + 2, & \tau_1 = \tau_2. \end{cases} \eqno(3.2)$$

Proof. Suppose that

$$H_{\tau_l}(t) = c_l t^{p_l} + d_l t^{p_l+1} + O\bigl(t^{p_l+2}\bigr),$$

where p_l = pwr τ_l, l = 1, 2. Then

$$\begin{aligned} H_\tau(t) &= \Bigl[\int_0^t H_{\tau_1}(\xi)\,d\xi,\; H_{\tau_2}(t)\Bigr] \\ &= \Bigl[\frac{1}{p_1+1}c_1 t^{p_1+1} + \frac{1}{p_1+2}d_1 t^{p_1+2} + \cdots,\; c_2 t^{p_2} + d_2 t^{p_2+1} + \cdots\Bigr] \\ &= \frac{1}{p_1+1}[c_1, c_2]\, t^{p_1+p_2+1} + \Bigl(\frac{1}{p_1+1}[c_1, d_2] - \frac{1}{p_1+2}[c_2, d_1]\Bigr) t^{p_1+p_2+2} + \cdots. \end{aligned}$$

The required result follows, since τ₁ = τ₂ implies c₁ = c₂. ✷

The first few sets F_k are displayed in Table 1. It is clear (and the point will be pursued further in the sequel) that truncating the Magnus expansion by power, rather than by the number of vertices, leads to significant savings.


Table 1
The sets F_k. Note that F₁ = ∅. (The tree diagrams cannot be reproduced here: F₀ consists of the trivial tree, F₂ of a single tree, F₃ of two trees and F₄ of four trees, in accordance with the values of ϕ_k in Table 2.)

Similarly to Section 2, in the context of quadrature it is convenient to consider equivalence classes of C-trees in F_k. Thus, we denote by F^c_k the set of all the equivalence classes of C-trees in F_k. It follows that F^c_k, k = 0, 2, 3, 4, 5, are singletons, while F^c₆ consists of three equivalence classes (choosing appropriate representatives, whose diagrams are likewise omitted).

Our next step is to investigate the size of the sets F_k and F^c_k, since this sheds light on the gain in efficiency that we can expect when using the canonical expansion

$$\sigma_p(t) = \sum_{k=0}^{p-1} \sum_{\tau\in F_k} \alpha(\tau) \int_0^t H_\tau(\xi)\,d\xi \eqno(3.3)$$

in preference to (3.1). Let ϕ_k := #F_k and ϕ^c_k := #F^c_k, k ∈ Z₊. Given τ ∈ F_k, k ∈ N, it must have been obtained either from τ₁ ∈ F_{k₁} and τ₂ ∈ F_{k₂}, τ₁ ≠ τ₂, such that k₁ + k₂ = k − 1, or from τ₁ = τ₂ ∈ F_{k₁} with 2k₁ = k − 2 (in the latter case k must be even). This gives the following recurrences:

$$\varphi_0 = 1, \qquad \varphi_{2k-1} = \sum_{l=0}^{2k-2} \varphi_l \varphi_{2k-2-l} - \varphi_{k-1}, \qquad \varphi_{2k} = \sum_{l=0}^{2k-1} \varphi_l \varphi_{2k-1-l} + \varphi_{k-1}, \qquad k \in \mathbb{N}.$$

Letting Φ(z) = Σ_{k=0}^∞ ϕ_k z^k be the generating function of the sequence {ϕ_k}_{k=0}^∞, we can obtain from the above the functional equation

$$\Phi(z) = 1 + z\bigl(\Phi(z)\bigr)^2 + z(z-1)\Phi\bigl(z^2\bigr).$$

The latter equation has not proved helpful in elucidating the asymptotic behaviour of the sequence {ϕ_k}_{k=0}^∞. Instead, we have resorted to numerical means, which seem to indicate that

$$\limsup_{k\to\infty} \varphi_k^{1/k} \approx 3.1167417747. \eqno(3.4)$$

This should be compared with θ_m = #T_m, which obeys lim sup_{m→∞} θ_m^{1/m} = 4 [9]. Similarly to θ_m and ϕ_k, we let θ^c_m and ϕ^c_k denote the cardinality of the sets T^c_m and F^c_k, respectively. The sequence {θ^c_m}_{m=0}^∞ has been identified in [9] with the Wedderburn–Etherington numbers, and it is known that lim sup_{m→∞} (θ^c_m)^{1/m} ≈ 2.483253536. To investigate the asymptotic behaviour of the sequence {ϕ^c_k}_{k=0}^∞ we again form the recurrence relations and the generating function Φ^c(z) := Σ_{k=0}^∞ ϕ^c_k z^k.

We can obtain an element in F^c_{2s} in one of two ways. Firstly, we can combine τ₁ ∈ F^c_l with τ₂ ∈ F^c_{2s−l−1} for some 0 ≤ l ≤ s − 1. Secondly, we can obtain an element in F^c_{2s} by combining each element in F^c_{s−1} with itself. Altogether, we have

$$\varphi^c_{2s} = \sum_{l=0}^{s-1} \varphi^c_l \varphi^c_{2s-l-1} + \varphi^c_{s-1} = \frac12 \sum_{l=0}^{2s-1} \varphi^c_l \varphi^c_{2s-l-1} + \varphi^c_{s-1}, \qquad s \in \mathbb{N}.$$

Likewise, for F^c_{2s+1}, we combine τ₁ ∈ F^c_l with τ₂ ∈ F^c_{2s−l}, where 0 ≤ l ≤ s − 1. Moreover, when l = s and both τ₁ and τ₂ lie in F^c_s, we have ½(ϕ^c_s − 1)ϕ^c_s combinations of distinct terms (as well as ϕ^c_s combinations of identical terms, which are counted in F^c_{2s+2}). Therefore,

$$\varphi^c_{2s+1} = \frac12 \sum_{l=0}^{2s} \varphi^c_l \varphi^c_{2s-l} - \frac12 \varphi^c_s.$$

Brief manipulation affirms the identity

$$\Phi^c(z) = 1 + \frac12 z\bigl(\Phi^c(z)\bigr)^2 + \Bigl(z^2 - \frac12 z\Bigr)\Phi^c\bigl(z^2\bigr). \eqno(3.5)$$

Lemma 2. It is true that

$$\limsup_{k\to\infty} \bigl(\varphi^c_k\bigr)^{1/k} = 2. \eqno(3.6)$$

Proof. We commence by noting that Φ^c has a finite radius r ≥ 0 of convergence about the origin, since it diverges for z = 1. Moreover, r ≥ ¼, because Σ_{i=0}^{k} ϕ^c_i ≤ Σ_{i=0}^{k} θ_i and Σ_{m=0}^{∞} θ_m z^m has the radius of convergence ¼ [9].

Since ϕ^c_k ≥ 0, it follows that for every |z| < r

$$\bigl|\Phi^c(z)\bigr| \le \sum_{k=0}^{\infty} \varphi^c_k |z|^k = \Phi^c\bigl(|z|\bigr).$$

Therefore |Φ^c(z)| ≤ Φ^c(½), |z| < ½, provided that Φ^c is summable at ½. Letting z = ½ in (3.5), we readily obtain Φ^c(½) = 2. Consequently, r ≥ ½.

Suppose next that r exceeds ½. Then z = ½ is within the domain of convergence and the derivative of Φ^c at ½ is well-defined. Differentiating the functional equation yields

$$(\Phi^c)'(z) = \frac12\bigl(\Phi^c(z)\bigr)^2 + z\,\Phi^c(z)\,(\Phi^c)'(z) + \Bigl(2z - \frac12\Bigr)\Phi^c\bigl(z^2\bigr) + 2z\Bigl(z^2 - \frac12 z\Bigr)(\Phi^c)'\bigl(z^2\bigr)$$

and, letting z = ½, we obtain Φ^c(¼) = −4. This is clearly impossible, since the power series of Φ^c has nonnegative coefficients. Therefore it is impossible for r to exceed ½ and we deduce that this is the radius of convergence of the series. The lemma follows from the root test for convergence, since

$$2 = \frac{1}{r} = \limsup_{k\to\infty} \bigl(\varphi^c_k\bigr)^{1/k}. \qquad ✷$$

Table 2 displays the numbers θ_m, θ^c_m, ϕ_m and ϕ^c_m, and it is clear that the overall number of trees in the expansion is reduced a great deal by opting for the canonical expansion (3.3) in place of (2.4), whether with explicit integrals or with quadrature formulae. The canonical expansion (3.3) has another important and most welcome characteristic: as we are about to prove, it is time symmetric.

Table 2
The cardinality of different sets of binary trees

   m     θ_m      ϕ_m     θ^c_m    ϕ^c_m
   0     1        1       1        1
   1     1        0       1        0
   2     2        1       1        1
   3     5        2       2        1
   4     14       4       3        1
   5     42       8       6        1
   6     132      21      11       3
   7     429      52      23       4
   8     1430     138     46       7
   9     4862     362     98       11
  10     16796    980     207      20
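All four sequences in Table 2 can be regenerated from recurrences. Here is a Python sketch of ours, with the assumption, consistent with [9] and the table, that θ_m is the mth Catalan number and θ^c_m the (m+1)st Wedderburn–Etherington number, computed from their standard recurrences:

```python
# Sketch only (not from the paper): reproducing Table 2.
N = 11

theta = [1]                                    # Catalan numbers theta_m
for m in range(1, N):
    theta.append(sum(theta[l] * theta[m - 1 - l] for l in range(m)))

phi = [1]                                      # the recurrence for phi_k above
for k in range(1, N):
    conv = sum(phi[l] * phi[k - 1 - l] for l in range(k))
    phi.append(conv + (phi[k // 2 - 1] if k % 2 == 0 else -phi[(k - 1) // 2]))

wec = [1, 1]                                   # Wedderburn-Etherington u_1, u_2, ...
for n in range(3, N + 2):
    m = n // 2
    if n % 2 == 1:                             # u_{2m+1} = sum_{i=1}^m u_i u_{2m+1-i}
        wec.append(sum(wec[i - 1] * wec[n - i - 1] for i in range(1, m + 1)))
    else:                                      # u_{2m} = sum_{i<m} u_i u_{2m-i} + u_m(u_m+1)/2
        wec.append(sum(wec[i - 1] * wec[n - i - 1] for i in range(1, m))
                   + wec[m - 1] * (wec[m - 1] + 1) // 2)
thetac = wec[:N]                               # theta^c_m = u_{m+1}

phic = [1]                                     # the recurrence for phi^c_k above
for k in range(1, N):
    conv = sum(phic[l] * phic[k - 1 - l] for l in range(k))
    phic.append(conv // 2 + phic[k // 2 - 1] if k % 2 == 0
                else (conv - phic[(k - 1) // 2]) // 2)

for m in range(N):
    print(m, theta[m], phi[m], thetac[m], phic[m])   # matches Table 2
```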

A dynamical system

$$x(t) = X_{t-t_0}\, x(t_0), \qquad t \ge 0,$$

is said to be time symmetric if

$$X_t \circ X_{-t} = \mathrm{Id}, \qquad t \ge 0, \eqno(3.7)$$

in other words when, for any t₀ < t₁, the integration from t₀ to t₁, followed by integration from t₁ back to t₀, recovers exactly the initial value [19]. Time symmetry is important in the context of symplectic integration [18] but, within the present context, its main importance is that it implies that X_t = e^{F_t}, where F_t can be expanded in odd powers of t [5]. For completeness, we sketch the proof of this classical result. Thus, assuming arbitrarily high smoothness, we can expand

$$X_t = \sum_{l=0}^{\infty} x_l (tD)^l,$$

where D is the differential operator; this is equivalent to the statement that

$$X_t\, g(t_0) = \sum_{l=0}^{\infty} x_l t^l g^{(l)}(t_0)$$

for every analytic function g and for every t in the domain of analyticity. We stipulate that the generating function

$$x(z) = \sum_{l=0}^{\infty} x_l z^l$$

is analytic for sufficiently small |z| and that x₀ = 1, the latter being necessary for consistency. It is now trivial to verify that

$$X_t \circ X_{-t} = \sum_{l=0}^{\infty} x_l (tD)^l \sum_{j=0}^{\infty} x_j (-tD)^j = \sum_{l=0}^{\infty} \Bigl(\sum_{j=0}^{l} (-1)^j x_j x_{l-j}\Bigr) (tD)^l.$$

Therefore, (3.7) is equivalent to

$$\sum_{j=0}^{l} (-1)^j x_j x_{l-j} = 0, \qquad l \in \mathbb{N},$$

and this, in turn, is equivalent to x(z)x(−z) ≡ 1. Since x(0) = 1, the analytic function x is nonzero in a sufficiently small neighbourhood of the origin, where f(z) := log x(z) is well-defined. The identity x(z)x(−z) ≡ 1 yields f(z) + f(−z) ≡ 0; we deduce that f is analytic and odd, therefore

$$f(z) = \sum_{l=0}^{\infty} f_l z^{2l+1}.$$

By construction, f is the generating function of F_t, hence we conclude that

$$F_t = \sum_{l=0}^{\infty} f_l (tD)^{2l+1}.$$


Theorem 3. Let X_t y₀ = e^{σ_p(t)} y₀, where σ_p is the truncated canonical Magnus expansion (3.3). Then X_t is time symmetric.

Proof. To persuade ourselves that the assertion of the theorem makes sense, we let

$$\sigma(t) = \int_0^t a(\xi)\,d\xi - \frac12 \int_0^t\!\!\int_0^{\xi_1} \bigl[a(\xi_2), a(\xi_1)\bigr]\,d\xi_2\,d\xi_1 + \cdots$$

be the Magnus expansion corresponding to Eq. (1.1) and denote by ρ the Magnus expansion of the equation

$$z' = -a(t_1 - t)z, \qquad t \ge 0, \quad z(0) = y(t_1). \eqno(3.8)$$

Brief calculation affirms that

$$\rho(t) = -\int_0^t a(t_1 - \xi)\,d\xi - \frac12 \int_0^t\!\!\int_0^{\xi_1} \bigl[a(t_1-\xi_2), a(t_1-\xi_1)\bigr]\,d\xi_2\,d\xi_1 + \cdots,$$

therefore,

$$\rho(t_1) = -\int_0^{t_1} a(\xi)\,d\xi - \frac12 \int_0^{t_1}\!\!\int_{\xi_1}^{t_1} \bigl[a(\xi_2), a(\xi_1)\bigr]\,d\xi_2\,d\xi_1 + \cdots.$$

According to (3.7), time symmetry is equivalent to the statement that, for every t₁ ≥ 0, integrating (1.1) from t = 0 to t = t₁, followed by the integration of (3.8) in the same range, must bring us back to the initial condition. Since the exact differential equation is time symmetric, it is true that

$$\sigma(t_1) + \rho(t_1) \equiv 0. \eqno(3.9)$$

This can be verified at once for the first few terms, since

$$\int_0^{t_1}\!\!\int_0^{t_1} \bigl[a(\xi_2), a(\xi_1)\bigr]\,d\xi_2\,d\xi_1 \equiv 0,$$

and we may deduce that the truncated canonical expansion is time symmetric for p = 0, 1. More importantly, our analysis points out a strategy for the proof of the theorem. The identity (3.9) is always true for sufficiently small t₁ > 0. Let

$$\gamma_l(t) := \sum_{\tau\in F_l} \alpha(\tau) \int_0^t H_\tau(\xi)\,d\xi, \qquad l \in \mathbb{Z}_+,$$

and denote by δ_l(t) the corresponding terms for ρ. Hence

$$\sigma_p = \sum_{l=0}^{p} \gamma_l, \qquad \rho_p = \sum_{l=0}^{p} \delta_l,$$

and γ_l(t), δ_l(t) = O(t^l). We have already proved that γ_l(t) + δ_l(t) ≡ 0, l = 0, 1. Suppose that

$$\gamma_l(t) + \delta_l(t) \equiv 0, \qquad l = 0, 1, \ldots, p - 1,$$


and assume that, given a matrix function a, there exists p̃ ≥ p such that

$$\gamma_p(t) + \delta_p(t) = ct^{\tilde p} + O\bigl(t^{\tilde p+1}\bigr), \qquad c \neq 0.$$

Suppose first that p̃ = p. Then we reach a contradiction, since

$$\sum_{l=0}^{\infty} \bigl(\gamma_l(t) + \delta_l(t)\bigr) \equiv 0, \qquad \sum_{l=0}^{p} \bigl(\gamma_l(t) + \delta_l(t)\bigr) = ct^{p} + O\bigl(t^{p+1}\bigr)$$

implies that

$$\sum_{l=p+1}^{\infty} \bigl(\gamma_l(t) + \delta_l(t)\bigr) = -ct^{p} + O\bigl(t^{p+1}\bigr),$$

which is impossible, since every term in the sum on the left is O(t^{p+1}). Assume thus that p̃ > p. It is easy to verify that there exists q ≥ p such that

$$\sum_{l=p}^{q} \bigl(\gamma_l(t) + \delta_l(t)\bigr) = e_1 t^{q} + O\bigl(t^{q+1}\bigr), \qquad e_1 \neq 0.$$

For example, if p̃ = p + 1 we let q = p̃; if p̃ = p + 2 and γ_{p+1}(t) + δ_{p+1}(t) = O(t^{p̂}), then q = min{p̂, p̃}; and so on. This is a contradiction, since, again,

$$\sum_{l=0}^{\infty} \bigl(\gamma_l(t) + \delta_l(t)\bigr) \equiv 0, \qquad \sum_{l=0}^{q} \bigl(\gamma_l(t) + \delta_l(t)\bigr) = e_1 t^{q} + O\bigl(t^{q+1}\bigr),$$

imply

$$\sum_{l=q+1}^{\infty} \bigl(\gamma_l(t) + \delta_l(t)\bigr) = -e_1 t^{q} + O\bigl(t^{q+1}\bigr),$$

at variance with the definition of γ_l and δ_l. We therefore deduce that no such p̃ exists, hence γ_l + δ_l ≡ 0 for all l ∈ Z₊ and the truncated canonical Magnus expansion is time symmetric. ✷

Theorem 4. The order of the truncated canonical Magnus expansion (3.3) is 2⌊(p + 1)/2⌋.

Proof. Because of time symmetry, both σ(t) and σ_p(t) can be expanded in odd powers of t. Therefore

$$\sigma_p(t) = \sigma(t) + O\bigl(t^{\hat p+1}\bigr),$$

where p̂ is even. The theorem follows since, by construction, p̂ ≥ p. ✷

It is important to bear in mind that we did not prove that every term ∫₀ᵗ H_τ(ξ) dξ such that τ ∈ F_{2s−1}, say, is necessarily O(t^{2s+1}): it is the linear combination of terms that increases the order! For example, let us consider the case p = 3. There are exactly two terms in F₃ and it is possible to show that

$$I_1(t) := \int_0^t\!\!\int_0^{\xi_1}\!\!\int_0^{\xi_2} \bigl[\bigl[a(\xi_3), a(\xi_2)\bigr], a(\xi_1)\bigr]\,d\xi_3\,d\xi_2\,d\xi_1 = -\frac{1}{24}\bigl[[\beta, \alpha], \alpha\bigr]t^4 + O\bigl(t^5\bigr),$$

$$I_2(t) := \int_0^t\!\!\int_0^{\xi_1}\!\!\int_0^{\xi_1} \bigl[a(\xi_3), \bigl[a(\xi_2), a(\xi_1)\bigr]\bigr]\,d\xi_3\,d\xi_2\,d\xi_1 = \frac{1}{8}\bigl[[\beta, \alpha], \alpha\bigr]t^4 + O\bigl(t^5\bigr),$$

where a(t) = α + βt + O(t²) [10]. Therefore

$$\frac14 I_1(t) + \frac{1}{12} I_2(t) = O\bigl(t^5\bigr).$$

Note that Theorems 3 and 4 leave the integrals intact, while numerical implementation of Magnus expansions requires the replacement of integrals by quadrature. However, it is trivial to ascertain that the multivariate quadrature methods from Section 2 are time symmetric, provided that the quadrature points c₁, c₂, ..., c_ν are symmetric with respect to the point ½. This corresponds to their two natural choices, as either Gauss–Legendre or Gauss–Lobatto points.

An immediate consequence of Theorem 4 and the last paragraph is that the last term in (2.11) can be removed without affecting the order. In other words, the scheme

$$\sigma_{n+1} = \frac12 h(a_1 + a_2) + \frac{\sqrt3}{12}h^2 [a_2, a_1]$$

is of order 4. Time symmetry has proved itself important in the context of symplectic integration because it leads to superior long-term behaviour of the numerical solver [18]. It makes sense to suspect that it plays a similar role in the context of Magnus series, but this question has not yet been investigated.
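Time symmetry can also be observed directly in floating-point arithmetic. In the following Python sketch (ours; magnus4_step is a hypothetical name) a forward step of size h with the two-term order-4 scheme above, followed by a step of size −h, recovers the initial value to roundoff even for a deliberately coarse step:

```python
# Sketch only (not from the paper): a numerical check of time symmetry (3.7).
import numpy as np
from scipy.linalg import expm

def comm(x, y):
    return x @ y - y @ x

def magnus4_step(a, tn, h, yn):               # the two-term order-4 scheme above
    c = np.sqrt(3.0) / 6.0
    a1, a2 = a(tn + (0.5 - c) * h), a(tn + (0.5 + c) * h)
    sigma = 0.5 * h * (a1 + a2) + np.sqrt(3.0) / 12.0 * h**2 * comm(a2, a1)
    return expm(sigma) @ yn

a = lambda t: np.array([[0.0, 1.0], [-1.0 / (1.0 + t)**2, 0.0]])
y0 = np.eye(2)
h = 0.5                                       # deliberately coarse step
y1 = magnus4_step(a, 0.0, h, y0)              # integrate 0 -> h
y2 = magnus4_step(a, h, -h, y1)               # integrate h -> 0
print(np.linalg.norm(y2 - y0))                # roundoff-sized, for any h
```

The Gauss–Legendre nodes are symmetric with respect to ½, so the backward step swaps a₁ and a₂ and negates σ exactly, as the theory above predicts.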

4. An order-6 Magnus expansion

According to Theorem 4, we need to let p = 5 in the canonical expansion (3.3) to obtain order 6. According to Table 2, this results in eight trees, except that (adding an extra root to emphasize that we are dealing with ∫₀ᵗ H_τ(ξ) dξ, rather than with H_τ(t)) the coefficient of one of the trees in F₄ is zero, its weight being proportional to B₃ = 0. Mildly abusing notation, we write the truncated canonical Magnus expansion corresponding to the order-6 method in the form

σ₅(t) = [the linear combination (4.1) of the seven remaining tree integrals; the tree diagrams cannot be reproduced here. Its weights are 1 for the trivial tree, −1/2 for the tree in F₂, 1/4 and 1/12 for the two trees in F₃, and −1/8, −1/24 and −1/24 for the three remaining trees in F₄.] (4.1)


Requiring order 6, we employ Gauss–Legendre quadrature with ν = 3, therefore

$$c_1 = \frac12 - \frac{\sqrt{15}}{10}, \qquad c_2 = \frac12, \qquad c_3 = \frac12 + \frac{\sqrt{15}}{10}.$$

Letting a_l = a(t_N + c_l h), l = 1, 2, 3, we commence by evaluating the first integral,

$$\int_0^h a(\xi)\,d\xi \approx h\Bigl(\frac{5}{18}a_1 + \frac{4}{9}a_2 + \frac{5}{18}a_3\Bigr). \eqno(4.2)$$

Likewise, we use the approach of Section 2 to approximate the second integral,

$$\int_0^h\!\!\int_0^{\xi_1} \bigl[a(\xi_2), a(\xi_1)\bigr]\,d\xi_2\,d\xi_1 \approx \sqrt{15}\,h^2\Bigl(\frac{1}{27}[a_1, a_2] + \frac{1}{54}[a_1, a_3] + \frac{1}{27}[a_2, a_3]\Bigr). \eqno(4.3)$$

In principle, we could have continued with the multivariate quadrature formulae from [9] and Section 2 for the subsequent integrals, accompanied if and when necessary by the Jacobi identity. Thus, for example, the combination of the two F₃ integrals, weighted by 1/4 and 1/12 as in (4.1), is approximated by

$$\begin{aligned} h^3\Bigl(&\frac{47}{13608}\bigl[a_1, [a_1, a_2]\bigr] + \frac{5}{3024}\bigl[a_1, [a_1, a_3]\bigr] + \frac{97}{13608}\bigl[a_1, [a_2, a_3]\bigr] - \frac{19}{3402}\bigl[a_2, [a_1, a_2]\bigr] + \frac{19}{3402}\bigl[a_2, [a_2, a_3]\bigr] \\ &- \frac{97}{13608}\bigl[a_3, [a_1, a_2]\bigr] - \frac{5}{3024}\bigl[a_3, [a_1, a_3]\bigr] - \frac{47}{13608}\bigl[a_3, [a_2, a_3]\bigr]\Bigr). \end{aligned} \eqno(4.4)$$

This, however, is suboptimal and we can do substantially better in terms of commutator evaluations by adopting a different approach. Suppose that

$$a(t) = \alpha + \beta t + \gamma t^2 + \delta t^3 + \cdots$$

and let

$$\begin{gathered} \theta = [\alpha, \beta], \qquad \eta = [\alpha, \gamma], \qquad \mu = [\alpha, \delta], \qquad \psi = [\beta, \gamma], \\ \rho = [\alpha, \theta], \qquad \omega = [\alpha, \eta], \qquad \chi = [\beta, \theta], \qquad \zeta = [\alpha, \mu], \qquad \tau = [\alpha, \psi], \qquad \sigma = [\beta, \eta], \qquad \xi = [\gamma, \theta]. \end{gathered}$$

We substitute the expansion of a into the different terms. In the first instance we evaluate the single F₂ term,

$$\int_0^t \bigl[a(\xi), a(t)\bigr]\,d\xi = \frac12\theta t^2 + \frac23\eta t^3 + \Bigl(\frac34\mu + \frac16\psi\Bigr)t^4 + \cdots.$$

This is a starting point in expanding the two terms in F₃, a task assisted by the techniques in [10]: the first tree integral equals

$$-\frac{1}{24}\rho h^4 - \Bigl(\frac{1}{30}\omega + \frac{1}{30}\chi\Bigr)h^5 - \Bigl(\frac{1}{40}\zeta + \frac{1}{180}\tau + \frac{1}{36}\sigma + \frac{1}{36}\xi\Bigr)h^6 - \cdots,$$

while the second equals

$$\frac18\rho h^4 + \Bigl(\frac{2}{15}\omega + \frac{1}{20}\chi\Bigr)h^5 + \Bigl(\frac18\zeta + \frac{1}{36}\tau + \frac{1}{18}\sigma + \frac{1}{36}\xi\Bigr)h^6 + \cdots.$$

Adding up with the weights 1/4 and 1/12 from (4.1), we obtain

$$\Bigl(\frac{1}{360}\omega - \frac{1}{240}\chi\Bigr)h^5 + \Bigl(\frac{1}{240}\zeta + \frac{1}{1080}\tau - \frac{1}{432}\sigma - \frac{1}{216}\xi\Bigr)h^6 + \cdots.$$

We aim to construct a quadrature formula of the form

$$h^3 \sum_{(k,l,j)\in B} b_{k,l,j}\,\bigl[a_k, [a_l, a_j]\bigr], \eqno(4.5)$$

where B is a set of triplets of indices. Thus, for example, in (4.4) we have

$$B = \bigl\{(1,1,2), (1,1,3), (1,2,3), (2,1,2), (2,2,3), (3,1,2), (3,1,3), (3,2,3)\bigr\},$$

eight terms altogether. In the sequel we strive to find a smaller set B. To this end we expand

$$\begin{aligned} \bigl[a_k, [a_l, a_j]\bigr] &= \bigl[\alpha + c_k h\beta + c_k^2 h^2\gamma,\; \bigl[\alpha + c_l h\beta + c_l^2 h^2\gamma + c_l^3 h^3\delta,\; \alpha + c_j h\beta + c_j^2 h^2\gamma + c_j^3 h^3\delta\bigr]\bigr] + \cdots \\ &= \bigl[\alpha + c_k h\beta + c_k^2 h^2\gamma,\; h(c_j - c_l)\theta + h^2\bigl(c_j^2 - c_l^2\bigr)\eta + h^3\bigl(\bigl(c_j^3 - c_l^3\bigr)\mu + c_j c_l (c_j - c_l)\psi\bigr)\bigr] + \cdots \\ &= h(c_j - c_l)\rho + h^2\bigl(\bigl(c_j^2 - c_l^2\bigr)\omega + c_k(c_j - c_l)\chi\bigr) \\ &\quad + h^3\bigl(\bigl(c_j^3 - c_l^3\bigr)\zeta + c_j c_l (c_j - c_l)\tau + c_k\bigl(c_j^2 - c_l^2\bigr)\sigma + c_k^2 (c_j - c_l)\xi\bigr) + \cdots. \end{aligned}$$

Therefore, comparing coefficients, the order-6 conditions are

$$\begin{gathered} \sum_B \tilde b_{k,l,j} = 0, \qquad \sum_B (c_l + c_j)\,\tilde b_{k,l,j} = \frac{1}{360}, \qquad \sum_B c_k\,\tilde b_{k,l,j} = -\frac{1}{240}, \\ \sum_B \bigl(c_l^2 + c_l c_j + c_j^2\bigr)\,\tilde b_{k,l,j} = \frac{1}{240}, \qquad \sum_B c_j c_l\,\tilde b_{k,l,j} = \frac{1}{1080}, \\ \sum_B c_k (c_l + c_j)\,\tilde b_{k,l,j} = -\frac{1}{432}, \qquad \sum_B c_k^2\,\tilde b_{k,l,j} = -\frac{1}{216}, \end{gathered}$$

where b̃_{k,l,j} = (c_j − c_l) b_{k,l,j}.

On the face of it, we have seven equations and need seven terms in B to satisfy them: this would represent a saving of just one term over (4.4). This estimate, however, is pessimistic, since it disregards symmetries in the Gauss–Legendre nodes. A MATLAB and Maple search has resulted in what appears to be the minimal set,

$$B = \bigl\{(1,1,2), (1,2,3), (3,1,2), (3,2,3)\bigr\},$$

with

$$b_{1,1,2} = \frac{1}{432}, \qquad b_{1,2,3} = \frac{5}{432}, \qquad b_{3,1,2} = -\frac{5}{432}, \qquad b_{3,2,3} = -\frac{1}{432}.$$

We have thus obtained the 6th-order quadrature

$$\frac{1}{432}h^3\bigl(\bigl[a_1, [a_1, a_2]\bigr] + 5\bigl[a_1, [a_2, a_3]\bigr] - 5\bigl[a_3, [a_1, a_2]\bigr] - \bigl[a_3, [a_2, a_3]\bigr]\bigr), \eqno(4.6)$$

which uses just four terms. Finally, we need to approximate the integrals corresponding to the linear combination of the three remaining terms in F₄. We again expand each integral in a power series:

$$-\frac{1}{40}[\alpha, \rho]h^5 - \Bigl(\frac{1}{120}[\alpha, \chi] + \frac{1}{45}[\alpha, \omega] + \frac{1}{48}[\beta, \rho]\Bigr)h^6 + \cdots,$$

$$\frac{1}{120}[\alpha, \rho]h^5 + \Bigl(\frac{1}{180}[\alpha, \chi] + \frac{1}{180}[\alpha, \omega] + \frac{1}{144}[\beta, \rho]\Bigr)h^6 + \cdots,$$

$$-\frac{1}{30}[\alpha, \rho]h^5 - \Bigl(\frac{1}{36}[\alpha, \chi] + \frac{1}{36}[\alpha, \omega] + \frac{1}{72}[\beta, \rho]\Bigr)h^6 + \cdots.$$

Consequently, weighting the three integrals by −1/24, −1/8 and −1/24 respectively, as in (4.1), and adding up, we obtain

$$\frac{1}{720}[\alpha, \rho]h^5 + \Bigl(\frac{7}{8640}[\alpha, \chi] + \frac{1}{720}[\alpha, \omega] + \frac{1}{1728}[\beta, \rho]\Bigr)h^6 + \cdots.$$


We seek a quadrature rule of the form

$$h^4 \sum_{(k,l,j,i)\in A} b_{k,l,j,i}\,\bigl[a_k, \bigl[a_l, [a_j, a_i]\bigr]\bigr]. \eqno(4.7)$$

Since

$$\begin{aligned} \bigl[a_k, \bigl[a_l, [a_j, a_i]\bigr]\bigr] &= \bigl[\alpha + c_k h\beta,\; \bigl[\alpha + c_l h\beta,\; \bigl[\alpha + c_j h\beta + c_j^2 h^2\gamma,\; \alpha + c_i h\beta + c_i^2 h^2\gamma\bigr]\bigr]\bigr] + \cdots \\ &= \bigl[\alpha + c_k h\beta,\; \bigl[\alpha + c_l h\beta,\; h(c_i - c_j)\theta + h^2\bigl(c_i^2 - c_j^2\bigr)\eta + \cdots\bigr]\bigr] \\ &= \bigl[\alpha + c_k h\beta,\; h(c_i - c_j)\rho + h^2\bigl(\bigl(c_i^2 - c_j^2\bigr)\omega + c_l(c_i - c_j)\chi\bigr) + \cdots\bigr] \\ &= h(c_i - c_j)[\alpha, \rho] + h^2\bigl(c_l(c_i - c_j)[\alpha, \chi] + \bigl(c_i^2 - c_j^2\bigr)[\alpha, \omega] + c_k(c_i - c_j)[\beta, \rho]\bigr) + \cdots, \end{aligned}$$

we obtain the order-6 conditions

$$\sum_A \tilde b_{k,l,j,i} = \frac{1}{720}, \qquad \sum_A (c_i + c_j)\,\tilde b_{k,l,j,i} = \frac{1}{720}, \qquad \sum_A c_l\,\tilde b_{k,l,j,i} = \frac{7}{8640}, \qquad \sum_A c_k\,\tilde b_{k,l,j,i} = \frac{1}{1728},$$

where b̃_{k,l,j,i} = (c_i − c_j) b_{k,l,j,i}.

Remarkably, they can be satisfied by choosing just two terms,

$$A = \bigl\{(3,1,1,3), (1,3,1,3)\bigr\},$$

with the weights

$$b_{3,1,1,3} = -\frac{1}{5184} + \frac{\sqrt{15}}{4320}, \qquad b_{1,3,1,3} = \frac{1}{5184} + \frac{\sqrt{15}}{4320}.$$

Perhaps even more remarkably, the Jacobi identity (2.1) gives

$$\bigl[a_3, \bigl[a_1, [a_1, a_3]\bigr]\bigr] + \bigl[a_1, \bigl[[a_1, a_3], a_3\bigr]\bigr] + \bigl[[a_1, a_3], [a_3, a_1]\bigr] = 0;$$

therefore, [[a₁, a₃], [a₃, a₁]] = 0 implies that

$$\bigl[a_3, \bigl[a_1, [a_1, a_3]\bigr]\bigr] = \bigl[a_1, \bigl[a_3, [a_1, a_3]\bigr]\bigr].$$

We conclude that the order-6 conditions can be satisfied by using just a single term in (4.7), the combination of the three F₄ integrals (with the weights −1/24, −1/8 and −1/24) being approximated by

$$\frac{\sqrt{15}}{2160}\,h^4\,\bigl[a_1, \bigl[a_3, [a_1, a_3]\bigr]\bigr]. \eqno(4.8)$$

Totting up (4.2), (4.3), (4.6) and (4.8), we obtain the order-6 method

$$\begin{aligned} \sigma_{n+1} = {}& \frac{1}{18}h(5a_1 + 8a_2 + 5a_3) - \frac{\sqrt{15}}{108}h^2\bigl(2[a_1, a_2] + [a_1, a_3] + 2[a_2, a_3]\bigr) \\ &+ \frac{1}{432}h^3\bigl(\bigl[a_1, [a_1, a_2]\bigr] + 5\bigl[a_1, [a_2, a_3]\bigr] - 5\bigl[a_3, [a_1, a_2]\bigr] - \bigl[a_3, [a_2, a_3]\bigr]\bigr) \\ &+ \frac{\sqrt{15}}{2160}h^4\bigl[a_1, \bigl[a_3, [a_1, a_3]\bigr]\bigr]. \end{aligned} \eqno(4.9)$$

The implementation of (4.9) requires just three function evaluations, as well as the computation of seven commutators:

$$\begin{gathered} a_{1,2} = [a_1, a_2], \qquad a_{1,3} = [a_1, a_3], \qquad a_{2,3} = [a_2, a_3], \\ a_{1,3,1,2} = [a_1 - 5a_3,\; a_{1,2}], \qquad a_{1,3,2,3} = [5a_1 - a_3,\; a_{2,3}], \qquad a_{1,3,1,3} = \bigl[a_1, [a_3, a_{1,3}]\bigr], \end{gathered}$$

whereby the method reads

$$\sigma_{n+1} = \frac{1}{18}h(5a_1 + 8a_2 + 5a_3) - \frac{\sqrt{15}}{108}h^2(2a_{1,2} + a_{1,3} + 2a_{2,3}) + \frac{1}{432}h^3(a_{1,3,1,2} + a_{1,3,2,3}) + \frac{\sqrt{15}}{2160}h^4\, a_{1,3,1,3}.$$

5. Numerical examples

The purpose of this section is neither to present a comprehensive numerical study of the Magnus algorithm nor to compare it with other numerical time-stepping methods. Our sole goal is to demonstrate that the practice complies with the theory. To this end we present a small number of computational examples, all in a fixed step-size setting. We refer the reader to [10] for the discussion of error-control and variable time-stepping strategies in the context of Magnus series.

Our first example is the equation

$$y' = \begin{pmatrix} 0 & 1 \\ -1/(1+t)^2 & 0 \end{pmatrix} y, \qquad t \ge 0, \quad y(0) = I, \eqno(5.1)$$

which evolves on SL₂(ℝ). The exact solution of (5.1) is

$$y(t) = \begin{pmatrix} (1+t)^{1/2}\bigl(\cos\theta(t) - \frac{\sqrt3}{3}\sin\theta(t)\bigr) & \frac{2\sqrt3}{3}(1+t)^{1/2}\sin\theta(t) \\[4pt] -\frac{2\sqrt3}{3}(1+t)^{-1/2}\sin\theta(t) & (1+t)^{-1/2}\bigl(\cos\theta(t) + \frac{\sqrt3}{3}\sin\theta(t)\bigr) \end{pmatrix},$$

where θ(t) = (√3/2) log(1 + t). Fig. 1 displays log₁₀‖y_N − y(t_N)‖ (i.e., the number of significant digits) for two methods, the 4th-order and the 6th-order Magnus algorithms from [9] and from the last section, respectively, for three step-sizes: h = 1/10, 1/20 and 1/40. The main purpose of this example is to demonstrate that the behaviour is consistent with the stipulated order.

0 y = −t sin2 t 



1 y, 0

t  0,

y(0) = I,

(5.2)

Fig. 1. The number of significant digits in the solution of (5.1) by the 4th-order (upper figure) and 6th-order (lower figure) Magnus expansions. The solid line stands for h = 1/10, the dotted line for h = 1/20 and the dot–dash line for h = 1/40.

whose solution is highly oscillatory and of increasing amplitude. We have solved (5.2) with two 6th-order methods: the Magnus expansion (4.9) and the 7-stage explicit Runge–Kutta method with the Butcher tableau [5]:

    0   |
   1/2  |    1/2
   2/3  |    2/9      4/9
   1/3  |    7/36     2/9    -1/12
   5/6  |  -35/144  -55/36    35/48    15/8
   1/6  |   -1/360  -11/36     -1/8     1/2     1/10
    1   |  -41/260   22/13   43/156 -118/39   32/195   80/39
   -----+-----------------------------------------------------------
        |   13/200    0       11/40   11/40     4/25    4/25   13/200

                                                                  (5.3)
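For completeness, a Python sketch of ours (rk6_step is a hypothetical name) of one step of (5.3) applied to the linear system y′ = a(t)y:

```python
# Sketch only (not from the paper): one step of the 7-stage explicit
# Runge-Kutta method with the Butcher tableau (5.3), for y' = a(t) y.
import numpy as np

A = [[],
     [1/2],
     [2/9, 4/9],
     [7/36, 2/9, -1/12],
     [-35/144, -55/36, 35/48, 15/8],
     [-1/360, -11/36, -1/8, 1/2, 1/10],
     [-41/260, 22/13, 43/156, -118/39, 32/195, 80/39]]
b = [13/200, 0, 11/40, 11/40, 4/25, 4/25, 13/200]
c = [0, 1/2, 2/3, 1/3, 5/6, 1/6, 1]

def rk6_step(a, tn, h, yn):
    k = []
    for i in range(7):
        yi = yn + h * sum((aij * kj for aij, kj in zip(A[i], k)),
                          np.zeros_like(yn))
        k.append(a(tn + c[i] * h) @ yi)   # f(t, y) = a(t) y
    return yn + h * sum((bi * ki for bi, ki in zip(b, k)), np.zeros_like(yn))

# Usage on (5.1); unlike the Magnus schemes, det(y_n) is only approximately 1.
afun = lambda t: np.array([[0.0, 1.0], [-1.0 / (1.0 + t)**2, 0.0]])
y, t, h = np.eye(2), 0.0, 0.1
for _ in range(100):
    y, t = rk6_step(afun, t, h, y), t + h
print(np.linalg.det(y))
```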

Fig. 2. Numerical results for Eq. (5.2). The top two figures display the (1,1) component of y_n for the Magnus and Runge–Kutta methods, respectively, for h = 1/10. The bottom three figures display log|det y_n| for the Runge–Kutta scheme for different step-sizes.

As can be observed in Fig. 2, both methods have done very well in modelling a fairly complicated solution: the trajectories look virtually indistinguishable and, as a matter of fact, differ only in the fourth significant digit. Note that the solution of (5.2) lives on SL₂(ℝ) and this feature is automatically captured by the Magnus expansion. Insofar as the Runge–Kutta scheme is concerned, however, an exponential growth in log|det y_n| can be observed in the figure.

Our final example is the SO₄(ℝ) flow

$$y' = \begin{pmatrix} 0 & t\sin\frac14\pi t & 0 & 0 \\ -t\sin\frac14\pi t & 0 & t\sin\frac12\pi t & 0 \\ 0 & -t\sin\frac12\pi t & 0 & t\sin\frac34\pi t \\ 0 & 0 & -t\sin\frac34\pi t & 0 \end{pmatrix} y, \qquad t \ge 0, \quad y(0) = I. \eqno(5.4)$$

The special orthogonal group being a compact manifold, the trajectory of y(t) is bounded, and this is rendered correctly by the 6th-order Magnus expansion: the error is exactly as predicted by order considerations, while the orthogonality error is nil by design. This, however, is not the case with the Runge–Kutta scheme (5.3) which, for the same step-size sequence, displays a rapid onset of unstable behaviour and exponentially growing amplitude. This phenomenon is expressed, as can be read at once from Fig. 3, both in the solution trajectory y_n and in the orthogonality error ‖y_nᵀ y_n − I‖.

Fig. 3. The (1,1) component of the numerical solution of the orthogonal flow (5.4) by the 6th-order Magnus and Runge–Kutta methods for 0 ≤ t ≤ 20. The top figure displays the solution sequence for the Magnus method, with the error in the Runge–Kutta method (for h = 1/10) underneath. The orthogonality error ‖y_nᵀ y_n − I‖ for the Runge–Kutta scheme with h = 1/10, 1/20 and 1/40 is displayed in the bottom three figures.

This is an illustration of a phenomenon which is at the moment under active investigation, namely that in compact Lie groups Lie-group methods possess significantly better stability properties than more conventional time-stepping methods.

References

[1] M. Abramowitz, I. Stegun, Handbook of Mathematical Functions, Dover, New York, 1965.
[2] G.J. Cooper, Stability of Runge–Kutta methods for trajectory problems, IMA J. Numer. Anal. 7 (1987) 1–13.
[3] P.E. Crouch, R. Grossman, Numerical integration of ordinary differential equations on manifolds, J. Nonlinear Sci. 3 (1993) 1–33.
[4] K. Engø, A. Marthinsen, H. Munthe-Kaas, DiffMan: An object-oriented MATLAB toolbox for solving differential equations on manifolds, Appl. Numer. Math. 39 (2001) 323–347 (this issue).
[5] E. Hairer, G. Wanner, Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems, Springer, Berlin, 1991.
[6] F. Hausdorff, Die symbolische Exponential Formel in der Gruppen Theorie, Leipziger Ber. 58 (1906) 19–48.
[7] A. Iserles, Solving linear ordinary differential equations by exponentials of iterated commutators, Numer. Math. 45 (1984) 183–199.
[8] A. Iserles, Multistep methods on manifolds, Technical Report 1997/NA13, Department of Applied Mathematics and Theoretical Physics, University of Cambridge, England, 1997.
[9] A. Iserles, S.P. Nørsett, On the solution of linear differential equations in Lie groups, Phil. Trans. Roy. Soc. London Ser. A 357 (1999) 983–1019.
[10] A. Iserles, A. Marthinsen, S.P. Nørsett, On the implementation of the method of Magnus series for linear differential equations, BIT 39 (1999) 281–304.
[11] W. Magnus, On the exponential solution of differential equations for a linear operator, Comm. Pure Appl. Math. 7 (1954) 649–673.
[12] H. Munthe-Kaas, Runge–Kutta methods on Lie groups, BIT 38 (1) (1998) 92–111.
[13] H. Munthe-Kaas, High order Runge–Kutta methods on manifolds, Appl. Numer. Math. 29 (1999) 115–127.
[14] H. Munthe-Kaas, B. Owren, Computations in a free Lie algebra, Phil. Trans. Roy. Soc. London Ser. A 357 (1999) 957–982.
[15] H. Munthe-Kaas, A. Zanna, Numerical integration of differential equations on homogeneous manifolds, in: F. Cucker, M. Shub (Eds.), Foundations of Computational Mathematics, Springer, Berlin, 1997, pp. 305–315.
[16] P.J. Olver, Equivalence, Invariants, and Symmetry, Cambridge University Press, Cambridge, 1995.
[17] B. Owren, A. Marthinsen, Runge–Kutta methods adapted to manifolds and based on rigid frames, BIT 39 (1) (1999) 116–142.
[18] S. Reich, Backward error analysis for numerical integrators, Technical Report SC 96-21, Konrad-Zuse-Zentrum für Informationstechnik Berlin, 1996.
[19] J.M. Sanz-Serna, M.P. Calvo, Numerical Hamiltonian Problems, AMMC, Vol. 7, Chapman & Hall, London, 1994.
[20] V.S. Varadarajan, Lie Groups, Lie Algebras, and Their Representations, Graduate Texts in Mathematics, Vol. 102, Springer, New York, 1984.
[21] A. Zanna, The method of iterated commutators for ordinary differential equations on Lie groups, Technical Report 1996/NA12, Department of Applied Mathematics and Theoretical Physics, University of Cambridge, England, 1996.
[22] A. Zanna, Collocation and relaxed collocation for the Fer and the Magnus expansions, SIAM J. Numer. Anal. 36 (4) (1999) 1145–1182.