Computing periodic orbits with high accuracy

Computer Methods in Applied Mechanics and Engineering
Comput. Methods Appl. Mech. Engrg. 170 (1999) 331-341
Won Gyu Choe a,*, J. Guckenheimer b

a Center for Applied Mathematics, Cornell University, Ithaca, NY 14853, USA
b Mathematics Department and Center for Applied Mathematics, Cornell University, Ithaca, NY 14853, USA

Accepted 30 April 1998

Abstract

This paper introduces a new family of algorithms for computing periodic orbits of vector fields. These global methods achieve high accuracy with relatively coarse discretizations of periodic orbits through the use of automatic differentiation. High degree Taylor series expansions of trajectories are computed at mesh points. On a fixed mesh, we construct closed curves that converge smoothly to periodic orbits as the degree of the Taylor series expansions increases. The algorithms have been implemented in Matlab together with the automatic differentiation code ADOL-C. Numerical tests of our codes are compared with AUTO calculations using the Hodgkin-Huxley equations as a test problem. (c) 1999 Elsevier Science B.V. All rights reserved.

1. Introduction

Periodic orbits are fundamental objects within the phase portraits of dynamical systems [5]. Stable periodic orbits are readily observed as limit sets for trajectories computed with numerical integration, but we also seek numerical algorithms that compute periodic orbits without recourse to numerical integration of long trajectories. There are two general classes of methods used to solve these boundary value problems [1]. Shooting methods use numerical integration over one orbit period together with root finding algorithms to accomplish the task, but they are subject to numerical ill-conditioning. Multiple shooting methods improve the condition number of the calculation at the expense of establishing larger root finding problems.

Global methods for finding periodic orbits provide a second approach to solving boundary value problems. Instead of numerical integration, these methods view the vector field as an equation on a function space of closed curves. If f is a Lipschitz continuous vector field on a smooth manifold M and γ : S^1 -> M is a C^1 closed curve in M, then γ is a periodic orbit if the periodic orbit equation γ'(t) = f(γ(t)) is satisfied identically along γ. The periodic orbit equation may be studied on any space of closed curves contained within C^1(S^1, M).

Computer implementation of global methods for computing periodic orbits requires discretization of closed curves and approximation of the periodic orbit equations. Abstractly, one defines finite dimensional submanifolds Γ_j of the space Γ of closed curves and approximates the periodic orbit equations as a map defined on this space. We work with spaces Γ_j of functions constructed by interpolation from discrete sets of points on a

* Corresponding author. E-mail: [email protected]
Research partially supported by the Air Force Office of Scientific Research, the Department of Energy and the National Science Foundation. The authors thank the Institute for Mathematics and its Applications and the Geometry Center at the University of Minnesota for hospitality and support.


trajectory. As the dimension of Γ_j increases, one obtains larger systems of approximating equations. Solving these equations with Newton or quasi-Newton methods becomes both more expensive and less reliable with increasing size. Therefore, it is desirable to limit the dimension of Γ_j even as approximations of increasing accuracy are sought. The novelty of our approach to finding periodic orbits lies in the decoupling of these two aspects of computing periodic orbits. We demonstrate that it is possible to specify a sequence of submanifolds of fixed dimension and systems of equations whose solutions converge to a hyperbolic periodic orbit of an analytic vector field.

Our ability to achieve the seemingly paradoxical objective of computing highly accurate approximations to periodic orbits with systems of equations of fixed size relies upon a process called automatic differentiation. Automatic differentiation enables the computation of high order derivatives of functions without the truncation errors inherent in finite difference approximations. We use the package ADOL-C [4] to compute the Taylor series of trajectories at phase points. In addition, we use ADOL-C to compute Jacobian derivatives of the Taylor series coefficients with respect to the phase space variables for use in the Newton iteration. As the degree of the computed Taylor series increases, we obtain sequences of curves that converge since the trajectories are analytic.

Two geometric considerations complicate the root finding problem for periodic orbits. First, time translation of a trajectory produces another trajectory. Therefore, solutions of the periodic orbit equation are not isolated, but come as one parameter families related by time translation. Second, the period of a periodic orbit is unknown and must be determined as part of the solution process. Different approaches to these issues are compatible with our methods. A common approach used in published algorithms fixes a 'phase condition' to remove the degeneracy associated with time translation and rescales the vector field until its period is a predetermined quantity. Our methods can use this strategy, but they also allow us to use a more geometric strategy effectively. This second approach fixes a set of hyperplanes that are transverse to the vector field, and then solves for points that lie on the intersection of the hyperplanes with the periodic orbit. In this procedure, the time interval required by trajectories to advance from one cross-section to the next must be determined as part of the solution process.

With either approach, we produce algorithms in which the system of equations that must be solved has the minimal possible dimension, namely the product of the dimension of the phase space with the number of mesh points. Moreover, the Jacobian matrices used in the root finding have a simple sparsity pattern that can be exploited in their inversion. Explicit inversion of this block matrix in terms of the inverses of the individual blocks yields a relationship between the regularity of the root finding problem and the hyperbolicity of the periodic orbit. Since our methods produce smooth approximations to periodic orbits, we can evaluate the distance between the tangent vectors to a computed curve and the vector field along that curve. These error estimates enable us to develop strategies for mesh refinement that balance the error in different mesh intervals.
Since the approximating solution in a mesh interval is determined entirely by its endpoints, mesh refinement is a simple process and does not change the structure of the discretized periodic orbit equations.

We have tested our algorithm with the Hodgkin-Huxley equations. These define a four dimensional vector field that models the electrical activity of the squid giant axon in space clamped conditions. The vector field is moderately stiff with strongly stable directions, but it cannot be completely reduced to a two-dimensional vector field [7]. The system has a complex bifurcation structure with both twisted and untwisted homoclinic orbits. Periodic orbits near these bifurcations provide a challenging test for algorithms that compute periodic orbits. We present a set of comparisons of our algorithms with computations using AUTO. The program AUTO [2] has become a standard tool for computing periodic orbits. It combines collocation methods with a continuation strategy for tracking periodic orbits with a varying parameter and computing bifurcation curves in two parameter families. We study the estimated accuracy of the computations using our algorithms as well as that of AUTO, varying parameters that adjust the number of mesh points and collocation points per mesh interval used by AUTO.
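To make concrete the kind of data that automatic differentiation supplies, the following Matlab sketch computes degree d Taylor coefficients of a trajectory by the same recursive process, hand coded here for the Van der Pol oscillator x' = y, y' = mu(1 - x^2)y - x rather than produced by ADOL-C; the function name, the example vector field and the crude step estimate at the end are illustrative choices, not part of the implementation described in this paper.

    function [X, Y, rho] = taylor_trajectory(x0, y0, mu, d)
    % Degree d Taylor coefficients of the Van der Pol trajectory through (x0, y0):
    % if x(t) = sum_k X(k+1) t^k, then x' = f(x) gives X(k+2) = (t^k coeff of f)/(k+1).
        X = zeros(1, d+1);  Y = zeros(1, d+1);
        X(1) = x0;  Y(1) = y0;
        for k = 0:d-1
            x2  = conv(X(1:k+1), X(1:k+1));          % series of x^2, exact through t^k
            x2y = conv(x2, Y(1:k+1));                % series of x^2 * y, exact through t^k
            f1  = Y(k+1);                            % t^k coefficient of  y
            f2  = mu*(Y(k+1) - x2y(k+1)) - X(k+1);   % t^k coefficient of  mu*(1-x^2)*y - x
            X(k+2) = f1/(k+1);
            Y(k+2) = f2/(k+1);
        end
        a   = max(abs([X(d+1), Y(d+1)]), realmin);   % highest computed coefficient
        rho = a^(-1/d);                              % rough radius-of-convergence estimate (root test)
    end

ADOL-C performs the analogous propagation for an arbitrary right-hand side coded in C or C++ and, as used below, also returns the Jacobians of the Taylor coefficients with respect to the initial point.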

2. The algorithm(s)

This section describes algorithms for computing periodic orbits of a vector field that are based upon the use of high degree Taylor series expansions of trajectories. We begin by outlining the strategy employed by the


algorithms. Let f : R^n -> R^n define an analytic vector field. We assume that the computation of f is implemented as an algorithm in the C or C++ computer language in terms of operations and functions for which explicit differentiation rules are available. A periodic orbit γ : R -> R^n of period T > 0 is a curve with the properties that f(γ(t)) = γ'(t) and γ(t + T) = γ(t) for all t in R. Denote by Γ^1 the space of C^1 periodic curves in R^n and by G the operator G : Γ^1 -> Γ^0 defined by G(γ) = f∘γ - γ'. Periodic orbits are solutions of G = 0. They are analytic curves, but we construct approximations that are not analytic.

The starting data for our algorithms are cyclically ordered collections of N points [(t_0, x_0), (t_1, x_1), ..., (t_N, x_N) = (t_N, x_0)] in R x R^n with t_0 < t_1 < ... < t_N. We call such a collection an N point discrete closed curve, or dcc, and denote the space of N point discrete closed curves by D_N. We shall define maps S from D_N to Γ^1 that yield submanifolds of Γ^1. Different choices of S yield variants of the algorithm described below. Given a map S, we seek systems of nN equations G_S on D_N whose solutions yield good approximations to periodic orbits of f. The accuracy of the approximation depends upon both S and G_S. Our objective is to define sequences S_j and G_{S,j} indexed by 'degree' so that the corresponding curves S_j(s_j) obtained from solutions of G_{S,j}(s_j) = 0 converge to a periodic orbit of f. Note that the convergence is to take place on meshes of fixed size, in contrast to most global methods for solving boundary value problems. The specification of an algorithm thus requires three components: (1) the definition of S, (2) the definition of G_S, and (3) a root finding algorithm for solving G_S = 0. We hardly discuss the third component of the algorithm, relying upon Newton's method in our implementations and tests.

At a regular point x_0 of the vector field f, the trajectory x(t) with x(0) = x_0 is analytic and has a convergent Taylor series expansion. The Taylor series can be obtained by repeated differentiation of the differential equation x' = f(x) and recursive substitution of the values of derivatives x^(k)(t) of increasing degree. ADOL-C [4] implements this calculation by defining a data structure that encodes finite degree Taylor series of the trajectory. The t derivatives of f composed with the Taylor series are computed using the techniques of automatic differentiation; namely, explicit differentiation rules are applied to the individual operators and functions in the algorithm defining f. This process produces values of the Taylor series coefficients that are obtained without truncation errors. ADOL-C further implements the calculation of the Jacobian derivatives of the Taylor series coefficients with respect to the phase space variables. These are used in forming the Jacobian of G_S required by Newton's method. Since trajectories are analytic, the Taylor series of x(t) converges on a fixed interval [-ε, ε]. The domain of convergence can be estimated from the rate of growth or decay of the computed Taylor series coefficients. With reasonable confidence, we can observe how many terms of the Taylor series of x(t) are required to compute the trajectory to specified accuracy for t in [-ε, ε].

2.1. The map S

The map S : D_N -> Γ^1 from a dcc is defined by interpolation. Let d be a positive integer.
Given the Taylor series of a trajectory at two adjacent points (x_{i-1}, t_{i-1}) and (x_i, t_i) of a dcc, we define a C^1 curve p_i : [t_{i-1}, t_i] -> R^n that joins the points x_{i-1} and x_i with a curve whose Taylor series agrees with the Taylor series of the trajectories with initial points x_{i-1} and x_i to degree d. If x_{i-1} and x_i lie on the same trajectory with time separation t_i - t_{i-1}, we want the interpolating curves p_i to converge to the trajectory joining these points as d increases. The map S is defined by joining the points of the dcc by their interpolating functions. The map S depends upon d and the choice of interpolating functions p_i.

We have experimented with two interpolation procedures. The first choice of interpolating function is the unique polynomial of degree 2d + 1 with the previously determined Taylor series coefficients of degree d at x_{i-1} and x_i. While this choice approximates periodic orbits with piecewise polynomial curves, it has the mathematical drawback that the interpolating polynomials converge to a discontinuous function with increasing degree when the end points do not lie on the same trajectory. We use a second choice of interpolating function that avoids this difficulty in our mathematical analysis. Consider the function

    β(t) = (1 + e^{4t/(1 - t^2)})^{-1}


on the interval [-1, 1]. This C^∞ function is flat at its two endpoints, varies monotonically from 1 to 0 and has derivative -1 at the origin. The function β also has the symmetry 1 - β(t) = β(-t). If γ_{i-1}^d and γ_i^d are Taylor polynomials of degree d for the trajectories passing through x_{i-1} and x_i at times t_{i-1} and t_i, then the curve

    p_i(t) = β(u(t)) γ_{i-1}^d(t) + (1 - β(u(t))) γ_i^d(t),    with u(t) = (2t - t_{i-1} - t_i)/(t_i - t_{i-1}),

is a C^∞ curve whose degree d Taylor series expansions at t_{i-1} and t_i agree with those of the trajectories through these points. Derivatives of p_i(t) of degree higher than d at t_{i-1} and t_i vanish. If |t_i - t_{i-1}| is smaller than the radius of convergence of the infinite Taylor series γ_{i-1} and γ_i, then p_i converges to a smooth function in the C^∞ topology as d -> ∞. Moreover, the limit is a weighted average of the trajectories through x_{i-1} and x_i. If these points lie on the same trajectory, then the limit is a segment of this trajectory.
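A minimal Matlab sketch of this blended interpolant, assuming the form of β given above and taking as input the degree d Taylor coefficients at the two endpoints (the function name and calling convention are ours, not those of the paper's code):

    function p = blended_interpolant(TL, TR, tL, tR, t)
    % TL, TR: n x (d+1) Taylor coefficient matrices at the left and right mesh points,
    % i.e. column k+1 holds the coefficients of (t - tL)^k and (t - tR)^k respectively.
        beta = @(s) 1 ./ (1 + exp(4*s ./ (1 - s.^2)));   % weight: 1 at s = -1, 0 at s = 1, flat at both ends
        s = (2*t - tL - tR) / (tR - tL);                 % map [tL, tR] onto [-1, 1]
        n = size(TL, 1);  d = size(TL, 2) - 1;
        pL = zeros(n, 1);  pR = zeros(n, 1);
        for k = d:-1:0                                   % Horner evaluation of both Taylor polynomials
            pL = pL*(t - tL) + TL(:, k+1);
            pR = pR*(t - tR) + TR(:, k+1);
        end
        p = beta(s)*pL + (1 - beta(s))*pR;               % weighted average of the two local trajectories
    end

At t = tL and t = tR the exponent inside beta overflows to -Inf and +Inf, so the weight evaluates exactly to 1 and 0 there and the interpolant reproduces the endpoint Taylor polynomials.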

2.2. The equations G_S

Assume now that we have specified a map S that constructs a closed curve ζ = S(Y) from a dcc Y = [(t_0, x_0), (t_1, x_1), ..., (t_N, x_N)]. Let s_i = (t_{i-1} + t_i)/2 be the midpoint of the time interval [t_{i-1}, t_i] and let e_i = f(ζ(s_i)) - ζ'(s_i). We amalgamate the n-vectors e_i into a vector of length nN and define G = G_S : D_N -> R^{nN} by G(Y) = (e_1, ..., e_N). We regard G = 0 as a system of nN equations defined on D_N. If Y is the discretization of a periodic orbit, then the equation G = 0 is satisfied. Note, however, that the dimension of D_N is N(n + 1) + 1, so the system is underdetermined. Moreover, any N point discretization of a periodic orbit gives a solution of G = 0, so solutions come as N + 1 dimensional families parametrized by the location of points in the orbit and translation of time along the periodic orbit. The last degree of freedom from time translation can be eliminated by fixing t_0, or by replacing the time coordinates by t_i - t_{i-1}.

To obtain a finite dimensional system of equations that we expect to be regular and have isolated solutions, we restrict G further to a subspace V of D_N of dimension nN. This can be done in many ways. One choice that will not work is to fix all the time coordinates of Y, since the period of the periodic orbit is not known but will be equal to t_N - t_0 in the solution. A prevalent choice of subspace V in the literature about solving boundary value problems is to define V as the direct sum of a line in the subspace of time coordinates with t_0 = 0 (this corresponds to time rescaling of the vector field) with a hyperplane in the phase space coordinates (defined by a phase condition). We prefer a second strategy for choosing a subspace of D_N of dimension nN that is attractive from a geometric perspective. Select a set of cross-sections to the flow at mesh points. In our implementations, we use the orthogonal complements to the vector field f(x_i) at the initial mesh points x_i. The normal subspace to the vector field at x_i is determined by computing the QR factorization of the n x (n + 1) matrix (f(x_i), I). The direct sum of these orthogonal complements together with the time coordinates (t_1, ..., t_N) defines an nN dimensional subspace V of D_N on which we expect the operator G to be regular.

We use Newton's method to solve the equation G = 0 on the space V. This requires the Jacobian J of G|_V. ADOL-C computes the derivatives of the Taylor series coefficients with respect to the phase space coordinates. Linear combinations of these derivatives give the desired Jacobians as follows. For each point x_i, we have n - 1 coordinates for the cross-section at x_i. We use the differences t_i - t_{i-1} as time coordinates. The value of e_i in R^n depends only on the points x_{i-1} and x_i and the time increment t_i - t_{i-1}. Therefore, we order the coordinates in V so that the coordinates of the cross-section to f at x_{i-1} appear in positions in + 2, ..., (i + 1)n, and the coordinate t_i - t_{i-1} appears at position in. With this ordering of the coordinates of V, J has a block cyclic structure with a total of 2n - 1 non-zero 'cyclic' diagonals, n - 1 above the main diagonal and n - 1 below. The effectiveness of Newton's method is highly dependent upon our ability to accurately and efficiently compute J^{-1}. We have not systematically investigated methods for doing so, but relate the regularity of J to the monodromy matrix of a periodic orbit in the next section.
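The two ingredients of G that are easiest to miss in the prose, the cross-section basis and the midpoint residual, can be sketched in a few lines of Matlab (f is a handle for the vector field; p and dp are hypothetical handles for the interpolating curve on one mesh interval and its derivative):

    function [B, e] = section_and_residual(f, x, p, dp, tprev, tcur)
        n = numel(x);
        [Q, ~] = qr([f(x), eye(n)]);   % QR factorization of the n x (n+1) matrix (f(x), I)
        B = Q(:, 2:n);                 % last n-1 columns: orthonormal basis of the cross-section at x
        s = (tprev + tcur)/2;          % midpoint of the mesh interval
        e = f(p(s)) - dp(s);           % residual e_i = f(p(s_i)) - p'(s_i)
    end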
Using the interpolation procedure that weights trajectories with C^∞ functions, we obtain a sequence of subspaces V_d and maps G_{V,d}. These subspaces and maps converge in the C^1 topology as d -> ∞ to a limit G_{V,∞} : V_∞ -> R^{nN} that we denote by G : V -> R^{nN}. Because the convergence occurs in a smooth topology, the regularity of G implies the regularity of G_{V,d} for sufficiently large d. This observation forms the mathematical foundation for our interest in these algorithms. On a mesh of fixed size, they yield regular systems of equations defining discrete approximations of a periodic orbit of increasing accuracy.
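The root finding loop itself is ordinary Newton iteration on the nN reduced variables; a bare-bones sketch, with G and J standing for hypothetical handles that assemble the residual and its Jacobian as described above:

    function u = newton_periodic(G, J, u0, tol, maxit)
        u = u0;
        for it = 1:maxit
            r = G(u);
            if norm(r, inf) < tol, return; end
            u = u - J(u) \ r;          % full Newton step; the block cyclic sparsity of J
        end                            % could be exploited here instead of a dense solve
        warning('Newton iteration did not reach the requested tolerance.');
    end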


3. Analysis

The mathematical foundation of the algorithm we have described above rests upon the regularity of the system of equations G|_V = 0. In this section, we study this regularity and demonstrate its relationship to the spectrum of the monodromy matrix of a periodic orbit. We state our results formally as a theorem:

THEOREM. Let γ : R -> R^n be a periodic orbit of the analytic vector field f and let Y = [(t_0, x_0), (t_1, x_1), ..., (t_N, x_N) = (t_N, x_0)] be an N point discrete closed curve that is a discretization of γ. Assume that the Taylor series solution of the trajectory passing through x_i at time t_i converges on the time interval [t_{i-1}, t_{i+1}] for each 0 < i < N, and on the time interval [t_{N-1}, t_1 - t_0 + t_N] at x_0 = x_N. Let V be the subspace of D_N and G : D_N -> R^{nN} the map defined above. If 1 is not an eigenvalue of the monodromy matrix of γ, then the system of equations G|_V = 0 has a regular solution at Y.

PROOF. The theorem is equivalent to the statement that the Jacobian derivative of G|_V is nonsingular. Thus we must examine this Jacobian derivative in more detail. It has the block structure

J = \begin{pmatrix}
E_1 &     &        &        & D_1 \\
D_2 & E_2 &        &        &     \\
    & D_3 & E_3    &        &     \\
    &     & \ddots & \ddots &     \\
    &     &        & D_N    & E_N
\end{pmatrix}

in which E_i and D_i are n x n matrices. The first column of E_i consists of derivatives with respect to changing time, while the last n - 1 columns of E_i are the derivatives of e_i with respect to the coordinates of the cross-section to f at x_i. Similarly, the first column of D_i vanishes and the last n - 1 columns of D_i are the derivatives of e_i with respect to x_{i-1}.

The matrices E_i are invertible. We compute them from the time t flow maps φ_t of f. Observe that the trajectories γ_i satisfy γ_i(t) = φ_{t - t_i}(x_i) and that β'(0) = -1. If δ_i = (t_i - t_{i-1})/2, then γ_i(s_i) = φ_{-δ_i}(x_i) and γ_{i-1}(s_i) = φ_{δ_i}(x_{i-1}), and differentiating e_i = f(ζ(s_i)) - ζ'(s_i) shows that, up to terms that remain bounded as δ_i -> 0, the derivative of e_i with respect to x_i is -D(φ_{-δ_i})(x_i)/δ_i and the derivative of e_i with respect to x_{i-1} is D(φ_{δ_i})(x_{i-1})/δ_i. Thus, E_i is -D(φ_{-δ_i})(x_i)/δ_i evaluated in a basis whose first component is a unit vector along the vector field f(x_i) and whose remaining components span the orthogonal complement of f(x_i). Since the flow maps of the vector field are diffeomorphisms, E_i is invertible. Similarly, we compute that D_i is D(φ_{δ_i})(x_{i-1})/δ_i restricted to the orthogonal complement of f(x_{i-1}).

The final steps in the proof consist in factoring J into three factors. Two of the three factors are invertible and the third factor is invertible if and only if the monodromy matrix of the periodic orbit does not have 1 as an eigenvalue. We use the invertibility of the E_i to factor J as

J = \begin{pmatrix}
E_1 &     &        &     \\
    & E_2 &        &     \\
    &     & \ddots &     \\
    &     &        & E_N
\end{pmatrix}
\begin{pmatrix}
I            &              &        &              & E_1^{-1}D_1 \\
E_2^{-1}D_2  & I            &        &              &             \\
             & E_3^{-1}D_3  & I      &              &             \\
             &              & \ddots & \ddots       &             \\
             &              &        & E_N^{-1}D_N  & I
\end{pmatrix}

Denoting A_i = E_i^{-1} D_i, the LU factorization of the second factor above is

\begin{pmatrix}
I   &        &        &   \\
A_2 & I      &        &   \\
    & \ddots & \ddots &   \\
    &        & A_N    & I
\end{pmatrix}
\begin{pmatrix}
I &   &        & A_1 \\
  & I &        & -A_2 A_1 \\
  &   & \ddots & \vdots \\
  &   &        & I + (-1)^{N-1} A_N A_{N-1} \cdots A_1
\end{pmatrix}

The final factor is invertible if and only if the matrix A = (-A_N)(-A_{N-1}) ... (-A_1) does not have 1 as an eigenvalue. Now each matrix -A_i has first column zero, and its lower right-hand (n - 1) x (n - 1) block represents the Jacobian of the flow map from the orthogonal cross-section to the vector field at x_{i-1} to the orthogonal cross-section to the vector field at x_i. (The first row of the matrix represents the shear of the time t_i - t_{i-1} flow map on the cross-section at x_{i-1} relative to the cross-section at x_i.) The lower right-hand (n - 1) x (n - 1) block of the product A is the monodromy matrix of the cross-section at the point x_0 = x_N. Thus, we conclude that J is invertible if and only if the monodromy matrix of the cross-section does not have 1 as an eigenvalue. This finishes the proof of the theorem.

4. Numerical tests

We have implemented the algorithms described above in Matlab, using ADOL-C to compute the Taylor series expansions of a vector field. The calculations we describe have been tested extensively with one example: the Hodgkin-Huxley equations [6] for the electric potential of a squid giant axon under space and current clamped conditions. This vector field is given by the following system of equations:

    V' = -G(V, m, n, h) + I
    m' = Φ(T)[(1 - m)α_m(V) - m β_m(V)]
    n' = Φ(T)[(1 - n)α_n(V) - n β_n(V)]
    h' = Φ(T)[(1 - h)α_h(V) - h β_h(V)]

with

    G(V, m, n, h) = g_Na m^3 h (V - V_Na) + g_K n^4 (V - V_K) + g_L (V - V_L)

    Ψ(x) = x/(e^x - 1) if x ≠ 0,  Ψ(x) = 1 if x = 0

and

    α_m(V) = Ψ((V + 25)/10),        β_m(V) = 4 e^{V/18}
    α_n(V) = 0.1 Ψ((V + 10)/10),    β_n(V) = 0.125 e^{V/80}
    α_h(V) = 0.07 e^{V/20},         β_h(V) = (1 + e^{(V + 30)/10})^{-1}
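For reference, a Matlab transcription of this right-hand side follows; the maximal conductances, reversal potentials and the temperature factor Φ(T) are not specified in the text, so the standard Hodgkin-Huxley (1952) values used below should be read as assumptions.

    function dx = hh_rhs(x, I, T)
    % x = [V; m; n; h]; I is the applied current, T the temperature.
        V = x(1); m = x(2); n = x(3); h = x(4);
        gNa = 120; gK = 36; gL = 0.3;              % assumed maximal conductances
        VNa = -115; VK = 12; VL = -10.613;         % assumed reversal potentials
        Phi = 3^((T - 6.3)/10);                    % assumed temperature factor
        Psi = @(s) (s ~= 0) .* s ./ (exp(s) - 1 + (s == 0)) + (s == 0);   % Psi(0) = 1
        am = Psi((V + 25)/10);      bm = 4*exp(V/18);
        an = 0.1*Psi((V + 10)/10);  bn = 0.125*exp(V/80);
        ah = 0.07*exp(V/20);        bh = 1/(exp((V + 30)/10) + 1);
        G  = gNa*m^3*h*(V - VNa) + gK*n^4*(V - VK) + gL*(V - VL);
        dx = [-G + I;
              Phi*((1 - m)*am - m*bm);
              Phi*((1 - n)*an - n*bn);
              Phi*((1 - h)*ah - h*bh)];
    end

The guarded definition of Psi removes the 0/0 indeterminacy at V = -25 and V = -10 pointwise, which is the issue the text returns to below.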

Here, V is the membrane potential of the axon, m and h are gating variables that represent the activation and inactivation states of membrane sodium channels, and n is a gating variable that represents the activation of a potassium channel. This system has served as the archetype for compartmental models of electrical activity in neural systems. There are three aspects of the Hodgkin-Huxley equations that are challenging for our purposes: (1) while the system is analytic, there are points at which the expression of the system produces an indeterminate value 0/0; (2) the system is moderately stiff; and (3) there are many different types of bifurcations of periodic orbits that occur within the system.

The input data for the algorithm consists of an N point discrete closed curve. To evaluate the accuracy of algorithms on the Hodgkin-Huxley equations, we began with a discrete closed curve computed as a discretization of a stable periodic orbit at a different value of the parameter I than that used in testing the algorithm. The curve was obtained by following a trajectory computed with a fourth order Runge-Kutta numerical integration until it approached the periodic orbit. This provides starting data sufficiently close to the periodic orbit that convergence was rapid. We evaluated the Hodgkin-Huxley equations with a C++ routine that makes (V, m, n, h) active variables for constructing an ADOL-C 'tape' to be used for computing derivatives. The Taylor series expansion of the vector field at a point of the (V, m, n, h) phase space and the Jacobian derivatives of the Taylor series coefficients with respect to the phase variables are obtained by applying the routines forode, reverse and accode in ADOL-C to the tape. To check the accuracy of the ADOL-C computations, we also implemented a Maple algorithm to symbolically compute the Taylor series expansions and evaluate them with extended precision arithmetic. Comparison of these two methods for a few points in phase space gave results that agreed to approximately 14 digits of precision. We did find that the Taylor series computed by ADOL-C were sometimes


erratic near V = -10 or V = -25, where the formulas of the Hodgkin-Huxley equations are indeterminate. We have not pursued the causes of this and do not present results that appear to be affected by the inability of ADOL-C to accurately compute the Taylor series of trajectories at these points.

In the first stages of the algorithm, we compute the Taylor series coefficients of the trajectories and their Jacobians on our discrete closed orbit to degree d. We have experimented with different values of d up to 30. Using these Taylor series, we construct interpolating functions whose Taylor series match the computed Taylor series to degree d. Hermite splines, interpolating functions that are polynomials of degree 2d + 1, gave the best results in our computations. The Hermite polynomials are computed in the following manner. We assume that the Taylor series of degree d are known at the endpoints of a time interval of length 2δ. By translating time, we take the domain to be (-δ, δ). Next, observe that the computation of the even and odd degree terms of the interpolating polynomial can be separated from one another. Denote the Taylor coefficients at -δ by b_-(j) and those at δ by b_+(j), and let b_e(j) = (b_+(j) + b_-(j))/2 and b_o(j) = (b_+(j) - b_-(j))/2. If

    p(t) = \sum_{k=0}^{2d+1} a_k t^k ,

the equations p^{(j)}(±δ) = b_±(j) for 0 ≤ j ≤ d give a system of 2d + 2 equations for the coefficients of p. Adding and subtracting the equations for p^{(j)}(δ) and p^{(j)}(-δ), we obtain two systems of d + 1 equations for the even and odd a_k in terms of the b_e(j) and b_o(j). The dependence on δ can be absorbed into the right-hand sides of these equations, leaving linear systems with integer coefficients. For d > 20, the Matlab solution of these linear systems is inaccurate. Exact solutions of the systems using rational arithmetic obtained from Maple were transformed to Matlab double expressions and used in our calculations. These solutions gave more precise values for the Hermite interpolations than those computed with LU factorization of the linear systems in floating point arithmetic. We also observed that the interpolating polynomials have oscillating coefficients that make them difficult to evaluate accurately near the endpoints of the intervals. The accuracy of the evaluation of the interpolating polynomials can be improved by using a nonoscillatory basis for the space of polynomials on the interval [-δ, δ], for example the basis spanned by the polynomials (1 - (t/δ)^2)^k and (t/δ)(1 - (t/δ)^2)^k.

The next step of the algorithm is the evaluation of a map that vanishes on a discretization of the periodic orbit. We used the difference of the vector field and the tangent vector to the interpolating polynomial at its midpoint in our implementation. If f : R^4 -> R^4 denotes the Hodgkin-Huxley vector field and p(t) is an interpolating function defined on the time interval [t_{i-1}, t_i], then we evaluate e_i = f(p((t_{i-1} + t_i)/2)) - p'((t_{i-1} + t_i)/2). Concatenating the e_i gives a vector of length Nn that we interpret as the image of the map whose roots we want to find. The domain of this map depends upon the procedure we use for adjusting points. We have experimented with two approaches. In the first, time scaling approach, we constrain x_0 to lie on a cross-section, allow the other x_i to vary freely in the phase space and multiply all of the time increments by the same factor to match the period of the periodic orbit with the period of the dcc. In the second, cross-section approach, the points of the dcc are each constrained to lie on a cross-section to the flow and the time increment of each interval in the dcc is allowed to vary. Coordinates on a cross-section are defined by performing a QR factorization of the n x (n + 1) dimensional matrix [f(x) I] = Q(x)R(x). The final n - 1 columns of Q give an orthonormal basis for the perpendicular cross-section to the vector field at x. This basis is used in the calculations. Both approaches give a domain of dimension nN, so that we have a map G : R^{nN} -> R^{nN} whose zeros we wish to determine.

Newton iteration is now applied to the map G. This requires computation of the Jacobian matrix J of G. Differentiation of G with respect to the phase space variables and time yields expressions that contain first derivatives of the Taylor series coefficients at the mesh points with respect to the phase space variables. These have been computed by ADOL-C and stored in Matlab earlier, so J is computed explicitly in terms of this data. A Newton iteration is now performed until the value of G is close to the machine precision. Evaluating the residual function f(p(t)) - p'(t) at several points in each mesh interval, we estimate how close the dcc produced by the Newton iteration is to solving the periodic orbit equation.
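A direct Matlab sketch of the Hermite construction just described, solving for the 2d + 2 coefficients from the derivative values at ±δ without the even/odd splitting or the exact rational solve (so, as noted above, it is only trustworthy for moderate d):

    function a = hermite_coeffs(cminus, cplus, delta)
    % cminus(j+1) = p^(j)(-delta), cplus(j+1) = p^(j)(delta) for j = 0..d.
    % Returns a with a(k+1) the coefficient of t^k in the degree 2d+1 interpolant.
        d = numel(cplus) - 1;
        M = zeros(2*d + 2);
        for j = 0:d
            for k = j:2*d + 1
                fac = prod(k-j+1:k);                 % k!/(k-j)!
                M(j + 1, k + 1)     = fac * (-delta)^(k - j);
                M(d + 2 + j, k + 1) = fac * ( delta)^(k - j);
            end
        end
        a = M \ [cminus(:); cplus(:)];
    end

Taylor coefficients c(j) of the trajectory convert to the derivative values used here by multiplication with j factorial.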
If the magnitude of the residual is larger than a desired error tolerance, we attempt to refine the mesh and/or increase the degree of the Taylor series expansions to produce a better approximation to the periodic orbit. In our numerical tests, new mesh points are placed at the midpoints of intervals that contribute more than 1/4 of the total L^1 norm of e. At most three mesh intervals satisfy this criterion for any dcc. The refinement tends to produce meshes for which the contributions to the L^1 norm of the solution are well distributed around the periodic orbit.
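The refinement rule translates into a few lines of Matlab (f as before; p and dp are hypothetical handles taking an interval index and a time; t is the vector of knot times):

    function tnew = refine_mesh(f, p, dp, t)
        N   = numel(t) - 1;
        err = zeros(1, N);
        for i = 1:N
            s = linspace(t(i), t(i+1), 12);  s = s(2:end-1);   % 10 internal points
            for q = s
                err(i) = err(i) + norm(f(p(i, q)) - dp(i, q), 1);
            end
        end
        insert = find(err > 0.25*sum(err));                    % intervals carrying > 1/4 of the total
        tnew = sort([t, (t(insert) + t(insert + 1))/2]);       % new knots at their midpoints
    end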


Fig. 1. A plot of voltage (the V component of the computed periodic solution against time) for the periodic orbit calculated for I = 40.010.

Fig. 2. (a) The residual error f(γ(t)) - γ'(t) for the computed periodic orbit; (b) cumulative error compared with the result from an initial value solver with a small fixed time step; (c) cumulative error inside each interval, where comparisons are made with values from an initial value solver started from the left endpoint.

We present data giving the most precise calculations of periodic orbits of the Hodgkin-Huxley equations that we have obtained using the algorithms introduced here and AUTO97. Fig. 1 shows the periodic solution computed at I = 40.010. The initial data is taken from an approximate periodic solution at I = 39.50 computed with DsTool using a fourth order Runge-Kutta numerical integrator with a fixed step length. The degree of the interpolating polynomial is 15, and 20 initial mesh points are taken uniformly along the periodic orbit. At each Newton iteration, the local cross-section method is used and phase conditions are established by requiring that every mesh point remain on the cross-section to the vector field. To accelerate the convergence, we applied a simple knot update scheme every 2 Newton iterations once the initial error drops below 10^{-6}. We calculated the residual error for each interval by summing the norm of the residual error at 10 internal points. If the error for the ith interval is greater than 25% of the total residual error, the algorithm inserted a new knot point at the midpoint (t_m, γ(t_m)), where t_m = (t_i + t_{i+1})/2.

Fig. 2 shows the results of our computation after 15 Newton steps and 6 knot updates. The final number of mesh points is 26, many fewer than the number used by typical boundary value problem solvers. Fig. 2(a) shows the residual error f(p(t)) - p'(t), which is of order 10^{-11}. To check the accuracy of the computation, we compared the result with data from an initial value solver. We first computed the cumulative error, where the computed solution is compared with the data from an initial value solver starting at (t_0, x_0). Fig. 2(b) shows the cumulative error, which is of order 10^{-9}. We have used a fourth order Runge-Kutta method with a very small time step for this calculation. The pattern of the cumulative error is qualitatively independent of the various computational parameters such as the degree of the interpolating polynomial and the time step of the integrator. Fig. 2(c) shows another comparison with the data from the initial value solver, where the initial value of the solver is reset to the left endpoint of each interval, and the comparison with values from the interpolation polynomial is made at 10 internal points. One can check that the error is well below 10^{-12}. The error is largest and irregular in a region where the vector field is strongly expanding. We believe that the magnitude of these errors is close to what we expect from round-off in the calculations.

We computed the same periodic orbit using AUTO, varying the number of mesh points and internal collocation points in an attempt to obtain the best possible accuracy in the AUTO calculations. Our best results from AUTO are shown in Fig. 3. This AUTO computation is done with 80 mesh points and 7 collocation points for each interval, with a stringent prescribed accuracy. We observed that the cumulative error of the solution decreased with increasing numbers of mesh points and collocation points, but saturated with 60 mesh points

Fig. 3. The same error plots as in Fig. 2, for an output from AUTO with 80 mesh points and 7 collocation points: (a) residual error; (b) cumulative error measured at the 80 mesh points; (c) cumulative error inside each interval at the collocation points.

and 5 collocation points. The solution was computed by starting with a low accuracy periodic solution at I = 40.00 and making a continuation in the current to I = 40.010. Fig. 3(a) shows the residual error for the curve computed with 80 mesh points and 7 collocation points for each interval. The residual error was computed as above by computing the Hermite interpolating polynomial p(t) for each mesh interval and evaluating the function f(p(t)) - p'(t) at ten intermediate points of each mesh interval. The largest residual errors are approximately 10^{-6}. The sum of the residual errors is 2.4876 x 10^{-5}. Fig. 3(b) shows the cumulative error at the mesh points. While y(T) - y(0) is small, approximately 10^{-8}, the error is significantly higher in some intervals, and it has an erratic pattern. The cumulative error function depends sensitively on the computational parameters of AUTO. Fig. 3(c) shows the cumulative error at collocation points in each mesh interval. Despite the fact that AUTO tries to solve the differential equation exactly at these points, the AUTO solution has a larger error (of order 10^{-8}) than our algorithm. The computed errors for the collocation points are less accurate, sometimes by an order of 10, than those for the mesh points.

We are interested in the basin of attraction for the iterative root finding of our algorithm, as well as its accuracy. The larger the basin, the more robust the algorithm will be to inaccurate starting data and the better its ability to track periodic orbits with varying parameters using continuation procedures. We performed two tests to assess the convergence of the algorithm from poor starting data. The first added random components to the starting data described above. The second varied the parameter I in the range for which periodic solutions of the Hodgkin-Huxley equations exist. In both cases, the endpoints x_{i-1}, x_i are not expected to lie on a trajectory. Consequently, the residual error increases with the degree of the interpolating polynomial and the insertion of new knot points. Thus, one can use a low order approximation to get better initial convergence from raw data. To compute high accuracy solutions, we then adaptively increased the degree of the Hermite interpolation and applied knot insertions when the error became smaller than a prescribed threshold. We used global time scaling for these tests. We tested the convergence for an ensemble of 100 random data (t'_i, x'_i) chosen as follows:

    t'_0 = t_0                                                    (4.1)
    t'_i = t'_{i-1} + (t_i - t_{i-1})(1 + v_i Δ),   i = 1, ..., N     (4.2)
    x'_i = x_i + (max(x) - min(x)) · w_i · Δ,       i = 0, ..., N - 1 (4.3)
    x'_N = x'_0                                                   (4.4)
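A Matlab sketch of how such an ensemble can be generated; the multiplicative form of the time jitter and the use of a single random factor per point are our reading of (4.2) and (4.3) and should be treated as assumptions.

    function [tp, Xp] = perturb_dcc(t, X, Delta)
    % t: 1 x (N+1) knot times, X: n x (N+1) mesh points of the reference dcc.
        N  = numel(t) - 1;
        v  = rand(1, N) - 0.5;                          % uniform on [-0.5, 0.5]
        w  = rand(1, N) - 0.5;
        tp = t(1)*ones(1, N + 1);
        for i = 1:N
            tp(i+1) = tp(i) + (t(i+1) - t(i))*(1 + v(i)*Delta);
        end
        amp = max(X, [], 2) - min(X, [], 2);            % componentwise extent of the orbit
        Xp  = X;
        Xp(:, 1:N) = X(:, 1:N) + amp.*(w*Delta);        % perturb x_0 .. x_{N-1}
        Xp(:, N+1) = Xp(:, 1);                          % close the curve: x_N = x_0
    end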


Table 1
Number of convergent cases out of 100 for Δ = 0.15

Mesh points    d = 2    d = 3    d = 5
20             90       67       22
40             100      100      100
67             100      100      100

Table 2
Convergence interval of the current I

Mesh points    d = 2       d = 3       d = 5
20             [17, 85]    [17, 80]    [17, 76]
40             [13, 88]    [13, 87]    [13, 86]
67             [13, 88]    [13, 88]    [13, 88]

where (t_i, x_i) is the approximate solution at I = 39.50, v and w are random variables ranging from -0.5 to 0.5, and Δ is a numerical factor introduced to adjust the amplitude of the variation around the periodic orbit. We establish the convergence criterion that the error be reduced to less than 10.0 after 20 iterations. The residual of the initial data is of order 10^3. With Δ set to 0.15, we observe convergence for all the test data with a moderate number of mesh points (about 40) and low degree (d = 5) (Table 1). To evaluate the behavior of the algorithm with respect to continuation, we tested its convergence with the same initial data while varying the parameter I in unit steps from I = 40. The same convergence criterion is used, and the results are given in Table 2. In AUTO, we observe convergence for I in [21, 103] when we tighten the accuracy setting and use 60 mesh points and 5 collocation points.

The periodic orbit is destroyed at I = 12.6253 by collapse with another periodic orbit created by the Hopf bifurcation at I = 16.13904. One can still compute the periodic orbit near this bifurcation. For example, we obtained convergence at I = 12.68 with 40 initial knot points, d = 3, and increasing the degree by one for every 3 iterations. When the error becomes less than 1.0 after 9 iterations, we switch to degree 15 and apply our knot insertion scheme. The period of the solution is found to be 19.0397, which is almost twice that of the initial data (T = 9.75).

We repeated the same numerical experiments using the local section method. Since the interpolation polynomial depends sensitively on the local time interval δ, the local section method has a small basin of attraction in parameter space and does not perform well with noisy data. On the other hand, the convergence of the local section method was faster, and the computation usually required a smaller number of mesh points and produced higher accuracy. We achieved good convergence by combining the two methods in the following algorithm: (1) start with global time scaling and low degree, (2) when near convergence to a solution of the low degree equations, increase the degree, (3) when the updates of the period of the numerical solution are small, switch to the local section method with high degree interpolation, and (4) refine the mesh using the criteria described above.

5. Discussion

Solution of boundary value problems for ordinary differential equations is an important and difficult numerical problem. Varied approaches have been used to solve such problems, with different methods giving superior results on different problems. We think that the method presented in this paper has a number of attractive features, especially when one wants to compute solutions to high accuracy and to compare geometric information derived from dynamical systems theory with numerical solutions of boundary value problems. Theoretically, the method is well grounded. We have proved that the finite dimensional systems of equations derived from computing a hyperbolic periodic orbit are regular. Moreover, the algorithm gives discrete closed


curves and interpolations on a fixed mesh that converge to a periodic orbit with increasing degree of the Taylor series expansions. We have not estimated the distance from our computed curves to the actual periodic orbits of a system. Doing so requires a backward error analysis: if G : Γ^1 -> Γ^0 is the operator G(γ(t)) = f(γ(t)) - γ'(t) that vanishes on periodic orbits of the vector field f and ||G(γ)|| = ε, then we would like to estimate the distance from γ to a periodic orbit π. The derivative of G is given by DG(α(t)) = Df_{γ(t)}·α(t) - α'(t). If DG(α(t)) = 0, then α is a periodic solution of the variational equation α'(t) = Df_{γ(t)}·α(t). If π is hyperbolic, the only solutions of this equation represent time translations along π. Thus, when we restrict DG to a codimension one subspace of γ by fixing a phase condition, the restricted mapping DG is injective with inverse DG^{-1} on its image. A crude estimate of the distance from γ to π is given by the product of ||DG^{-1}|| with ε. Now ||DG^{-1}|| depends upon the geometry of the periodic orbit. To calculate DG^{-1} one solves an inhomogeneous, time dependent linear system of ordinary differential equations along π. We conclude that if π is hyperbolic and ε is small, then ε is proportional to the distance from γ to π, but we do not obtain bounds for the proportionality constants.

Computation of high degree Taylor series approximations of trajectories with automatic differentiation has been used for the solution of initial value problems [3], but we know of no previous attempts to construct global methods for solving boundary value problems with these techniques. Our numerical studies of the Hodgkin-Huxley equations confirm that automatic differentiation and methods based upon the use of Taylor series expansions also provide a useful tool for the solution of boundary value problems. By constructing trajectory approximations of high degree, we are able to estimate minimal mesh sizes for computing periodic orbits with high accuracy. We view this work as a promising first step toward the construction of boundary value solvers that will compute periodic orbits of dynamical systems with improved robustness and accuracy compared with existing methods. The next steps will be to develop strategies for extracting good starting data from numerical integrations, developing strategies for mesh refinement, choosing optimal degrees of the Taylor series in the calculations, and using geometric information derived from the algorithm to detect the presence of nearby bifurcations.

References

[1] U. Ascher, R. Mattheij and R. Russell, Numerical Solution of Boundary Value Problems (Prentice Hall, Englewood Cliffs, NJ, 1988).
[2] E. Doedel, AUTO: Software for Continuation and Bifurcation Problems in Ordinary Differential Equations (CIT Press, Pasadena, 1986).
[3] A. Griewank, ODE solving via automatic differentiation and rational prediction, in: Numerical Analysis (Dundee, 1995), Pitman Res. Notes Math. Ser. 344 (1995) 36-56.
[4] A. Griewank, D. Juedes and J. Utke, ADOL-C: A Package for the Automatic Differentiation of Algorithms Written in C/C++, Version 1.6, Argonne National Laboratory, 1995.
[5] J. Guckenheimer and P. Holmes, Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields (Springer-Verlag, 1983).
[6] J. Guckenheimer and I. Labouriau, Bifurcation of the Hodgkin-Huxley equations: a new twist, Bulletin of Math. Biol. 55 (1993) 937-952.
[7] A.L. Hodgkin and A.F. Huxley, A quantitative description of membrane current and its applications to conduction and excitation in nerve, J. Physiology 117 (1952) 500-544.