Chaos, Solitons and Fractals 23 (2005) 1879–1892 www.elsevier.com/locate/chaos
Computing Lyapunov exponents of continuous dynamical systems: method of Lyapunov vectors

Jia Lu a,*, Guolai Yang b, Hyounkyun Oh c, Albert C.J. Luo d

a Department of Mechanical and Industrial Engineering, Center for Computer Aided Design, University of Iowa, Iowa City, IA 52242, USA
b Department of Mechanical Design and Automation, Nanjing University of Science and Technology, Nanjing 210094, PR China
c Applied Mathematical and Computational Sciences, University of Iowa, Iowa City, IA 52242, USA
d Department of Mechanical and Industrial Engineering, Southern Illinois University Edwardsville, Edwardsville, IL 62026-1805, USA

Accepted 5 July 2004
Abstract

This paper proposes a new method for computing the Lyapunov characteristic exponents for continuous dynamical systems. Algorithmic development is discussed and implementation details are outlined. Numerical examples illustrating the effectiveness and accuracy of the method are presented.
© 2004 Elsevier Ltd. All rights reserved.
1. Introduction

Lyapunov exponents are asymptotic measures characterizing the average rate of growth (or shrinking) of small perturbations to the solutions of a dynamical system. The concept was introduced by Lyapunov when studying the stability of non-stationary solutions of ordinary differential equations, and has been widely employed in studying dynamical systems since then. Lyapunov exponents provide quantitative measures of the sensitivity of a dynamical system's response to small changes in initial conditions. One feature of chaos is sensitive dependence on initial conditions; for a chaotic dynamical system at least one Lyapunov exponent must be positive. For non-chaotic systems, all the Lyapunov exponents are non-positive. Therefore, the presence of a positive Lyapunov exponent has often been taken as a signature of chaotic motion [1].

There are several methods for finding the numerical values of Lyapunov exponents. The simplest is perhaps Wolf's algorithm [2], the idea of which is to trace the evolution of an initial sphere of small perturbations to a nominal trajectory. To first order, the initial sphere of perturbations evolves as a time-varying ellipsoid. The Lyapunov exponents are the average rates of logarithmic stretch of the ellipsoidal principal axes. In Wolf's code, the ellipsoidal axes (referred to as the Lyapunov vectors) are approximated by the set of vectors obtained via Gram-Schmidt orthogonalization of the fundamental solution of the linearized system. Numerically, orthogonalization plays an indispensable role in computing Lyapunov exponents; without it, the vectors in the fundamental solution tend to align with the direction of the largest growth. It has been demonstrated that, for continuous dynamical systems, properly implemented
* Corresponding author. Tel.: +1 319 335 6405; fax: +1 319 335 5669. E-mail address: [email protected] (J. Lu).

0960-0779/$ - see front matter © 2004 Elsevier Ltd. All rights reserved. doi:10.1016/j.chaos.2004.07.034
continuous orthogonalization methods are superior to Wolf's algorithm [3,4]. The QR method is an important tool of continuous orthogonalization. Instead of solving for the fundamental solution directly, one performs the QR decomposition of the fundamental solution and derives differential equations for the rotation factor Q. The Lyapunov exponents can be computed from the history of Q. The advantage of continuous orthogonalization lies in the fact that one can control the accuracy of orthogonality between the Lyapunov vectors. Dieci [5] showed that the orthogonality of the rotation tensor Q in the QR method can be exactly preserved using any symplectic Runge-Kutta integrator. For small systems, Udwadia [6] proposed the use of the exponential map Q = exp(S), with which the differential equation for Q is recast into a differential equation for S. Numerical integration is conducted in the parametric space and transformed back to find Q. Thus, the method exactly preserves the orthogonality of Q. However, Udwadia's method relies on closed-form representations of the exponential map. In [6], the method is implemented in two and three dimensions (and for systems that can be reduced to the two-dimensional case), where the exponential map is readily available and relatively simple. General closed-form formulae for the exponential map in higher dimensions, although available [7], can be too complicated to yield an effective numerical algorithm. Dieci's unitary integration technique, on the other hand, does not apply to partial solutions in which only a few Lyapunov exponents are computed [8]. In this case, the QR transformation leads to a weakly skew system, and no RK scheme with constant coefficients can preserve the orthogonality of Q; see [8] for details. This paper explores an alternative approach for computing the Lyapunov exponents. Akin to the QR method, the idea is to trace the evolution of a set of vectors that remain orthogonal continuously.
The key difference is that the lengths of the Lyapunov vectors are not constrained. By relaxing the length requirement, we are able to establish a set of recursive linear differential equations for the Lyapunov vectors, and to derive numerical algorithms that automatically preserve the orthogonality of some of the vectors. In particular, the first two Lyapunov vectors are automatically orthogonal. This, among other unique features, makes the algorithm particularly suitable for computing the first few Lyapunov exponents. The development is inspired in part by a recent work of Bartler [9], who used the idea of tracking Lyapunov vectors. However, the present contribution differs from Bartler's work in several key aspects, most notably in the identification of the Lyapunov vectors and the numerical treatment of orthogonality. In Section 2, we provide a brief examination of Wolf's algorithm and the QR method to motivate the proposed method, which is discussed in Section 3. Numerical examples are presented in Section 4. Throughout this paper, matrices and vectors are denoted by boldface upper- and lower-case letters, respectively. A superposed dot means time derivative. The inner product of vectors a and b is denoted by a · b. The operator ⊗ denotes the tensor product, defined by (a ⊗ b)c = a(b · c) for any vectors a, b and c. The norm of a vector is denoted by ‖·‖, meaning ‖a‖ = √(a · a).
2. Methods for computing Lyapunov exponents

Consider an m-dimensional dynamical system

\dot{x} = f(x, t), \quad t \ge 0, \quad x \in \mathbb{R}^m, \quad x(0) = x_0,   (1)

where f: \mathbb{R}^m \times \mathbb{R} \to \mathbb{R}^m is a smooth vector function. Small perturbations to a trajectory are governed by the linearized equation

\dot{y} = \frac{\partial f}{\partial x}(x(t), t)\, y =: A(t)\, y.   (2)

Here A ∈ \mathbb{R}^{m \times m} is the Jacobian tensor. The solution of (2) with a given initial perturbation y(0) can be expressed as y(t) = Y(t) y(0), where Y(t) is called the fundamental solution, satisfying

\dot{Y} = A(t) Y, \quad Y(0) = I.   (3)

We assume that det Y(t) > 0 for all t, implying that initially separated trajectories remain separated for all time. Geometrically, the fundamental solution Y(t) can be visualized as a linear map that maps an initial m-sphere in the phase space into an evolving ellipsoid. The Lyapunov vectors are defined as the ellipsoidal principal axes. Namely, the ith Lyapunov vector is given by

p_i(t) = Y(t)\, n_i(t),   (4)

where n_i(t) is a (unit) eigenvector of the matrix \sqrt{Y^T Y}. The ith Lyapunov exponent of the system is defined as

\lambda_i = \lim_{t \to \infty} \frac{1}{t} \log \|p_i(t)\|.   (5)

The exponent characterizes the long-time average rate of exponential growth of the ith ellipsoidal principal axis.
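As a quick numerical illustration of definition (5), not taken from the paper: for a constant Jacobian A the largest Lyapunov exponent equals the largest real part of the eigenvalues of A, and it can be estimated by propagating a single perturbation and averaging its logarithmic growth. Step size, horizon, and the initial vector below are arbitrary choices for the illustration.

```python
import numpy as np

# Constant-Jacobian example; eigenvalues are -0.5 +/- (sqrt(3)/2) i,
# so the growth rate of a generic perturbation should approach -0.5.
A = np.array([[0.0, 1.0],
              [-1.0, -1.0]])

def largest_exponent(A, t_end=200.0, dt=0.01):
    y = np.array([1.0, 0.3])          # arbitrary initial perturbation
    log_growth = 0.0
    for _ in range(int(t_end / dt)):
        # classical RK4 step for ydot = A y
        k1 = A @ y
        k2 = A @ (y + 0.5 * dt * k1)
        k3 = A @ (y + 0.5 * dt * k2)
        k4 = A @ (y + dt * k3)
        y = y + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        # renormalize to avoid underflow, accumulating the log norm
        nrm = np.linalg.norm(y)
        log_growth += np.log(nrm)
        y /= nrm
    return log_growth / t_end

lam = largest_exponent(A)
```

The estimate carries an O(1/t) transient, so the finite-time value only approximates the limit in (5).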
The simplest algorithm for computing the Lyapunov directions is Wolf's algorithm [2]. The method proceeds as follows. Let Q_n = [q_{1,n}, q_{2,n}, ..., q_{k,n}] (k ≤ m) be the (approximate) Lyapunov vectors at time t_n, and assume that the vectors are orthonormal, that is, Q_n^T Q_n = I.

(1) Obtain Y_{n+1} by integrating (3), with initial condition Y(t_n) = Q_n.
(2) Obtain the vectors Q_{n+1} = [q_{1,n+1}, q_{2,n+1}, ..., q_{k,n+1}] from Y_{n+1} by performing Gram-Schmidt orthogonalization:

q_{1,n+1} = \frac{y_{1,n+1}}{\|y_{1,n+1}\|}, \quad q_{i,n+1} = \frac{y_{i,n+1} - \sum_{j=1}^{i-1} (y_{i,n+1} \cdot q_{j,n+1})\, q_{j,n+1}}{\left\| y_{i,n+1} - \sum_{j=1}^{i-1} (y_{i,n+1} \cdot q_{j,n+1})\, q_{j,n+1} \right\|}, \quad i = 1, \ldots, k.   (6)

(3) Accumulate the Lyapunov exponents from the norms of the Lyapunov vectors. Denoting N_{i,s} = \| y_{i,s} - \sum_{j=1}^{i-1} (y_{i,s} \cdot q_{j,s})\, q_{j,s} \|, then

\lambda_i = \lim_{n \to \infty} \frac{1}{t_{n+1}} \sum_{s=1}^{n+1} \log N_{i,s}.   (7)
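The three steps above can be sketched as follows, assuming for illustration a constant Jacobian and a matrix RK4 integrator for step (1); function and parameter names are ours, not the paper's.

```python
import numpy as np

# Wolf-style discrete reorthogonalization: advance Y' = A Y over each
# window from the current orthonormal frame Q, then re-orthonormalize by
# Gram-Schmidt as in (6), accumulating log N_{i,s} as in (7).
def wolf_exponents(A, k=2, h=0.01, n_steps=20000):
    m = A.shape[0]
    Q = np.eye(m)[:, :k].copy()       # orthonormal start vectors
    sums = np.zeros(k)
    for _ in range(n_steps):
        # step (1): one RK4 step of Ydot = A Y with Y(t_n) = Q
        K1 = A @ Q
        K2 = A @ (Q + 0.5 * h * K1)
        K3 = A @ (Q + 0.5 * h * K2)
        K4 = A @ (Q + h * K3)
        Y = Q + h / 6.0 * (K1 + 2.0 * K2 + 2.0 * K3 + K4)
        # steps (2)-(3): Gram-Schmidt, accumulating the pre-normalization
        # lengths N_{i,s}; Q[:, j] already holds the new q_j for j < i
        for i in range(k):
            for j in range(i):
                Y[:, i] -= (Y[:, i] @ Q[:, j]) * Q[:, j]
            N = np.linalg.norm(Y[:, i])
            sums[i] += np.log(N)
            Q[:, i] = Y[:, i] / N
    return sums / (n_steps * h)
```

For a diagonal constant Jacobian the returned values should match the diagonal entries, which are the exact exponents in that case.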
The underlying logic is similar to that of the power method [10] for the algebraic eigenvalue problem of a symmetric matrix. If only one Lyapunov vector is involved, the vector obtained from Wolf's code is expected to converge to the Lyapunov direction corresponding to the largest Lyapunov exponent. This is because the component in this direction is amplified the most, and thus outgrows all other directions. Numerically, this happens even if the initial vector does not contain a component in the first direction, since the latter will be brought in by round-off error. When multiple vectors are considered simultaneously, maintaining orthogonality is key to the success of the method (recall that the Lyapunov vectors, by definition, are mutually orthogonal). Without orthogonalization, the vectors tend to fall parallel to the first direction, leading to inaccurate results.

The QR decomposition is an important theoretical and numerical tool for computing the Lyapunov exponents. An in-depth discussion of the method can be found in many references, e.g. [3,4]. From the standpoint of numerical computation, the method may be regarded as a method of continuous orthogonalization. Instead of integrating (3) directly, one seeks the decomposition Y(t) = Q(t)R(t), where Q(t) is orthogonal and R(t) is upper triangular with positive diagonal elements. With this decomposition one can write (3) in the form

Q^T \dot{Q} + \dot{R} R^{-1} = Q^T A Q.   (8)

Since Q^T \dot{Q} is skew and \dot{R} R^{-1} is upper triangular, one reads off the differential equations

\dot{Q} = Q \Omega, \quad \text{where } \Omega_{ij} = \begin{cases} (Q^T A Q)_{ij}, & i > j, \\ 0, & i = j, \\ -(Q^T A Q)_{ji}, & i < j, \end{cases}   (9)

and

\dot{R}_{ii} R_{ii}^{-1} = (Q^T A Q)_{ii},   (10)

where R_{ii} are the diagonal elements of R. It has been established that at long time the Lyapunov exponents are related to the diagonal elements through the formula [4]

\lambda_i = \lim_{t \to \infty} \frac{1}{t} \log R_{ii}.   (11)

The factor R need not be solved for computing the Lyapunov exponents. Noticing

\frac{d}{dt} \log R_{ii} = \frac{\dot{R}_{ii}}{R_{ii}} = (Q^T A Q)_{ii} = q_i \cdot A q_i,   (12)

where q_i is the ith column vector of Q, one can accumulate the Lyapunov exponents using

\lambda_i = \lim_{t \to \infty} \frac{1}{t} \int_0^t q_i \cdot A q_i \, dt.   (13)
In computation, the Lyapunov exponents can be accumulated through a recursive formula. Let \lambda_{i,n} = \frac{1}{t_n} \int_0^{t_n} q_i \cdot A q_i \, dt. Notice that

\lambda_{i,n+1} = \frac{1}{t_{n+1}} \int_0^{t_{n+1}} q_i \cdot A q_i \, dt = \frac{1}{t_{n+1}} \int_0^{t_n} q_i \cdot A q_i \, dt + \frac{1}{t_{n+1}} \int_{t_n}^{t_{n+1}} q_i \cdot A q_i \, dt.

Replacing the first integral with t_n \lambda_{i,n} gives

\lambda_{i,n+1} = \frac{t_n}{t_{n+1}} \lambda_{i,n} + \frac{1}{t_{n+1}} \int_{t_n}^{t_{n+1}} q_i \cdot A q_i \, dt.   (14)

The QR method and Wolf's method are closely related. In matrix computations, the Gram-Schmidt process is an explicit algorithm for computing the QR decomposition. The fundamental difference lies in the manner in which the orthogonality of the Lyapunov vectors is maintained. In Wolf's code, orthogonalization is performed numerically at discrete time points. The QR method, on the other hand, seeks to maintain orthogonality by solving differential equations that encode the orthogonality continuously. One is thus allowed to control the accuracy of orthogonality in the numerical solution. As alluded to in Section 1, it is possible to derive integration schemes that automatically preserve the orthogonality of the solution vectors. However, the QR method results in a nonlinear equation for Q due to the orthogonality constraint imposed on Q. Also, the existing orthogonality-preserving algorithms [6,8] work only for limited cases. This motivates us to develop a new method for computing the Lyapunov exponents.
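The update (14) reduces to a running-average step once the window integral is approximated by a quadrature rule; the sketch below uses the trapezoidal rule, which is an illustrative assumption rather than the paper's prescription.

```python
import numpy as np

# Running-average update (14): given the previous estimate lam_n at time
# t_n and endpoint samples of g(t) = q_i . A q_i over [t_n, t_{n+1}],
# update the exponent estimate without re-integrating from t = 0.
def update_exponent(lam_n, t_n, t_np1, g_n, g_np1):
    """g_n, g_np1: values of q_i . A q_i at t_n and t_{n+1}."""
    integral = 0.5 * (g_n + g_np1) * (t_np1 - t_n)  # trapezoidal quadrature
    return (t_n / t_np1) * lam_n + integral / t_np1

# Consistency check: if q_i . A q_i is a constant c, every estimate is c.
c = 0.7
lam = 0.0
t = 0.0
for step in range(1, 11):
    lam = update_exponent(lam, t, 0.1 * step, c, c)
    t = 0.1 * step
```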
3. Method of Lyapunov vectors

As mentioned in Section 1, the proposed method seeks to derive differential equations for Lyapunov vectors that encode orthogonality in a continuum sense. The point of departure is to relax the length constraint on the Lyapunov vectors. Relaxing the length restriction allows us to establish linear differential equations for the Lyapunov vectors, and to derive algorithms that numerically preserve the orthogonality between some of the vectors. As will be shown shortly, the first two vectors (corresponding to the largest and the second largest growth directions) are automatically orthogonal regardless of the dimension of the problem. In addition, the Lyapunov vectors can be computed recursively, and each involves the solution of linear differential equations. These features make the method attractive for partial solutions which compute only the first few Lyapunov exponents. In some applications, it suffices to know a few leading Lyapunov exponents.

3.1. Evolution equations for Lyapunov vectors

Let Y = [y_1, y_2, ..., y_m] be the fundamental solution to (3). We identify a set of orthogonal vectors by the Gram-Schmidt process without normalization:

p_1 = y_1, \quad q_1 = p_1 / \|p_1\|,
p_2 = y_2 - (q_1 \cdot y_2)\, q_1, \quad q_2 = p_2 / \|p_2\|,
\quad \vdots
p_i = y_i - \sum_{j=1}^{i-1} (q_j \cdot y_i)\, q_j, \quad q_i = p_i / \|p_i\|,   (15)

where i = 1, ..., k. It is noted that k ≤ m, highlighting the intended application to partial solutions. As in the QR and Wolf's methods, it is expected that at long time the vectors p_i approach the Lyapunov vectors. Introducing

\bar{Q} = [p_1, p_2, \ldots, p_k],   (16)

the above orthogonalization process can be written as a QR-bar decomposition

Y = \bar{Q}\, \bar{R},   (17)

where \bar{Q} \in \mathbb{R}^{m \times k}, the columns of which are mutually orthogonal but not necessarily orthonormal, and \bar{R} \in \mathbb{R}^{k \times k} is an upper triangular matrix with diagonal elements \bar{R}_{ii} = 1. Substituting (17) into (3) and post-multiplying both sides by \bar{R}^{-1} gives

\dot{\bar{Q}} + \bar{Q}\, \dot{\bar{R}} \bar{R}^{-1} = A \bar{Q}.   (18)
Let a_{ij} be the entries of \dot{\bar{R}} \bar{R}^{-1}, and notice that a_{ii} = 0 (since \bar{R}_{ii} = 1); we can write (18) as

\dot{p}_1 = A p_1,
\dot{p}_2 + a_{12} p_1 = A p_2,
\quad \vdots
\dot{p}_i + \sum_{j=1}^{i-1} a_{ji} p_j = A p_i, \quad i \le k.   (19)

We then eliminate a_{ij} from (19) and write the equations as differential equations for p_i. Starting from (19)_2 and taking the inner product of both sides with p_1,

p_1 \cdot \dot{p}_2 + a_{12} \|p_1\|^2 = p_1 \cdot A p_2.   (20)

By construction, p_1 \cdot p_2 = 0, hence p_1 \cdot \dot{p}_2 = -p_2 \cdot \dot{p}_1 = -p_2 \cdot A p_1 = -p_1 \cdot A^T p_2. Therefore

a_{12} = \frac{1}{\|p_1\|^2} \left( p_1 \cdot A p_2 + p_1 \cdot A^T p_2 \right).   (21)

Substituting a_{12} into (19)_2 and making use of q_1 = p_1 / \|p_1\| yields

\dot{p}_2 = [A - q_1 \otimes q_1 A - q_1 \otimes q_1 A^T]\, p_2 =: A_2 p_2.   (22)

Proceeding to (19)_3, eliminating a_{13} by the same procedure gives

\dot{p}_3 + a_{23} p_2 = A_2 p_3.   (23)

Further, eliminating a_{23} from (23),

\dot{p}_3 = [A_2 - q_2 \otimes q_2 A_2 - q_2 \otimes q_2 A_2^T]\, p_3 =: A_3 p_3.   (24)

Continuing the process inductively, we find

\dot{p}_i = A_i p_i \quad \text{(no summation)},   (25)

where

A_i = A_{i-1} - q_{i-1} \otimes q_{i-1} A_{i-1} - q_{i-1} \otimes q_{i-1} A_{i-1}^T, \quad A_1 = A.   (26)
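The operator recursion (26) is straightforward to build numerically; the sketch below (ours, for illustration) constructs A_1, ..., A_k from a given Jacobian and unit directions. A useful check, exercised in the test, is that the resulting operators satisfy the identity (27) for arbitrary p_i when q_{i-1} is taken along p_{i-1}.

```python
import numpy as np

# Recursion (26): A_1 = A and
# A_i = A_{i-1} - (q_{i-1} (x) q_{i-1}) A_{i-1} - (q_{i-1} (x) q_{i-1}) A_{i-1}^T,
# where (a (x) b) denotes the outer (tensor) product.
def build_operators(A, qs):
    """qs: list of unit vectors q_1, ..., q_{k-1}; returns [A_1, ..., A_k]."""
    ops = [A]
    for q in qs:
        P = np.outer(q, q)                       # q (x) q
        Ai = ops[-1] - P @ ops[-1] - P @ ops[-1].T
        ops.append(Ai)
    return ops
```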
Interestingly, the equation for the ith vector involves vectors p_1 through p_i and is linear and homogeneous of degree one in p_i. Eqs. (25) and (26), to the best of the authors' knowledge, have never been reported in the literature.

3.2. Orthogonality between vectors

Next, we derive integration algorithms that preserve the orthogonality between some of the vectors. Notice first that the operators in (25) satisfy the identity
p_{i-1} \cdot \left( A_{i-1}^T + A_i \right) p_i = 0, \quad i = 2, \ldots, k,   (27)

for any p_{i-1}, p_i and A_{i-1}, A_i. This identity can be readily verified using (26). With this we then prove that any symplectic Runge-Kutta method preserves the orthogonality between p_{i-1} and p_i for i = 2, ..., k. Consider an s-stage RK method for (25). Let p_{i,n} be the discrete value of p_i at time t_n and let p_{i,nj} be the corresponding values at the intermediate points. Applying the RK scheme [11] to the ith equation in (25),

p_{i,n+1} = p_{i,n} + h \sum_{j=1}^{s} b_j A_i(t_j)\, p_{i,nj},
p_{i,nj} = p_{i,n} + h \sum_{l=1}^{s} a_{jl} A_i(t_l)\, p_{i,nl},   (28)

where h = t_{n+1} - t_n, t_j = t_n + c_j h, and a_{jl}, b_j, c_j are the integration coefficients as defined in [11]. For the sake of notational clarity, the dependence of A_i on q_1, ..., q_{i-1} is omitted. An s-stage RK scheme is symplectic if

b_j a_{jl} + b_l a_{lj} - b_j b_l = 0, \quad j, l = 1, \ldots, s.   (29)
See [12] for details. A straightforward manipulation shows that

p_{i-1,n+1} \cdot p_{i,n+1} = p_{i-1,n} \cdot p_{i,n} + h \sum_{j=1}^{s} b_j\, p_{i-1,nj} \cdot \left( A_{i-1}^T(t_j) + A_i(t_j) \right) p_{i,nj} - h^2 \sum_{j=1}^{s} \sum_{l=1}^{s} \left( b_j a_{jl} + b_l a_{lj} - b_j b_l \right) p_{i-1,nj} \cdot A_{i-1}^T(t_j) A_i(t_l)\, p_{i,nl}.   (30)

The h²-term drops out due to the symplectic condition (29). The h-term is zero due to the identity (27). It follows that p_{i-1,n+1} \cdot p_{i,n+1} = p_{i-1,n} \cdot p_{i,n}. Therefore, if p_{i-1,n} \cdot p_{i,n} = 0, then p_{i-1,n+1} \cdot p_{i,n+1} = 0. In particular, if p_1 and p_2 are initially orthogonal, they remain orthogonal for all time.

3.3. Lyapunov exponents

Upon solving for p_i, the corresponding Lyapunov exponents are computed according to

\lambda_i = \lim_{t \to \infty} \frac{1}{t} \log \|p_i\|.   (31)

Similar to the QR method, we can recast (31) into a form that depends only on the direction of the Lyapunov vectors. To this end, introduce

\lambda_i(t) = \frac{1}{t} \log \|p_i\|,   (32)
so that \lambda_i = \lim_{t \to \infty} \lambda_i(t). Multiplying both sides of (32) by t, taking the time derivative of the resulting equation, and invoking (25), we obtain

\frac{d}{dt} \left( t \lambda_i(t) \right) = \frac{d}{dt} \log \|p_i\| = q_i \cdot A_i q_i, \quad q_i = \frac{p_i}{\|p_i\|}.   (33)

This expression naturally leads to a recursive formula parallel to (14). Integrating (33) over the time interval [t_n, t_{n+1}] yields

t_{n+1} \lambda_{i,n+1} - t_n \lambda_{i,n} = \int_{t_n}^{t_{n+1}} q_i \cdot A_i q_i \, dt.   (34)

Therefore,

\lambda_{i,n+1} = \frac{t_n}{t_{n+1}} \lambda_{i,n} + \frac{1}{t_{n+1}} \int_{t_n}^{t_{n+1}} q_i \cdot A_i q_i \, dt.   (35)

The integral on the right-hand side can be computed using quadrature rules consistent with the RK method. Upon convergence, we set \lambda_i = \lambda_{i,n+1}.

3.4. Renormalization of Lyapunov vectors

If \lambda_i is positive or negative, the magnitude of the corresponding Lyapunov vector will grow or shrink exponentially and quickly lead to numerical overflow or underflow. The problem can easily be resolved by normalizing the vectors at time steps whenever their magnitudes exceed a given upper or lower bound. Renormalization at any time step will not affect the values of the Lyapunov exponents because the latter depend only on the direction of the Lyapunov vectors. One can easily see that q_i is not affected by the scaling transformation p_i → a p_i, since the differential equation for p_i is invariant under this transformation. Numerically, if p_{i,n} → a p_{i,n} at time t_n, any decent numerical integrator will yield p_{i,n+1} → a p_{i,n+1}, since the differential equation is linear and homogeneous in p_i. Therefore q_i is also invariant numerically. The computation procedure is summarized below. Given p_{i,n}, i = 1, ..., k, at time t_n:

(1) Integrate \dot{p}_i = A_i(t) p_i sequentially for i = 1, 2, ..., k using an s-stage symplectic RK scheme.
(2) Compute \int_{t_n}^{t_{n+1}} q_i \cdot A_i q_i \, dt using a quadrature rule consistent with the RK method, and update \lambda_i.
(3) Check the norms of the Lyapunov vectors. Normalize any vector whose magnitude exceeds the given lower or upper bound. Since the algorithm preserves the inner product between some vectors, normalizing a vector will affect the preserved quantities; for this reason, a re-orthogonalization is performed whenever a vector is re-scaled, to ensure that the inner products remain small.

The solution proceeds recursively; solving for the ith Lyapunov exponent requires the knowledge of the Lyapunov vectors up to the (i-1)th, but not of the (i+1)th onward. When the ith equation is integrated, the matrices needed for solving the next equation are formed. This recursive structure, and the fact that any two consecutive Lyapunov vectors remain orthogonal, make the method attractive for computing the first few Lyapunov exponents. Notably, only linear differential equations are involved in the computations.
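A simplified end-to-end sketch of the procedure, for a constant Jacobian and k = 2, using the implicit midpoint rule (s = 1, b_1 = 1, a_{11} = c_1 = 1/2). For a linear system the midpoint step is p_{n+1} = (I - (h/2)A_i)^{-1}(I + (h/2)A_i) p_n with stage value (p_n + p_{n+1})/2; the exponent update follows (35) with midpoint quadrature. Details such as rebuilding A_2 from the midpoint stage of p_1 and renormalizing every step are assumptions of this sketch, not prescriptions of the paper.

```python
import numpy as np

def midpoint_step(Ai, p, h):
    # implicit midpoint rule for pdot = Ai p (linear, so solved directly)
    m = Ai.shape[0]
    L = np.eye(m) - 0.5 * h * Ai
    Rm = np.eye(m) + 0.5 * h * Ai
    p_new = np.linalg.solve(L, Rm @ p)
    return p_new, 0.5 * (p + p_new)      # step result and stage value

def two_exponents(A, h=0.01, n_steps=20000):
    m = A.shape[0]
    p1, p2 = np.eye(m)[:, 0].copy(), np.eye(m)[:, 1].copy()
    lam = np.zeros(2)
    t = 0.0
    for _ in range(n_steps):
        p1_new, p1_mid = midpoint_step(A, p1, h)
        q1_mid = p1_mid / np.linalg.norm(p1_mid)
        P = np.outer(q1_mid, q1_mid)
        A2 = A - P @ A - P @ A.T          # operator (26) at the stage point
        p2_new, p2_mid = midpoint_step(A2, p2, h)
        # exponent update (35) with midpoint quadrature of q_i . A_i q_i
        g1 = q1_mid @ A @ q1_mid
        q2_mid = p2_mid / np.linalg.norm(p2_mid)
        g2 = q2_mid @ A2 @ q2_mid
        t_new = t + h
        lam = (t * lam + h * np.array([g1, g2])) / t_new
        t = t_new
        # step (3): renormalize; directions, hence exponents, are unaffected
        p1 = p1_new / np.linalg.norm(p1_new)
        p2 = p2_new - (p2_new @ p1) * p1
        p2 /= np.linalg.norm(p2)
    return lam
```

For the 2 x 2 constant Jacobian of Section 4.1 both exponents should approach -0.5, and their sum should approach tr A = -1.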
4. Numerical examples

To assess its capability and numerical properties, the proposed method is tested against selected systems for which some features of the Lyapunov exponents are known. In Section 4.1, the method is applied to linear systems with constant Jacobian. This is one of the few cases in which the exact values of the Lyapunov exponents are known. In Section 4.2, the method is tested for a partial solution in which the first two Lyapunov exponents of a six degree-of-freedom system are computed. In Section 4.3, a Lorenz equation is used as an example of a general nonlinear system. Finally, the method is applied to a model of a mechanical oscillator under the influence of a short-range van der Waals force. Due to the presence of a strong singular force, the Lyapunov exponents of this system are notoriously difficult to extract. Unless otherwise stated, numerical integration is performed using the mid-point rule, the lowest-order symplectic RK scheme, corresponding to s = 1, b_1 = 1, a_{11} = c_1 = 0.5 in (28).

4.1. System of constant Jacobian

If A is constant, the Lyapunov exponents equal the real parts of the eigenvalues of A [9]. This is one of the few cases where the exact Lyapunov exponents are known. In the two-dimensional case, Udwadia and co-authors derived a closed-form solution for the time history of λ_i(t) [6]. Here, we consider an example for which

A = \begin{bmatrix} 0 & 1 \\ -1 & -1 \end{bmatrix}.

The eigenvalues of A are -0.5 \pm (\sqrt{3}/2) i. The history of the Lyapunov exponents in the time interval [0, 50], computed using a step size Δt = 0.1, is plotted in Fig. 1. The curves are virtually identical to the closed-form solutions by Udwadia, as
Fig. 1. Evolution of Lyapunov exponents for the 2 × 2 system.
compared in Fig. 2. The computed Lyapunov exponents clearly converge towards the exact value -0.5. Results at t = 100 and t = 200 are listed in Table 1. Numerical values of p_1 · p_2 / (‖p_1‖ ‖p_2‖) versus time are plotted in Fig. 3. Note that the algorithm preserves the inner product p_1 · p_2, not the normalized counterpart. Nevertheless, the normalized values provide a good measure of the error in orthogonality. As seen from Fig. 3, orthogonality is maintained to within machine precision.

The sum of the Lyapunov exponents can also be an indicator of accuracy. It is known that for a general system

\sum_{i=1}^{m} \lambda_i = \lim_{t \to \infty} \frac{1}{t} \int_0^t \operatorname{tr} A(t) \, dt;

see e.g. [4] for a proof. If tr A is constant, we have

\sum_{i=1}^{m} \lambda_i = \operatorname{tr} A.

For this problem, λ_1 + λ_2 = -1. The numerical sums are plotted in Fig. 4. It is clear that the exact values are replicated except at the initial point, where the Lyapunov exponents are set to zero.

4.2. Partial solution

One of the unique features of the proposed method is its convenience in computing a few leading Lyapunov exponents. As an example, we consider a 6 × 6 constant Jacobian

A = \begin{bmatrix}
1.9501 & 0.4565 & 0.9218 & 0.4103 & 0.1389 & 0.0153 \\
0.2311 & 1.0185 & 0.7382 & 0.8936 & 0.2028 & 0.7468 \\
0.6068 & 0.8214 & 1.1763 & 0.0579 & 0.1987 & 0.4451 \\
0.4860 & 0.4447 & 0.4057 & 1.3529 & 0.6038 & 0.9318 \\
0.8913 & 0.6154 & 0.9355 & 0.8132 & 1.2722 & 0.4660 \\
0.7621 & 0.7919 & 0.9169 & 0.0099 & 0.1988 & 1.4186
\end{bmatrix}

and compute the first two Lyapunov exponents. The eigenvalues of A are

eig(A) = [3.9187, 1.3305, 1.1537, 0.8062 ± 0.3476i, 0.1734].

The exact values of the first two Lyapunov exponents are λ_1 = 3.9187, λ_2 = 1.3305. The problem is solved using the present method and Wolf's code, with a step size Δt = 0.1. Numerical values at t = 50 and t = 100 are listed in Table 2.
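Since the exponents of a constant-Jacobian system are the real parts of the eigenvalues, the reference values quoted above can be reproduced directly. The sketch below is a cross-check using the matrix entries as printed in this subsection.

```python
import numpy as np

# The 6 x 6 constant Jacobian of Section 4.2, entries as printed there.
A = np.array([
    [1.9501, 0.4565, 0.9218, 0.4103, 0.1389, 0.0153],
    [0.2311, 1.0185, 0.7382, 0.8936, 0.2028, 0.7468],
    [0.6068, 0.8214, 1.1763, 0.0579, 0.1987, 0.4451],
    [0.4860, 0.4447, 0.4057, 1.3529, 0.6038, 0.9318],
    [0.8913, 0.6154, 0.9355, 0.8132, 1.2722, 0.4660],
    [0.7621, 0.7919, 0.9169, 0.0099, 0.1988, 1.4186],
])

# Exact exponents = eigenvalue real parts, sorted in decreasing order;
# the first two are the reference values for the partial solution.
real_parts = np.sort(np.linalg.eigvals(A).real)[::-1]
lam1, lam2 = real_parts[0], real_parts[1]
```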
Fig. 2. Comparison of the Lyapunov exponents with the closed-form solution from Udwadia's exp(S) method.
Table 1
Values of Lyapunov exponents at selected time points

t      This method              Closed-form [6]
       λ1        λ2             λ1        λ2
100    -0.4979   -0.5021        -0.4979   -0.5021
200    -0.4991   -0.5009        -0.4991   -0.5009
Fig. 3. Error in orthogonality (of order 10⁻¹⁵).
Fig. 4. Time history of the sum of Lyapunov exponents.
The current method evidently yields more accurate predictions. The error in λ_1 at t = 100 is an order of magnitude smaller than that of Wolf's method. Fig. 5 presents the time history of the Lyapunov exponents computed using the present method and Wolf's algorithm. The predicted λ_2 is close to the exact solution; however, Wolf's method converges
Table 2
Values of the first two Lyapunov exponents at selected time points

t      This method                                  Wolf's code
       λ1       Error (%)   λ2       Error (%)      λ1       Error (%)   λ2       Error (%)
50     3.9082   0.2652      1.3358   0.4058         3.9590   1.0318      1.3380   0.5657
100    3.9134   0.1325      1.3331   0.2027         3.9645   1.1710      1.3352   0.3566

Errors are computed as |λ - λ_exact| / λ_exact × 100%.
Fig. 5. Evolution of two leading Lyapunov exponents of the 6 × 6 system. Comparison with Wolf's method.
Fig. 6. Error in orthogonality between the first two Lyapunov vectors (of order 10⁻¹⁶).
to a slightly different value for λ_1. Orthogonality between the vectors is also maintained at machine precision, as shown in Fig. 6.

4.3. Lorenz system

Consider the Lorenz system

\dot{x}_1 = \sigma (x_2 - x_1), \quad \dot{x}_2 = a x_1 - x_1 x_3 - x_2, \quad \dot{x}_3 = x_1 x_2 - b x_3,

with initial condition (x_1(0), x_2(0), x_3(0)) = (0, 1, 0), and σ = 16, b = 4 and a = 45.92. The same system was investigated by Udwadia and co-author in [6], where the Lyapunov exponents were computed using the exponential-map method with a fourth-order RK scheme. To compare with Udwadia's results, the fourth-order Gauss RK scheme (known to be symplectic) is employed here, and step sizes of Δt = 0.03 and Δt = 0.003 are used. The computed values of the Lyapunov exponents and their sum at t = 1000 are tabulated in Table 3. For this system, the trace of the Jacobian is constant and equals -21. As is seen from the table, the current method appears to be less sensitive to the step size. Indeed, the current method predicts converged values at around t = 300 for both step sizes. It is known that, for the Lorenz system, one of the Lyapunov exponents is zero. Here, λ_2 is close to zero, further validating the results. Udwadia's method nevertheless is more accurate in computing the sum of the Lyapunov exponents. This is because the orthogonality of Q is preserved, whereas in the present method only the orthogonality between the pairs (p_1, p_2) and (p_2, p_3) is preserved. Recall that

\lambda_1(t) + \lambda_2(t) + \lambda_3(t) = \frac{1}{t} \int_0^t \left( q_1 \cdot A q_1 + q_2 \cdot A q_2 + q_3 \cdot A q_3 \right) dt,

and notice

q_1 \cdot A q_1 + q_2 \cdot A q_2 + q_3 \cdot A q_3 - \operatorname{tr} A = A : (Q Q^T - I),

where ":" means contraction between two second-order tensors. It is clear that in Udwadia's approach the sum of the Lyapunov exponents can be computed to within machine precision if tr A is constant. Time histories of the Lyapunov exponents, computed using Δt = 0.03 and Δt = 0.003, are shown in Figs. 7 and 8. Clearly, the results converge quickly in both cases.

4.4. Lennard-Jones oscillator

Finally, we apply the method to compute the Lyapunov exponents of the forced vibration of a small-scale oscillator under the influence of the van der Waals force. This oscillator, introduced in [13], models a micro-dynamical system which involves a short-range surface force described by a Lennard-Jones potential. The (non-dimensionalized) Hamilton equation for this system is given as

\dot{x}_1 = x_2, \quad \dot{x}_2 = -x_1 - \frac{d}{(x_1 + a)^2} + \frac{d R^6}{30 (x_1 + a)^8} - c x_2 + C \cos \Omega t.
Table 3
Lyapunov exponents of the Lorenz system

Δt       This method                                       Udwadia [6]
         λ1       λ2        λ3         Σλi                 λ1       λ2        λ3         Σλi
0.03     1.4838   -0.0727   -22.3943   -20.9832            1.5792   -1.1311   -22.4481   -21.0000
0.003    1.4921   -0.0051   -22.4865   -20.9995            1.4898   -0.0048   -22.4946   -21.0000

Udwadia's results in the second row are computed using Δt = 0.001.
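As a cross-check of the Lorenz setup above (a sketch of ours, not code from the paper), the right-hand side and Jacobian can be written down, and the constant trace -(σ + 1 + b) = -21 verified:

```python
import numpy as np

# Lorenz system of Section 4.3 and its Jacobian; the trace is the
# constant -(sigma + 1 + b) = -21, which the sum of the exponents
# in Table 3 should reproduce.
sigma, b, a = 16.0, 4.0, 45.92

def f(x):
    x1, x2, x3 = x
    return np.array([sigma * (x2 - x1),
                     a * x1 - x1 * x3 - x2,
                     x1 * x2 - b * x3])

def jacobian(x):
    x1, x2, x3 = x
    return np.array([[-sigma, sigma, 0.0],
                     [a - x3, -1.0, -x1],
                     [x2, x1, -b]])

tr = np.trace(jacobian(np.array([0.0, 1.0, 0.0])))
```

The test also verifies the analytic Jacobian against a central finite difference of f.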
Fig. 7. Time history of Lyapunov exponents of the Lorenz system, Δt = 0.03.
Fig. 8. Time history of Lyapunov exponents of the Lorenz system, Δt = 0.003.
The Jacobian of the linearized system is derived as

A = \begin{bmatrix} 0 & 1 \\ -1 + \dfrac{2d}{(x_1 + a)^3} - \dfrac{4 d R^6}{15 (x_1 + a)^9} & -c \end{bmatrix}.

Our recent experience with small-scale oscillators [14] indicates that, when such a system oscillates near the singular point, it is very hard to compute the Lyapunov exponents due to the severe singularity in the force term. We are interested in testing the method against motions for which the singular point is frequently approached. For this purpose we adopt the following parameters:

R = 0.3, \quad d = \frac{4}{27}, \quad a = 1.2, \quad \Omega = 1, \quad C = 2, \quad c = 0.04.

It has been demonstrated in [13] that the motion under the given parameters is chaotic. The phase-space trajectory corresponding to the zero initial condition is shown in Fig. 9. Clearly, the oscillator approaches the singular point x_1 = -1.2 frequently, causing drastic changes in velocity as if it were bouncing off a rigid surface.
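The oscillator right-hand side and the Jacobian above can be cross-checked numerically; the sketch below (ours, not the paper's) encodes both with the parameter values of this subsection. The trace of the Jacobian is the constant -c, consistent with λ_1 + λ_2 = -0.04.

```python
import numpy as np

# Lennard-Jones oscillator of Section 4.4 with the parameters given there.
R, d, a, Omega, C, c = 0.3, 4.0 / 27.0, 1.2, 1.0, 2.0, 0.04

def f(x, t):
    x1, x2 = x
    force = -x1 - d / (x1 + a) ** 2 + d * R ** 6 / (30.0 * (x1 + a) ** 8)
    return np.array([x2, force - c * x2 + C * np.cos(Omega * t)])

def jacobian(x):
    # derivative of the force with respect to x1; the (2,2) entry is -c
    x1 = x[0]
    j21 = (-1.0 + 2.0 * d / (x1 + a) ** 3
           - 4.0 * d * R ** 6 / (15.0 * (x1 + a) ** 9))
    return np.array([[0.0, 1.0],
                     [j21, -c]])
```

The test checks the analytic (2,1) entry against a central finite difference of the force, away from the singular point x_1 = -a.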
Fig. 9. Phase-space trajectory corresponding to zero initial condition.
The Lyapunov exponents are computed with a step size Δt = 0.005. For this system, the trace of the Jacobian is constant and we expect λ_1 + λ_2 = -0.04. At t = 500, the computed values are

\lambda_1 = 0.031387, \quad \lambda_2 = -0.071365, \quad \lambda_1 + \lambda_2 = -0.039978.

The results agree reasonably well with the values in [13], which gives λ_1 = 0.03956, λ_2 = -0.07956. Due to the severe singularities, Udwadia's exp(S) method fails in integrating the nonlinear differential equation for the rotation angle, at least for the mid-point rule used here. Fig. 10 shows the evolution of the Lyapunov exponents versus time. Despite the periodic spikes that appear in the time history (occurring when the singular point is approached), the Lyapunov exponents clearly converge.
Fig. 10. Time evolution of Lyapunov exponents for the Lennard-Jones oscillator.
5. Concluding remarks

We have developed a new method for computing the Lyapunov exponents of continuous dynamical systems. The idea of the method is akin to that of the QR method; it seeks to extract the Lyapunov exponents by tracking the evolution of a set of orthogonal vectors. In the QR method, the vectors are restricted to be orthonormal, and thus are parameterized by a rotation tensor. In the proposed method, the vectors are orthogonal but not necessarily orthonormal. By relaxing the length requirement, we established a set of recursive linear differential equations for the evolution of these vectors. In addition, we derived numerical integration schemes that automatically preserve the orthogonality between any two consecutive vectors. The method is applicable to both partial and full solutions; it is particularly effective for computing the first few Lyapunov exponents of large-scale dynamical systems such as those arising from finite element modeling. Since some of the Lyapunov vectors remain orthogonal, the method is expected to be more accurate than Wolf's algorithm in the case of partial solutions. The method appears to be stable even for strongly nonlinear systems, perhaps because the computation involves only linear differential equations.
Acknowledgments Part of the work was conducted while the second author (G. Yang) was visiting the Center of Computer Aided Design, the University of Iowa. G. Yang gratefully acknowledges the financial support from the State Scholarship Fund of the Ministry of Education of China.
References

[1] Parker T, Chua L. Practical numerical algorithms for chaotic systems. Berlin: Springer; 1989.
[2] Wolf A, Swift J, Swinney H, Vastano J. Determining Lyapunov exponents from a time series. Physica D 1985;16:285-317.
[3] Goldhirsch I, Sulem P, Orszag S. Stability and Lyapunov stability of dynamical systems: a differential approach and a numerical method. Physica D 1987;27:311-37.
[4] Dieci L, Russell R, Van Vleck E. On the computation of Lyapunov exponents for continuous dynamical systems. SIAM J Numer Anal 1997;34:403-23.
[5] Dieci L, Russell R, Van Vleck E. Unitary integrators and application to continuous orthogonalization techniques. SIAM J Numer Anal 1994;31:261-81.
[6] Udwadia F, von Bremen H. An efficient and stable approach for computation of Lyapunov characteristic exponents of continuous dynamical systems. Appl Math Comput 2001;121:219-58.
[7] Lu J. Exact expansions of arbitrary tensor functions f(A) and their derivatives. Int J Solids Struct 2004;41:337-49.
[8] Dieci L, Van Vleck E. Computation of a few Lyapunov exponents for continuous and discrete dynamical systems. Appl Numer Math 1995;17:275-91.
[9] Bartler T. Lyapunov exponents and chaos investigation. Ph.D. thesis, Department of Aerospace Engineering, The University of Cincinnati, Cincinnati, OH, 1999.
[10] Parlett B. The symmetric eigenvalue problem. Upper Saddle River, NJ: Prentice-Hall; 1998.
[11] Hairer E, Wanner G. Solving ordinary differential equations I. Berlin: Springer; 1987.
[12] Sanz-Serna J, Calvo M. Numerical Hamiltonian problems. London: Chapman & Hall; 1994.
[13] Basso M, Giarré L. Complex dynamics in a harmonically excited Lennard-Jones oscillator: microcantilever-sample interaction in scanning probe microscopes. ASME J Dyn Syst Measure Control 2000;122:240-5.
[14] Yang G, Lu J, Luo A. On the computation of Lyapunov exponents for forced vibration of a Lennard-Jones oscillator. Chaos, Solitons & Fractals, in press.