Mathematics and Computers in Simulation 37 (1994) 431-450
Automatic algorithm for the numerical inverse scattering transform of the Korteweg-de Vries equation

A.R. Osborne

Istituto di Fisica Generale dell'Università, Via Pietro Giuria 1, Torino 10125, Italy
Abstract

The inverse scattering transform (IST) for the periodic Korteweg-de Vries (KdV) equation in the μ-function representation is considered and numerical formulations are given (1) for determining the direct scattering transform spectrum of an input discrete wave train and (2) for reconstructing the wave train from the spectrum via the inverse scattering problem. The advantage of the present method over previous approaches is that the numerical computations are automatic, i.e. one is guaranteed that the algorithm will search out and find all the nonlinear modes (to within the input numerical precision) of a given arbitrary, N degree-of-freedom wave train. The algorithms are most appropriate for the time series analysis of measured and computed data. One is thus numerically able to analyze an input time series with M discrete points: (1) to construct the IST spectrum, (2) to determine the N = M/2 hyperelliptic-function oscillation modes, (3) to reconstruct the input wave train by a linear superposition law and (4) to nonlinearly filter the input wave train. Numerical details of the algorithm are discussed and an example for which N = 128 is given.
1. Introduction
The Korteweg-de Vries equation is the classical prototypical nonlinear, partial differential equation for describing small-but-finite amplitude, long waves in shallow water [1-3]:

    η_t + c_0 η_x + α η η_x + β η_xxx = 0.  (1.1)

The constant coefficients are given by c_0 = (gh)^{1/2}, α = 3c_0/2h and β = c_0 h²/6; g is the acceleration of gravity and h is the depth. Subscripts in (1.1) refer to partial derivatives of the surface elevation η(x, t) with respect to x and t. Herein it is assumed that (1.1) is governed by periodic boundary conditions, η(x, t) = η(x + L, t), for L the period. The general spectral solution to (1.1) was first found for infinite-line boundary conditions [4], a technique since christened the inverse scattering transform (IST) [5-9]. The nonlinear Fourier structure of the KdV equation with periodic boundary conditions has also been found [10-12].
Periodic IST allows solutions to the KdV equation to be constructed by a linear superposition of the so-called hyperelliptic (μ-function) nonlinear oscillation modes. The hyperelliptic modes are generalizations of the sine waves of the associated Fourier series solution to the linearized problem. While an alternative formulation of the inverse problem for periodic IST exists in terms of the θ-function representation, herein I focus on the μ-function representation.

The present work builds on previous progress in the numerical implementation of the inverse scattering transform [13-19]. Special emphasis is given herein to space or time series analysis methods. These contrast with numerical methods in that the latter focus on precise computation of specific results, while time series analysis methods generally focus on a rather complete determination of all relevant spectral information in both the direct and inverse problems. Here I discuss a new approach, based upon a variable step algorithm, for the automatic control of the numerical computations. The latter approach provides a way to determine all of the discrete eigenvalues (zeros) in the spectral problem, to within the input numerical precision. If in the search process certain eigenvalues are missed, the algorithm automatically returns to find them. In this sense the computer code is 'worry free', i.e. the user need not be an expert in inverse scattering theory to use the approach in the analysis of data.

The present paper is organized as follows. Section 2 gives an overview of periodic IST for KdV in the μ-representation. Section 3 gives certain facts about periodic IST useful for physical understanding and numerical implementation. Section 4 describes a numerical discretization procedure for the construction of the IST spectrum and for determination of the hyperelliptic inverse problem. Section 5 discusses a method which automatically controls the search for and determination of the eigenvalues in the spectrum. Finally in Section 6 I discuss the application of the algorithm for the space or time series analysis of data and give a numerical example in which a wave train with N = 128 degrees of freedom is analyzed. I furthermore discuss periodic IST as a tool for the nonlinear filtering of data.
2. Periodic IST for KdV in the μ-function representation

The general spectral solution to the periodic KdV equation (1.1) may be written as a linear superposition of nonlinearly interacting, nonlinear waves which are referred to as hyperelliptic functions, μ_j(x; x_0, t) [10-12] (see Fig. 1):

    λη(x, t) = -E_1 + Σ_{j=1}^{N} [2μ_j(x; x_0, t) - E_{2j} - E_{2j+1}].  (2.1)
Here λ = α/6β and the E_k are constant eigenvalues defined below. Eq. (2.1) may be thought of as a linear superposition law. Since (2.1) reduces to an ordinary linear Fourier series in the limit of small amplitude wave motion [20], we refer to (2.1) as a nonlinear Fourier series. The hyperelliptic functions μ_j(x, t) are the nonlinear oscillation modes of periodic KdV, i.e. they are the 'sine waves' of the nonlinear Fourier series (2.1) [21]. The μ_j(x, t) are however generally quite non-sinusoidal in shape (Fig. 1) and implicitly contain the nonlinear dynamics of the solitons and radiation solutions of periodic KdV.
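As an illustration of the superposition law, the following sketch (mine, not the paper's code; the array layout and argument names are assumptions) evaluates (2.1) on a discrete grid of M points, given the 2N+1 band edges and the N hyperelliptic modes:

    import numpy as np

    def kdv_superposition(mu, E, lam):
        # Evaluate the linear superposition law (2.1) on a spatial grid.
        # Assumed (hypothetical) layout:
        #   mu  -- array (N, M): hyperelliptic modes mu_j(x_n, 0)
        #   E   -- array of the 2N+1 band edges E_1, ..., E_{2N+1}
        #   lam -- the constant lambda = alpha/(6*beta)
        # Returns eta(x_n, 0) as an array of length M.
        E = np.asarray(E)
        N, M = mu.shape
        lam_eta = -E[0] * np.ones(M)            # the -E_1 term of (2.1)
        for j in range(N):                      # 0-based mode index
            # E[2j+1], E[2j+2] are the band edges E_{2(j+1)}, E_{2(j+1)+1}
            lam_eta += 2.0 * mu[j] - E[2 * j + 1] - E[2 * j + 2]
        return lam_eta / lam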
Fig. 1. Typical spectral representation of a particular solution of the KdV equation. Shown in (a) is a solution to KdV. In (b) is the Floquet discriminant (the trace of the monodromy matrix as a function of E = k²) which has exactly six open bands (degrees of freedom) in the spectrum. Shown in (c) are the six hyperelliptic function oscillation modes. The linear superposition of these six modes gives the exact solution to KdV shown in (a).
The μ_j(x, t) evolve in space x, for a fixed value of time t = 0, according to the following system of coupled, nonlinear, ordinary differential equations (ODEs):

    dμ_j/dx = 2σ_j R^{1/2}(μ_j) / ∏_{k=1, k≠j}^{N} (μ_j - μ_k)  (2.2)

and

    R(μ_j) = ∏_{k=1}^{2N+1} (μ_j - E_k).  (2.3)
The σ_j = ±1 specify the signs of the square root of the function R(μ_j). The nonlinear functions μ_j(x, 0) live on two-sheeted Riemann surfaces, each of which is specified by its own σ_j.
The branch points connecting the surfaces are called the 'band edges' E_{2j} and E_{2j+1}. The μ_j(x, 0) lie in the intervals E_{2j} ≤ μ_j ≤ E_{2j+1}, i.e. inside 'open bands', and oscillate between these limits as x varies. When a μ_j(x, 0) reaches a band edge (either E_{2j} or E_{2j+1}) the index σ_j changes sign and the motion moves to the other Riemann sheet. Fig. 1 graphically illustrates many of these ideas. Generally speaking we refer to the construction of the main spectrum (E_k, 1 ≤ k ≤ 2N + 1) and the auxiliary spectrum (μ_j(x_0, 0), σ_j = ±1, 1 ≤ j ≤ N) as the solution of the direct scattering problem.
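To make the structure of (2.2)-(2.3) concrete, the sketch below evaluates R(μ_j) and the right-hand side of (2.2). It is illustrative only (the handling of the sign of the square root via σ_j is an assumed convention), and, as discussed below, the paper deliberately avoids integrating this ill-conditioned system:

    import numpy as np

    def R(mu_j, E):
        # R(mu_j) = prod_{k=1}^{2N+1} (mu_j - E_k), Eq. (2.3)
        return np.prod(mu_j - np.asarray(E))

    def dmu_dx(j, mu, E, sigma):
        # Right-hand side of Eq. (2.2) for mode j (0-based).
        #   mu    -- current values mu_1, ..., mu_N (array of length N)
        #   E     -- the 2N+1 band edges
        #   sigma -- Riemann sheet indices, each +1 or -1
        denom = np.prod(mu[j] - np.delete(mu, j))   # product over k != j
        # Inside an open band R keeps one sign; sigma_j selects the branch
        # of the square root (a convention assumed here for illustration).
        return 2.0 * sigma[j] * np.sqrt(abs(R(mu[j], E))) / denom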
3. The spectral structure of periodic IST

The direct spectral problem for KdV is the Schroedinger eigenvalue problem:

    ψ_xx + [λη(x) + k²]ψ = 0  (k² = E),  (3.1)

where η(x) = η(x, 0) is the solution to the KdV Eq. (1.1) at the arbitrary time t = 0; k is the wavenumber and E the 'energy'. Here periodic boundary conditions are assumed so that η(x, t) = η(x + L, t) for L the period. For the periodic scattering problem (3.1) one normally begins by choosing a basis of eigenfunctions φ(x; x_0, k) such that [10-12]: φ(x_0; x_0, k) = 1, φ_x(x_0; x_0, k) = ik, φ*(x_0; x_0, k) = 1, φ*_x(x_0; x_0, k) = -ik, where * indicates complex conjugate, the subscript x refers to a derivative and x_0 is an arbitrary base point in the interval 0 ≤ x_0 ≤ L.
Defining the two-component eigenfunction

    Φ(x; x_0, k) = (φ, φ*)^T,  (3.2)

one writes

    Φ(x + L; x_0, k) = S(x_0, k)Φ(x; x_0, k),  (3.3)
where S(x_0, k) is the monodromy matrix:

    S(x_0, k) = ( a    b
                  b*   a* ),  (3.4)

where a and b are complex numbers. Thus the monodromy matrix by definition carries the solutions of (3.1) one period from the point x to the point x + L. The main spectrum consists of the set of real constants {E_k} (1 ≤ k ≤ 2N + 1, where N is the integer number of degrees of freedom of a particular solution to KdV); the E_k are defined as solutions to the relation:

    |a_R(E)| = 1  (3.5)

(where the subscript 'R' means 'real part of'). The auxiliary spectrum {μ_j(x_0, 0)} (1 ≤ j ≤ N) is defined by the solutions to:

    b_I(E)|_{E=μ_j} = 0  (3.6)

(the subscript 'I' denotes 'imaginary part of'). The {E_k} are the eigenvalues corresponding to Bloch eigenfunctions which are either periodic or antiperiodic on the period L. The 'signs' of the spectrum are given by the Riemann sheet indices

    σ_j = sgn[b_R(E)]_{E=μ_j}.  (3.7)
The auxiliary spectrum {pj; 5) may be viewed as the source of phase information in the hyperelliptic function representation of KdV, i.e. the (pj; o$ may be suitably combined to give the phases of the pj(x, t) [23]. The spectrum {E,; pi; q} constitutes the direct scattering transform of a wave train with N degrees of freedom; 1 G k G 2N + 1;l
4%)
C’M
t 4x0)
S’(%)
The matrix
(Y carries
c(x + L) i s(x + L)
where
Q-‘SQ;
ii =o
0
I
P-8)
1’
the solution
matrix from point
x to x + L:
c’(x + L) s’(x + L) ] = [z::
(Y is determined a=
1
x$::;
from the S matrix by the similarity Q =
(; j;, ).
P-9)
::::i) transformation:
(3.10)
Therefore α is the monodromy matrix in the (c, s) representation. With reasoning similar to that for the basis Φ (3.2), the main spectrum in the (c, s) basis consists of eigenvalues E_k that correspond to the Bloch eigenfunctions for a particular period L. The auxiliary spectrum is defined as the eigenvalues for which the eigenfunctions s(x) have
the fixed boundary conditions s(x_0 + L) = s(x_0) = 0. To this end we have the specific spectral definitions [20,21]:

    Main spectrum {E_k}:  ½(α_11 + α_22) = ½(S_11 + S_22) = ±1,  (3.11)

    Auxiliary spectrum {μ_j}:  α_21 = -(i/2k)(S_11 + S_21 - S_12 - S_22) = 0,  (3.12)

    Riemann sheet indices {σ_j}:  σ_j = sgn[α_11(E) - α_22(E)]_{E=μ_j}.  (3.13)
Equations (3.11)-(3.13) constitute the solution of the direct scattering problem. We now discuss briefly the inverse scattering problem for the special case when t = 0. This of course implies that we seek the μ_j(x, 0) for all x on (0, L), 1 ≤ j ≤ N. Might the direct scattering problem itself be used to construct the inverse problem? The answer is in the affirmative. First note that the IST spectrum as represented by (3.11)-(3.13) corresponds to the particular base point x_0 = 0; one thus obtains the main and auxiliary spectra at x_0 = 0: {E_k}, {μ_j(0, 0)}. In order to obtain the μ_j(x, 0) for all x we seek the auxiliary spectrum at each of the M base points x_n = nΔx, where Δx = L/M for L the period and n is an integer on 0 ≤ n ≤ M - 1. Use of the linear superposition law (2.1) then allows the original wave train to be reconstructed. Therefore, instead of numerically solving the spatial evolution of the ODEs (2.2) (numerically ill-conditioned), we simply repeat the direct scattering problem for each desired spatial point in the function μ_j(x, 0) (numerically well-conditioned). This is a substantial improvement over the numerical problems encountered in the solution to (2.2).
4. A numerical discretization

The scattering problem may be written in the following matrix form:

    Ψ_x = AΨ,  (4.1)
where Ψ(x, E) is understood to be a two-vector with components (ψ, ψ_x) and A is the two-by-two matrix

    A = ( 0   1
          -q  0 ),  (4.2)

where q(x) = λη(x) + E. The two-vector field Ψ(x + Δx), for Δx small, may be expanded in a Taylor series around x:

    Ψ(x + Δx, E) = Ψ(x, E) + (∂Ψ(x, E)/∂x)Δx + ½(∂²Ψ(x, E)/∂x²)Δx² + ... = e^{ΔxA}Ψ(x, E) = T(x, E)Ψ(x, E).  (4.3)

The matrix exponential T(x, E) = e^{ΔxA}, which translates the field Ψ from x to x + Δx at the eigenvalue E, is the exponential of the matrix A:

    T(x, E) = e^{ΔxA} = ( cos(κΔx)      sin(κΔx)/κ
                          -κ sin(κΔx)   cos(κΔx) ),  (4.4)
for K = ~‘4. While K may be either real or imaginary, the matrix T is always real with determinant 1. This property is exploited in the numerical algorithm below. In (4.1)-(4.4) the wave form ~,J(x, 0) is a continuous function. However, as in previous numerical problems of this type [X3,19], I assume the wave train 17(x, 0) has the form of a piecewise constant function with M partitions on the periodic interval (0, L), where the discretization interval is Ax = L/M. Each partition has constant wave amplitude TV, 0 G YE=GM - 1, associated with a discrete value of x: x, = nAx. The two-by-two scattering matrix M therefore follows from iterating (4.3): M=
h
E)
T(x,
(4.5)
n=M-1
The initial conditions of the numerically convenient basis set (c, s) are given by (3.8). From the definition of the matrix α_ij (3.10) we therefore have:

    [α_ij] = ( c(x + L)    s(x + L)
               c'(x + L)   s'(x + L) ) ( c(x)    s(x)
                                         c'(x)   s'(x) )^{-1}.  (4.6)

Thus at x_0 we find

    ½(α_11 + α_22) = ½(M_11 + M_22),  (4.7)

    α_21 = M_12.  (4.8)
These relate the monodromy matrix α (3.10) to the numerically accessible scattering matrix M (4.5). It is worthwhile remarking that the algorithm (4.5) is equivalent numerically to that in Osborne [18,19]. The main advantage of the present approach is that it can be implemented as a real computer code, rather than complex, and the present algorithm is consequently about
four times faster than that in Osborne [19]. Error properties of piecewise constant potentials are discussed elsewhere [19].
Implementation of the numerical algorithm

Because κ = [λη(x, 0) + k²]^{1/2} can be either real or imaginary, but not complex, the matrix T is always real. This result allows implementation of an algorithm which is entirely real. As a consequence the following relations have been used in the program:
    T_11 = T_22 = cos(κ'Δx) if κ² > 0,  cosh(κ'Δx) if κ² < 0,  (4.9)

    T_12 = sin(κ'Δx)/κ' if κ² > 0,  sinh(κ'Δx)/κ' if κ² < 0,  (4.10)

    T_21 = -κ' sin(κ'Δx) if κ² > 0,  κ' sinh(κ'Δx) if κ² < 0,  (4.11)

where

    κ' = |κ²|^{1/2} = |λη_n + E|^{1/2}.  (4.12)
Reconstruction of hyperelliptic functions and periodic solutions to KdV

The reconstruction of an input space or time series from the spectrum by (2.1) is carried out by computing the auxiliary spectra μ_j(x = x_n, 0) for 0 ≤ n ≤ M - 1, for which the M different base points have the values x_n = x_0, x_1, x_2, ..., x_{M-1}. This is done by computing M different monodromy matrices (4.5) which differ from each other by a shift Δx in the wave train η_n = η(x_n, 0). Thus the direct scattering problem is repeated M times in order to construct the μ_j(x_n, 0) for all x_n, 0 ≤ n ≤ M - 1.
The linear superposition of the hyperelliptic functions by (2.1) then gives the solution to the KdV equation shown on the upper right (Fig. 1(a)). The algorithms presented herein begin with the discrete potential η(x_n, 0) and generate the Floquet discriminant Δ(E), the discrete eigenvalues E_k (1 ≤ k ≤ 2N + 1) and the hyperelliptic functions (i.e. the nonlinear oscillation modes) μ_j(x_n, 0) (1 ≤ j ≤ N), from which the input wave train may be reconstructed.
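The monodromy matrix (4.5) accumulates as an ordered product of partition matrices, and shifting the base point for the M repetitions of the direct problem amounts to a cyclic rotation of the discrete wave train. A sketch, reusing the hypothetical T_matrix above:

    import numpy as np

    def monodromy(eta, E, lam, dx):
        # M = T(x_{M-1}, E) ... T(x_1, E) T(x_0, E), Eq. (4.5)
        M = np.eye(2)
        for eta_n in eta:                   # accumulate left-multiplications
            M = T_matrix(eta_n, E, lam, dx) @ M
        return M

    def monodromy_at_base(eta, n, E, lam, dx):
        # Monodromy matrix with base point x_n: the direct problem is
        # simply repeated on the cyclically shifted wave train (Section 3).
        return monodromy(np.roll(eta, -n), E, lam, dx)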
5. Automatic numerical IST algorithm

The purpose of an automatic IST algorithm is to find the exact number of degrees of freedom, N, in the input space or time series, and to subsequently find all the eigenvalues in the main and auxiliary spectra, independent of any initial choice for the resolution in the squared-wavenumber domain, E = k². In the numerical construction of the inverse scattering transform spectrum it is natural to first compute the main spectrum {E_k; 1 ≤ k ≤ 2N + 1} and then the auxiliary spectrum {μ_j, σ_j; 1 ≤ j ≤ N}. One first selects a uniform grid in the E domain, ΔE = (E_max - E_min)/N_E, where E_max and E_min are the maximum and the minimum values of E and N_E is the desired number of discrete values of E. Then the Floquet discriminant, ½(M_11 + M_22), is computed at each E = E_i, 1 ≤ i ≤ N_E.
Consider now the Schroedinger eigenvalue problem (3.1) at the particular energy E_max:

    ψ_xx + [λη(x, 0) + E_max]ψ = 0;  ψ(x_0; x_0, E_max) = 1,  ψ_x(x_0; x_0, E_max) = 0.  (5.1)

The number of zeros in the wave function ψ(x; x_0, E_max) is the number of degrees of freedom in the system in the energy interval -∞ < E < E_max. This idea derives from the usual
determination of the number of solitons for KdV with infinite-line boundary conditions, for which E_max is set identically to zero. For the above problem (5.1) there are exactly N zeros in the interval -∞ < E < E_max. In the algorithm which follows we normally set x_0 = 0, so that the spatial interval of interest is 0 ≤ x ≤ L.
Quite generally

    Ψ(x_n; x_0, E) = M(x_n, E)Ψ(x_0; x_0, E),  (5.2)

which with the boundary conditions in (5.1) gives:

    ψ(x_n; x_0, E) = M_11(x_n, E);  ψ_x(x_n; x_0, E) = M_21(x_n, E).  (5.3)

Of course (5.2), (5.3) have, for an M-point discrete wave train (where N = M/2), the form:

    ψ(x_0 + L; x_0, E) = M_11(x_0 + L, E),  (5.4)

for M given by (4.5). Equation (5.3) suggests that the number of degrees of freedom of a particular input wave train may be found by judicious scrutiny of the M_11 element of the monodromy matrix. Generally speaking M_11 depends both upon the spatial variable x_n and on the parameter E. Eq. (5.2) is computed by iterating on the following recursion relations for the T matrix:
    T(x_{n+1}, E) = ( cos(κ_nΔx)      sin(κ_nΔx)/κ_n
                      -κ_n sin(κ_nΔx)   cos(κ_nΔx) ) T(x_n, E).  (5.5)

These in effect carry the Schroedinger eigenfunctions from point x_n to x_{n+1} and, after all discrete points in the wave train are iterated on, beginning with T(x_0, E), one arrives at the monodromy matrix one period to the right, M(x_0 + L, E). Here the wavenumber κ_n is given by

    κ_n²(x_n, E) = λη(x_n) + E  (5.6)
and Δx = x_{n+1} - x_n. At this point I introduce the function S(x, E) which is related to the number of times that the monodromy matrix element M_11(x, E) changes sign between x = x_0 = 0 and L. The solutions to

    M_11(x_0 + L, E) = 0  (5.7)

are referred to as E_j* (1 ≤ j ≤ N) and satisfy

    S(x_0 + L, E_j*) = j.  (5.8)

Thus the function S has monotonically increasing integer values as E increases, so that S(x_0 + L, E_1*) = 1 up to S(x_0 + L, E_N*) = N, where N has a maximum value equal to one half the number of points M in the space or time series, i.e. E*_{M/2} = (k_Ny/2)², for k_Ny = π/Δx the Nyquist wavenumber, so that S(x_0 + L, E*_{M/2}) = M/2.
Fig. 2. The Zabusky and Kruskal initial sine wave is used to compute (a) the monodromy matrix element M_11(x, E) versus x for the parameter value E = E_max = 0.06; M_11 is seen to oscillate many times (~11) on the interval 0 ≤ x ≤ 300. The function S(x, E = E_max) (Eq. (5.9)) is a piecewise constant, monotonically increasing function which has exactly 23 steps (the number of degrees of freedom in the system). In panel (b) are shown M_11(x, E) and S(x, E) for half the previous value: E = E_max/2 = 0.03.
The function S(x, E) is given the following precise definition:

    S(x_n, E) = ½ Σ_{i=0}^{n-1} |sgn[M_11(x_{i+1}, E)] - sgn[M_11(x_i, E)]|.  (5.9)
Only when M_11(x, E) makes a sign change between successive values of x_n and x_{n+1} (or E_i and E_{i+1}) is an integer contribution made to the sum in (5.9). Therefore, S(x_n, E) is monotonically increasing in both x_n and E and is piecewise constant, experiencing only unit jumps at the zeros of M_11 in the x and E domains. The functions M_11(x, E) and S(x, E) are shown in Fig. 2 for a particular example problem: a sine wave potential (as considered by Zabusky and Kruskal [24]) with M = 300, λ = 0.012, N_E = 1000, E_min = -0.02 and E_max = 0.06 [21]. In Fig. 2(a) M_11(x, E_max) and S(x, E_max) are graphed as a function of x (0 ≤ x ≤ 300). Note that as M_11 oscillates as a function of x, S increases monotonically in integer steps up to N = 23 degrees of freedom for the selected value of E_max. Each step-wise increase in S occurs precisely at a zero of M_11, M_11(x, E = E_max) = 0.
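The accounting function (5.9) can be transcribed directly: one accumulates the sign changes of M_11 partition by partition over one period. Again a sketch reusing the hypothetical T_matrix above:

    import numpy as np

    def S_function(eta, E, lam, dx):
        # Accounting function S(x_0 + L, E), Eq. (5.9): half the summed
        # jumps of sgn M_11 over one full period.
        M = np.eye(2)
        count = 0.0
        prev = np.sign(M[0, 0])             # sgn M_11 = +1 at the base point
        for eta_n in eta:
            M = T_matrix(eta_n, E, lam, dx) @ M
            s = np.sign(M[0, 0])
            count += abs(s - prev) / 2.0    # one unit per zero crossing
            prev = s
        return int(count)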
Fig. 3. The Zabusky and Kruskal initial sine wave is used to compute (a) the trace of the monodromy matrix ½(M_11 + M_22)(x_{M-1}, E), the oscillation function M_11(x_{M-1}, E) and the accounting function S(x_{M-1}, E). Panel (b) shows a blow-up of the rectangular region in (a).
Another example of the x dependence of M_11 and S is given in Fig. 2(b) for E = E_max/2 = 0.03; here N = 16 for the selected energy value E = E_max/2. In Fig. 3 I show the E dependence of the 'oscillation function' M_11(x_0 + L, E) and the 'accounting function' S(x_0 + L, E). Note that M_11 is graphed linearly as a function of E when |M_11| ≤ 1 and logarithmically outside this range; the vertical scale in the figure indicates powers of ten. Each oscillation in M_11 corresponds to a degree of freedom in the solution to KdV. Each zero of M_11 in effect counts a degree of freedom in the E domain and causes a step-wise, integer increase in S; note that S increases up to N = 23 as E approaches E_max. It is worth pointing out that the main advantage of the present approach is that the function S(x_{M-1}, E),

    S(x_{M-1}, E) = ½ Σ_{n=0}^{M-2} |sgn[M_11(x_{n+1}, E)] - sgn[M_11(x_n, E)]|,  (5.10)
does not depend on any chosen resolution in the E domain. Here x_{M-1} = x_0 + L = L. One selects a particular value of E = E_max and determines uniquely the number of the degrees of freedom in the spectrum in the range E_min ≤ E ≤ E_max. For the chosen resolution one can then determine S for all values of E as graphed in Fig. 3; in the event that any degrees of freedom are missing in the search process (i.e. the sequence 1, 2, 3, ..., N is interrupted) then it is quite straightforward to search for intermediate values of S by the interval bisection technique as discussed below.

In order to better understand the present approach I now give a rather extensive look at the behavior of the following functions of the monodromy matrix elements in the E domain: (1) the element M_11(x_0 + L, E) which counts the number of degrees of freedom in the system, S(x_0 + L, E_max), (2) the trace of the monodromy matrix, ½(M_11 + M_22), which determines the 'main spectrum' eigenvalues, E_k, and (3) the element M_12, which determines the auxiliary spectrum eigenvalues, μ_j(0, 0). I consider the above example problem of the ZK initial sine wave and graph all three functions as a function of E for x = x_0 + L = L. The results illustrate
and extend the properties of these functions as established by the so-called 'oscillation theorems' [6], which in particular describe the oscillatory behavior of the trace of the monodromy matrix and the element M_11. I give rather detailed graphs in order to illustrate the important aspects of the behavior of these functions, which need to be clearly understood for developing efficient numerical codes.

In Fig. 3(a) I compare the functions M_11(x_0 + L, E), ½(M_11 + M_22)(x_0 + L, E) and S(x_0 + L, E). The trace of the monodromy matrix determines the 'main spectrum' {E_k} as found from the solutions of ½(M_11 + M_22) = ±1. Note further that the oscillation function M_11 tracks rather nicely with the trace of the monodromy matrix. The maxima and minima, while they do not coincide, are rather close together in the E domain. The function S is also seen to be rather well behaved, i.e. it increases monotonically by 1 whenever M_11 crosses zero. S is evaluated numerically by (5.9) for a selected value of E = E' (E_min ≤ E' ≤ E_max) and thus effectively provides the number of degrees of freedom in the range -∞ < E < E'. Furthermore the density of the number of degrees of freedom as a function of E can also be easily determined in this way. Consequently one always knows the local resolution in E required to determine the eigenvalues in the spectrum. To aid the reader in the description of the numerical codes (see details below) a blow-up of the rectangular region of Fig. 3(a) is shown in Fig. 3(b); I explicitly denote the important eigenvalues in the figure for M_11(E) = 0 (denoted by E_j*) and for ½(M_11 + M_22) = ±1 (the main spectrum E_k).

In Fig. 4(a) the functions M_11, M_12 and S are contrasted. This figure provides perspective on the behavior of M_12 (which generates the auxiliary spectrum, μ_j) relative to the oscillation function M_11 (which generates the E_j*) and the accounting function S. Here we see that M_12 does not track with M_11 as well as the latter follows the trace of the monodromy matrix (Fig. 3). Nevertheless, for some localized region in the E domain, one finds that the maxima and minima of M_12 are not too far from those of M_11. The rectangle in the center of Fig. 4(a) has been exploded in Fig. 4(b) to provide details not visible in the former figure. Furthermore, the specific eigenvalues of M_11 (the E_j*) and those of M_12 (the hyperelliptic function auxiliary spectrum values, μ_j) are also shown. These figures provide excellent perspective for coding the algorithm given herein.
Fig. 4. The Zabusky and Kruskal initial sine wave is used to compute (a) the elements M_11(x_{M-1}, E) and M_12(x_{M-1}, E) of the monodromy matrix and the accounting function S(x_{M-1}, E). Panel (b) shows a blow-up of the rectangular region in (a).
Fig. 5. The Zabusky and Kruskal initial sine wave is used to compute (a) the trace of the monodromy matrix ½(M_11 + M_22)(x_{M-1}, E), the element M_12(x_{M-1}, E) and the accounting function S(x_{M-1}, E). Panel (b) shows a blow-up of the rectangular region in (a).
Finally in Fig. 5(a) I contrast the trace of the monodromy matrix and the monodromy matrix element M_12. Thus the essential ingredients of the direct scattering problem of the inverse scattering transform are graphed in Fig. 5. Also shown in Fig. 5(a) is the accounting function S. One clearly sees some of the most important results of the oscillation theorems of the Schroedinger eigenvalue problem in this figure [6]; in particular it is evident that the trace of the monodromy matrix (which is space/time invariant under KdV evolution) provides the upper and lower bounds on the behavior of the hyperelliptic function values μ_j, which must always lie in the 'open bands' of the theory (E_{2j} ≤ μ_j ≤ E_{2j+1}). These results offer a very nice constraint on the search for the eigenvalues of the auxiliary spectrum, because successive iterates must always lie in the interval (E_{2j}, E_{2j+1}). The rectangle shown in Fig. 5(a) is exploded in Fig. 5(b), where the particular eigenvalues are more easily seen. Those who would attempt to code the periodic inverse scattering spectrum will find Figs. 3-5 a welcome aid in understanding the behavior of the functions involved, in order to ensure efficient, error-free behavior of the algorithm.

What follows is a summary of the three steps used in the simplest version of the program. The approach as given here uses the interval halving or bisection method for determining the eigenvalues in the main and auxiliary spectra [25]. The approach has less rapid convergence than the method of Newton, but in the present case interval halving has been chosen because derivatives need not be calculated (more iterations are made, but they are faster) and because one is guaranteed to find a solution inside the starting interval, while this is not necessarily so with the Newton method.

Step 1. Determine the exact number of degrees of freedom. Between E_min = -λη_max and E_max there are N = S(x_{M-1}, E_max) eigenvalues E_j* such that

    S(x_{M-1}, E_j*) = j,  1 ≤ j ≤ N.  (5.11)
The E_j* are the solutions of M_11(x_{M-1}, E) = 0. The procedure followed is that, for every j not found between 1 and N along the E axis, one begins with the eigenvalues E_1 = E*_{j-1} and E_2 = E*_{j+1}, for which S is already known; the interval [E_1, E_2] is bisected and S(x_{M-1}, (E_1 + E_2)/2) is computed: if S ≥ j, set E_2 = (E_1 + E_2)/2, if not take E_1 = (E_1 + E_2)/2; one proceeds by continuing to divide the interval [E_1, E_2] until an eigenvalue is found such that S(x_{M-1}, E_j*) = j. By iterating in this way one finally fills a vector of eigenvalues E_j*, 1 ≤ j ≤ N.

Step 2. The auxiliary spectrum. Here one computes the N auxiliary spectrum eigenvalues μ_j. Two eigenvalues E_1 and E_2 are searched for until the signs of M_12(E_1) and M_12(E_2) are opposite. At this point the bisection search for the eigenvalues begins. If

    sgn[M_12(E_j*)] = (-1)^j  (5.12)
set E_1 = E_j*, if not set E_2 = E_j*. The interval is then halved and the checks are made again. The algorithm continues to check the signs of the adjacent values of M_12 (at E_1, E_2) to be sure that they are opposite throughout the search process; this ensures that the bisection process will function. The iteration of the dividing process is exited from when μ_2 - μ_1 < ε(μ_1 + μ_2)/2, for ε ~ 10^{-12}. The Riemann sheet indices σ_j (3.13) are then computed for each of the μ_j.

Step 3. The main spectrum.
Here one computes the 2N + 1 eigenvalues in the main spectrum, E_k, 1 ≤ k ≤ 2N + 1. Given the spectrum of the μ_j we know that there are exactly two associated main spectrum eigenvalues (E_{2j+1}, E_{2j+2}) in the interval (μ_j, μ_{j+1}); one eigenvalue corresponds to ½ Tr M = +1, the other to ½ Tr M = -1. Thus the bisection search is quite simple since only one main spectrum eigenvalue of each type lies in the interval (μ_j, μ_{j+1}). The search is made by first determining the sign of ½ Tr M at μ_j(0, 0), sgn[½ Tr M(μ_j)]: if the sign is (-1)^j this implies that μ_j < E_j; alternatively, μ_j > E_j. Two eigenvalues E_1 and E_2 are searched for until the signs of ½ Tr M(E_1) and ½ Tr M(E_2) are opposite. At this point the bisection search for the main spectrum eigenvalues is initiated. If

    sgn[½ Tr M(E_1)] = (-1)^j  (5.13)
is true, set E_1 = μ_j, if not set E_2 = μ_j. The interval is then halved and the checks are made again. Once again the iteration of the dividing process is exited from when E_2 - E_1 < ε(E_1 + E_2)/2, for ε ~ 10^{-12}.

An alternative procedure to that just given would be to invert Steps 2 and 3. One would thus use the accounting eigenvalues E_j* to localize the main spectrum E_k. Then the auxiliary spectrum eigenvalues μ_j are found from the bracketing E_k.
Fig. 6. The periodic scattering transform solutions to the KdV equation for the particular numerical case of a many degree-of-freedom (N = 128) wave train (a) whose linear Fourier spectrum is a power law, f^α, α = -1, and whose Fourier phases are random on (0, 2π). The Floquet spectrum (one-half the trace of the monodromy matrix, ½TrM(E), graphed as a function of E = k²) is shown in (b); nine solitons are found in the spectrum. The amplitudes of the hyperelliptic functions, A_j = (E_{2j+1} - E_{2j})/2, and the spectral modulus, m_j = (E_{2j+1} - E_{2j})/(E_{2j+1} - E_{2j-1}), are given in (c). The vertical scale of m_j has been shifted downward one decade for clarity. The first ten hyperelliptic functions are shown in (d). The sum of the radiation modes (10-128) is shown in (e). In (f) the solitons (modes 1-9) and the radiation are shown on the same scale.
Both procedures have been programmed and they each give satisfactory results. Because the method outlined above is simpler to program (I also find it theoretically and esthetically more pleasing) I recommend it over the second approach. The bottom line is that both methods work just fine; their codes are of equal length, and are a pleasure to use. Given an input space or time series with hundreds or thousands of discrete points, the program computes both the direct and inverse scattering problems, and reconstructs the input series with little difficulty. The direct scattering problem takes at most a few minutes on a modern workstation, while the inverse problem may take somewhat longer. The reason for the larger times for computing the hyperelliptic functions is that one is repeating the direct problem M times for an M-point time series. Since the direct problem requires ~M² steps, the total time for constructing the inverse problem is ~M³.
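In code, Steps 1-3 reduce to two small interval-halving routines. The sketch below captures the logic only; S stands for the accounting function (5.10) evaluated at x_{M-1}, and f stands for M_12(E) in Step 2 or ½(M_11 + M_22)(E) ∓ 1 in Step 3 (function names and tolerances are illustrative, not the paper's code):

    def bisect_zero(f, E1, E2, eps=1e-12):
        # Steps 2 and 3: bisection on a sign change of f in [E1, E2].
        # The algorithm guarantees f(E1) and f(E2) have opposite signs.
        while abs(E2 - E1) > eps * abs(E1 + E2) / 2.0:
            Em = 0.5 * (E1 + E2)
            if f(E1) * f(Em) <= 0.0:
                E2 = Em                   # zero lies in the lower half
            else:
                E1 = Em                   # zero lies in the upper half
        return 0.5 * (E1 + E2)

    def find_missing_Estar(S, j, E1, E2, eps=1e-12):
        # Step 1: locate E_j* with S(E_j*) = j by halving the bracketing
        # interval [E_{j-1}*, E_{j+1}*]; S is integer-valued and monotone
        # increasing in E, so the jump from j-1 to j is always captured.
        while abs(E2 - E1) > eps * abs(E1 + E2) / 2.0:
            Em = 0.5 * (E1 + E2)
            if S(Em) >= j:
                E2 = Em
            else:
                E1 = Em
        return 0.5 * (E1 + E2)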
6. Example of the analysis of a many-degree-of-freedom wave train and nonlinear filtering
Here I consider the analysis of a complex wave train solution of the KdV equation, i.e. one which has a many-degree-of-freedom spectrum with N = 128 (M = 256). The example I choose has the Cauchy initial condition η(x, 0) which is given by a space series that is governed by a power-law power spectrum, f^{-α}, with uniformly distributed random Fourier phases on the interval (0, 2π). Such a space series, even though it is generated by a linear Fourier series, may be viewed as an initial value state of KdV and is here analyzed using the algorithm described above. The results are given in Fig. 6. The input time series corresponds to α = 1.0 and is shown in Fig. 6(a). The associated Floquet diagram is given in Fig. 6(b) and has nine solitons in the spectrum (i.e. nine zero crossings of the Floquet discriminant to the left of the 'reference level') [21]. Both the hyperelliptic function spectrum and the soliton index are given in Fig. 6(c). The hyperelliptic function spectrum is a graph of the widths of the open bands, a_j = (E_{2j+1} - E_{2j}), as a function of wavenumber, k_j = 2πj/L.
Fig. 7. An example of the reconstruction of the input wave train using linear superposition of the hyperelliptic functions, μ_j(x, 0). The first 126 of the 128 hyperelliptic functions have been summed; these are shown as a dotted line, while the original input signal is given as a solid line. The small differences are due to the fact that modes 127, 128 have not been included in the reconstructed wave train. Including these last two modes gives excellent agreement (~10^{-10}) between the input and reconstructed wave trains.
The 'soliton index' is given by m_j = (E_{2j+1} - E_{2j})/(E_{2j+1} - E_{2j-1}); m_j is the 'modulus' of the hyperelliptic functions and is nearly 1 for a soliton and substantially less than 1 for the radiation components; the 'reference level' divides these two fundamental spectral types. The first ten hyperelliptic functions are given in Fig. 6(d). The radiation components occur to the right of the reference level (see Fig. 6(c)); these correspond to the oscillation mode numbers 10-128 and are shown in Fig. 6(e). The solitons are found by first establishing the maximum value of j = J for which m_j ≥ 0.99. In the present case J = 9, so there are 9 solitons which propagate on a reference level defined in real space by η_ref = -E_ref/λ [21]. Shown in Fig. 6(f) are the soliton components (the linear superposition of the hyperelliptic mode numbers 1-9) together with the radiation components (sum of modes 10-128). The reference level, η_ref, is also displayed.

In Fig. 7 I show how the input series may be reconstructed from the spectrum via the linear superposition law (2.1). I sum the modes 1-126 (rather than the total number of modes, 1-128) so that some difference can be graphically seen between the input space series and its reconstruction. Summing all 128 modes results in a series which is accurate to within about 10^{-10} of the input series. It is emphasized that the determination of a large number of hyperelliptic modes is a highly nontrivial numerical task, which is handled quite straightforwardly with the automatic numerical IST algorithm discussed herein. Computation of the results just given requires about two minutes of processor time on an HP 730 workstation.
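In terms of the superposition law (2.1), nonlinear filtering is just a partial sum over a chosen subset of modes. A hypothetical sketch (the treatment of the constant -E_1 term in a partial sum is my convention, not necessarily the paper's; when two filtered parts are added, the constant must not be double-counted):

    import numpy as np

    def filter_modes(mu, E, lam, keep):
        # Partial sum of the superposition law (2.1) over the 0-based
        # mode indices in `keep`.
        E = np.asarray(E)
        lam_eta = -E[0] * np.ones(mu.shape[1])
        for j in keep:
            lam_eta += 2.0 * mu[j] - E[2 * j + 1] - E[2 * j + 2]
        return lam_eta / lam

    # Soliton and radiation parts of the N = 128 mode example of Fig. 6:
    #   solitons  = filter_modes(mu, E, lam, range(9))
    #   radiation = filter_modes(mu, E, lam, range(9, 128))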
7. Summary and conclusions

Numerical procedures are introduced to implement the inverse scattering transform of the periodic KdV equation (1) for solving the direct scattering problem of an input discrete wave form (potential) and (2) for computing the associated inverse scattering problem. The methods developed here are useful in time series analysis applications in which one is able to determine the spectrum of a measured, N = M/2 degree-of-freedom input wave train and to reconstruct the wave train from the spectrum or to nonlinearly filter the data.

What does the present algorithm offer that has not been provided by other algorithms for the direct and inverse scattering transforms given previously in the literature [18-22,26,27]? Here is a list of items:

(1) The construction of the monodromy matrix given herein consists of purely real numbers. This contrasts with other formulations which require complex numbers. As a consequence the computer code described here is about four times faster than fully complex formulations [19].

(2) The partition matrix, T, by a leading order perturbation expansion in κΔx ≪ 1, can be reduced to the form given by Ablowitz and Ladik [28], who show that the AKNS class of nonlinear wave equations may be discretized and integrated by the inverse scattering transform. While the numerical approach given herein is discrete, it is not however integrable by IST; nevertheless the method has excellent numerical properties [19].

(3) Finally, the algorithm given herein is automatic in that it finds all the eigenvalues in the direct spectral problem and subsequently finds the hyperelliptic oscillation modes in the inverse scattering problem without requiring the user to have a deep knowledge of the
periodic inverse scattering transform. This has aided us to study a number of theoretical and experimental problems with relative ease [29-32].

(4) The inverse scattering transform algorithm uses computer time proportional to N³ (N² for each of the N values of the hyperelliptic functions computed at a single base point, μ_j(x_0, 0)). This may be contrasted with the Gel'fand-Levitan-Marchenko integral equation for KdV on the infinite line (which is N⁴) and with the θ-function inverse problem for periodic boundary conditions (which is theoretically N^N, but which may also be reduced to N⁴) [23]. The inverse algorithm presented here is therefore order N faster than other approaches.
Acknowledgements
This work is supported in part by the Office of Naval Research of the United States of America (ONR Grant N00014-92-J-1330) and by the Progetto Salvaguardia di Venezia del Consiglio Nazionale delle Ricerche, Italy.
References

[1] D.J. Korteweg and G. de Vries, Philos. Mag. Ser. 5 39 (1895) 422.
[2] G.B. Whitham, Linear and Nonlinear Waves (Wiley-Interscience, New York, 1974).
[3] J.W. Miles, Ann. Rev. Fluid Mech. 12 (1980) 11.
[4] C.S. Gardner, J.M. Greene, M.D. Kruskal and R.M. Miura, Phys. Rev. Lett. 19 (1967) 1095.
[5] V.E. Zakharov, S.V. Manakov, S.P. Novikov and L.P. Pitayevsky, Theory of Solitons. The Method of the Inverse Scattering Problem (Consultants Bureau, New York, 1984).
[6] M.J. Ablowitz and H. Segur, Solitons and the Inverse Scattering Transform (SIAM, Philadelphia, 1981).
[7] R.K. Dodd, J.C. Eilbeck, J.D. Gibbon and H.C. Morris, Solitons and Nonlinear Wave Equations (Academic Press, London, 1982).
[8] A.C. Newell, Solitons in Mathematics and Physics (SIAM, Philadelphia, 1985).
[9] A. Degasperis, in: Nonlinear Topics in Ocean Physics, A.R. Osborne, ed. (Elsevier, Amsterdam, 1991).
[10] B.A. Dubrovin and S.P. Novikov, Sov. Phys. JETP 40 (1974) 1058; B.A. Dubrovin, V.B. Matveev and S.P. Novikov, Russian Math. Surv. 31 (1976) 59.
[11] H. Flaschka and D.W. McLaughlin, Prog. Theoret. Phys. 55 (1976) 438.
[12] H.P. McKean and E. Trubowitz, Comm. Pure Appl. Math. 29 (1976) 143.
[13] T.R. Taha and M.J. Ablowitz, J. Comp. Phys. 55 (1984) 192; T.R. Taha and M.J. Ablowitz, J. Comp. Phys. 55 (1984) 231.
[14] A.R. Bishop, M.G. Forest, D.W. McLaughlin and E.A. Overman II, Physica D 18 (1986) 293; A.R. Bishop and P.S. Lomdahl, Physica D 18 (1986) 26.
[15] R. Flesch, M.G. Forest and A. Sinha, Physica D 48 (1991) 169.
[16] D.W. McLaughlin and C.M. Schober, Physica D 57 (1992) 447.
[17] G. Terrones, D.W. McLaughlin, E.A. Overman II and A. Pearlstein, SIAM J. Appl. Math. 50 (1990) 791.
[18] A.R. Osborne, in: Nonlinear Topics in Ocean Physics, A.R. Osborne, ed. (North-Holland, Amsterdam, 1991) 669.
[19] A.R. Osborne, J. Comp. Phys. 94 (1991) 284; A. Provenzale and A.R. Osborne, J. Comp. Phys. 94 (1991) 314.
[20] A.R. Osborne and L. Bergamasco, Nuovo Cimento B 85 (1985) 229.
[21] A.R. Osborne and L. Bergamasco, Physica D 18 (1986) 26.
[22] A.R. Osborne and E. Segre, Physica D 44 (1990) 575.
[23] A.R. Osborne, Phys. Rev. Lett. 71 (1993) 3115.
[24] N.J. Zabusky and M.D. Kruskal, Phys. Rev. Lett. 15 (1965) 240.
[25] W.H. Press, B.P. Flannery, S.A. Teukolsky and W.T. Vetterling, Numerical Recipes (Cambridge Univ. Press, Cambridge, 1990).
[26] A.R. Osborne, E. Segre, G. Boffetta and L. Cavaleri, Phys. Rev. Lett. 64 (1991) 1733.
[27] A.R. Osborne, J. Comp. Phys. (1993).
[28] M.J. Ablowitz and J. Ladik, J. Math. Phys. 16 (1975) 598; M.J. Ablowitz and J. Ladik, J. Math. Phys. 17 (1976) 1011.
[29] A.R. Osborne and M. Petti, Phys. Rev. E 47 (1993) 1035.
[30] A.R. Osborne, Phys. Rev. E 48 (1993) 296.
[31] A.R. Osborne, Phys. Lett. A 176 (1993) 75.
[32] A.R. Osborne, in: Proceedings of the Aha Huliko'a Hawaiian Winter Workshop, P. Müller and D. Henderson, eds. (University of Hawaii Press, 1993).