Applied Mathematics and Computation 215 (2009) 2914–2926
The recovery of even polynomial potentials

Amin Boumenir
Department of Mathematics, University of West Georgia, 1601 Maple St., Carrollton, GA 30118, United States
E-mail address: [email protected]
Abstract. We consider the problem of reconstructing an even polynomial potential from one set of spectral data of a Sturm–Liouville problem. We show that we can recover an even polynomial of degree $2m$ from $m+1$ given Taylor coefficients of the characteristic function whose zeros are the eigenvalues of one spectrum. The idea here is to represent the solution as a power series and identify the unknown coefficients from the characteristic function. We then compute these coefficients by solving a nonlinear algebraic system, and provide numerical examples at the end. Because of its algebraic nature, the method also applies to non-self-adjoint problems.
© 2009 Elsevier Inc. All rights reserved.
Keywords: Inverse spectral problem; Inverse eigenvalue problem; Inverse Sturm–Liouville problem
1. Introduction

In this note we show how to recover the coefficients $p_{2k}\in\mathbb{C}$ of an even polynomial,
$$p(x) = p_{2m}\frac{x^{2m}}{(2m)!} + p_{2m-2}\frac{x^{2m-2}}{(2m-2)!} + \cdots + p_2\frac{x^2}{2} + p_0, \qquad (1)$$
from finite spectral data associated with the Sturm–Liouville problem given by
$$-y''(x,\lambda) + p(x)y(x,\lambda) = \lambda y(x,\lambda), \quad x\in[0,b], \qquad y(0,\lambda)=0, \quad y(b,\lambda)=0. \qquad (2)$$
In practical inverse problems, one is often interested in recovering a few meaningful parameters instead of a whole function. The same idea can be used to approximate a potential by polynomials, since these are dense in $L(0,b)$. Obviously, it is computationally more economical to recover a polynomial, since one only needs a few coefficients. In 1964, Levitan and Gasymov [6] showed that two complete spectra, that is, two infinite sequences of eigenvalues, are necessary and sufficient to reconstruct any potential. A numerical algorithm was implemented to handle the case of an analytic potential [3]. There, the algebraic system obtained from one spectrum always contained twice as many unknowns as equations, which explained the need for two spectra. However, if we halve the number of parameters, for example when the potential is even or odd, then one sequence of eigenvalues should be enough. This fact remains true when $q$ is symmetric and not analytic [8]. The main motivation behind our approach is that one should not use infinite data to recover only a finite number of parameters. Our algorithm recovers $m+1$ coefficients from exactly $m+1$ Taylor coefficients of the characteristic function and is easily implemented. Existence and uniqueness of a solution of the algebraic system can be found in Section 3. We now explain the main idea. At the heart of the inverse spectral problem, by the Gelfand–Levitan algorithm [1,2,5–7,9,11,14,17–20], is the integral representation of the normalized solution, $y'(0,\lambda)=1$,
$$y(x,\lambda) = \frac{1}{\sqrt{\lambda}}\sin(x\sqrt{\lambda}) + \frac{1}{\sqrt{\lambda}}\int_0^x K(x,t)\sin(t\sqrt{\lambda})\,dt. \qquad (3)$$
However, if (1) holds then a different representation is possible, since the solution is now analytic [3], namely a power series in both variables $x$ and $\lambda$,
$$y(x,\lambda) = \sum_{n,k\ge 0} c(n,k)\,\frac{x^n}{n!}\frac{\lambda^k}{k!}.$$
The sums of the Taylor coefficients $c(n,k)$, which depend on $p_{2k}$, can be evaluated if we know the characteristic function
$$y(b,\lambda) = \sum_{k\ge 0} a_k\,\frac{\lambda^k}{k!}, \qquad (4)$$
since
$$\sum_{n\ge 0} c(n,k)\,\frac{b^n}{n!} = a_k \quad\text{for } k=0,1,2,\ldots \qquad (5)$$
In Section 3 we shall prove that the set of Taylor coefficients $\{a_n\}_{n\ge 0}$ is equivalent to the whole set of eigenvalues $\{\lambda_n\}_{n\ge 0}$. Thus, without loss of generality, we can assume that we are given a finite number of Taylor coefficients $\{a_n\}_{n=0}^{m}$ instead of the eigenvalues $\{\lambda_n\}_{n\ge 0}$, and we then need to solve (5). The algorithm that extracts $p_{2k}$ from (5) is direct, fast, and based on parameter identification. In [3], we recovered a potential by using only the triangular structure of the matrix $c(n,k)$. Because we had two infinite systems, generated by two spectra, Gaussian elimination of a large truncated matrix was enough to reduce the truncation error. The setting for the even polynomial case is very different. First, from the algebraic point of view, since our potential in (1) depends only on $m+1$ coefficients, we should need only $m+1$ equations from (5). In order to reduce the truncation error, we truncate (5),
$$\sum_{n=0}^{N} c(n,k)\,\frac{b^n}{n!} = a_k \quad\text{for } k=0,\ldots,m, \qquad (6)$$
where $N$ is large enough. Thus we lose the triangular form of the system since $m$ is fixed, and so Gaussian elimination is not applicable. Instead we use fixed point methods, such as Newton's method, and the numerical results show a far better approximation. This follows simply from the fact that the truncation error in (6) decays to zero as $N$ increases. Since algebraic systems are used, we can recover polynomials with complex valued coefficients, i.e. non-self-adjoint operators, which is not possible by the Gelfand–Levitan theory. The key to solving the system in (5) is to show that the matrix $c(n,k)$ is triangular and also contains a linear part. Section 2 contains the composition of each coefficient $c(n,k)$. This allows us to describe explicitly the linear part and verify its invertibility. To find how the Taylor coefficients $\{a_k\}_{k=0}^{m}$ are related to the coefficients $\{p_{2k}\}_{k=0}^{m}$, we start off with the direct problem. Once these relations are known, we can reverse the procedure and identify the unknown parameters $p_{2k}$ in terms of $a_k$ by solving (6).

2. Preliminaries

Denote by $u$ the solution of the initial value problem defined by
$$-u''(x,\lambda) + p(x)u(x,\lambda) = \lambda u(x,\lambda), \quad 0\le x\le b, \qquad u(0,\lambda)=0 \ \text{ and } \ u'(0,\lambda)=1. \qquad (7)$$
Obviously $u$ is unique, analytic in $x$ and $\lambda$, and furthermore $u(\cdot,\lambda)$ is an odd function in $x$, so
$$u(x,\lambda) = \sum_{k,n\ge 0} c(n,k)\,\frac{x^{2n+1}}{(2n+1)!}\frac{\lambda^k}{k!}. \qquad (8)$$
For example, if $p(x)=0$ in (7) then $u(x,\lambda) = \frac{1}{\sqrt{\lambda}}\sin(x\sqrt{\lambda})$ and so
$$u(x,\lambda) = \sum_{n\ge 0} (-1)^n\,\frac{x^{2n+1}\lambda^n}{(2n+1)!} \ \Longrightarrow\ c(n,k) = (-1)^n n!\,\delta_{nk}.$$
If $p(x)\ne 0$, then the Taylor coefficients $c(n,k)$ are easily found to satisfy
$$\sum_{n,k\ge 0}\left[\sum_{l=0}^{n}\binom{2n+1}{2l}p_{2l}\,c(n-l,k) - c(n+1,k)\right]\frac{x^{2n+1}}{(2n+1)!}\frac{\lambda^k}{k!} = \sum_{\substack{n\ge 0\\ k\ge 1}} k\,c(n,k-1)\,\frac{x^{2n+1}}{(2n+1)!}\frac{\lambda^k}{k!},$$
which yields the following recursion formula for $n,k\ge 0$:
$$c(n+1,k) = -k\,c(n,k-1) + \sum_{l=0}^{n}\binom{2n+1}{2l}c(n-l,k)\,p_{2l}, \qquad (9)$$
where we define $c(n,-1)=0$ and the binomial coefficient by $\binom{n}{l} := \frac{n!}{l!(n-l)!}$. It is also readily seen from (8) that Taylor's formula at $x=0$ implies
$$\frac{d^{2n+1}u}{dx^{2n+1}}(0,\lambda) = \sum_{k\ge 0} c(n,k)\,\frac{\lambda^k}{k!}, \qquad (10)$$
and by setting $n=0$, the initial condition $u'(0,\lambda)=1$ leads to the first column
$$c(0,0)=1, \quad\text{and}\quad c(0,k)=0 \ \text{ for } k=1,2,3,\ldots \qquad (11)$$
Thus (11) defines all the coefficients $c(0,k)$, which allows us to start the recurrence relation. Using (9) we obtain the following coefficients $c(n,k)$:

k\n   0    1     2             3                       4                     5
0     1    p0    p0^2+3p2      p0^3+13p0p2+5p4         c(4,0)                c(5,0)
1     0    -1    -2p0          -3p0^2-13p2             c(4,1)                c(5,1)
2     0    0     2             6p0                     12p0^2+68p2           20p0^3+420p0p2+332p4
3     0    0     0             -6                      -24p0                 -60p0^2-420p2
4     0    0     0             0                       24                    120p0
5     0    0     0             0                       0                     -120
where
$$c(4,0) = p_0^4 + 34p_0^2p_2 + 40p_4p_0 + 63p_2^2 + 7p_6, \qquad c(4,1) = -4p_0^3 - 68p_0p_2 - 40p_4,$$
$$c(5,0) = p_0^5 + 70p_0^3p_2 + 166p_4p_0^2 + 531p_0p_2^2 + 91p_0p_6 + 558p_2p_4 + 9p_8,$$
$$c(5,1) = -332p_4p_0 - 210p_0^2p_2 - 5p_0^4 - 531p_2^2 - 91p_6.$$
The matrix is obviously infinite, upper triangular and structured along its diagonals.
We conclude from (9) with the following easily verified identities.

Proposition 1. For any $k\ge 0$ we have
(P1) $c(n,k) = 0$ if $0\le n\le k-1$;
(P2) $c(k,k) = (-1)^k k!$;
(P3) $c(k+1,k) = (-1)^k (k+1)!\,p_0$;
(P4) $c(k+2,k) = (-1)^k k!\,p_0^2\,(k^2+3k+2)/2 + (-1)^k k!\,(4k^3+21k^2+35k+18)\,p_2/6$.
Proof. The proofs are by induction. P1 means that the matrix is upper triangular. By (11), P1 already holds for the first column, $c(0,k)=0$ for $1\le k$. Assuming it holds up to the $n$th column, we need to show that it also holds for the next column, i.e. $c(n+1,k)=0$ if $n+1\le k-1$. Use (9) to see that the assumption already gives $c(n,k-1)=0$ for the first term. On the other hand, all the terms $c(n-l,k)$ in the sum are also zero since $n-l\le n < n+1\le k-1$. Thus their sum is zero and so $c(n+1,k)=0$ for $n+1\le k-1$. Using P1, (9) reduces to
$$c(n+1,k) = -k\,c(n,k-1) + \sum_{l=0}^{n-k}\binom{2n+1}{2l}c(n-l,k)\,p_{2l}. \qquad (12)$$
P2 follows immediately by setting $n=k-1$ in (12):
$$c(k,k) = -k\,c(k-1,k-1) + \sum_{l=0}^{-1}\binom{2k-1}{2l}c(k-1-l,k)\,p_{2l} = -k\,c(k-1,k-1) = (-1)^k k!.$$
Observe that P3 already holds for $k=0$, since $c(1,0)=p_0$. If we assume that P3 is true up to $k-1$, then setting $n=k$ in (12) yields
$$c(k+1,k) = -k\,c(k,k-1) + \sum_{l=0}^{0}\binom{2k+1}{2l}c(k-l,k)\,p_{2l} = -k\,c(k,k-1) + c(k,k)\,p_0$$
$$= -k(-1)^{k-1}k!\,c(1,0) + (-1)^k k!\,p_0 = (-1)^k (k+1)!\,p_0.$$
For P4, we can compute $c(2,0) = p_0^2 + 3p_2$, and from the recursion (12) we deduce that
$$c(k+2,k) = -k\,c(k+1,k-1) + c(k+1,k)\,p_0 + \binom{2k+3}{2}c(k,k)\,p_2$$
$$= -k\,c(k+1,k-1) + (-1)^k(k+1)!\,p_0^2 + (-1)^k(2k+3)(k+1)!\,p_2, \qquad (13)$$
which we can solve by using generating functions:
$$c(k+2,k)/k! = (-1)^k (k^2+3k+2)\,p_0^2/2 + (-1)^k (4k^3+21k^2+35k+18)\,p_2/6.$$
From (13) it is easily verified that $c(3,1) = -3p_0^2 - 78p_2/6$, and so we can directly compute all $c(k+2,k)$ without using the recurrence (9).
We now examine further the structure of the coefficients $c(n,k)$. Let us agree to say that two coefficients are similar if they contain the same monomials $p_{i_1}^{a_{i_1}}\cdots p_{i_n}^{a_{i_n}}$, and denote this fact by $\sim$. For example $c(3,0)\sim c(4,1)\sim c(5,2)$, see the table above Proposition 1, since each is a combination of $p_0^3$, $p_0p_2$, and $p_4$ only.

Proposition 2. For all $k,m\ge 0$, the coefficients $c(m+k,k)$ are similar, i.e. $c(m+k,k)\sim c(m,0)$.

Proof. We shall use a double induction on $m$ and $k$. First, from Proposition 1, we have $c(1+k,k) = (-1)^k(k+1)!\,p_0$ and so the proposition holds for $m=1$, i.e. $c(1+k,k)\sim c(1,0)$. Let us assume that it holds up to $n$; we then need to show that $c(n+1+k,k)\sim c(n+1,0)$ also holds for $k\ge 0$. From (12) we have
$$c(n+1+k,k) = -k\,c(n+k,k-1) + \sum_{l=0}^{n}\binom{2n+2k+1}{2l}c(n-l+k,k)\,p_{2l}, \qquad (14)$$
and since $c(n-l+k,k)\sim c(n-l,0)$, we obtain for $k\ge 0$, and from (12),
$$c(n+1+k,k) + k\,c(n+k,k-1) \sim \sum_{l=0}^{n}\binom{2n+2k+1}{2l}c(n-l,0)\,p_{2l},$$
$$c(n+1+k,k) + k\,c(n+1+k-1,k-1) \sim c(n+1,0).$$
Now use induction on $k$, with $c(n,-1)=0$, to see that
$$c(n+1+k,k)\sim c(n+1,0) \quad\text{for all } k\ge 0.$$
We now find the highest coefficient $p_{2k}$ in $c(m,0)$, which would then appear in all similar $c(m+k,k)$.

Proposition 3. We have, for $m\ge 1$,
$$c(m,0) = (2m-1)\,p_{2m-2} + \left[\frac{(4m^2-8m+3)(m-1)}{3} + 2m-3\right]p_0\,p_{2m-4} + \cdots$$
and thus $c(m,0)$ is linear in $p_{2m-2}$.

Proof. It is enough to take the first three terms:
$$c(n+1,0) = \sum_{l=0}^{n}\binom{2n+1}{2l}c(n-l,0)\,p_{2l}
= \binom{2n+1}{2n}c(0,0)\,p_{2n} + \binom{2n+1}{2n-2}c(1,0)\,p_{2n-2} + \binom{2n+1}{0}c(n,0)\,p_0 + \cdots$$
$$= \binom{2n+1}{2n}p_{2n} + \binom{2n+1}{2n-2}p_0\,p_{2n-2} + (2n-1)\,p_0\,p_{2n-2} + \cdots
= (2n+1)\,p_{2n} + \left[\frac{(4n^2-1)n}{3} + 2n-1\right]p_0\,p_{2n-2} + \cdots$$
The distribution of coefficients is now clear. Indeed, without using the recurrence to compute all the $c(n,k)$, we can predict the linear part of each $c(n,k)$. For example $c(5,0) = 9p_8 + 91p_0p_6 + \cdots$, and all other $c(5+k,k)$ will also start with $p_8$ and $p_0p_6$. For more advanced techniques on how to compute these coefficients by using generating functions, see [13]. We now examine the coefficients in the lower rows. $\square$

Proposition 4. For $m\ge 0$ and $k\ge 1$ we have
$$c(m+1+k,k) = \alpha(m,k)\,p_{2m} + f_{mk}(p_0,\ldots,p_{2m-2}), \qquad (15)$$
where $f_{mk}$ is a nonlinear function and
$$\alpha(m,k) = (-1)^k k!\left[2m+1 + \sum_{l=1}^{k}\binom{2m+2l+1}{2m}\right]. \qquad (16)$$
Proof. From the recurrence relation (9) we first have
$$c(m+1+k,k) = -k\,c(m+k,k-1) + \sum_{l=0}^{m}\binom{2m+2k+1}{2l}c(m+k-l,k)\,p_{2l},$$
and by pulling out the coefficients of $p_{2m}$ only we obtain
$$\alpha(m,k) = -k\,\alpha(m,k-1) + \binom{2m+2k+1}{2m}c(k,k).$$
By Proposition 1, $c(k,k) = (-1)^k k!$, and so
$$\alpha(m,k) + k\,\alpha(m,k-1) = (-1)^k k!\binom{2m+2k+1}{2m},$$
that is,
$$(-1)^k\frac{1}{k!}\,\alpha(m,k) - (-1)^{k-1}\frac{1}{(k-1)!}\,\alpha(m,k-1) = \binom{2m+2k+1}{2m}.$$
Adding up the left hand sides yields a telescopic sum which reduces to
$$(-1)^k\frac{1}{k!}\,\alpha(m,k) = \alpha(m,0) + \sum_{l=1}^{k}\binom{2m+2l+1}{2m}.$$
By Proposition 3, we have $\alpha(m,0) = 2m+1$, and thus (16). $\square$
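Formula (16) is straightforward to evaluate; as a quick consistency check (ours, with assumed names), the values $\alpha(m,k)$ reproduce the leading coefficients already visible in the table of Section 2.

```python
# Sketch (ours): alpha(m,k) from (16), compared with the table of Section 2.
from math import comb, factorial

def alpha(m, k):
    # (16): alpha(m,k) = (-1)^k k! [ 2m+1 + sum_{l=1}^{k} C(2m+2l+1, 2m) ]
    return (-1)**k * factorial(k) * (2*m + 1 + sum(comb(2*m + 2*l + 1, 2*m)
                                                   for l in range(1, k + 1)))

print(alpha(1, 0), alpha(1, 1))   # 3, -13  : coefficients of p2 in c(2,0) and c(3,1)
print(alpha(2, 1), alpha(2, 2))   # -40, 332: coefficients of p4 in c(4,1) and c(5,2)
```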
From (8) we can express the characteristic function $u(b,\lambda)$ as
$$u(b,\lambda) = \sum_{k\ge 0} a_k\,\frac{\lambda^k}{k!}, \quad\text{where}\quad a_k = \sum_{n\ge 0} c(n,k)\,\frac{b^{2n+1}}{(2n+1)!}, \qquad (17)$$
and recall that the eigenvalues $\lambda_n$ are the zeros of $u(b,\lambda)$. We now show that the data provided by $\{a_n\}_{n\ge 0}$ is equivalent to the set of eigenvalues $\{\lambda_n\}_{n\ge 0}$. To do so, we need to express $u(b,\lambda)$ in terms of the eigenvalues only.

Theorem 1. Given all the eigenvalues $\{\lambda_n\}_{n\ge 0}$, and assuming that $0\notin\{\lambda_n\}_{n\ge 0}$, the Taylor coefficients $a_n$ in (17) can be computed explicitly by
$$a_0 = b\prod_{n\ge 0}\frac{\lambda_n b^2}{(n+1)^2\pi^2} \quad\text{and}\quad a_k = (-1)^k k!\,a_0\sum_{0\le i_1<\cdots<i_k}\frac{1}{\lambda_{i_1}\lambda_{i_2}\cdots\lambda_{i_k}} \quad\text{for } k=1,2,\ldots$$
Proof. From the integral representation (3) and since the kernel $K(b,\cdot)$ is continuous [6], it follows that the characteristic function $u(b,\cdot)$ is entire of order $1/2$ and type $b$, and thus by Hadamard's factorization theorem it can be represented uniquely by its zeros [12, p. 67], which we recall are simple:
$$u(b,\lambda) = u(b,0)\prod_{n\ge 0}\left(1-\frac{\lambda}{\lambda_n}\right) \quad\text{in case all } \lambda_n\ne 0, \qquad (18)$$
and in case $0\in\{\lambda_n\}_{n\ge 0}$,
$$u(b,\lambda) = \partial_\lambda u(b,0)\,\lambda\prod_{\substack{n\ge 0\\ n\ne j}}\left(1-\frac{\lambda}{\lambda_n}\right) \quad\text{in case } \lambda_j = 0. \qquad (19)$$
The uniform convergence of the infinite product on any compact subset of $\mathbb{C}$ follows from the asymptotics of the eigenvalues in the Dirichlet case [12, p. 66],
$$\sqrt{\lambda_n} = \frac{\pi}{b}(n+1) + \frac{1}{\pi n}\,g + o\!\left(\frac{1}{n}\right), \quad\text{where } g = \frac{1}{2}\int_0^b p(x)\,dx.$$
It is enough to work with (18) since (19) is analogous. Thus by (26) we have, for all $\lambda\in\mathbb{C}$,
$$u(b,\lambda) = \sum_{k\ge 0} a_k\,\frac{\lambda^k}{k!} = u(b,0)\prod_{n\ge 0}\left(1-\frac{\lambda}{\lambda_n}\right). \qquad (20)$$
By expanding the right hand side and identifying the coefficients of $\lambda^k$, it follows that
$$a_0 = u(b,0), \quad a_1 = -u(b,0)\sum_{0\le n}\frac{1}{\lambda_n}, \quad a_2 = 2u(b,0)\sum_{0\le i<j}\frac{1}{\lambda_i\lambda_j}, \quad a_3 = -6u(b,0)\sum_{0\le i<j<k}\frac{1}{\lambda_i\lambda_j\lambda_k}, \qquad (21)$$
and more generally
$$a_k = (-1)^k k!\,u(b,0)\sum_{0\le i_1<\cdots<i_k}\frac{1}{\lambda_{i_1}\lambda_{i_2}\cdots\lambda_{i_k}}. \qquad (22)$$
The factor $k!$ accounts for the number of permutations of $\lambda_{i_1},\ldots,\lambda_{i_k}$. It remains to see that in (22) the value $u(b,0)$ also depends on the $\lambda_n$. To this end, use the Riemann–Lebesgue lemma and the continuity of $K(b,\cdot)$ to deduce from (3) that $u(b,\lambda)\sim\frac{1}{\sqrt{\lambda}}\sin(b\sqrt{\lambda})$ as $\lambda\to\infty$, in other words
$$\frac{u(b,\lambda)}{\frac{1}{\sqrt{\lambda}}\sin(b\sqrt{\lambda})} \to 1 \quad\text{as } \lambda\to\infty. \qquad (23)$$
Obviously we have
$$\frac{1}{\sqrt{\lambda}}\sin(b\sqrt{\lambda}) = b\prod_{n\ge 0}\left(1-\frac{\lambda b^2}{(n+1)^2\pi^2}\right), \qquad (24)$$
and so combining (24) and (18) we have
$$\frac{u(b,\lambda)}{\frac{1}{\sqrt{\lambda}}\sin(b\sqrt{\lambda})} = \frac{u(b,0)}{b}\,\frac{\prod_{n\ge 0}\left(1-\frac{\lambda}{\lambda_n}\right)}{\prod_{n\ge 0}\left(1-\frac{\lambda b^2}{(n+1)^2\pi^2}\right)} = \frac{u(b,0)}{b}\prod_{n\ge 0}\frac{1-\frac{\lambda}{\lambda_n}}{1-\frac{\lambda b^2}{(n+1)^2\pi^2}}.$$
The limit in (23) and then $a_0 = u(b,0)$ from (21) yield, respectively,
$$1 = \frac{u(b,0)}{b}\prod_{n\ge 0}\frac{(n+1)^2\pi^2}{\lambda_n b^2} \quad\text{and}\quad a_0 = b\prod_{n\ge 0}\frac{\lambda_n b^2}{(n+1)^2\pi^2}. \qquad (25)$$
Thus we can update (18) to a formula that depends on the $\lambda_n$ only:
$$u(b,\lambda) = b\prod_{n\ge 0}\frac{\lambda_n b^2}{(n+1)^2\pi^2}\prod_{n\ge 0}\left(1-\frac{\lambda}{\lambda_n}\right) = b\prod_{n\ge 0}\frac{b^2}{(n+1)^2\pi^2}\,(\lambda_n-\lambda).$$
Next we examine the approximation of the Taylor coefficients. In practice, if we use only the first $N+1$ eigenvalues $\lambda_0,\ldots,\lambda_N$, then the series in (22) terminates. Thus if we define
$$\tilde a_k = (-1)^k k!\,u(b,0)\sum_{0\le i_1<\cdots<i_k\le N}\frac{1}{\lambda_{i_1}\lambda_{i_2}\cdots\lambda_{i_k}},$$
then the error, which is the remainder, can be estimated for $N$ large enough, and by (20) we have
$$\sum_{0\le n}\frac{1}{\lambda_n} - \sum_{0\le j\le N}\frac{1}{\lambda_j} = \sum_{N+1\le j}\frac{1}{\lambda_j} = O\!\left(\frac{1}{N}\right),$$
06n kn 06j6N kj Nþ16j kj N since by the integral test we have
X 1 Nþ16j
j
2
Z 6 N
1
1 1 dx ¼ : x2 N
Similarly,
$$\sum_{0\le i_1<\cdots<i_k}\frac{1}{\lambda_{i_1}\lambda_{i_2}\cdots\lambda_{i_k}} - \sum_{0\le i_1<\cdots<i_k\le N}\frac{1}{\lambda_{i_1}\lambda_{i_2}\cdots\lambda_{i_k}} = \sum_{j=1}^{k}\ \sum_{\substack{0\le i_1<\cdots<i_k\\ i_{j-1}\le N< i_j}}\frac{1}{\lambda_{i_1}\lambda_{i_2}\cdots\lambda_{i_k}}$$
leads to
$$\sum_{j=1}^{k}\ \sum_{\substack{0\le i_1<\cdots<i_k\\ i_{j-1}\le N< i_j}}\frac{1}{\lambda_{i_1}\cdots\lambda_{i_k}} \le \sum_{j=1}^{k}\left(\sum_{0\le i_1<\cdots<i_{j-1}}\frac{1}{\lambda_{i_1}\cdots\lambda_{i_{j-1}}}\right)\left(\sum_{N+1\le i_j<\cdots<i_k}\frac{1}{\lambda_{i_j}\cdots\lambda_{i_k}}\right) = O\!\left(\frac{1}{N}\right).$$
The slowest decaying remainder corresponds to $k=j$, which implies
$$\sum_{0\le i_1<\cdots<i_k}\frac{1}{\lambda_{i_1}\cdots\lambda_{i_k}} - \sum_{0\le i_1<\cdots<i_k\le N}\frac{1}{\lambda_{i_1}\cdots\lambda_{i_k}} = O\!\left(\frac{1}{N}\right).$$
Combining the above estimates we have

Proposition 5. If we use the first $N+1$ eigenvalues only, then we have the error estimate
$$a_k - \tilde a_k = O\!\left(\frac{1}{N}\right).$$
Observe that one can improve on the precision of $\tilde a_k$ by using the asymptotics of the eigenvalues given by (20) to supplement the missing eigenvalues. The above propositions are the main tools to be used in the inverse problem.
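Theorem 1 and Proposition 5 translate directly into a computation: truncate the product defining $a_0$ and the symmetric sums in (22) to the first $N+1$ eigenvalues. The sketch below is ours (the eigenvalue list is a placeholder); it accumulates the elementary symmetric functions of the reciprocals $1/\lambda_n$.

```python
# Sketch (ours): approximate Taylor coefficients a_k of u(b, .) from the first
# N+1 eigenvalues, following Theorem 1 and Proposition 5.
import math

def taylor_from_eigenvalues(lams, b, kmax):
    # truncated product for a_0:  a_0 ~ b * prod_n lambda_n b^2 / ((n+1)^2 pi^2)
    a0 = b
    for n, lam in enumerate(lams):
        a0 *= lam * b**2 / ((n + 1)**2 * math.pi**2)
    # elementary symmetric functions e_k of the reciprocals 1/lambda_n
    e = [1.0] + [0.0] * kmax
    for lam in lams:
        x = 1.0 / lam
        for k in range(kmax, 0, -1):
            e[k] += x * e[k - 1]
    # (22) truncated:  a_k ~ (-1)^k k! a_0 e_k
    return [(-1)**k * math.factorial(k) * a0 * e[k] for k in range(kmax + 1)]

# Check with p = 0, b = 1: lambda_n = ((n+1) pi)^2 and u(1,.) = sin(sqrt(.))/sqrt(.),
# whose exact Taylor data is a_k = (-1)^k k!/(2k+1)!; the error is O(1/N) as in Proposition 5.
lams = [((n + 1) * math.pi)**2 for n in range(200)]
print(taylor_from_eigenvalues(lams, 1.0, 3))   # ~ [1, -1/6, 1/60, -1/840]
```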
3. The inverse problem

Recall that the eigenvalues of (2) are the zeros of the characteristic function, $u(b,\lambda_n)=0$, see (7), where
$$u(b,\lambda) = \sum_{k\ge 0} a_k\,\frac{\lambda^k}{k!}, \quad\text{where}\quad a_k = \sum_{n\ge 0} c(n,k)\,\frac{b^{2n+1}}{(2n+1)!}. \qquad (26)$$
Now that we have found the structure of the coefficients $c(n,k)$ in Section 2, the direct problem reads
$$p(x) \Rightarrow \{p_{2k}\}_{k=0}^{m} \Rightarrow \{c(n,k)\}_{n,k\ge 0} \Rightarrow \{a_k\}_{k\ge 0} \Rightarrow \{\lambda_k\}_{k\ge 0}, \qquad (27)$$
and the inverse problem means reversing the procedure in (27), starting with $\{\lambda_k\}$ or, equivalently, $\{a_k\}$. In experimental physics, one can observe the oscillations of a string at the end $x=b$ and measure the function $u(b,\cdot)$ over an interval around $\lambda=0$. Thus in principle one can compute the Taylor coefficients $a_k$ directly if the function $u(b,\lambda)$ is available, and otherwise one needs the eigenvalues. In this case, we can use the first $m+1$ Taylor coefficients $a_k$ found in (26):
$$\sum_{n\ge 0} c(n,k)\,\frac{b^{2n+1}}{(2n+1)!} = a_k, \qquad k=0,\ldots,m. \qquad (28)$$
In fact, since we have $m+1$ unknowns, it is enough to use any $m+1$ rows of the matrix $c(n,k)$. From Proposition 1, (28) reduces to
$$c(k,k)\frac{b^{2k+1}}{(2k+1)!} + \sum_{n=k+1}^{m+1} c(n,k)\frac{b^{2n+1}}{(2n+1)!} + \sum_{n=m+2}^{N} c(n,k)\frac{b^{2n+1}}{(2n+1)!} + \sum_{n=N+1}^{\infty} c(n,k)\frac{b^{2n+1}}{(2n+1)!} = a_k,$$
which we decompose into three blocks in order to set up iterations:
$$T_m(p) + U_N(p) + R_N(p) = \mathbf{a}_m, \qquad (29)$$
where the vectors are
$$p = (p_0, p_2, \ldots, p_{2m}) \quad\text{and}\quad \mathbf{a}_m = \left(a_k - (-1)^k k!\,\frac{b^{2k+1}}{(2k+1)!}\right)_{k=0}^{m},$$
while the updating vector is
$$U_N(p) = (u_k(p))_{k=0}^{m}, \quad\text{where}\quad u_k(p) = \sum_{n=m+2}^{N} c(n,k)\,\frac{b^{2n+1}}{(2n+1)!} \ \text{ and } \ N\ge m+2. \qquad (30)$$
$T_m(p)\in\mathbb{C}^{m+1}$ is also a vector function, defined by
$$T_m(p) = \left(\sum_{n=k+1}^{m+1} c(n,k)\,\frac{b^{2n+1}}{(2n+1)!}\right)_{k=0}^{m},$$
and finally the remainder is
$$R_N(p) = \left(\sum_{n=N+1}^{\infty} c(n,k)\,\frac{b^{2n+1}}{(2n+1)!}\right)_{k=0}^{m}.$$
Since we are dealing with convergent series, the remainder $R_N(p)\to 0$ as $N\to\infty$, and so we can truncate the system (29) by dropping the remainder and solve
$$T_m(p) + U_N(p) = \mathbf{a}_m. \qquad (31)$$
In order to use fixed point iterations, we need to invert the operator $T_m$. The key feature of the algorithm is to compute linearly the highest coefficient $p_{2k}$ from the $(m-k)$th equation. Observe now that the sum, which contains $c(k+1,k),\ldots,c(m,k)$, includes only the coefficients $p_0,\ldots,p_{2m-2k-2}$, and by Proposition 4
$$c(m+1,k) = \alpha(m-k,k)\,p_{2(m-k)} + f_{m-k,k}(p_0,\ldots,p_{2m-2k-2}). \qquad (32)$$
Thus we can decompose the components of $T_m$ into the highest coefficient $p_{2k}$ and all the remaining lower order terms, namely
$$T_m(p) = \left(\frac{b^{2m+3}}{(2m+3)!}\,\alpha(m-k,k)\,p_{2(m-k)} + F_{m-k,k}(p_0,\ldots,p_{2m-2k-2})\right)_{k=0}^{m}, \qquad (33)$$
where $\alpha(m-k,k)$ is defined by (16) and
$$F_{m-k,k}(p_0,\ldots,p_{2m-2k-2}) = \frac{b^{2m+3}}{(2m+3)!}\,f_{m-k,k}(p_0,\ldots,p_{2m-2k-2}) + \sum_{n=k+1}^{m} c(n,k)\,\frac{b^{2n+1}}{(2n+1)!}.$$
We now discuss the main properties of the map $T_m$.

Theorem 2. For any given $r\in\mathbb{C}^{m+1}$, the equation $T_m(p)=r$ has a unique solution $p\in\mathbb{C}^{m+1}$. Furthermore $p = T_m^{-1}(r)$ can be found explicitly and the operator $T_m^{-1}:\mathbb{C}^{m+1}\to\mathbb{C}^{m+1}$ is continuous.

Proof. To show that $T_m$ is a bijection of $\mathbb{C}^{m+1}$, we use the triangular form of $T_m$ and Proposition 1. Assume that $T_m(p)=T_m(q)$; then starting at row $k=m$, as in Gaussian elimination,
$$\frac{b^{2m+3}}{(2m+3)!}(-1)^m(m+1)!\,p_0 = \frac{b^{2m+3}}{(2m+3)!}(-1)^m(m+1)!\,q_0 \ \Rightarrow\ p_0 = q_0. \qquad (34)$$
The previous row, $k=m-1$, contains $c(m,m-1)\frac{b^{2m+1}}{(2m+1)!} + c(m+1,m-1)\frac{b^{2m+3}}{(2m+3)!}$, which by Proposition 1 leads to
$$(-1)^{m-1}m!\,\{p_0-q_0\}\frac{b^{2m+1}}{(2m+1)!} + (-1)^{m-1}(m-1)!\,\{p_0^2-q_0^2\}\,\frac{(m-1)^2+3(m-1)+2}{2}\,\frac{b^{2m+3}}{(2m+3)!}$$
$$+ (-1)^{m-1}(m-1)!\left(4(m-1)^3+21(m-1)^2+35(m-1)+18\right)\frac{b^{2m+3}}{(2m+3)!\,6}\,\{p_2-q_2\} = 0. \qquad (35)$$
Since $p_0=q_0$, then $p_2=q_2$, and by induction we can show $p_{2k}=q_{2k}$ for all $0\le k\le m$, i.e. $p=q$. To see that $T_m$ is surjective, use (33) to write $T_m(p)=\mathbf{a}_m$ as
$$\frac{b^{2m+3}}{(2m+3)!}\,\alpha(m-k,k)\,p_{2(m-k)} + F_{m-k,k}(p_0,\ldots,p_{2m-2k-2}) = a_{m-k},$$
and then deduce a formula for $T_m^{-1}$:
$$p_{2(m-k)} = \frac{1}{\alpha(m-k,k)}\,\frac{(2m+3)!}{b^{2m+3}}\left[a_{m-k} - F_{m-k,k}(p_0,\ldots,p_{2m-2k-2})\right], \qquad (36)$$
since $\alpha(m-k,k)\ne 0$ by (16).
To show the continuity of $T_m^{-1}$, we need to prove that if $\mathbf{a}_m^{(j)}\to\mathbf{a}_m$ then $p^{(j)}\to p$, where $p^{(j)} = T_m^{-1}(\mathbf{a}_m^{(j)})$ and $p = T_m^{-1}(\mathbf{a}_m)$. To this end, we examine each component of
$$T_m(p^{(j)}) - T_m(p) = \mathbf{a}_m^{(j)} - \mathbf{a}_m.$$
Starting at row $k=m$, we have
$$\frac{b^{2m+3}}{(2m+3)!}(-1)^m(m+1)!\,p_0^{(j)} - \frac{b^{2m+3}}{(2m+3)!}(-1)^m(m+1)!\,p_0 = a_k^{(j)} - a_k,$$
and since $a_k^{(j)} - a_k\to 0$ as $j\to\infty$, we deduce that
$$p_0^{(j)}\to p_0 \quad\text{as } j\to\infty.$$
For the previous row, $k=m-1$, we have by (35)
$$(-1)^{m-1}m!\,\{p_0^{(j)}-p_0\}\frac{b^{2m+1}}{(2m+1)!} + (-1)^{m-1}(m-1)!\,\{(p_0^{(j)})^2-p_0^2\}\,\frac{(m-1)^2+3(m-1)+2}{2}\,\frac{b^{2m+3}}{(2m+3)!}$$
$$+ (-1)^{m-1}(m-1)!\left(4(m-1)^3+21(m-1)^2+35(m-1)+18\right)\frac{b^{2m+3}}{(2m+3)!\,6}\,\{p_2^{(j)}-p_2\} = a_{m-1}^{(j)} - a_{m-1}.$$
Again, since $a_{m-1}^{(j)} - a_{m-1}\to 0$ and, by the previous step, $p_0^{(j)} - p_0\to 0$ as $j\to\infty$, we deduce that $p_2^{(j)} - p_2\to 0$ as $j\to\infty$. By induction, replacing in the previous rows, we can show that, for $k=0,\ldots,m$,
$$p_{2k}^{(j)} - p_{2k}\to 0 \quad\text{as } j\to\infty. \qquad\square$$
Thus we can recast (31) as a fixed point equation:
$$p = T_m^{-1}\left(\mathbf{a}_m - U_N(p)\right).$$
If $m$ is large enough, then the remainder $U_N(p) = o(1)$ by (30). The continuity of $T_m^{-1}$ then leads to a first approximation.

Proposition 6. If $m$ is large, then a first approximation is given by
$$p \simeq T_m^{-1}(\mathbf{a}_m). \qquad (37)$$
For the error analysis we refer to [3,10,15]. In case $m$ is small, it is still possible to take advantage of the first approximation formula (37) by choosing higher rows $i,\ldots,i+m$, where $i$ is large enough, which has the same effect as a large $m$. The drawback is the need for the corresponding high order Taylor coefficients, i.e. $a_i,\ldots,a_{i+m}$. However, when $m$ is small and only the first $a_k$, $k=0,\ldots,m$, are available, the first approximation (37) may not be good enough, since the remainder $U_N(p)$ may not be small. To improve on the precision of (37) we need to include more terms in order for the truncation error to drop significantly. To this end, consider
$$T_m(p) + U_N(p) = \mathbf{a}_m, \quad\text{where } N\le\infty. \qquad (38)$$
The case $N=\infty$ corresponds to the exact solution as in (28), while $N<\infty$ corresponds to its approximations. We now examine the existence and uniqueness of the inverse problem as defined by (28).

Theorem 3. For small $p\in\mathbb{C}^{m+1}$, the system (28) has a unique solution.

Proof. Using the vector $p = (p_0, p_2, \ldots, p_{2m})\in\mathbb{C}^{m+1}$ we can recast the system (28) as
$$A_m\,p + \Phi(p) = \mathbf{a}_m, \qquad (39)$$
where the components of $\Phi$ are infinite series. The entries of the matrix $A_m$ are obtained from the linear part of $c(n,k)$ given by (15),
$$c(m+1+k,k) = \alpha(m,k)\,p_{2m} + f_{mk}(p_0,\ldots,p_{2m-2}),$$
where $A_m$ is $(m+1)\times(m+1)$ and its entries are explicitly given by
$$A_m = \left[\frac{\alpha(j,k)\,b^{2j+2k+3}}{(2j+2k+3)!}\right]_{0\le k,j\le m}.$$
It is easy to verify that $\det(A_m)\ne 0$ for small $m=1,2,3,4,5$, and so $A_m^{-1}$ exists and is bounded. For example, when $b=1$,
$$A_4 = \begin{pmatrix}
\frac{1}{6} & \frac{1}{40} & \frac{1}{1008} & \frac{1}{51840} & \frac{1}{4435200}\\
-\frac{1}{60} & -\frac{13}{5040} & -\frac{1}{9072} & -\frac{13}{5702400} & -\frac{29}{1037836800}\\
\frac{1}{840} & \frac{17}{90720} & \frac{83}{9979200} & \frac{79}{444787200} & \frac{487}{217945728000}\\
-\frac{1}{15120} & -\frac{1}{95040} & -\frac{31}{64864800} & -\frac{2269}{217945728000} & -\frac{47}{352864512000}\\
\frac{1}{332640} & \frac{5}{10378368} & \frac{173}{7783776000} & \frac{3637}{7410154752000} & \frac{16103}{2534272925184000}
\end{pmatrix}.$$
For instance the leading $3\times 3$ block gives $\det(A_2) = -1/11\,882\,233\,440\,000$, and $\det(A_3)$, $\det(A_4)$ are likewise explicit nonzero rationals, so both $A_3^{-1}$ and $A_4^{-1}$ exist, and for any finite $m$ it can be checked directly that $\det(A_m)\ne 0$. Thus we can rewrite Eq. (39) as a fixed point mapping
$$p = -A_m^{-1}\Phi(p) + A_m^{-1}(\mathbf{a}_m),$$
where
$$\Phi(p) = (F_0(p), F_1(p), \ldots, F_m(p)), \qquad F_k(p) = \sum_{j\ge 1} f_{jk}(p_0,\ldots,p_{2j-2})\,\frac{b^{2j+2k+3}}{(2j+2k+3)!} \quad\text{for } k=0,1,\ldots,m.$$
We now show that the mapping $p\mapsto -A_m^{-1}\Phi(p) + A_m^{-1}(\mathbf{a}_m)$ has a fixed point. Observe that, since the linear part is taken care of by $A_m$, the map $\Phi$ contains only the higher order powers of $p_0,p_2,\ldots,p_{2m}$. Thus it is readily seen that $\Phi(0)=0$, the $\frac{\partial F_i}{\partial p_{2j}}(p)$ are entire functions of the $p_{2k}$, and so the Jacobian at $p=0$ is
$$D\Phi(0) = \left[\frac{\partial F_i(0)}{\partial p_{2j}}\right]_{0\le i,j\le m} = 0.$$
If $B(0,c)$ denotes the ball of center $0$ and radius $c$, then by continuity, for any $\varepsilon>0$ there exists $c>0$ such that for all $p\in B(0,c)\subset\mathbb{C}^{m+1}$ we have
$$\|D\Phi(p)\|\le\varepsilon.$$
Next choose $\varepsilon$ such that
$$\|D A_m^{-1}\Phi(p)\| = \|A_m^{-1}D\Phi(p)\| \le \|A_m^{-1}\|\,\|D\Phi(p)\| \le \|A_m^{-1}\|\,\varepsilon \le q < 1. \qquad (40)$$
Now if we let $\mathbf{a}_m$ be such that $\|A_m^{-1}\mathbf{a}_m\| < (1-q)c$ and $p\in B(0,c)$, then
$$\|{-A_m^{-1}\Phi(p)} + A_m^{-1}\mathbf{a}_m\| \le q\|p\| + \|A_m^{-1}\mathbf{a}_m\| \le qc + (1-q)c \le c,$$
which means that $p\mapsto -A_m^{-1}\Phi(p) + A_m^{-1}\mathbf{a}_m$ maps $B(0,c)$ into itself, and furthermore it is a contraction there. Indeed, by (40) and the mean value theorem for vector functions,
$$\|(-A_m^{-1}\Phi(p) + A_m^{-1}\mathbf{a}_m) - (-A_m^{-1}\Phi(q) + A_m^{-1}\mathbf{a}_m)\| = \|A_m^{-1}\Phi(q) - A_m^{-1}\Phi(p)\| \le q\,\|p-q\|,$$
and so (28) has a unique solution in $B(0,c)$ by the fixed point theorem for vector mappings. $\square$

Observe that we could also use $T_m$ instead of $A_m$ in (29), since they are both bijections:
$$p + U_N(T_m^{-1})(p) = \mathbf{a}_m.$$
The main advantage is that one does not have to check the invertibility of $A_m$, since $T_m^{-1}$ always exists. However, (40) and the mean value theorem will not be as simple, since $DU_N(0)\ne 0$, $DT_m^{-1}(0)\ne 0$, and they both contain triangular blocks of the linear part $A_m$.
p þ U N ðT 1 m ÞðpÞ ¼ am : The main advantage is one does not have to check the invertibility of Am , since T 1 m always exists. However (40) and the mean value theorem will not be as simple since DU N ð0Þ – 0; DT 1 m ð0Þ – 0 and they both contain triangular blocks of the linear part Am . 4. Examples Without loss of generality, we assume that p0 ¼ 0, since adding a constant to the potential amounts to a translation of the spectrum. On the numerical side, we use Newton’s method, and more precisely BROYDEN ALGORITHM 10.2, that can be found in [4]. We shall set the system to recover four coefficients fp2 ; p4 ; p6 ; p8 g from the first four Taylor coefficients fa0 ; a1 ; a2 ; a3 g We use ð1; 1; 1; 1Þ as our initial guess for Newton’s method, and a better alternative would be the first
approximation (37). The system will use the first 20 columns, i.e. $N=20$, and the first 4 rows of the matrix $c(n,k)$. In all examples below we take (2) with $b=1$. The error in the numerical experiments is expressed in the form $e{-}k$, which means $10^{-k}$.

Example 1. If $p(x) = 5x^2/2 - 24x^4/4! + 72x^6/6! - 80x^8/8!$, i.e. $p_2=5$, $p_4=-24$, $p_6=72$, $p_8=-80$, then the first 4 Taylor coefficients are
a[0] =  9161726309178363023881304963780670299622777801293 / 8287416058075971966715381628054046297292800000000,
a[1] = -267154853142136351248919846898541310108358562829 / 1506802919650176721220978477828008417689600000000,
a[2] =  974985681108329002854910999263202452731353381957 / 55940058392012810775328825989364812506726400000000,
a[3] = -9367788285261886744868216307056800488751027 / 7597970579560313857430061254922215620608000000.
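A run comparable to the one reported below can be set up with any off-the-shelf quasi-Newton solver. The following call is our own illustration, not the paper's code: it assumes the `residual` function from the sketch after (31) is available in the same session, uses the coefficients above rounded to floats, and keeps $p_0=0$ fixed; with more digits of the data the recovered $p_8$ sharpens accordingly.

```python
# Sketch (ours): recover (p2, p4, p6, p8) with SciPy's Broyden solver, reusing the
# `residual` function defined after Eq. (31).
import numpy as np
from scipy.optimize import root

a = [1.1054985, -0.1772991, 0.0174291, -0.0012329]   # the rational data above, rounded
G = lambda q: np.asarray(residual([0.0, *q], a))     # p0 = 0; unknowns q = (p2, p4, p6, p8)
sol = root(G, x0=np.ones(4), method='broyden1', tol=1e-8)
print(sol.x)                                         # compare with the iteration table below
```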
Running Newton's method ("Broyden's method for nonlinear systems") yields:

Iteration   p_2          p_4           p_6           p_8            Error
1           5.19017046   22.79810920   227.7105275   6363.953325    6.366109e+03
2           5.10556580   23.43215335   78.73199734   2671.101687    3.695856e+03
3           4.99981829   24.17591656   87.20686501   631.8897153    3.307157e+03
4           4.99952948   24.03443672   76.07693902   240.9466165    3.911015e+02
5           5.00006912   24.00389604   72.15242591   83.37202807    1.576235e+02
6           4.99999035   23.99945824   71.97890268   79.53592665    3.840027e+00
7           5.00000001   24.00000066   72.00002537   80.00054226    4.650958e-01
8           5.00000000   23.99999994   71.99999648   79.99989198    6.509271e-04
9           5.00000000   23.99999993   71.99999630   79.99988796    4.018640e-06
Example 2. Consider $p(x) = x^2$, i.e. $p_2=2$, $p_4=0$, $p_6=0$, $p_8=0$. Here we have an explicit characteristic function, $y(1,\lambda) = \mathrm{WhittakerM}(\lambda/4, 1/4, 1)$, where
$$\mathrm{WhittakerM}(m,l,z) = z^{l+1/2}e^{-z/2}\,{}_1F_1\!\left(l-m+\tfrac12;\ 2l+1;\ z\right).$$
Here we look for four coefficients, $p_2,\ldots,p_8$, and employ the same system as in Example 1, but with a new set of Taylor coefficients of $\mathrm{WhittakerM}(\lambda/4,1/4,1)$:
a[0] =  141125452021249062099118577676913 / 134315787666209029452988416000000,
a[1] = -184033692855216865029161541993452828613989 / 1070717296442823326218162028534562816000000,
a[2] =  708609065951010338040975053761177672422503047270369 / 41573952146214020808089691864971061594842726400000000,
a[3] = -2483175265053812573868334936240394286475049747 / 2049391311555457991131306904514002839142400000000.
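This Taylor data can be regenerated numerically. The short sketch below is ours; it relies on mpmath's Whittaker function and Taylor expansion routine, under the identification $y(1,\lambda) = \mathrm{WhittakerM}(\lambda/4,1/4,1)$ stated above.

```python
# Sketch (ours): Taylor coefficients a_k of y(1, lam) = WhittakerM(lam/4, 1/4, 1) at lam = 0.
from mpmath import mp, whitm, taylor, factorial

mp.dps = 30
coeffs = taylor(lambda lam: whitm(lam / 4, mp.mpf(1) / 4, 1), 0, 3)   # y, y', y''/2!, y'''/3!
a = [factorial(k) * coeffs[k] for k in range(4)]                      # a_k = k! * k-th coefficient
print(a)   # compare with a[0], ..., a[3] above
```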
The algorithm yields:

Iteration   p_2           p_4          p_6          p_8          Error (e-k = 10^{-k})
1           2.01366402    0.24978252   23.9099744   139.779594   1.410031e+02
2           2.00886595    0.14527445   14.2312364   49.1329382   9.116197e+01
3           2.080009541   0.01304463   0.95033669   34.7416749   8.523764e+01
4           2.0000426     0.00592891   0.43415861   15.9088492   1.883990e+01
5           2.00000008    0.00000386   0.00014210   0.00362756   1.591114e+01
6           2.00000000    0.00000009   0.00000186   0.00001625   3.646657e-03
7           2.00000000    0.00000000   0.00000005   0.00000048   1.684266e-05
8           2.00000000    0.00000000   0.00000000   0.00000000   4.853951e-07
9           2.00000000    0.00000000   0.00000000   0.00000000   8.887165e-10

Iteration number 9 gives the solution 2.00000000, 0.00000000, 0.00000000, 0.00000000, to within tolerance 1.0000000000e-08. Process is complete.
Example 3. Consider $p(x) = 6x^2/2 + 35x^4/4! - 21x^6/6! + 34x^8/8!$, i.e. $p_0=0$, $p_2=6$, $p_4=35$, $p_6=-21$, $p_8=34$, and so
a[0] =  14038125427501134826150893312950368003398404024142873628433 / 15809795036628136734821368767285130003575122323046400000000,
a[1] = -4586639031992508287742194161957085488528009562393211962947 / 29539717561881680306539052220342627082728867102720000000000,
a[2] =  50199307482468247282049322275978910629669425283236500150799 / 3168134708511810212876313350631746754622670996766720000000000,
a[3] = -1208710878906703597071390620803865520307350519581515361497 / 1056044902837270070958771116877248918207556998922240000000000.
The numerical results are:

Iteration   p_2          p_4           p_6            p_8              Error (e-k = 10^{-k})
1           5.41607966   39.43773823   873.09022321   12650.21846207   1.267944e+04
2           6.41404464   41.09572850   238.44951491   22296.21820304   9.666855e+03
3           5.98395847   39.05715849   398.02927901   13940.88477752   8.356857e+03
4           5.98355272   37.22729518   247.39435135   8605.17279813    5.337838e+03
5           5.99665098   34.80006855   12.78110077    155.68389468     8.763998e+03
6           5.99932203   34.96071958   19.43911886    0.59556223       1.552313e+02
7           5.99998054   34.99884196   20.95265986    32.91655236      3.354630e+01
8           6.00000043   35.00002505   21.00100046    34.02230509      1.106810e+00
9           6.00000001   35.00000040   21.00001613    34.00036294      2.196423e-02
10          6.00000000   35.00000001   21.00000046    34.00001391      3.493811e-04
11          6.00000000   35.00000001   21.00000046    34.00001379      1.183671e-07
12          6.00000000   35.00000001   21.00000046    34.00001380      9.443305e-10
Example 4. Consider the non-self-adjoint problem with $p(x) = (2+7i)x^4/4! + (3-5i)x^6/6!$, and thus $p_0=0$, $p_2=0$, $p_8=0$. We therefore look for two coefficients only, $p_4$ and $p_6$, which are complex valued. The data
a[0] = 1.0020304886596503280 + 0.0068555166420602589292 i,
a[1] = -0.16689296160739168856 - 0.00076086354015320656463 i,
is extracted from the explicit solution, see [16]:
$$y(1,\lambda) = \exp\!\left(\frac{i}{16}\,\frac{1}{(15-25i)^{1/2}}\bigl(70+20i+(5/3+i)\bigr)\right)\mathrm{HeunB}(1/2,\ a,\ b,\ c),$$
$$a = (35/2+5i)\,30^{1/2}/(15-25i)^{3/4}, \qquad b = (16875/8 - 2625i/2)/(15-25i)^{3/2} - \lambda\,30^{1/2}(15-25i)^{1/4}\,i, \qquad c = \frac{i}{60}\,30^{1/2}(15-25i)^{1/4}.$$
The system in this case reduces to
$$\begin{aligned}
A ={}& (1/1008)p_4 + (1/51840)p_6 + (13583/76718377335914496000)\,p_6p_4^3 + (1/3937521147078456115200000000)\,p_6^5\\
&+ (177521533939/459128586438556159675937587200000)\,p_4^3p_6^2 + (19/1828915200)\,p_6p_4 + (1/3773952)\,p_4^2 + (1/10152345600)\,p_6^2\\
&+ (1/4385813299200000)\,p_6^3 + (439/44189785345486749696000)\,p_6p_4^4 + (1/3334621567647744000000)\,p_6^4 + (3319/90618212450304000)\,p_4p_6^2\\
&+ (1/9956126415126528000)\,p_4^5 + (1/318277449238764847104000)\,p_4^6 + (1435117/20718657652392417841643520000000)\,p_4p_6^4 + (1/30976598016)\,p_4^3\\
&+ (457/239661047808000)\,p_6p_4^2 + (90977/243620054201082270544035840000)\,p_6p_4^5 + (1/446063011430400)\,p_4^4\\
&+ (988682340047/54213903486664711334534710296576000000)\,p_4^4p_6^2 + (8753/134839900126052352000000)\,p_4p_6^3 + (12392663/2410545943576030740480000)\,p_4^2p_6^2\\
&+ (5296973051/716661094335418521618481152000000)\,p_4^2p_6^3 - 1.002030489 - 0.006855516642\,i,
\end{aligned}$$
$$\begin{aligned}
B ={}& -(1/9072)p_4 - (13/5702400)p_6 - (236632796940739/10026043918434109153784950161408000000)\,p_4^3p_6^2 - (2486759175821/7028067225793096425406464000000)\,p_4^2p_6^2\\
&- (1/421625917440)\,p_4^3 - (11/475517952)\,p_4^2 - (40739/279965526736896000)\,p_6p_4^2 - (37087/1930433266896076800000)\,p_6^3 - (13/13680285696)\,p_6p_4\\
&- (5156779219/1128670684727395269731785113600000000)\,p_4p_6^4 - (1823/190965620736000)\,p_6^2 - (4325727613/368406749739154248105984000)\,p_6p_4^3\\
&- (7793/343687319080843345920000000)\,p_6^4 - (161112491397043/344447339023340475593376902676480000000)\,p_4^2p_6^3 - (148283/901394200544004401012932608000)\,p_4^6\\
&- (2209/15343675467182899200)\,p_4^4 - (381387062317139/19026143537380452673224396017290444800000)\,p_6p_4^5 - (989/171235418213761155072000)\,p_4^5\\
&- (1132613/387392858225049600000)\,p_4p_6^2 - (296713258999/505041445082411775643531345920000)\,p_6p_4^4 - (1245868913/267076446300371011239936000000)\,p_4p_6^3\\
&+ 0.0002262949 + 0.0007608635402\,i.
\end{aligned}$$
Solving the above system $\{A=0,\ B=0\}$ yields
$$\{p_4 = 2.000011531 + 6.999999964\,i,\quad p_6 = 2.999424601 - 4.999998210\,i\},$$
which shows that the algorithm can handle the non-self-adjoint case, with complex eigenvalues. This has potential applications in control theory, to reconstruct operators with prescribed eigenvalues in the left half plane.
5. Conclusion

The fact that $p_2$ was the first to converge in the above iterations should not come as a surprise. Indeed, when the remainder is small, the triangular part becomes the principal part and $p_2$ is the first variable to be found by the Gaussian elimination. In future work the method will be extended to deal with higher order differential equations, which have important applications. For example, oscillations of a bridge can be modeled by a fourth order differential equation, and an inverse problem would tell us about its internal structure by nondestructive testing. Another area of application is partial differential equations that depend on a finite number of parameters. By measuring the solution at one point on the boundary, the above algorithm would help identify parameters without the need for the Dirichlet-to-Neumann map.

Acknowledgment

The author sincerely thanks the referees for their valuable comments.

References
[1] A.L. Andrew, Asymptotic correction of more Sturm–Liouville eigenvalue estimates, BIT 43 (2003) 485–503.
[2] A.L. Andrew, Numerov's method for inverse Sturm–Liouville problems, Inverse Problems 21 (2005) 223–238.
[3] A. Boumenir, The recovery of analytic potentials, Inverse Problems 15 (1999) 1405–1423.
[4] R. Burden, D. Faires, Numerical Analysis, seventh ed., Brooks Cole, 2000.
[5] G. Freiling, V. Yurko, Inverse Sturm–Liouville Problems and their Applications, Nova Science, 2001.
[6] M.G. Gasymov, B.M. Levitan, Determination of a differential equation from two of its spectra, Russian Math. Surveys 19 (2) (1964) 1–63.
[7] I.M. Gelfand, B.M. Levitan, On the determination of a differential equation from its spectral function, Amer. Math. Transl. 1 (2) (1951) 239–253.
[8] R.O. Hryniv, Y.V. Mykytyuk, Half-inverse spectral problems for Sturm–Liouville operators with singular potentials, Inverse Problems 20 (5) (2004) 1423–1444.
[9] A. Kirsh, An Introduction to the Mathematical Theory of Inverse Problems, Appl. Math. Sci., Springer, 1996.
[10] M.A. Krasnosel'skii, G.M. Vainikko, R.P. Zabreyko, Ya.B. Ruticki, V.Va. Stet'senko, Approximate Solution of Operator Equations, Wolters-Noordhoff Publishing, Groningen, 1972.
[11] B.M. Levitan, Inverse Sturm–Liouville Problems, VNU, Utrecht, 1987.
[12] V.A. Marchenko, Some questions in the theory of one dimensional linear differential operators of the second order. I, Amer. Math. Soc. Transl. 101 (2) (1973) 1–105.
[13] G. Miller, Generating functions recovery of analytic potentials, Int. J. Differ. Equ. Appl. 6 (1) (2002) 69–76.
[14] J. McLaughlin, Analytical methods for recovering coefficients in differential equations from spectral data, SIAM Rev. 28 (1986) 53–72.
[15] W. Rheinboldt, Methods for solving systems of nonlinear equations, CBMS-NSF Conf. Ser. Appl. Math. 14 (1987).
[16] A. Ronveaux, Heun's Differential Equations, Oxford University Press, 1995.
[17] W. Rundell, P.E. Sacks, Reconstruction techniques for classical Sturm–Liouville problems, Math. Comput. 58 (1992) 161–183.
[18] V.V. Ternovskii, M.M. Khapaev, Reconstruction of potentials in the Sturm–Liouville inverse problem by the variational method, Dokl. Math. 73 (2006) 107–111.
[19] V.A. Yurko, Inverse Spectral Problems for Differential Operators and their Applications, Gordon and Breach, Amsterdam, 2000.
[20] V.A. Yurko, Method of Spectral Mappings in the Inverse Problem Theory, Inverse and Ill-posed Problems Series, VSP, Utrecht, 2002.