Wiener-Hopf Factorization of the S Matrix
Roger G. Newton
Indiana University
Abstract The Schrödinger S matrix is regarded as a function from the real axis to the group of unitary operators L²(S) → L²(S), where S is the unit sphere in ℝ³. We pose a standard left factorization problem with respect to the real line. Such a Wiener-Hopf factorization is not identical with the kind of factorization that defines the Jost function and which has been found to be a useful tool for the solution of the inverse scattering problem. A number of results will be given that relate the two factorizations, their existence as well as the indices they give rise to. Some known results on the standard factorization lead to new results for the inverse scattering problem.
Subject classification: 47A40, 30E25, 35P25, 35R30, 45E10, 81F05
1 Introduction
As you know, the Schrödinger equation in three dimensions with a central potential, that is, with a potential that depends on |x| only, is separable. Its separation leads to infinitely many ordinary differential equations, the radial Schrödinger equations, one for each angular momentum l. An extremely useful tool for the study of these equations and of the inverse problem in this case is the Jost function, usually denoted by f_l(k), ℝ → ℂ; here k is the wave number, or the square root of the energy. Under very general conditions on the potential this function is the continuous boundary value of an analytic function that is holomorphic in ℂ⁺ and approaches unity as |k| → ∞. Furthermore, it has a finite number of simple zeros on the positive imaginary axis, at points iκ, if and only if −κ² is an eigenvalue of the radial Schrödinger equation of that particular angular momentum. The eigenvalue S_l(k) of the S matrix corresponding to that angular momentum can be factored as

    S_l(k) = \frac{f_l(-k)}{f_l(k)},

in which one factor is meromorphic in the upper half-plane and the other factor is holomorphic in the lower half-plane. Of course, one can also isolate the zeros and poles so that the remaining factors are holomorphic and free of zeros. One then has a standard Wiener-Hopf factorization of the symbol S_l, which satisfies |S_l| = 1. For complex-valued functions such a factorization is trivial and can be done explicitly by quadrature.

When the potential in the Schrödinger equation is not central, on the other hand, matters are more complicated. If the particle described by the Schrödinger equation has an intrinsic angular momentum, then the direction-dependence of the potential may be caused by its spin-dependence. In that case the differential equation may still be separable, but with coupling between different orbital angular momenta; the S matrix for a given total angular momentum (which is conserved) will then be a square matrix, and so will the Jost function. In that case the Wiener-Hopf factorization is no longer trivial. Jost and I studied it about thirty-five years ago [7, 9]; it was my first introduction to the inverse scattering problem.

In the most general case of a potential that depends on x ∈ ℝ³ the Schrödinger equation cannot be separated, and the S matrix is a function on ℝ with values in the group 𝒰 of unitary operators L²(S) → L²(S), where S is the unit sphere in ℝ³. Physically, each point on S stands for the asymptotic direction of the momentum of a particle. (If the Schrödinger equation is used for the description of waves other than quantum-mechanical ones, then each point of S stands for the direction of a wave vector.) In any event, the Wiener-Hopf factorization of this operator-valued symbol is still an important tool in the study of the Schrödinger equation, and particularly for the solution of the inverse scattering problem.
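To make the scalar remark above concrete, here is a small numerical sketch of such a factorization by quadrature for a toy unimodular symbol with zero winding number, so that the factorization is canonical. The particular phase shift, the grid, and all variable names are illustrative assumptions, not taken from this paper.

```python
import numpy as np

# Toy unimodular symbol S(k) = exp(2*i*delta(k)) with a smooth phase shift that
# vanishes at infinity and has zero winding number, so S = W_+ * W_- canonically.
# Plemelj: h_+(k) = g(k)/2 + (1/(2*pi*i)) PV \int g(t)/(t-k) dt,  with g = log S,
# and then W_+ = exp(h_+), W_- = exp(g - h_+).  For this particular delta the
# splitting is known exactly: g(k) = 1.3i*(1/(k-i) + 1/(k+i)), h_+(k) = 1.3i/(k+i).
N, L = 4000, 200.0
dk = 2 * L / N
k = -L + dk * (np.arange(N) + 0.5)          # uniform grid, no node at the origin

delta = 1.3 * k / (1.0 + k**2)              # smooth, odd, no bound states
g = 2j * delta                              # log S, single valued (winding number 0)

h_plus = np.empty(N, dtype=complex)
for n in range(N):                          # naive principal-value quadrature:
    t, gt = np.delete(k, n), np.delete(g, n)   # skip the singular node
    h_plus[n] = 0.5 * g[n] + dk / (2j * np.pi) * np.sum(gt / (t - k[n]))

h_plus_exact = 1.3j / (k + 1j)
print("max quadrature error in h_+ :", np.max(np.abs(h_plus - h_plus_exact)))

W_plus, W_minus = np.exp(h_plus), np.exp(g - h_plus)   # the two Wiener-Hopf factors
```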
2 Factorizations
Let us begin by defining a class of symbols in whose factorization we are interested.
Definition 2.1 S ∈ 𝒮′ if and only if S(k) = 1 − (k/2πi) A(k), where 1 is the unit operator and (i) the kernel A(k, θ, θ') that defines the operator family A(k), k ∈ ℝ, with values in the ring of bounded operators L²(S) → L²(S), is a continuous, uniformly bounded, differentiable function ℝ × S × S → ℂ; (ii) QÃQ = A [this is called reciprocity; the tilde here means the operator whose kernel is the transpose, and (Qf)(θ) = f(−θ)]; (iii) A(−k) = Ā(k), where Ā is the operator with the complex-conjugate kernel; (iv) S*S = SS* = 1 (unitarity; * denotes the adjoint); (v) ||S − 1|| ∈ L²(ℝ) [|| · || here is the operator norm].
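A short worked consequence of Definition 2.1, used again before Lemma 2.6 below (the scalar prefactor k/2πi appearing in S as written above is odd in k and purely imaginary, so it passes through the transpose and changes sign under both conjugation and k → −k):

    \tilde S = Q S Q \quad\text{(from (ii))}, \qquad \overline{S(k)} = S(-k) = S^{\#}(k) \quad\text{(from (iii))},

    \text{hence}\quad S^{*} = \overline{\tilde S} = Q\,\overline{S}\,Q = Q S^{\#} Q, \quad\text{and, by (iv),}\quad S^{-1} = S^{*} = Q S^{\#} Q .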
We shall later form a subclass by adding a sixth requirement, but for the moment we won't need that. If S is admissible as an S matrix of the Schrödinger equation with a potential that is in a specified class, then S ∈ 𝒮′. For example, the following class will do:
Definition 2.2 𝒱₀ = {V | V real, lim_{|x|→∞} V(x) = 0, and ∃ a, C, ε > 0 such that for all x ∈ ℝ³, |∇V(x)| < C(a + |x|)^{-4-ε}}.
This class is smaller than it needs to be, but it is easy to define.

A standard left Wiener-Hopf factorization of S with respect to the real line (also called proper) [2, 3, 5, 6] is a decomposition of the form S = W_+ D W_-, where

    D(k) = P_0 + \sum_{j \ge 1} \left( \frac{k-i}{k+i} \right)^{\rho_j} P_j,                (2.1)

P_j = P_j^*, tr P_j = 1 for j ≥ 1, P_0 + Σ_{j≥1} P_j = 1, P_j P_i = 0 if i ≠ j, W_± is holomorphic and invertible everywhere in ℂ^±, lim_{|k|→∞} ||W_± − 1|| = 0, and the partial indices ρ_j are nonzero integers.
If D = 1 the factorization is called canonical (or regular). Whereas the partial indices are uniquely determined by S, the factors W_± and D are not. (But if a canonical factorization exists, it is unique.) The sum of the partial indices is called the total index or the sum index. We shall call it the Wiener-Hopf index and denote it by

    ind_WH S := \rho = \sum_j \rho_j .
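A minimal illustration of these definitions (an artificial symbol, not claimed to arise from a potential): let P_1 be a rank-one orthogonal projection on L²(S), P_0 = 1 − P_1, and take

    S(k) = P_0 + \left( \frac{k-i}{k+i} \right)^{2} P_1 .

This S is already of the form (2.1), so S = W_+ D W_- with W_± ≡ 1 and D = S; there is a single partial index, ρ_1 = 2, and ind_WH S = 2.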
It was proved by Aktosun and van der Mee [1] that if the potential underlying a given S is in a specified class, say in 𝒱₀, and if zero is not an exceptional point of the Schrödinger equation, then S has a left standard factorization. They also showed that if S = Q S^{#-1} Q (which is the case if S ∈ 𝒮′) and S has a left standard factorization S = W_+ D W_-, then it is always possible to choose the factorization so that W_- = Q W_+^{#-1} Q, in other words,

    S = W_+ D\, Q W_+^{#-1} Q .                (2.2)

It follows that in that case DQ = QD, because D^# = D^{-1}. (Remember that W^#(k) := W(-k).) Note that in the standard factorization the poles are in fixed positions at ±i and of a standard form.

The Jost-function factorization, on the other hand, which is needed for the solution of the inverse scattering problem, is of a different kind. Here the factors are required to be meromorphic with simple poles at specified positions (on the imaginary axis) that are not standard, and furthermore the residues are to be operators that have specified finite-dimensional ranges. (This corresponds to the specification of the angular momentum in the case of central potentials.) These data are collected in the following set.
Definition 2.3 The set ℬ consists of all finite sets σ of p_σ pairs, each consisting of a positive number κ_m and a δ_m-dimensional subspace ℋ_m of L²(S) (δ_m < ∞). The set {δ_m} will be denoted by N_σ and their sum by n_σ; the set {κ_m} will be called P_σ.
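Purely as bookkeeping, the data σ ∈ ℬ of Definition 2.3 can be pictured as follows; the class and function names are ours, and each subspace ℋ_m is recorded only through a chosen basis of functions on the sphere (a sketch, not an implementation of anything in the paper).

```python
from dataclasses import dataclass
from typing import Callable, List

SphereFn = Callable[[float, float], complex]   # a function of a point on the unit sphere

@dataclass
class PoleDatum:
    """One pair (kappa_m, H_m): a positive number and a finite-dimensional subspace."""
    kappa: float            # kappa_m > 0, the prescribed pole position i*kappa_m
    basis: List[SphereFn]   # functions spanning H_m inside L^2(S)

    @property
    def delta(self) -> int: # delta_m = dim H_m
        return len(self.basis)

def P_sigma(sigma: List[PoleDatum]) -> List[float]:
    return [d.kappa for d in sigma]          # the set P_sigma of pole positions

def N_sigma(sigma: List[PoleDatum]) -> List[int]:
    return [d.delta for d in sigma]          # the set N_sigma of dimensions

def n_sigma(sigma: List[PoleDatum]) -> int:
    return sum(N_sigma(sigma))               # n_sigma, the total "number of poles"
```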
We also define a class of relevant functions:

Definition 2.4 ℳ⁺ is the set of operator-valued [L²(S) → L²(S)] functions on ℝ, in L²(ℝ), that are boundary values of analytic functions meromorphic in ℂ⁺ and whose operator norm approaches zero at infinity there. Similarly, 𝒩⁺ is the set of functions in ℳ⁺ that are holomorphic in ℂ⁺.

One then poses a Riemann-Hilbert problem with operator-valued solutions:
Problem W'_σ(S) Let S ∈ 𝒮′ and σ ∈ ℬ be given. Find a function F on ℝ such that (i) F − 1 ∈ ℳ⁺, with simple poles at the points iκ_m, κ_m ∈ P_σ, and residues there whose ranges equal ℋ_m; (ii) on ℝ, F satisfies the equation

    F^# = Q S^# F Q .                (2.3)
Here F^#(k) := F(-k). If this problem has a solution that is invertible, with an inverse that is holomorphic in ℂ⁺, then this inverse is the Jost function, and we have a factorization of the form

    S = Q F Q\, F^{#-1} .                (2.4)
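Indeed, (2.4) follows from (2.3) by algebra alone: since Q² = 1, multiplying (2.3) by Q on both sides gives Q F^# Q = S^# F, and replacing k by −k in the resulting identity yields

    S^{\#} = Q F^{\#} Q\, F^{-1} \qquad\Longrightarrow\qquad S = Q F Q\, F^{\#-1} .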
This is very similar to (2.2), except that the prescribed poles are in the factor function F itself and their form is more specifically given. If the set σ is empty, we shall denote the corresponding problem by W'_0. The following proposition allows us to form an equivalence class of problems for a given symbol S:

Proposition 2.5 Suppose that the problem W'_σ(S) has a holomorphically invertible solution. Then (i) a necessary condition for another problem W'_τ(S) to have a holomorphically invertible solution is that n_τ = n_σ; (ii) for every choice of P_τ with p_τ ≤ n_σ there exist sets τ ∈ ℬ with n_τ = n_σ such that W'_τ(S) also has a holomorphically invertible solution.
In other words, the pole positions in σ can be shifted at will without changing the solvability, and so can, to a certain extent, the ranges of the residues. The latter, however, cannot be changed completely freely. (This corresponds to the well-known fact that for a given S matrix belonging to an unknown central potential the bound-state zeros of the Jost function of a given l-value can be shifted freely, but they cannot be shifted from one angular momentum to another.) The sum of the dimensions of the ranges of the residues, on the other hand, is fixed. In a certain sense this is the "total number of poles" that the solution has. We may therefore define an index, which we shall call the Jost index, a non-negative integer, as follows: Suppose that W'_σ(S) has a holomorphically invertible solution F. Then ind_J S := n_σ. In other words, ind_J S is the sum of the dimensions of the ranges of the residues of F at all its (simple) poles in ℂ⁺.

The case without poles is easy to treat. The following lemma, which was first proved in [1], relates the canonical factorization of S to the problem W'_0(S) [which is W'_σ with n_σ = 0]. Note that if S ∈ 𝒮′ then S^{-1} = Q S^# Q.

Lemma 2.6 Suppose that S^{-1} = Q S^# Q. Then S has a left canonical factorization S = W_+ W_- if and only if W'_0(S) has a solution F that is holomorphically invertible. We then have F = Q W_+ Q, W_- = Q W_+^{#-1} Q, and F is the unique solution of W'_0(S).
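To check the 'if' direction of the last statement explicitly, one may verify (2.3) for F := Q W_+ Q directly: with S = W_+ W_-, W_- = Q W_+^{#-1} Q (so that W_-^# = Q W_+^{-1} Q), and Q² = 1,

    Q S^{\#} F Q = Q\, W_+^{\#} W_-^{\#}\, Q\, W_+ = Q\, W_+^{\#}\, Q\; W_+^{-1} W_+ = Q\, W_+^{\#}\, Q = F^{\#} .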
The general case is more complicated. First let us note that the existence of a holomorphically invertible solution of W'_σ implies that the solution of W'_σ is unique, and vice versa.
Lemma 2.7 W'_σ(S) has a holomorphically invertible solution F if and only if F is the only solution of W'_σ(S).

Suppose that a given S has a left standard factorization. The first step in deciding the solvability of W'_σ(S) is to reduce it to a similar problem for the diagonal factor, which is a rational function.
Lemma 2.8 Suppose that S = Q S^{#-1} Q has the left standard factorization S = W_+ D Q W_+^{#-1} Q; then the following holds: If W'_σ(S) has a unique solution, then there exists a set σ' ∈ ℬ with P_{σ'} = P_σ and N_{σ'} = N_σ such that W'_{σ'}(D) has a unique solution, and vice versa: if W'_σ(D) has a unique solution, then there exists σ' ∈ ℬ with P_{σ'} = P_σ and N_{σ'} = N_σ such that W'_{σ'}(S) has a unique solution. The same holds if "unique" is everywhere replaced by "holomorphically invertible."
The problem W'_σ(D) for the rational function D is easier to solve than that for the function S. The following states its solvability.

Lemma 2.9 Suppose that

    D(k) = P_0 + \sum_{j \ge 1} \left( \frac{k-i}{k+i} \right)^{\rho_j} P_j,

P_j = P_j^*, tr P_j = 1 for j ≥ 1, P_0 + Σ_{j≥1} P_j = 1, P_j P_i = 0 if i ≠ j, all the ρ_j are integers, and QD = DQ. A necessary condition for the problem W'_σ(D) to have a holomorphically invertible solution is that 1) each ρ_j is even and positive, and 2) n_σ = ½ Σ_j ρ_j. Conversely, if each ρ_j is even and positive, then there exists a set σ ∈ ℬ with n_σ = ½ Σ_j ρ_j such that the problem W'_σ(D) has a holomorphically invertible solution.
The combination of Lemmas 2.8 and 2.9 allows us to conclude the following theorem.

Theorem 2.10 Suppose that S = Q S^{#-1} Q has a left standard factorization. Then a necessary condition for the problem W'_σ(S) to have a holomorphically invertible solution is that 1) either S has no left partial indices, or each left partial index of S is even and positive, and 2) n_σ = ½ ind_WH S. Conversely, if 1) holds, then there exists a set σ ∈ ℬ with n_σ = ½ ind_WH S and arbitrarily prescribed P_σ such that the problem W'_σ(S) has a holomorphically invertible solution.
It should be noted that the set σ for which W'_σ(S) has a holomorphically invertible solution is not entirely freely at our disposal once a left standard factorization is given. That is why these results
always speak of the existence of a set σ ∈ ℬ such that W'_σ has a holomorphically invertible solution. They do not say that such a solution exists for all σ ∈ ℬ with n_σ = ½ ind_WH S.

Now that we have connections between the two kinds of factorization, the question of course is: are they of any help in actually implementing the solution of W'_σ(S) and determining whether the solution found is holomorphically invertible? The answer is that, in fact, known results on the standard factorization have led to new results on the solution of the Riemann-Hilbert problem. I will first have to tell you a bit about how to find solutions of a problem like W'_σ(S). As a first step, let us define the Fourier transform of S − 1 and somewhat reduce the size of the class 𝒮′ by adding another requirement.
Definition 2.11 S ∈ 𝒮 if and only if S ∈ 𝒮′ and, in addition, (vi) the operators 𝒢 and 𝒢^# defined by (2.5), (2.6), and (2.7) in terms of A are compact.

For (vi) we need the following functions: G(α, θ, θ'), defined by (2.5) as the Fourier transform of A(k, θ, θ') with respect to k, and

    𝒢(α, θ; β, θ') := G(α + β, θ, θ'),        α, β ∈ ℝ⁺, θ, θ' ∈ S,        (2.6)
    𝒢^#(α, θ; β, θ') := G(-α - β, θ, θ'),      α, β ∈ ℝ⁺, θ, θ' ∈ S,        (2.7)
    ℋ(α, θ; β, θ') := G(α - β, θ, θ'),         α, β ∈ ℝ⁺, θ, θ' ∈ S.        (2.8)

These integral kernels define the operators 𝒢, 𝒢^#, and ℋ; the first two are self-adjoint, and it follows from the unitarity of S that ||𝒢|| ≤ 1 and ||𝒢^#|| ≤ 1. If S is admissible with a potential V ∈ 𝒱₀ as defined earlier, then S ∈ 𝒮.
We implement the solution of the problem W'_σ(S) by Fourier transformation, as follows [8]. Define n_σ functions ℝ⁺ × S → ℂ,

    y_b^m(α, θ) = Y_b^m(-θ)\, e^{-α κ_m},                (2.9)
where the functions Y_b^m, b = 1, ..., δ_m, span the space ℋ_m and κ_m ∈ P_σ. Let the functions z_j^± span the nullspaces of (1 ± 𝒢^#), respectively. Then define the matrices s^± by (2.10) and the column matrices c^±(θ) by

    c_j^±(θ) := (z_j^±, 𝒢_±)_+(θ).

Here ( · , · )_+ is the inner product on L²(ℝ⁺ × S), and 𝒢^#(α, θ', θ) := G(-α, θ', θ) is to be regarded as a family of vectors in L²(ℝ⁺ × S) parametrized by θ ∈ S; 𝒢_± := 𝒢^#(1 ± Q). The generalized Marchenko equations are then Fredholm equations of the second kind on ℝ⁺ × S [equations (2.11)], in which Γ = Γ_+ + Γ_− is related to F(k) by (2.12),

    Γ_±(α, θ, θ') := \sum_{m,b} y_b^m(α, θ)\, p_b^{±m}(θ'),

and the functions p_b^{±m} are to be determined by the set of linear algebraic equations

    c_j^±(θ) = \sum_{m,b} s_{j,mb}^{±}\, p_b^{±m}(θ).                (2.13)
A part of the following theorem was proved in [8] and the rest of
the proof will be given elsewhere:
Theorem 2.12 The problem W'_σ(S) has a unique solution if and only if the following three conditions hold: (i) the operator 𝒢² does not have the eigenvalue 1, (ii) dim nul(1 + 𝒢^#) = dim nul(1 − 𝒢^#) = n_σ, and (iii) the matrices s^± of (2.10) are invertible.
The solution F is then obtained from the solution Γ = Γ_+ + Γ_− of (2.11) by means of (2.12), where p_b^m := p_b^{+m} + p_b^{-m}.
Note that if S is given, then 𝒢 and 𝒢^# are given, and hence so are the nullspaces of 1 ± 𝒢^#. Therefore the number n_σ = ind_J S of the problem W'_σ(S) that has a unique solution, if it exists, can be determined directly from S.
3 Admissible S Matrices
We have, so far, made no assumptions concerning the admissibility of the given symbol S; that is, we have not assumed that there exists an underlying potential. If a potential in 𝒱₀, say, exists and it causes N_b bound states of negative energy (let us assume that zero is not an exceptional point), then this number can be recognized from S by means of the generalized Levinson theorem. That leads to the definition of a third kind of index, in terms of the total phase change of the Fredholm determinant of S. Since that Fredholm determinant generally does not approach unity as k → ∞, even though ||S − 1|| approaches zero, we have to proceed cautiously.
Definition 3.1 The function S(k): ℝ → 𝒰 [the set of unitary operators L²(S) → L²(S)] is in 𝒰_c if and only if it has the following properties: (i) S(k) is continuous; (ii) S(-k) = S̄(k); (iii) lim_{k→0} ||S(k) − 1|| = 0; (iv) for each k ∈ ℝ the Fredholm determinant det S exists; (v) ∃ c₁, c₂ such that as k → ∞

    η(k) − c₁ k − c₂ → 0,
where η(k) := ½ arg det S(k) is defined to be continuous, det S(k) = e^{2iη(k)}.

The eigenvalues e^{2iδ_n} of the unitary operator S define the eigenphase shifts δ_n. Since lim_{|k|→∞} ||S − 1|| = 0, each eigenphase shift can be defined so as to approach zero as |k| → ∞. The phase η is related to the eigenphase shifts by η(k) = Σ_n δ_n(k), but the convergence of the series is not uniform in k: even though δ_n(∞) = 0 for each n, their sum grows linearly as k → ∞. If the potential V ∈ 𝒱₀ then S ∈ 𝒰_c, c₂ is an integral multiple of π, and it is always permissible to choose c₂ = 0 [8], which we shall do. Item (iii) implies that there are no half-bound states. We now define the Levinson index of S by

    ind_L S := (1/π)\, η(0) .
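As a numerical illustration of this index, here is a sketch for a scalar stand-in of det S with a single bound state; the phase-shift model, grid, and names are assumptions for the example only (for the operator-valued case one would use the Fredholm determinant and first remove the linear growth c₁k).

```python
import numpy as np

# Scalar stand-in for det S: one bound state gives delta(k) = 2*arctan(kappa/k),
# so eta(0) - eta(infinity) = pi and the Levinson index should come out as 1.
kappa = 1.0
k = np.linspace(1e-4, 200.0, 200_000)
det_S = np.exp(4j * np.arctan(kappa / k))      # det S = e^{2 i eta(k)} in this toy

eta = 0.5 * np.unwrap(np.angle(det_S))         # a continuous branch of (1/2) arg det S
eta -= eta[-1]                                 # normalize so that eta -> 0 at large k
ind_L = eta[0] / np.pi                         # the Levinson index (1/pi) * eta(0)
print("ind_L ≈", ind_L, "->", round(ind_L))    # prints a value close to 1
```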
A three-dimensional generalization of Levinson's theorem [8] can then be stated in the following form:
Generalized Levinson's Theorem: If S is the S matrix of a potential V ∈ 𝒱₀ that produces N_b bound states (counting their degeneracies) and there is no half-bound state, then ind_L S = N_b.

Thus if S is admissible, then ind_L S is a nonnegative integer. It is related to the Wiener-Hopf index by the following result.
Lemma 3.2 If S = Q S^{#-1} Q, S ∈ 𝒰_c, and S has a left standard factorization, then ind_L S = ½ ind_WH S.

The proof of this would be trivial if there were assurance that the Fredholm determinants of the Wiener-Hopf factors exist and are in 𝒰_c. This, however, is not known, and we have to use a formula due to Gohberg and Leiterer [4]. I won't give the proof here.

Now, if we are given an S matrix that is admissible, with a potential that leads to N_b bound states, then we seek a factorization with N_b poles; in other words, we pose W'_σ(S) with n_σ = N_b. By Theorem 2.12 we then need N_b = dim nul(1 + 𝒢^#) = dim nul(1 − 𝒢^#). The following lemma assures us that this requirement is, in fact, satisfied:
Lemma 3.3 If S is admissible and the underlying potential causes N_b bound states of negative energy (counting their multiplicities), then dim nul(1 + 𝒢^#) = dim nul(1 − 𝒢^#) = N_b.
The main questions that remain are whether the finite-dimensional matrices s^± are invertible, and whether the number 1 is in the spectrum of 𝒢². (If it is not, then ||𝒢|| < 1.) If for every admissible S matrix the answer to the first question is yes and that to the second is no, then W'_σ(S) has a unique, and thus holomorphically invertible, solution, and hence the Jost function exists for all admissible S. At the same time that would solve one of the problems I posed at the meeting here four years ago: the three-dimensional generalization of the "regular solution" of the Schrödinger equation would always exist, and so would a solution of the quasi-Goursat problem I posed at that time. The answer to the second question is now known, and a known theorem on the standard Wiener-Hopf factorization, as well as Lemma 3.2, are instrumental in its proof.

Theorem 3.4 If S is admissible with a potential V ∈ 𝒱₀ and no exceptional point at k = 0, then ||𝒢|| < 1.

Proof Now that we have all the tools necessary, the proof of this theorem is quite simple. The following formula is an immediate consequence of Theorem 1.1 of [6], page 165, and formulas (2.26) and (2.28) of [2]:

    dim nul(1 − 𝒢^{#2}) − dim nul(1 − 𝒢²) = ind_WH S.                (3.1)

Using Lemma 3.3 we get

    dim nul(1 − 𝒢²) = dim nul(1 − 𝒢^{#2}) − ind_WH S = 2N_b − ind_WH S.

Therefore the desired result follows from the generalized Levinson theorem together with Lemma 3.2. □

After the end of this conference I succeeded in answering the first question: if S is admissible then the matrices s^± are invertible. The proof of this will be found in [10], as will the remaining proofs and details.
Acknowledgement I am indebted to Professor C. van der Mee for a translation of [4]. This work was supported in part by a grant from the National Science Foundation.
Bibliography

[1] T. Aktosun and C. van der Mee, Solution of the inverse scattering problem for the 3-D Schrödinger equation by Wiener-Hopf factorization of the scattering operator, J. Math. Phys., to be published.
[2] K. Clancey and I. C. Gohberg, Factorization of Matrix Functions and Singular Integral Operators, Birkhäuser, Basel, 1981.
[3] I. C. Gohberg and M. A. Kaashoek, Constructive Methods in Wiener-Hopf Factorization, Birkhäuser Verlag, Basel, 1986.
[4] I. C. Gohberg and J. Leiterer, Factorization of operator functions with respect to a contour. III. Factorization in algebras, Math. Nachr., 1973, 55:33-61, in Russian.
[5] I. C. Gohberg, The factorization problem for operator functions, Izv. Akad. Nauk SSSR Ser. Mat., 1964, 28:1055-1082. English translation: American Math. Soc. Transl., 1966, 49:130-161.

[6] W. Greenberg, C. van der Mee, and V. Protopopescu, Boundary Value Problems in Abstract Kinetic Theory, Birkhäuser Verlag, Basel, 1987.

[7] R. G. Newton, Connection between the S-matrix and the tensor force, Phys. Rev., 1955, 100:412-428.

[8] R. G. Newton, Inverse Schrödinger Scattering in Three Dimensions, Springer Verlag, New York, 1989.

[9] R. G. Newton and R. Jost, The construction of potentials from the S-matrix for systems of differential equations, Nuovo Cimento, 1955, 1:590-622.
[10] R. G. Newton, Factorizations of the S Matrix, J. Math. Phys., to be published.