Journal of Functional Analysis ••• (••••) •••–•••
www.elsevier.com/locate/jfa
On finite rank Hankel operators

D.R. Yafaev
IRMAR, Université de Rennes I, Campus de Beaulieu, 35042 Rennes Cedex, France
Article history: Received 9 April 2013; accepted 5 December 2014; available online xxxx. Communicated by D. Voiculescu.

MSC: 47A40; 47B25

Keywords: the sign-function; necessary and sufficient conditions for sign-definiteness; total multiplicity of the positive and negative spectra; the Carleman operator and its perturbations

Abstract. For self-adjoint Hankel operators of finite rank, we find an explicit formula for the total multiplicity of their negative and positive spectra. We also show that very strong perturbations, for example, a perturbation by the Carleman operator, do not change the total number of negative eigenvalues of finite rank Hankel operators. As a by-product of our considerations, we obtain an explicit description of the group of unitary automorphisms of all bounded Hankel operators. © 2014 Published by Elsevier Inc.
E-mail address: [email protected]. http://dx.doi.org/10.1016/j.jfa.2014.12.005

1. Introduction. Main results

1.1. Hankel operators can be defined as integral operators

(Hf)(t) = ∫₀^∞ h(t + s) f(s) ds   (1.1)
in the space L²(R₊) with kernels h that depend on the sum of variables only. Of course H is symmetric if h(t) = h̄(t). Integral kernels of finite rank Hankel operators H are given (this is the Kronecker theorem – see, e.g., Sections 1.3 and 1.8 of the book [6]) by the formula

h(t) = Σ_{m=1}^{M} Pm(t) e^{−αm t}   (1.2)
where Re αm > 0 and Pm(t) are polynomials of degree Km. If H is self-adjoint, then necessarily the sum in (1.2) contains both exponentials e^{−αm t} and e^{−ᾱm t}. Let Im αm = 0 for m = 1, . . . , M0, and let Im αm > 0 and α_{M1+m} = ᾱm for m = M0 + 1, . . . , M0 + M1. Thus M = M0 + 2M1; of course the cases M0 = 0 or M1 = 0 are not excluded. The condition h(t) = h̄(t) requires also that Pm(t) = P̄m(t) for m = 1, . . . , M0 and P_{M1+m}(t) = P̄m(t) for m = M0 + 1, . . . , M0 + M1. As is well known and as we shall see below,

rank H = Σ_{m=1}^{M} Km + M =: r.
For m = 1, . . . , M0, we set

pm = Pm^{(Km)},   (1.3)

that is, pm/Km! is the coefficient at t^{Km} in the polynomial Pm(t), and

N+^{(m)} = N−^{(m)} = (Km + 1)/2    if Km is odd,
N+^{(m)} − 1 = N−^{(m)} = Km/2     if Km is even and pm > 0,   (1.4)
N+^{(m)} = N−^{(m)} − 1 = Km/2     if Km is even and pm < 0.
For a self-adjoint operator A, we denote by N+(A) (by N−(A)) the total multiplicity of its strictly positive (negative) spectrum. Our main result is formulated as follows.

Theorem 1.1. Let H be a self-adjoint Hankel operator of finite rank with kernel h(t) given by formula (1.2) where Pm(t) are polynomials of degree Km, and let the numbers N±^{(m)} be defined by formula (1.4). Then the total numbers N±(H) of (strictly) positive and negative eigenvalues of the operator H are given by the formula

N±(H) = Σ_{m=1}^{M0} N±^{(m)} + Σ_{m=M0+1}^{M0+M1} Km + M1 =: N±.   (1.5)
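Theorem 1.1 is easy to probe numerically. The following sketch (an illustration added here; the grid parameters are ad hoc) discretizes the integral operator (1.1) by a midpoint rule and counts eigenvalues of each sign. For h(t) = e^{−t} (M0 = 1, K1 = 0, p1 = 1 > 0) formula (1.5) predicts one positive and no negative eigenvalue; for h(t) = t e^{−t} (K1 = 1 odd) it predicts one eigenvalue of each sign.

```python
import numpy as np

def signed_counts(h, T=30.0, n=400, tol=1e-8):
    """Counts of positive and negative eigenvalues of the Hankel operator
    (Hf)(t) = int_0^oo h(t+s) f(s) ds, discretized on [0, T] by the
    midpoint rule; the matrix is symmetric since h depends on t + s only."""
    w = T / n
    x = (np.arange(n) + 0.5) * w
    A = w * h(x[:, None] + x[None, :])
    e = np.linalg.eigvalsh(A)
    return int((e > tol).sum()), int((e < -tol).sum())

print(signed_counts(lambda t: np.exp(-t)))      # kernel e^{-t}
print(signed_counts(lambda t: t * np.exp(-t)))  # kernel t e^{-t}
```

For h(t) = t e^{−t} the discretized matrix is exactly of rank 2, since (t + s)e^{−(t+s)} = t e^{−t}·e^{−s} + e^{−t}·s e^{−s}, so the counts (1, 1) are obtained exactly, in agreement with (1.4) and (1.5).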
Formula (1.5) shows that every pair of complex conjugate terms

Pm(t) e^{−αm t} + P̄m(t) e^{−ᾱm t},   m = M0 + 1, . . . , M0 + M1,   (1.6)
in representation (1.2) of h(t) yields Km + 1 positive and Km + 1 negative eigenvalues. In view of (1.4) the contribution of every real term Pm(t) e^{−αm t} also consists of the equal numbers (Km + 1)/2 of positive and negative eigenvalues if the degree Km of Pm(t) is odd. If Km is even, then there is one extra positive (negative) eigenvalue if Pm^{(Km)} > 0 (Pm^{(Km)} < 0). In particular, in the question considered, there is no "interference" between different real terms Pm(t) e^{−αm t}, m = 1, . . . , M0, and pairs (1.6).

According to (1.5) the operator H cannot be sign-definite if M1 > 0. Moreover, according to (1.4) the operator H cannot be sign-definite if Km > 0 for at least one m = 1, . . . , M0. Therefore we have the following result.

Corollary 1.2. A Hankel operator H of finite rank in the space L²(R₊) is positive¹ (negative) if and only if its kernel is given by the formula
h(t) = Σ_{m=1}^{M0} pm e^{−αm t}
where αm > 0 and pm > 0 (pm < 0) for all m.

Let us recall the paper [4] by A.V. Megretskii, V.V. Peller, and S.R. Treil. In the particular case of finite rank self-adjoint operators, it follows from the results of [4] that the spectra of Hankel operators are characterized by the condition that the multiplicities of eigenvalues λ and −λ do not differ by more than 1. Compared to Theorem 1.1, this result is of a completely different nature, although both results mean that there is a certain balance between the positive and negative parts of the spectrum.

1.2. The result of Theorem 1.1 turns out to be stable under a large class of perturbations of finite rank Hankel operators. As an example, we consider the sum H = H0 + V of the Carleman operator H0, that is, of the Hankel operator with kernel h0(t) = t⁻¹, and of a finite rank Hankel operator V. Recall that the Carleman operator has the absolutely continuous spectrum [0, π] of multiplicity 2. We obtain the following result.

Theorem 1.3. Let H0 be the Hankel operator with kernel h0(t) = t⁻¹. If V is a self-adjoint Hankel operator of finite rank and H = H0 + V, then N−(H) = N−(V). In particular, H ≥ 0 if and only if V ≥ 0.

The inequality N−(H) ≤ N−(V) is of course obvious because H0 ≥ 0. On the contrary, the opposite inequality N−(H) ≥ N−(V) looks surprising because the Carleman operator is "much stronger" than V; it is not even compact. Nevertheless adding it does not change the total number of negative eigenvalues.

¹ We often use the term "positive" instead of a more precise but lengthy term "non-negative".

It is natural to compare (this point of view goes back to J.S. Howland [2]) Hankel operators H with "perturbed" kernels h(t) = t⁻¹ + v(t) to Schrödinger operators D² + V(x) in the space L²(R). Hereby the Carleman operator plays the role of the "free" operator D². The assumption that v(t) decays sufficiently rapidly as t → ∞ and is not too singular as t → 0 corresponds to a sufficiently rapid decay of the potential V(x) as |x| → ∞. As shown in [9], the results on the discrete spectrum of the operator H lying above its essential spectrum [0, π] are close in spirit to the results on the discrete (negative) spectrum of the Schrödinger operator D² + V(x). On the contrary, according to Theorem 1.3 the results on the negative spectrum of Hankel operators are drastically different from those for Schrödinger operators.

1.3. Our proofs of Theorems 1.1 and 1.3 rely on the approach suggested in [10]. It is shown in [10] that a Hankel operator H has the same numbers of negative and positive eigenvalues as an operator S of multiplication by some function s(x), that is,

N±(H) = N±(S).   (1.7)
In particular, ±H ≥ 0 if and only if ±S ≥ 0. Therefore we use the term "sign-function" for s(x). It admits an explicit expression in terms of the kernel h(t) of H, and the operators H and S are linked by the equation

H = L*SL   (1.8)
where L is the Laplace transform. Since L is invertible, Eq. (1.8) implies (1.7). In general, the sign-function is a distribution so that S need not be defined as an operator. Therefore we work with the quadratic forms (Hf, f) and (Su, u), which is both more general and more convenient.

For finite rank Hankel operators, s(x) is a linear combination of delta functions and their derivatives. Therefore number (1.7) equals N±(𝐒) for some Hermitian matrix 𝐒 (the sign-matrix of the operator H) constructed in terms of s(x). It turns out that the matrix 𝐒 has a very special structure. Actually, it consists of blocks Sm corresponding to the real terms Pm(t) e^{−αm t}, m = 1, . . . , M0, and to the pairs (1.6). For m = 1, . . . , M0, the matrix Sm has dimension Km + 1 and is skew triangular. Its skew diagonal elements have the same sign as pm. We deduce from these facts that the total numbers of its positive N+^{(m)} and negative N−^{(m)} eigenvalues are given by formulas (1.4). For m = M0 + 1, . . . , M0 + M1, the matrix Sm is, in its turn, a two-by-two antidiagonal block matrix with blocks being matrices of dimension Km + 1. It is easy to see that such matrices have Km + 1 positive and Km + 1 negative eigenvalues. Summing N±(Sm) over all m = 1, . . . , M0 + M1, we find N±(𝐒), which yields formula (1.5).

As far as Theorem 1.3 is concerned, we note that the sign-function of the Carleman operator equals 1. Its support is essentially disjoint from the supports of the sign-functions
of finite rank Hankel operators V . Very loosely speaking, it means that the operators H0 and V “live in orthogonal subspaces”, and hence the positive operator H0 does not affect the negative spectrum of H = H0 + V . 1.4. Let us briefly describe the structure of the paper. We collect necessary results of [10] in Section 2. Proofs of Theorems 1.1 and 1.3 are given in Section 3. Hankel operators can be standardly realized not only in L2 (R+ ) but also in the Hardy spaces H2+ (R), H2+ (T) and in the space of sequences l2 (Z+ ). The interrelations between different representations are discussed in the auxiliary Section 4. This information is used in Section 5 to reformulate Theorems 1.1 and 1.3 in the spaces H2+ (R), H2+ (T) and l2 (Z+ ). Finally, in Appendix A we describe the group of unitary automorphisms of the set of bounded Hankel operators in all these spaces as well as in the space L2 (R+ ). Let us introduce some standard notation. Let T be the unit circle in the complex plane, and let Z+ be the set of all nonnegative integers. We denote by Φ, −1/2
∞
(Φu)(ξ) = (2π)
u(x)e−ixξ dx,
−∞
the Fourier transform. The space Z = Z(R) of test functions is defined as the subset of the Schwartz space S = S(R) which consists of functions ϕ admitting the analytic continuation to entire functions in the complex plane C and satisfying, for all z ∈ C, bounds
ϕ(z) ≤ Cn 1 + |z| −n er| Im z| for some r = r(ϕ) > 0 and all n. We recall that the Fourier transform Φ : Z → C0∞ (R) and Φ∗ : C0∞ (R) → Z. The dual classes of distributions (continuous antilinear functionals) are denoted S , C0∞ (R) and Z , respectively. We use the notation ·, · and ·, · for the duality symbols in L2 (R+ ) and L2 (R), respectively. They are linear in the first argument and antilinear in the second argument. The Dirac function is standardly denoted δ(·); δn,m is the Kronecker symbol, i.e., δn,n = 1 and δn,m = 0 if n = m. The letter C (sometimes with indices) denotes various positive constants whose precise values are inessential. 2. The sign-function Here we briefly discuss necessary results of [10] adapting them to the case of bounded Hankel operators. 2.1. Let us consider a Hankel operator H defined by equality (1.1) in the space L (R+ ). Actually, it is more convenient to work with sesquilinear forms instead of operators. Let us introduce the Laplace convolution 2
(f̄1 ⋆ f2)(t) = ∫₀^t f̄1(s) f2(t − s) ds   (2.1)

of the functions f̄1 and f2. Then

(Hf1, f2) = ⟨h, f̄1 ⋆ f2⟩ =: h[f1, f2]   (2.2)

where we write ⟨·, ·⟩ instead of (·, ·) because h may be a distribution. We consider form (2.2) on elements f1, f2 ∈ D where D is defined as follows. Put
(Uf)(x) = e^{x/2} f(e^x).

Then U : L²(R₊) → L²(R) is a unitary operator. The set D consists of functions f(t) such that Uf ∈ Z. Since f(t) = t^{−1/2}(Uf)(ln t) and Z ⊂ S, we see that functions f ∈ D and their derivatives satisfy the estimates

|f^{(m)}(t)| ≤ C_{n,m} t^{−1/2−m} (1 + |ln t|)^{−n}

for all n and m. Of course, D is dense in the space L²(R₊). It is shown in [10] that if f1, f2 ∈ D, then the function

Ω(x) = (f̄1 ⋆ f2)(e^x)

belongs to the set Z. With respect to h, we assume that the distribution

θ(x) = e^x h(e^x)   (2.3)

is an element of the space Z′. The set of all such h will be denoted Z′₊, that is,

h ∈ Z′₊ ⟺ θ ∈ Z′.
It is shown in [10] that this condition is satisfied for all bounded Hankel operators H. Since Ω ∈ Z, the form

⟨h, f̄1 ⋆ f2⟩ = ∫₀^∞ h(t)(f1 ⋆ f̄2)(t) dt = ∫_{−∞}^{∞} θ(x)Ω̄(x) dx =: ⟨⟨θ, Ω⟩⟩

is correctly defined.
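As a simple illustration of (2.2) (an example added here for concreteness; f1 and f2 are chosen real, so conjugations play no role), take h(t) = e^{−t} and f1(t) = f2(t) = e^{−t}. Then

```latex
(\bar f_1 \star f_2)(t) = \int_0^t e^{-s} e^{-(t-s)}\, ds = t e^{-t},
\qquad
h[f_1, f_2] = \int_0^\infty e^{-t}\cdot t e^{-t}\, dt = \tfrac14,
```

which agrees with the direct computation: (Hf1)(t) = ∫₀^∞ e^{−(t+s)} e^{−s} ds = e^{−t}/2, so (Hf1, f2) = ∫₀^∞ e^{−2t}/2 dt = 1/4.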
Note that h ∈ Z′₊ if h ∈ L¹_loc(R₊) and the integral

∫₀^∞ |h(t)| (1 + |ln t|)^{−κ} dt < ∞

converges for some κ. In this case the corresponding function (2.3) satisfies the condition

∫_{−∞}^{∞} |θ(x)| (1 + |x|)^{−κ} dx < ∞,

and hence θ ∈ S′ ⊂ Z′.

2.2. Let us now give the definition of the sign-function of a Hankel operator H or of its kernel h(t). Set

b(ξ) = (2π)^{−1} (∫₀^∞ h(t) t^{−iξ} dt) / (∫₀^∞ e^{−t} t^{−iξ} dt).   (2.4)
Of course b(−ξ) = b̄(ξ) if h(t) = h̄(t). We call b(ξ) the b-function of a Hankel operator H (or of its kernel h(t)), and we use the term sign-function for the Fourier transform s(x) = √(2π)(Φ*b)(x) of b(ξ). Let the function θ(x) be defined by formula (2.3). If h ∈ Z′₊, then θ ∈ Z′ and hence its Fourier transform

a(ξ) = (Φθ)(ξ) = (2π)^{−1/2} ∫₀^∞ h(t) t^{−iξ} dt   (2.5)

is an element of C₀^∞(R)′. Then definition (2.4) can be rewritten as

b(ξ) = (2π)^{−1/2} a(ξ) Γ(1 − iξ)^{−1}   (2.6)

where Γ(·) is the gamma function. Note that Γ(1 − iξ) ≠ 0 for ξ ∈ R, but according to the Stirling formula it tends exponentially to zero as |ξ| → ∞. Nevertheless the distribution b ∈ C₀^∞(R)′ and hence s ∈ Z′.

For a test function f ∈ D, we set

g(ξ) = Γ(1/2 + iξ)(ΦUf)(ξ) =: (Ξf)(ξ).   (2.7)

Since Uf ∈ Z, the functions ΦUf ∈ C₀^∞(R) and hence g ∈ C₀^∞(R). We note that (ΦUf)(ξ) is the Mellin transform of f(t).
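For orientation (an illustration added here, not part of the original text), take the simplest kernel h(t) = e^{−t}. Then the numerator in (2.4) equals the denominator:

```latex
\int_0^\infty e^{-t} t^{-i\xi}\, dt = \Gamma(1-i\xi),
\qquad\text{so}\qquad
b(\xi) = \frac{1}{2\pi},
\qquad
s(x) = \sqrt{2\pi}\,(\Phi^* b)(x) = \delta(x).
```

Thus the sign-function of e^{−t} is the positive distribution δ(x), as it must be for a positive operator; this is the case k = 0, α = 1, β = 0 of Lemma 3.1 below.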
The following result was obtained in [10].

Theorem 2.1. Suppose that h ∈ Z′₊. Define the distribution b ∈ C₀^∞(R)′ by formula (2.4), and set s = √(2π) Φ*b ∈ Z′. Let fj ∈ D, j = 1, 2, let the functions gj ∈ C₀^∞(R) be defined by formula (2.7) and uj = Φ*gj = Φ*Ξfj ∈ Z. Then the identity

⟨h, f̄1 ⋆ f2⟩ = ⟨⟨s, ū1 u2⟩⟩ =: s[u1, u2]   (2.8)

holds.

Identity (2.8) yields the precise sense to (1.8) where L = Φ*Ξ. It can be easily seen that, formally, L is the Laplace transform.

2.3. For an arbitrary distribution h ∈ Z′₊, we have constructed in Theorem 2.1 its sign-function s ∈ Z′. It turns out that, conversely, the kernel h(t) can be recovered from its sign-function s(x).

Proposition 2.2. Let h ∈ Z′₊, and let s ∈ Z′ be its sign-function. Then
h(t) = ∫_{−∞}^{∞} e^{−t e^{−x}} e^{−x} s(x) dx.   (2.9)
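As a consistency check (added here for illustration), one can insert the simplest sign-function s(x) = α^{−1} δ(x − β) with β = −ln α, α > 0, into (2.9):

```latex
h(t) = \int_{-\infty}^{\infty} e^{-t e^{-x}} e^{-x}\, \alpha^{-1}\delta(x-\beta)\, dx
     = \alpha^{-1}\, e^{-t e^{-\beta}}\, e^{-\beta}
     = \alpha^{-1}\cdot e^{-\alpha t}\cdot \alpha
     = e^{-\alpha t},
```

since e^{−β} = α. So the exponential kernel is recovered, in agreement with formula (3.2) below.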
As we shall see in the next section, for kernels (1.2), the corresponding sign-function s(x) is a highly singular distribution. Nevertheless the mapping h(t) ↔ s(x) yields a one-to-one correspondence between the classes Z′₊ and Z′. We emphasize that formula (2.9) is understood in the sense of distributions.

2.4. Suppose now that h(t) = h̄(t) so that the Hankel operator H defined by formula (1.1) is symmetric. Then the identity (2.8), or equivalently (1.8), implies relation (1.7). To be more precise, we use the following natural definition. Denote by N±(s) the maximal dimension of linear sets L± ⊂ Z such that ±s[u, u] > 0 for all u ∈ L±, u ≠ 0. We apply the same definition to the form h[f, f] considered on the set D and observe that N±(h) = N±(H) if the operator H is bounded. Note that formula (2.7) establishes a one-to-one correspondence between the sets D and C₀^∞(R). Of course the Fourier transform establishes a one-to-one correspondence between the sets C₀^∞(R) and Z. Therefore the following assertion is a direct consequence of Theorem 2.1.

Theorem 2.3. Let h(t) = h̄(t), and let the Hankel operator H with kernel h be bounded. Define the distribution b ∈ C₀^∞(R)′ by formula (2.4) and set s = √(2π) Φ*b ∈ Z′. Then

N±(H) = N±(s).   (2.10)
In particular, relation (2.10) means that a Hankel operator H is positive (or negative) if and only if the function s(x) is positive (or negative). This justifies the term "sign-function" for s(x).

3. Proofs of Theorems 1.1 and 1.3

3.1. Let us first calculate the b- and s-functions of the kernel

h(t) = t^k e^{−αt}   where k = 0, 1, . . . , Re α > 0,   (3.1)

but we do not assume that Im α = 0. Calculating integral (2.5) we see that

a(ξ) = (2π)^{−1/2} ∫₀^∞ t^k e^{−αt} t^{−iξ} dt = (2π)^{−1/2} α^{−1−k+iξ} Γ(1 + k − iξ),

where arg α ∈ (−π/2, π/2), and hence function (2.6) equals

b(ξ) = α^{−1−k+iξ} Γ(1 + k − iξ) / (2π Γ(1 − iξ)).

Since k is an integer, this yields the following result.

Lemma 3.1. Let h(t) be given by formula (3.1). If k = 0, then b(ξ) = (2π)^{−1} α^{−1+iξ} and

s(x) = α^{−1} δ(x − β),   β = − ln α.   (3.2)

If k = 1, 2, . . . , then b(ξ) = (2π)^{−1} α^{−1−k+iξ} (1 − iξ) · · · (k − iξ) and

s(x) = α^{−1−k} (1 − ∂) · · · (k − ∂) δ(x − β).   (3.3)
Let us use the notation ν_{ℓ,k} for the coefficients of the expansion

(1 − z) · · · (k − z) = Σ_{ℓ=0}^{k} ν_{ℓ,k} z^ℓ

for k ≥ 1, ℓ ≤ k, and set ν_{0,0} = 1. Then formulas (3.2) and (3.3) can be rewritten as

s(x) = α^{−1−k} Σ_{ℓ=0}^{k} ν_{ℓ,k} δ^{(ℓ)}(x − β).

Therefore Lemma 3.1 implies the following more general result.
Lemma 3.2. Let

h(t) = P(t) e^{−αt}   (3.4)

where Re α > 0 and

P(t) = Σ_{k=0}^{K} p_k t^k   (3.5)

is a polynomial. Set

q_k = Σ_{ℓ=k}^{K} ν_{k,ℓ} α^{−1−ℓ} p_ℓ;   (3.6)

in particular, q_K = (−1)^K α^{−1−K} p_K. Then the b- and s-functions of kernel (3.4) equal

b(ξ) = (2π)^{−1} e^{−iξβ} Q(ξ)   where Q(ξ) = Σ_{k=0}^{K} q_k (iξ)^k,   (3.7)

β = − ln α, and

s(x) = Σ_{k=0}^{K} q_k δ^{(k)}(x − β).   (3.8)
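The passage (3.6) from the coefficients p_ℓ of P(t) to the coefficients q_k of Q(ξ) is a finite triangular computation. The sketch below (added for illustration; exact rational arithmetic, real rational α) implements it directly.

```python
from fractions import Fraction

def nu(k):
    """Coefficients nu_{l,k} of (1 - z)(2 - z) ... (k - z) = sum_l nu_{l,k} z^l,
    lowest degree first; nu(0) = [1] by the convention nu_{0,0} = 1."""
    poly = [Fraction(1)]
    for j in range(1, k + 1):
        # multiply the current polynomial by (j - z)
        poly = [j * (poly[i] if i < len(poly) else 0)
                - (poly[i - 1] if i >= 1 else 0)
                for i in range(len(poly) + 1)]
    return poly

def q_coeffs(alpha, p):
    """q_k = sum_{l=k}^{K} nu_{k,l} alpha^{-1-l} p_l, formula (3.6)."""
    K = len(p) - 1
    nus = [nu(l) for l in range(K + 1)]
    return [sum(nus[l][k] * alpha ** (-1 - l) * p[l] for l in range(k, K + 1))
            for k in range(K + 1)]
```

For example, P(t) = 3 + 5t and α = 2 give q = [11/4, −5/4], and the leading coefficient satisfies q_K = (−1)^K α^{−1−K} p_K, as stated after (3.6).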
Observe that distribution (3.8) is positive if and only if Im β = 0, q_k = 0 for all k ≥ 1 and q_0 > 0. Therefore the Hankel operator with kernel (3.4), (3.5) cannot be expected to be sign-definite unless α is real and K = 0. Theorem 1.1 provides essentially more advanced results in this direction.

Note that for u ∈ Z

∫_{−∞}^{∞} δ^{(k)}(x − β) |u(x)|² dx = (−1)^k Σ_{ℓ=0}^{k} \binom{k}{ℓ} ū^{(ℓ)}(β) u^{(k−ℓ)}(β).

This leads to the following result.

Lemma 3.3. For the distribution given by formula (3.8), we have

⟨⟨s, |u|²⟩⟩ = Σ_{j,ℓ=0}^{K} s_{j,ℓ} u^{(ℓ)}(β) ū^{(j)}(β),   u ∈ Z,

where s_{j,ℓ} = 0 for j + ℓ > K and
s_{j,ℓ} = (−1)^{j+ℓ} \binom{j+ℓ}{j} q_{j+ℓ}   (3.9)

for j + ℓ ≤ K; in particular,

s_{j,ℓ} = (−1)^K \binom{K}{j} q_K   for j + ℓ = K.
It is now convenient to introduce

Definition 3.4. Let a kernel h(t) be given by formulas (3.4), (3.5), and let q_k be coefficients (3.6). Denote by S the matrix of order K + 1 with the elements s_{j,ℓ} defined in Lemma 3.3. We call S = S(P, α) the sign-matrix of the kernel h(t).

It is only essential for our proof of Theorem 1.1 that the sign-matrix S is skew triangular, that is, s_{j,ℓ} = 0 for j + ℓ > K, and that its elements s_{j,ℓ} = \binom{K}{j} α^{−1−K} p_K on the skew-diagonal j + ℓ = K are not zeros if p_K ≠ 0. In this case Det S ≠ 0. Note also that

S(P̄, ᾱ) = S(P, α)*;   (3.10)

in particular, S(P, α) is symmetric if α = ᾱ and P(t) = P̄(t). Let us define the mapping J_K(β) : Z → C^{K+1} by the relation²

J_K(β)u = (u(β), u′(β), . . . , u^{(K)}(β))^⊤.   (3.11)
Then Lemma 3.3 yields the following assertion.

Proposition 3.5. For a kernel h(t) defined by (3.4), (3.5), the sign-function is given by the formula

⟨⟨s, |u|²⟩⟩ = (S(P, α) J_K(β)u, J_K(β̄)u)_{K+1},   β = − ln α,   (3.12)

where (·, ·)_{K+1} is the scalar product in C^{K+1}.

Formula (3.12) is convenient for real α and P(t). In the complex case, we consider the real kernel

h(t) = P(t) e^{−αt} + P̄(t) e^{−ᾱt},   Re α > 0, Im α > 0,   (3.13)

² The upper index "⊤" means that a vector is regarded as a column.
corresponding to two complex conjugate points α and ᾱ. It follows from Proposition 3.5 that the corresponding sign-function equals

⟨⟨s, |u|²⟩⟩ = (S(P, α) J_K(β)u, J_K(β̄)u)_{K+1} + (S(P̄, ᾱ) J_K(β̄)u, J_K(β)u)_{K+1}.

Let us rewrite this equality in the "matrix" form taking into account relation (3.10).

Proposition 3.6. For a kernel h(t) defined by (3.5), (3.13), the sign-function is given by the formula

⟨⟨s, |u|²⟩⟩ = (𝐒(P, α)(J_K(β)u, J_K(β̄)u)^⊤, (J_K(β)u, J_K(β̄)u)^⊤)_{2K+2},   β = − ln α,

where

𝐒(P, α) = ( 0        S(P, α)*
            S(P, α)  0 ).   (3.14)
Let us now consider kernel (1.2). We can apply Proposition 3.5 to all real terms corresponding to m = 1, . . . , M0 and Proposition 3.6 to all complex conjugate terms corresponding to the pairs m, M1 + m where m = M0 + 1, . . . , M0 + M1. Various objects will be endowed with the index m = 1, . . . , M0 + M1. Thus we set Sm = S(Pm, αm) for m = 1, . . . , M0 and Sm = 𝐒(Pm, αm) for m = M0 + 1, . . . , M0 + M1. The mappings Jm = J_{Km}(βm) : Z → C^{rm} are defined for m = 1, . . . , M0 by formula (3.11) where βm = − ln αm and rm = Km + 1. If m = M0 + 1, . . . , M0 + M1, we set Jm u = (J_{Km}(βm)u, J_{Km}(β̄m)u)^⊤; then Jm : Z → C^{rm} where rm = 2Km + 2.

It is convenient to rewrite the above results in the vectorial notation. We set

C^r = ⊕_{m=1}^{M0+M1} C^{rm}   (3.15)

and introduce the mapping J : Z → C^r by the formula

Ju = (J1 u, . . . , J_{M0+M1} u)^⊤.   (3.16)

The sign-matrix of kernel (1.2) is defined as the block-diagonal matrix

𝐒 = diag{S1, . . . , S_{M0+M1}}.   (3.17)

It follows from Propositions 3.5 and 3.6 that the sign-function of kernel (1.2) is given by the formula

⟨⟨s, |u|²⟩⟩ = (𝐒Ju, Ju)_r = Σ_{m=1}^{M0+M1} (Sm Jm u, Jm u)_{rm}.   (3.18)
3.2. Below we need the following elementary assertion. We give its proof because similar arguments will be used in Subsection 3.4 under less trivial circumstances.

Lemma 3.7. Let β1, . . . , βM ∈ C and K1, . . . , KM ∈ Z₊. Then there exist functions ψ_{k,m} ∈ Z where m = 1, . . . , M and k = 0, . . . , Km such that ψ_{k,m}^{(l)}(βn) = δ_{m,n} δ_{k,l} for all n = 1, . . . , M and l = 0, . . . , Km.

Proof. Choose some m = 1, . . . , M and K ∈ Z₊. Let a0, a1, . . . , aK be any given numbers. It suffices to construct a function ψ ∈ Z such that ψ^{(l)}(βn) = 0 for all n ≠ m and ψ^{(l)}(βm) = a_l where l = 0, . . . , K. Let ϕ0 ∈ Z be an arbitrary function such that ϕ0(0) ≠ 0. Set ω(z) = 1 if M = 1,

ω(z) = Π_{n=1; n≠m}^{M} (z − βn)^{K+1}   if M ≥ 2,   (3.19)

and

ϕ(z) = ω(z) ϕ0(z − βm).   (3.20)

Of course ϕ(βm) ≠ 0. Let us seek the function ψ in the form

ψ(z) = Q(z − βm) ϕ(z)   (3.21)

where

Q(z) = Σ_{j=0}^{K} q_j z^j   (3.22)

is a polynomial. Clearly, ψ ∈ Z and ψ has zeros of order K + 1 at all points βn, n ≠ m. It remains to satisfy the conditions ψ^{(l)}(βm) = a_l. In view of (3.21), (3.22) they yield the equations

l! Σ_{j=0}^{l} ((l − j)!)^{−1} q_j ϕ^{(l−j)}(βm) = a_l,   l = 0, 1, . . . , K,   (3.23)

for the coefficients q_j. For l = 0, we find that

q_0 = ϕ(βm)^{−1} a_0.   (3.24)

Then Eq. (3.23) determines q_l if q_0, . . . , q_{l−1} are already found. Defining now the polynomial Q(z) by formula (3.22), we see that the corresponding function (3.21) satisfies all necessary conditions. □
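The forward substitution in (3.23)–(3.24) can be carried out mechanically. The sketch below is purely illustrative: it takes βm = 0 and a polynomial with nonzero constant term as a stand-in for ϕ(z) = ω(z)ϕ0(z − βm), computes the q_j, and then verifies the prescribed derivatives independently by multiplying out ψ = Qϕ.

```python
from fractions import Fraction
from math import factorial

phi = [Fraction(c) for c in (1, 1, 0, 1)]  # stand-in phi(z) = 1 + z + z^3, phi(0) != 0
a = [Fraction(c) for c in (2, -1, 3)]      # prescribed derivatives psi^(l)(0) = a_l
K = len(a) - 1

def deriv_at_0(coeffs, l):
    # l-th derivative at 0 of a polynomial given by its coefficient list
    return factorial(l) * coeffs[l] if l < len(coeffs) else Fraction(0)

# Forward substitution in Eq. (3.23); the l = 0 step is formula (3.24).
q = []
for l in range(K + 1):
    known = sum(q[j] * deriv_at_0(phi, l - j) / factorial(l - j) for j in range(l))
    q.append((a[l] / factorial(l) - known) / phi[0])

# psi(z) = Q(z) * phi(z), formula (3.21) with beta_m = 0: polynomial product.
psi = [Fraction(0)] * (len(q) + len(phi) - 1)
for i, qi in enumerate(q):
    for j, pj in enumerate(phi):
        psi[i + j] += qi * pj
```

By construction ψ^{(l)}(0) = a_l for l ≤ K; the zeros at the remaining points βn would come from the factor ω(z), which is trivial here since M = 1.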
In our case βm = β̄m for m = 1, . . . , M0 and βm = β̄_{m+M1}, Km = K_{m+M1} for m = M0 + 1, . . . , M0 + M1. Set um = (u_{0,m}, u_{1,m}, . . . , u_{Km,m})^⊤ ∈ C^{Km+1} for m = 1, . . . , M, 𝐮m = um for m = 1, . . . , M0 and 𝐮m = (um, u_{m+M1})^⊤ for m = M0 + 1, . . . , M0 + M1. Then 𝐮m ∈ C^{rm} and 𝐮 = (𝐮1, . . . , 𝐮_{M0+M1})^⊤ is an element of the direct sum (3.15). Let us define the mapping Y : C^r → Z by the formula

(Y𝐮)(z) = Σ_{m=1}^{M} Σ_{k=0}^{Km} u_{k,m} ψ_{k,m}(z)   (3.25)

where ψ_{k,m} are the functions constructed in Lemma 3.7. By the definition of the functions ψ_{k,m}, for mapping (3.16) we have the relation

JY = I   (3.26)

(I is the identity operator in C^r).

In view of Theorem 2.3, for the proof of Theorem 1.1 we only have to calculate the numbers N±(s). This can be reduced to a problem of linear algebra.

Lemma 3.8. Let s be the sign-function of kernel (1.2), and let 𝐒 be the corresponding sign-matrix defined by formula (3.17). Then

N±(s) = N±(𝐒).   (3.27)

Proof. We proceed from identity (3.18). Consider, for example, the sign "−". If ⟨⟨s, |u|²⟩⟩ < 0, then (𝐒𝐮, 𝐮)_r < 0 for 𝐮 = Ju. This shows that N−(s) ≤ N−(𝐒). Let us prove the opposite inequality. It follows from the identities (3.18) and (3.26) that

⟨⟨s, |Y𝐮|²⟩⟩ = (𝐒𝐮, 𝐮)_r,   ∀𝐮 ∈ C^r.

Thus if (𝐒𝐮, 𝐮)_r < 0, then ⟨⟨s, |u|²⟩⟩ < 0 for u = Y𝐮. □

3.3.
It remains to calculate the numbers

N±(𝐒) = Σ_{m=1}^{M0+M1} N±(Sm).   (3.28)
It is quite easy to find N±(Sm) for m ≥ M0 + 1.

Lemma 3.9. Under the assumptions of Proposition 3.6 suppose that p_K ≠ 0. Then matrix (3.14) has exactly K + 1 positive and K + 1 negative eigenvalues (they are opposite to each other).

Proof. Set S = S(P, α) and recall that Det S ≠ 0. If S*Sf = λ²f for some λ > 0, then

𝐒 ( λf ; ±Sf ) = ( 0  S* ; S  0 )( λf ; ±Sf ) = ±λ ( λf ; ±Sf ).

Thus we put into correspondence to every eigenvalue λ² of the matrix S*S of order K + 1 the eigenvalues λ and −λ of the matrix 𝐒 of order 2K + 2. □

In the case m ≤ M0 we need some information on skew triangular matrices. We consider Hermitian matrices S of order K + 1 with elements s_{j,ℓ}, j, ℓ = 0, . . . , K, such that s_{j,ℓ} = s̄_{ℓ,j}. We say that a matrix S is skew triangular if s_{j,ℓ} = 0 for j + ℓ > K. It is easy to see (reasoning, for example, by induction) that

Det S = (−1)^{K(K+1)/2} s_{0,K} s_{1,K−1} · · · s_{K,0}.   (3.29)
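Both linear-algebra facts used here are easy to check numerically. The sketch below (matrices chosen ad hoc for illustration) verifies that the doubled matrix of Lemma 3.9 has eigenvalues ± the singular values of its block, and checks the determinant formula (3.29) on a skew triangular example.

```python
import numpy as np

# Lemma 3.9: the Hermitian block matrix [[0, S*], [S, 0]] has eigenvalues
# +/- the singular values of S, hence equally many of each sign.
S = np.array([[1.0, 2.0], [3.0, 4.0]])       # an arbitrary invertible block
Z = np.zeros_like(S)
big = np.block([[Z, S.conj().T], [S, Z]])
eigs = np.sort(np.linalg.eigvalsh(big))
sv = np.linalg.svd(S, compute_uv=False)

# Formula (3.29) for a skew triangular matrix (entries vanish for j + l > K):
T = np.array([[1.0, 2.0, 3.0],
              [2.0, 5.0, 0.0],
              [3.0, 0.0, 0.0]])              # K = 2, skew diagonal (3, 5, 3)
K = T.shape[0] - 1
det_formula = (-1) ** (K * (K + 1) // 2) * np.prod([T[j, K - j] for j in range(K + 1)])
```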
In particular, Det S ≠ 0 if (and only if) all skew diagonal elements are not zeros.

Let us first consider skew diagonal matrices.

Lemma 3.10. Let S0 be a Hermitian matrix of order K + 1 such that s_{j,ℓ} = 0 for j + ℓ ≠ K. If K is odd, then S0 has the eigenvalues ±|s_{j,K−j}| where j = 0, . . . , (K − 1)/2. If K is even, then S0 has the eigenvalues ±|s_{j,K−j}| where j = 0, . . . , K/2 − 1 and the eigenvalue s_{K/2,K/2}.

Proof. Let us consider the equation S0 f = λf for f = (f0, . . . , fK)^⊤. Since

S0 f = (s_{0,K} f_K, s_{1,K−1} f_{K−1}, . . . , s_{K,0} f_0)^⊤,

this equation is equivalent to the system

s_{j,K−j} f_{K−j} = λ f_j,   j = 0, . . . , K.   (3.30)
If K is odd, then (3.30) decouples into (K + 1)/2 systems of two equations for f_j and f_{K−j} where j = 0, . . . , (K − 1)/2. Every such system has two simple eigenvalues λ = ±√(s_{j,K−j} s_{K−j,j}) = ±|s_{j,K−j}|. If K is even, then (3.30) decouples into K/2 systems of the same two equations for f_j and f_{K−j} where j = 0, . . . , K/2 − 1 and the single equation s_{K/2,K/2} f_{K/2} = λ f_{K/2}. The last equation has of course the eigenvalue λ = s_{K/2,K/2}. □

For applications to Hankel operators, we need the following result.

Lemma 3.11. Let S be a Hermitian skew triangular matrix of order K + 1 such that s_{j,K−j} ≠ 0 for j = 0, . . . , K. If K is odd, then S has (K + 1)/2 positive and (K + 1)/2 negative eigenvalues. If K is even, then S has K/2 + 1 positive and K/2 negative eigenvalues for s_{K/2,K/2} > 0 and it has K/2 positive and K/2 + 1 negative eigenvalues for s_{K/2,K/2} < 0.
Proof. According to formula (3.29), Det S depends only on the elements s_{j,ℓ} on the skew diagonal where j + ℓ = K. Let us use that the eigenvalues of S depend continuously on its matrix elements, so that they cannot cross the point zero unless one of the skew diagonal elements hits zero. Let us consider the family of matrices S(ε) where ε ∈ [0, 1] with elements s_{j,ℓ}(ε) = ε s_{j,ℓ} for j + ℓ < K and s_{j,ℓ}(ε) = s_{j,ℓ} for j + ℓ ≥ K. Since Det S(ε) = Det S ≠ 0 for ε ∈ [0, 1], all matrices S(ε) and, in particular, S(1) = S and S(0), have the same numbers of positive and negative eigenvalues. So it remains to apply Lemma 3.10 to the matrix S(0). □

The following result is a particular case of Lemma 3.11.

Lemma 3.12. Let S = S(P, α) be the sign-matrix of kernel (3.4), (3.5) where Im α = 0, P(t) = P̄(t) and p_K ≠ 0. The total numbers N+ = N+(S) and N− = N−(S) of strictly positive and negative eigenvalues of the matrix S are given by the equalities

N+ = N− = (K + 1)/2        if K is odd,
N+ − 1 = N− = K/2          if K is even and p_K > 0,
N+ = N− − 1 = K/2          if K is even and p_K < 0.
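The homotopy argument in the proof above can be replayed numerically. The sketch below (an illustrative 3×3 example with K = 2 and s_{K/2,K/2} = 5 > 0, chosen here) scales the entries below the skew diagonal by ε and checks that the inertia predicted by Lemmas 3.11 and 3.12, namely two positive and one negative eigenvalue, persists along the whole family.

```python
import numpy as np

K = 2
S = np.array([[1.0, 2.0, 3.0],     # Hermitian, s_{j,l} = 0 for j + l > K,
              [2.0, 5.0, 0.0],     # skew diagonal (3, 5, 3) nonzero
              [3.0, 0.0, 0.0]])

def inertia(A, tol=1e-9):
    e = np.linalg.eigvalsh(A)
    return int((e > tol).sum()), int((e < -tol).sum())

# S(eps): entries with j + l < K are scaled by eps, the rest are kept.
# Det S(eps) = Det S != 0 by (3.29), so the inertia cannot change.
jl = np.add.outer(np.arange(K + 1), np.arange(K + 1))
inertias = {inertia(np.where(jl < K, eps * S, S)) for eps in np.linspace(0.0, 1.0, 11)}
```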
Combined with equality (3.28), Lemmas 3.9 and 3.12 show that

N±(𝐒) = Σ_{m=1}^{M0} N±^{(m)} + Σ_{m=M0+1}^{M0+M1} Km + M1.   (3.31)
Putting this result together with relations (2.10) and (3.27), we conclude the proof of Theorem 1.1.

3.4. In this subsection we consider operators H = H0 + V where H0 is the Carleman operator (or a more general operator) and V is a finite rank Hankel operator. Various objects related to the operator H0 will be endowed with the index "0", and objects related to the operator V will be endowed with the index "v". Our goal is to get an explicit formula for the total number N−(H) of negative eigenvalues of the operator H.

Theorem 3.13. Suppose that the sign-function s0(x) of a Hankel operator H0 is bounded and nonnegative. Let the kernel v(t) of V be given by the formula

v(t) = Σ_{m=1}^{M} Pm(t) e^{−αm t}

where Pm(t) is a polynomial of degree Km. Define the numbers N−^{(m)} by formula (1.4) where pm is coefficient (1.3). Then the total number N−(H) of negative eigenvalues of the operator H = H0 + V is given by formula (1.5).
Comparing Theorem 1.1 for the operator V and Theorem 3.13, we can state the following result.

Theorem 3.14. Under the assumptions of Theorem 3.13, we have N−(H) = N−(V). In particular, H ≥ 0 if and only if V ≥ 0.

Since for the Carleman operator C the sign-function s0(x) = 1, Theorem 3.14 applies to H0 = C and hence implies Theorem 1.3.

The proof of Theorem 3.13 is essentially similar to that of Theorem 1.1. Relation (2.10) remains of course true, but instead of (3.18) we now have

⟨⟨s, |u|²⟩⟩ = ∫_{−∞}^{∞} s0(x)|u(x)|² dx + (𝐒_v Ju, Ju)_r.   (3.32)
Compared to Subsection 3.2, we additionally have to consider the first term in the right-hand side of (3.32). This is possible if, instead of Lemma 3.7, one uses a more special construction. We emphasize that in the assertion below, real and complex βm are considered in essentially different ways.

Lemma 3.15. Let β1, . . . , βM ∈ C, K1, . . . , KM ∈ Z₊ and ε > 0. Then there exist functions ψ_{k,m}(ε) ∈ Z where m = 1, . . . , M and k = 0, . . . , Km such that ψ_{k,m}^{(l)}(βn; ε) = δ_{m,n} δ_{k,l} for all n = 1, . . . , M and l = 0, . . . , Km. Moreover, these functions satisfy the condition

∫_{−∞}^{∞} |ψ_{k,m}(x; ε)|² dx = O(ε),   ε → 0.   (3.33)

Proof. Choose some m = 1, . . . , M, K ∈ Z₊ and ε > 0. Let a0, a1, . . . , aK be any given numbers. It suffices to construct a function ψ(ε) ∈ Z such that ψ^{(l)}(βn; ε) = 0 for all n ≠ m and ψ^{(l)}(βm; ε) = a_l where l = 0, . . . , K. We also have to satisfy condition (3.33) for the functions ψ(x; ε).

Let βm = β′m + iβ″m. As in Lemma 3.7, ω(z) is function (3.19) and ϕ0 ∈ Z is an arbitrary function such that ϕ0(iβ″m) ≠ 0. If β″m = 0, we set (cf. (3.20))

ϕ(z; ε) = ω(z) ϕ0((z − βm)/ε).   (3.34)

If β″m ≠ 0, we set

ϕ(z; ε) = ω(z) e^{−i(sgn β″m)(z−βm)/ε} ϕ0(z − β′m).   (3.35)
We again seek the function ψ(ε) in the form

ψ(z; ε) = Q(z − βm; ε) ϕ(z; ε)   (3.36)

where Q is polynomial (3.22) with the coefficients qj = qj(ε) depending on ε. Obviously, ψ(ε) ∈ Z and due to the factor ω(z) the function ψ(z; ε) has zeros of order K + 1 at all points βn where n ≠ m. The conditions ψ^{(l)}(βm; ε) = a_l yield again Eq. (3.23), but now the coefficients ϕ^{(l−j)}(βm; ε) depend on ε. Note that

ϕ(βm) := ϕ(βm; ε) = ω(βm) ϕ0(iβ″m) ≠ 0

does not depend on ε and according to (3.34) or (3.35)

|ϕ^{(k)}(βm; ε)| ≤ C_k ε^{−k}.   (3.37)

The coefficient q0 is again determined by formula (3.24). Solving Eq. (3.23) successively for q1(ε), . . . , qK(ε) and using estimates (3.37) we find that

|qj(ε)| ≤ C_j ε^{−j},   j = 0, . . . , K.   (3.38)
It remains to prove (3.33). Let N = (K + 1)(M − 1). If Im β_m = 0, it follows from (3.34) and (3.36) that

$$\int_{-\infty}^{\infty}|\psi(x;\varepsilon)|^2\,dx\le C\int_{-\infty}^{\infty}\big(1+\varepsilon^{-2K}|x-\beta_m|^{2K}\big)\big(1+|x-\beta_m|^{2N}\big)\big|\varphi_0\big((x-\beta_m)/\varepsilon\big)\big|^2\,dx.$$

Making the change of variables x − β_m = εy, we see that this integral is bounded by Cε.

If Im β_m ≠ 0, then ψ(x; ε) is defined by formulas (3.35) and (3.36). Since

$$\big|e^{-i(\operatorname{sgn}\beta''_m)(x-\beta_m)/\varepsilon}\big|=e^{-(\operatorname{sgn}\beta''_m)\beta''_m/\varepsilon}=e^{-|\beta''_m|/\varepsilon},$$

we have the estimate

$$\int_{-\infty}^{\infty}|\psi(x;\varepsilon)|^2\,dx\le Ce^{-|\beta''_m|/\varepsilon}\varepsilon^{-2K}\int_{-\infty}^{\infty}\big(1+|x-\beta'_m|^{2K+2N}\big)\big|\varphi_0(x-\beta'_m)\big|^2\,dx.$$

The right-hand side here tends to zero exponentially due to the factor e^{−|β″_m|/ε}. □
Let us return to Theorem 3.13. Since H₀ ≥ 0, we see that N₋(H) ≤ N₋(V) where N₋(V) = N₋ according to Theorem 1.1. Thus we only have to check that N₋(H) ≥ N₋ or, by virtue of Theorem 2.3 and formula (3.31), that

$$N_-(s)\ge N_-(S_v).\tag{3.39}$$

Let 𝓛 be the subspace of ℂ^r spanned by the eigenvectors of S_v corresponding to its negative eigenvalues. Then dim 𝓛 = N₋(S_v) and there exists λ₀ > 0 such that

$$(S_vu,u)_r\le-\lambda_0\|u\|_r^2,\qquad\forall u\in\mathcal L.$$

We again define the function u(ε) = Y(ε)u by formula (3.25) where ψ_{k,m}(z; ε) are the functions constructed in Lemma 3.15 for sufficiently small ε. Similarly to (3.26), we have JY(ε) = I and hence

$$\big(S_vJu(\varepsilon),Ju(\varepsilon)\big)_r=(S_vu,u)_r\le-\lambda_0\|u\|_r^2.\tag{3.40}$$

Since s₀ ∈ L^∞(ℝ), it follows from estimate (3.33) that

$$\Big|\int_{-\infty}^{\infty}s_0(x)|u(x;\varepsilon)|^2\,dx\Big|\le C\varepsilon\|u\|_r^2,\qquad\forall u\in\mathbb C^r.\tag{3.41}$$

Substituting (3.40) and (3.41) into equality (3.32), we find that, for all u ∈ 𝓛,

$$\langle s,|u(\varepsilon)|^2\rangle\le-(\lambda_0-C\varepsilon)\|u\|_r^2<0$$

if ε < λ₀/C. It follows that N₋(s) ≥ dim 𝓛. This yields estimate (3.39) and hence concludes the proof of Theorem 3.13. □

3.5. It follows from Lemma 3.2 that the b- and s-functions of kernel (1.2) are the sums (over m) of terms (3.7) and (3.8), respectively. The coefficients q_{k,m} of the corresponding polynomials Q_m(ξ) are constructed by formula (3.6) in terms of the coefficients p_{k,m} of the polynomials P_m(t). It turns out that formulas (3.7) or (3.8) for the b- or s-functions characterize finite rank Hankel operators. Moreover, the coefficients of the polynomials P_m(t) are determined by the coefficients of the polynomials Q_m(ξ). This follows from the assertion below.

Lemma 3.16. Let α with Re α > 0 be given, and let β = −ln α. If a function b(ξ) is defined by formula (3.7), then there exists a unique polynomial P(t) of degree K such that b(ξ) is the b-function of the kernel h(t) = P(t)e^{−αt}.
Proof. Given the coefficients q₀, …, q_K, we have to solve Eq. (3.6) for the coefficients p₀, …, p_K. We will find successively p_K, …, p₀. Recall that ν_{k,k} = (−1)^k. Therefore, according to Eq. (3.6) with k = K, we have p_K = (−1)^K α^{1+K} q_K and, more generally,

$$p_k=(-1)^K\alpha^{1+K}q_k-(-1)^K\sum_{\ell=k+1}^{K}\nu_{k,\ell}\,\alpha^{k-\ell}p_\ell.$$
This yields an expression for p_k if p_K, …, p_{k+1} are already found. □

Since both functions b(ξ) and s(x) determine the coefficients q₀, …, q_K, Lemma 3.16 can be equivalently reformulated in terms of the sign-functions s(x). Of course the reconstructions of h(t) by formula (2.9) and by the method of Lemma 3.16 are consistent with each other.

Finally, we state an equivalent assertion in terms of the sign-matrices S (see Definition 3.4). We recall that the sign-matrix S = S(P, α) of the kernel h(t) defined by (3.4), (3.5) is skew triangular, and its matrix elements are $s_{j,\ell}=\binom{K}{j}\alpha^{-1-K}p_K$ if j + ℓ = K. As usual, we suppose that p_K ≠ 0. Moreover, the matrix S possesses an additional property: the numbers

$$\ell!\,j!\,s_{\ell,j}=:\rho_{\ell+j},\qquad\ell,j=0,1,\dots,K,\tag{3.42}$$

depend on the sum ℓ + j only. We also note that the matrix S is symmetric if α = ᾱ and $\overline{P(t)}=P(t)$. The following assertion shows that there is a one-to-one correspondence between Hankel kernels h(t) = P(t)e^{−αt} and such matrices.

Lemma 3.17. Let the elements s_{ℓ,j} of a skew triangular matrix S of order K + 1 satisfy condition (3.42). Then, for every α with Re α > 0, there exists a unique polynomial P(t) of degree K such that S = S(P, α) is the sign-matrix of the kernel h(t) = P(t)e^{−αt}.

Proof. Comparing relations (3.9) and (3.42), we see that q_k = (−1)^k (k!)^{−1} ρ_k. Thus it remains to use Lemma 3.16. □

4. Various representations of Hankel operators

Hankel operators can be realized in various spaces. We distinguish four representations: in the spaces H²₊(𝕋), H²₊(ℝ), ℓ²(ℤ₊) and L²(ℝ₊). The last one was already used above. For the precise definitions of the Hardy classes H²₊(𝕋) and H²₊(ℝ), see, e.g., the book [1]. In this section we describe bounded Hankel operators in various representations
in terms of their quadratic forms. Our presentation seems to be somewhat different from those in the books [6,7].

4.1. Let us start with the representation of Hankel operators in the Hardy space H²₊(𝕋) ⊂ L²(𝕋) of functions analytic in the unit disc. An operator G in the space H²₊(𝕋) is called Hankel if, for some ω ∈ L^∞(𝕋), its quadratic form admits the representation

$$(Gu,u)=\int_{\mathbb T}\omega(\mu)u(\bar\mu)\overline{u(\mu)}\,dm(\mu),\qquad dm(\mu)=(2\pi i\mu)^{-1}d\mu,\qquad\forall u\in H^2_+(\mathbb T).\tag{4.1}$$

Note that dm(μ) is the Lebesgue measure on 𝕋 normalized so that m(𝕋) = 1. The operator G is determined by the function ω(μ), that is, G = G(ω). The function ω(μ) is known as the symbol of the Hankel operator G(ω). Of course the symbol is not unique because G(ω₁) = G(ω₂) if (and only if) ω₁ − ω₂ ∈ H^∞₋(𝕋) (the space of functions analytic outside the unit disc, bounded and decaying at infinity). Of course, the operator G is bounded.

Hankel operators in the Hardy space H²₊(ℝ) ⊂ L²(ℝ) of functions analytic in the upper half-plane are defined quite similarly. An operator H in the space H²₊(ℝ) is called Hankel if, for some φ ∈ L^∞(ℝ), its quadratic form admits the representation

$$(Hw,w)=\int_{\mathbb R}\varphi(\lambda)w(-\lambda)\overline{w(\lambda)}\,d\lambda,\qquad\forall w\in H^2_+(\mathbb R).\tag{4.2}$$
The operator H is determined by the function φ(λ), that is, H = H(φ). The function φ(λ) is known as the symbol of the Hankel operator H(φ). Of course the symbol is not unique because H(φ₁) = H(φ₂) if (and only if) φ₁ − φ₂ ∈ H^∞₋(ℝ) (the space of bounded analytic functions in the lower half-plane). Of course, the operator H is bounded.

In the space ℓ²(ℤ₊) of sequences ξ = (ξ₀, ξ₁, …), a Hankel operator G = G(κ) is defined via its quadratic form

$$(G\xi,\xi)=\sum_{n,m=0}^{\infty}\kappa_{n+m}\xi_m\bar\xi_n,\qquad\kappa=(\kappa_0,\kappa_1,\dots).\tag{4.3}$$

It is first considered on vectors ξ with only a finite number of non-zero components, and it is supposed that

$$\Big|\sum_{n,m=0}^{\infty}\kappa_{n+m}\xi_m\bar\xi_n\Big|\le C\|\xi\|^2.\tag{4.4}$$

Then there exists a bounded operator G such that relation (4.3) holds. We note that condition (4.4) directly implies that κ ∈ ℓ²(ℤ₊). Indeed, passing from the quadratic form for ξ to the sesquilinear form for ξ, η and choosing η = (1, 0, 0, …), we see that, for all ξ,
$$\Big|\sum_{n=0}^{\infty}\kappa_n\xi_n\Big|\le C\|\xi\|,$$

whence κ ∈ ℓ²(ℤ₊).

Finally, in the space L²(ℝ₊), a Hankel operator H is defined via its quadratic form

$$(Hf,f)=\langle h,\bar f\star f\rangle\tag{4.5}$$

where f ∈ C₀^∞(ℝ₊), f̄ ⋆ f is the Laplace convolution (2.1) (note that f̄ ⋆ f ∈ C₀^∞(ℝ₊)) and h is a distribution in C₀^∞(ℝ₊)′. If

$$|\langle h,\bar f\star f\rangle|\le C\|f\|^2,\tag{4.6}$$

then there exists a bounded operator H = H(h) such that relation (4.5) holds.

It is easy to see that the Hankel operators G(ω), H(φ), G(κ) and H(h) are self-adjoint if

$$\omega(\bar\mu)=\overline{\omega(\mu)},\qquad\varphi(-\lambda)=\overline{\varphi(\lambda)},\qquad\kappa_n=\bar\kappa_n\qquad\text{and}\qquad h(t)=\overline{h(t)},$$
respectively.

4.2. Let us establish one-to-one correspondences between the representations of Hankel operators in the spaces H²₊(𝕋), H²₊(ℝ), ℓ²(ℤ₊) and L²(ℝ₊). Let us introduce the notation 𝒜(H) for the linear set of all bounded Hankel operators acting in one of these four Hilbert spaces H. It is easy to see that A* ∈ 𝒜(H) together with A and that 𝒜(H) is a closed set in the weak operator topology.

Recall that the function

$$\zeta=\frac{z-i}{z+i}\tag{4.7}$$

determines a conformal mapping z ↦ ζ of the upper half-plane onto the unit disc. The unitary operator U : H²₊(𝕋) → H²₊(ℝ) corresponding to this mapping is defined by the equality

$$(Uu)(\lambda)=\pi^{-1/2}(\lambda+i)^{-1}u\Big(\frac{\lambda-i}{\lambda+i}\Big).\tag{4.8}$$
Making the change of variables (4.7) in (4.1) and taking into account (4.2), we see that

$$G(\omega)\in\mathcal A\big(H^2_+(\mathbb T)\big)\iff H(\varphi)=UG(\omega)U^*\in\mathcal A\big(H^2_+(\mathbb R)\big)$$

if the symbols ω and φ are linked by the formula

$$\varphi(\lambda)=-\frac{\lambda-i}{\lambda+i}\,\omega\Big(\frac{\lambda-i}{\lambda+i}\Big).\tag{4.9}$$
The unitary mapping F : H²₊(𝕋) → ℓ²(ℤ₊) corresponds to expanding a function in the Fourier series:

$$\xi_n=(Fu)_n=\int_{\mathbb T}u(\mu)\mu^{-n}\,dm(\mu).$$

Conversely, for a sequence ξ = (ξ₀, ξ₁, …), we have

$$u(\mu)=\big(F^*\xi\big)(\mu)=\sum_{n=0}^{\infty}\xi_n\mu^n.$$

Substituting this expansion into (4.1), we see that

$$\big(G(\omega)u,u\big)=\sum_{n,m=0}^{\infty}\kappa_{n+m}\xi_m\bar\xi_n=\big(G(\kappa)\xi,\xi\big)\tag{4.10}$$

if

$$\kappa_n=(F\omega)_n,\qquad n\in\mathbb Z_+.\tag{4.11}$$
If ω ∈ L^∞(𝕋), then expression (4.10) satisfies estimate (4.4), and hence the Hankel operator G(κ) defined by relation (4.3) in the space ℓ²(ℤ₊) is bounded. Conversely, according to the Nehari theorem (see the original paper [5], or the books [6], Chapter 1, §1, or [7], Chapter 1, §2), under assumption (4.4) there exists a function ω ∈ L^∞(𝕋) such that equalities (4.11) hold. The corresponding Hankel operators are linked by the relation G(ω) = F*G(κ)F. Thus,

$$G(\omega)\in\mathcal A\big(H^2_+(\mathbb T)\big)\iff G(\kappa)=FG(\omega)F^*\in\mathcal A\big(\ell^2(\mathbb Z_+)\big).$$
For a function φ ∈ L^∞(ℝ), put h = (2π)^{−1/2}Φφ. Passing to the Fourier transforms, we see that, for all f ∈ C₀^∞(ℝ₊), the identity

$$\big(H(h)f,f\big)=\big(H(\varphi)w,w\big),\qquad w=\Phi^*f,\tag{4.12}$$

holds. Therefore the Hankel operator H(h) = ΦH(φ)Φ* is bounded together with the Hankel operator H(φ). Conversely, suppose that h ∈ C₀^∞(ℝ₊)′ and that estimate (4.6) holds. Then, according to the continuous version of the Nehari theorem (see Theorem 1.8.1 in the book [6]), there exists a function φ ∈ L^∞(ℝ) such that

$$h=(2\pi)^{-1/2}\Phi\varphi\tag{4.13}$$

or, more precisely, ⟨h, g⟩ = (2π)^{−1/2}⟨φ, Φ*g⟩ for all g ∈ C₀^∞(ℝ₊). This implies the identity (4.12), so that

$$H(\varphi)\in\mathcal A\big(H^2_+(\mathbb R)\big)\iff H(h)=\Phi H(\varphi)\Phi^*\in\mathcal A\big(L^2(\mathbb R_+)\big).$$
Finally, we note that the representations in the spaces ℓ²(ℤ₊) and L²(ℝ₊) can be directly connected by the unitary operator L constructed in terms of the Laguerre functions, but we do not need this construction here.

The relations between the different representations can be summarized by the following diagrams:

$$\begin{array}{ccc}H^2_+(\mathbb T)&\xrightarrow{\;U\;}&H^2_+(\mathbb R)\\{\scriptstyle F}\big\downarrow&&\big\downarrow{\scriptstyle\Phi}\\\ell^2(\mathbb Z_+)&\xrightarrow{\;L\;}&L^2(\mathbb R_+)\end{array}\qquad\qquad\begin{array}{ccc}u(\mu)&\longrightarrow&w(\lambda)=(Uu)(\lambda)\\\big\downarrow&&\big\downarrow\\\xi_n=(Fu)_n&\longrightarrow&f(t)=(\Phi w)(t)\end{array}\tag{4.14}$$

and

$$\begin{array}{ccc}G&\longrightarrow&H=UGU^*\\\big\downarrow&&\big\downarrow\\G=FGF^*&\longrightarrow&H=\Phi H\Phi^*\end{array}\qquad\qquad\begin{array}{ccc}\omega(\mu)&\longrightarrow&\varphi(\lambda)\\\big\downarrow&&\big\downarrow\\\kappa_n&\longrightarrow&h(t)\end{array}\tag{4.15}$$

Of course, the unitary transformations F, U, Φ and L realizing the isomorphisms in (4.14) are not unique. We can compose each of them with an automorphism of the corresponding set of Hankel operators 𝒜(H), where H is one of the spaces H²₊(𝕋), H²₊(ℝ), ℓ²(ℤ₊) or L²(ℝ₊). The group 𝒢(H) of automorphisms of the set 𝒜(H) will be described in Appendix A.

5. Finite rank Hankel operators in the spaces H²₊(ℝ), H²₊(𝕋) and ℓ²(ℤ₊)

Here we reformulate Theorems 1.1 and 1.3 in terms of Hankel operators acting in the Hardy spaces H²₊(ℝ) and H²₊(𝕋) of analytic functions and in the space of sequences ℓ²(ℤ₊). We proceed from the relations between the various representations described in Section 4. Now we have to specify diagrams (4.15) for finite rank Hankel operators.

5.1. Originally, the Kronecker theorem was formulated in the space ℓ²(ℤ₊) (see the paper [3] or the book [6], Theorem 3.1 in Chapter 1). It states that a Hankel operator G = G(κ) has finite rank if and only if the function
$$\omega(\zeta)=\sum_{n=0}^{\infty}\kappa_n\zeta^n\tag{5.1}$$

is rational and its poles lie outside of the unit disc. Moreover, if ω(ζ) = P(ζ)Q(ζ)^{−1}, where the polynomials P(ζ) and Q(ζ) are in their lowest terms, then

$$\operatorname{rank}G=\max\{\deg\mathrm P+1,\deg\mathrm Q\}.$$
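The Kronecker theorem lends itself to a simple numerical illustration: truncating the matrix (κ_{n+m}) of a rational symbol and computing its numerical rank reproduces max{deg P + 1, deg Q}. The sketch below is our own finite-section check (the symbols ω(ζ) = (1 − qζ)^{−1} and (1 − qζ)^{−2} with q = 1/2, whose poles ζ = 2 lie outside the unit disc, are illustrative choices, not examples from the paper).

```python
import numpy as np

def hankel_rank(kappa, tol=1e-8):
    """Numerical rank of the N x N truncation of the Hankel matrix (kappa_{n+m})."""
    N = (len(kappa) + 1) // 2
    H = kappa[np.add.outer(np.arange(N), np.arange(N))]  # H[n, m] = kappa[n + m]
    s = np.linalg.svd(H, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

q, n = 0.5, np.arange(41)
r1 = hankel_rank(q ** n)             # omega = 1/(1 - q*zeta): max{1, 1} = 1
r2 = hankel_rank((n + 1) * q ** n)   # omega = 1/(1 - q*zeta)^2: max{1, 2} = 2
print(r1, r2)                        # 1 2
```

Since the coefficients decay geometrically, the finite sections already carry the full rank, and the remaining singular values sit at round-off level.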
It follows that, for some numbers γ_m ∈ ℂ, |γ_m| > 1, and K_m ∈ ℤ₊, m = 2, …, M, function (5.1) admits the representation

$$\omega(\zeta)=R_1(\zeta)+\sum_{m=2}^{M}R_m(\zeta)(\zeta-\gamma_m)^{-K_m-1},\tag{5.2}$$

where all R_m(ζ), with R_m(γ_m) ≠ 0, are polynomials and deg R_m ≤ K_m for m ≥ 2. Of course it is possible that M = 1; then the sum in the right-hand side of (5.2) is absent. Note that

$$\operatorname{rank}G=\deg R_1+\sum_{m=2}^{M}K_m+M.$$
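This rank formula can be checked in the same finite-section spirit. The example below is ours: ω(ζ) = c + (ζ − γ)^{−2}, i.e. R₁(ζ) = c, M = 2, K₂ = 1, for which the formula predicts rank deg R₁ + K₂ + M = 0 + 1 + 2 = 3.

```python
import numpy as np

gamma, c = 2.0, 1.0
n = np.arange(41)
# Taylor coefficients: 1/(zeta - gamma)^2 = sum_n (n + 1) gamma^(-n-2) zeta^n
kappa = (n + 1) * gamma ** (-n - 2.0)
kappa[0] += c                        # the polynomial part R_1(zeta) = c
N = (len(kappa) + 1) // 2
H = kappa[np.add.outer(np.arange(N), np.arange(N))]
s = np.linalg.svd(H, compute_uv=False)
rank = int(np.sum(s > 1e-8 * s[0]))
print(rank)                          # deg R1 + K2 + M = 0 + 1 + 2 = 3
```

Equivalently, in lowest terms ω = (c(ζ − γ)² + 1)/(ζ − γ)² with deg P = 2, deg Q = 2, and max{deg P + 1, deg Q} = 3 agrees with the computed rank.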
The Kronecker theorem also implies that if G = G(ω) is a finite rank Hankel operator in the space H²₊(𝕋), then its symbol can be chosen in the form (5.2). A Hankel operator H = H(φ) in the space H²₊(ℝ) has finite rank if and only if the operator G = U*HU has finite rank. The symbols ω(ζ) and φ(z) of these operators are linked by formula (4.9). It follows that

$$\varphi(z)=\sum_{m=1}^{M}Q_m(z)(\alpha_m-iz)^{-K_m-1},\qquad\operatorname{Re}\alpha_m>0,\quad Q_m(-i\alpha_m)\neq0,\tag{5.3}$$

where Q_m(z) are polynomials and deg Q_m ≤ K_m. We also have

$$\alpha_m=\frac{\gamma_m+1}{\gamma_m-1}$$

and

$$Q_1(-i)=(-1)^{K_1}2^{K_1+1}\frac1{K_1!}R_1^{(K_1)},\qquad K_1=\deg R_1,\quad\alpha_1=1,$$
$$Q_m(-i\alpha_m)=-2^{K_m+1}\gamma_m(\gamma_m-1)^{-2K_m-2}R_m(\gamma_m),\qquad\alpha_m\neq1.\tag{5.4}$$
Recall that the operator H(h) = ΦH(φ)Φ* acts by formula (1.1) where the kernel h is given by relation (4.13). Since

$$k!\int_{-\infty}^{\infty}(\alpha-iz)^{-k-1}e^{-izt}\,dz=2\pi t^ke^{-\alpha t},\qquad\operatorname{Re}\alpha>0,$$

it follows from equality (5.3) that h satisfies relation (1.2) where P_m(t) is a polynomial of degree K_m and

$$P_m^{(K_m)}=Q_m(-i\alpha_m).\tag{5.5}$$
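The Fourier integral used above can be confirmed by direct quadrature; the sketch below is ours, with arbitrary parameter choices α = 1.3, k = 2, t = 0.7, and a large truncated grid standing in for the line integral.

```python
import math
import numpy as np

alpha, k, t = 1.3, 2, 0.7
cutoff, num = 2000.0, 400001
grid = np.linspace(-cutoff, cutoff, num)
dz = grid[1] - grid[0]
integrand = np.exp(-1j * grid * t) / (alpha - 1j * grid) ** (k + 1)
# trapezoidal rule; the neglected tail |z| > 2000 is O(cutoff^-2) ~ 1e-7
integral = (integrand.sum() - 0.5 * (integrand[0] + integrand[-1])) * dz
lhs = math.factorial(k) * integral
rhs = 2 * math.pi * t ** k * math.exp(-alpha * t)
print(abs(lhs - rhs))                # small: quadrature + truncation error only
```

The imaginary part of the computed integral vanishes up to round-off, as it must since the integrand pairs conjugate values at ±z.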
Thus, as stated in the Introduction, all finite rank Hankel operators in the space L²(ℝ₊) are given by formulas (1.1), (1.2).

Finally, the Kronecker theorem for Hankel operators G = G(κ) in the space ℓ²(ℤ₊) can be stated directly in terms of the elements κ_n. Indeed, expanding function (5.2) into the Fourier series and using relation (4.11), we find that

$$\kappa_n=\tau_n+\sum_{m=2}^{M}T_m(n)q_m^n,\qquad|q_m|<1,\tag{5.6}$$

where τ_n = 0 for n ≥ K₁ + 1 and T_m are polynomials of degree K_m. We note that q_m = γ_m^{−1} and

$$K_1!\,\tau_{K_1}=R_1^{(K_1)},\qquad T_m^{(K_m)}=(-1)^{K_m+1}q_m^{K_m+1}R_m(\gamma_m).\tag{5.7}$$
Let us now consider the self-adjoint case. If G = G* and the sum in (5.2) contains a term with γ_m, then necessarily it also contains the term with γ̄_m. We suppose that Im γ_m = 0 for m = 2, …, M₀ and Im γ_m < 0, γ_{M₁+m} = γ̄_m for m = M₀ + 1, …, M₀ + M₁. Then $\overline{R_m(\bar\zeta)}=R_m(\zeta)$ for m = 1, …, M₀ and $R_{M_1+m}(\zeta)=\overline{R_m(\bar\zeta)}$ for m = M₀ + 1, …, M₀ + M₁.

Similarly, if H = H* and the sum in (5.3) contains a term with α_m, then necessarily it also contains the term with ᾱ_m. We again suppose that Im α_m = 0 for m = 1, …, M₀ and Im α_m > 0, α_{M₁+m} = ᾱ_m for m = M₀ + 1, …, M₀ + M₁. Then $\overline{Q_m(\bar z)}=Q_m(-z)$ for m = 1, …, M₀ and $\overline{Q_{M_1+m}(\bar z)}=Q_m(-z)$ for m = M₀ + 1, …, M₀ + M₁.

Finally, if G = G*, then necessarily τ_n = τ̄_n, and if the sum in (5.6) contains a term T_m(n)q_m^n, then it also contains the term $\overline{T_m(n)}\,\bar q_m^{\,n}$. We suppose that Im q_m = 0 for m = 2, …, M₀, Im q_m > 0 for m = M₀ + 1, …, M₀ + M₁ and q_{M₁+m} = q̄_m for m = M₀ + 1, …, M₀ + M₁. The coefficients of the polynomials T_m for m = 2, …, M₀ are of course real.

5.2. Now we are in a position to reformulate Theorems 1.1 and 1.3 in the various representations of Hankel operators. Recall that the numbers p_m were defined by formula (1.3). Let us start with finite rank Hankel operators H in the space H²₊(ℝ). Given the one-to-one correspondence between Hankel operators with kernels (1.2) and symbols (5.3) and, in particular, equality (5.5), the following result is equivalent to Theorem 1.1.

Theorem 5.1. Let the symbol φ of a self-adjoint finite rank Hankel operator H in the space H²₊(ℝ) be given by formula (5.3) where Q_m(z) are polynomials of degree deg Q_m ≤ K_m. Let the numbers N±^{(m)} be defined by formula (1.4) where p_m = Q_m(−iα_m). Then the total numbers N±(H) of (strictly) positive and negative eigenvalues of the operator H are given by formula (1.5).
In particular (cf. Corollary 1.2), H ≥ 0 (H ≤ 0) if and only if all poles of its symbol lie on the imaginary axis, are simple, the real parts of the residues are equal to zero and their imaginary parts are positive (negative).

Note that, according to (4.13), the symbol of the Carleman operator can be chosen as φ₀(λ) = πi sgn λ, λ ∈ ℝ. Therefore the next result is a direct consequence of Theorem 1.3.

Theorem 5.2. Let H be the Hankel operator with symbol πi sgn λ + φ(λ) where φ(λ) is function (5.3). Then the total number N₋(H) of its negative eigenvalues is given by formula (1.5).

Quite similarly, given the one-to-one correspondence between Hankel operators with symbols (5.2) and (5.3) and, in particular, equalities (5.4), the following result is equivalent to Theorem 5.1.

Theorem 5.3. Let the symbol ω of a self-adjoint finite rank Hankel operator G in the space H²₊(𝕋) be given by formula (5.2) where R_m(ζ) are polynomials of degree deg R_m ≤ K_m. Let the numbers N±^{(m)} be defined by formula (1.4) where

$$p_1=R_1^{(K_1)}\qquad\text{and}\qquad p_m=-R_m(\gamma_m)\operatorname{sgn}\gamma_m\quad\text{if}\quad m=2,\dots,M_0.$$

Then the total numbers N±(G) of (strictly) positive and negative eigenvalues of the operator G are given by formula (1.5) (if R₁(ζ) = 0, then the first sum in (1.5) starts with m = 2). In particular (cf. Corollary 1.2), G ≥ 0 (G ≤ 0) if and only if all poles of its symbol lie on the real axis, are simple and the residues are positive (negative); moreover, it is required that deg R₁ = 0 and R₁ ≥ 0 (R₁ ≤ 0).

Theorem 5.2 can also be reformulated in an obvious way in terms of Hankel operators in the space H²₊(𝕋). Their symbols are functions of μ ∈ 𝕋.

Theorem 5.4. Let G be the Hankel operator with symbol πiμ^{−1} sgn Im μ + ω(μ) where ω(μ) is function (5.2). Then the total number N₋(G) of its negative eigenvalues is given by formula (1.5).

Finally, we use the one-to-one correspondence between Hankel operators with symbols (5.2) and with matrix elements (5.6) and, in particular, equalities (5.7). Therefore the following result is equivalent to Theorem 5.3.

Theorem 5.5. Let G be a self-adjoint finite rank Hankel operator in the space ℓ²(ℤ₊) with matrix elements (5.6) where τ_n = 0 for n > K₁, τ_{K₁} ≠ 0 and T_m are polynomials of degree K_m. Let the numbers N±^{(m)} be defined by formula (1.4) where
$$p_1=\tau_{K_1}\qquad\text{and}\qquad p_m=T_m^{(K_m)}\quad\text{if}\quad m=2,\dots,M_0.$$

Then the total numbers N±(G) of (strictly) positive and negative eigenvalues of the operator G are given by formula (1.5) (if τ_n = 0 for all n ≥ 0, then the first sum in (1.5) starts with m = 2). In particular (cf. Corollary 1.2), G ≥ 0 (G ≤ 0) if and only if

$$\kappa_n=t_1\delta_{n,0}+\sum_{m=2}^{M_0}t_mq_m^n,\qquad q_m\in(-1,1),$$

where all the numbers t₁, …, t_{M₀} are positive (negative).

Theorem 5.4 can also be reformulated in an obvious way in terms of operators in the space ℓ²(ℤ₊) if one takes into account that the matrix elements of the Carleman operator equal κ_n^{(0)} = 2(n + 1)^{−1} for n even and κ_n^{(0)} = 0 for n odd.

Theorem 5.6. Let G be the Hankel operator in the space ℓ²(ℤ₊) with matrix elements κ_n^{(0)} + κ_n where the numbers κ_n are defined by formula (5.6). Then the total number N₋(G) of its negative eigenvalues is given by formula (1.5).

Appendix A. The automorphism group of Hankel operators

A.1. Let H be one of the spaces H²₊(ℝ), L²(ℝ₊), H²₊(𝕋) or ℓ²(ℤ₊), and let 𝒜(H) be the set of all bounded Hankel operators in H (see Section 4). Our goal here is to describe the group 𝒢(H) of all automorphisms of the set 𝒜(H). By definition, a unitary operator U ∈ 𝒢(H) if and only if UAU* ∈ 𝒜(H) for all A ∈ 𝒜(H). Of course, for a Hankel operator A and an arbitrary unitary operator U, the operator UAU* is not necessarily Hankel. Hence the group 𝒢(H) is smaller than the group of all unitary operators. It turns out that this group admits a simple description.

As explained in Subsection 4.2, it is sufficient to describe 𝒢(H) for one of the spaces H²₊(ℝ), L²(ℝ₊), H²₊(𝕋) or ℓ²(ℤ₊). We choose H = H²₊(ℝ). Then the other groups are obtained by conjugation with the unitary transformations Φ, U* and F:

$$\mathcal G\big(L^2(\mathbb R_+)\big)=\Phi\,\mathcal G\big(H^2_+(\mathbb R)\big)\Phi^*,\qquad\mathcal G\big(H^2_+(\mathbb T)\big)=U^*\mathcal G\big(H^2_+(\mathbb R)\big)U,\qquad\mathcal G\big(\ell^2(\mathbb Z_+)\big)=F\,\mathcal G\big(H^2_+(\mathbb T)\big)F^*.$$

Let us define the dilation operators 𝒟_ρ, ρ > 0, in the space H²₊(ℝ):

$$(\mathcal D_\rho u)(\lambda)=\rho^{1/2}u(\rho\lambda).$$
Obviously, the operators 𝒟_ρ are unitary. Set

$$(\mathcal Iu)(\lambda)=i\lambda^{-1}u(-\lambda^{-1}).$$

Then ℐ : H²₊(ℝ) → H²₊(ℝ), ℐ is an involution, i.e. ℐ² = Id, and ℐ is also unitary. It is easy to see that

$$\mathcal D_\rho H(\varphi)\mathcal D_\rho^*=H(\varphi_\rho)\qquad\text{and}\qquad\mathcal IH(\varphi)\mathcal I^*=H(\tilde\varphi)\tag{A.1}$$

where φ_ρ(λ) = φ(ρλ) and φ̃(λ) = φ(−λ^{−1}). In particular, 𝒟_ρ ∈ 𝒢(H²₊(ℝ)) and ℐ ∈ 𝒢(H²₊(ℝ)). It turns out that the group 𝒢(H²₊(ℝ)) is exhausted by these transformations. Let us state the precise result.

Theorem A.1. A unitary operator 𝒰 ∈ 𝒢(H²₊(ℝ)) if and only if it has one of the two forms: 𝒰 = θ𝒟_ρ or 𝒰 = θ𝒟_ρℐ for some θ ∈ 𝕋 and ρ > 0.

Actually, we will prove a stronger statement.

Theorem A.2. Let ℋ_α be the Hankel operator in the space H²₊(ℝ) with symbol φ_α(λ) = 2α(α − iλ)^{−1}. Suppose that an operator 𝒰 is unitary and 𝒰ℋ_α𝒰* ∈ 𝒜(H²₊(ℝ)) for all α > 0. Then either 𝒰 = θ𝒟_ρ or 𝒰 = θ𝒟_ρℐ for some θ ∈ 𝕋 and ρ > 0.

Proof. Set U = Φ𝒰Φ* and H_α = Φℋ_αΦ*. It follows from formula (4.13) that H_α is the Hankel operator in the space L²(ℝ₊) with kernel h_α(t) = 2αe^{−αt}, that is, H_αf = (f, ψ_α)ψ_α where

$$\psi_\alpha(t)=\sqrt{2\alpha}\,e^{-\alpha t}.\tag{A.2}$$

Since 𝒰ℋ_α𝒰* ∈ 𝒜(H²₊(ℝ)), the operator UH_αU* ∈ 𝒜(L²(ℝ₊)). It has rank one, and its non-zero eigenvalue equals 1. By the Kronecker theorem, all rank one Hankel operators have kernels pe^{−βt} for some p, β ∈ ℂ with Re β > 0. They are self-adjoint and have the eigenvalue 1 if and only if β > 0 and p = 2β. Therefore UH_αU* = H_β and hence (f, Uψ_α)Uψ_α = (f, ψ_β)ψ_β for all f ∈ L²(ℝ₊) and some β = v(α). It follows that

$$U\psi_\alpha=\theta(\alpha)\psi_{v(\alpha)},\qquad|\theta(\alpha)|=1,\quad v(\alpha)>0.\tag{A.3}$$
We have to find the functions θ(α) and v(α). Let us take the unitarity of U into account. Since (Uψ_{α₁}, Uψ_{α₂}) = (ψ_{α₁}, ψ_{α₂}), relation (A.3) implies that

$$\theta(\alpha_1)\overline{\theta(\alpha_2)}\,(\psi_{v(\alpha_1)},\psi_{v(\alpha_2)})=(\psi_{\alpha_1},\psi_{\alpha_2}),\qquad\forall\alpha_1,\alpha_2>0.\tag{A.4}$$

Note that ψ_α(t) > 0 for all α > 0 and t > 0, and hence θ(α₁)\overline{θ(α₂)} > 0. Using also that |θ(α)| = 1, we see that θ(α₁) = θ(α₂); thus θ(α) =: θ does not depend on α. Returning to (A.4) and using the explicit expression (A.2) for ψ_α(t), we obtain the equation

$$\frac{\sqrt{v(\alpha_1)v(\alpha_2)}}{v(\alpha_1)+v(\alpha_2)}=\frac{\sqrt{\alpha_1\alpha_2}}{\alpha_1+\alpha_2}.$$

Setting here α₁ = 1, α₂ = α, we get the equation for x = v(α)/v(1):

$$\frac{\sqrt x}{x+1}=\frac{\sqrt\alpha}{\alpha+1}.$$

It has the two solutions x = α and x = α^{−1}, so that

$$v(\alpha)=\rho^{-1}\alpha\qquad\text{or}\qquad v(\alpha)=(\rho\alpha)^{-1}$$

where ρ = v(1)^{−1}. It now follows from (A.3) that

$$U\psi_\alpha=\theta\psi_{\rho^{-1}\alpha}\qquad\text{or}\qquad U\psi_\alpha=\theta\psi_{(\rho\alpha)^{-1}}.$$

Since Φφ_α = 2√(πα) ψ_α, these equalities can be rewritten as

$$\mathcal U\varphi_\alpha=\theta\sqrt\rho\,\varphi_{\rho^{-1}\alpha}\qquad\text{or}\qquad\mathcal U\varphi_\alpha=\theta\alpha\sqrt\rho\,\varphi_{(\rho\alpha)^{-1}}.\tag{A.5}$$

Note that

$$\mathcal D_\rho\varphi_\alpha=\sqrt\rho\,\varphi_{\rho^{-1}\alpha}\qquad\text{and}\qquad\mathcal I\varphi_\alpha=\alpha\varphi_{\alpha^{-1}}.$$

Hence (A.5) is equivalent to the equalities

$$\mathcal U\varphi_\alpha=\theta\mathcal D_\rho\varphi_\alpha\qquad\text{or}\qquad\mathcal U\varphi_\alpha=\theta\mathcal D_\rho\mathcal I\varphi_\alpha,\qquad\forall\alpha>0.$$

Thus 𝒰f = θ𝒟_ρf or 𝒰f = θ𝒟_ρℐf on linear combinations of the functions φ_α where α > 0 is arbitrary. To extend these relations to all f ∈ H²₊(ℝ), we have to show that such linear combinations are dense in H²₊(ℝ). Equivalently, we can show that linear combinations of the functions ψ_α defined by formula (A.2) are dense in L²(ℝ₊). Suppose that

$$(Lf)(\alpha):=\int_0^\infty e^{-\alpha t}f(t)\,dt=0$$
for some f ∈ L²(ℝ₊) and all α > 0. Then the Laplace transform L has the eigenvalue zero. However this bounded self-adjoint operator in the space L²(ℝ₊) has purely absolutely continuous spectrum (see, e.g., [8]). □

A.2. Let us now describe the group 𝒢(H) in the other representations of the space H. In view of Theorem A.1, to that end we only have to calculate the operators

$$W_\rho=U^*\mathcal D_\rho U,\qquad D_\rho=\Phi\mathcal D_\rho\Phi^*,\qquad\mathbf W_\rho=FW_\rho F^*$$

and

$$\mathcal J=U^*\mathcal IU,\qquad I=\Phi\mathcal I\Phi^*,\qquad J=F\mathcal JF^*,$$

acting in the spaces H²₊(𝕋), L²(ℝ₊) and ℓ²(ℤ₊), respectively. According to formula (4.8) we have

$$(W_\rho f)(\mu)=\frac{2\rho^{1/2}}{\rho+1}\,\frac1{\tau(\rho)\mu+1}\,f\Big(\frac{\mu+\tau(\rho)}{\tau(\rho)\mu+1}\Big),\qquad\tau(\rho)=\frac{\rho-1}{\rho+1}\in(-1,1),$$

and

$$(\mathcal Jf)(\mu)=f(-\mu).\tag{A.6}$$
The role of (A.1) is now played by the relations

$$W_\rho G(\omega)W_\rho^*=G(\omega_\rho)\qquad\text{and}\qquad\mathcal JG(\omega)\mathcal J^*=G(\tilde\omega)$$

where

$$\omega_\rho(\mu)=\omega\Big(\frac{\mu+\tau(\rho)}{\tau(\rho)\mu+1}\Big)\qquad\text{and}\qquad\tilde\omega(\mu)=\omega(-\mu).$$
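In the Fourier-coefficient basis the parity operator 𝒥 becomes multiplication of ξ_n by (−1)^n, so conjugating a Hankel matrix by it flips the sign of κ_{n+m} according to the parity of n + m and preserves the Hankel structure. A finite-truncation check of this (our own toy setup, with random coefficients):

```python
import numpy as np

N = 8
rng = np.random.default_rng(0)
kappa = rng.standard_normal(2 * N - 1)
idx = np.add.outer(np.arange(N), np.arange(N))
K = kappa[idx]                          # truncated Hankel matrix (kappa_{n+m})
D = np.diag((-1.0) ** np.arange(N))     # parity in the Fourier-coefficient basis
K2 = D @ K @ D                          # D = D* = D^{-1}
kappa2 = (-1.0) ** np.arange(2 * N - 1) * kappa
print(np.allclose(K2, kappa2[idx]))     # True: K2 is Hankel with (-1)^n kappa_n
```

The identity is exact: (−1)^n(−1)^m κ_{n+m} = (−1)^{n+m}κ_{n+m} depends on n + m only, which is precisely the Hankel condition.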
The operator D_ρ is again a dilation, (D_ρf)(t) = ρ^{−1/2}f(ρ^{−1}t), and D_ρH(h)D_ρ* = H(h_ρ) where h_ρ(t) = ρ^{−1}h(ρ^{−1}t). Apparently there is no simple formula for the operator I.

It follows from (A.6) that (Jξ)_n = (−1)^nξ_n, and JG(κ)J* = G(κ̃) where κ̃_n = (−1)^nκ_n. On the contrary, there seems to be no explicit expression for the operators 𝐖_ρ.

References

[1] K. Hoffman, Banach Spaces of Analytic Functions, Prentice-Hall, Englewood Cliffs, N.J., 1962.
[2] J.S. Howland, Spectral theory of operators of Hankel type. I, II, Indiana Univ. Math. J. 41 (1992) 409–426 and 427–434.
[3] L. Kronecker, Zur Theorie der Elimination einer Variablen aus zwei algebraischen Gleichungen, Monatsber. Königl. Preuss. Akad. Wiss. Berlin (1881) 535–600.
[4] A.V. Megretskii, V.V. Peller, S.R. Treil, The inverse spectral problem for self-adjoint Hankel operators, Acta Math. 174 (1995) 241–309.
[5] Z. Nehari, On bounded bilinear forms, Ann. of Math. 65 (1957) 153–162.
[6] V.V. Peller, Hankel Operators and Their Applications, Springer-Verlag, 2002.
[7] S.C. Power, Hankel Operators on Hilbert Space, Pitman, Boston, 1982.
[8] D.R. Yafaev, The discrete spectrum in the singular Friedrichs model, in: Adv. Math. Sci., vol. 189, Amer. Math. Soc., 1999, pp. 255–274.
[9] D.R. Yafaev, Spectral and scattering theory for perturbations of the Carleman operator, St. Petersburg Math. J. 25 (2) (2014) 339–359.
[10] D.R. Yafaev, Criteria for Hankel operators to be sign-definite, Anal. PDE, in press, arXiv:1303.4040.