Characterization of self-adjoint ordinary differential operators

Mathematical and Computer Modelling 54 (2011) 659–672


M.A. El-Gebeily (a), Donal O’Regan (b), Ravi Agarwal (a,c,∗)

(a) Department of Mathematics and Statistics, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia
(b) Department of Mathematics, National University of Ireland, Galway, Ireland
(c) Department of Mathematical Sciences, Florida Institute of Technology, 150 West University Blvd, Melbourne, FL 32901-6975, United States

Abstract

Symmetric differential expressions ℓ of order n = 2k with real-valued coefficients give rise to self-adjoint operators in the space of weighted square integrable functions. Characterization theorems exist in the literature that describe such self-adjoint operators. All such characterizations begin by constructing the maximal domain of definition of the expression ℓ. The Glazman–Krein–Naimark theorem constructs the maximal domain in terms of eigenfunctions corresponding to a nonreal parameter λ. Representations in terms of certain functions related to a real parameter λ can also be found in the literature. In this paper we construct the maximal domain from two complementary self-adjoint realizations of ℓ. One operator is assumed to be known and the other is computed explicitly. From these two operators we explicitly obtain all other self-adjoint operators associated with ℓ. A special class of operators associated with ℓ is what we call Type I operators. They arise in connection with a certain bilinear form that results from the weak formulation of the expression ℓ. Depending on the deficiency index of ℓ and the properties of the bilinear form, we can have two complementary self-adjoint operators (two Type I operators) and, as it turns out, one of them is the celebrated Friedrichs extension. The other operator appears to be new. As in the general case, using these two operators we give an explicit characterization of all other operators of the same Type I.

Article history: Received 2 February 2011. Accepted 3 March 2011.

Keywords: Differential operators; Self-adjoint operators; Deficiency index; Friedrichs extension; Bilinear form

1. Introduction

Given a positive weight function w defined on an interval J = (a, b), −∞ ≤ a < b ≤ ∞, and a formally self-adjoint differential expression ℓ, our goal is to characterize all self-adjoint realizations of the equation

ℓy = λwy

(1)

in the weighted Hilbert space H := L²_w(J). The differential expression ℓ is of order n = 2k and, following the works of Everitt and Zettl (see, e.g., [1,2]), we define it as follows. Let

Z_n(J) := {Q = (q_rs)_{r,s=1}^n : q_rs : J → ℝ; q_{r,r+1} ≠ 0 a.e. on J, q_{r,r+1}^{−1} ∈ L_loc(J), 1 ≤ r ≤ n − 1; q_rs = 0 a.e. on J, 2 ≤ r < s ≤ n; q_rs ∈ L_loc(J), s ≠ r + 1, 1 ≤ r ≤ n}.

Given a matrix Q ∈ Z_n(J) we let

V_0 := {y : J → ℂ : y is measurable},  y^[0] := y,



Corresponding author at: Department of Mathematics and Statistics, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia. E-mail address: [email protected] (R. Agarwal).

0895-7177/$ – see front matter © 2011 Elsevier Ltd. All rights reserved. doi:10.1016/j.mcm.2011.03.009


and, for r = 1, 2, …, n, we let

V_r := {y ∈ V_{r−1} : y^[r−1] ∈ AC_loc(J)},

y^[r] := (−1)^r q_{r,r+1}^{−1} ( (y^[r−1])′ − Σ_{s=1}^{r} q_rs y^[s−1] ),  y ∈ V_r,

where q_{n,n+1} := 1 and AC_loc(J) denotes the set of complex-valued functions which are absolutely continuous on every compact subinterval of J. The expression y^[r] is called the r-th quasiderivative of y. We finally set

ℓy := y^[n].  (2)

The expression ℓ is called symmetric if the matrix Q satisfies

Q = −E_n^{−1} Q∗ E_n,  (3)

where E_n is the symplectic matrix of order n,

E_n := ((−1)^r δ_{r,n+1−s})_{r,s=1}^n.
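As a small numerical sanity check (ours, not from the paper), one can build E_n and test the symmetry condition (3) directly. The sample matrix Q below is illustrative: for n = 2 with real coefficients, condition (3) reduces to q_22 = −q_11.

```python
import numpy as np

def symplectic(n):
    """E_n with entries (E_n)_{r,s} = (-1)^r * delta_{r, n+1-s} (1-based r, s)."""
    E = np.zeros((n, n))
    for r in range(1, n + 1):
        E[r - 1, n - r] = (-1.0) ** r   # the only nonzero column is s = n + 1 - r
    return E

n = 4
E = symplectic(n)
# For even n, E_n is orthogonal and skew-symmetric, so E_n^{-1} = E_n^T = -E_n.
assert np.allclose(E @ E.T, np.eye(n))
assert np.allclose(E.T, -E)

# Symmetry condition (3), Q = -E_n^{-1} Q* E_n, for a sample real Q with n = 2:
E2 = symplectic(2)
Q = np.array([[1.0, 2.0],
              [3.0, -1.0]])           # q_22 = -q_11, so the condition holds
assert np.allclose(Q, -np.linalg.inv(E2) @ Q.conj().T @ E2)
```

The check confirms that for even order the condition (3) can equivalently be written Q = E_n Q∗ E_n.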

The expression ℓ generates in H two special differential operators: the minimal operator L0 with domain D0 and the maximal operator L with domain D. Any self-adjoint realization  L of ℓ in H satisfies L0 ⊆  L ⊆ L. A characterization of all self-adjoint realizations is given by what is now known as the GKN (Glazman–Krein–Naimark) Theorem [3].

Theorem 1 (The GKN Theorem). If the deficiency index of L0 is d, then  L is a self-adjoint realization of ℓ in H if and only if

(a) there exist d functions v1, v2, …, vd in D that are linearly independent modulo D0,
(b) [v_i, v_j]_a^b = 0, i, j = 1, 2, …, d, and
(c) D( L) = {y ∈ D : [y, v_j]_a^b = 0, j = 1, 2, …, d}.

Here [·,·]_a^b = [·,·](b) − [·,·](a), and [·,·](t) is the Lagrange bracket associated with ℓ (see Section 2). If the expression ℓ is regular, this characterization reduces to the more familiar way of describing self-adjoint operators in terms of the boundary conditions to be satisfied by functions in D( L), namely

A (y(a), …, y^[n−1](a))^t + B (y(b), …, y^[n−1](b))^t = 0,  (4)

where A and B are n × n complex matrices satisfying

rank[A : B] = n,  (5)

A E_n A∗ = B E_n B∗.  (6)
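Conditions (5)–(6) are easy to test numerically for concrete boundary matrices. The sketch below (our illustration for the second-order case n = 2, with the quasiderivative y^[1] playing the role of y′) verifies them for Dirichlet and for periodic-type conditions.

```python
import numpy as np

# Symplectic matrix E_2 = ((-1)^r δ_{r,3-s}) for n = 2.
E = np.array([[0.0, -1.0],
              [1.0, 0.0]])

# Dirichlet conditions y(a) = y(b) = 0, written as A·(y(a), y^[1](a))^t + B·(y(b), y^[1](b))^t = 0.
A = np.array([[1.0, 0.0],
              [0.0, 0.0]])
B = np.array([[0.0, 0.0],
              [1.0, 0.0]])
assert np.linalg.matrix_rank(np.hstack([A, B])) == 2        # condition (5): rank [A : B] = n
assert np.allclose(A @ E @ A.conj().T, B @ E @ B.conj().T)  # condition (6): A E_n A* = B E_n B*

# Periodic-type conditions y(a) = y(b), y^[1](a) = y^[1](b): A = I, B = -I,
# for which A E A* = E = B E B*.
A2, B2 = np.eye(2), -np.eye(2)
assert np.linalg.matrix_rank(np.hstack([A2, B2])) == 2
assert np.allclose(A2 @ E @ A2.T, B2 @ E @ B2.T)
```

Both families of conditions therefore define self-adjoint realizations in the regular case.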

If either one or both endpoints are singular, this characterization is not possible, since functions in D( L) and their quasiderivatives do not, in general, have finite limits at the singular endpoint(s). The characterization (4)–(6) was extended in [2] to the case where the expression ℓ has one regular and one singular endpoint:

Theorem 2. Suppose that the endpoint a is regular and the endpoint b is singular. Assume that the deficiency index of L0 on J is d. Let m = 2d − 2k and assume that for some λ ∈ ℝ, (1) has d linearly independent solutions lying in H. Then there exist m linearly independent solutions v1, …, vm ∈ D such that the matrix ([v_i, v_j](b)) is nonsingular. Moreover, a subdomain  D ⊂ D is the domain of a self-adjoint extension  L of L0 if and only if there exist a complex d × n matrix A and a complex d × m matrix B such that

(a) rank[A : B] = d,
(b) A E_n A∗ = B E_m B∗,
(c)  D = {y ∈ D : A Y(a) + B [y, V](b) = 0}, where

Y(a) = (y(a), …, y^[n−1](a))^t,  [y, V](b) = ([y, v1](b), …, [y, vm](b))^t.

The Lagrange brackets [y, vi ](b), i = 1, . . . , m exist as limits and Ej is the symplectic matrix of order j.


If both endpoints are singular, one can apply the above theorem on a subinterval (a, c) of J and then on (c, b). If dj denotes the deficiency index of (1) on (a, c) and on (c, b), respectively, j = 1, 2, and if for some λ = λ1 ∈ ℝ (1) has d1 linearly independent solutions on (a, c) which lie in L²_w(a, c), and for some λ = λ2 ∈ ℝ (1) has d2 linearly independent solutions on (c, b) which lie in L²_w(c, b), then we can find m1 linearly independent solutions u1, …, u_{m1} ∈ D(a,c) (LC solutions; see Section 2) and m2 linearly independent solutions v1, …, v_{m2} ∈ D(c,b), where mj = 2dj − 2k, j = 1, 2. For later reference, we state the result for two singular endpoints in a theorem [4].

Theorem 3. Let m1 and m2 be as defined above. A linear submanifold  D ⊂ D is the domain of a self-adjoint extension  L of L0 if and only if there exist a matrix A ∈ C^{d×m1} and a matrix B ∈ C^{d×m2} such that the following three conditions hold:

(a) rank[A : B] = d,
(b) A E_{m1} A∗ = B E_{m2} B∗,
(c) D( L) = {y ∈ D : A [y, U](a) + B [y, V](b) = 0}, where

[y, U](a) = ([y, u1](a), …, [y, u_{m1}](a))^t,  [y, V](b) = ([y, v1](b), …, [y, v_{m2}](b))^t.  (7)

The Lagrange brackets [·,·] in (7) have finite limits. Theorem 3 is also an extension of a result in [5]. Both results make use of the so-called Naimark Patching Lemma [1] to construct the maximal domain D from the minimal domain D0.

Lemma 4 (Naimark Patching Lemma). Let Q ∈ Z_n(J) and assume that ℓ is regular on J. Suppose that w ∈ L(J), w > 0 on J. Let α0, …, α_{n−1}, β0, …, β_{n−1} ∈ ℂ. Then there is a function y ∈ D such that

y^[r](a) = α_r,  y^[r](b) = β_r  (r = 0, …, n − 1).
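In the regular case, a patching function as in Lemma 4 can be realized concretely by a polynomial of degree 2n − 1: its 2n coefficients match the n prescribed values at each endpoint. The sketch below is our illustration only, with ordinary derivatives standing in for quasiderivatives.

```python
import numpy as np

def patch(a, b, alphas, betas):
    """Polynomial y of degree 2n-1 with y^(r)(a) = alphas[r], y^(r)(b) = betas[r]."""
    n = len(alphas)
    M = np.zeros((2 * n, 2 * n))
    rhs = np.concatenate([alphas, betas])
    for r in range(n):                            # rows for the r-th derivative
        for j in range(r, 2 * n):
            c = np.prod(np.arange(j, j - r, -1))  # falling factorial j!/(j-r)!
            M[r, j] = c * a ** (j - r)
            M[n + r, j] = c * b ** (j - r)
    return np.linalg.solve(M, rhs)                # coefficients, lowest degree first

coef = patch(0.0, 1.0, alphas=[1.0, 0.0], betas=[0.0, 2.0])   # n = 2
p = np.polynomial.Polynomial(coef)
assert np.isclose(p(0.0), 1.0) and np.isclose(p.deriv()(0.0), 0.0)
assert np.isclose(p(1.0), 0.0) and np.isclose(p.deriv()(1.0), 2.0)
```

The interpolation matrix M is the two-point Hermite interpolation matrix, which is invertible for a ≠ b.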

The characterization in Theorem 3 remains implicit in the sense that the set of all matrices A, B satisfying conditions (a) and (b) of the theorem is not explicitly given. Certain classes of matrices A, B are discussed in [6] in connection with the classification of the boundary conditions into separated, coupled or mixed. Although it is assumed in [6] that one endpoint is regular, it appears that the discussion extends to the case where both endpoints are singular. In this paper we give an explicit characterization of all self-adjoint extensions of L0 starting from only one such extension, say  LI. The choice of the starting self-adjoint extension is completely arbitrary. For example, one could start from any of the examples given in [6] (extended to the fully singular case if appropriate), any of the self-adjoint extensions given in [5], or any of the GKN realizations. The procedure is summarized as follows.

1. Start with d functions u1, u2, …, ud describing the domain D( LI) of a self-adjoint extension  LI of D0:

D( LI) = D0 ∔ span{u1, u2, …, ud}.

2. Let the functions v1, v2, …, vd be linearly independent modulo D( LI), so that

D = D0 ∔ span{u1, u2, …, ud} ∔ span{v1, v2, …, vd}.

Put ΦI = (u1, u2, …, ud)^T and Ψ1 = (v1, v2, …, vd)^T.

3. Choose any Hermitian matrix Z ∈ C^{d×d} and form the vector function



ΦZ = Ψ1 + ( (1/2)[Ψ1, Ψ1] + Z ) [ΦI, Ψ1]^{−1} ΦI.  (8)

Here [ΦI, Ψ1] denotes the matrix Lagrangian explained in Section 2. Then ΦZ generates a self-adjoint domain complementary to D( LI) (see Definition 17).

4. If ΦZ, Ψ1 are suitably normalized (see Theorem 28), then a linear submanifold D( L) of D is the domain of a self-adjoint extension  L of L0 if

D( L) = D0 ∔ span{v1′, v2′, …, vd′},  (9)

where v1′, v2′, …, vd′ are the components of the vector function Φ with

Φ = [I_s 0; 0 0] ΦZ + [0 0; 0 I_{d−s}] ΦI,  (10)

where I_r is the identity matrix of order r, I_0 = 0 and 0 ≤ s ≤ d.

5. Conversely, if the linear submanifold D( L) of D is the domain of a self-adjoint extension  L of L0, then D( L) satisfies (9) with v1′, v2′, …, vd′ the components of a vector function Φ satisfying (10), ΦZ a vector function satisfying (8) up to a normalization, Z ∈ C^{d×d} a Hermitian matrix, and ΦI, Ψ1 the vector functions described in steps 1 and 2 above.
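At the matrix level, steps 3–4 are simple block operations. In the sketch below (our illustration, not from the paper) the vector functions are stood in for by coefficient rows over a fixed basis, and S, M, Z are illustrative stand-ins for [Ψ1, Ψ1], [ΦI, Ψ1] and the Hermitian parameter.

```python
import numpy as np

d, s = 3, 2

# Row i of Psi1 / PhiI stands for the i-th component function expanded
# over some fixed basis of D mod D0 (values are illustrative only).
Psi1 = np.arange(9.0).reshape(d, 3)
PhiI = np.eye(3)

S = np.array([[0, 1, -2], [-1, 0, 3], [2, -3, 0]], dtype=float)  # plays [Psi1, Psi1] (anti-Hermitian)
M = np.diag([1.0, 2.0, 4.0])                                     # plays [PhiI, Psi1] (invertible)
Z = np.diag([5.0, -1.0, 0.0])                                    # any Hermitian parameter

# Step 3, Eq. (8): Phi_Z = Psi1 + ((1/2) S + Z) M^{-1} PhiI.
C = (0.5 * S + Z) @ np.linalg.inv(M)
PhiZ = Psi1 + C @ PhiI

# Step 4, Eq. (10): the first s components come from Phi_Z, the rest from Phi_I.
P = np.zeros((d, d)); P[:s, :s] = np.eye(s)        # diag(I_s, 0)
R = np.zeros((d, d)); R[s:, s:] = np.eye(d - s)    # diag(0, I_{d-s})
Phi = P @ PhiZ + R @ PhiI
assert np.allclose(Phi[:s], PhiZ[:s]) and np.allclose(Phi[s:], PhiI[s:])
```

Equation (10) thus simply selects which generator each component of Φ is drawn from; the analytic content lies entirely in (8).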


A special class of self-adjoint extensions of L0 is what we call Type I operators (see also [7,8]). To introduce this class we associate with ℓ the formal sesquilinear form

a(u, v) = Σ_{r=1}^{k} Σ_{s=1}^{k} (−1)^r ⟨q_{n−r+1,s} u^[r−1], v^[s−1]⟩ + ⟨q_{k,k+1} u^[k], v^[k]⟩,  (11)

where

⟨u, v⟩ = ∫_a^b u v̄.

Type I operators are defined to be those self-adjoint extensions  L of L0 for which

a(u, v) = ⟨ Lu, v⟩_w  for all u, v ∈ D( L).

Central to the characterization we give below is a Type I operator  LK which satisfies the above equation not only for all  v ∈ DK = D  LK but also all v ∈ D (a) where D (a) ⊂ L2w (J ) is the domain of definition of the sesquilinear form (11). The

operator  LK exists, for example, if the form a (·, ·) is closed and bounded below (see [9]) or if the expression ℓ is regular. If  DK exists and D ⊂ D (a), then the well-known Friedrich Extension  LF is also a Type I operator. Otherwise, the Friedrich Extension is not a Type I operator. It turns out that if  DF is also a Type I domain, then it is the only domain complementary to  DK and that they satisfy dual boundary conditions in a sense that will be made clear in Section 4.1. In this case all Type I domains are generated by vector functions of the form

Φ = [I_s 0; 0 0] ΦK + [0 0; 0 I_{d−s}] ΦF,

where ΦF, ΦK are suitably normalized generators of  DF,  DK, respectively. Thus, there are only 2^n Type I operators, corresponding to the number of subsets of a set of n elements. While the Friedrichs extension is well known and has been the focus of many investigations [10–15] because of its importance in applications, we believe that the operator  LK has not been investigated in the literature before.

The bilinear form (11) also appears in the context of the left-definite spectral theory of differential operators and the associated left-definite spaces [16–21]. This theory is important, for example, in the study of orthogonal polynomials. The closure of the form associated with powers of the expression ℓ gives rise to Hilbert–Sobolev scales. In our setting we always consider the form (11) in the base space H. There is only one occasion where we consider a Hilbert–Sobolev setting, namely in Theorem 8, in order to prove the existence of the operator  LK. It would be interesting to investigate how the characterization of Type I operators obtained here interacts with the left-definite spectral theory and the associated left-definite spaces. For more information and terminology one can also consult [22–24].

This paper is organized in three sections besides the introduction. In Section 2 we introduce the operators associated with the symmetric expression ℓ and give rigorous definitions, including that of Type I operators and their domains. In Section 3 we discuss the construction of the maximal domain D from D0, and of D ∩ D(a), using a set of real-valued functions. The two subsections of Section 4 give characterizations of Type I domains as well as of the general self-adjoint domains associated with ℓ.

2. Operators associated with symmetric expressions

The basic space for our analysis is the Hilbert space H := L²_w(a, b) of square integrable functions with weight w.
The inner product and norm in H will be denoted by ⟨·,·⟩_w and ‖·‖_w, respectively. In this section we introduce and study some properties of the operators induced in H by the symmetric expression ℓ and the sesquilinear form a(·,·). The amount of exposition is just what is needed in the following two sections in order to arrive at the characterizations discussed in the introduction. We begin by briefly recalling some basic definitions and notation; for more details, the reader is referred to [1] or [2]. The maximal operator L and the minimal operator L0 induced by ℓ in H are defined by

D := D(L) := {y ∈ H ∩ V_n : (1/w) ℓy ∈ H},

Ly := (1/w) ℓy,  y ∈ D,

L0 := L∗,  D0 := D(L0).

D and D0 are dense in H, L and L0 are closed, and L0 is symmetric. Thus L0 ⊂ L. Any self-adjoint realization  L of ℓ is an extension of L0 (a restriction of L), i.e. L0 ⊂  L ⊂ L. The operator L0 is the closure of the preminimal operator L0′, whose domain D0′ consists of the functions in D which have compact support in J.


Let us introduce the "half-Lagrangian"

{u, v}(x) := (−1)^k Σ_{r=1}^{k} (−1)^{r−1} u^[2k−r](x) v̄^[r−1](x)  (12)

= (−1)^k (V_l∗ E_k U_h)(x),  x ∈ J,  (13)

where, for y ∈ D,

Y_l = (y^[0], y^[1], …, y^[k−1])^t,  (14)

Y_h = (y^[k], y^[k+1], …, y^[n−1])^t,  (15)

and the Lagrangian

[u, v](x) := {u, v}(x) − {v, u}(x)

= (−1)^k (V_l∗ E_k U_h − V_h∗ E_k U_l)(x) = (−1)^k (V∗ E_n U)(x),  x ∈ J,

where, for y ∈ D,

Y = [Y_l; Y_h].

Note also that the half-Lagrangian can be written as

{u, v}(x) = (−1)^k (V∗ [0 E_k; 0 0] U)(x).

We put {u, v} = {u, v}(b) − {u, v}(a) and [u, v] = [u, v](b) − [u, v](a). The identities

a(u, v) = ⟨ℓu, v⟩_w − {u, v},  (16)

⟨ℓu, v⟩_w = [u, v] + ⟨u, ℓv⟩_w  (17)

hold whenever the expressions involved exist. In the sequel we will also need to deal with arrays involving the Lagrangians and half-Lagrangians. For this purpose, assume that the vector functions Φ := (ϕ1, ϕ2, …, ϕp)^t and Ψ := (ψ1, ψ2, …, ψm)^t are related by

Φ = DΨ,

where D ∈ C^{p×m}. For any vector function Y = (y1, y2, …, yl)^t we formally define

[Φ, Y](x) := ([ϕi, yj](x))  and  {Φ, Y}(x) := ({ϕi, yj}(x)),

where (a_ij) denotes the matrix, of dimensions clear from the context, with a_ij the element in row i and column j. Some elementary consequences of the above definitions are [Φ, Y] = D[Ψ, Y] and [Y, Φ] = [Y, Ψ]D∗; in particular, [Φ, Φ] = D[Ψ, Ψ]D∗. Similar properties hold for the half-Lagrangian. Furthermore, one should note that the matrix-valued Lagrangian is anti-Hermitian: [Φ, Y] = −[Y, Φ]∗.

Define the subspace W of H by

W := D(a) = {u ∈ H ∩ V_k : a(u, u) < ∞}.

The subspace W is dense in H since it contains the preminimal domain D0′. We will say that the sesquilinear form a(·,·) is bounded below on W if for some µ ∈ ℝ,

a(u, u) ≥ µ‖u‖_w²  for all u ∈ W.  (18)
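The transformation rules for the matrix Lagrangian are plain linear algebra; a quick numerical check (ours, with an arbitrary anti-Hermitian matrix standing in for [Ψ, Ψ]) confirms that the anti-Hermitian property survives the change of generators Φ = DΨ.

```python
import numpy as np

rng = np.random.default_rng(0)

# S plays the matrix Lagrangian [Psi, Psi], which is anti-Hermitian: S* = -S.
X = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
S = X - X.conj().T
D = rng.normal(size=(3, 4)) + 1j * rng.normal(size=(3, 4))

# If Phi = D Psi then [Phi, Phi] = D [Psi, Psi] D*.
T = D @ S @ D.conj().T

# The transformed Lagrangian is again anti-Hermitian, as it must be.
assert np.allclose(T.conj().T, -T)
```

This is the mechanism used repeatedly below: boundary conditions expressed through [Φ, Φ] transform congruently under a change of generating functions.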

Proposition 5. Let  D := D ∩ W. Then for any u ∈  D and v ∈ W, both {u, v}(a) and {u, v}(b) exist and are finite.

Proof. Take x, y ∈ (a, b) such that x < y. Then, for any u ∈  D and v ∈ W,

a_{x,y}(u, v) = −{u, v}(y) + {u, v}(x) + ∫_x^y (ℓu) v̄,

where a_{x,y}(·,·) is the restriction of a(·,·) to the interval (x, y). Since the expression ℓ is regular on (x, y), {u, v}(x) and {u, v}(y) exist and are finite. This follows from the definition of the half-Lagrangian (12) and the fact that, for any z ∈ V_k and any t ∈ [x, y], the values z^[r](t), 0 ≤ r ≤ k, exist and are finite (see, e.g., [2]). Fixing x and taking the limit on both sides as y → b− we see that {u, v}(b) is finite. Similarly we see that {u, v}(a) is finite. □

Definition 6. A domain  D of a self-adjoint extension  L of L0 is called a Type I domain, and  L a Type I operator, if  D ⊂  D and

a(u, v) = ⟨ Lu, v⟩_w  for all u, v ∈  D.  (19)

If the expression ℓ is regular, then D ⊂ W and hence  D = D. In this case all self-adjoint domains are contained in  D, including Type I domains. Another sufficient condition for the existence of Type I domains is given in Theorem 8; see also Remark 11. The following proposition provides a characterization of Type I operators in terms of boundary conditions in a manner that parallels the GKN characterization (Theorem 1).

Proposition 7. Suppose  D is a linear manifold in H such that D0 ⊂  D ⊂  D and

1. {u, v} = 0 for all u, v in  D;
2. any z ∈ D for which {u, z} = {z, u} = 0 for all u ∈  D belongs to  D.

Then  D is a Type I domain. Conversely, if  D is a Type I domain then it satisfies Properties 1 and 2 above.

Proof. The fact that  D is a self-adjoint domain follows from the GKN Theorem 1. Property 1, together with (16) and the fact that  D ⊂  D ⊂ W, gives (19). To show the converse statement, observe that Property 1 follows from (19). For Property 2, assume that z ∈ D is such that {u, z} = {z, u} = 0 for all u ∈  D. Then the equalities

⟨u, Lz⟩_w = a(u, z) = ⟨ Lu, z⟩_w  for all u ∈  D

imply that z ∈ D( L∗) =  D. □

Theorem 8 (Existence of Type I Domains). If the sesquilinear form a(·,·) is closed and bounded below, then

1. D0 ⊂ W;
2. a(u, v) = ⟨L0 u, v⟩_w for all u ∈ D0, v ∈ W;
3.  D contains a Type I domain.

Proof. To show property 1, observe first that under the conditions stated in the theorem, W can be given the structure of a Hilbert space when equipped with the inner product induced by the form a_λ(·,·) := a(·,·) + λ⟨·,·⟩_w, where λ is chosen such that λ > max{−µ, 0} (see (18)). Let u ∈ D0. Since L0 is the closure of the preminimal operator L0′, there exists a sequence {un} ⊂ D0′ such that un → u and L0 un → L0 u in H. Then

a_λ(un − um, un − um) = ⟨L0(un − um), un − um⟩_w ≤ ‖L0(un − um)‖_w ‖un − um‖_w.

Hence {un} is a Cauchy sequence in W. Since W is complete, u ∈ W. To show property 2, let u ∈ D0 and let {un} ⊂ D0′ be a sequence such that un → u and L0 un → L0 u in H. For any v ∈ W,

a(u, v) = lim a(un, v) = lim ⟨L0 un, v⟩_w = ⟨L0 u, v⟩_w.

To show property 3, we know from [9, Theorem 2.6, p. 323] that there exists a self-adjoint operator T, bounded from below, such that D(T) ⊂ W and

a(u, v) = ⟨Tu, v⟩_w  for all u ∈ D(T), v ∈ W.  (20)
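A discrete analogue may help fix ideas (our illustration, not from the paper): for the form a(u, v) = Σ u′v′ on a grid, the operator T of (20) is the stiffness matrix, which is symmetric and bounded below.

```python
import numpy as np

# Grid analogue: a(u, v) = h * (Gu, Gv) = u^T K v with K the stiffness matrix.
# K then plays the role of T in Eq. (20): symmetric, with a(u, u) >= 0.
N = 50
h = 1.0 / N
G = np.zeros((N, N + 1))            # forward-difference "derivative"
for i in range(N):
    G[i, i], G[i, i + 1] = -1.0 / h, 1.0 / h
K = h * G.T @ G

assert np.allclose(K, K.T)                      # self-adjoint (symmetric)
assert np.linalg.eigvalsh(K).min() >= -1e-12    # bounded below by 0
assert np.allclose(K @ np.ones(N + 1), 0.0)     # constants lie in the kernel
```

The kernel of K (constant vectors) reflects the fact that no boundary condition has been imposed yet; distinct self-adjoint restrictions correspond to different ways of fixing the endpoint data.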

Fix u ∈ D(T). For any v ∈ D0, using property 2, we have

⟨u, L0 v⟩_w = a(u, v) = ⟨Tu, v⟩_w,

which means that the functional v ↦ ⟨u, L0 v⟩_w is continuous on D0. It follows that u ∈ D and

⟨Lu, v⟩_w = ⟨u, L0 v⟩_w = ⟨Tu, v⟩_w.

Therefore D(T) ⊂ D and T is a restriction of L to D(T). □




Remark 9. The operator T arising in the proof of Theorem 8 is unique. Indeed, if T1 and T2 are two self-adjoint operators satisfying (20) then, fixing u ∈ D(T2), we have, for any v ∈ D(T1),

⟨v, T2 u⟩_w = a(v, u) = ⟨T1 v, u⟩_w,

which means that u ∈ D(T1∗) = D(T1) and T1∗ u = T1 u = T2 u. Thus T1 ⊃ T2, and since both operators are self-adjoint, T1 = T2. This uniqueness is a consequence of the condition (20) defining T being stronger than the condition (19) defining Type I operators. In the sequel, the operator T will be denoted by  LK and its domain by  DK. This operator and its corresponding domain will play a central role in characterizing Type I operators in Section 4.

Remark 10. Since the existence of the operator  LK is implied by the assumption that the sesquilinear form a(·,·) is closed and bounded below, a weaker condition on the expression ℓ is that it admits the realization of a self-adjoint extension of L0 satisfying (20). This happens, for example, when the expression ℓ is regular, without any assumptions of closedness or boundedness from below. The reason is that in the regular case all functions in D have n − 1 absolutely continuous quasiderivatives which assume finite values at the endpoints; the boundary condition Y_h = 0 (see (15)) then gives rise to the self-adjoint realization  LK. Yet it is known that if the coefficient q_{k,k+1} is neither positive nor negative, then the expression ℓ is oscillatory (see, e.g., [25,26]) and the sesquilinear form a(·,·) is not bounded below. A weaker condition still is to assume that the expression ℓ admits a realization of a Type I operator in the sense of Definition 6. However, this assumption is too weak to produce a full characterization of Type I operators, as it results in a system of equations that is difficult to solve. For more on this, see the discussion at the beginning of Section 4.1.

Remark 11. Examples of expressions ℓ for which the form a(·,·) is closed and bounded below include regular expressions with constant coefficients and the Schrödinger operator with nonnegative potential (and its integral powers) [9].
Other examples of regular as well as singular expressions can be found in [9,7,8].

Corollary 12. If the expression ℓ admits the realization  LK (see Remark 10) with domain  DK, then {u, v} = 0 for all u ∈  DK and v ∈ W.

Lemma 13. If the expression ℓ admits the realization  LK with domain  DK in H, then  DK is symmetric (u ∈  DK ⟹ ū ∈  DK) and satisfies separated boundary conditions ({u, v}(a) = {u, v}(b) = 0 for all u, v ∈  DK).

Proof. Let u ∈  DK. Since v ∈ W if and only if v̄ ∈ W, the boundary condition {u, v} = 0 for all v ∈ W (see Corollary 12) gives {ū, v} = 0 for all v ∈ W. Therefore, (16) gives

a(ū, v) = ⟨ℓū, v⟩_w  for all v ∈ W.

In particular, this is true for all v ∈  DK. Then

⟨ū,  LK v⟩_w = a(ū, v) = ⟨ℓū, v⟩_w  for all v ∈  DK.

This shows that ū ∈  DK and that  LK ū is the conjugate of  LK u. To show the separated boundary conditions, let u, v ∈  DK. Replace v by a function v′ ∈ W such that v′ agrees with v near a and vanishes identically near b; the procedure involves a straightforward adaptation of the Naimark Patching Lemma 4 and is possible because D =  D ⊂ W in the regular case. Then

{u, v}(a) = {u, v′}(a) = {u, v′} = 0.

In a similar way we show that {u, v}(b) = 0. □



If the expression ℓ admits a Type I realization  L with domain  D then we have the inclusions D0 ⊂  D ⊂  D ⊂ D. The deficiency index d satisfies d = 1/2 dim (D mod D0 ). The construction of Type I operators, however, has more to do with the dimension of  D mod D0 than it has to do with d. We give this number a special name in the following definition. Definition 14. Suppose the expression ℓ admits a Type I realization. The codeficiency index δ is defined by

δ = dim( D mod D0).

Clearly, if the expression ℓ admits a Type I realization then d ≤ δ ≤ 2d, and if δ = 2d then  D = D; otherwise  D is a proper subspace of D. For examples involving all these possibilities, the reader is referred to [7,8].

3. A construction for  D

In Section 4 we are going to characterize Type I domains associated with an expression ℓ generated by Q ∈ Z_n(J) satisfying (3), as well as the more general self-adjoint operators induced by ℓ in H. For this purpose we need to obtain a decomposition


for the domain  D, using a certain set of real-valued functions. For this purpose it is enough to assume here that D0 ⊂  D; a sufficient condition for this is that the sesquilinear form (11) be closed and bounded below (see Section 2). We assume that the deficiency index of the minimal operator L0 is d with 0 ≤ d ≤ n. Then the codeficiency index δ is defined and satisfies 0 ≤ δ ≤ d. If δ = 0 then  D = D0 and the decomposition of  D is trivial, so let us assume that δ > 0. Our starting point is the construction of the maximal domain D introduced in [5], based on solutions of (1) for λ ∈ ℂ with Im λ ≠ 0, and adapted in [2] for λ ∈ ℝ. Let c ∈ (a, b) and let dj, j = 1, 2, denote the deficiency indices of the minimal operators resulting when the expression ℓ is restricted to (a, c) and to (c, b), respectively. Then k ≤ dj ≤ n, j = 1, 2, and d = d1 + d2 − n. Assume that for some λ = λ1 ∈ ℝ (1) has d1 (real-valued) linearly independent solutions belonging to L²_w(a, c) and that for λ = λ2 ∈ ℝ (1) has d2 linearly independent solutions belonging to L²_w(c, b). Let mj = 2dj − 2k, j = 1, 2. Then there exist solutions ϕ1, ϕ2, …, ϕ_{m1} ∈ L²_w(a, c) and θ1, θ2, …, θ_{m2} ∈ L²_w(c, b) such that the matrices [Φ, Φ](a) and [Θ, Θ](b) are nonsingular, where Φ = (ϕ1, ϕ2, …, ϕ_{m1})^t and Θ = (θ1, θ2, …, θ_{m2})^t. These functions are called LC solutions in [2]. Using the Naimark Patching Lemma 4 we extend the functions ϕ1, ϕ2, …, ϕ_{m1} to functions in D such that ϕj(t) = 0 for all t near b, j = 1, 2, …, m1, and extend the functions θ1, θ2, …, θ_{m2} to functions in D such that θj(t) = 0 for all t near a, j = 1, 2, …, m2. We still denote the extended functions by ϕ1, ϕ2, …, ϕ_{m1} and θ1, θ2, …, θ_{m2}, respectively. Observe that m1 + m2 = 2d. Letting {ψ1, ψ2, …, ψ_{2d}} = {ϕ1, ϕ2, …, ϕ_{m1}, θ1, θ2, …, θ_{m2}}, we have the decomposition





D = D0 ∔ span{ψ1, ψ2, …, ψ_{2d}}.

Next we turn to the construction of  D from the functions {ψ1, ψ2, …, ψ_{2d}}. The assumption that the codeficiency index δ is positive means that some δ linear combinations of the functions ψ1, ψ2, …, ψ_{2d} lie in  D mod D0. The next lemma establishes that such linear combinations can be taken to be real.

Lemma 15. There exist δ real-valued functions in  D mod D0.

Proof. We begin by showing that there is at least one real-valued function in  D mod D0. Since we assumed that δ > 0, there is at least one function z ∈  D mod D0; then Re z and Im z are in  D, and at least one of these two functions is in  D mod D0.

Let X be the linear space consisting of all real-valued linear combinations of the functions (ψj)_{j=1}^{2d} that lie in  D. We claim that s := dim X ≥ δ. If not, let ψ1′, ψ2′, …, ψs′ be a basis for X. Extend this basis to a linearly independent set ψ1′, ψ2′, …, ψs′, ψ_{s+1}′, …, ψδ′ ⊂  D. In particular, the subset ψ_{s+1}′, …, ψδ′ consists of strictly complex linear combinations of (ψj)_{j=1}^{2d}. Let Ψ = (ψ1, ψ2, …, ψ_{2d})^t, Ψ1′ = (ψ1′, ψ2′, …, ψs′)^t and Ψ2′ = (ψ_{s+1}′, ψ_{s+2}′, …, ψδ′)^t. Then there exist real matrices A (s × 2d), B (s × 2d), C ((δ − s) × 2d) and E ((δ − s) × 2d) such that Ψ1′ = (A + iB)Ψ and Ψ2′ = (C + iE)Ψ. Since Ψ1′ is real, BΨ = 0; since the components of Ψ are linearly independent, B = 0. Also, since Ψ2′ ∈  D^{δ−s}, both Re Ψ2′ and Im Ψ2′ are in  D^{δ−s}. Then we can write CΨ = FΨ1′ for some real ((δ − s) × s) matrix F. It is easy to see that the components of Ψ2′ − CΨ = Ψ2′ − FΨ1′ and of Ψ1′ are linearly independent, so we may take, without loss of generality, C = 0. But then the components of Ψ1′ and (1/i)Ψ2′ are δ real-valued linearly independent functions in  D mod D0, contradicting s < δ. □

The following theorem summarizes the foregoing discussion, recapitulates the assumptions, and gives decompositions of D in terms of D0, of  D in terms of D0, and of D in terms of  D. While the statement of the theorem is weaker than the discussion above (or the results in [2]), in that it ignores some of the properties of the functions ψ1, ψ2, …, ψ_{2d} describing D mod D0, we choose to state it this way in order to reduce technicality and to have a statement which is sufficient for our purposes. The proof is straightforward and we therefore omit it.

Theorem 16 (Decompositions of D and  D). Let ℓ be a symmetric differential expression defined by a matrix Q ∈ Z_n(J) satisfying (3). Assume L0 has deficiency index d with 0 ≤ d ≤ n and that D0 ⊂  D with codeficiency index δ = dim( D mod D0) > 0. Let c ∈ (a, b) and let dj, j = 1, 2, denote the deficiency indices of the minimal operators resulting when the expression ℓ is restricted to (a, c) and to (c, b), respectively. Assume further that for some λ = λ1 ∈ ℝ (1) has d1 (real-valued) linearly independent solutions belonging to L²_w(a, c) and that for some λ = λ2 ∈ ℝ (1) has d2 linearly independent solutions belonging to L²_w(c, b). Then d = d1 + d2 − n and there exist 2d linearly independent real-valued functions ψ1, ψ2, …, ψ_{2d} in D mod D0 such that

1. ψ1, ψ2, …, ψδ are in  D mod D0,
2. ψ_{δ+1}, ψ_{δ+2}, …, ψ_{2d} are in D mod  D,
3. D = D0 ∔ span{ψ1, ψ2, …, ψ_{2d}},
4.  D = D0 ∔ span{ψ1, ψ2, …, ψδ},
5. D =  D ∔ span{ψ_{δ+1}, ψ_{δ+2}, …, ψ_{2d}}.


4. A characterization of self-adjoint domains

As usual, we let ℓ be a symmetric differential expression defined by a matrix Q ∈ Z_n(J) satisfying (3). We assume that the minimal operator L0 has deficiency index d = dim(D mod D0), with 0 ≤ d ≤ n. In this section we give a characterization of the self-adjoint domains associated with the differential expression ℓ. If d = 0, then the only self-adjoint extension of L0 is L0 itself. If L0 is a Type I operator (e.g. under the conditions of Theorem 8), then its domain D0 is a Type I domain satisfying separated boundary conditions, as asserted by Lemma 13. Therefore, we may assume from now on that d > 0. We first characterize Type I domains and then the more general self-adjoint domains. The general assumptions of Theorem 16 apply to this section. In the sequel, for a given vector Θ, {Θ} will denote the components of Θ listed as a set.

Definition 17. We will say that a vector function Φ generates a domain D1 ⊇ D0 (the minimal domain) if

1. {Φ} is linearly independent modulo D0 (see [23] or [24]), and
2. dim Φ = dim(D1 mod D0).

If Φ generates D1 we will write D1 = D0 ∔ {Φ}. We will say that a vector function Φ generates a domain D2 complementary to the domain D1 if {Φ} is linearly independent modulo D1 and D2 = D1 ∔ {Φ}. In this case we will also say that Φ generates D2 from D1.

4.1. Generation of Type I domains

In this subsection we discuss the characterization of Type I domains. In addition to the assumptions of Theorem 16, we also assume that the operator  LK exists. This means that the codeficiency index δ = dim( D mod D0) is at least d. If we put r = dim(D mod  D), then 0 ≤ r = 2d − δ ≤ d.

Choose functions ψ1, ψ2, . . . , ψ2d ∈ D mod D0 according to Theorem 16 and suppose that D̂ is a Type I domain generated by the functions ϕ1, ϕ2, . . . , ϕd ∈ D̃ mod D0. Then there exists an A ∈ C^{d×δ} such that

Φ = AΨ,   (21)

where Φ = (ϕ1, ϕ2, . . . , ϕd)^t and Ψ = (ψ1, ψ2, . . . , ψδ)^t. The boundary condition {Φ, Φ} = 0 yields

A{Ψ, Ψ}A* = 0.   (22)

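Condition (22) is quadratic in A, which is the source of the difficulty discussed below. A toy illustration for d = 1, δ = 2 with an assumed skew-Hermitian bracket matrix {Ψ, Ψ} (not computed from any particular expression ℓ): rows A with real entries always satisfy (22) here, while genuinely complex rows need not.

```python
import numpy as np

# Toy stand-in for the half-Lagrangian Gram matrix {Psi, Psi}:
# assumed skew-Hermitian, with d = 1 and delta = 2.
S = np.array([[0.0, 1.0], [-1.0, 0.0]])

def bracket(A: np.ndarray) -> complex:
    # A {Psi, Psi} A* for a single row A (the d = 1 case of (22))
    return (A @ S @ A.conj().T).item()

A_real = np.array([[1.0, 2.0]])      # real coefficient row
A_complex = np.array([[1.0, 1.0j]])  # genuinely complex coefficient row

assert abs(bracket(A_real)) < 1e-12  # real rows solve (22) for skew-symmetric S
assert abs(bracket(A_complex)) > 0.5  # complex rows need not
```

For real A the quantity A S Aᵀ vanishes automatically because S is skew-symmetric; characterizing all complex solutions for general n is the hard problem referred to in the text.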
The problem of characterizing the matrices A that satisfy this condition for general n appears to be difficult. The difficulty can be reduced by introducing normalizations in the choice of the vector function Ψ. This approach was taken in [2] and resulted in the simpler problem appearing in Theorem 3. Using their findings, the authors successfully characterized in [6] all self-adjoint extensions of L0 satisfying separated boundary conditions. Our approach here (one which eliminates this difficulty altogether) is to construct D starting from a set of functions that generates a Type I domain and to extend it by a complementary set of d functions. Because of our assumptions we can start with a generator ΦK of the domain D̃K. The idea is to take full advantage of Corollary 12, which implies, in the case D̃ = D, that {ΦK, Ψ} = 0. In the case where D̃ is a proper subset of D, we have only one Type I operator, namely L̃K (see Theorem 18). As we shall see repeatedly below, this idea greatly simplifies the systems (22) we have to deal with. For example, Theorem 20 gives an explicit formula for the required matrix solution. Thus, we may assume, without loss of generality, that

ΦK = (ψ_{r+1}, ψ_{r+2}, . . . , ψδ)^t,   (23)

or, what is the same,

D̃K = D0 ∔ span{ΦK}.

The generator Ψ of D̃ is thus partitioned into

Ψ = [Ψ1; ΦK],

with

Ψ1 = (ψ1, ψ2, . . . , ψr)^t,   (24)

which generates D from D̃.


Theorem 18 (Type I Domains for r > 0). If D̃ is a proper subspace of D (that is, if r > 0), then D̃K is the only Type I domain contained in D̃.

Proof. Suppose that D̂ is a Type I domain different from D̃K. The construction of D̂ means that we can find matrices A ∈ C^{d×r} and B ∈ C^{d×d} such that

Φ = AΨ1 + BΦK   (25)

with A ≠ 0. Then s := rank A > 0. Write

A = M [Is 0; 0 0] N

with invertible M ∈ C^{d×d} and N ∈ C^{r×r}, where Is is the identity matrix of order s. Then

M⁻¹Φ = [Is 0; 0 0] N Ψ1 + M⁻¹B ΦK.

Accordingly, the system (25) may be partitioned as

Φ1 = Ψ11 + B1 ΦK,   (26)
Φ2 = B2 ΦK,   (27)

with the definitions implied by the compatibility of the multiplications. Let us clarify the definition of the vector Ψ11. To do this we write

[Is 0; 0 0] N Ψ1 = [Is 0; 0 0] [N1 N2; N3 N4] [Ψ1¹; Ψ1²] = [N1 N2; 0 0] [Ψ1¹; Ψ1²] = [N1Ψ1¹ + N2Ψ1²; 0] = [Ψ11; 0],

with the appropriate dimensions. The linear independence of the components of Φ2 implies that B2 has rank d − s > 0, since s ≤ r. Since M⁻¹Φ still generates D̂, (16) implies that {Φi, Φj} = 0, i, j = 1, 2. The condition {Φ1, Φ2} = 0 gives

0 = {Ψ11 + B1ΦK, B2ΦK} = {Ψ11, ΦK}B2* + B1{ΦK, ΦK}B2* = {Ψ11, ΦK}B2*.

Since B2 has full rank, B2B2* is invertible. Hence, {Ψ11, ΦK} = 0. By Corollary 12, {ΦK, Ψ11} = 0. Using Proposition 7, we conclude that {Ψ11} = {N1Ψ1¹ + N2Ψ1²} ⊂ D̃K. On the other hand, since {Ψ1} ⊂ D mod D̃ ⊂ D mod D̃K, we must have N1 = 0 and N2 = 0. This contradicts the invertibility of the matrix N. □

It follows from Theorem 18 that a necessary condition for the existence of more than one Type I operator is that D̃ = D (that is, δ = 2d). Therefore, we continue our discussion with the assumption that D̃ = D.

Lemma 19. If Ψ1′ generates D from D̃K, then {Ψ1′, ΦK} is invertible. In particular, {Ψ1, ΦK} is invertible, where ΦK and Ψ1 are given by (23) and (24), respectively.

Proof. We prove the statement for {Ψ1′, ΦK} only. If {Ψ1′, ΦK} is not invertible, then there is a 0 ≠ α ∈ C^d such that

α*{Ψ1′, ΦK} = {α*Ψ1′, ΦK} = 0.

Since we also have {ΦK, Ψ1′} = 0 by Corollary 12, α*Ψ1′ ∈ D̃K by Proposition 7. But since {Ψ1′} ⊂ D mod D̃K by assumption, we must have α = 0; a contradiction. □

Theorem 20 (The Domain D̃F). The vector function

ΦF = Ψ1 + BΦK, where B* = −{Ψ1, ΦK}⁻¹{Ψ1, Ψ1},

generates a Type I domain D̃F complementary to D̃K. The domain D̃F is unique in the sense that if D̂ is a Type I domain generated by

Φ′ = AΨ1′ + B′ΦK,   (28)

where Ψ1′ generates D from D̃K, then D̂ = D̃F.


Proof. A direct calculation gives {ΦF, ΦF} = 0. Next, we check that {ΦF} ⊂ D mod D0. We will actually prove the stronger statement {ΦF} ⊂ D mod D̃K ⊂ D mod D0. If not, then there is a 0 ≠ α ∈ C^d such that α*ΦF ∈ D̃K. Then α*Ψ1 = α*ΦF − α*BΦK ∈ D̃K. But since {Ψ1} ⊂ D mod D̃K, we must have α = 0; a contradiction. Hence, ΦF defines a Type I domain D̃F. The domain D̃F is complementary to D̃K since {ΦF, ΦK} = {Ψ1, ΦK} ≠ 0.

To show the uniqueness of D̃F, assume that D̂ is a Type I domain generated by Φ′ = AΨ1′ + B′ΦK, where Ψ1′ generates D from D̃K. Since {Ψ1} ∪ {ΦK} generates D, there exist matrices R and S such that

Ψ1′ = RΨ1 + SΦK.   (29)

Then

{Ψ1′, ΦK} = R{Ψ1, ΦK},

which means, by Lemma 19, that R is invertible. Multiplying both sides of Eq. (29) by R⁻¹ and noting that R⁻¹Ψ1′ generates the same space as Ψ1′, we may assume that

Ψ1′ = Ψ1 + SΦK.

It can be argued, as in the proof of Theorem 18, that A in (28) is invertible. Therefore, we may also assume that Φ′ = Ψ1′ + B′ΦK. The condition {Φ′, Φ′} = 0 yields

B′ = −{Ψ1′, ΦK}⁻¹{Ψ1′, Ψ1′} = −{Ψ1, ΦK}⁻¹{Ψ1, Ψ1′}.

A straightforward calculation shows that {Φ′, ΦF} = {ΦF, Φ′} = 0. Therefore, D̂ = D̃F. □

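The identity {ΦF, ΦF} = 0 in Theorem 20 uses only sesquilinearity of the bracket together with {ΦK, ΦK} = 0 and {ΦK, Ψ1} = 0 (Corollary 12), so it can be verified at the level of the bracket matrices alone. A sketch with random stand-ins for {Ψ1, Ψ1} and {Ψ1, ΦK} (the latter assumed invertible, as Lemma 19 guarantees):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3

# Stand-ins for the bracket matrices; {Phi_K, Psi1} = 0 and
# {Phi_K, Phi_K} = 0 are the identities supplied by Corollary 12
# and the Type I property, so they are set to zero here.
G = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))  # {Psi1, Psi1}
K = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))  # {Psi1, Phi_K}
KP = np.zeros((d, d))  # {Phi_K, Psi1}
KK = np.zeros((d, d))  # {Phi_K, Phi_K}

# B* = -{Psi1, Phi_K}^{-1} {Psi1, Psi1}, as in Theorem 20
Bstar = -np.linalg.solve(K, G)
B = Bstar.conj().T

# {Phi_F, Phi_F} expanded by sesquilinearity for Phi_F = Psi1 + B Phi_K
bracket_FF = G + K @ Bstar + B @ KP + B @ KK @ Bstar

assert np.linalg.norm(bracket_FF) < 1e-10
```

The cancellation is exact: with the two zero brackets in force, {ΦF, ΦF} collapses to {Ψ1, Ψ1} + {Ψ1, ΦK}B*, which the choice of B* annihilates.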


Remark 21. The domain D̃F has a sort of dual boundary value property, namely, {Ψ, ΦF} = 0 for all {Ψ} ⊂ D. Indeed, for any such Ψ we may write Ψ = RΦF + SΦK; therefore, {Ψ, ΦF} = R{ΦF, ΦF} + S{ΦK, ΦF} = 0. D̃F satisfies separated boundary conditions just like D̃K, is symmetric, and its associated Type I operator L̃F is real. The reasoning is exactly as in Lemma 13, except with functions in D mod D0.

Remark 22. In the regular case, the condition {Ψ, ΦF} = 0 for all {Ψ} ⊂ D reduces to Φl(a) = Φl(b) = 0 (see (14) for notation). Thus (cf. [14]), D̃F is none other than the celebrated Friedrichs extension [27]. In the singular case, the Friedrichs extension was discussed in [13,28], where it was treated as a general self-adjoint extension of L0 and, as such, its boundary conditions were described in terms of the Lagrangian [·, ·]. Our results here give more details about the nature of the Friedrichs extension. For example, Theorem 20 tells us that if D = D̃ then the Friedrichs extension is a Type I operator and its boundary conditions may be described in terms of the half-Lagrangian {·, ·}, while Theorem 18 tells us that if D̃ is a proper subspace of D, then the Friedrichs extension is not a Type I operator (recall that we are always assuming the existence of L̃K). Incidentally, ΦK and ΦF satisfy the boundary condition {ΦK, ΦF} = 0, while {ΦF, ΦK} has full rank by Lemma 19.

We turn next to the construction of other Type I domains. This will be done by using {ΦK} ∪ {ΦF} to generate D.

Theorem 23 (Type I Domains for r = 0). Assume D̃ = D. The following statements are equivalent:
(a) D̂ is a Type I domain.
(b) There is a choice of the generators ΦK, ΦF of the domains D̃K, D̃F, respectively, such that {ΦF, ΦK} = Id and D̂ is generated by

Φ = [Is 0; 0 0]ΦK + [0 0; 0 I_{d−s}]ΦF,   (30)

where Is is the identity matrix of order s, 0 ≤ s ≤ d, and I0 = 0.

Proof. Assume (b) holds. Then

{Φ, Φ} = { [Is 0; 0 0]ΦK + [0 0; 0 I_{d−s}]ΦF , [Is 0; 0 0]ΦK + [0 0; 0 I_{d−s}]ΦF }
       = [0 0; 0 I_{d−s}] {ΦF, ΦK} [Is 0; 0 0]
       = [0 0; 0 I_{d−s}] [Is 0; 0 0] = 0,

since {ΦK, ΦK} = {ΦF, ΦF} = {ΦK, ΦF} = 0 and {ΦF, ΦK} = Id. Furthermore, one can easily check that Φ generates D̂. Hence, D̂ is a Type I domain.

Assume (a) holds. Choose generators ΦF, ΦK of D̃F, D̃K, respectively, such that {ΦF, ΦK} = Id. Let D̂ be generated by a vector function Φ. We may write

Φ = PΦK + RΦF.


If rank P = 0, then Φ = RΦF. Since Φ and ΦF have linearly independent components, R has full rank. Therefore, D̂ = D̃F. This situation corresponds to (30) with s = 0. If rank P = d, then the condition {Φ, Φ} = 0 yields 0 = RP*. Since P* is invertible, R = 0. Therefore, D̂ = D̃K. This situation corresponds to (30) with s = d. It remains to consider the case 0 < rank P =: s < d. Write

P = M [Is 0; 0 0] N

with invertible M and N. Then

M⁻¹Φ = [Is 0; 0 0] NΦK + M⁻¹RN* N*⁻¹ΦF.

Observe that {N*⁻¹ΦF, NΦK} = N*⁻¹{ΦF, ΦK}N* = N*⁻¹N* = Id. Thus, without loss of generality, we may replace M⁻¹Φ by Φ, NΦK by ΦK, N*⁻¹ΦF by ΦF and M⁻¹RN* by R in the above equation to obtain

Φ = [Is 0; 0 0]ΦK + RΦF.

The condition {Φ, Φ} = 0 yields

R [Is 0; 0 0] = 0.   (31)

Partition R into

R = [U S; V T]

and substitute in Eq. (31) to get U = 0 and V = 0. Thus

Φ = [Is 0; 0 0]ΦK + [0 S; 0 T]ΦF.   (32)

Since the last d − s components of Φ are linearly independent, T must have full rank. Next, we check that Φ is independent of the choice of S and T. Assume

Φi = [Is 0; 0 0]ΦK + [0 Si; 0 Ti]ΦF,   i = 1, 2.

Then

Φ2 = Φ1 + [0 ΔS; 0 ΔT]ΦF,

where ΔS = S2 − S1 and ΔT = T2 − T1. It can be directly checked that {Φ1, Φ2} = {Φ2, Φ1} = 0. Therefore, Φ1 and Φ2 generate the same domain D̂. It follows that we can choose S = 0 and T = I_{d−s} in (32). Thus, we obtain the representation (30). □

Corollary 24. If both L̃F and L̃K exist, then there are exactly 2^d Type I operators.

Remark 25. In [8] it was found that there are 6 Type I operators in the regular case with n = 2. Boundary conditions of the form u(a) + u′(b) = 0 were reported as generating a Type I domain. However, a closer look at the describing functions of the maximal domain there reveals that functions which satisfy u(a) = 0 also satisfy u′(b) = 0, and vice versa. Therefore, coupled boundary conditions are actually subsumed by separated boundary conditions, and there are in fact 4 Type I operators. This situation corresponds to the finding in the proof of Theorem 23 that we can select the submatrix S = 0.

4.2. Generation of general self-adjoint domains

In this subsection we discuss the characterization of the general self-adjoint domains associated with ℓ. Here we follow the same approach as in the previous subsection, which consists of the following three steps:
1. Start with a generator ΦI of a self-adjoint domain D̃I.
2. Find a generator ΦC of a complementary self-adjoint domain D̃C.
3. Since {ΦI} ∪ {ΦC} generates D, use these functions to construct the general self-adjoint domain.


For Step 1 above, one can start with any one of the domains described in [6] or in [5]. The advantage of the former type of domains is that they can be used to investigate the spectral properties of self-adjoint operators and that they are easier to find [2]. Unlike the case of Type I domains, we will find an abundance of complementary self-adjoint domains in the general case. This is not surprising, given the characterizations of such operators in the literature (see [23,2,24]). So, choose functions ψ1, ψ2, . . . , ψ2d ∈ D mod D0 according to Theorem 16. Then

D = D0 ∔ span{Ψ},

where

Ψ = (ψ1, ψ2, . . . , ψ2d)^t.   (33)

Suppose that

ΦI = (ψ_{d+1}, ψ_{d+2}, . . . , ψ2d)^t   (34)

generates a self-adjoint domain D̃I. Let Ψ1 be defined by

Ψ1 = (ψ1, ψ2, . . . , ψd)^t.   (35)

We can show that [ΦI, Ψ1] is invertible in the same way as in the proof of Lemma 19.

Theorem 26 (Self-Adjoint Domains Complementary to D̃I). Let Ψ be the generator of the maximal domain D and let ΦI be the generator of a self-adjoint domain D̃I, where Ψ is given by (33) and ΦI is given by (34). Let Ψ1 be defined by (35). The vector function Φ generates a self-adjoint domain complementary to D̃I if and only if there exists a Hermitian matrix Z ∈ C^{d×d} such that

Φ = Ψ1 + ( (1/2)[Ψ1, Ψ1] + Z ) [ΦI, Ψ1]⁻¹ ΦI.   (36)

Furthermore, the mapping Z → Φ defined through (36) is a one-to-one correspondence which is independent of the choice of the generator Ψ1 of D from D̃I.

Proof. A straightforward calculation shows that, for any Hermitian matrix Z ∈ C^{d×d}, the function Φ defined by (36) generates a self-adjoint domain. [Φ, ΦI] is invertible since [Φ, ΦI] = [Ψ1, ΦI]. It follows that {Φ} is linearly independent modulo D̃I. To see this, assume {Φ} is not linearly independent modulo D̃I. Then there exists a nontrivial α ∈ C^d such that α*Φ ∈ D̃I. By Theorem 1, α*[Φ, ΦI] = [α*Φ, ΦI] = 0, which contradicts the invertibility of [Φ, ΦI]. Therefore, Φ generates a self-adjoint domain complementary to D̃I.

On the other hand, assume that Φ generates a self-adjoint domain complementary to D̃I. Then, since {ΦI} ∪ {Ψ1} generates D, we can find matrices A, B ∈ C^{d×d} such that

Φ = AΨ1 + BΦI.

We claim that A is invertible. If not, then there exists a nontrivial α ∈ C^d such that α*A = 0. Then α*Φ = α*BΦI, which implies that α*Φ ∈ D̃I, contradicting the linear independence of {Φ} modulo D̃I. Thus, we may assume, without loss of generality, that

Φ = Ψ1 + BΦI.

The boundary condition [Φ, Φ] = 0 yields

[Ψ1, Ψ1] + B[ΦI, Ψ1] + [Ψ1, ΦI]B* = 0.

Since [Ψ1, ΦI] = −[ΦI, Ψ1]*, this system may be rewritten in the form

F + BC − C*B* = 0,   (37)

where F = [Ψ1, Ψ1] and C = [ΦI, Ψ1]. Eq. (37) is the well-known Sylvester equation (see [29,30]), which has the general solution

B = ( (1/2)F + Z ) C⁻¹,   (38)

where Z ∈ C^{d×d} is an arbitrary Hermitian matrix (recall that C = [ΦI, Ψ1] is invertible, as shown above). Thus, Φ is given by (36) for some Hermitian matrix Z ∈ C^{d×d}.

We can easily check that if Φi = Ψ1 + BiΦI with Bi = ((1/2)F + Zi)C⁻¹, i = 1, 2, then [Φ1, Φ2] = Z1 − Z2, which means that Φ1 and Φ2 generate different self-adjoint domains whenever Z1 ≠ Z2. This establishes the one-to-one correspondence Z → Φ. To show the independence from the choice of the generator of D from D̃I, assume that Ψ1′ is another generator of D from D̃I. Then we may write Ψ1′ = AΨ1 for some invertible matrix A. Putting Φ′ = Ψ1′ + B′ΦI, we get A⁻¹Φ′ = Ψ1 + A⁻¹B′ΦI. Therefore, formula (38) gives B′ = AB. Hence, Φ′ = AΦ, which means that Φ′ and Φ generate the same domain. □
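Equation (37) pins down only the skew-Hermitian part of the product BC, which is why the free Hermitian parameter Z appears in (38). A minimal numerical sketch of this structure with random stand-ins for F and C; here F is taken skew-Hermitian (as follows from the conjugation rule [f, g]* = −[g, f]), and with that normalization the skew-Hermitian part of BC equals −F/2, so the sign in front of F/2 below may be absorbed differently from (38) depending on how the bracket is normalized:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4

A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
F = A - A.conj().T                  # skew-Hermitian stand-in for [Psi1, Psi1]
C = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))  # generically invertible
H = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
Z = H + H.conj().T                  # arbitrary Hermitian free parameter

# Solution family of F + BC - C*B* = 0 under the convention F* = -F:
B = (-F / 2 + Z) @ np.linalg.inv(C)

residual = F + B @ C - C.conj().T @ B.conj().T
assert np.linalg.norm(residual) < 1e-10
```

Changing Z moves through the family of complementary self-adjoint domains; the skew-Hermitian part of BC is fixed by F alone.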


Remark 27. The formula (36) for self-adjoint domains complementary to D̃I yields that any generator Φ of such a domain satisfies [Φ, ΦI] = [Ψ1, ΦI], independently of Z. It is always possible to "change base" so that [Φ, ΦI] = Id. This observation makes it possible to arrive at the general self-adjoint domain in the following theorem.

Theorem 28 (The General Self-Adjoint Domain). Let D̃I be any self-adjoint domain with generator ΦI. Let Z ∈ C^{d×d} be a Hermitian matrix and let ΦZ be defined by (36). Denote by D̃Z the self-adjoint domain complementary to D̃I which is generated by ΦZ. Assume that ΦZ and ΦI are normalized so that [ΦZ, ΦI] = Id. Then the domain D̂ generated by the vector function

Φ = [Is 0; 0 0]ΦZ + [0 0; 0 I_{d−s}]ΦI,   (39)

where Is is the identity matrix of order s, 0 ≤ s ≤ d, and I0 = 0, is a self-adjoint domain. Conversely, if D̂ is a self-adjoint domain, then there are a Hermitian matrix Z ∈ C^{d×d} and an integer s with 0 ≤ s ≤ d such that the generator Φ of D̂ satisfies (39), where ΦZ is defined by (36), with a possible change of base so that [ΦZ, ΦI] = Id.

Proof. The proof is similar to that of Theorem 23. □
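Self-adjointness of the domain generated by (39) reduces to block-matrix algebra: with the normalization [ΦZ, ΦI] = Id, the table [ΦZ, ΦZ] = [ΦI, ΦI] = 0 and [ΦI, ΦZ] = −[ΦZ, ΦI]* = −Id, the condition [Φ, Φ] = 0 becomes E1·E2* − E2·E1* = 0 for the block coefficients E1 = [Is 0; 0 0] and E2 = [0 0; 0 I_{d−s}]. A sketch checking this for every splitting 0 ≤ s ≤ d (the bracket table is the assumption here, not a computation from ℓ):

```python
import numpy as np

d = 5

def E1(s: int) -> np.ndarray:
    # block diag(I_s, 0): the coefficient of Phi_Z in (39)
    return np.diag([1.0] * s + [0.0] * (d - s))

def E2(s: int) -> np.ndarray:
    # block diag(0, I_{d-s}): the coefficient of Phi_I in (39)
    return np.diag([0.0] * s + [1.0] * (d - s))

# Assumed bracket table: [Phi_Z,Phi_Z] = [Phi_I,Phi_I] = 0,
# [Phi_Z,Phi_I] = I_d, [Phi_I,Phi_Z] = -I_d.
for s in range(d + 1):
    lagrangian = E1(s) @ np.eye(d) @ E2(s).T - E2(s) @ np.eye(d) @ E1(s).T
    assert np.linalg.norm(lagrangian) == 0.0  # [Phi, Phi] = 0 for every s
```

The same computation, with {ΦF, ΦK} = Id in place of [ΦZ, ΦI] = Id, is the one carried out in the proof of Theorem 23.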



Acknowledgements

The authors are very thankful to the reviewer for comments that led to a substantial improvement of the paper. Research of the first author was supported by King Fahd University of Petroleum and Minerals under grant number IN090029. The first author would like to express his gratitude to King Fahd University for its support.

References

[1] W.N. Everitt, A. Zettl, Generalized symmetric ordinary differential expressions I: the general theory, Nieuw Arch. Wiskd. 27 (3) (1979) 363–397.
[2] A.P. Wang, J. Sun, A. Zettl, Characterization of domains of self-adjoint ordinary differential operators, J. Differential Equations 246 (2009) 1600–1622.
[3] W.N. Everitt, L. Markus, Boundary Value Problems and Symplectic Algebra for Ordinary Differential and Quasi-Differential Operators, in: Math. Surveys Monogr., vol. 61, Amer. Math. Soc., 1999.
[4] A. Wang, J. Sun, A. Zettl, The classification of self-adjoint boundary conditions of differential operators with two singular endpoints, J. Math. Anal. Appl. 378 (2) (2011) 493–506.
[5] J. Sun, On the self-adjoint extensions of symmetric ordinary differential operators with middle deficiency indices, Acta Math. Sinica (N.S.) 2 (2) (1986) 152–167.
[6] A.P. Wang, J. Sun, A. Zettl, The classification of self-adjoint boundary conditions: separated, coupled, and mixed, J. Funct. Anal. 255 (2008) 1554–1573.
[7] M. El-Gebeily, D. O'Regan, A characterization of self-adjoint operators determined by the weak formulation of second order singular differential expressions, Glasg. Math. J. 51 (2009) 387–404.
[8] M. El-Gebeily, D. O'Regan, The boundary condition description of Type I domains, Glasg. Math. J. 52 (2010) 619–633.
[9] T. Kato, Perturbation Theory for Linear Operators, Springer-Verlag, Berlin, Heidelberg, New York, Tokyo, 1976.
[10] A.B. von Keviczky, N. Saad, R.L. Hall, Friedrichs extensions of Schrödinger operators with singular potentials, J. Math. Anal. Appl. 292 (1) (2004) 274–293.
[11] W.N. Everitt, H. Kalf, The Bessel differential equation and the Hankel transform, J. Comput. Appl. Math. 208 (1) (2007) 3–19.
[12] A. Fleige, S. Hassi, H. de Snoo, H. Winkler, Generalized Friedrichs extensions associated with interface conditions for Sturm–Liouville operators, in: M. Langer, A. Luger, H. Woracek (Eds.), Operator Theory and Indefinite Inner Product Spaces, in: Oper. Theory Adv. Appl., vol. 163, Birkhäuser, Basel, 2005, pp. 135–145.
[13] M. Marletta, A. Zettl, The Friedrichs extension of singular differential operators, J. Differential Equations 160 (2001) 404–421.
[14] M. Möller, A. Zettl, Semi-boundedness of ordinary differential operators, J. Differential Equations 115 (1995) 24–49.
[15] M. Möller, A. Zettl, Symmetric differential operators and their Friedrichs extension, J. Differential Equations 115 (1) (1995) 50–69.
[16] W. Everitt, L. Littlejohn, R. Wellman, Legendre polynomials, Legendre–Stirling numbers, and the left-definite spectral analysis of the Legendre differential expression, J. Comput. Appl. Math. 148 (1) (2002) 213–238.
[17] W. Everitt, K. Kwon, L. Littlejohn, R. Wellman, G. Yoon, Jacobi–Stirling numbers, Jacobi polynomials, and the left-definite analysis of the classical Jacobi differential expression, J. Comput. Appl. Math. 208 (1) (2007) 29–56.
[18] Q. Kong, H. Wu, A. Zettl, Singular left-definite Sturm–Liouville problems, J. Differential Equations 206 (2004) 1–29.
[19] L. Littlejohn, R. Wellman, A general left-definite theory for certain self-adjoint operators with applications to differential equations, J. Differential Equations 181 (2) (2002) 280–339.
[20] G. Wei, S. Fu, Left-definite spaces of singular Sturm–Liouville problems, J. Math. Anal. Appl. 345 (1) (2008) 420–430.
[21] G. Wei, J. Wu, Characterization of left-definiteness of Sturm–Liouville problems, Math. Nachr. 278 (2006) 932–941.
[22] N.I. Akhiezer, I.M. Glazman, Theory of Linear Operators in Hilbert Space, vol. II, Frederick Ungar, New York, 1963.
[23] M.A. Naimark, Linear Differential Operators, Part II, Ungar, New York, 1968.
[24] J. Weidmann, Spectral Theory of Ordinary Differential Operators, in: Lecture Notes in Mathematics, vol. 1258, Springer-Verlag, Berlin, Heidelberg, New York, London, Paris, Tokyo, 1987.
[25] P.B. Bailey, W.N. Everitt, J. Weidmann, A. Zettl, Regular approximations of singular Sturm–Liouville problems, Results Math. 23 (1993) 3–22.
[26] W.N. Everitt, M.K. Kwong, A. Zettl, Oscillation of eigenfunctions of weighted regular Sturm–Liouville problems, J. Lond. Math. Soc. s2-27 (1) (1983) 106–120.
[27] K.O. Friedrichs, Spektraltheorie halbbeschränkter Operatoren, Math. Ann. 109 (1934) 465–487 and 685–713; also 110 (1935) 777–779.
[28] H.-D. Niessen, A. Zettl, The Friedrichs extension of regular ordinary differential operators, Proc. Roy. Soc. Edinburgh Sect. A 114 (1990) 229–236.
[29] D.S. Djordjević, Explicit solution of the operator equation A*X + X*A = B, J. Comput. Appl. Math. 200 (2) (2007) 701–704.
[30] J.H. Hodges, Some matrix equations over a finite field, Ann. Mat. Pura Appl. (4) 44 (1) (1957) 245–250.