J. Differential Equations 259 (2015) 4651–4682
Principal and antiprincipal solutions at infinity of linear Hamiltonian systems ✩

Peter Šepitka, Roman Šimon Hilscher ∗

Department of Mathematics and Statistics, Faculty of Science, Masaryk University, Kotlářská 2, CZ-61137 Brno, Czech Republic

Received 7 August 2014; available online 2 July 2015
Abstract

The concept of principal solutions at infinity for possibly abnormal linear Hamiltonian systems was recently introduced by the authors. In this paper we develop the theory of antiprincipal solutions at infinity and establish a limit characterization of the principal solutions. That is, we prove that the principal solutions are the smallest ones at infinity when they are compared with the antiprincipal solutions. This statement is a generalization of the classical result of W.T. Reid, P. Hartman, or W.A. Coppel for controllable linear Hamiltonian systems. We also derive a classification of antiprincipal solutions at infinity according to their rank and show that the antiprincipal solutions exist for any rank in the range between explicitly given minimal and maximal values. We illustrate our new theory by several examples.
© 2015 Elsevier Inc. All rights reserved.

MSC: 34C10

Keywords: Linear Hamiltonian system; Antiprincipal solution at infinity; Principal solution at infinity; Order of abnormality; Genus of conjoined bases; Moore–Penrose pseudoinverse
✩ This research was supported by the Czech Science Foundation under grant P201/10/1032 and by grant MUNI/A/0821/2013 of Masaryk University.
* Corresponding author.
E-mail addresses: [email protected] (P. Šepitka), [email protected] (R. Šimon Hilscher).
http://dx.doi.org/10.1016/j.jde.2015.06.027
1. Introduction

Principal and antiprincipal solutions play an important role in the study of ordinary differential equations, especially in the oscillation and spectral theory, see e.g. [1,5,6,8,15]. Principal solutions at infinity are, roughly speaking, the smallest ones when they are compared with other linearly independent solutions of the equation. In this paper we consider the theory of principal and antiprincipal solutions for the linear Hamiltonian system

    x' = A(t) x + B(t) u,    u' = C(t) x − A^T(t) u,    t ∈ [a, ∞),    (H)

where A(t), B(t), C(t) are piecewise continuous n × n matrix-valued functions on [a, ∞), the matrices B(t) and C(t) are symmetric, and B(t) satisfies the Legendre condition

    B(t) ≥ 0    for all t ∈ [a, ∞).    (1.1)
Here n ∈ N is a given dimension and a ∈ R is fixed. System (H) is traditionally studied under the complete controllability assumption. This means that the only solution (x, u) of (H) with x(t) ≡ 0 on a nondegenerate subinterval of [a, ∞) is the trivial solution (x, u) ≡ (0, 0). If (H) is completely controllable, W.T. Reid defined in [16] the principal solution of (H) at infinity as a conjoined basis (X̂, Û) for which X̂(t) is eventually invertible and

    lim_{t→∞} [ ∫^t X̂^{-1}(s) B(s) X̂^{T-1}(s) ds ]^{-1} = 0,    (1.2)
see also the monographs by P. Hartman [7, Section XI.10], W.T. Reid [18, pg. 316], or W.A. Coppel [4, Section 2.2]. The principal solution (X̂, Û) is then the smallest solution of (H) at infinity in the sense that

    lim_{t→∞} X^{-1}(t) X̂(t) = 0    (1.3)

for any conjoined basis (X, U) of (H) which is linearly independent of (X̂, Û) and for which X(t) is eventually invertible, see [1, Theorem 2.2], [4, Proposition 4, pg. 43], [18, Theorem VII.3.2], [7, Theorem XI.10.5]. In this context the conjoined basis (X, U) is called an antiprincipal solution of (H) at infinity, and similarly to (1.2) it is characterized by the property, see [1, Theorem 3.1(ii)],

    lim_{t→∞} [ ∫^t X^{-1}(s) B(s) X^{T-1}(s) ds ]^{-1} = T,    with T nonsingular.    (1.4)
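For readers who wish to experiment with system (H) numerically, the following minimal sketch (not part of the original paper; the coefficients A ≡ 0, B ≡ I, C ≡ 0 and the grid are illustrative choices) integrates a matrix solution (X, U) of (H) with scipy and checks that X^T(t) U(t) stays symmetric along the solution, as it must for a conjoined basis.

    import numpy as np
    from scipy.integrate import solve_ivp

    n = 2
    def A(t): return np.zeros((n, n))
    def B(t): return np.eye(n)          # symmetric and B(t) >= 0: Legendre condition (1.1)
    def C(t): return np.zeros((n, n))

    def rhs(t, y):
        # y packs the n x n blocks X and U of a matrix solution of (H)
        X = y[:n*n].reshape(n, n)
        U = y[n*n:].reshape(n, n)
        dX = A(t) @ X + B(t) @ U        # X' = A X + B U
        dU = C(t) @ X - A(t).T @ U      # U' = C X - A^T U
        return np.concatenate([dX.ravel(), dU.ravel()])

    a = 0.0
    X0, U0 = np.zeros((n, n)), np.eye(n)   # principal solution at t = a: X(a) = 0, U(a) = I
    y0 = np.concatenate([X0.ravel(), U0.ravel()])
    sol = solve_ivp(rhs, (a, 10.0), y0, rtol=1e-9, atol=1e-12)
    Xend = sol.y[:n*n, -1].reshape(n, n)
    Uend = sol.y[n*n:, -1].reshape(n, n)
    # X^T U - U^T X is constant for a conjoined basis (here identically zero)
    print(Xend.T @ Uend - Uend.T @ Xend)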
Recently in [19,20], we have introduced a new concept of principal solutions of (H) at infinity, when the complete controllability assumption is absent. This involves the presence of the Moore–Penrose pseudoinverses in (1.2), i.e.,

    lim_{t→∞} [ ∫^t X̂^†(s) B(s) X̂^{†T}(s) ds ]^† = 0.
This more general concept gives rise to a whole scale of principal solutions (X̂, Û) with the rank of X̂(t) eventually equal to any integer between n − d∞ and n, where d∞ is the maximal order of abnormality of (H), see Section 3 for more details.

The aim of this paper is to continue in the above study by introducing the concept of antiprincipal solutions at infinity for a possibly abnormal linear Hamiltonian system (H). The main results of this paper (Theorems 5.8, 5.13, and 6.1) contain: (i) the existence of antiprincipal solutions (X, U) of (H) at infinity with all ranks of X(t) between n − d∞ and n when system (H) is nonoscillatory, (ii) a classification of the antiprincipal solutions of (H) at infinity in terms of the Wronskians with principal solutions, and (iii) the limit comparison of the principal and antiprincipal solutions of (H) at infinity in the sense of (1.3). As the main tools we utilize the analysis of the S-matrices associated with conjoined bases (X, U) of (H) with constant kernel and the concept of genera of conjoined bases of (H), which was introduced in [20].

The results of this paper reopen the very traditional theory of principal and antiprincipal solutions of (H) at infinity in [4,7,18] and bring it to the current research of possibly abnormal linear Hamiltonian systems [9–11,13,14,21,22]. Some of our results are new even in the controllable case. For example, in Corollary 5.17 we describe a rich class of antiprincipal solutions at infinity of a controllable system (H). Moreover, in Corollary 4.11 we show that the conjoined bases (X, U) of (H) lead to matrices T in (1.4) with rank T between 0 and n. The antiprincipal solutions of (H) at infinity then correspond to the maximal value of rank T (i.e., rank T = n), while the principal solutions of (H) at infinity correspond to the minimal value of rank T (i.e., rank T = 0). The values of rank T strictly between 0 and n then lead to a class of "nonstandard" solutions of (H), i.e., to solutions which are neither principal nor antiprincipal at infinity, see also Remark 6.7(iii).

The paper is organized as follows. In Section 2 we recall some notions from matrix analysis, in particular about orthogonal projectors and Moore–Penrose pseudoinverses. In Section 3 we summarize important properties of linear Hamiltonian systems and their solutions. We also review needed results about S-matrices associated with conjoined bases of (H) with constant kernel. In Section 4 we describe the minimal genus of conjoined bases of (H) and provide a complete characterization of the matrices T in (1.4). In Section 5 we utilize these results for the proof of the existence and classification of the antiprincipal solutions of (H) at infinity with prescribed rank. In Section 6 we apply the theory of antiprincipal solutions in order to prove a limit characterization of principal solutions at infinity within one genus in the sense of (1.3). Finally, Section 7 contains several examples which illustrate these new results.

2. Review of matrices and matrix functions

In this paper we utilize a standard matrix notation, which was also used in [19,20]. More precisely, for a real matrix M we denote by Im M, Ker M, rank M, def M, M^T, M^{-1}, and M^† the image, kernel, rank (i.e., the dimension of the image), defect (i.e., the dimension of the kernel), transpose, inverse (when M is square and invertible), and Moore–Penrose pseudoinverse of M (see Remark 2.1), respectively.
Furthermore, for any square matrix M ∈ Rn×n we denote by (M)k the k-th leading principal submatrix of M, i.e., (M)k is formed by the elements mij of M for i, j = 1, . . . , k. For a symmetric matrix M ∈ Rn×n we write M ≥ 0 when M is nonnegative definite. By I and 0 we denote the identity and zero matrices of the corresponding dimension. We use the notation diag{M1 , . . . , Mk } for the block-diagonal matrix with (block) entries M1 , . . . , Mk on its diagonal. For any linear subspace V of Rn we denote by dim V and
V ⊥ the dimension of V and the orthogonal complement of V in Rn with respect to the canonical inner product. In this paper we frequently use the orthogonal projectors. According to [3, Section 0.2], if V is a linear subspace of Rn , then a matrix PV ∈ Rn×n is said to be an orthogonal projector onto V if PV v = v for all v ∈ V, and PV v = 0 for all v ∈ V ⊥ . The matrix PV is uniquely determined by the subspace V. The matrix I − PV is then the orthogonal projector onto V ⊥ . Moreover, PV is symmetric and Im PV = Ker (I − PV ) = V,
Ker PV = Im (I − PV ) = V ⊥ .
A matrix P ∈ R^{n×n} is the orthogonal projector onto a subspace of R^n if and only if P is symmetric and idempotent, i.e., P^2 = P. In this case P is the orthogonal projector onto Im P. Orthogonal projectors are easily constructed by using Moore–Penrose pseudoinverses. Their important properties are collected in the following remark, see [3, Section 1.4] and [2, Chapter 6].

Remark 2.1. (i) For any matrix M ∈ R^{m×n} there exists a unique matrix M^† ∈ R^{n×m}, called the Moore–Penrose pseudoinverse of M, satisfying the equalities

    M M^† M = M,    M^† M M^† = M^†,    M M^† = (M M^†)^T,    M^† M = (M^† M)^T.    (2.1)
Note that Im M^† = Im M^T and Im M^{†T} = Im M.
(ii) The matrix M M^† is the orthogonal projector onto Im M and the matrix M^† M is the orthogonal projector onto Im M^†. Moreover, rank M = rank M M^† = rank M^† M.
(iii) For matrices M and N we have (MN)^† = (P_{Im M^T} N)^† (M P_{Im N})^† = (M^† M N)^† (M N N^†)^†.

Matrix functions of the variable t ∈ [a, ∞) will be denoted by capital letters. Limits, differentiation, and integration of matrix-valued functions are always understood elementwise. By C_p we denote the set of piecewise continuous matrix-valued functions on [a, ∞), i.e., a function f ∈ C_p has finitely many discontinuities t1, ..., tm in every subinterval [a, b] ⊆ [a, ∞) with finite one-sided limits at the points t1, ..., tm. By C^1_p we denote the set of piecewise continuously differentiable functions on [a, ∞), i.e., a function f ∈ C^1_p is continuous on [a, ∞) with f' ∈ C_p. In particular, the one-sided derivatives f'(t0^+) and f'(t0^-) are finite at the points t0 where f'(t) is not continuous. These values are then used by convention in all formulas involving f'(t) without any further notice. We will also need the following results on the differentiability of the Moore–Penrose pseudoinverse M^†(t).

Remark 2.2. (i) By [3, Theorems 10.5.1 and 10.5.3], for a differentiable matrix-valued function M(t) on [α, ∞) its Moore–Penrose pseudoinverse M^†(t) is differentiable on [α, ∞) if and only if rank M(t) is constant on [α, ∞). In particular, when Ker M(t) is constant on [α, ∞) and M(t) is symmetric, then Im M'(t) ⊆ Im M(t) = Im M^†(t) and

    [M^†(t)]' = −M^†(t) M'(t) M^†(t),    t ∈ [α, ∞).
(ii) Let M(t) be a matrix function such that M(t) → M for t → ∞. Then by [3, Theorems 10.4.1 and 10.4.2] the function M † (t) has a limit (say N ) as t → ∞ if and only if rank M(t) = rank M for large t . In this case we have N = M † .
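The defining identities (2.1) and the projector properties in Remark 2.1(ii) are easy to verify numerically. The following sketch (an illustration added here, not taken from the paper; the random test matrix is a placeholder) uses numpy.linalg.pinv.

    import numpy as np

    rng = np.random.default_rng(0)
    M = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))   # a rank-3 matrix in R^{5x4}
    Mp = np.linalg.pinv(M)                                          # Moore-Penrose pseudoinverse M^dagger

    # the four Penrose identities (2.1)
    assert np.allclose(M @ Mp @ M, M)
    assert np.allclose(Mp @ M @ Mp, Mp)
    assert np.allclose(M @ Mp, (M @ Mp).T)
    assert np.allclose(Mp @ M, (Mp @ M).T)

    # Remark 2.1(ii): M M^dagger and M^dagger M are orthogonal projectors onto Im M and Im M^dagger
    R, P = M @ Mp, Mp @ M
    assert np.allclose(R @ R, R) and np.allclose(P @ P, P)
    assert np.linalg.matrix_rank(M) == np.linalg.matrix_rank(R) == np.linalg.matrix_rank(P)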
3. Linear Hamiltonian systems and properties of their solutions

In this section we review some recent results about linear Hamiltonian systems (H) from [13,19,20,22]. For the classical theory of these systems we refer to [4,12,18]. Following our general convention, matrix solutions (X, U) of (H) will be denoted by capital letters, i.e., X, U : [a, ∞) → R^{n×n} with X, U ∈ C^1_p. In order to shorten the notation and the calculations, we sometimes suppress the argument t in the solutions. For any two matrix solutions (X1, U1), (X2, U2) of (H) their Wronskian X1^T U2 − U1^T X2 is constant on [a, ∞). A solution (X, U) of (H) is called a conjoined basis if rank (X^T(t), U^T(t))^T = n and X^T(t) U(t) is symmetric at some (and hence at any) t ∈ [a, ∞). The principal solution (X̂α, Ûα) at the point α ∈ [a, ∞) is an example of such a conjoined basis. It is defined as the solution of (H) with the initial conditions X̂α(α) = 0 and Ûα(α) = I. By [12, Corollary 3.3.9], a given conjoined basis (X, U) can be completed to a fundamental system of (H) by another conjoined basis (X̄, Ū). In addition, the conjoined basis (X̄, Ū) can be chosen so that (X, U) and (X̄, Ū) are normalized, i.e., we have

    X^T Ū − U^T X̄ = I.    (3.1)
The oscillation of conjoined bases of (H) is defined via the concept of proper focal points, see [23, Definition 1.1]. This concept will not be explicitly needed in this paper, as we deal only with a nonoscillatory system (H). By [22, Definition 2.1], a conjoined basis (X, U) of (H) is called nonoscillatory if there exists α ∈ [a, ∞) such that Ker X(t) is constant on [α, ∞). The main result of [22] then describes the (non)oscillatory behavior of system (H), see Proposition 3.1 below. Based on this result we say that system (H) is nonoscillatory if one (and hence all) conjoined bases of (H) are nonoscillatory.

Proposition 3.1. Assume that the Legendre condition (1.1) holds. Then there exists a nonoscillatory conjoined basis of (H) if and only if every conjoined basis of (H) is nonoscillatory.

In the remaining part of this section we recall some important notions and results from [19,20]. If (X, U) is a conjoined basis of (H), then by its kernel we mean the kernel of X. Furthermore, we define on [a, ∞) the orthogonal projectors onto the subspaces Im X^T(t) and Im X(t) by

    P(t) := P_{Im X^T(t)} = X^†(t) X(t),    R(t) := P_{Im X(t)} = X(t) X^†(t).    (3.2)

If (X, U) has constant kernel on [α, ∞), then P(t) is constant on [α, ∞) and we set

    P := P(t)    on [α, ∞).    (3.3)

In this case we have by Remark 2.1(ii) that

    r := rank X(t) = rank P = rank R(t)    on [α, ∞),    (3.4)

and we say that (X, U) has rank r. Moreover, it follows by Remark 2.2(i) that X^† ∈ C^1_p on [α, ∞). With (X, U) we then associate the S-matrix

    S(t) := ∫_α^t X^†(s) B(s) X^{†T}(s) ds,    t ∈ [α, ∞).    (3.5)
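As an illustration (again not from the paper; the conjoined basis and the grid below are example choices), the S-matrix in (3.5) can be approximated by quadrature once X(t) and B(t) are available. The sketch uses the conjoined basis X(t) = t I, U(t) = I of the system with A ≡ 0, B ≡ I, C ≡ 0 on [1, ∞), for which S(t) = (1 − 1/t) I.

    import numpy as np

    n, alpha = 2, 1.0
    ts = np.linspace(alpha, 50.0, 5000)

    def X(t): return t * np.eye(n)          # conjoined basis of x' = u, u' = 0 (A = C = 0, B = I)
    def Bmat(t): return np.eye(n)

    # integrand X^dagger(s) B(s) X^{dagger T}(s) of (3.5), accumulated with the trapezoidal rule
    vals = [np.linalg.pinv(X(s)) @ Bmat(s) @ np.linalg.pinv(X(s)).T for s in ts]
    S = np.zeros((n, n))
    for k in range(1, len(ts)):
        S += 0.5 * (ts[k] - ts[k-1]) * (vals[k] + vals[k-1])

    # S(t) is symmetric and nonnegative definite under (1.1); here it is close to (1 - 1/t) I
    print(np.allclose(S, S.T), np.linalg.eigvalsh(S).min() >= -1e-12)
    print(S)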
Under (1.1), the matrix S(t) is symmetric, nonnegative definite, S ∈ C^1_p on [α, ∞), and the set Im S(t) is nondecreasing and hence eventually constant with Im S(t) ⊆ Im P, see [19, Theorem 4.2]. Therefore, by the notation in (3.2), the orthogonal projector PS(t) onto the set Im S(t) is eventually constant and we write

    PS(t) := P_{Im S(t)} = S(t) S^†(t) = S^†(t) S(t),    PS∞ := PS(t)    for t → ∞.    (3.6)

In addition, on [α, ∞) we have the inclusions

    Im S(t) = Im PS(t) ⊆ Im PS∞ ⊆ Im P.    (3.7)

Remark 3.2. The function S(t) is closely related with a certain class of conjoined bases of (H) which are normalized with (X, U). More precisely, in [19, Theorem 4.4] we proved that for a given conjoined basis (X, U) with constant kernel on [α, ∞) there exists a conjoined basis (X̄, Ū) of (H) such that (X, U) and (X̄, Ū) are normalized, i.e., (3.1) holds, and

    X^†(α) X̄(α) = 0.    (3.8)

Moreover, the matrix X̄(t) is uniquely determined by (X, U) on [α, ∞), as well as the matrices

    X̄ P = X S,    Ū P = U S + X^{†T} + U (I − P) X̄^T X^{†T}    (3.9)
are uniquely determined by (X, U) on [α, ∞), where P is given in (3.3), see [19, Remark 4.5(ii)].

The following result from [19, Theorem 4.6] shows that under a certain condition the conjoined bases of (H) with constant kernel on [α, ∞) are mutually representable.

Proposition 3.3. Assume (1.1). Let (X1, U1) and (X2, U2) be conjoined bases of (H) with constant kernels on [α, ∞) and let P1 and P2 be the projectors defined in (3.3) through the functions X1 and X2, respectively. Let (X2, U2) be expressed in terms of (X1, U1) via matrices M1, N1 and let (X1, U1) be expressed in terms of (X2, U2) via matrices M2, N2, that is,

    X2 = X1 M1 + X̄1 N1,    U2 = U1 M1 + Ū1 N1,
    X1 = X2 M2 + X̄2 N2,    U1 = U2 M2 + Ū2 N2    on [α, ∞),    (3.10)

where (X̄1, Ū1) and (X̄2, Ū2) are conjoined bases of (H) satisfying (3.1) and (3.8) with regard to the conjoined bases (X1, U1) and (X2, U2). If Im X1(α) = Im X2(α), then
(i) M1^T N1 and M2^T N2 are symmetric and N1 + N2^T = 0,
(ii) M1 and M2 are nonsingular and M1 M2 = M2 M1 = I,
(iii) Im N1 ⊆ Im P1 and Im N2 ⊆ Im P2.
Moreover, the matrices M1, N1 do not depend on the choice of (X̄1, Ū1), and the matrices M2, N2 do not depend on the choice of (X̄2, Ū2).
Remark 3.4. (i) Formula (3.10) at t = α together with equations (3.8) and (3.9) implies that the conjoined bases (X1, U1) and (X2, U2) in Proposition 3.3 satisfy

    X2(α) = X1(α) M1,    U2(α) = U1(α) M1 + X1^{†T}(α) N1,    (3.11)
    X1(α) = X2(α) M2,    U1(α) = U2(α) M2 + X2^{†T}(α) N2.    (3.12)

Moreover, the matrix N1 is the Wronskian of (X1, U1) and (X2, U2), while N2 = −N1^T is the Wronskian of (X2, U2) and (X1, U1). In addition, P2 M2 = (P1 M1)^†.
(ii) The first equality in (3.9) applied to the conjoined bases (X1, U1) and (X2, U2) allows us to rewrite the expressions (3.10) into the form

    X2 = X1 (P1 M1 + S1 N1),    X1 = X2 (P2 M2 + S2 N2)    on [α, ∞),    (3.13)
where S1(t) and S2(t) are the S-matrices associated with (X1, U1) and (X2, U2). Hence, Im X1(t) = Im X2(t) on [α, ∞), that is, the conjoined bases (X1, U1) and (X2, U2) have eventually the same image.
(iii) A more detailed analysis of the statements in parts (i) and (ii) shows that if only (X1, U1) has constant kernel on [α, ∞) and (3.11) holds, then the first equality in (3.13) is satisfied. Similarly, if (X2, U2) has constant kernel on [α, ∞) and (3.12) holds, then the second equality in (3.13) is satisfied.

Following the standard notation in [17, Section 3] and [19, Section 5], we denote by Λ[α, ∞) the linear space of n-dimensional vector-valued functions u ∈ C^1_p which satisfy the equations u' = −A^T(t) u and B(t) u = 0 on [α, ∞). The functions u ∈ Λ[α, ∞) then correspond to the solutions (x ≡ 0, u) of system (H) on [α, ∞). Obviously, Λ[α, ∞) is finite-dimensional with d[α, ∞) := dim Λ[α, ∞) ≤ n. The number d[α, ∞) is called the order of abnormality of system (H) on the interval [α, ∞). We remark that system (H) is called normal on [α, ∞) if d[α, ∞) = 0, while it is called identically normal (or completely controllable) on [α, ∞) if d(J) = 0 for every nondegenerate subinterval J ⊆ [α, ∞). As we showed in [19, Section 5], the number d[α, ∞) is one of the important parameters of system (H) on [α, ∞) and it is intimately connected with the rank of the matrix S(t) defined in (3.5), see [19, Theorem 5.2 and Remark 5.3]. In particular, for the orthogonal projector PS∞ defined in (3.6) we have

    rank PS∞ = n − d[α, ∞).    (3.14)

Moreover, the number d[α, ∞) determines the set of admissible values which may be attained by the rank of X(t) for conjoined bases (X, U) of (H) with constant kernel on [α, ∞). More precisely, for all such conjoined bases (X, U) of (H) we have the estimate

    n − d[α, ∞) ≤ rank X(t) ≤ n    on [α, ∞),    (3.15)

see also Remark 3.7 below. Note that the integer-valued function d[t, ∞) is nondecreasing, piecewise constant, and right-continuous on [a, ∞). Therefore, there exists the maximal order of abnormality d∞ of (H), which is defined by

    d∞ := max_{t ∈ [a,∞)} d[t, ∞).    (3.16)
By (3.15), the rank r of a conjoined basis (X, U ) of (H) defined in (3.4) then satisfies n − d∞ ≤ r ≤ n.
(3.17)
Throughout this paper we will consider only the intervals [α, ∞) with d[α, ∞) = d∞ .
(3.18)
The monotonicity of d[t, ∞) then implies that condition (3.18) holds also for every t ≥ α. In a similar way we define the integer-valued function d[α, t], which is nonincreasing, piecewise constant, and left-continuous on [α, ∞), see also [19, Section 5]. In [19, Remarks 5.10 and 5.13] we showed, how conjoined bases of (H) with constant kernel on [α, ∞) can be constructed by using the equivalence of solutions of (H) and the relation “being contained” between conjoined bases of (H). More precisely, as in [19, Definition 5.6] we say that two solutions (X1 , U1 ) and (X2 , U2 ) of (H) are equivalent on [α, ∞) and write (X1 , U1 ) ∼ (X2 , U2 ) on [α, ∞) if X1 (t) = X2 (t) for all t ∈ [α, ∞). Definition 3.5. Let (X, U ) and (X∗ , U∗ ) be two conjoined bases of (H) such that (X, U ) has constant kernel on [α, ∞). Let P and PS ∞ be the associated orthogonal projectors for (X, U ) defined in (3.3) and (3.6). We say that (X∗ , U∗ ) is contained in (X, U ) on [α, ∞) (or that (X, U ) contains the conjoined basis (X∗ , U∗ ) on [α, ∞)) if there exists an orthogonal projector P∗ such that (X∗ , U∗ ) ∼ (XP∗ , U P∗ ) on [α, ∞) and Im PS ∞ ⊆ Im P∗ ⊆ Im P .
(3.19)
In [19, Remark 5.10] we showed that every conjoined basis (X∗, U∗) of (H), which is contained in a conjoined basis (X, U) of (H) with constant kernel on [α, ∞), also has constant kernel on [α, ∞). This follows from the equations

    X∗(t) = X(t) P∗    and    Ker X∗(t) = Ker P∗    on [α, ∞).    (3.20)
The importance of the relation in Definition 3.5 can be seen from the following proposition and remark, see [19, Remark 5.10(ii)] and [20, Theorem 5.6, Remark 5.9].

Proposition 3.6. Assume (1.1). Let (X, U) be a conjoined basis of (H) with constant kernel on [α, ∞) and let R(t) and P be the associated orthogonal projectors defined in (3.2) and (3.3). Then the following statements hold.
(i) For every orthogonal projector P∗ satisfying (3.19) there always exists a conjoined basis (X∗, U∗) of (H) which is contained in (X, U) with Im X∗^T(t) = Im P∗ on [α, ∞).
(ii) For any orthogonal projectors P̃α and R̃α satisfying

    Im P ⊆ Im P̃α,    Im R(α) ⊆ Im R̃α,    rank P̃α = rank R̃α,    (3.21)

there always exists a conjoined basis (X̃, Ũ) of (H) with constant kernel on [α, ∞) which contains (X, U) and satisfies Im X̃^T(t) = Im P̃α on [α, ∞) and Im X̃(α) = Im R̃α.
Remark 3.7. Given a conjoined basis (X, U) of (H) with constant kernel on [α, ∞), the equalities in (3.15) and (3.18) together with Proposition 3.6 guarantee the existence of a conjoined basis (X∗, U∗) of the same type, for which rank X∗(t) is any integer between n − d∞ and n and (X∗, U∗) is either contained in (X, U) or contains (X, U) on [α, ∞). Moreover, if rank X∗(t) = n − d∞, that is, if rank X∗(t) is minimal on [α, ∞), then (X∗, U∗) is said to be a minimal conjoined basis on [α, ∞). From [19, Remarks 5.13 and 5.14] it follows that every minimal conjoined basis of (H) on [α, ∞) can be constructed from (X, U) by using the relation "being contained" with the choice P∗ := PS∞.

In [19, Theorem 5.11] we proved that the relation "being contained" preserves the corresponding S-matrices.

Proposition 3.8. Assume (1.1). Let (X, U) be a conjoined basis of (H) with constant kernel on [α, ∞) and let S(t) be its corresponding S-matrix defined in (3.5). If (X∗, U∗) is any conjoined basis of (H) which is contained in (X, U) on [α, ∞) and if S∗(t) is its corresponding S-matrix, then S∗(t) = S(t) for all t ∈ [α, ∞).

Remark 3.9. In [19, Remark 5.16] we showed that all minimal conjoined bases (X, U) of (H) on [α, ∞) have the same image Im X(α) = Λ0^⊥[α, ∞), where Λ0[α, ∞) ⊆ R^n is the subspace of initial values of the functions u ∈ Λ[α, ∞). Thus, any two minimal conjoined bases of (H) on [α, ∞) are mutually representable in the sense of Proposition 3.3. In this case the corresponding S-matrices can also be expressed in terms of each other, see [19, Theorems 4.10 and 5.17] and Proposition 3.10 below. Moreover, if (X∗1, U∗1) and (X∗2, U∗2) are the minimal conjoined bases of (H) on [α, ∞), which are contained in the conjoined bases (X1, U1) and (X2, U2) from Proposition 3.3 on [α, ∞), then there exist matrices M∗1, N∗1 such that M∗1 is invertible, M∗1^T N∗1 is symmetric, and the equalities

    X∗2(α) = X∗1(α) M∗1,    U∗2(α) = U∗1(α) M∗1 + X∗1^{†T}(α) N∗1

hold. In addition, in [20, Lemma 6.9] we proved the formulas

    P1 M1 PS2∞ = PS1∞ M∗1,    P2 M1^{-1} PS1∞ = PS2∞ M∗1^{-1},    N∗1 M∗1^{-1} = PS1∞ N1 M1^{-1} PS1∞

with PS1∞, PS2∞, M1 and N1 defined in (3.6) and Proposition 3.3 through (X1, U1) and (X2, U2).

Now we collect some important results about S-matrices from [19], needed in this paper. In the following proposition we use the notation from Proposition 3.3.

Proposition 3.10. Assume (1.1). Let (X1, U1) and (X2, U2) be minimal conjoined bases of (H) on [α, ∞) with their corresponding S-matrices S1(t) and S2(t). Moreover, let M1 and N1 be the matrices defined through (X1, U1) and (X2, U2) as in Proposition 3.3. Then for every β ∈ [α, ∞) such that Im S1(t) = Im PS1∞ and Im S2(t) = Im PS2∞ for t ≥ β we have

    S2^†(t) = M1^T S1^†(t) M1 + M1^T N1    on [β, ∞).    (3.22)
Proposition 3.11. Assume (1.1). Let (X, U) be a conjoined basis of (H) with constant kernel on [α, ∞) and let S(t) be defined in (3.5). Then the following statements hold.
(i) The matrix S(t) is nondecreasing on [α, ∞). Moreover, if S(t) has constant kernel on some subinterval I ⊆ [α, ∞), then S^† ∈ C^1_p(I) and S^†(t) is nonincreasing on I. Consequently, the limit of S^†(t) as t → ∞ exists.
(ii) There exists a constant orthogonal matrix V ∈ R^{n×n} such that for all t ∈ (α, ∞)

    S(t) = V diag{Λ(t), 0_{n−r(t)}} V^T,    S^†(t) = V diag{Λ^{-1}(t), 0_{n−r(t)}} V^T,    (3.23)

where Λ(t) ∈ R^{r(t)×r(t)} is symmetric and positive definite and r(t) := rank S(t).

In the sequel we shall use the function r(t) for t ∈ [α, ∞) as defined in Proposition 3.11.

Remark 3.12. Using formulas (3.23) in Proposition 3.11, the orthogonal projectors PS(t) and PS∞ in (3.6) can be expressed as

    PS(t) = V diag{I_{r(t)}, 0_{n−r(t)}} V^T,    t ∈ [α, ∞),    PS∞ = V diag{I_{r∞}, 0_{n−r∞}} V^T,    (3.24)

with r∞ := rank PS∞ = n − d∞. Moreover, the matrices T ∈ R^{n×n} and Λ_T ∈ R^{r∞×r∞} defined by

    T := lim_{t→∞} S^†(t) = V diag{Λ_T, 0_{n−r∞}} V^T,    Λ_T := lim_{t→∞} Λ^{-1}(t),    (3.25)

are symmetric, nonnegative definite, and Im T ⊆ Im PS∞ by (3.6). The matrix T is called the T-matrix which corresponds to the conjoined basis (X, U) with constant kernel on [α, ∞). The matrices T1 and T2 associated with (X1, U1) and (X2, U2) in (3.22) then satisfy

    T2 = M1^T T1 M1 + M1^T N1.    (3.26)
Finally, the next statement is from [19, Theorem 6.8] and its proof.

Proposition 3.13. Assume (1.1) and (3.18). Let S(t) be the S-matrix associated with a conjoined basis of (H) with constant kernel on [α, ∞). Then the following statements hold.
(i) Im[PS∞ − S(t) T] = Im PS∞ = Im[PS∞ − S(t) T]^T for all t ∈ [α, ∞).
(ii) Λ^{-1}(t) − (Λ_T)_{r(t)} > 0 for all t ∈ (α, ∞).

Following [20, Definition 7.1], we say that a conjoined basis (X̂, Û) of (H) is a principal solution at infinity if (X̂, Û) has constant kernel on [α, ∞) and its corresponding matrix Ŝ(t) in (3.5) satisfies Ŝ^†(t) → 0 as t → ∞. This means that T̂ = 0 in (3.25). For brevity, the terminology "at infinity" will often be dropped. By (3.17), the principal solutions of (H) can be classified according to the rank of X̂(t) on [α, ∞). In particular, the minimal principal solution (X̂min, Ûmin) of (H) at infinity is determined by the property rank X̂min(t) = n − d∞, while the maximal principal solution (X̂max, Ûmax) of (H) at infinity is determined by the property rank X̂max(t) = n, hence X̂max(t) is invertible on [α, ∞). The remaining principal solutions (X̂, Û) of (H) with rank X̂(t) strictly between n − d∞ and n are called intermediate principal solutions at infinity.
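For the scalar system x' = u, u' = 0 (n = 1, A = C = 0, B = 1 on [α, ∞) with α > 0; an illustrative example, not taken from this section), the constant conjoined basis X̂ ≡ 1, Û ≡ 0 has Ŝ(t) = t − α, so Ŝ^†(t) → 0 and it is a principal solution at infinity, while X(t) = t, U(t) ≡ 1 gives S^†(t) → α, a nonsingular limit in (3.25), i.e., the behaviour in (1.4). The Python sketch below (all names and grid parameters are hypothetical placeholders) checks both limits numerically.

    import numpy as np

    alpha, Tend = 1.0, 1.0e6
    ts = np.geomspace(alpha, Tend, 100001)

    def S_of(Xvals):
        # scalar version of (3.5): S(t) = integral from alpha to t of X(s)^{-2} ds (trapezoidal rule)
        y = 1.0 / Xvals**2
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(ts)))

    S_principal = S_of(np.ones_like(ts))   # X_hat(s) = 1, U_hat(s) = 0
    S_other     = S_of(ts)                 # X(s) = s,   U(s) = 1

    print("S_hat^dagger(Tend) =", 1.0 / S_principal)   # ~ 1/(Tend - alpha) -> 0: principal solution
    print("S^dagger(Tend)     =", 1.0 / S_other)       # ~ alpha: nonsingular limit T in (3.25)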
In the following statement we recall the result from [19, Theorems 7.2 and 7.6], where it is shown that the minimal principal solution of (H) at infinity exists and is unique for a nonoscillatory system (H).

Proposition 3.14. Assume (1.1). System (H) is nonoscillatory if and only if there exists a minimal principal solution of (H) at infinity. In this case the minimal principal solution is unique up to a right nonsingular constant multiple.

The nonoscillation of system (H) is characterized by the existence of a principal solution of (H) at infinity with any possible rank, see [20, Theorem 7.6].

Proposition 3.15. Assume that (1.1) holds. Then the following statements are equivalent.
(i) System (H) is nonoscillatory.
(ii) There exists a principal solution of (H) at infinity.
(iii) For any integer r satisfying n − d∞ ≤ r ≤ n there exists a principal solution of (H) at infinity with rank equal to r.

Given a minimal principal solution (X̂min, Ûmin) of (H) at infinity, we define the point α̂min ∈ [a, ∞) associated with (X̂min, Ûmin) by

    α̂min := inf{α ∈ [a, ∞) : (X̂min, Ûmin) has constant kernel on [α, ∞)}.    (3.27)

The uniqueness of the minimal principal solution of (H) at infinity in Proposition 3.14 then implies that the point α̂min does not depend on the particular choice of (X̂min, Ûmin). Note that d[α, ∞) = d∞ for every α > α̂min, see [20, Theorem 7.9].

4. Minimal genus of conjoined bases

The main results of this paper are based on the following important concept of genera of conjoined bases of (H). According to [20, Definition 6.3], two conjoined bases (X1, U1) and (X2, U2) of (H) have (or belong to) the same genus G if X1(t) and X2(t) have eventually the same image, i.e., there exists α ∈ [a, ∞) such that Im X1(t) = Im X2(t) on [α, ∞). In [20, Remark 7.14] we showed that all conjoined bases of (H) with the rank n − d∞ have eventually the same image; in particular, all minimal principal solutions of (H) have eventually the same image, and thus they belong to the same genus Gmin, called the minimal genus. In this section we provide a complete description of the minimal genus Gmin of conjoined bases of (H) (see Theorem 4.4, Corollary 4.6, and Proposition 4.7). We also classify the matrices T introduced in (3.25) which are associated with conjoined bases of (H) belonging to the genus Gmin (see Theorem 4.9), as well as to any genus G (see Corollary 4.11).

We recall the definition of the point α̂min in (3.27), which determines the maximal interval (α̂min, ∞) on which the minimal principal solutions of (H) have constant kernel. In our first result we prove that a conjoined basis from the minimal genus Gmin can have constant kernel only on a subinterval of (α̂min, ∞).

Theorem 4.1. Assume (1.1) and let (X, U) be a conjoined basis of (H) belonging to the minimal genus Gmin. If (X, U) has constant kernel on [α, ∞), then α ≥ α̂min and (3.18) holds.
Proof. Let (X, U) ∈ Gmin be as in the theorem. Since the rank of (X, U) is n − d∞, we have n − d[α, ∞) ≤ n − d∞ by (3.15). This implies that d[α, ∞) ≥ d∞. The definition of d∞ in (3.16) then implies that d[α, ∞) = d∞, i.e., condition (3.18) holds. In particular, (X, U) is a minimal conjoined basis on the interval [α, ∞). Let S(t) be the S-matrix corresponding to (X, U) in (3.5) and let T be given in (3.25). Consider a solution (X̂, Û) of (H) defined by (X̂, Û) := (X, U) − (X̄, Ū) T on [α, ∞), where (X̄, Ū) is the conjoined basis associated with (X, U) through Remark 3.2. From the proof of [19, Theorem 7.2] it follows that (X̂, Û) is the minimal principal solution of (H) with respect to the interval [α, ∞). In particular, (X̂, Û) has constant kernel on [α, ∞). Thus, the inequality α ≥ α̂min holds by (3.27). □

Remark 4.2. It follows from Theorem 4.1 that (α̂min, ∞) is the maximal open interval for which there exists a conjoined basis (X, U) ∈ Gmin with constant kernel on this interval. More precisely, the point α̂min in (3.27) has the equivalent expression

    α̂min := inf{α ∈ [a, ∞) : (X, U) ∈ Gmin has constant kernel on [α, ∞)}.    (4.1)

In [20, Remark 7.11] we showed that the orthogonal projector PŜ∞ in (3.6), which is associated with a principal solution (X̂, Û) of (H) at infinity, is the same for all initial points α ∈ (α̂min, ∞). Formula (4.1) now yields that the same property holds for any conjoined basis (X, U) and an interval where (X, U) has constant kernel. More precisely, if (X, U) is a conjoined basis of (H) with constant kernel on [α, ∞) ⊆ (α̂min, ∞), then the associated orthogonal projector PS∞ defined in (3.6) is the same for all initial points β ∈ [α, ∞).

The following result is an extension of [19, Theorems 6.5 and 6.8] in the sense that the maximal projector PS∞ is replaced by the projector PS(t) and the statement is considered for t in the whole interval [α, ∞) instead of only for large t.

Lemma 4.3. Assume (1.1) and (3.18). Let (X, U) be a minimal conjoined basis of (H) on [α, ∞) with S(t) and T defined in (3.5) and (3.25). Then we have on [α, ∞)

    S^†(t) − PS(t) T PS(t) ≥ 0,    Ker[S^†(t) − PS(t) T PS(t)] = Ker PS(t).    (4.2)

Proof. When t = α, the formulas in (4.2) hold trivially, because S(α) = 0 = PS(α). Fix now t ∈ (α, ∞). With the aid of the expressions in (3.23), (3.24), and (3.25) we have

    S^†(t) − PS(t) T PS(t) = V diag{Λ^{-1}(t) − (Λ_T)_{r(t)}, 0_{n−r(t)}} V^T.    (4.3)

By (4.3) and Proposition 3.13(ii), we get S^†(t) − PS(t) T PS(t) ≥ 0 and

    rank[S^†(t) − PS(t) T PS(t)] = rank[Λ^{-1}(t) − (Λ_T)_{r(t)}] = r(t).

The above equality together with the facts that S^†(t) = S^†(t) PS(t) and rank PS(t) = r(t) then yields the second condition in (4.2). □

In [20, Theorem 7.13] applied to the minimal genus Gmin we derived a complete classification of all minimal principal solutions of (H), see also [20, Remark 7.14]. In the next theorem and its corollary we extend this result to a complete classification of all conjoined bases in the minimal
genus Gmin. This turns out to be one of the crucial results of this paper, as it will be utilized in the characterization of the matrices T in Theorem 4.9 as well as in the construction of minimal antiprincipal solutions of (H) in Theorem 5.8.

Theorem 4.4. Assume (1.1). Let (X, U) ∈ Gmin be a conjoined basis of (H) with constant kernel on [α, ∞) and with PS∞ and T defined in (3.6) and (3.25). Then a solution (X̃, Ũ) of (H) belongs to Gmin and it has constant kernel on [α, ∞) if and only if there exist matrices M, N ∈ R^{n×n} such that

    X̃(α) = X(α) M,    Ũ(α) = U(α) M + X^{†T}(α) N,    (4.4)

    M is nonsingular,    M^T N = N^T M,    Im N ⊆ Im PS∞,    N M^{-1} + T ≥ 0.    (4.5)

Proof. Let (X, U) and α be as in the theorem. Then rank X(t) = n − d∞ on [α, ∞). From Theorem 4.1 we have that d[α, ∞) = d∞. Thus, (X, U) is a minimal conjoined basis on [α, ∞) by Remark 3.7. In particular, the orthogonal projector P defined in (3.3) satisfies P = PS∞. If (X̃, Ũ) belongs to Gmin and has constant kernel on [α, ∞), then it is also a minimal conjoined basis on [α, ∞). Consequently, Im X̃(α) = Im X(α) by Remark 3.9. Therefore, by Proposition 3.3 and Remark 3.4 with (X1, U1) := (X, U) and (X2, U2) := (X̃, Ũ), there exist matrices M, N ∈ R^{n×n} such that (4.4) and the first three conditions in (4.5) hold. Moreover, let T and T̃ be the T-matrices defined in (3.25) through the functions S(t) and S̃(t) in (3.5), which are associated with (X, U) and (X̃, Ũ), respectively. By using formula (3.26) with T1 := T, T2 := T̃, M1 := M, and N1 := N, we have

    T̃ = M^T T M + M^T N,    i.e.,    N M^{-1} + T = M^{T-1} T̃ M^{-1} ≥ 0,    (4.6)

since T̃ ≥ 0. This shows the fourth condition in (4.5). Conversely, let (X̃, Ũ) be a solution of (H) satisfying (4.4) and (4.5). The first three conditions in (4.5) together with the identity X^T(α) X^{†T}(α) = P = PS∞ and the fact that (X, U) is a conjoined basis imply that (X̃, Ũ) is also a conjoined basis of (H). Let S(t) be the S-matrix in (3.5) corresponding to (X, U) on [α, ∞). By Remark 3.4(iii), condition (4.4) then yields

    X̃(t) = X(t) [PS∞ M + S(t) N]    on [α, ∞).    (4.7)

We will show that (X̃, Ũ) has constant kernel on [α, ∞) and that Ker X̃(t) = Ker PS∞ M on [α, ∞). First we note that the symmetry of M^T N and the identity PS∞ N = N imply that N M^{-1} PS∞ = M^{T-1} N^T PS∞ = M^{T-1} N^T = N M^{-1} holds. Hence, by (4.7),

    X̃(t) = X(t) [PS∞ M + S(t) N M^{-1} M] = X(t) [I + S(t) N M^{-1}] PS∞ M    (4.8)

on [α, ∞). Therefore, Ker PS∞ M ⊆ Ker X̃(t) on [α, ∞). Fix now t ∈ [α, ∞), v ∈ Ker X̃(t), and set w := PS∞ M v. Then X(t) [w + S(t) N M^{-1} w] = 0 by (4.8). Multiplying the latter equality by X^†(t) from the left and using the identities X^†(t) X(t) = PS∞, PS∞ S(t) = S(t) and w = PS∞ w, we get w = −S(t) N M^{-1} w. This implies by using (3.6) and (3.7) that w ∈ Im S(t) = Im PS(t) and consequently,

    w^T S^†(t) w = −w^T S^†(t) S(t) N M^{-1} w = −w^T PS(t) N M^{-1} PS(t) w.    (4.9)
Equality (4.9) and the last condition in (4.5) then yield w^T S^†(t) w ≤ w^T PS(t) T PS(t) w, or equivalently w^T [S^†(t) − PS(t) T PS(t)] w ≤ 0. But S^†(t) − PS(t) T PS(t) ≥ 0 according to Lemma 4.3 and thus, w ∈ Ker[S^†(t) − PS(t) T PS(t)] = Ker PS(t), by the second formula in (4.2). Hence we obtain that w ∈ Ker PS(t) ∩ Im PS(t) = {0}. This shows that v ∈ Ker PS∞ M, i.e., Ker X̃(t) ⊆ Ker PS∞ M. Finally, (4.4) and the invertibility of M imply that rank X̃(t) = rank X̃(α) = rank X(α) = n − d∞ on [α, ∞). Consequently, (X̃, Ũ) and (X, U) have eventually the same image, i.e., (X̃, Ũ) belongs to the minimal genus Gmin, as we comment at the beginning of this section. The proof is complete. □

Remark 4.5. From the proof of Theorem 4.4 it follows that the matrices T̃ and T, which correspond to the conjoined bases (X̃, Ũ) and (X, U) in (3.25), satisfy (4.6). This implies that

    rank T̃ = rank(N M^{-1} + T).    (4.10)
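The conditions in (4.5) and the rank formula (4.10) are easy to test numerically for given data. In the sketch below (Python; the projector PS∞, the matrix T, and the pair M, N are illustrative placeholders, not data from the paper) all four conditions of (4.5) are verified and the rank of N M^{-1} + T, which by (4.10) equals the rank of T̃, is printed.

    import numpy as np

    n, d_inf = 3, 1
    P_Sinf = np.diag([1.0, 1.0, 0.0])      # orthogonal projector of rank n - d_inf (placeholder)
    T = np.diag([1.0, 0.0, 0.0])           # symmetric, T >= 0, Im T inside Im P_Sinf (placeholder)

    M = np.eye(n)                          # nonsingular
    N = np.diag([0.0, 2.0, 0.0])           # Im N inside Im P_Sinf

    Minv = np.linalg.inv(M)
    cond_sym   = np.allclose(M.T @ N, N.T @ M)          # M^T N = N^T M
    cond_image = np.allclose(P_Sinf @ N, N)             # Im N contained in Im P_Sinf
    W = N @ Minv + T
    cond_psd   = np.all(np.linalg.eigvalsh((W + W.T) / 2) >= -1e-12)   # N M^{-1} + T >= 0

    print(cond_sym, cond_image, cond_psd)
    print("rank(N M^{-1} + T) =", np.linalg.matrix_rank(W))   # = rank of T-tilde by (4.10)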
Formula (4.10) will be important for the construction of antiprincipal solutions of (H) at infinity in Section 5.

In the following corollary to Theorem 4.4 we utilize the observation that the orthogonal projector PŜ∞ in (3.6) associated with a minimal principal solution (X̂min, Ûmin) of (H) is the same for all initial points α ∈ (α̂min, ∞), where α̂min is defined in (3.27), see [20, Remark 7.11].

Corollary 4.6. Assume that (1.1) holds and system (H) is nonoscillatory. Let (X̂min, Ûmin) be a minimal principal solution of (H) at infinity and let PŜ∞ be defined in (3.6) through the function X̂min(t) on (α̂min, ∞). Then a solution (X, U) of (H) belongs to the minimal genus Gmin if and only if for some α ∈ (α̂min, ∞) there exist matrices M̂, N̂ ∈ R^{n×n} such that

    X(α) = X̂min(α) M̂,    U(α) = Ûmin(α) M̂ + X̂min^{†T}(α) N̂,    (4.11)

    M̂ is nonsingular,    M̂^T N̂ = N̂^T M̂,    Im N̂ ⊆ Im PŜ∞,    N̂ M̂^{-1} ≥ 0.    (4.12)
Proof. First we note that according to [20, Theorem 7.9], (X̂min, Ûmin) is a minimal principal solution with respect to the interval [α, ∞) for every α ∈ (α̂min, ∞). In particular, this means that (X̂min, Ûmin) has constant kernel on (α̂min, ∞) and the function Ŝ(t) defined in (3.5) through X̂min(t) satisfies Ŝ^†(t) → T̂ = 0 as t → ∞ for every initial point α ∈ (α̂min, ∞). Assume that (X, U) belongs to Gmin and has constant kernel on a given interval [α, ∞). From Theorem 4.1 we know that α ≥ α̂min. Without loss of generality we assume α > α̂min. By Theorem 4.4, with (X̃, Ũ) := (X, U), (X, U) := (X̂min, Ûmin), PS∞ := PŜ∞, and T := T̂ = 0, there exist matrices M̂, N̂ ∈ R^{n×n} such that (4.11) and (4.12) hold. The opposite implication is a consequence of Theorem 4.4 with (X, U) := (X̂min, Ûmin), since for every α ∈ (α̂min, ∞) a solution (X, U) of (H) satisfying (4.11) and (4.12) is a conjoined basis which belongs to the minimal genus Gmin and which has constant kernel on [α, ∞). □

The next result provides additional information about the structure of the minimal genus Gmin, as we comment in Remark 4.8 below. We recall that for a fixed α ∈ [a, ∞) the principal solution (X̂α, Ûα) at the point α is defined as the conjoined basis of (H) satisfying the initial conditions X̂α(α) = 0 and Ûα(α) = I.
Proposition 4.7. Assume that (1.1) holds and system (H) is nonoscillatory with α̂min defined in (3.27). Then for every α > α̂min the principal solution (X̂α, Ûα) at α belongs to Gmin.

Proof. Let α > α̂min be fixed. Then rank X̂α(t) = n − d[α, t] for all t ∈ [α, ∞) by [19, Theorem 5.2], since the minimal principal solution (X̂min, Ûmin) has constant kernel on (α̂min, ∞). The nonoscillation of (H) and the monotonicity of the function d[α, t] imply that there exists β ∈ [α, ∞) such that (X̂α, Ûα) has constant kernel on [β, ∞) and d[α, t] = d[α, ∞) for all t ≥ β. Therefore, on [β, ∞) we have rank X̂α(t) = n − d[α, ∞) = n − d∞, by Theorem 4.1. Thus, the rank of (X̂α, Ûα) is n − d∞ and (X̂α, Ûα) belongs to the minimal genus Gmin. □

Remark 4.8. The result of Proposition 4.7 shows that there is no universal interval [α, ∞) in (α̂min, ∞) such that all the conjoined bases (X, U) in the minimal genus Gmin would have their kernels constant on this universal interval.

In the next result we establish a criterion for the classification of all T-matrices which correspond to conjoined bases from the minimal genus Gmin.

Theorem 4.9. Assume that (1.1) holds and that system (H) is nonoscillatory. Then D ∈ R^{n×n} is a T-matrix of some conjoined basis (X, U) from the minimal genus Gmin if and only if

    D is symmetric,    D ≥ 0,    rank D ≤ n − d∞.    (4.13)
Proof. Let D be a T-matrix associated with a conjoined basis (X, U) from Gmin with constant kernel on a given interval [α, ∞) in (α̂min, ∞). Let S(t) and T be defined in (3.5) and (3.25), so that D = T. By Theorem 4.1 we have d[α, ∞) = d∞. From Remark 3.12 we obtain that D is symmetric, nonnegative definite, and Im D ⊆ Im PS∞ with PS∞ defined in (3.6). But since rank PS∞ = n − d[α, ∞) = n − d∞ by (3.14), the condition rank D ≤ n − d∞ follows.

Conversely, assume that D ∈ R^{n×n} satisfies (4.13). From the third condition in (4.13) we have that there exists an orthogonal projector Q such that Im D ⊆ Im Q and rank Q = n − d∞. Furthermore, the nonoscillation of (H) implies that there exists a conjoined basis (X, U) in Gmin. Let [α, ∞) be the interval where (X, U) has constant kernel and let S(t), PS∞, and T be the matrices associated with (X, U) in (3.5), (3.6), and (3.25). Since d[α, ∞) = d∞, we have rank PS∞ = n − d∞ = rank Q, and hence there exists an invertible matrix E satisfying Im E PS∞ = Im Q. The matrix E can be obtained e.g. from the diagonalization of PS∞ and Q or from [20, Theorem 9.2] with P∗ := 0. In particular, we then have Im E^{-1} Q = Im PS∞, i.e., PS∞ E^{-1} Q = E^{-1} Q. Define now the matrices M, N ∈ R^{n×n} by

    M := E^T,    N := E^{-1} D − T E^T.    (4.14)
We show that these matrices satisfy conditions (4.5) in Theorem 4.4. The matrix M is invertible by its definition. The symmetry of D and T implies that M^T N = D − E T E^T is also symmetric. Moreover, the equalities Q D = D, PS∞ E^{-1} Q = E^{-1} Q, and PS∞ T = T yield

    PS∞ N = PS∞ E^{-1} Q D − T E^T = E^{-1} Q D − T E^T = E^{-1} D − T E^T = N.

This means that Im N ⊆ Im PS∞. Finally, the inequality D ≥ 0 implies the fourth condition in (4.5), since

    N M^{-1} + T = (E^{-1} D − T E^T) E^{T-1} + T = E^{-1} D E^{T-1} ≥ 0.

Therefore, we
proved that for a given D satisfying (4.13) and for any (X, U) from Gmin with constant kernel on [α, ∞) the matrices M and N in (4.14) satisfy the conditions in (4.5). Consider now the solution (X̃, Ũ) of (H) given by the initial conditions (4.4). By Theorem 4.4, it follows that (X̃, Ũ) is a conjoined basis belonging to Gmin and that (X̃, Ũ) has constant kernel on [α, ∞). Moreover, the matrix T̃ associated with (X̃, Ũ) in (3.25) then by (4.6) satisfies T̃ = M^T T M + M^T N. By using (4.14) we then obtain that T̃ = D. Therefore, the matrix D is a T-matrix associated with the conjoined basis (X̃, Ũ) from the minimal genus Gmin. □

Remark 4.10. From Theorem 4.9 it follows that the property of D being a T-matrix for a conjoined basis (X, U) from Gmin with constant kernel on [α, ∞) does not depend on the particular choice of α ∈ (α̂min, ∞). This follows from the fact that the conditions in (4.13) do not depend on α.

The statement of Theorem 4.9 can be generalized to any genus G on intervals where the abnormality of (H) is maximal.

Corollary 4.11. Assume that (1.1) holds and that system (H) is nonoscillatory. Then a matrix D ∈ R^{n×n} is a T-matrix of a conjoined basis (X, U) of (H) with constant kernel on [α, ∞) with d[α, ∞) = d∞ if and only if D satisfies the conditions in (4.13).

Proof. The statement follows from Theorem 4.9 and Proposition 3.8, since the relation "being contained" for conjoined bases of (H) with constant kernel on [α, ∞) preserves the corresponding S-matrices and hence also the T-matrices. Indeed, if (X, U) is a conjoined basis of (H) with constant kernel on [α, ∞) such that condition (3.18) is satisfied, then a minimal conjoined basis (X∗, U∗) which is contained in (X, U) on [α, ∞) has rank X∗(t) = n − d∞ on [α, ∞), by Remark 3.7. Thus, (X∗, U∗) belongs to the minimal genus Gmin and its corresponding matrix T∗ satisfies (4.13). However, by Proposition 3.8 we have T = T∗, which completes the proof. □

5. Antiprincipal solutions at infinity

In this section we introduce the antiprincipal solutions of (H) at infinity and study their properties. In particular, we prove the existence of antiprincipal solutions at infinity with any rank between n − d∞ and n for a nonoscillatory system (H), and provide a construction of all antiprincipal solutions from minimal antiprincipal solutions (see Theorems 5.8 and 5.11). In addition, we derive a criterion for antiprincipal solutions in a given genus in terms of a principal solution from this genus (see Theorem 5.13). This result then yields a limit characterization of principal solutions of (H) at infinity in terms of the antiprincipal solutions in the next section (see Theorem 6.1).

Definition 5.1 (Antiprincipal solution at infinity). A conjoined basis (X, U) of (H) is said to be an antiprincipal solution at infinity if there exists α ∈ [a, ∞) with d[α, ∞) = d∞ such that (X, U) has constant kernel on [α, ∞) and its corresponding matrix T defined in (3.25) satisfies rank T = n − d∞.

When it is clear from the context, we will drop the term "at infinity". The properties of (X, U) in Definition 5.1, namely that (X, U) has constant kernel on [α, ∞) with d[α, ∞) = d∞ and that rank T = n − d∞ with T in (3.25), are required to hold simultaneously. We can see from
Corollary 4.11 that the antiprincipal solutions of (H) are defined by the maximal possible rank of the associated matrix T. In the following remark we introduce a terminology and notation analogous to those in Section 3 for principal solutions at infinity.

Remark 5.2. Let (X, U) be an antiprincipal solution of (H) at infinity and let r be its rank from (3.4). If r = n − d∞, then (X, U) is called a minimal antiprincipal solution at infinity, while if r = n, then (X, U) is called a maximal antiprincipal solution at infinity. This terminology corresponds to the two extreme cases in formula (3.17). As before, we will use the notation (Xmin, Umin) and (Xmax, Umax) for the minimal and maximal antiprincipal solutions of (H). Moreover, if n − d∞ < r < n, then the antiprincipal solution (X, U) is called intermediate (of the rank r).

The first result of this section contains a characterization of the antiprincipal solutions of (H) in terms of the limit of S(t) as t → ∞.

Theorem 5.3. Assume (1.1). Let (X, U) be a conjoined basis of (H) with constant kernel on [α, ∞) ⊆ (α̂min, ∞) with α̂min defined in (3.27). Let S(t) and T be the matrices defined in (3.5) and (3.25). Then (X, U) is an antiprincipal solution of (H) at infinity if and only if

    lim_{t→∞} S(t) = T^†.    (5.1)
Proof. Let (X, U ) and α be as in the theorem. From the definition of αˆ min in (3.27) it follows that condition (3.18) holds, that is d[α, ∞) = d∞ . Since S † (t) → T as t → ∞ and rank S † (t) = rank S(t) = n − d∞ for large t by (3.6) and (3.14), it follows from Remark 2.2(ii), in which we take M(t) := S † (t) and M := T , that (X, U ) is an antiprincipal solution of (H) at infinity if and only if (5.1) holds. 2 Remark 5.4. Condition (5.1) in Theorem 5.3 can be replaced by the weaker (but equivalent) condition, which is only the existence of the limit of S(t) for t → ∞. This can be seen from Remark 2.2(ii). In the next statement we show that the initial point α in Definition 5.1 can be arbitrarily moved to the right side. This corresponds to the situation with the principal solutions of (H) at infinity in [20, Remark 7.11]. Theorem 5.5. Assume (1.1) and let (X, U ) be an antiprincipal solution of (H) at infinity with respect to the interval [α, ∞). Then (X, U ) is an antiprincipal solution also with respect to the interval [γ , ∞) for every γ > α. Proof. Let (X, U ) be as in the theorem and let S(t), PS ∞ , and T be the matrices in (3.5), (3.6), and (3.25) associated with (X, U ) on [α, ∞). By Definition 5.1, we have that condition (3.18) holds and rank T = n − d∞ . Fix now any γ > α. The monotonicity of the order of abnormality implies that d[γ , ∞) = d∞ . Let Sγ (t) := S(t) − S(γ ) be the S-matrix corresponding to (X, U ) on [γ , ∞) and let Tγ be the associated T -matrix, i.e., Sγ† (t) → Tγ as t → ∞. We show that Im Tγ = Im T . In [19, Equation (7.2)] we derived that
Tγ = T [PS ∞ − S(γ ) T ]† .
(5.2)
This implies that Im Tγ ⊆ Im T . Moreover, by using Proposition 3.13(i) and Remark 2.1(ii) with M := [PS ∞ − S(γ ) T ]† , formula (5.2) yields Tγ [PS ∞ − S(γ ) T ] = T [PS ∞ − S(γ ) T ]† [PS ∞ − S(γ ) T ] = T PS ∞ = T , since Im T ⊆ Im PS ∞ . This shows that Im T ⊆ Im Tγ . Therefore, Im Tγ = Im T holds. This then implies that rank Tγ = rank T = n − d∞ , i.e., (X, U ) is an antiprincipal solution of (H) at infinity also with respect to [γ , ∞), by Definition 5.1. 2 Remark 5.6. The proof of Theorem 5.5 shows that when the initial point α in the definition of S(t) and T is shifted to some point γ > α, then the image of the associated T -matrix does not change, i.e., Im Tγ = Im T . Moreover, this statement obviously holds for any conjoined basis (X, U ) with constant kernel on [α, ∞) such that d[α, ∞) = d∞ . Next we present an analogue of [20, Theorem 7.3] for the antiprincipal solutions of (H). Theorem 5.7. Assume (1.1). Let (X, U ) be an antiprincipal solution of (H) at infinity with respect to the interval [α, ∞). Then every conjoined basis of (H) with constant kernel on [α, ∞), which is either contained in (X, U ) on [α, ∞) or which contains (X, U ) on [α, ∞), is also an antiprincipal solution of (H) at infinity with respect to the interval [α, ∞). Proof. The result follows from Proposition 3.8 and Definition 5.1, since the relation “being contained” for conjoined bases of (H) with constant kernel on [α, ∞) preserves the corresponding S-matrices. 2 In the following result we characterize the nonoscillation of system (H) in terms of the existence of antiprincipal solutions of (H) at infinity with any rank between n − d∞ and n in the same spirit as in Proposition 3.15 for the principal solutions. We note that in contrast with the minimal principal solutions of (H) in Proposition 3.14, the minimal antiprincipal solutions of (H) are not in general unique (up to a right nonsingular multiple), see Remark 5.16 below. Theorem 5.8. Assume that (1.1) holds. Then the following statements are equivalent. (i) System (H) is nonoscillatory. (ii) There exists an antiprincipal solution of (H) at infinity. (iii) For any integer r satisfying n − d∞ ≤ r ≤ n there exists an antiprincipal solution of (H) at infinity with rank equal to r. Proof. If (H) is nonoscillatory, then by Theorem 4.9 for any symmetric and nonnegative definite matrix D with rank D = n − d∞ there exists a conjoined basis (X, U ) in Gmin such that X(t) has constant kernel on [α, ∞) for some α ∈ [a, ∞) and its corresponding matrix T in (3.25) satisfies T = D, i.e., rank T = n − d∞ . Since d[α, ∞) = d∞ by Theorem 4.1, we have from Remark 5.2 that (X, U ) is a minimal antiprincipal solution of (H) at infinity. Suppose now that (ii) holds and let (X, U ) be an antiprincipal solution of (H) at infinity, i.e., there exists α ∈ [a, ∞) such that (3.18) holds, (X, U ) is a conjoined basis of (H) with constant kernel on [α, ∞), and its
associated matrix T satisfies rank T = n − d∞. By Remark 3.7, for any integer r between n − d∞ and n there exists a conjoined basis (X̃, Ũ) of (H) with constant kernel and rank X̃(t) = r on [α, ∞) and such that (X̃, Ũ) is either contained in or contains (X, U) on [α, ∞). Therefore, (X̃, Ũ) is also an antiprincipal solution of (H), by Theorem 5.7, showing part (iii). Finally, if (iii) is satisfied, then system (H) is nonoscillatory by Proposition 3.1. □

If system (H) is completely controllable, then d∞ = 0. In this case the antiprincipal solutions of (H) are defined by the property that the matrix T in (3.25) is invertible, see also (1.4). Moreover, the notions of minimal and maximal (and all other) antiprincipal solutions of (H) at infinity coincide. This means that there is only one type of antiprincipal solutions of (H), i.e., the antiprincipal solutions (X, U) with X(t) invertible for large t. The following corollary to Theorem 5.8 then corresponds to [1, Theorem 3.1(ii)].

Corollary 5.9. Assume that (1.1) holds and (H) is completely controllable. System (H) is nonoscillatory if and only if there exists an antiprincipal solution (X, U) of (H) at infinity with rank equal to n, i.e., with X(t) eventually invertible.

Remark 5.10. In [20, Corollary 7.8] we derive an existence result for principal solutions at infinity of two nonoscillatory linear Hamiltonian systems. Based on Theorem 5.8, the statement in [20, Corollary 7.8] holds also for antiprincipal solutions at infinity in exactly the same form.

In the next result we provide a construction of all antiprincipal solutions of (H) at infinity from minimal antiprincipal solutions. This corresponds to [20, Theorem 7.10], where the principal solutions of (H) were considered.

Theorem 5.11. Assume that (1.1) holds and system (H) is nonoscillatory with α̂min defined in (3.27). A solution (X, U) of (H) is an antiprincipal solution at infinity if and only if (X, U) is a conjoined basis of (H) which contains some minimal antiprincipal solution of (H) at infinity on [α, ∞) for some α ∈ (α̂min, ∞).

Proof. Let (X, U) be an antiprincipal solution of (H) at infinity. This means by Definition 5.1 that (X, U) is a conjoined basis with constant kernel on [α, ∞) for some α ∈ [a, ∞) satisfying (3.18) and the corresponding matrix T in (3.25) satisfies rank T = n − d∞. By Theorem 5.5, we may assume that α > α̂min. From Remark 3.7 we know that there exists a conjoined basis (X∗, U∗) of (H) with constant kernel on [α, ∞) and with rank X∗(t) = n − d∞ on [α, ∞) such that (X, U) contains (X∗, U∗) on [α, ∞). In turn, Theorem 5.7 and Remark 5.2 imply that (X∗, U∗) is a minimal antiprincipal solution of (H) with respect to the interval [α, ∞). Conversely, if (X, U) is a conjoined basis of (H) with constant kernel on [α, ∞) ⊆ (α̂min, ∞) and such that (X, U) contains some minimal antiprincipal solution of (H) on [α, ∞), then (X, U) is also an antiprincipal solution of (H) by Theorem 5.7. □

The following two theorems classify the antiprincipal solutions of (H) in a given genus G. More precisely, in Theorem 5.12 we show that there exists an antiprincipal solution in every genus G, while in Theorem 5.13 we present a classification of all such antiprincipal solutions in terms of their Wronskian with a given principal solution at infinity from the genus G. We recall from [20, Theorem 7.12] that there exists a principal solution at infinity in every genus G.
Theorem 5.12. Assume that (1.1) holds and system (H) is nonoscillatory. Let G be a genus of conjoined bases of (H). Then there exists an antiprincipal solution of (H) in G.

Proof. By Theorem 5.8, there exists a minimal antiprincipal solution (Xmin, Umin) of (H). Suppose that (X, U) ∈ G is a conjoined basis of (H) with constant kernel on some interval [α, ∞). By Theorem 5.5 we may assume that the point α is such that (Xmin, Umin) is a minimal antiprincipal solution with respect to [α, ∞). In particular, (Xmin, Umin) has constant kernel on [α, ∞) and d[α, ∞) = d∞, by Definition 5.1 and Theorem 4.1. Furthermore, by Remark 3.7 there exists a conjoined basis (X∗, U∗) of (H) with constant kernel on [α, ∞) and with rank X∗(t) = n − d∞ on [α, ∞) such that (X, U) contains (X∗, U∗) on [α, ∞). Therefore, (Xmin, Umin) and (X∗, U∗) are minimal conjoined bases of (H) on [α, ∞) and thus Im Xmin(α) = Im X∗(α) by Remark 3.9. Moreover, Im X∗(α) ⊆ Im X(α), by the first equation in (3.20) at t = α. Denote by Rmin(t), R∗(t), R(t) the orthogonal projectors in (3.2) defined by Xmin(t), X∗(t), X(t), respectively. Then Rmin(α) = R∗(α) and Im Rmin(α) = Im R∗(α) ⊆ Im R(α). By Proposition 3.6(ii) with (X, U) := (Xmin, Umin), R := Rmin, R̃α := R(α), and arbitrary P̃α satisfying (3.21), there exists a conjoined basis (X̃, Ũ) of (H) with constant kernel on [α, ∞), which contains (Xmin, Umin) on [α, ∞) and Im X̃(α) = Im R(α). According to Theorem 5.7, the conjoined basis (X̃, Ũ) is an antiprincipal solution of (H) with respect to [α, ∞) and Im X̃(α) = Im R(α) = Im X(α). Hence, Im X̃(t) = Im X(t) on [α, ∞) by Remark 3.4(ii). This means that the antiprincipal solution (X̃, Ũ) belongs to the genus G, which completes the proof. □

Theorem 5.13. Assume that (1.1) holds and system (H) is nonoscillatory. Let G be a genus of conjoined bases of (H). Let (X̂, Û) be a principal solution of (H) at infinity belonging to G and let (X, U) be a conjoined basis from G. Denote by PŜ∞ and PS∞ their associated projectors in (3.6) and Remark 4.2. Then (X, U) is an antiprincipal solution of (H) at infinity if and only if the (constant) Wronskian N̂ := X̂^T(t) U(t) − Û^T(t) X(t) of (X̂, Û) and (X, U) satisfies

rank PŜ∞ N̂ PS∞ = n − d∞.    (5.3)
Proof. Let (X̂, Û) and (X, U) be as in the theorem. Then there exists α > α̂min such that (X̂, Û) and (X, U) have constant kernel on [α, ∞). Let P̂ and P be the orthogonal projectors in (3.3) associated with (X̂, Û) and (X, U). By (3.27) we have d[α, ∞) = d∞. Since (X̂, Û) and (X, U) belong to the same genus G, we may assume without loss of generality that Im X̂(t) = Im X(t) on [α, ∞). Thus by Proposition 3.3, the conjoined bases (X̂, Û) and (X, U) are mutually representable on [α, ∞). Furthermore, denote by (X̂∗, Û∗) and (X∗, U∗) the minimal conjoined bases of (H) on [α, ∞), which are contained in (X̂, Û) and (X, U) on [α, ∞), respectively. In particular, (X̂∗, Û∗) is a minimal principal solution of (H) and hence, the matrix T̂∗ = 0 in (3.25). We apply the representations of (X̂, Û), (X, U) and of (X̂∗, Û∗), (X∗, U∗) in Proposition 3.3 and Remark 3.9. More precisely, with (X1, U1) := (X̂, Û), (X2, U2) := (X, U), (X∗1, U∗1) := (X̂∗, Û∗), (X∗2, U∗2) := (X∗, U∗), P1 := P̂, P2 := P, PS1∞ := PŜ∞, PS2∞ := PS∞, and N1 := N̂, the Wronskian N̂ satisfies P̂ N̂ = N̂ and P N̂^T = N̂^T, and there exist matrices M̂, M̂∗, N̂∗ such that M̂ and M̂∗ are invertible, M̂^T N̂ and M̂∗^T N̂∗ are symmetric, and

X∗(α) = X̂∗(α) M̂∗,    U∗(α) = Û∗(α) M̂∗ + X̂∗^{†T}(α) N̂∗,    (5.4)
P̂ M̂ PS∞ = PŜ∞ M̂∗,    P M̂^{-1} PŜ∞ = PS∞ M̂∗^{-1},    N̂∗ M̂∗^{-1} = PŜ∞ N̂ M̂^{-1} PŜ∞.    (5.5)
By using (5.5) and the equality N̂ P = N̂ we then obtain

N̂∗ M̂∗^{-1} = PŜ∞ N̂ P M̂^{-1} PŜ∞ = PŜ∞ N̂ PS∞ M̂∗^{-1}.    (5.6)
Now let (X, U) be an antiprincipal solution of (H) at infinity. Then also (X∗, U∗) is an antiprincipal solution, by Theorem 5.7. From (4.10) we have that the matrix T∗ in (3.25) defined through (X∗, U∗) satisfies rank T∗ = rank(N̂∗ M̂∗^{-1} + T̂∗) = rank N̂∗ M̂∗^{-1}. Since rank T∗ = n − d∞ by Definition 5.1, we get from (5.6) that rank PŜ∞ N̂ PS∞ M̂∗^{-1} = rank N̂∗ M̂∗^{-1} = n − d∞, i.e., formula (5.3) holds. Conversely, if (5.3) is satisfied, then from (5.6) we have rank N̂∗ M̂∗^{-1} = n − d∞. Therefore, rank T∗ = n − d∞ by (4.10), and so (X∗, U∗) is an antiprincipal solution of (H) at infinity. Finally, Theorem 5.7 implies that (X, U) is an antiprincipal solution as well. □

As a corollary of Theorem 5.13 we obtain a characterization of antiprincipal solutions of (H) at infinity in the minimal genus.

Corollary 5.14. Assume that (1.1) holds and system (H) is nonoscillatory. Let (X̂min, Ûmin) be the minimal principal solution of (H) at infinity and let (X, U) be a minimal conjoined basis of (H). Denote by N̂ := X̂min^T(t) U(t) − Ûmin^T(t) X(t) the (constant) Wronskian of (X̂min, Ûmin) and (X, U). Then (X, U) is a minimal antiprincipal solution of (H) at infinity if and only if rank N̂ = n − d∞.

Proof. By Theorem 5.13 and its proof with P̂ := PŜ∞ and P := PS∞, we have PŜ∞ N̂ = N̂ and PS∞ N̂^T = N̂^T. Therefore, PŜ∞ N̂ PS∞ = N̂ and the statement follows from (5.3). □

In the following result we present an interesting class of antiprincipal solutions at infinity. In particular, the principal solutions at the points α > α̂min are examples of minimal antiprincipal solutions at infinity. This observation also reveals the complicated structure of the set of all antiprincipal solutions at infinity, see Remark 5.16 below.

Proposition 5.15. Assume that (1.1) holds and system (H) is nonoscillatory with α̂min defined in (3.27). Then for every α > α̂min the principal solution (X̂α, Ûα) at the point α is a minimal antiprincipal solution of (H) at infinity.

Proof. Let α > α̂min be fixed. From Proposition 3.14 we know that there exists the minimal principal solution (X̂min, Ûmin) of (H) at infinity with constant kernel on the interval (α̂min, ∞). In order to simplify the notation, we put (X, U) := (X̂α, Ûα) and (X̂, Û) := (X̂min, Ûmin). Let P̂, R̂(t), Ŝ(t), and PŜ∞ be the matrices in (3.3), (3.2), (3.5), (3.6), and Remark 4.2 defined through the function X̂(t) on [α, ∞). In particular, P̂ = PŜ∞. By [19, Theorem 5.2], we have

X(t) = X̂(t) Ŝ(t) X̂^T(α),    rank Ŝ(t) = rank X(t),    t ∈ [α, ∞).    (5.7)
Let β ≥ α be such that (X, U) has constant kernel on [β, ∞). Since (X, U) ∈ Gmin by Proposition 4.7, we have rank X(t) = n − d∞ on [β, ∞) and d[β, ∞) = d∞. Therefore, we obtain from the second formula in (5.7) that rank Ŝ(t) = n − d∞ on [β, ∞). Consequently, Im Ŝ(t) = Im PŜ∞ on [β, ∞), by the definition of PŜ∞ in (3.6). We will show that

X^†(t) = X̂^{†T}(α) Ŝ^†(t) X̂^†(t),    t ∈ [β, ∞).    (5.8)
Setting M := X̂(t) Ŝ(t) X̂^T(α) and N := X̂^{†T}(α) Ŝ^†(t) X̂^†(t) for fixed t ∈ [β, ∞), we verify that the four equations in (2.1) are satisfied. The identities X̂^†(t) X̂(t) = P̂, X̂(t) X̂^†(t) = R̂(t), and Ŝ^†(t) Ŝ(t) = PŜ∞ imply that N M = R̂(α) and M N = R̂(t) are symmetric. Moreover,

N M N = (N M) N = R̂(α) X̂^{†T}(α) Ŝ^†(t) X̂^†(t) = X̂^{†T}(α) Ŝ^†(t) X̂^†(t) = N,
M N M = (M N) M = R̂(t) X̂(t) Ŝ(t) X̂^T(α) = X̂(t) Ŝ(t) X̂^T(α) = M.

It follows from Remark 2.1 that M^† = N and hence, formula (5.8) holds. Now we construct the matrix Sβ(t) in (3.5) through the function X(t) on [β, ∞), that is, we set

Sβ(t) := ∫_β^t X^†(s) B(s) X^{†T}(s) ds,    t ∈ [β, ∞).    (5.9)
Inserting (5.8) into (5.9) and using the equality Ŝ′(t) = X̂^†(t) B(t) X̂^{†T}(t) on [β, ∞) and Remark 2.2 with M(t) := Ŝ(t), we obtain

Sβ(t) = X̂^{†T}(α) ( ∫_β^t Ŝ^†(s) Ŝ′(s) Ŝ^†(s) ds ) X̂^†(α) = −X̂^{†T}(α) ( ∫_β^t [Ŝ^†(s)]′ ds ) X̂^†(α)    (5.10)
on [β, ∞). Performing the integration in (5.10) yields the formula

Sβ(t) = X̂^{†T}(α) [ Ŝ^†(β) − Ŝ^†(t) ] X̂^†(α),    t ∈ [β, ∞).    (5.11)
Finally, since Ŝ^†(t) → 0 as t → ∞, equality (5.11) implies that the function Sβ(t) has the limit X̂^{†T}(α) Ŝ^†(β) X̂^†(α) as t → ∞. Thus, according to Remark 5.4 and Theorem 5.3 the conjoined basis (X, U) is an antiprincipal solution of (H) at infinity. From Proposition 4.7 we know that (X, U) belongs to the minimal genus Gmin, i.e., (X, U) is a minimal antiprincipal solution of (H) at infinity. Note that by (5.1) the matrix Tβ in (3.25) associated with (X, U) satisfies Tβ^† = X̂^{†T}(α) Ŝ^†(β) X̂^†(α), and hence Tβ = X̂(α) Ŝ(β) X̂^T(α) by Remark 2.1(i). This additional information is however not needed in the proof. □

Remark 5.16. The result of Proposition 5.15 shows that, in contrast to the minimal principal solution of (H) at infinity (Proposition 3.14), a minimal antiprincipal solution of (H) at infinity is not uniquely determined. Thus, one cannot expect to have a unifying classification of all minimal antiprincipal solutions at infinity in the spirit of [20, Theorem 7.13], see also Remark 4.8. Moreover, the nonuniqueness of antiprincipal solutions at infinity in the minimal genus Gmin implies the same property for all antiprincipal solutions in every other genus G.
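The key step of the proof of Proposition 5.15 is the identification M^† = N through the four Moore–Penrose equations of Remark 2.1. The following small numerical sketch (our illustration only; the helper name is not from the paper) shows how such a candidate pseudoinverse can be tested for an arbitrary matrix.

    import numpy as np

    def satisfies_penrose(M, N, tol=1e-10):
        # The four Moore-Penrose equations: M N M = M, N M N = N,
        # and both M N and N M are symmetric; together they force N = M^dagger.
        return (np.allclose(M @ N @ M, M, atol=tol)
                and np.allclose(N @ M @ N, N, atol=tol)
                and np.allclose((M @ N).T, M @ N, atol=tol)
                and np.allclose((N @ M).T, N @ M, atol=tol))

    rng = np.random.default_rng(1)
    M = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 5))  # a rank-deficient 5x5 matrix
    print(satisfies_penrose(M, np.linalg.pinv(M)))   # True
    print(satisfies_penrose(M, M.T))                 # generically False

In the proof above this check is carried out symbolically, with M = X̂(t) Ŝ(t) X̂^T(α) and N = X̂^{†T}(α) Ŝ^†(t) X̂^†(t).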
When system (H) is completely controllable, we obtain from Theorem 5.13 and Proposition 5.15 an interesting characterization of its antiprincipal solutions at infinity. This result is new even in this controllable case.
Corollary 5.17. Assume that (1.1) holds and system (H) is nonoscillatory and completely controllable. Let (X̂, Û) be the principal solution of (H) at infinity. Then a conjoined basis (X, U) is an antiprincipal solution of (H) at infinity if and only if the (constant) Wronskian N̂ := X̂^T(t) U(t) − Û^T(t) X(t) of (X̂, Û) and (X, U) is invertible. In particular, for every α > α̂min the principal solution (X̂α, Ûα) at the point α is antiprincipal at infinity. More generally, for α ∈ [a, ∞) the principal solution (X̂α, Ûα) at the point α is antiprincipal at infinity if and only if X̂(α) is invertible.

Proof. If (H) is completely controllable, then d∞ = 0 and for every conjoined basis (X, U) of (H) the function X(t) is eventually invertible. This means that there is only one (maximal) genus of conjoined bases, see also [20, Remark 7.15]. The orthogonal projectors PS∞ and PŜ∞ in (3.5) and Remark 4.2 associated with (X, U) and (X̂, Û) in this case satisfy PS∞ = I = PŜ∞. The first part of the statement now follows directly from Theorem 5.13, while the second part follows from Proposition 5.15. Finally, if (X̂α, Ûα) is the principal solution of (H) at some point α ∈ [a, ∞), then X̂α(α) = 0 and Ûα(α) = I and hence N̂ = X̂^T(α). This means by the first part that (X̂α, Ûα) is an antiprincipal solution at infinity if and only if X̂(α) is invertible. □

In the last result of this section we present a construction of antiprincipal solutions at infinity with a given rank from the antiprincipal solutions of systems with lower dimensions. This is an extension of [19, Theorem 7.8] and [20, Theorem 7.17], where principal solutions at infinity were considered. Therefore, with system (H) we consider another linear Hamiltonian system

x̄′ = Ā(t) x̄ + B̄(t) ū,    ū′ = C̄(t) x̄ − Ā^T(t) ū,    t ∈ [a, ∞),    (H̄)

where Ā(t), B̄(t), C̄(t) are given n̄ × n̄ piecewise continuous matrices on [a, ∞) such that

B̄(t) and C̄(t) are symmetric and B̄(t) ≥ 0 on [a, ∞).    (5.12)
From systems (H) and (H̄) we construct the “augmented” linear Hamiltonian system

x∗′ = A∗(t) x∗ + B∗(t) u∗,    u∗′ = C∗(t) x∗ − A∗^T(t) u∗,    t ∈ [a, ∞),    (H∗)
where A∗, B∗, C∗ ∈ Cp are (n + n̄) × (n + n̄) matrices defined on [a, ∞) by

A∗(t) := ( A(t)   0    )      B∗(t) := ( B(t)   0    )      C∗(t) := ( C(t)   0    )
         ( 0      Ā(t) ),              ( 0      B̄(t) ),              ( 0      C̄(t) ).
Theorem 5.18. Assume that the Legendre conditions (1.1) and (5.12) hold and that the systems (H) and (H̄) are nonoscillatory. Let (X, U) and (X̄, Ū) be antiprincipal solutions of (H) and (H̄) with rank equal to r and r̄, respectively. Then the pair (X∗, U∗) defined by

X∗(t) := ( X(t)   0    )      U∗(t) := ( U(t)   0    )
         ( 0      X̄(t) ),              ( 0      Ū(t) ),        t ∈ [a, ∞),    (5.13)

is an antiprincipal solution of system (H∗) at infinity with rank equal to r + r̄. Consequently, the antiprincipal solution (X∗, U∗) constructed in (5.13) is minimal (maximal) if and only if the antiprincipal solutions (X, U) and (X̄, Ū) are minimal (maximal).
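Before the proof we note that the rank statement in Theorem 5.18 reduces to the elementary fact that the rank of a block diagonal matrix is the sum of the ranks of its blocks. A small numerical sketch of the assembly in (5.13) follows (ours; the concrete matrices are arbitrary placeholders standing in for the first components X(t) and X̄(t) at a fixed large t).

    import numpy as np

    X = np.diag([5.0, 0.0])                              # rank r = 1  (n = 2)
    Xbar = np.array([[1.0, 0.0, 3.0],
                     [0.0, 2.0, 0.0],
                     [0.0, 0.0, 0.0]])                   # rank rbar = 2  (nbar = 3)

    # the first component X*(t) of the augmented solution, built as in (5.13)
    Xstar = np.block([[X, np.zeros((2, 3))],
                      [np.zeros((3, 2)), Xbar]])

    print(np.linalg.matrix_rank(X),
          np.linalg.matrix_rank(Xbar),
          np.linalg.matrix_rank(Xstar))                  # 1 2 3, i.e., rank X* = r + rbar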
Proof. The statement follows from Definition 5.1 and Theorem 5.5 by exactly the same arguments as in [20, Theorem 7.17]. □

6. Limit properties of principal and antiprincipal solutions at infinity

In this section we derive a limit characterization of principal solutions of (H) at infinity in the sense of (1.3). This can be regarded as a generalization of the classical limit result for principal solutions at infinity of controllable linear Hamiltonian systems (see Corollary 6.6). Below we use the same notation as in Theorem 5.13 and its proof.

Theorem 6.1. Assume that (1.1) holds and system (H) is nonoscillatory with α̂min defined in (3.27). Let (X̂, Û) and (X, U) be two conjoined bases of (H) from a given genus G and let α > α̂min be such that (X̂, Û) and (X, U) have constant kernel on [α, ∞). Denote by P̂, PŜ∞, and PS∞ their associated projectors in (3.3), (3.6), and Remark 4.2. Moreover, let N̂ := X̂^T(t) U(t) − Û^T(t) X(t) be the (constant) Wronskian of (X̂, Û) and (X, U). Then (X̂, Û) is a principal solution of (H) at infinity and rank PŜ∞ N̂ PS∞ = n − d∞ if and only if

lim_{t→∞} X^†(t) X̂(t) = L    with    Im L^T = Im(P̂ − PŜ∞).    (6.1)
In this case (X, U) is an antiprincipal solution of (H) at infinity.

Proof. With α as in the theorem, let Ŝ(t) and S(t) be the S-matrices in (3.5) which are associated with (X̂, Û) and (X, U) on [α, ∞). By Remark 3.4(ii), we have on [α, ∞) the representation

X(t) = X̂(t) [P̂ M̂ + Ŝ(t) N̂],    X̂(t) = X(t) [P M̂^{-1} − S(t) N̂^T],    (6.2)

where M̂ is invertible, see the proof of Theorem 5.13. By using (6.2) and the identities X^†(t) X(t) = P and P S(t) = S(t) on [α, ∞), we obtain

X^†(t) X̂(t) = P M̂^{-1} − S(t) N̂^T    on [α, ∞).    (6.3)
Let T̂∗ and T∗ be the matrices in (3.25) defined through the minimal conjoined bases (X̂∗, Û∗) and (X∗, U∗) from the proof of Theorem 5.13. It follows by (3.26) that

T∗ = M̂∗^T T̂∗ M̂∗ + M̂∗^T N̂∗.    (6.4)

Suppose now that (X̂, Û) is a principal solution of (H) at infinity and (5.3) holds. Then T̂∗ = 0 and (X, U) is an antiprincipal solution of (H) at infinity, by Theorem 5.13. This means that Im T∗ = Im PS∞, because Im T∗ ⊆ Im PS∞ and rank T∗ = n − d∞ = rank PS∞, by Remark 3.12, Definition 5.1, and (3.14). Formula (6.4) then becomes T∗ = M̂∗^T N̂∗ = N̂∗^T M̂∗. Multiplying this equality by T∗^† from the left and by M̂∗^{-1} from the right and using T∗^† T∗ = PS∞ yields

PS∞ M̂∗^{-1} = T∗^† N̂∗^T.    (6.5)

Furthermore, by Theorem 5.3 and (6.3) we know that
lim_{t→∞} X^†(t) X̂(t) = lim_{t→∞} [P M̂^{-1} − S(t) N̂^T] = L := P M̂^{-1} − T∗^† N̂^T.    (6.6)
We show that Im L^T = Im(P̂ − PŜ∞) = Im P̂ ∩ Ker PŜ∞. By using (6.6) and the identities P M̂^{-1} P̂ = P M̂^{-1}, N̂^T P̂ = N̂^T, we get L P̂ = L, i.e., Im L^T ⊆ Im P̂. Moreover, the equality T∗^† PS∞ = T∗^† and the formulas (5.5), (5.6), and (6.5) imply that

L PŜ∞ = P M̂^{-1} PŜ∞ − T∗^† PS∞ N̂^T PŜ∞ = PS∞ M̂∗^{-1} − T∗^† N̂∗^T = 0.    (6.7)
Thus, Im L^T ⊆ Ker PŜ∞. Hence, we proved that Im L^T ⊆ Im P̂ ∩ Ker PŜ∞. Now we show the opposite inclusion Im P̂ ∩ Ker PŜ∞ ⊆ Im L^T, which is equivalent with Ker L ⊆ Im PŜ∞ ⊕ Ker P̂. Let v ∈ Ker L. Then v can be uniquely decomposed as v = v1 + v2 with v1 ∈ Im P̂ and v2 ∈ Ker P̂. The identity L P̂ = L then implies that (P M̂^{-1} − T∗^† N̂^T) v1 = L v1 = 0 and hence, P M̂^{-1} v1 = T∗^† N̂^T v1. The vector w := P M̂^{-1} v1 therefore satisfies w ∈ Im T∗^† = Im PS∞. By using the equalities P̂ M̂ P M̂^{-1} = P̂, P̂ v1 = v1, PS∞ w = w, and the first formula in (5.5), we get v1 = P̂ M̂ P M̂^{-1} v1 = P̂ M̂ w = P̂ M̂ PS∞ w = PŜ∞ M̂∗ w, and hence, v1 ∈ Im PŜ∞. This shows that v = v1 + v2 ∈ Im PŜ∞ ⊕ Ker P̂, which completes the proof in this direction.

Conversely, assume that (6.1) is satisfied. Denote by L0 := P M̂^{-1} − L, where L is given in (6.1). Then by (6.3) we get S(t) N̂^T → L0 as t → ∞. The equality S(t) = S(t) PS∞ now implies that Ker PS∞ N̂^T ⊆ Ker L0, and similarly the equality S(t) = PS∞ S(t) implies that Im L0 ⊆ Im PS∞. In particular, we have rank L0 ≤ rank PS∞ N̂^T. Moreover, by using the identities P M̂^{-1} PŜ∞ = PS∞ M̂∗^{-1} and L PŜ∞ = 0 we get L0 PŜ∞ = PS∞ M̂∗^{-1}, which implies that Im PS∞ ⊆ Im L0. Hence, we have Im L0 = Im PS∞ and rank L0 = rank PS∞ = n − d∞. In turn, the inequality n − d∞ = rank L0 ≤ rank PS∞ N̂^T holds. On the other hand, we have rank PS∞ N̂^T ≤ rank PS∞ = n − d∞. Thus, we conclude that rank PS∞ N̂^T = n − d∞. The definition of T∗ in (3.25) now yields

PS∞ N̂^T = lim_{t→∞} S^†(t) S(t) N̂^T = lim_{t→∞} S^†(t) · lim_{t→∞} S(t) N̂^T = T∗ L0.    (6.8)
We thus obtain from (6.8) the inequality n − d∞ = rank PS∞ N̂^T = rank T∗ L0 ≤ rank T∗. Using the third condition in (4.13) we obtain that rank T∗ = n − d∞. This shows that (X, U) is an antiprincipal solution of (H) at infinity. Moreover, by using (5.6), (6.8), the symmetry of N̂∗ M̂∗^{-1}, and the equalities L0 PŜ∞ = PS∞ M̂∗^{-1} and T∗ PS∞ = T∗, we get

N̂∗ M̂∗^{-1} = M̂∗^{T−1} PS∞ N̂^T PŜ∞ = M̂∗^{T−1} T∗ L0 PŜ∞ = M̂∗^{T−1} T∗ PS∞ M̂∗^{-1} = M̂∗^{T−1} T∗ M̂∗^{-1}.

This implies that T∗ = M̂∗^T N̂∗. From (6.4) we now obtain that M̂∗^T T̂∗ M̂∗ = 0, i.e., T̂∗ = 0, as the matrix M̂∗ is invertible. Therefore, (X̂, Û) is a principal solution of (H) at infinity. Finally, Theorem 5.13 yields the equality rank PŜ∞ N̂ PS∞ = n − d∞, which completes the proof. □

Remark 6.2. From the proof of Theorem 6.1 it follows that the equality in the second condition in (6.1) can be equivalently replaced by the inclusion Im L^T ⊆ Im(P̂ − PŜ∞).
The following result shows that the limit in (6.1) always exists for any conjoined basis (X̃, Ũ) from G instead of the principal solution (X̂, Û) at infinity, when (X, U) happens to be an antiprincipal solution at infinity from G. In this case we have an additional information about the structure of the space Im L^T in (6.1).

Theorem 6.3. Assume that (1.1) holds and system (H) is nonoscillatory with α̂min defined in (3.27). Let (X̃, Ũ) and (X, U) be two conjoined bases from a given genus G, such that (X, U) is an antiprincipal solution of (H) at infinity and such that (X̃, Ũ) and (X, U) have constant kernel on [α, ∞) with some α > α̂min. Let P̃, PS̃∞, T̃ be the matrices in (3.3), (3.6), (3.25) defined through the function X̃(t). Then the limit of X^†(t) X̃(t) as t → ∞ exists and satisfies

lim_{t→∞} X^†(t) X̃(t) = L    with    Im L^T = Im T̃ ⊕ Im(P̃ − PS̃∞).    (6.9)

Proof. We proceed similarly as in the proof of Theorem 6.1 with (X̃, Ũ) instead of (X̂, Û), since some of those arguments were independent of the fact that (X̂, Û) was the principal solution at infinity. Let Ñ := X̃^T(t) U(t) − Ũ^T(t) X(t) be the (constant) Wronskian of (X̃, Ũ) and (X, U). Then as in (6.2) and (6.3) we have on [α, ∞)

X(t) = X̃(t) [P̃ M̃ + S̃(t) Ñ],    X̃(t) = X(t) [P M̃^{-1} − S(t) Ñ^T],
X^†(t) X̃(t) = P M̃^{-1} − S(t) Ñ^T,

where M̃ is invertible. Let T̃∗ and T∗ be the matrices in (3.25) defined through the minimal conjoined bases (X̃∗, Ũ∗) and (X∗, U∗), which are contained in (X̃, Ũ) and (X, U) on [α, ∞), respectively, see the proof of Theorem 5.13. Then by (3.26) we have

T∗ = M̃∗^T T̃∗ M̃∗ + M̃∗^T Ñ∗.    (6.10)
Since (X, U) is an antiprincipal solution at infinity, Im T∗ = Im PS∞ and as in (6.6) we get

lim_{t→∞} X^†(t) X̃(t) = lim_{t→∞} [P M̃^{-1} − S(t) Ñ^T] = L := P M̃^{-1} − T∗^† Ñ^T    (6.11)

and L P̃ = L. Moreover, Ker L ⊆ Im PS̃∞ ⊕ Ker P̃, which shows that every vector v ∈ Ker L can be uniquely decomposed as v = v1 + v2, where v1 ∈ Im PS̃∞ and v2 ∈ Ker P̃. The vector w := P M̃^{-1} v1 then satisfies w ∈ Im PS∞, v1 = PS̃∞ M̃∗ w, and w = T∗^† Ñ^T v1, see the paragraph following formula (6.7). In particular, by combining the last two equalities and by using the identities T∗^† PS∞ = T∗^† and PS∞ Ñ^T PS̃∞ = Ñ∗^T we obtain

w = T∗^† Ñ^T PS̃∞ M̃∗ w = T∗^† PS∞ Ñ^T PS̃∞ M̃∗ w = T∗^† Ñ∗^T M̃∗ w.    (6.12)
We shall derive some additional properties of the matrix L, which are needed for the statement of this theorem. In particular, we prove the formula

Ker L = ( Ker T̃∗ ∩ Im PS̃∞ ) ⊕ Ker P̃.    (6.13)
Let v ∈ Ker L and let v1, v2, and w be its associated vectors defined above. Multiplying formula (6.12) by T∗ from the left together with the identities T∗ T∗^† = PS∞ and PS∞ Ñ∗^T = Ñ∗^T yields T∗ w = Ñ∗^T M̃∗ w = M̃∗^T Ñ∗ w. By using (6.10) in the last equality, we get M̃∗^T T̃∗ M̃∗ w = 0. The invertibility of M̃∗ and the equality T̃∗ = T̃∗ PS̃∞ then imply that T̃∗ PS̃∞ M̃∗ w = 0. Therefore, the vector v1 = PS̃∞ M̃∗ w satisfies v1 ∈ Ker T̃∗ ∩ Im PS̃∞. Hence, the inclusion ⊆ in (6.13) holds. Conversely, assume that v ∈ ( Ker T̃∗ ∩ Im PS̃∞ ) ⊕ Ker P̃. Then we can write v = v1 + v2 with v1 ∈ Ker T̃∗ ∩ Im PS̃∞ and v2 ∈ Ker P̃. Since L P̃ = L, it follows from (6.11) that L v = L v1 = (P M̃^{-1} − T∗^† Ñ^T) v1. By using the identities v1 = PS̃∞ v1, P M̃^{-1} PS̃∞ = PS∞ M̃∗^{-1}, T∗^† PS∞ = T∗^†, PS∞ Ñ^T PS̃∞ = Ñ∗^T, and T∗^† T∗ = PS∞, we then get

L v = (P M̃^{-1} − T∗^† Ñ^T) PS̃∞ v1 = (PS∞ M̃∗^{-1} − T∗^† Ñ∗^T) v1 = T∗^† ( T∗ M̃∗^{-1} − Ñ∗^T ) v1.    (6.14)

Moreover, equality (6.10), the invertibility of M̃∗, and the symmetry of M̃∗^T Ñ∗ imply that T∗ M̃∗^{-1} − Ñ∗^T = M̃∗^T T̃∗. Therefore, formula (6.14) yields that L v = T∗^† M̃∗^T T̃∗ v1 = 0, because v1 ∈ Ker T̃∗. This shows that v ∈ Ker L, i.e., the inclusion ⊇ in (6.13) is satisfied as well. Therefore, (6.13) is proven. According to Proposition 3.8, we have S̃(t) = S̃∗(t) on [α, ∞) and hence, T̃ = T̃∗. This means that the matrix T̃∗ in (6.13) can be replaced by T̃. Finally, by using Im T̃ ∩ Ker PS̃∞ ⊆ Im PS̃∞ ∩ Ker PS̃∞ = {0} and Im T̃ ⊆ Im P̃, we get

Im L^T = ( Ker L )^⊥ = ( Im T̃ ⊕ Ker PS̃∞ ) ∩ Im P̃ = Im T̃ ⊕ ( Ker PS̃∞ ∩ Im P̃ ),

which is the second condition in (6.9). The proof is complete. □
Remark 6.4. If (H) is nonoscillatory, we introduce for every genus G its rank and defect as follows. The number rank G is defined as the rank of (X̃, Ũ), where (X̃, Ũ) is any conjoined basis from G. This quantity is well defined, since any two conjoined bases from G have eventually the same image of their first components. Then n − d∞ ≤ rank G ≤ n. Also, we define def G := n − rank G, for which 0 ≤ def G ≤ d∞. From Theorem 6.3 it follows that rank L = rank T̃ + d∞ − def G, since by (6.9) we have rank L = rank T̃ + rank P̃ − rank PS̃∞, while rank P̃ = rank G and rank PS̃∞ = n − d∞. Therefore, the actual value of the rank of L depends primarily on the rank of T̃. In particular, the rank of L is minimal if and only if the conjoined basis (X̃, Ũ) is a principal solution of (H) at infinity. This property is well known in the controllable case, for which d∞ = 0 = def G and hence, rank L = rank T̃, compare also with Corollary 6.6 below.

The statement of Theorem 6.1 is particularly simple for the minimal genus Gmin.

Theorem 6.5. Assume that (1.1) holds and system (H) is nonoscillatory. Let (X̂, Û) and (X, U) be two conjoined bases of (H) from the minimal genus Gmin. Moreover, denote by N̂ := X̂^T(t) U(t) − Û^T(t) X(t) the (constant) Wronskian of (X̂, Û) and (X, U). Then (X̂, Û) is a minimal principal solution of (H) at infinity and rank N̂ = n − d∞ if and only if
lim_{t→∞} X^†(t) X̂(t) = 0.    (6.15)
In this case (X, U) is a minimal antiprincipal solution of (H) at infinity.

Proof. Let (X̂, Û) and (X, U) be as in the theorem and let α > α̂min be such that (X̂, Û) and (X, U) have constant kernel on [α, ∞). Then (X̂, Û) and (X, U) are minimal conjoined bases on [α, ∞). Moreover, let P̂, P, PŜ∞, and PS∞ be the corresponding matrices in (3.3), (3.6), and Remark 4.2. Then P̂ = PŜ∞, P = PS∞, and PŜ∞ N̂ PS∞ = N̂. The statement now follows from Theorem 6.1. □

The result in Theorem 6.5 gives the classical limit characterization of the principal solutions at infinity of a completely controllable system (H), see [4, Proposition 4, p. 43], [18, Theorem VII.3.2], and [7, Theorem XI.10.5]. In this case d∞ = 0 and there is only one (that is, minimal) genus of conjoined bases of (H). We recall from Proposition 3.14 that the (minimal) principal solution at infinity is in this case unique up to a right nonsingular multiple.

Corollary 6.6. Assume that (1.1) holds and system (H) is nonoscillatory and completely controllable. Let (X̂, Û) and (X, U) be two conjoined bases of (H) with N̂ being their Wronskian. Then (X̂, Û) is the principal solution of (H) at infinity and N̂ is invertible if and only if

lim_{t→∞} X^{-1}(t) X̂(t) = 0.    (6.16)
In this case (X, U) is an antiprincipal solution of (H) at infinity.

We conclude this section with some remarks and observations, which are related to the results of this paper or to the subsequent research in this area.

Remark 6.7. (i) In Theorem 4.4 and Corollary 4.6 we studied the structure of the minimal genus Gmin. We believe that similar properties can be derived for an arbitrary genus G.

(ii) From Corollary 4.11 it follows that for any integer p between 0 and n − d∞ there exists a conjoined basis (X, U) of (H), whose matrix T in (3.25) has its rank equal to p. In addition, the conjoined basis (X, U) can be chosen from any specified genus G.

(iii) By [20, Theorem 7.6] and Theorem 5.8, the nonoscillation of system (H) is equivalent with the existence of principal solutions at infinity (corresponding to the minimal rank of T) or antiprincipal solutions at infinity (corresponding to the maximal rank of T), and these solutions have any rank between n − d∞ and n. We believe that this property holds also for the conjoined bases of (H), whose matrix T satisfies 0 < rank T < n − d∞. Such intermediate or “nonstandard” solutions are present even in the controllable case (d∞ = 0) when n ≥ 2, but were never considered in the literature.

(iv) The limit characterization of principal solutions of (H) at infinity in Theorems 6.1 and 6.3 uses the conjoined bases from the same genus G. We expect that it is possible to derive a limit property in the spirit of (6.1) or (6.9) for conjoined bases belonging to two different genera G1 and G2.
7. Examples

In this section we present several examples, which illustrate the theory of antiprincipal solutions at infinity and compare it to the theory of principal solutions at infinity. In order to shorten the presentation, we use the examples from [20, Section 8], where the principal solutions at infinity with different ranks were considered. In agreement with the notation in Section 3 and Remark 5.2, the principal solutions at infinity will be denoted by (X̂, Û), in the special case of minimal and maximal principal solutions at infinity by (X̂min, Ûmin) and (X̂max, Ûmax). Similarly, the antiprincipal solutions at infinity will be denoted by (X, U), in the special case of minimal and maximal antiprincipal solutions at infinity by (Xmin, Umin) and (Xmax, Umax).

Example 7.1. In the first example we discuss a controllable linear Hamiltonian system. Let n = 1, a = 0, A(t) = 0, B(t) = 1 + t^2, and C(t) = −2/(1 + t^2)^2, which implies that this system corresponds to the second order Sturm–Liouville equation [y′/(1 + t^2)]′ + 2y/(1 + t^2)^2 = 0. Since B(t) > 0 on [0, ∞), system (H) is completely controllable on [0, ∞) and d[0, ∞) = d∞ = 0. Therefore, there is only one (minimal/maximal) genus G of conjoined bases with the rank r = n = 1, i.e., the minimal and maximal (anti)principal solutions at infinity coincide. By [20, Example 8.1], the principal solutions at infinity are nonzero multiples of

( X̂(t), Û(t) ) = ( t, 1/(1 + t^2) ),    (7.1)
with α̂min = 0, by (3.27). On the other hand, by Corollary 5.17 the antiprincipal solutions at infinity are nonzero multiples of the principal solutions at the points α > 0. For example,

( X(t), U(t) ) = ( t^2 − 1, 2t/(1 + t^2) )    (7.2)
is an antiprincipal solution at infinity, being at the same time the principal solution at the point α = 1 (see Corollary 5.17). Moreover, the solutions in (7.1) and (7.2) satisfy X^{-1}(t) X̂(t) = t/(t^2 − 1) → 0 as t → ∞, as we claim in formula (6.16) of Corollary 6.6.

Example 7.2. We consider the so-called zero system (H) with n × n coefficient matrices A(t) = B(t) = C(t) ≡ 0 on [a, ∞). This system is extremely abnormal, because d[a, ∞) = d∞ = n. From [20, Example 8.2] and Definition 5.1 it then follows that every conjoined basis of (H) is constant on [a, ∞) and that all conjoined bases are simultaneously both principal and antiprincipal solutions at infinity with respect to the interval [a, ∞), with α̂min = a. Moreover, for any genus G there exists a unique orthogonal projector P ∈ R^{n×n} such that the conjoined basis (X̂, Û) = (X, U) = (P, I − P) is a principal and antiprincipal solution at infinity belonging to G. In particular, if P = 0, then (X̂min, Ûmin) = (Xmin, Umin) = (0, I), while if P = I, then (X̂max, Ûmax) = (Xmax, Umax) = (I, 0). In addition, X^†(t) X̂(t) = P^† P → L := P as t → ∞. This is in full agreement with Theorem 6.1, because in this case P̂ = P and PŜ∞ = 0. Note that for any α ≥ a the principal solution at the point α is equal to (0, I), which is at the same time a minimal antiprincipal solution at infinity, as we claim in Proposition 5.15.
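The claims of Example 7.1 are easy to confirm symbolically. The following sketch (ours, using sympy) checks that (7.1) and (7.2) solve the system of Example 7.1, that their Wronskian is the nonzero constant required by Corollary 5.17, and that the quotient in (6.16) tends to zero.

    import sympy as sp

    t = sp.symbols('t', positive=True)
    B = 1 + t**2
    C = -2/(1 + t**2)**2

    # the principal solution (7.1) and the antiprincipal solution (7.2) of Example 7.1
    Xh, Uh = t, 1/(1 + t**2)
    X,  U  = t**2 - 1, 2*t/(1 + t**2)

    # both pairs solve x' = B(t) u, u' = C(t) x   (here A(t) = 0)
    assert sp.simplify(sp.diff(Xh, t) - B*Uh) == 0
    assert sp.simplify(sp.diff(Uh, t) - C*Xh) == 0
    assert sp.simplify(sp.diff(X, t) - B*U) == 0
    assert sp.simplify(sp.diff(U, t) - C*X) == 0

    # constant nonzero Wronskian: antiprincipal solutions have an invertible Wronskian
    print(sp.simplify(Xh*U - Uh*X))        # 1
    # limit characterization (6.16): X^{-1}(t) X_hat(t) -> 0
    print(sp.limit(Xh/X, t, sp.oo))        # 0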
In the following we extend Example 7.2 to a system with variable A(t).

Example 7.3. Let B(t) = C(t) ≡ 0 ∈ R^{n×n} on [a, ∞). Then (H) has the form X′ = A(t) X and U′ = −A^T(t) U and, as in Example 7.2, we have d[a, ∞) = d∞ = n. Therefore, the principal
and antiprincipal solutions at infinity coincide and they can be constructed from the fundamental matrix Φ(t, a) of the system U′ = −A^T(t) U on [a, ∞) satisfying Φ(a, a) = I. More precisely, if P ∈ R^{n×n} is an orthogonal projector, then

( X̂(t), Û(t) ) = ( X(t), U(t) ) = ( Φ^{T−1}(t, a) P, Φ(t, a) (I − P) )    (7.3)

is an (anti)principal solution at infinity with the rank equal to rank P. In particular, we have

( X̂min(t), Ûmin(t) ) = ( Xmin(t), Umin(t) ) = ( 0, Φ(t, a) ),    (7.4)
( X̂max(t), Ûmax(t) ) = ( Xmax(t), Umax(t) ) = ( Φ^{T−1}(t, a), 0 ).    (7.5)
Note that in this case the solutions in (7.3) satisfy X^†(t) X̂(t) → L := P as t → ∞, which we also guarantee in Theorem 6.1 with P̂ = P and PŜ∞ = 0. The equalities in (7.4) then illustrate the statement of Proposition 5.15.

In the last example we utilize the construction of principal and antiprincipal solutions at infinity through the result in Theorem 5.18 and [20, Theorem 7.17]. We also demonstrate the variability of genera with different ranks. The question how to classify all different genera of conjoined bases of (H) will be studied in our subsequent work.

Example 7.4. Let n = 3 and a = 0. We consider system (H) with A(t) = diag{0, 0, 1}, B(t) = diag{1 + t^2, 0, 0}, and C(t) = diag{−2/(1 + t^2)^2, 0, 0} on [0, ∞). It is easy to see that system (H) comes from the scalar systems in Examples 7.1, 7.2, and 7.3 with Φ(t, 0) = e^{−t}. In this case we have d∞ = 2 and α̂min = 0. First we examine the minimal genus Gmin, whose rank is r = n − d∞ = 1. By [20, Example 8.4] and Theorem 5.18, we have

( X̂min(t), Ûmin(t) ) = ( diag{t, 0, 0}, diag{1/(1 + t^2), 1, e^{−t}} ),
( Xmin(t), Umin(t) ) = ( diag{t^2 − 1, 0, 0}, diag{2t/(1 + t^2), 1, e^{−t}} ).

Moreover, Xmin^†(t) X̂min(t) = diag{t/(t^2 − 1), 0, 0} → 0 as t → ∞, as we claim in formula (6.15) of Theorem 6.5. Now we discuss the maximal genus Gmax, whose rank is r = n = 3. From [20, Example 8.4] and Theorem 5.18 we obtain

( X̂max(t), Ûmax(t) ) = ( diag{t, 1, e^t}, diag{1/(1 + t^2), 0, 0} ),
( Xmax(t), Umax(t) ) = ( diag{t^2 − 1, 1, e^t}, diag{2t/(1 + t^2), 0, 0} ).

In this case Xmax^†(t) X̂max(t) = diag{t/(t^2 − 1), 1, 1} → L := diag{0, 1, 1} as t → ∞, and P̂ = I and PŜ∞ = diag{1, 0, 0} in formula (6.1) of Theorem 6.1. In the remaining part of this example we analyze three different genera with rank equal to r = 2. Observe that only two of them arise from the diagonal construction in Theorem 5.18. Consider the genus G1 with rank r = 2, which is given by the principal and antiprincipal solutions at infinity
( X̂1(t), Û1(t) ) = ( diag{t, 0, e^t}, diag{1/(1 + t^2), 1, 0} ),
( X1(t), U1(t) ) = ( diag{t^2 − 1, 0, e^t}, diag{2t/(1 + t^2), 1, 0} ).

In Theorem 6.1 we now have X1^†(t) X̂1(t) = diag{t/(t^2 − 1), 0, 1} → L := diag{0, 0, 1}, and P̂ = diag{1, 0, 1} and PŜ∞ = diag{1, 0, 0}. In the genus G2 with rank r = 2 defined by the principal and antiprincipal solutions at infinity
( X̂2(t), Û2(t) ) = ( diag{t, 1, 0}, diag{1/(1 + t^2), 0, e^{−t}} ),
( X2(t), U2(t) ) = ( diag{t^2 − 1, 1, 0}, diag{2t/(1 + t^2), 0, e^{−t}} ),

we have X2^†(t) X̂2(t) = diag{t/(t^2 − 1), 1, 0} → L := diag{0, 1, 0}, and P̂ = diag{1, 1, 0} and PŜ∞ = diag{1, 0, 0}. Note that in all the above genera Gmin, Gmax, G1, G2 the matrix L satisfies L^T = P̂ − PŜ∞, which will not be the case of the following nondiagonal genus. Let G3 be the genus with rank r = 2 defined by the principal and antiprincipal solutions at infinity

( X̂3(t), Û3(t) ) = ( ( t   0       0       )   ( 1/(1 + t^2)   0            0           )
                      ( 0   1/2     1/2     ) , ( 0             1/2         −1/2         ) ),
                      ( 0   e^t/2   e^t/2   )   ( 0            −e^{−t}/2     e^{−t}/2    )

( X3(t), U3(t) )   = ( ( t^2 − 1   0        0        )   ( 2t/(1 + t^2)   0            0           )
                       ( 0         1/4     −1/4      ) , ( 0              1/4          1/4         ) ).
                       ( 0         e^t/4   −e^t/4    )   ( 0             −e^{−t}/4    −e^{−t}/4    )

In this case we have in Theorem 6.1

X3^†(t) X̂3(t) = ( t/(t^2 − 1)   0    0 )               ( 0    0    0 )
                 ( 0             1    1 )   →   L :=    ( 0    1    1 )    as t → ∞.
                 ( 0            −1   −1 )               ( 0   −1   −1 )

The matrices P̂ and PŜ∞ in (6.1) now have the form

P̂ = ( 1   0     0   )           PŜ∞ = ( 1   0   0 )
     ( 0   1/2   1/2 ) ,                ( 0   0   0 ) ,
     ( 0   1/2   1/2 )                  ( 0   0   0 )

and we can see that Im L^T = Im(P̂ − PŜ∞), although L^T ≠ P̂ − PŜ∞.
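The nondiagonal genus G3 above can also be checked numerically. The following sketch (ours, with numpy) evaluates X3^†(t) X̂3(t) at a moderate value of t, compares it with the limit matrix L, and verifies that Im L^T = Im(P̂ − PŜ∞) even though L^T ≠ P̂ − PŜ∞.

    import numpy as np

    def X3hat(t):
        return np.array([[t, 0.0, 0.0],
                         [0.0, 0.5, 0.5],
                         [0.0, np.exp(t) / 2, np.exp(t) / 2]])

    def X3(t):
        return np.array([[t**2 - 1, 0.0, 0.0],
                         [0.0, 0.25, -0.25],
                         [0.0, np.exp(t) / 4, -np.exp(t) / 4]])

    t = 10.0
    M = np.linalg.pinv(X3(t)) @ X3hat(t)
    expected = np.array([[t / (t**2 - 1), 0.0, 0.0],
                         [0.0, 1.0, 1.0],
                         [0.0, -1.0, -1.0]])
    print(np.allclose(M, expected))                 # True: only the (1,1) entry still depends on t

    L = np.array([[0.0, 0.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [0.0, -1.0, -1.0]])               # the limit matrix of the genus G3
    Phat = np.array([[1.0, 0.0, 0.0],
                     [0.0, 0.5, 0.5],
                     [0.0, 0.5, 0.5]])
    D = Phat - np.diag([1.0, 0.0, 0.0])             # P_hat minus P_{S_hat,infinity}
    ranks = [np.linalg.matrix_rank(A) for A in (L.T, D, np.hstack([L.T, D]))]
    print(ranks[0] == ranks[1] == ranks[2])         # True: equal column spaces, i.e., Im L^T = Im D
    print(np.allclose(L.T, D))                      # False: the matrices themselves differ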
References

[1] C.D. Ahlbrandt, Principal and antiprincipal solutions of self-adjoint differential systems and their reciprocals, Rocky Mountain J. Math. 2 (1972) 169–182.
[2] D.S. Bernstein, Matrix Mathematics. Theory, Facts, and Formulas with Application to Linear Systems Theory, Princeton University Press, Princeton, 2005.
[3] S.L. Campbell, C.D. Meyer, Generalized Inverses of Linear Transformations, Classics Appl. Math., vol. 56, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2009; reprint of the 1991 corrected reprint of the 1979 original.
[4] W.A. Coppel, Disconjugacy, Lecture Notes in Math., vol. 220, Springer-Verlag, Berlin–New York, 1971.
[5] O. Došlý, Oscillation criteria and the discreteness of the spectrum of self-adjoint, even order, differential operators, Proc. Roy. Soc. Edinburgh Sect. A 119 (1991) 219–232.
[6] O. Došlý, Principal solutions and transformations of linear Hamiltonian systems, Arch. Math. (Brno) 28 (1992) 113–120.
[7] P. Hartman, Ordinary Differential Equations, John Wiley, New York, 1964.
[8] D.B. Hinton, Principal solutions of positive linear Hamiltonian systems, J. Aust. Math. Soc. Ser. A 22 (1976) 411–420.
[9] R. Fabbri, R. Johnson, S. Novo, C. Núñez, Some remarks concerning weakly disconjugate linear Hamiltonian systems, J. Math. Anal. Appl. 380 (2) (2011) 853–864.
[10] R. Johnson, S. Novo, C. Núñez, R. Obaya, Uniform weak disconjugacy and principal solutions for linear Hamiltonian systems, in: Proceedings of the International Conference on Delay Differential and Difference Equations and Applications, Balatonfuered, Hungary, 2013, Springer, Berlin, 2014.
[11] R. Johnson, C. Núñez, R. Obaya, Dynamical methods for linear Hamiltonian systems with applications to control processes, J. Dynam. Differential Equations 25 (3) (2013) 679–713.
[12] W. Kratz, Quadratic Functionals in Variational Analysis and Control Theory, Akademie Verlag, Berlin, 1995.
[13] W. Kratz, Definiteness of quadratic functionals, Analysis 23 (2) (2003) 163–183.
[14] W. Kratz, R. Šimon Hilscher, Rayleigh principle for linear Hamiltonian systems without controllability, ESAIM Control Optim. Calc. Var. 18 (2) (2012) 501–519.
[15] C.H. Rasmussen, Oscillation and asymptotic behaviour of systems of ordinary linear differential equations, Trans. Amer. Math. Soc. 256 (1979) 1–48.
[16] W.T. Reid, Riccati matrix differential equations and non-oscillation criteria for associated linear differential systems, Pacific J. Math. 13 (1963) 665–685.
[17] W.T. Reid, Principal solutions of nonoscillatory linear differential systems, J. Math. Anal. Appl. 9 (1964) 397–423.
[18] W.T. Reid, Ordinary Differential Equations, John Wiley & Sons, Inc., New York–London–Sydney, 1971.
[19] P. Šepitka, R. Šimon Hilscher, Minimal principal solution at infinity for nonoscillatory linear Hamiltonian systems, J. Dynam. Differential Equations 26 (1) (2014) 57–91.
[20] P. Šepitka, R. Šimon Hilscher, Principal solutions at infinity of given ranks for nonoscillatory linear Hamiltonian systems, J. Dynam. Differential Equations 27 (1) (2015) 137–175.
[21] R. Šimon Hilscher, Sturmian theory for linear Hamiltonian systems without controllability, Math. Nachr. 284 (7) (2011) 831–843.
[22] R. Šimon Hilscher, On general Sturmian theory for abnormal linear Hamiltonian systems, in: W. Feng, Z. Feng, M. Grasselli, A. Ibragimov, X. Lu, S. Siegmund, J. Voigt (Eds.), Dynamical Systems, Differential Equations and Applications, Proceedings of the 8th AIMS Conference on Dynamical Systems, Differential Equations and Applications, Dresden, 2010, in: Discrete Contin. Dyn. Syst., Suppl. 2011, American Institute of Mathematical Sciences (AIMS), Springfield, MO, 2011, pp. 684–691.
[23] M. Wahrheit, Eigenvalue problems and oscillation of linear Hamiltonian systems, Int. J. Difference Equ. 2 (2) (2007) 221–244.