Applied Mathematics and Computation 189 (2007) 1300–1303 www.elsevier.com/locate/amc
Generalized inverses of tridiagonal operators

K.C. Sivakumar
Department of Mathematics, Indian Institute of Technology Madras, Chennai 600 036, India
E-mail address: [email protected]
Abstract

Let H be a Hilbert space with $\{e_n : n \in \mathbb{N}\}$ as an orthonormal basis. Let $T : H \to H$ be a bounded linear operator defined by $Te_n = e_{n-1} + k \sin(2nr)e_n + e_{n+1}$, where $k$ is real and $r$ is a rational multiple of $\pi$. In this short note it is established that the Moore–Penrose inverse of $T$ is not bounded. We also show that the same conclusion is valid for a few related classes of operators.

© 2006 Elsevier Inc. All rights reserved.

Keywords: Almost Mathieu operator; Tridiagonal operator; Moore–Penrose inverse
1. Introduction

Let H be a separable real Hilbert space. Then H is isometrically isomorphic to $\ell^2$, the Hilbert space of square summable real sequences. Let $\{e_n : n \in \mathbb{N}\}$ be the standard orthonormal basis of $\ell^2$, where $e_n = (0, 0, \ldots, 1, 0, \ldots)$ with 1 appearing in the nth coordinate. A class of tridiagonal operators on $\ell^2$, called almost Mathieu operators, which are important in mathematical physics (see the references cited in [1]), is defined by
$$Te_n = e_{n-1} + k \sin(2nr)e_n + e_{n+1},$$
where $k$ is real and $r$ is a rational multiple of $\pi$. The problem of invertibility of such operators was considered recently in [1], where it was shown that these operators are not invertible. In this paper, we prove that the situation is worse, in the sense that these operators do not even have bounded generalized inverses of two specific types. We also show that the same conclusion holds for some other related classes of infinite matrices.

The existence of generalized inverses of matrices was first observed by Moore, who gave the definition of a unique inverse for every finite matrix, square or rectangular. He used the term "general reciprocal". His publication appeared in 1920, but his results were apparently obtained much earlier. Moore's discovery was not noticed for about three decades. Revival of interest in the generalized inverse was focused on the least squares properties, which were recognized by Bjerhammar, who rediscovered Moore's generalized inverse. Penrose in
1955 sharpened and generalized Bjerhammar's results and showed that Moore's inverse is the unique solution of the following four equations, which have come to be called the Penrose equations:
$$AXA = A, \quad XAX = X, \quad (AX)^* = AX, \quad (XA)^* = XA.$$
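As a small illustration (added here, not part of the original paper), the following Python sketch checks the four Penrose equations numerically for a rectangular matrix, using numpy.linalg.pinv to compute the Moore–Penrose inverse; the particular matrix is an arbitrary illustrative choice.

```python
import numpy as np

# An arbitrary real 3 x 2 matrix (illustrative values only).
A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, 0.0]])

X = np.linalg.pinv(A)  # Moore-Penrose inverse of A

# Verify the four Penrose equations up to floating-point tolerance.
print(np.allclose(A @ X @ A, A))        # AXA = A
print(np.allclose(X @ A @ X, X))        # XAX = X
print(np.allclose((A @ X).T, A @ X))    # (AX)* = AX (real case: symmetry)
print(np.allclose((X @ A).T, X @ A))    # (XA)* = XA
```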
We refer the reader to [2] and [3] for a more detailed historical introduction.

The problem of determining the Moore–Penrose inverse of a matrix has been handled by many authors. A classical method was proposed by Decell; it uses the Cayley–Hamilton theorem. Some of the better known techniques include the formula using the full-rank factorization of a matrix and the method that employs the singular value decomposition. The latter has the advantage of being a numerically more stable procedure. We refer the reader to [3] for more details. A technique which is widely used in practice was invented by Greville [5]. This is an ingenious iterative method that determines the Moore–Penrose inverse of a matrix by successively adding a column to the given matrix. We refer the reader to [10] for a much simpler constructive proof of the Greville formula and to [9] for a proof by verification. Udwadia and Kalaba also provide constructive proofs for other types of generalized inverses [11], [12] and [13]. A generalized inverse not so widely studied in the literature is the MP M-inverse of a matrix. This was introduced by Rao and Mitra [8]. Recently, Udwadia and Phohomsiri gave a recursive formula for computing the MP M-inverse. This notion has been shown to have applications in analytical dynamics (see [14] and the references cited therein).

We next briefly review the notion of the Moore–Penrose inverse of operators between Hilbert spaces. Let $H_1$ and $H_2$ be Hilbert spaces and let $A : H_1 \to H_2$ be a bounded linear map. For a linear map $X : H_2 \to H_1$, the Penrose equations (as given above) have a unique solution, denoted by $A^\dagger$. It is known that $A^\dagger$ is bounded iff $R(A)$, the range space of $A$, is closed in $H_2$. More generally, $A$ has a bounded $\{1\}$-inverse (any $X$ satisfying $AXA = A$) iff $R(A)$ is closed [4], [6].

There is another well known type of generalized inverse, defined for square matrices, called the group inverse. Let $H_1 = H_2$. Then the group inverse of $A$ (if it exists) is the unique solution of the equations:
$$AXA = A, \quad XAX = X, \quad AX = XA.$$
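To make the definition concrete, here is a short Python sketch (added for illustration, not from the paper) that builds the group inverse of a small singular matrix from a full-rank factorization $A = FG$ using the standard construction $A^\# = F(GF)^{-2}G$, which applies when $GF$ is nonsingular, and then checks the three defining equations. The particular factors F and G are arbitrary illustrative choices.

```python
import numpy as np

# A full-rank factorization A = F G of a rank-2, singular 3 x 3 matrix:
# F (3 x 2) has full column rank, G (2 x 3) has full row rank.
F = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
G = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
A = F @ G

# The group inverse exists iff GF is nonsingular; then A# = F (GF)^(-2) G.
GF_inv = np.linalg.inv(G @ F)
A_sharp = F @ GF_inv @ GF_inv @ G

# Check the three group-inverse equations.
print(np.allclose(A @ A_sharp @ A, A))              # A X A = A
print(np.allclose(A_sharp @ A @ A_sharp, A_sharp))  # X A X = X
print(np.allclose(A @ A_sharp, A_sharp @ A))        # A X = X A
```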
It is well known ([3] and [7]) that for a finite square matrix $A$, the group inverse, denoted by $A^\#$, exists iff $R(A) = R(A^2)$ and $N(A) = N(A^2)$. Robert [7] has shown that for a linear map $A$ over a vector space, the group inverse exists iff $R(A)$ and $N(A)$ are complementary subspaces. In particular, if $A$ is one–one, then $A^\#$ exists iff $A$ is onto, in which case $A^\# = A^{-1}$. From the discussion above, it is clear that for a bounded linear operator $A$, the group inverse $A^\#$ is bounded iff $R(A)$ is closed.

2. Main results

In this section, we set out to prove that a class of almost Mathieu operators have neither bounded Moore–Penrose inverses nor group inverses. We first prove the following result.

Theorem 2.1. Let V be a self-adjoint bounded linear operator on a Hilbert space such that V is one–one. If V is not onto, then $V^\dagger$ is unbounded and $V^\#$ does not exist.

Proof. Suppose that $R(V)$ is closed. Then $H = R(V) \oplus N(V^*) = R(V) \oplus N(V)$, where $N(V)$ denotes the null space of the operator $V$. Since $V$ is one–one, $N(V) = \{0\}$ and hence we have $R(V) = H$, contradicting that $V$ is not onto. Thus, $R(V)$ is not closed. Hence $V^\dagger$ is unbounded. Since $V$ is one–one, if $V^\#$ exists, then $V$ would have to be onto, a contradiction. □

Theorem 2.2 (Theorem 0.1, [1]). Let V be an infinite tridiagonal matrix whose diagonal elements are $d_1, d_2, \ldots, d_m, 0, -d_m, \ldots, -d_1, 0$ repeated in the same order and whose entries in the super diagonal and the sub diagonal are all 1. Then V is a bounded linear operator on $\ell^2$ and V is not invertible.

Remarks 2.3. It can be seen [1] that the almost Mathieu operators are of the form V as above for an appropriate choice of m. Hence these operators are not invertible (Corollary 0.2, [1]). The proof shows that
$e_1 \notin R(V)$ and hence $V$ is not onto. The proof starts with the equation $Vx = e_1 = (1, 0, 0, \ldots)$ and concludes that $x = 0$. Setting $x = (x_1, x_2, \ldots) \in \ell^2$, the equation $Vx = e_1$ implies (among other things) that $x_1 d_1 + x_2 = 1$. In concluding that $x = 0$, this equation is not used. In other words, it follows that $x = 0$ whenever $Vx = 0$. Thus, $V$ is one–one.

The first main result of this note is now given.

Theorem 2.4. Let V be an infinite tridiagonal matrix as in Theorem 2.2. Then $V^\dagger$ is unbounded and $V^\#$ does not exist.

Proof. V is one–one and not onto. Also $V = V^*$. The conclusions now follow from Theorem 2.1. □
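As an informal numerical companion to Theorem 2.4 (added here; it is not part of the paper's argument), the sketch below assembles $n \times n$ truncations of an almost Mathieu matrix with diagonal $k\sin(2nr)$ and unit off-diagonals and prints their smallest singular values. The choice $k = 1$, $r = \pi/4$ is an arbitrary illustration; the smallest singular value is expected to shrink toward zero as n grows, which is consistent with $R(V)$ failing to be closed and hence with $V^\dagger$ being unbounded, although a finite computation proves nothing by itself.

```python
import numpy as np

def almost_mathieu_section(n, k=1.0, r=np.pi / 4):
    """n x n truncation of the tridiagonal operator
    T e_j = e_{j-1} + k*sin(2*j*r)*e_j + e_{j+1}."""
    j = np.arange(1, n + 1)
    V = np.diag(k * np.sin(2 * j * r))
    V += np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    return V

for n in [20, 80, 320, 1280]:
    # Smallest singular value of the finite section.
    smin = np.linalg.svd(almost_mathieu_section(n), compute_uv=False)[-1]
    print(n, smin)
```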
The above theorem is a particular case of the following result, whose proof is similar and is hence omitted.

Theorem 2.5. Let V be an infinite tridiagonal matrix whose diagonal elements are $d_1, d_2, \ldots, d_m, 0, -d_m, \ldots, -d_1, 0$ repeated in the same order and whose entries in the super diagonal and the sub diagonal are all k, where k is any real number. Then V is a bounded linear operator on $\ell^2$, $V^\dagger$ is unbounded and $V^\#$ does not exist.

In the next two results, we identify two classes of operators for which the same conclusion is valid.

Theorem 2.6. Let V be an infinite tridiagonal matrix whose diagonal elements are all zero and whose entries in the super diagonal and the sub diagonal are all k, where k is any real number. Then V is a bounded linear operator on $\ell^2$, $V^\dagger$ is unbounded and $V^\#$ does not exist.

Theorem 2.7. Let W be a bounded linear operator on $\ell^2$ such that $W^*W$ or $WW^*$ is a tridiagonal operator of the form considered in either Theorem 2.5 or Theorem 2.6. Then $W^\dagger$ is unbounded and $W^\#$ does not exist.

Proof. The first part follows from the formula
$$W^\dagger = W^*(WW^*)^\dagger = (W^*W)^\dagger W^*$$
(see, for example, [6]). To prove the second part, note that if $W^*W$ is of the form considered in Theorem 2.5 or Theorem 2.6, then $W^*W$ is one–one. Thus, $N(W) = N(W^*W) = \{0\}$. If $W^\#$ exists, then $\ell^2 = R(W) \oplus N(W) = R(W)$, so that $W$ is invertible and hence so is $W^*W$, a contradiction. A similar argument applies to the case when $WW^*$ is one–one. □

Remarks 2.8. It is worthwhile to observe that the analogous identity
$$W^\# = W^*(WW^*)^\# = (W^*W)^\# W^*$$
does not hold. Let $A = \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}$, $B = A^tA$ and $C = AA^t$. Then $B^\# = (1/4)B$ and $C^\# = \begin{pmatrix} 1/2 & 0 \\ 0 & 0 \end{pmatrix}$. Since $A^2 = A$, we have $A^\# = A$. However,
$$B^\#A^t = (A^tA)^\#A^t = \begin{pmatrix} 1/2 & 0 \\ 1/2 & 0 \end{pmatrix} = A^tC^\# = A^t(AA^t)^\# \neq A^\#.$$
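The example in Remarks 2.8 can be checked numerically; the short Python sketch below (added for illustration) takes the group inverses of A, B and C stated in the remark, verifies the three group-inverse equations for each pair, and confirms that the two displayed products agree with each other but not with $A^\#$.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 0.0]])
B = A.T @ A
C = A @ A.T

# Group inverses as given in Remarks 2.8.
A_sharp = A                      # since A^2 = A
B_sharp = B / 4.0
C_sharp = np.array([[0.5, 0.0],
                    [0.0, 0.0]])

def is_group_inverse(M, X):
    """Check the three group-inverse equations for the pair (M, X)."""
    return (np.allclose(M @ X @ M, M) and
            np.allclose(X @ M @ X, X) and
            np.allclose(M @ X, X @ M))

print(is_group_inverse(A, A_sharp),
      is_group_inverse(B, B_sharp),
      is_group_inverse(C, C_sharp))   # True True True

lhs = B_sharp @ A.T                   # (A^t A)# A^t
rhs = A.T @ C_sharp                   # A^t (A A^t)#
print(np.allclose(lhs, rhs))          # True: both equal [[1/2, 0], [1/2, 0]]
print(np.allclose(lhs, A_sharp))      # False: the identity fails for #
```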
Acknowledgement

The author thanks Professor Firdaus Udwadia of the University of Southern California for suggestions that have led to improved readability of the paper.

References

[1] R. Balasubramanian, S.H. Kulkarni, R. Radha, Non-invertibility of certain almost Mathieu operators, Proc. Amer. Math. Soc. 129 (7) (2000) 2017–2018.
[2] A. Ben-Israel, The Moore of the Moore–Penrose inverse, Electron. J. Linear Algebra 9 (2002) 150–157.
[3] A. Ben-Israel, T.N.E. Greville, Generalized Inverses: Theory and Applications, second ed., Springer-Verlag, New York, 2003.
[4] S.R. Caradus, Generalized Inverses and Operator Theory, Queen's University, Kingston, Ontario, 1978.
[5] T.N.E. Greville, Some applications of the pseudo inverse of a matrix, SIAM Rev. 1 (1960) 15–22.
[6] C.W. Groetsch, Generalized Inverses of Linear Operators: Representation and Approximation, Marcel Dekker, New York, 1977.
[7] P. Robert, On the group-inverse of a linear transformation, J. Math. Anal. Appl. 22 (1968) 658–669.
[8] C.R. Rao, S.K. Mitra, Generalized Inverse of Matrices and Its Applications, John Wiley, New York, 1971.
[9] K.C. Sivakumar, Proof by verification of the Greville/Udwadia/Kalaba formula for the Moore–Penrose inverse of a matrix, J. Optimiz. Theory Appl. 131 (2) (2006) 307–311.
[10] F.E. Udwadia, R.E. Kalaba, An alternative proof of the Greville formula, J. Optimiz. Theory Appl. 94 (1) (1997) 23–28.
[11] F.E. Udwadia, R.E. Kalaba, General forms for the recursive determination of generalized inverses, J. Optimiz. Theory Appl. 101 (1999) 509–523.
[12] F.E. Udwadia, R.E. Kalaba, A unified approach for the recursive determination of generalized inverses, Comp. Math. Appl. 37 (1999) 125–130.
[13] F.E. Udwadia, R.E. Kalaba, Sequential determination of the 1,4-inverse of a matrix, J. Optimiz. Theory Appl. 117 (1) (2003) 1–7.
[14] F.E. Udwadia, P. Phohomsiri, Recursive determination of the generalized Moore–Penrose M-inverse of a matrix, J. Optimiz. Theory Appl. 127 (3) (2005) 639–663.