An oblique projection iterative method to compute Drazin inverse and group inverse


Applied Mathematics and Computation 211 (2009) 417–421


Xingping Sheng a,b,*, Guoliang Chen b

a Department of Mathematics, Fuyang Normal College, Anhui Fuyang 236032, China
b Department of Mathematics, East China Normal University, Shanghai 200062, China

This project was granted financial support from the Shanghai Science and Technology Committee (062112065), the Shanghai Priority Academic Discipline Foundation, the University Young Teacher Sciences Foundation of Anhui Province (2006jq1220zd) and the PhD Program Scholarship Fund of ECNU 2007.

* Corresponding author. Address: Department of Mathematics, Fuyang Normal College, Anhui Fuyang 236032, China. E-mail address: [email protected] (X. Sheng).


Abstract

In this paper, an oblique projection iterative method is presented to solve the matrix equation $AXA = A$ for a square matrix $A$ with $\mathrm{ind}(A) = 1$. When the initial matrix is taken as $X_0 = A$, this iterative method yields the group inverse $A_g$ in the absence of roundoff errors. If the method is applied to the matrix equation $A^k X A^k = A^k$, the group inverse $(A^k)_g$ of $A^k$ is obtained, and the Drazin inverse $A_d$ then follows from the formula $A_d = A^{k-1}(A^k)_g$.

Keywords: Group inverse; Drazin inverse; Iterative method

1. Introduction

For an $m \times n$ matrix $A$ over the field $\mathbb{C}$ of complex numbers, consider the following equations in $G$:

$$AGA = A, \qquad (1)$$
$$GAG = G, \qquad (2)$$
$$(AG)^{*} = AG, \qquad (3)$$
$$(GA)^{*} = GA, \qquad (4)$$

where the superscript $(\cdot)^{*}$ denotes the conjugate transpose of a complex matrix. Also, in the case $m = n$, consider the following equations:

$$AG = GA, \qquad (5)$$
$$A^{k+1}G = A^{k} \qquad (1^{k})$$

for a positive integer $k = \mathrm{ind}(A) = \min\{p : \mathrm{rank}(A^{p+1}) = \mathrm{rank}(A^{p})\}$. For a sequence $u$ of elements from the set $\{1, 2, 3, 4, 5\}$, the set of matrices obeying the equations represented by $u$ is denoted by $A\{u\}$. A matrix from $A\{u\}$ is called a $u$-inverse of $A$ and is denoted by $A^{(u)}$. If $G$ satisfies Eqs. (1) and (2), it is said to be a reflexive g-inverse of $A$, whereas the Moore–Penrose inverse $G = A^{\dagger}$ of $A$ satisfies Eqs. (1)–(4). A matrix $A_d$ is said to be the Drazin inverse of $A$ if $(1^{k})$, (2) and (5) are satisfied. The group inverse $A_g$ is the unique $\{1, 2, 5\}$-inverse of $A$, and it exists if and only if $\mathrm{ind}(A) = 1$.

The Drazin inverse and group inverse of a square matrix have various applications in singular differential and difference equations [3], Markov chains [3], etc. Many classical approaches are available for computing the Drazin inverse and group inverse, such as full-rank decomposition, spectral decomposition, QR-factorization and so on; these methods can be found in [1–10]. Stanimirović and Djordjević gave a full-rank representation and a generalized resolvent for the Drazin inverse in [11,12]. Chen gave Faddeev-type algorithms for $A_d$ in [13]. Hartwig et al. gave a representation of a $2 \times 2$ block matrix and some additive results on the Drazin




inverse in [14,15]. In recent years, Wei gave many ways to compute the Drazin inverse, which can be seen in [16–21]. A finite algorithm with a polynomial matrix and a limit expression for this inverse were given in [22,23]. Recently, the authors gave a new representation based on Gauss–Jordan elimination in [24].

In this paper we design an oblique projection iterative method to solve the matrix equation $AXA = A$; when the initial matrix is taken as $X_0 = A$, the group inverse $A_g$ is obtained. If we apply this iterative method to the matrix equation $A^k X A^k = A^k$, the group inverse $(A^k)_g$ of $A^k$ is obtained, and then, by the formula $A_d = A^{k-1}(A^k)_g$, the Drazin inverse $A_d$ can be computed. In the end we give a numerical example to show that this iteration is efficient.

2. Notations and preliminaries

Throughout this paper the following notations are used. $\mathbb{C}^{m \times n}$ denotes the space of $m \times n$ complex matrices. For $A \in \mathbb{C}^{m \times n}$, $R(A)$ and $N(A)$ denote the range and null space of $A$; $I_n$ denotes the identity matrix and, when there is no ambiguity, is denoted simply by $I$. $\mathrm{vec}(\cdot)$ represents the vec operator, i.e. $\mathrm{vec}(A) = (a_1^T, a_2^T, \ldots, a_n^T)^T$ for the matrix $A = (a_1, a_2, \ldots, a_n) \in \mathbb{C}^{m \times n}$, $a_i \in \mathbb{C}^m$, $i = 1, 2, \ldots, n$. $A \otimes B$ stands for the Kronecker product of matrices $A$ and $B$.

In the space $\mathbb{C}^{m \times n}$, we define the inner product as $\langle A, B\rangle = \mathrm{trace}(B^{*}A) = (\mathrm{vec}\,A, \mathrm{vec}\,B) = (\mathrm{vec}\,B)^{*}\,\mathrm{vec}\,A$ for all $A, B \in \mathbb{C}^{m \times n}$, where $(\cdot,\cdot)$ is the standard Euclidean inner product. The norm of a matrix $A$ generated by this inner product is, obviously, the Frobenius norm, denoted by $\|A\|$.

The following definition and lemmas are needed in what follows.

Definition 2.1. Let $A \in \mathbb{C}^{m \times n}$ and $B \in \mathbb{C}^{m \times n}$. We define the acute angle $\theta$ between $A$ and $B$ as follows:

$$\cos\theta = \frac{|\langle A, B\rangle|}{\|A\|\,\|B\|}.$$

From the definition of the inner product $\langle A, B\rangle = \mathrm{trace}(B^{*}A) = (\mathrm{vec}\,A, \mathrm{vec}\,B) = (\mathrm{vec}\,B)^{*}\,\mathrm{vec}\,A$, it is obvious that this $\theta$ is also the acute angle between $\mathrm{vec}(A)$ and $\mathrm{vec}(B)$.
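To make the notation concrete, here is a minimal numpy sketch (our own illustration; the helper names inner, vec and cos_angle and the random test matrices are assumptions of this sketch, not from the paper). It checks that $\mathrm{trace}(B^{*}A)$ agrees with $(\mathrm{vec}\,B)^{*}\,\mathrm{vec}\,A$ and evaluates the angle of Definition 2.1.

```python
import numpy as np

def inner(A, B):
    # Frobenius inner product <A, B> = trace(B* A)
    return np.trace(B.conj().T @ A)

def vec(A):
    # column-stacking vec operator: vec(A) = (a_1^T, ..., a_n^T)^T
    return A.reshape(-1, order="F")

def cos_angle(A, B):
    # cos(theta) = |<A, B>| / (||A|| ||B||), as in Definition 2.1
    return abs(inner(A, B)) / (np.linalg.norm(A) * np.linalg.norm(B))

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))
B = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))

# <A, B> computed via the trace and via the vec operator agree
assert np.isclose(inner(A, B), vec(B).conj() @ vec(A))
print(cos_angle(A, B))  # a number in [0, 1]
```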

Lemma 2.1 [1]. Let $A \in \mathbb{C}^{m \times n}$ be of rank $r$, let $T$ be a subspace of $\mathbb{C}^{n}$ of dimension $s \le r$, and let $S$ be a subspace of $\mathbb{C}^{m}$ of dimension $m - s$. Then $A$ has a $\{2\}$-inverse $X$ such that $R(X) = T$ and $N(X) = S$ if and only if

$$AT \oplus S = \mathbb{C}^{m},$$
in which case $X$ is unique. This $X$ is denoted by $A^{(2)}_{T,S}$.

Lemma 2.2 [1]. Let $A \in \mathbb{C}^{m \times n}$ be of rank $r$. Any two of the following three statements imply the third:
$$X \in A\{1\}, \qquad X \in A\{2\}, \qquad \mathrm{rank}\,A = \mathrm{rank}\,X.$$

Lemma 2.3 [1]. Let $A \in \mathbb{C}^{n \times n}$ with $\mathrm{ind}(A) = k$. Then $A_d = A^{(2)}_{R(A^k),\,N(A^k)}$; in particular, when $k = 1$, we have $A_g = A^{(2)}_{R(A),\,N(A)}$.

According to [1], we have the following relation between the Drazin inverse $A_d$ and the group inverse $A_g$.

Lemma 2.4 [1]. Let $A$ be a square matrix with $\mathrm{ind}(A) = k$. Then $A_d = A^{k-1}(A^k)_g$.

We have the following property of the above inner product.

Lemma 2.5. Let $A, B \in \mathbb{C}^{m \times n}$. Then $\langle A, B\rangle = \langle B, A\rangle = \langle B^{*}, A^{*}\rangle = \langle A^{*}, B^{*}\rangle$.

Proof. Note that
$$\mathrm{trace}(B^{*}A) = \mathrm{trace}\bigl((B^{*}A)^{*}\bigr) = \mathrm{trace}(A^{*}B)$$
and
$$\mathrm{trace}(B^{*}A) = \mathrm{trace}(AB^{*}).$$
Then, by the definition of the inner product $\langle A, B\rangle = \mathrm{trace}(B^{*}A)$, we get
$$\langle A, B\rangle = \langle B, A\rangle = \langle B^{*}, A^{*}\rangle = \langle A^{*}, B^{*}\rangle. \qquad \square$$



3. Iterative method for computing $A_g$ and $A_d$

In this section we first introduce an oblique projection iterative method to obtain a solution of the matrix equation $AXA = A$, where $A \in \mathbb{C}^{n \times n}$ with $\mathrm{ind}(A) = 1$. We then show that the matrix sequence $\{X_k\}$ generated by the iterative method converges to a solution in the absence of roundoff errors. We also show that if the initial matrix is taken as $X_0 = A$, then the solution obtained by the iterative method is the group inverse $A_g$.



First we present the iterative method for solving the matrix equation $AXA = A$ for a square matrix $A$ with $\mathrm{ind}(A) = 1$. The iteration is as follows.

Algorithm 3.1.

1. Input matrices $A \in \mathbb{C}^{n \times n}$ and $X_0 \in \mathbb{C}^{n \times n}$;
2. Calculate
$$R_0 = A - AX_0A, \qquad P_0 = AR_0A, \qquad k := 0;$$
3. If $R_k = 0$, then stop; else, $k := k + 1$;
4. Calculate
$$X_k = X_{k-1} + \frac{\langle P_{k-1}, R_{k-1}\rangle}{\|P_{k-1}\|^{2}}\, R_{k-1},$$
$$R_k = A - AX_kA = R_{k-1} - \frac{\langle P_{k-1}, R_{k-1}\rangle}{\|P_{k-1}\|^{2}}\, P_{k-1},$$
$$P_k = AR_kA;$$
5. Go to step 3.
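The following is a minimal numpy sketch of Algorithm 3.1 (our own illustration, not the authors' code). The function name group_inverse_iteration, the tolerance-based stopping test used in place of the exact condition $R_k = 0$, and the iteration cap are assumptions of this sketch.

```python
import numpy as np

def group_inverse_iteration(A, X0=None, tol=1e-12, max_iter=10_000):
    """Oblique projection iteration for AXA = A (sketch of Algorithm 3.1).

    With X0 = A and ind(A) = 1, the limit is the group inverse A_g
    (Theorem 3.2), up to roundoff.
    """
    A = np.asarray(A, dtype=complex)
    X = A.copy() if X0 is None else np.asarray(X0, dtype=complex)
    R = A - A @ X @ A            # residual R_0
    P = A @ R @ A                # oblique projection direction P_0
    for _ in range(max_iter):
        if np.linalg.norm(R) <= tol:          # practical stand-in for R_k = 0
            break
        # alpha = <P_{k-1}, R_{k-1}> / ||P_{k-1}||^2, with <A, B> = trace(B* A);
        # by Theorem 3.3, P != 0 whenever R != 0 (for X0 = A, ind(A) = 1)
        alpha = np.vdot(R, P) / np.linalg.norm(P) ** 2
        X = X + alpha * R
        R = R - alpha * P        # equals A - A X A in exact arithmetic
        P = A @ R @ A
    return X
```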

Concerning Algorithm 3.1, we have the following basic properties.

Theorem 3.2. In Algorithm 3.1, if we take the initial matrix $X_0 = A$, then the sequences $\{X_i\}$ and $\{R_i\}$ generated by it satisfy:

(3.2.1) $R(X_k) \subseteq R(A)$, $N(X_k) \supseteq N(A)$ and $R(R_k) \subseteq R(A)$, $N(R_k) \supseteq N(A)$;

(3.2.2) if $R_k = 0$, then $X_k = A_g$.

Proof. (3.2.1) According to Algorithm 3.1, it is obvious that $R(R_k) \subseteq R(A)$ and $N(R_k) \supseteq N(A)$. In order to prove that $R(X_k) \subseteq R(A)$ and $N(X_k) \supseteq N(A)$, we use induction. When $i = 0$, we have
$$X_0 = A,$$
so the conclusion holds. When $i = 1$, we have
$$X_1 = X_0 + \frac{\langle P_0, R_0\rangle}{\|P_0\|^{2}}(A - A^{3}) = A\left(I + \frac{\langle P_0, R_0\rangle}{\|P_0\|^{2}}(I - A^{2})\right) = \left(I + \frac{\langle P_0, R_0\rangle}{\|P_0\|^{2}}(I - A^{2})\right)A.$$
This shows that the conclusion also holds when $i = 1$. Assume that the conclusion holds for all $0 \le i \le s$ ($0 < s < k$). Then there exist matrices $U$, $V$, $W$ and $Y$ such that
$$X_s = AU = VA, \qquad R_s = AW = YA.$$
Further, we have
$$X_{s+1} = X_s + \frac{\langle P_s, R_s\rangle}{\|P_s\|^{2}}R_s = A\left(U + \frac{\langle P_s, R_s\rangle}{\|P_s\|^{2}}W\right) = \left(V + \frac{\langle P_s, R_s\rangle}{\|P_s\|^{2}}Y\right)A.$$
This implies that $R(X_{s+1}) \subseteq R(A)$ and $N(X_{s+1}) \supseteq N(A)$. By the principle of induction, the conclusion holds for all $i = 0, 1, \ldots$

(3.2.2) According to Algorithm 3.1, if $R_k = 0$, then $X_k \in A\{1\}$. This implies $\mathrm{rank}(X_k) \ge \mathrm{rank}(A)$; combined with (3.2.1), we easily get $\mathrm{rank}(X_k) = \mathrm{rank}(A)$. By Lemma 2.2, $X_k \in A\{1, 2\}$ with range $R(A)$ and null space $N(A)$. Then by Lemma 2.3, $X_k = A_g$. This completes the proof. $\square$
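Conclusion (3.2.2) can be illustrated numerically. The sketch below reuses the group_inverse_iteration function from the sketch following Algorithm 3.1, with an arbitrary symmetric index-one test matrix of our own (not the paper's numerical example), and checks the defining equations (1), (2) and (5) of the group inverse.

```python
import numpy as np

# An arbitrary symmetric test matrix with ind(A) = 1 (the eigenvalue 0 is semisimple);
# this is our own example, not the paper's.
Q, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((4, 4)))
A = Q @ np.diag([1.0, 2.0, 1.5, 0.0]) @ Q.T

X = group_inverse_iteration(A, X0=A, tol=1e-12)  # Theorem 3.2: the limit is A_g

print(np.allclose(A @ X @ A, A))   # (1):  A X A = A
print(np.allclose(X @ A @ X, X))   # (2):  X A X = X
print(np.allclose(A @ X, X @ A))   # (5):  A X = X A
```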



Theorem 3.3. If we take the initial matrix $X_0 = A$, then the sequences $\{P_s\}$ and $\{R_s\}$ generated by Algorithm 3.1 satisfy: $P_s = AR_sA = 0$ if and only if $R_s = 0$.

Proof. "Only if": It is well known that $\mathrm{ind}(A) = 1$ if and only if $R(A) \oplus N(A) = \mathbb{C}^{n}$. This implies that
$$R(A) \cap N(A) = \{0\}, \qquad (1)$$
$$R(A^{*}) \cap N(A^{*}) = \{0\}. \qquad (2)$$
Since $AR_sA = 0$, we have
$$R(R_sA) \subseteq N(A). \qquad (3)$$
By Theorem 3.2, we obtain
$$R(R_sA) \subseteq R(R_s) \subseteq R(A). \qquad (4)$$
From (1), (3) and (4), we have
$$R(R_sA) = R(R_sA) \cap N(A) \subseteq R(A) \cap N(A) = \{0\}.$$
This means that $R_sA = 0$, in other words $A^{*}R_s^{*} = 0$, so
$$R(R_s^{*}) \subseteq N(A^{*}) \qquad (5)$$
is obtained. By Theorem 3.2 we have $N(R_s) \supseteq N(A)$; taking orthogonal complements and using $R(X^{*}) = N(X)^{\perp}$ gives
$$R(R_s^{*}) \subseteq R(A^{*}). \qquad (6)$$
From (2), (5) and (6), we have
$$R(R_s^{*}) = R(R_s^{*}) \cap N(A^{*}) \subseteq R(A^{*}) \cap N(A^{*}) = \{0\}.$$
This implies that $R_s = 0$.

"If": This follows at once from $P_s = AR_sA$. This completes the proof. $\square$

Remark 1. From Theorem 3.3, we know that if $R_i \ne 0$, then $P_i \ne 0$. This shows that as long as $R_i \ne 0$, Algorithm 3.1 does not break down (the denominator $\|P_i\|^{2}$ is nonzero). In the following theorem, we discuss the convergence of Algorithm 3.1.

Theorem 3.4. For the sequences $\{R_i\}$ and $\{P_i\}$ generated by Algorithm 3.1, we have

$$\|R_{i+1}\|^{2} \le \|R_i\|^{2}(1 - \cos^{2}\theta_i),$$
where $\theta_i$ is the acute angle between $R_i$ and $P_i$.

Proof. According to Algorithm 3.1 and the definition of the inner product, we have
$$\|R_{i+1}\|^{2} = \langle R_{i+1}, R_{i+1}\rangle = \left\langle R_i - \frac{\langle P_i, R_i\rangle}{\|P_i\|^{2}}P_i,\; R_i - \frac{\langle P_i, R_i\rangle}{\|P_i\|^{2}}P_i\right\rangle$$
$$= \langle R_i, R_i\rangle - \frac{\langle P_i, R_i\rangle}{\|P_i\|^{2}}\langle R_i, P_i\rangle - \frac{\langle P_i, R_i\rangle}{\|P_i\|^{2}}\langle P_i, R_i\rangle + \frac{\langle P_i, R_i\rangle^{2}}{\|P_i\|^{4}}\langle P_i, P_i\rangle$$
$$= \langle R_i, R_i\rangle\left(1 - \frac{\langle P_i, R_i\rangle\langle R_i, P_i\rangle}{\|P_i\|^{2}\,\|R_i\|^{2}}\right) \qquad \text{(by Lemma 2.5)}$$
$$= \|R_i\|^{2}(1 - \cos^{2}\theta_i),$$

where, by Definition 2.1, $\theta_i$ is the acute angle between $R_i$ and $P_i$. $\square$

Remark 2. Theorem 3.4 implies that, if $\cos\theta_i \ne 0$, the residual sequence $\{R_k\}$ is decreasing. Indeed, if we take $X_0 = A$, then $\cos\theta_i \ne 0$: suppose $\cos\theta_i = 0$; this means that $R(R_i) \perp R(AR_iA) \subseteq R(A)$, but by Theorem 3.2 we know that $R(R_i) \subseteq R(A)$, which is a contradiction. From the above discussion, we know that Algorithm 3.1 is convergent.

From the above results, by the equation $A_d = A^{k-1}(A^k)_g$ for a square matrix $A$ with $\mathrm{ind}(A) = k$, we can first use Algorithm 3.1 (applied to $A^k$) to obtain $(A^k)_g$, and then compute $A_d = A^{k-1}(A^k)_g$ directly, in the absence of roundoff errors. Here we omit the corresponding theory and algorithm.
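As a sketch of this last step (again reusing the group_inverse_iteration function from the sketch following Algorithm 3.1; the naive rank-based computation of $\mathrm{ind}(A)$ and the small test matrix are our own assumptions, not the paper's omitted algorithm), the Drazin inverse can be assembled via $A_d = A^{k-1}(A^k)_g$ as follows.

```python
import numpy as np

def drazin_inverse(A, tol=1e-12):
    """Sketch: A_d = A^{k-1} (A^k)_g with k = ind(A) (Lemma 2.4)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    # ind(A) = min{p : rank(A^{p+1}) = rank(A^p)}, computed naively here
    k, Ap = 0, np.eye(n)
    while np.linalg.matrix_rank(Ap @ A) != np.linalg.matrix_rank(Ap):
        Ap, k = Ap @ A, k + 1
    k = max(k, 1)                      # treat nonsingular A as k = 1
    Ak = np.linalg.matrix_power(A, k)
    Akg = group_inverse_iteration(Ak, X0=Ak, tol=tol)   # (A^k)_g by Algorithm 3.1
    return np.linalg.matrix_power(A, k - 1) @ Akg

# Our own small example with ind(A) = 2
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
Ad = drazin_inverse(A)
k = 2
print(np.allclose(np.linalg.matrix_power(A, k + 1) @ Ad, np.linalg.matrix_power(A, k)))  # (1^k)
print(np.allclose(Ad @ A @ Ad, Ad))   # (2)
print(np.allclose(A @ Ad, Ad @ A))    # (5)
```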

References

[1] A. Ben-Israel, T.N.E. Greville, Generalized Inverses: Theory and Applications, second ed., Springer-Verlag, New York, 2003.
[2] C.R. Rao, S.K. Mitra, Generalized Inverse of Matrices and its Applications, Wiley, New York, 1971.
[3] S.L. Campbell, C.D. Meyer, Generalized Inverses of Linear Transformations, Fearon-Pitman, Belmont, CA, 1979.
[4] G.H. Golub, C.D. Meyer, Using the QR-factorization and group inversion to compute, differentiate, and estimate the sensitivity of stationary probabilities for Markov chains, SIAM J. Alg. Discrete Meth. 7 (1986) 273–281.
[5] J.H. Wilkinson, Note on the practical significance of the Drazin inverse, in: Recent Applications of Generalized Inverses, Res. Notes Math. 66 (1982) 82–99.
[6] S.L. Campbell, The Drazin inverse and systems of second order linear differential equations, Linear Multilinear Algebra 14 (1983) 195–198.
[7] R.E. Hartwig, More on the Souriau–Frame algorithm and the Drazin inverse, SIAM J. Appl. Math. 31 (1) (1976) 42–46.



[8] R.E. Hartwig, A method for calculating $A_d$, Math. Japonica 26 (1) (1981) 37–43.
[9] G. Wang, Y. Wei, S. Qiao, Generalized Inverses: Theory and Computations, second ed., Science Press, Beijing, 2004.
[10] R.E. Hartwig, Schur's theorem and the Drazin inverse, Pacific J. Math. 78 (1) (1978) 133–138.
[11] P. Stanimirović, D. Djordjević, Full-rank and determinantal representation of the Drazin inverse, Linear Algebra Appl. 311 (2000) 131–151.
[12] D. Djordjević, P. Stanimirović, On the generalized Drazin inverse and generalized resolvent, Czechoslovak Math. J. 51 (126) (2001) 617–634.
[13] Y. Chen, Representation and approximation for the Drazin inverse, Appl. Math. Comput. 119 (2001) 147–160.
[14] R.E. Hartwig, X. Li, Y. Wei, Representation for the Drazin inverse of a 2 × 2 block matrix, SIAM J. Matrix Anal. Appl. 27 (2006) 757–771.
[15] R.E. Hartwig, G. Wang, Y. Wei, Some additive results on Drazin inverse, Linear Algebra Appl. 322 (2001) 207–217.
[16] Y. Wei, A characterization and representation of the Drazin inverse, SIAM J. Matrix Anal. Appl. 17 (4) (1996) 744–747.
[17] Y. Wei, Index splitting for the Drazin inverse and the singular linear system, Appl. Math. Comput. 95 (2–3) (1998) 115–124.
[18] Y. Wei, H. Wu, The representation and approximation for Drazin inverse, J. Comput. Appl. Math. 126 (1–2) (2000) 417–432.
[19] Y. Wei, Successive matrix squaring algorithm for computing the Drazin inverse, Appl. Math. Comput. 108 (2–3) (2000) 67–75.
[20] X. Li, Y. Wei, Iterative methods for the Drazin inverse of a matrix with a complex spectrum, Appl. Math. Comput. 147 (3) (2004) 855–862.
[21] F. Bu, Y. Wei, The algorithm for computing the Drazin inverses of two-variable polynomial matrices, Appl. Math. Comput. 147 (3) (2004) 805–836.
[22] J. Ji, An alternative limit expression of Drazin inverse and its application, Appl. Math. Comput. 61 (2–3) (1994) 151–156.
[23] J. Ji, A finite algorithm for the Drazin inverse of a polynomial matrix, Appl. Math. Comput. 130 (2002) 243–251.
[24] X. Sheng, G. Chen, Y. Gong, The representation and computation of generalized inverse $A^{(2)}_{T,S}$, J. Comput. Appl. Math. 213 (2008) 248–257.