Successive matrix squaring algorithm for computing outer inverses


Applied Mathematics and Computation 203 (2008) 19–29


Predrag S. Stanimirović*, Dragana S. Cvetković-Ilić

University of Niš, Department of Mathematics, Faculty of Science, P.O. Box 224, Višegradska 33, 18000 Niš, Serbia¹

Abstract

In this paper, we derive a successive matrix squaring (SMS) algorithm to approximate an outer generalized inverse with prescribed range and null space of a given matrix $A \in \mathbb{C}^{m\times n}_r$. We generalize the results from the papers [L. Chen, E.V. Krishnamurthy, I. Macleod, Generalized matrix inversion and rank computation by successive matrix powering, Parallel Computing 20 (1994) 297–311; Y. Wei, Successive matrix squaring algorithm for computing Drazin inverse, Appl. Math. Comput. 108 (2000) 67–75; Y. Wei, H. Wu, J. Wei, Successive matrix squaring algorithm for parallel computing the weighted generalized inverse $A^\dagger_{MN}$, Appl. Math. Comput. 116 (2000) 289–296], and obtain an algorithm for computing various classes of outer generalized inverses of $A$. Instead of the particular matrices used in these articles, we use an appropriate matrix $R \in \mathbb{C}^{n\times m}_s$, $s \le r$. Numerical examples are presented.

© 2008 Elsevier Inc. All rights reserved.

Keywords: Generalized inverse; Outer inverse; SMS algorithm; Full rank factorization; Matrix rank

1. Introduction and preliminaries

Let $\mathbb{C}^{m\times n}$ and $\mathbb{C}^{m\times n}_r$ denote the set of all complex $m\times n$ matrices and of all complex $m\times n$ matrices of rank $r$, respectively, and let $I_n$ denote the unit matrix of order $n$. By $A^*$, $\mathcal{R}(A)$, $\operatorname{rank}(A)$ and $\mathcal{N}(A)$ we denote the conjugate transpose, the range, the rank and the null space of $A \in \mathbb{C}^{m\times n}$. By $\operatorname{Re}z$ and $\operatorname{Im}z$ we denote the real and the imaginary part of a complex number $z$, respectively. For $A \in \mathbb{C}^{m\times n}$, the sets of inner and outer generalized inverses are defined, respectively, by
$$A\{1\} = \{X \in \mathbb{C}^{n\times m} \mid AXA = A\},\qquad A\{2\} = \{X \in \mathbb{C}^{n\times m} \mid XAX = X\}.$$

The set of all outer inverses with prescribed rank $s$ is denoted by $A\{2\}_s$, $0 \le s \le r = \operatorname{rank}(A)$. The symbols $A^-$ or $A^{(1)}$ stand for an arbitrary inner generalized inverse of $A$, and by $A^{(2)}$ we denote an arbitrary outer generalized inverse of $A$. Also, a matrix $X$ which satisfies
$$AXA = A \quad\text{and}\quad XAX = X$$
is called a reflexive g-inverse of $A$ and is denoted by $A^{(1,2)}$; the set of all reflexive g-inverses is denoted by $A\{1,2\}$. Subsequently, the sets of $\{1,2,3\}$- and $\{1,2,4\}$-inverses of $A$ are defined by
$$A\{1,2,3\} = A\{1,2\} \cap \{X \mid (AX)^* = AX\},\qquad A\{1,2,4\} = A\{1,2\} \cap \{X \mid (XA)^* = XA\}.$$
By $A^\dagger$ we denote the Moore–Penrose inverse of $A$, i.e. the unique matrix $A^\dagger$ satisfying
$$AA^\dagger A = A,\qquad A^\dagger A A^\dagger = A^\dagger,\qquad (AA^\dagger)^* = AA^\dagger,\qquad (A^\dagger A)^* = A^\dagger A.$$

* Corresponding author. E-mail addresses: [email protected] (P.S. Stanimirović), [email protected] (D.S. Cvetković-Ilić).
¹ Supported by Grant No. 144003 of the Ministry of Science, Technology and Development, Republic of Serbia.
0096-3003/$ - see front matter © 2008 Elsevier Inc. All rights reserved. doi:10.1016/j.amc.2008.04.037


For $A \in \mathbb{C}^{n\times n}$, the smallest nonnegative integer $k$ such that $\operatorname{rank}(A^{k+1}) = \operatorname{rank}(A^k)$ is called the index of $A$ and is denoted by $\operatorname{ind}(A)$. If $A \in \mathbb{C}^{n\times n}$ is a square matrix with $\operatorname{ind}(A) = k$, then the matrix $X \in \mathbb{C}^{n\times n}$ which satisfies the conditions
$$A^k X A = A^k,\qquad XAX = X,\qquad AX = XA$$
is called the Drazin inverse of $A$ and is denoted by $A^D$. When $\operatorname{ind}(A) = 1$, the Drazin inverse $A^D$ is called the group inverse and is denoted by $A^\#$.

Suppose that $M$ and $N$ are Hermitian positive definite matrices of order $m$ and $n$, respectively. Then there exists a unique matrix $X \in \mathbb{C}^{n\times m}$ such that
$$AXA = A,\qquad XAX = X,\qquad (MAX)^* = MAX,\qquad (NXA)^* = NXA.$$

The matrix $X$ is called the weighted Moore–Penrose inverse of $A$ and is denoted by $X = A^\dagger_{M,N}$. In particular, if $M = I_m$ and $N = I_n$, then $A^\dagger_{M,N} = A^\dagger$. If $A \in \mathbb{C}^{n\times m}$ and $W \in \mathbb{C}^{m\times n}$, then the unique solution $X \in \mathbb{C}^{n\times m}$ of the equations
$$(AW)^{k+1}XW = (AW)^k,\qquad XWAWX = X,\qquad AWX = XWA, \qquad (1.1)$$
where $k = \operatorname{ind}(AW)$, is called the W-weighted Drazin inverse of $A$ and is denoted by $A^{D,W}$.

If $A \in \mathbb{C}^{m\times n}_r$, $T$ is a subspace of $\mathbb{C}^n$ of dimension $t \le r$ and $S$ is a subspace of $\mathbb{C}^m$ of dimension $m - t$, then $A$ has a $\{2\}$-inverse $X$ such that $\mathcal{R}(X) = T$ and $\mathcal{N}(X) = S$ if and only if
$$AT \oplus S = \mathbb{C}^m,$$

in which case $X$ is unique and we denote it by $A^{(2)}_{T,S}$. It is well known that for $A \in \mathbb{C}^{m\times n}_r$, the Moore–Penrose inverse $A^\dagger$, the weighted Moore–Penrose inverse $A^\dagger_{M,N}$ and the W-weighted Drazin inverse $A^{D,W}$ can be represented by:

(i) $A^\dagger = A^{(2)}_{\mathcal{R}(A^*),\mathcal{N}(A^*)}$;
(ii) $A^\dagger_{M,N} = A^{(2)}_{\mathcal{R}(A^\sharp),\mathcal{N}(A^\sharp)}$, where $A^\sharp = N^{-1}A^*M$;
(iii) $A^{D,W} = (WAW)^{(2)}_{\mathcal{R}(A(WA)^k),\mathcal{N}(A(WA)^k)}$, where $k = \operatorname{ind}(WA)$.

Also, for $A \in \mathbb{C}^{n\times n}$, the Drazin inverse $A^D$ can be represented by
$$A^D = A^{(2)}_{\mathcal{R}(A^k),\mathcal{N}(A^k)},\quad \text{where } \operatorname{ind}(A) = k.$$

The following representations of $\{2,3\}$- and $\{2,4\}$-inverses with prescribed rank $s$ are restated from [11]:

Proposition 1.1. Let $A \in \mathbb{C}^{m\times n}_r$ and let $0 < s < r$ be a chosen integer. Then the following hold:

(a) $A\{2,4\}_s = \{(ZA)^\dagger Z \mid Z \in \mathbb{C}^{s\times m},\ ZA \in \mathbb{C}^{s\times n}_s\}$;
(b) $A\{2,3\}_s = \{Y(AY)^\dagger \mid Y \in \mathbb{C}^{n\times s},\ AY \in \mathbb{C}^{m\times s}_s\}$.

General representations for various classes of generalized inverses can be found in [4,8,10,12]. Some of these representations are restated here for the sake of completeness.

Proposition 1.2. Let $A \in \mathbb{C}^{m\times n}_r$ be an arbitrary matrix and let $A = PQ$ be a full-rank factorization of $A$. The following general representations hold for some classes of generalized inverses:
$$A\{2\}_s = \{F(GAF)^{-1}G \mid F \in \mathbb{C}^{n\times s},\ G \in \mathbb{C}^{s\times m},\ \operatorname{rank}(GAF) = s\},$$
$$A\{2\} = \bigcup_{s=0}^{r} A\{2\}_s,$$
$$A\{1,2\} = \{F(GAF)^{-1}G \mid F \in \mathbb{C}^{n\times r},\ G \in \mathbb{C}^{r\times m},\ \operatorname{rank}(GAF) = r\} = A\{2\}_r,$$
$$A\{1,2,3\} = \{F(P^*AF)^{-1}P^* \mid F \in \mathbb{C}^{n\times r},\ \operatorname{rank}(P^*AF) = r\},$$
$$A\{1,2,4\} = \{Q^*(GAQ^*)^{-1}G \mid G \in \mathbb{C}^{r\times m},\ \operatorname{rank}(GAQ^*) = r\},$$
$$A^\dagger = Q^*(P^*AQ^*)^{-1}P^*,$$
$$A^D = P_{A^l}(Q_{A^l}AP_{A^l})^{-1}Q_{A^l},\qquad A^l = P_{A^l}Q_{A^l},\qquad l \ge \operatorname{ind}(A).$$

For other important properties of generalized inverses see [1,2,6,13]. We will use the following well-known result:

Lemma 1.1 [7]. Let $M \in \mathbb{C}^{n\times n}$ and $\epsilon > 0$ be given. There is at least one matrix norm $\|\cdot\|$ such that
$$\rho(M) \le \|M\| \le \rho(M) + \epsilon, \qquad (1.2)$$
where $\rho(M)$ denotes the spectral radius of $M$.


The basic motivation for this paper were the papers of Chen et al. [3], Wei [16] and Wei et al. [18]. In [3], a successive matrix squaring (SMS) algorithm which approximates the Moore–Penrose inverse of a given matrix $A$ was considered, while in [16] and [18] the authors considered two variants of the SMS algorithm which approximate the Drazin inverse and the weighted Moore–Penrose inverse of $A$, respectively.

In this paper, we derive an SMS algorithm to approximate an outer generalized inverse with prescribed range and null space of a given matrix $A \in \mathbb{C}^{m\times n}_r$, based on successive squaring of an appropriate block matrix $T$. We generalize the results from [3,16,18]. Instead of the matrices $A^*$, $A^l$, $l \ge \operatorname{ind}(A)$, and $A^\sharp$, used to compose the matrix $T$ in these articles, we use a universal appropriate matrix $R \in \mathbb{C}^{n\times m}_s$, $0 \le s \le r$. Furthermore, we give an explicit form of the outer generalized inverse of $A$ corresponding to a chosen $R$. An explicit approximation of $\operatorname{rank}(X)$ is also obtained within the same iterative scheme, and we estimate the number of iterative steps necessary to attain a prescribed precision. Numerical examples in the last section show that the numerical approximation of $\operatorname{rank}(X)$ is important and that the number of SMS iterations is smaller than in the classical Euler–Knopp iterations.

2. Results

Let $A \in \mathbb{C}^{m\times n}_r$ and $R \in \mathbb{C}^{n\times m}_s$, $0 \le s \le r$, be given. The general iterative scheme used in this paper is
$$X_1 = Q,\qquad X_{k+1} = PX_k + Q,\quad k \in \mathbb{N}, \qquad (2.1)$$
where $P = I - \beta RA$, $Q = \beta R$ and $\beta$ is a relaxation parameter. The iterative process (2.1) can be accelerated by means of the $(m+n)\times(m+n)$ composite matrix $T$, partitioned in the block form
$$T = \begin{pmatrix} P & Q \\ 0 & I \end{pmatrix}. \qquad (2.2)$$
The improvement of (2.1) is achieved by computing the matrix power
$$T^k = \begin{pmatrix} P^k & \sum_{i=0}^{k-1}P^iQ \\ 0 & I \end{pmatrix}.$$
It is not difficult to see that the iterative scheme (2.1) gives $X_k = \sum_{i=0}^{k-1}P^iQ$. Hence, the matrix $X_k$ equals the upper-right block in $T^k$. In turn, $T^{2^k}$ can be computed by $k$ repeated squarings, that is,
$$T_0 = T,\qquad T_{i+1} = T_i^2,\quad i = 0,1,\ldots,k-1. \qquad (2.3)$$
It is clear that
$$T_k = T^{2^k} = \begin{pmatrix} P^{2^k} & \sum_{i=0}^{2^k-1}P^iQ \\ 0 & I \end{pmatrix}. \qquad (2.4)$$
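The squaring scheme (2.2)–(2.4) translates directly into code. The following Python/NumPy sketch is an illustration only (the paper's own implementation is in MATHEMATICA); the matrices A and R and the value of β in the demo are assumed data, with R chosen so that the limit is easy to check:

```python
import numpy as np

def sms(A, R, beta, k):
    """k successive squarings of the composite matrix T from (2.2);
    X_{2^k} is the upper-right n x m block of T^(2^k), cf. (2.4)."""
    m, n = A.shape
    P = np.eye(n) - beta * (R @ A)            # P = I - beta*R*A
    Q = beta * R                              # Q = beta*R
    T = np.block([[P, Q],
                  [np.zeros((m, n)), np.eye(m)]])
    for _ in range(k):                        # (2.3): T_{i+1} = T_i^2
        T = T @ T
    return T[:n, n:]

# Assumed demo data: a 3x2 matrix A and a rank-1 matrix R with RAR = R,
# so R itself is the outer inverse the iterations should reproduce.
A = np.array([[1., 0.], [0., 2.], [0., 0.]])
R = np.array([[1., 0., 0.], [0., 0., 0.]])
X = sms(A, R, beta=0.5, k=5)                  # 2^5 = 32 effective terms
```

Since the only nonzero eigenvalue of RA equals 1 here, any β in (0, 2) satisfies the convergence condition, and with β = 0.5 the error decays like 0.5 raised to the power 2^k.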

Now we state an auxiliary result.

Lemma 2.1. If $P \in \mathbb{C}^{n\times n}$ and $S \in \mathbb{C}^{n\times n}$ are such that $P = P^2$ and $PS = SP$, then
$$\rho(PS) \le \rho(S).$$

Proof. Suppose that $\lambda$ is an eigenvalue of $PS$ and $x$ is a corresponding eigenvector, $PSx = \lambda x$. Then $\lambda Px = \lambda x$, which implies $\lambda = 0$ or $Px = x$. If $Px = x$, then $Sx = SPx = PSx = \lambda x$, i.e. $\lambda$ is an eigenvalue of $S$. Hence, the set of eigenvalues of $PS$ is a subset of the set of eigenvalues of $S$ united with $\{0\}$, which implies $\rho(PS) \le \rho(S)$. □

The following theorem is the analogue of Theorem 3.1 of [15], which deals with the V-norm, defined as $\|B\|_V = \|BV\|_2$ for any $B \in \mathbb{C}^{n\times m}$, where $V$ is an invertible matrix such that $V^{-1}AGV$ is the Jordan form of $AG$. The method used in our proof is different from the one used in Theorem 3.1 of [15].

Theorem 2.1. Let $A \in \mathbb{C}^{m\times n}_r$ and $R \in \mathbb{C}^{n\times m}_s$, $0 \le s \le r$, be given such that
$$A\mathcal{R}(R) \oplus \mathcal{N}(R) = \mathbb{C}^m. \qquad (2.5)$$
The sequence of approximations
$$X_{2^k} = \sum_{i=0}^{2^k-1}(I - \beta RA)^i\,\beta R \qquad (2.6)$$


determined by the SMS algorithm (2.3) converges in the matrix norm $\|\cdot\|$ to the outer inverse $X = A^{(2)}_{\mathcal{R}(R),\mathcal{N}(R)}$ of $A$ if $\beta$ is a fixed real number such that
$$\max_{1\le i\le t}|1 - \beta\lambda_i| < 1, \qquad (2.7)$$
where $\operatorname{rank}(RA) = t$, $\lambda_i$, $i = 1,\ldots,t$, are the eigenvalues of $RA$, and $\|\cdot\|$ satisfies condition (1.2) from Lemma 1.1 for $M = I - \beta AR$. In the case of convergence we have the error estimate
$$\frac{\|X - X_{2^k}\|}{\|X\|} \le \max_{1\le i\le t}|1 - \beta\lambda_i|^{2^k} + O(\epsilon),\quad k \ge 0. \qquad (2.8)$$

Proof. From condition (2.5) it follows that the outer inverse $X = A^{(2)}_{\mathcal{R}(R),\mathcal{N}(R)}$ of $A$ exists. It is evident that $XAR = R$ and $R = RAX$. Since $\mathcal{R}(X_{2^k}) \subseteq \mathcal{R}(R)$, we have $XAX_{2^k} = X_{2^k}$. So we obtain
$$\|X - X_{2^k}\| = \|XAX - XAX_{2^k}\| = \|X(AX - AX_{2^k})\| \le \|X\|\cdot\|AX - AX_{2^k}\|. \qquad (2.9)$$
Now, let us prove that
$$AX - AX_k = (AX - AX_1)^k. \qquad (2.10)$$
We use mathematical induction. For $k = 1$, equality (2.10) is true. Suppose that (2.10) holds for $k = p-1$ and let us prove that it holds for $k = p$. Using $P = I - QA$, we have
$$AX - AX_p = AX - A(PX_{p-1} + Q) = AX - A(I - QA)X_{p-1} - AQ = AX - AX_{p-1} + AQAX_{p-1} - AQ.$$
Since $R = RAX$ implies $Q = QAX$, we get
$$AX - AX_p = AXAX - AXAX_{p-1} + AQAX_{p-1} - AQAX = (AX - AQ)(AX - AX_{p-1}) = (AX - AQ)^p = (AX - AX_1)^p.$$
Now, (2.9) and (2.10) imply
$$\|X - X_{2^k}\| \le \|X\|\cdot\|AX - AX_{2^k}\| \le \|X\|\cdot\|AX - AX_1\|^{2^k}.$$
If (2.7) is satisfied, we can choose $\epsilon$ such that $\max_{1\le i\le t}|1 - \beta\lambda_i| + \epsilon < 1$. For such $\epsilon$ and $M = I - \beta AR$, there exists a matrix norm $\|\cdot\|$ satisfying (1.2). Now
$$\frac{\|X - X_{2^k}\|}{\|X\|} \le \|AX - AX_1\|^{2^k} \le \Bigl(\max_{1\le i\le t}|1 - \beta\lambda_i| + \epsilon\Bigr)^{2^k}.$$
The last inequality follows by applying Lemma 2.1 with $P = AX$ and $S = I - AX_1 = I - \beta AR$, together with Lemma 1.1. □

Remark that a problem similar to that of Theorem 3.1 of [15] was discussed in [17], but in a more general setting: if $\{S_n(x)\}$ is a family of continuous real-valued functions on an open set $\Omega$ such that $\sigma((RA)|_T) \subset \Omega \subset (0,\infty)$ and $\lim_{n\to\infty}S_n(x) = 1/x$ uniformly on $\sigma((RA)|_T)$, then
$$A^{(2)}_{T,S} = \lim_{n\to\infty}S_n((RA)|_T)R$$
and
$$\frac{\|S_n((RA)|_T)R - A^{(2)}_{T,S}\|_V}{\|A^{(2)}_{T,S}\|_V} \le \max_{x\in\sigma((RA)|_T)}|S_n(x)x - 1| + O(\epsilon),\quad \epsilon > 0.$$

The generalization of this result of [17] to bounded linear operators on Banach spaces is given in [5].

Corollary 2.1. Under the assumptions and conditions of Theorem 2.1, the following hold:

(i) in the case $m = n$ and $R = A^l$, $l \ge \operatorname{ind}(A)$: $A^D = \lim_{k\to\infty}X_{2^k}$;
(ii) in the case $R = A^*$: $A^\dagger = \lim_{k\to\infty}X_{2^k}$;
(iii) in the case $R = A^\sharp = N^{-1}A^*M$: $A^\dagger_{M,N} = \lim_{k\to\infty}X_{2^k}$;
(iv) in the case $R = A(WA)^k$: $A^{D,W} = \lim_{k\to\infty}X_{2^k}$.

Corollary 2.2. In the case $\operatorname{rank}(R) = \operatorname{rank}(A)$, under the conditions of Theorem 2.1, we have $X \in A\{1,2\}$.

Proof. Using $\operatorname{rank}(X) = \operatorname{rank}(A)$, by Corollary 1 (p. 19, [1]) it follows that $X \in A\{1,2\}$. □
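Case (ii) of Corollary 2.1 can be illustrated with the underlying first-order iteration (2.1). The sketch below uses illustrative data, not an example from the paper; with R = A* the iterates tend to the Moore–Penrose inverse:

```python
import numpy as np

def sms_iterate(A, R, beta, steps):
    """Plain scheme (2.1): X_1 = Q, X_{k+1} = P X_k + Q."""
    n = A.shape[1]
    P = np.eye(n) - beta * (R @ A)
    Q = beta * R
    X = Q.copy()
    for _ in range(steps):
        X = P @ X + Q
    return X

# Assumed 3x2 test matrix; with R = A* the limit is the pseudoinverse.
A = np.array([[1., 2.], [3., 4.], [5., 6.]])
R = A.T                                  # R = A* in the real case
lams = np.linalg.eigvalsh(R @ A)         # RA is symmetric positive definite
beta = 1.0 / lams.max()                  # satisfies condition (2.7)
X = sms_iterate(A, R, beta, 5000)        # slow linear convergence
```

The plain iteration needs thousands of steps here, while the squaring form (2.3) would reach the same accuracy in about a dozen squarings; this is exactly the acceleration the SMS algorithm provides.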


Theorem 2.2. Let $A \in \mathbb{R}^{m\times n}_r$ and $R \in \mathbb{R}^{n\times m}_s$, $0 \le s \le r$, be real matrices and assume that $\beta$ satisfies condition (2.7). Then the sequence of real numbers defined by
$$r_{2^k} = n - \operatorname{Tr}((I - \beta RA)^{2^k}) = n - \operatorname{Tr}(P^{2^k}),\quad k \ge 0, \qquad (2.11)$$
converges to $\operatorname{rank}(RA)$.

Proof. It is sufficient to use
$$\operatorname{Tr}((I - \beta RA)^{2^k}) = n - t + \sum_{i=1}^{t}(1 - \beta\lambda_i)^{2^k}, \qquad (2.12)$$
where $t = \operatorname{rank}(RA) \le \operatorname{rank}(R)$. Therefore, using (2.12), we get
$$r_{2^k} = t - \sum_{i=1}^{t}(1 - \beta\lambda_i)^{2^k}. \qquad (2.13)$$
Now the proof can be completed using (2.7). □

Remark 2.1. Taking $R = A^*$ in Theorems 2.1 and 2.2 we get the results of Theorem 2 from [3]. Also, taking $R = A^l$, $l \ge \operatorname{ind}(A)$, and $R = A^\sharp$ in Theorem 2.1, respectively, we get the results from [16,18].

The next theorem presents an explicit representation of the outer inverse of $A$ which is the limit of the sequence of approximations (2.1), for given $A$ and $R$.

Theorem 2.3. Assume that $A \in \mathbb{C}^{m\times n}_r$. Consider an arbitrary matrix $R \in \mathbb{C}^{n\times m}_s$, $0 \le s \le r$, and its full-rank factorization $R = FG$, $F \in \mathbb{C}^{n\times s}_s$, $G \in \mathbb{C}^{s\times m}_s$, such that $GAF$ is invertible. The sequence of approximations
$$X_{2^k} = \sum_{i=0}^{2^k-1}(I - \beta RA)^i\,\beta R \qquad (2.14)$$

determined by the SMS algorithm (2.3) converges in the matrix norm to the outer inverse $X = F(GAF)^{-1}G$ of $A$, if $\beta$ is a fixed real number satisfying
$$\max_{1\le i\le s}|1 - \beta\lambda_i| < 1, \qquad (2.15)$$
where $\lambda_i$, $i = 1,\ldots,s$, are the eigenvalues of $FGA$. In the case of convergence we have the error estimate
$$\frac{\|X - X_{2^k}\|}{\|X\|} \le \max_{1\le i\le s}|1 - \beta\lambda_i|^{2^k} + O(\epsilon),\quad k \ge 0, \qquad (2.16)$$
where the matrix norm $\|\cdot\|$ satisfies condition (1.2) for $M = I - \beta AR$.

Proof. Since the matrices $F$ and $G$ satisfy $\operatorname{rank}(GAF) = \operatorname{rank}(F) = \operatorname{rank}(G)$, according to Theorem 1.3.8 (p. 33, [13,14]) we conclude that $X = F(GAF)^{-1}G$ is a $\{2\}$-inverse of $A$ with range $\mathcal{R}(X) = \mathcal{R}(F) = V$ and null space $\mathcal{N}(X) = \mathcal{N}(G) = U$, i.e.
$$X = A^{(2)}_{V,U} = A^{(2)}_{\mathcal{R}(F),\mathcal{N}(G)}.$$
Also, the conditions $AV \cap U = \{0\}$ and $AV \oplus U = \mathbb{C}^m$ for the existence of $X$ are ensured (Theorem 1.3.8, p. 33, [13]). On the other hand, since
$$\operatorname{rank}(R) = s = \operatorname{rank}(FG) = \operatorname{rank}(F) = \operatorname{rank}(G),$$

we obtain $\mathcal{R}(F) = \mathcal{R}(R)$ and $\mathcal{N}(G) = \mathcal{N}(R)$, which implies $X = A^{(2)}_{\mathcal{R}(R),\mathcal{N}(R)}$. In view of (2.15), the conditions of Theorem 2.1 are satisfied. Therefore, the sequence of approximations (2.14) determined by the SMS algorithm (2.3) converges to $X = F(GAF)^{-1}G$ in the matrix norm satisfying (1.2) for $M = I - \beta AR$. □

Corollary 2.3. Assume that $A \in \mathbb{R}^{m\times n}_r$ and $R \in \mathbb{R}^{n\times m}_s$, $0 \le s \le r$, are two arbitrarily chosen real matrices. Let $R = FG$, $F \in \mathbb{C}^{n\times s}_s$, $G \in \mathbb{C}^{s\times m}_s$, be a full-rank factorization of $R$ such that $GAF$ is invertible. The sequence of real numbers defined by
$$r_{2^k} = n - \operatorname{Tr}((I - \beta RA)^{2^k}) = n - \operatorname{Tr}(P^{2^k}),\quad k \ge 0, \qquad (2.17)$$
converges to $\operatorname{rank}(X)$.

Proof. According to Theorem 2.2, $r_{2^k} \to \operatorname{rank}(RA)$. Since
$$\operatorname{rank}(RA) = \operatorname{rank}(GAF) = s = \operatorname{rank}(R) = \operatorname{rank}(X),$$
the proof is complete. □
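The rank estimate of Theorem 2.2 and Corollary 2.3 is essentially free, because the powers $P^{2^k}$ are produced by the squaring anyway. A minimal sketch, with an assumed test matrix rather than data from the paper:

```python
import numpy as np

# r_{2^k} = n - Tr(P^(2^k)) from (2.11)/(2.17) tends to rank(RA).
A = np.array([[1., 0., 0.],
              [0., 2., 0.],
              [0., 0., 0.]])        # rank(A) = 2 (assumed demo data)
R = A.T                             # rank(RA) = 2
beta = 0.4                          # |1 - beta*lam| < 1 for lam in {1, 4}
n = A.shape[1]
P = np.eye(n) - beta * (R @ A)
for _ in range(6):                  # P <- P^2, i.e. P^(2^6)
    P = P @ P
r = n - np.trace(P)                 # numerical approximation of rank(RA)
```

Rounding the accumulated value gives the integer rank; the later numerical examples show why this by-product is valuable when floating-point rank routines miscount.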


Corollary 2.4. Consider $A \in \mathbb{C}^{m\times n}_r$ and an arbitrarily chosen matrix $R \in \mathbb{C}^{n\times m}_s$, $s \le r$. Assume that $R = FG$, $F \in \mathbb{C}^{n\times s}_s$, $G \in \mathbb{C}^{s\times m}_s$, is a full-rank factorization of $R$. Also, suppose that condition (2.15) of Theorem 2.3 is valid and that $A = PQ$ is a full-rank factorization of $A$. If the sequence of approximations $X_{2^k}$ is defined by (2.14), the following hold:

(a) $\{\lim_{k\to\infty}X_{2^k} \mid F \in \mathbb{C}^{n\times s}_s,\ G \in \mathbb{C}^{s\times m}_s,\ \operatorname{rank}(GAF) = s,\ 0 \le s \le r\} \subseteq A\{2\}$;
(b) $\{\lim_{k\to\infty}X_{2^k} \mid F \in \mathbb{C}^{n\times r},\ G \in \mathbb{C}^{r\times m},\ \operatorname{rank}(GAF) = r\} \subseteq A\{1,2\}$;
(c) $\{\lim_{k\to\infty}X_{2^k} \mid F \in \mathbb{C}^{n\times r},\ G = P^*,\ \operatorname{rank}(GAF) = r\} \subseteq A\{1,2,3\}$;
(d) $\{\lim_{k\to\infty}X_{2^k} \mid F = Q^*,\ G \in \mathbb{C}^{r\times m},\ \operatorname{rank}(GAF) = r\} \subseteq A\{1,2,4\}$.

Proof. (a) If the matrices $F \in \mathbb{C}^{n\times s}_s$ and $G \in \mathbb{C}^{s\times m}_s$ satisfy $\operatorname{rank}(GAF) = s$, then applying Theorem 2.3 we conclude that $X = F(GAF)^{-1}G$ is the limit of the sequence of approximations (2.14). Using the general representation of outer inverses from Proposition 1.2, we immediately obtain $X \in A\{2\}$. Parts (b)–(d) can be proved using Theorem 2.3 and the general representations of pseudoinverses from Proposition 1.2. □

Corollary 2.5. Consider a matrix $A \in \mathbb{C}^{m\times n}_r$ and an integer $s$ such that $0 \le s \le r$. Then the following hold:

(a) $\bigl\{\lim_{k\to\infty}Y\sum_{i=0}^{2^k-1}(I - \beta(AY)^*AY)^i\,\beta(AY)^* \bigm| Y \in \mathbb{C}^{n\times s},\ \operatorname{rank}(AY) = s\bigr\} \subseteq A\{2,3\}_s$;
(b) $\bigl\{\lim_{k\to\infty}\bigl[\sum_{i=0}^{2^k-1}(I - \beta(ZA)^*ZA)^i\,\beta(ZA)^*\bigr]Z \bigm| Z \in \mathbb{C}^{s\times m},\ \operatorname{rank}(ZA) = s\bigr\} \subseteq A\{2,4\}_s$.

Proof. Follows from Corollary 2.1, part (ii), and the general representations of the sets $A\{2,3\}_s$ and $A\{2,4\}_s$ given in Proposition 1.1. □

In the following theorem we investigate the limits of the relaxation parameter $\beta$ which ensure the convergence of the sequence $X_{2^k}$. The eigenvalues of $RA$ are, in general, complex numbers.

Theorem 2.4. Let $A \in \mathbb{C}^{m\times n}_r$ and $R \in \mathbb{C}^{n\times m}_s$, $0 \le s \le r$, be given. Assume that $\operatorname{rank}(RA) = t$ and $\lambda_i$, $i = 1,\ldots,t$, are the eigenvalues of $RA$. Then the condition
$$\max_{1\le i\le t}|1 - \beta\lambda_i| < 1 \qquad (2.18)$$
is equivalent to the following:

(a) for each $i \in \{1,\ldots,t\}$, $\operatorname{Re}\lambda_i > 0$, and the relaxation parameter $\beta$ satisfies
$$0 < \beta < 2\min\Bigl\{\frac{\operatorname{Re}\lambda_i}{|\lambda_i|^2} \Bigm| i = 1,\ldots,t,\ |\lambda_i| \ne 0\Bigr\}, \qquad (2.19)$$
or
(b) for each $i \in \{1,\ldots,t\}$, $\operatorname{Re}\lambda_i < 0$, and the relaxation parameter $\beta$ satisfies
$$2\max\Bigl\{\frac{\operatorname{Re}\lambda_i}{|\lambda_i|^2} \Bigm| i = 1,\ldots,t,\ |\lambda_i| \ne 0\Bigr\} < \beta < 0. \qquad (2.20)$$

Proof. According to (2.18), for each $i \in \{1,\ldots,t\}$ we get
$$|1 - \beta\lambda_i| = \sqrt{(1 - \beta\operatorname{Re}\lambda_i)^2 + (\beta\operatorname{Im}\lambda_i)^2} < 1.$$
This implies
$$\beta(\beta|\lambda_i|^2 - 2\operatorname{Re}\lambda_i) < 0. \qquad (2.21)$$
There are two possible cases:

(a) If $\beta > 0$, then $\beta|\lambda_i|^2 - 2\operatorname{Re}\lambda_i < 0$, so $\operatorname{Re}\lambda_i > 0$ for every $i \in \{1,\ldots,t\}$ and $\beta < 2\operatorname{Re}\lambda_i/|\lambda_i|^2$ for every $\lambda_i$ such that $|\lambda_i| \ne 0$.
(b) If $\beta < 0$, then $\beta|\lambda_i|^2 - 2\operatorname{Re}\lambda_i > 0$, so $\operatorname{Re}\lambda_i < 0$ for every $i \in \{1,\ldots,t\}$ and $\beta > 2\operatorname{Re}\lambda_i/|\lambda_i|^2$ for every $\lambda_i$ such that $|\lambda_i| \ne 0$. □

Now we look for practical values of the parameter $\beta$. Let us denote $m_{Re} = \min\{\operatorname{Re}\lambda_1,\ldots,\operatorname{Re}\lambda_t\}$, $M_{Re} = \max\{\operatorname{Re}\lambda_1,\ldots,\operatorname{Re}\lambda_t\}$ and $(M_{Im})^2 = \max\{(\operatorname{Im}\lambda_1)^2,\ldots,(\operatorname{Im}\lambda_t)^2\}$. In accordance with part (a) of Theorem 2.4 and condition (2.19), in the case $\operatorname{Re}\lambda_i > 0$ for every $i \in \{1,\ldots,t\}$, we can use the following value for $\beta$:


$$\beta^C_{opt} = \frac{m_{Re}}{(M_{Re})^2 + (M_{Im})^2}. \qquad (2.22)$$
Similarly, according to part (b) of Theorem 2.4 and condition (2.20), in the case $\operatorname{Re}\lambda_i < 0$ for every $i \in \{1,\ldots,t\}$, we can use
$$\beta^C_{opt} = \frac{M_{Re}}{(m_{Re})^2 + (M_{Im})^2}. \qquad (2.23)$$

Theorem 2.5. Let $A \in \mathbb{C}^{m\times n}_r$ and $R \in \mathbb{C}^{n\times m}_s$, $0 \le s \le r$, be given. Also, suppose that $\operatorname{rank}(RA) = t$ and $\lambda_i$, $i = 1,\ldots,t$, are the eigenvalues of $RA$.

(a) If $\operatorname{Re}\lambda_i > 0$ for every $i \in \{1,\ldots,t\}$, the following statements are valid:
(i) For $\beta^C_{opt}$ defined in (2.22), the iterative sequence (2.1) converges.
(ii) The following error estimate holds:
$$\frac{\|X - X_{2^k}\|}{\|X\|} \le (1 - \beta^C_{opt}\,m_{Re})^{2^{k-1}} + O(\epsilon).$$
(iii) The number $k$ of iterations required to attain the accuracy
$$\frac{\|X - X_{2^k}\|}{\|X\|} \le \delta \qquad (2.24)$$
is estimated by
$$k \ge \bigl\lceil 2 - F\bigl((1 - \beta^C_{opt}\,m_{Re})^2\bigr) + F(\delta) \bigr\rceil,\qquad 1 - \beta^C_{opt}\,m_{Re} \ne 0, \qquad (2.25)$$
where $\lceil x\rceil$ gives the smallest integer greater than or equal to $x$ and
$$F(x) = \log_2(-\ln(x)). \qquad (2.26)$$

(b) In the case $\operatorname{Re}\lambda_i < 0$ for every $i \in \{1,\ldots,t\}$, we obtain analogous results:
(i) For $\beta^C_{opt}$ defined in (2.23), the iterative sequence (2.1) converges.
(ii) The following error estimate holds:
$$\frac{\|X - X_{2^k}\|}{\|X\|} \le (1 - \beta^C_{opt}\,M_{Re})^{2^{k-1}} + O(\epsilon).$$
(iii) The number of iterations $k$ required to achieve the accuracy defined in (2.24) is an integer satisfying
$$k \ge \bigl\lceil 2 - F\bigl((1 - \beta^C_{opt}\,M_{Re})^2\bigr) + F(\delta) \bigr\rceil,\qquad 1 - \beta^C_{opt}\,M_{Re} \ne 0. \qquad (2.27)$$

Proof. We prove part (a) of the theorem. For the sake of simplicity, we use the notation $z = M_{Re} + i\,M_{Im}$.

(i) It is evident that $\beta^C_{opt}$ satisfies condition (2.19) from part (a) of Theorem 2.4, so by Theorem 2.1 the iterative sequence defined by (2.1) converges.
(ii) One can verify that
$$|1 - \beta^C_{opt}\lambda_i| = \sqrt{1 - 2\frac{m_{Re}}{|z|^2}\operatorname{Re}\lambda_i + \frac{(m_{Re})^2}{|z|^4}|\lambda_i|^2}.$$
Therefore
$$\max_{1\le i\le t}|1 - \beta^C_{opt}\lambda_i| \le \sqrt{1 - 2\frac{m_{Re}}{|z|^2}m_{Re} + \frac{(m_{Re})^2}{|z|^4}|z|^2} = \sqrt{1 - \beta^C_{opt}\,m_{Re}}.$$
Hence, the error estimate follows from (2.8) in Theorem 2.1.
(iii) The number of iterative steps can be derived from
$$(1 - \beta^C_{opt}\,m_{Re})^{2^{k-1}} = \bigl((1 - \beta^C_{opt}\,m_{Re})^2\bigr)^{2^{k-2}} \le \delta.$$

Part (b) can be proved in a similar way. □
Remark 2.2. (1) In the both cases, when Reki > 0, for each i 2 f1; . . . ; tg or Reki < 0, for each i 2 f1; . . . ; tg, using bCopt we get an exact outer inverse X of A in the first iteration if Imki ¼ 0, for every i 2 f1; . . . ; tg and mRe ¼ MRe. If Reki > 0; i ¼ 1; . . . ; t and 1  bCopt  mRe ¼ 0 or Reki < 0; i ¼ 1; . . . ; t and 1  bCopt  MRe ¼ 0 the sufficient number of iterative steps is k ¼ 1.

P.S. Stanimirovic´, D.S. Cvetkovic´-Ilic´ / Applied Mathematics and Computation 203 (2008) 19–29

26

(2) Because of the accumulation of round off errors in the iterative steps of SMS method, we recommend to use the minimal number of steps from (2.25) or (2.27). Therefore, in the case Reki > 0; i ¼ 1; . . . ; t and 1  bCopt  mRe 6¼ 0 or in the case Reki < 0; i ¼ 1; . . . ; t and 1  bCopt  MRe 6¼ 0 we have ( d2  Fðð1  bCopt  mReÞ2 Þ þ FðdÞe; Reki > 0; i ¼ 1; . . . ; t; ð2:28Þ k¼ d2  Fðð1  bCopt  MReÞ2 Þ þ FðdÞe; Reki < 0; i ¼ 1; . . . ; t: 3. Numerical examples The implementation of the SMS method described in Theorem 2.1 is written in the programming package MATHEMATICA. About the package see, for example [20]. In the next examples we use b ¼ bCopt defined in (2.22) or (2.23). Example 3.1. In this example we get exact outer inverse X of A in the first iteration as we note in the part (1) of Remark 2.2. Consider the following 6  5 matrix of rank 4, investigated in [9]: 2 3 1 2 3 4 1 61 3 4 6 27 6 7 6 7 62 3 4 5 37 6 7: A¼6 7 63 4 5 6 47 6 7 44 5 6 7 65 6

6

7

7

8

In accordance with Theorem 2.3, we can choose appropriate matrices F and G in order to generate {2}-inverses FðGAFÞ1 G of A. The matrices F and G must satisfy rankðFGÞ ¼ s 6 4, and dimensions of the matrix FG must be 6  5. For example, one can choose F and G using the following pattern:   Fs ; G ¼ ½ Is 0 ; F¼ 0 where F s As ¼ Is and As is the principal minor which contains first s rows and first s columns of A. Let us choose s ¼ 2. In this case the block F 2 is equal to   3 2 : F2 ¼ 1 1 Applying (2.22), we get bCopt ¼ 1. Both nonzero eigenvalues of RA are equal to 1, so in this case is mRe ¼ MRe ¼ 1 and Imk1 ¼ Imk2 ¼ 0. As it is claimed in Remark 2.2, part ð1Þ, the outer inverse X1 is obtained by SMS algorithm (2.14) immediately, in the first iteration. Corresponding {2}-inverse X1 ¼ FðGAFÞ1 G ¼ FG of rank 2 is equal to 2 3 3 2 0 0 0 0 6 7 6 1 1 0 0 0 0 7 6 7 X1 ¼ X 21 ¼ 6 0 0 0 0 07 6 0 7: 6 7 0 0 0 0 05 4 0 0 0 0 0 0 0 Example 3.2. Consider the matrix A from Example 3.1, and choose the following matrices F 2 C52 and G 2 C26 from [9]: 2

0

6 62 6 F¼6 63 6 45 1

0

3

7 17 7 27 7; 7 35 0





0

1

0

1

0

1

1

0

1

0

1

0

 :

ð3:1Þ

Since rankðFGÞ ¼ 2 corresponding {2}-inverse X1 ¼ FðGAFÞ1 G for A of rank 2 is equal to 2 3 0 0 0 0 0 0 6 7 19 21 19 21 19 7 6 21 7 1 6 ð2Þ 6 60 A ¼ X1 ¼ 46 60 46 60 46 7 7: 174 6 6 7 27 39 27 39 27 5 4 39 102

84

102 6

84

102

84

Let us choose the precision d ¼ 10 . According to heuristic (2.22) the parameter b is equal to bCopt ¼ 9:208888  106 . In accordance with (2.28), the minimal number of iterative steps is equal to k ¼ 23. An application of iterations (2.14) produces the following approximation of Að2Þ ¼ X1 after 23 iterations:

P.S. Stanimirovic´, D.S. Cvetkovic´-Ilic´ / Applied Mathematics and Computation 203 (2008) 19–29

2

X 223

0: 6 0:12069 6 6 ¼6 6 0:344828 6 4 0:224138 0:586207

0:

0:

0:

0:

0:109195

0:12069

0:109195

0:12069

0:264368

0:344828

0:264368

0:344828

0:155172

0:224138

0:155172

0:224138

0:482759

0:586207

0:482759

0:586207

ð2Þ

0:

27

3

7 0:109195 7 7 0:264368 7 7: 7 0:155172 5 0:482759

ð2Þ

The relative error kA  X 223 k=kA k ¼ kX1  X 223 k=kX1k, where the matrix norm k  k represents the maximal singular value of the argument, is equal to 3:40765  1011 . Also, obtained r 223 ¼ 2., which is the exact approximation of rankðRÞ ¼ 2. Now, we generate a {1, 2, 3}-inverse of A. Applying MATHEMATICA function QRDecomposition, we obtain the following full-rank factorization A ¼ PQ for A: 2 3 0:122169 0:339377 0:753171 0:229416 6 0:122169 0:7528 0:393966 0:114708 7 6 7 6 7 6 0:244339 0:265331 0:382379 0:688247 7 7; P¼6 6 0:366508 0:191285 0:0115872 0:573539 7 7 6 7 6 4 0:488678 0:117239 0:359205 0:344124 5 0:733017 0:444275 0:046349 0:114708 2 3 8:18535 9:65139 11:7283 13:1943 11:3618 6 0: 2:41883 3:64059 6:05942 0:555344 7 6 7 Q ¼6 7: 4 0: 0: 0:440315 0:440315 0:625711 5 0:

0:

0:

0:

0:458831

T

Using G ¼ P and the following, randomly chosen n  r ¼ 5  4 matrix 3 2 1 2 3 4 7 6 3 4 17 6 2 6 7 7 F¼6 6 1 1 2 3 7; 6 7 1 2 3 5 4 0 1 2 1 3 we obtain 2

2:14277

6 6 4:04574 6 R ¼ FG ¼ 6 6 0:356548 6 4 1:85521 0:865847

0:0129617

4:67513

1:51032

1:02204

0:812165

3:50243

0:779682

0:236379

2:00703

2:31983

2:25524

0:291955

1:19661

1:56531

1:55251

1:86802

0:889679

3:22212

0:959952

1:39632

0:475318

3

7 0:203896 7 7 0:540167 7 7: 7 0:881097 5 0:453308

According to (2.28), the optimal number of iterative steps which ensures the precision d ¼ 106 is equal to k ¼ 20. In terms of (2.22) we compute bCopt ¼ 0:000190063. Corresponding {1, 2, 3}-inverse of A, obtained after 20 iterations, is 2 3 0:152778 0:541667 1:06944 0:722222 0:194444 0:0138889 7 6 2:29167 6:56944 4:47222 1:44444 0:736111 7 6 0:652778 7 6 X 220 ¼ 6 1:20833 1:18056 0:277778 0:694444 0:236111 7 7 6 1:59722 7 6 1:47222 0:944444 0:486111 5 4 0:597222 0:0416667 1:81944 0:5

0:25

1:5

1:25

0:75

0:25

and the approximation of its rank is equal to r 220 ¼ 4. If we apply standard MATHEMATICA function MatrixRank[] to compute the rank of X, we obtain MatrixRank[X]=4, which is approximated by r 220 . Relative error estimate is kAð1;2;3Þ  X 220 k=kAð1;2;3Þ k ¼ kX1  X 220 k=kX1k ¼ 1:39537  1011 . In the sequel we verify the recommendation that the number of iterations should be as small as possible. After k ¼ 45 > 20 iterations we obtain the following {1, 2, 3}-inverse of A 2 3 0:151176 0:540797 1:06935 0:722722 0:195448 0:0135242 7 6 2:2908 6:56935 4:47272 1:44545 0:736476 7 6 0:651176 7 6 X 245 ¼ 6 1:2092 1:18065 0:277278 0:695448 0:236476 7 7: 6 1:59882 7 6 1:47272 0:945448 0:486476 5 4 0:598824 0:040797 1:81935 0:5 0:25 1:5 1:25 0:75 0:25 Also, the approximation of rankðX 245 Þ is equal to r245 ¼ 4:00037. MATHEMATICA function MatrixRank[] produces miscount result: MatrixRank[X245 ]=5. Since MatrixRank[R]=4, from Corollary 2.3 we conclude that rank of X is equal to 4, which is approximated by r 245 . Therefore, we also show that the result proved in Corollary 2.3 is important.

P.S. Stanimirovic´, D.S. Cvetkovic´-Ilic´ / Applied Mathematics and Computation 203 (2008) 19–29

28

The Moore–Penrose inverse of A is equal to 2 4 1 8 6 8 15 36 6 16 PseudoInverse ½A ¼ Ay ¼ 6 10 13 26 86 6 3 2 4 2 4

12

2

7

5

23

5

15

1

1

1

10

6

3

3

7 3 7 7 1 7 7; 7 1 5 2

where Pseudoinverse[A] is standard MATHEMATICA function for computing Ay (see, for example [20]). On the other hand, applying iterations (2.14) in the case R ¼ FG ¼ A , we obtain an approximation of Ay . In accordance with (2.28), the minimal number of iterative steps, corresponding to d ¼ 106 , is equal to k ¼ 36, for bCopt ¼ 4:318278  108 . The approximation of the Moore–Penrose inverse of A is 2 3 0:500002 0:125004 0:999991 0:874994 0:624999 0:374999 7 6 1: 1:875 4:50001 2:87501 0:625001 0:375001 7 6 7 6 6 X 236 ¼ 6 1:25 1:625 3:24999 1:87499 0:124999 0:124999 7 7: 7 6 0:125001 0:125001 5 4 0:249997 0:374996 0:249991 0:124994 0:5 0:25 1:5 1:25 0:75 0:25 In this example we again show that information r2k ! rankðXÞ is important. An application of the standard MATHEMATICA function MatrixRank[X236 ] gives the result 5. However, since MatrixRank[R]=4, we conclude that MATHEMATICA is unable to compute the exact rank of X 236 . On the other hand, we obtain r236 ¼ 4., which is the desired value in accordance with Corollary 2.3. As a confirmation, we observe that MatrixRank[Pseudoinverse[A]]=4. Relative error estimate is kAy  X 236 k=kAy k ¼ 3:12183  106 . Example 3.3. Consider the 2 1 1 0 6 1 1 0 6 6 6 1 1 1 A¼6 6 1 1 1 6 6 4 1 1 1 1

1

0

following matrix: 3 0 0 0 0 0 0 7 7 7 1 0 0 7 7: 1 0 0 7 7 7 0 2 1 5 1

2

1

2

Using R ¼ A and b ¼ 0:00137174, derived from (2.22), SMS method produces well known approximation with the precision d ¼ 106 of the Drazin inverse from [19] after k ¼ 15 iterations: 2

X 215

0:25 6 0:25 6 6 6 0: ¼6 6 0: 6 6 4 0: 0:

0:25

0:

0:

0:

0:25

0:

0:

0:

0:

0:25

0:25

0:

0:

0:25

0:25

0:

0

3

0:

0:416667

0:583333

0:666667

7 7 7 7 0: 7: 7 0: 7 7 0:333333 5

0:

0:583333

0:416667

0:333333

0:666667

The exact Drazin inverse is equal 2 1  14 0 0 4 6 1 1 64 0 0 4 6 6 0 1  14 0 6 D 4 A ¼6 1 6 0 0  14 4 6 6 0 5 7 0  12  12 4 0

0:

7  12

5  12

0:

to 0 0 0 0 2 3 1 3

0

3

7 07 7 07 7 7: 07 7 1 7 3 5 2 3

The relative error kAD  X 215 k=kAD k, where the matrix norm k  k represents the maximal singular value of the argument, is equal to 2:67633  1014 . Let us mention that in [19] Euler-Knopp representation, corresponding to iterations (2.6) in the case R ¼ Al , l P indðAÞ, gives an analogous representation in 29697 iterations. Rank of the matrix R can be computed using r 215 ¼ 4. Example 3.4. In this example we verify SMS algorithm (2.14) in the case Reki < 0, for each i ¼ 1; . . . ; t. For this purpose, we accomplished the SMS algorithm for the matrix A, where A is defined as in Example 3.1. We used bCopt ¼ 9:208888  106 defined by (2.23) and the number of iterative steps k ¼ 23 conditioned by the second case of (2.28). Corresponding outer inverse, generated by the SMS algorithm is equal to X 223 , where X 223 is the outer inverse of A generated in Example 3.1.

P.S. Stanimirovic´, D.S. Cvetkovic´-Ilic´ / Applied Mathematics and Computation 203 (2008) 19–29

29

4. Conclusion SMS algorithm is investigated in the papers [3,16,18]. In these articles SMS algorithm is based on successive squaring of the appropriate composite matrix T in the form (2.2). In the paper [3] using that P ¼ I  bA A, Q ¼ bA the Moore–Penrose inverse is derived. The weighted Moore–Penrose inverse is derived in [18], using P ¼ I  bA] A, Q ¼ bA] , and finally the Drazin inverse is derived in [16] applying the SMS iterative process with the matrices P ¼ I  bAl A, Q ¼ bAl . In the present article, we introduced and investigated SMS iterative scheme based on the matrices P ¼ I  bRA and Q ¼ bR. In this way, we derive an universal SMS algorithm containing all previous results as partial cases. Moreover, we prove that our method can be used to approximate various outer inverses of a given matrix A 2 Cmn . Properties of the genr erated outer inverse X corresponding to chosen matrix R are examined. Moreover, in the case when a full-rank factorization R ¼ FG is known, we give an explicit form of an outer generalized inverse of a A corresponding to the chosen R. An explicit approximation of rankðRÞ and rankðXÞ is also given. We implemented our algorithm in the programming package MATHEMATICA, and many numerical examples are presented in the last section. In these examples we show that sometimes MATHEMATICA is unable to compute the exact value of rankðXÞ, so its iterative approximation is important. References [1] A. Ben-Israel, T.N.E. Greville, Generalized Inverses: Theory and Applications, second ed., Springer, 2003. [2] S.R. Caradus, Generalized Inverses and Operator Theory, Queen’s Paper in Pure and Applied Mathematics, Queen’s University, Kingston, Ontario, 1978. [3] L. Chen, E.V. Krishnamurthy, I. Macleod, Generalized matrix inversion and rank computation by successive matrix powering, Parallel Computing 20 (1994) 297–311. [4] D.S. Djordjevic´, P.S. Stanimirovic´, General representations of pseudoinverses, Matematicˇki Vesnik 51 (1999) 69–76. [5] D.S. 
Djordjevic´, P.S. Stanimirovic, Y. Wei, The representation and approximation of outer generalized inverses, Acta Mathematica Hungar 104 (2004) 1–26. [6] R.E. Harte, Invertibility and singularity for bounded linear operator, Dekker, 1988. [7] R.A. Horn, C.R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, New York, New Rochelle, Melbourne, Sydney, 1986. [8] C.R. Rao, S.K. Mitra, Generalized Inverse of Matrices and its Applications, John Wiley & Sons Inc., New York, London, Sydney, Toronto, 1971. [9] P.S. Stanimirovic´, S. Bogdanovic´, M. C´iric´, Adjoint mappings and inverses of matrices, Algebra Colloquium 13 (3) (2006) 421–432. [10] P.S. Stanimirovic´, Block representations of {2}, {1, 2} inverses and the Drazin inverse, Indian Journal Pure Applied Mathematics 29 (1998) 1159–1176. [11] P.S. Stanimirovic´, Applications of hyper-power method for computing matrix products, Publikacije Elektrotehnicˇkog fakulteta 15 (2004) 13–25. [12] P.S. Stanimirovic´, D.S. Djordjevic´, Full-rank and determinantal representation of the Drazin inverse, Linear Algebra and its Applications 311 (2000) 31–51. [13] G. Wang, Y. Wei, S. Qiao, Generalized Inverses: Theory and Computations, Science Press, 2003. ð1;2Þ ð2Þ [14] G. Wang, The representations of the generalized inverses ðA BÞT;S and ðA BÞT;S and some applications, Journal of Shanghau University (Natural Sciences) 24 (1995) 1–6. ð2Þ [15] Y. Wei, A characterization and representation for the generalized inverse AT;S and its applications, Linear Algebra and its Applications 280 (1998) 87–96. [16] Y. Wei, Successive matrix squaring algorithm for computing Drazin inverse, Applied Mathematics and Computation 108 (2000) 67–75. ð2Þ [17] Y. Wei, H. Wu, The representation and approximation for the generalized inverse AT;S , Applied Mathematics and Computation 135 (2003) 263–276. [18] Y. Wei, H. Wu, J. 
Wei, Successive matrix squaring algorithm for parallel computing the weighted generalized inverse AyMN , Applied Mathematics and Computation 116 (2000) 289–296. [19] Y. Wei, H. Wu, The representation and approximation for Drazin inverse, Journal of Computational and Applied Mathematics 126 (2000) 417–423. [20] S. Wolfram, The Mathematica Book, fourth ed., Wolfram Media/Cambridge University Press, 1999.