An accelerated iterative method for computing weighted Moore–Penrose inverse

Applied Mathematics and Computation 222 (2013) 365–371


F. Soleymani (a), Predrag S. Stanimirović (b,*,1), Malik Zaka Ullah (c)

(a) Department of Mathematics, Zahedan Branch, Islamic Azad University, Zahedan, Iran
(b) University of Niš, Faculty of Sciences and Mathematics, Višegradska 33, 18000 Niš, Serbia
(c) Department of Mathematics, Faculty of Sciences, King Abdulaziz University, P.O. Box 80203, Jeddah 21589, Saudi Arabia

Keywords: Schulz method; Weighted Moore–Penrose inverse; Weighted singular value decomposition; Iterative methods; Initial approximation

Abstract. The goal of this paper is to present an accelerated iterative method for computing the weighted Moore–Penrose inverse. An analysis of convergence is included to show that the proposed scheme has sixth-order convergence. Using a proper initial matrix, a sequence of iterates is produced which converges to the weighted Moore–Penrose inverse. Numerical experiments are reported to show the efficiency of the new method. © 2013 Elsevier Inc. All rights reserved.

1. Introduction

Let A, M and N be complex matrices of orders m × n, m × m, and n × n, respectively, where M and N are invertible. An n × m matrix X is said to be the generalized weighted Moore–Penrose (M–P) inverse of A with respect to M and N if the following four equations

(1) AXA = A,
(2) XAX = X,
(3M) (MAX)* = MAX,
(4N) (NXA)* = NXA,   (1.1)

are satisfied, where * denotes the conjugate transpose. We denote the weighted M–P inverse of A with respect to M, N by A†_MN [13]. The weighted M–P inverse plays an important role in the indefinite linear least-squares problem; see e.g. [2,3,18]. The generalized weighted M–P inverse A†_MN does not exist for every matrix A, but when M, N are positive definite matrices, A†_MN is exactly the weighted M–P inverse defined in [1] and exists uniquely. The weighted M–P inverse has been extensively investigated in the last ten years, and a number of methods for its computation have been derived. A constructive proof for the recursive representation of the weighted M–P inverse of a partitioned matrix A = (U V) is obtained in [17]. A method and corresponding algorithm for computing the weighted M–P inverse of a multiple-variable polynomial matrix were introduced in [11]. New determinantal representations of weighted generalized inverses, including the weighted M–P inverse, were introduced in [8]. Wei et al. [21] considered two variants of the successive matrix squaring algorithm which approximate the Drazin inverse and the weighted M–P inverse of A. Also, Wei [20] derived specific expressions and corresponding computational procedures for the weighted M–P inverse in Hilbert space using the general representation theorem for the weighted M–P inverse.
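The four defining conditions (1.1) are easy to test numerically for a candidate matrix X. The following sketch (Python with NumPy; the function name is ours, not from the paper) checks (1), (2), (3M) and (4N) up to a tolerance:

```python
import numpy as np

def is_weighted_mp_inverse(A, X, M, N, tol=1e-8):
    """Numerically test the four defining equations (1.1)."""
    MAX = M @ A @ X          # should be Hermitian by (3M)
    NXA = N @ X @ A          # should be Hermitian by (4N)
    return (np.allclose(A @ X @ A, A, atol=tol)        # (1)
            and np.allclose(X @ A @ X, X, atol=tol)    # (2)
            and np.allclose(MAX.conj().T, MAX, atol=tol)
            and np.allclose(NXA.conj().T, NXA, atol=tol))
```

With M = I_m and N = I_n these reduce to the ordinary Moore–Penrose conditions, so X = np.linalg.pinv(A) passes the check.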

* Corresponding author.
E-mail addresses: [email protected] (F. Soleymani), [email protected], [email protected] (P.S. Stanimirović), [email protected] (M.Z. Ullah).
1 The author gratefully acknowledges support from the Research Project 174013 of the Serbian Ministry of Science.
© 2013 Elsevier Inc. All rights reserved. http://dx.doi.org/10.1016/j.amc.2013.07.039


Throughout this work, we assume that the matrices M and N are Hermitian positive definite, so that the weighted M–P inverse is uniquely defined. The aim of this paper is to propose a new matrix iteration in order to find A†_MN quickly. In what follows, we first recall some important aspects of this issue. We denote the weighted conjugate transpose matrix of A by A# = N⁻¹A*M. Also, I_m denotes the identity matrix of order m and P_{L,M} denotes the projector on L along M. A fundamental method for finding the weighted M–P inverse is based on the weighted singular value decomposition (WSVD), discussed originally in [16], as follows. Let rank(A) = r; then there exist U ∈ C^{m×m} and V ∈ C^{n×n}, satisfying U*MU = I_m and V*N⁻¹V = I_n, such that

A = U [D 0; 0 0] V*,   (1.2)

where D is a diagonal matrix,

D = diag(σ_1, σ_2, ..., σ_r),   σ_1 ≥ σ_2 ≥ ... ≥ σ_r > 0,   (1.3)

and σ_1², ..., σ_r² are the nonzero eigenvalues of N⁻¹A*MA = A#A. Then, the weighted M–P inverse A†_MN could be defined by

A†_MN = N⁻¹ V [D⁻¹ 0; 0 0] U* M.   (1.4)

Furthermore,

||A||_MN = σ_1,   ||A†_MN||_NM = 1/σ_r.   (1.5)
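For Hermitian positive definite M and N, the weighted M–P inverse also admits a reduction to the ordinary pseudo-inverse, A†_MN = N^{-1/2} (M^{1/2} A N^{-1/2})† M^{1/2}. This identity is not stated in the paper; it can be verified by checking that the right-hand side satisfies the four equations (1.1). A direct (non-iterative) reference computation based on it, useful as a baseline for the iterative methods discussed below (Python/NumPy; function names are ours):

```python
import numpy as np

def spd_sqrt(M):
    """Principal square root of a Hermitian positive definite matrix."""
    w, Q = np.linalg.eigh(M)
    return (Q * np.sqrt(w)) @ Q.conj().T   # Q diag(sqrt(w)) Q*

def wmp_via_svd(A, M, N):
    """A†_MN = N^{-1/2} (M^{1/2} A N^{-1/2})† M^{1/2} (assumed identity)."""
    Mh, Nh = spd_sqrt(M), spd_sqrt(N)
    Nh_inv = np.linalg.inv(Nh)
    B = Mh @ A @ Nh_inv
    return Nh_inv @ np.linalg.pinv(B) @ Mh
```

The result can be validated against the four defining equations (1.1) for randomly generated positive definite weights.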

The restrictions of computing the weighted M–P inverse using the WSVD, or with other algorithms such as the one in [19], encouraged the development of iterative methods for finding A†_MN. For background on Newton-like iterations for matrix inversion, we refer the reader to [5,9]. Newton's method for computing A⁻¹, known as the Schulz iteration, is defined by the quadratically convergent iterative scheme [12]

X_{k+1} = X_k(2I − AX_k),   k = 0, 1, ...   (1.6)


It is known that {X_k} also converges to the pseudo-inverse A† (when A is rectangular or singular) if X_0 = αA*, 0 < α < 2/||A||₂²; see, for example, [14]. The Schulz iteration involves only matrix multiplication. The rate of convergence is quadratic, since for the residual matrix E_k = I − AX_k or E_k = I − X_kA we have E_{k+1} = E_k². In 2006, Huang and Zhang [7] applied the Schulz iterative method (1.6) for finding the weighted M–P inverse using the initial approximation X_0 = αA#, 0 < α < 2/σ_1², where σ_1² denotes the largest eigenvalue of the matrix N⁻¹A*MA = A#A. It is known that the Schulz iteration is numerically stable [14]. Unfortunately, for approximations X_k that are remote from the solution, convergence can be slow [6]. Therefore, the method (1.6) may be too slow at the beginning of the process of finding A†_MN. To remedy this drawback, two approaches have mainly been introduced: first, using an accelerated scaled version and, second, constructing higher-order methods. As a matter of fact, Huang and Zhang in [7] used a technique previously studied by Pan and Schreiber in [10] to accelerate the scheme (1.6) for finding the weighted M–P inverse as follows:
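The quadratic residual relation is immediate: I − AX_{k+1} = I − AX_k(2I − AX_k) = (I − AX_k)². A minimal NumPy sketch of the Schulz iteration (1.6) with the seed X_0 = A*/||A||₂² (the harness and stopping rule are ours):

```python
import numpy as np

def schulz(A, tol=1e-10, maxit=200):
    """Schulz iteration (1.6), seeded with X0 = A*/||A||_2^2."""
    X = A.conj().T / np.linalg.norm(A, 2) ** 2   # alpha = 1/||A||_2^2 < 2/||A||_2^2
    I = np.eye(A.shape[0])
    for _ in range(maxit):
        Xn = X @ (2 * I - A @ X)
        if np.linalg.norm(Xn - X, np.inf) <= tol:
            return Xn
        X = Xn
    return X
```

One Schulz step performed by hand confirms E_1 = E_0² up to roundoff, matching the quadratic convergence claim.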

α_k = 2/(1 + (2 − a_k)a_k),
a_{k+1} = α_k(2 − a_k)a_k,
X_{k+1} = α_k X_k(2I − AX_k),   k = 0, 1, ...,   (1.7)

wherein a_0 = 2σ_r²/(σ_1² + σ_r²). In the present paper, we define an iterative method of sixth order to compute the weighted Moore–Penrose inverse. The remaining sections of this paper are organized as follows. Section 2 is devoted to the derivation of a new method for matrix inversion, and it is theoretically shown that the sequence of iterates produced by this method satisfies certain equations. Bearing this in mind, in Section 3 we theoretically investigate convergence of the generated sequence of iterates to A†_MN. Numerical experiments and implementation details are reported in Section 4. Finally, concluding remarks are given in Section 5.

2. A higher order iterative method

An important challenge when applying iterative methods for finding the weighted M–P inverse is the choice of the initial matrix (seed). Here, the initial matrix plays a crucial role in providing convergence. Accordingly, we apply the following initial matrix


X_0 = βA#,   0 < β < 2/σ_1²,   (2.1)

where σ_1² stands for the largest eigenvalue of the matrix A#A. For regular matrices, it should be pointed out that the iterative step in (1.6) arises from Newton's method applied to the matrix equation

F(X) := X⁻¹ − A = 0.   (2.2)

To construct a new matrix iteration, we must find a nonlinear equation solver which is more efficient than Newton's method when applied to (2.2). Toward this goal, we apply the following nonlinear equation solver

Y_k = X_k − F′(X_k)⁻¹ F(X_k),
Z_k = Y_k − [(Y_k − X_k)⁻¹(F(Y_k) − F(X_k))]⁻¹ F(Y_k),
W_k = Z_k − [(Z_k − Y_k)⁻¹(F(Z_k) − F(Y_k))]⁻¹ F(Z_k),
X_{k+1} = W_k − [(X_k − W_k)⁻¹(F(X_k) − F(W_k))]⁻¹ F(W_k),   k = 0, 1, 2, ...,   (2.3)

on the matrix Eq. (2.2), to obtain the fixed-point iteration

X_{k+1} = X_k(6I − 15AX_k + 20(AX_k)² − 15(AX_k)³ + 6(AX_k)⁴ − (AX_k)⁵) = φ(X_k).   (2.4)
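The degree-six polynomial in (2.4) factors, in a scalar variable t standing for AX_k, as t(2 − t)(3 − 3t + t²)(1 − t + t²); with z = t² − t the last two factors are 3 − 2t + z and 1 + z, which is what allows the five-multiplication form derived next. A quick coefficient check of this factorization (Python/NumPy; a verification sketch, not part of the paper):

```python
from numpy.polynomial import polynomial as P

# factors in ascending powers of t: (2 - t), (3 - 3t + t^2), (1 - t + t^2);
# note 3 - 2t + z = 3 - 3t + t^2 and 1 + z = 1 - t + t^2 for z = t^2 - t
prod = P.polymul(P.polymul([2, -1], [3, -3, 1]), [1, -1, 1])
# multiplying by the leading t shifts these up one degree, giving
# 6t - 15t^2 + 20t^3 - 15t^4 + 6t^5 - t^6, i.e. t * phi-coefficients
print(list(prod))
```

The printed coefficients reproduce 6, −15, 20, −15, 6, −1, matching (2.4).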

After simplifying φ(X_k), the following accelerated method, using only five matrix–matrix multiplications to reach convergence order 6, can be obtained:

ϑ_k = AX_k,
ζ_k = ϑ_k(−I + ϑ_k),
X_{k+1} = X_k(2I − ϑ_k)(3I − 2ϑ_k + ζ_k)(I + ζ_k),   k = 0, 1, 2, ...   (2.5)
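A sketch of the full procedure, seed (2.1) plus iteration (2.5), in Python with NumPy (the function name, the choice β = 1/σ_1², which satisfies 0 < β < 2/σ_1², and the stopping rule are ours, not the paper's):

```python
import numpy as np

def wmp_iterate(A, M, N, tol=1e-10, maxit=100):
    """Iteration (2.5) for the weighted Moore-Penrose inverse of A."""
    # weighted conjugate transpose A# = N^{-1} A* M
    Ash = np.linalg.solve(N, A.conj().T @ M)
    # seed (2.1): X0 = beta * A#, with sigma1^2 the largest eigenvalue of A# A
    beta = 1.0 / max(abs(np.linalg.eigvals(Ash @ A)))
    X = beta * Ash
    I = np.eye(A.shape[0])
    for _ in range(maxit):
        T = A @ X                # theta_k (1st multiplication)
        Z = T @ (T - I)          # zeta_k = theta(-I + theta) (2nd)
        # three more multiplications complete one step of (2.5)
        Xn = X @ ((2 * I - T) @ (3 * I - 2 * T + Z) @ (I + Z))
        if np.linalg.norm(Xn - X, np.inf) <= tol:
            return Xn
        X = Xn
    return X
```

With M = I_m and N = I_n the routine returns the ordinary pseudo-inverse, in line with Remark 2.1 below; for positive definite weights, the computed X satisfies the four equations (1.1) at convergence.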

Remark 2.1. Let us mention that in the particular case M = I_m, N = I_n our method produces the M–P inverse A†. It is important to note that our method (2.5) is not obtained by generalizing a well-known method for computing the M–P inverse; the iterative process (2.5) is new in both the weighted M–P and the M–P case. Furthermore, note that Chen and Wang [4] presented a general family of iterative methods for computing the M–P inverse. We will prove that (2.5) converges to the weighted M–P inverse with sixth order of convergence, while it uses a convenient factorization to reduce the number of matrix multiplications. In fact, the sixth-order method extracted from [4] requires six matrix multiplications and is thus costlier than our proposed variant. In what follows, we pursue the theoretical aspects of (2.5).

Lemma 2.1. For the sequence {X_k}_{k=0}^{∞} generated by (2.5) with the initial matrix (2.1), it holds that

(MAX_k)* = MAX_k,   (NX_kA)* = NX_kA,   X_k A A†_MN = X_k,   A†_MN A X_k = X_k.   (2.6)

Proof. The desired result can be proved using the principle of mathematical induction on k. For k = 0, using (2.1), the first two equations can be verified easily, and we only verify the last two equations, using the facts (AA†_MN)# = AA†_MN and (A†_MN A)# = A†_MN A, as follows:

X_0 A A†_MN = βA# A A†_MN = βA# (AA†_MN)# = β(AA†_MN A)# = βA# = X_0,   (2.7)

A†_MN A X_0 = βA†_MN A A# = β(A†_MN A)# A# = β(A A†_MN A)# = βA# = X_0.   (2.8)

Assume now that the conclusion holds for some k > 0. We show that it continues to hold for k + 1. Using the iterative method (2.5), one has

(MAX_{k+1})* = [M(6ϑ_k − 15ϑ_k² + 20ϑ_k³ − 15ϑ_k⁴ + 6ϑ_k⁵ − ϑ_k⁶)]*
= 6(Mϑ_k)* − 15(Mϑ_k²)* + 20(Mϑ_k³)* − 15(Mϑ_k⁴)* + 6(Mϑ_k⁵)* − (Mϑ_k⁶)*
= 6Mϑ_k − 15Mϑ_k² + 20Mϑ_k³ − 15Mϑ_k⁴ + 6Mϑ_k⁵ − Mϑ_k⁶
= MA(X_k(2I − ϑ_k)(3I − 2ϑ_k + ζ_k)(I + ζ_k)) = MAX_{k+1},   (2.9)

which uses the facts that (Mϑ_k)* = Mϑ_k, that M is Hermitian positive definite (M* = M), and also, e.g.,


(Mϑ_k²)* = (Mϑ_k ϑ_k)* = ϑ_k*(Mϑ_k)* = ϑ_k*(Mϑ_k) = ϑ_k* M* ϑ_k = (Mϑ_k)* ϑ_k = Mϑ_k ϑ_k = Mϑ_k².

Thus, the first equality in (2.6) holds for k + 1, and the second equality can be proved in a similar way. For the third equality in (2.6), using the assumption X_k A A†_MN = X_k, or equivalently AX_k A A†_MN = AX_k = ϑ_k, together with the iterative method (2.5), we can write

X_{k+1} A A†_MN = X_k(6I − 15ϑ_k + 20ϑ_k² − 15ϑ_k³ + 6ϑ_k⁴ − ϑ_k⁵) A A†_MN
= 6X_k A A†_MN − 15X_k ϑ_k A A†_MN + 20X_k ϑ_k² A A†_MN − 15X_k ϑ_k³ A A†_MN + 6X_k ϑ_k⁴ A A†_MN − X_k ϑ_k⁵ A A†_MN
= 6X_k A A†_MN − 15X_k ϑ_k A A†_MN + 20X_k ϑ_k(ϑ_k A A†_MN) − 15X_k ϑ_k²(ϑ_k A A†_MN) + 6X_k ϑ_k³(ϑ_k A A†_MN) − X_k ϑ_k⁴(ϑ_k A A†_MN)
= 6X_k − 15X_k ϑ_k + 20X_k ϑ_k² − 15X_k ϑ_k³ + 6X_k ϑ_k⁴ − X_k ϑ_k⁵
= X_k(2I − ϑ_k)(3I − 2ϑ_k + ζ_k)(I + ζ_k) = X_{k+1}.   (2.10)

Consequently, the third equality in (2.6) holds for k + 1. The fourth equality can be proved similarly, and the desired result follows. □

Remark 2.2. Note that we can present a more elegant proof of the third and fourth statements of Lemma 2.1. More precisely, we have the following known facts:

R(AA†_MN) = R(A),   N(AA†_MN) = M⁻¹N(A*) = N(A#),
R(A†_MN A) = N⁻¹R(A*) = R(A#),   N(A†_MN A) = N(A).   (2.11)

Now, by using X_{k+1} = X_k(6I − 15AX_k + 20(AX_k)² − 15(AX_k)³ + 6(AX_k)⁴ − (AX_k)⁵), we have

R(X_k) ⊆ R(A#).   (2.12)

Similarly, if we rewrite (2.5) in the form

X_{k+1} = (6I − 15X_kA + 20(X_kA)² − 15(X_kA)³ + 6(X_kA)⁴ − (X_kA)⁵) X_k,   (2.13)

we conclude that

N(X_k) ⊇ N(A#).   (2.14)

Now, the statement X_k A A†_MN = X_k follows from

AA†_{M,N} = P_{R(A),N(A#)},   A†_{M,N}A = P_{R(A#),N(A)},

together with the well-known results: P_{L,M} X_k = X_k if and only if R(X_k) ⊆ L, and X_k P_{L,M} = X_k if and only if N(X_k) ⊇ M.

It should be remarked that the proposed iteration can also be used for finding the M–P inverse of rectangular or singular matrices, as well as for matrix inversion of square nonsingular matrices. That is to say, when M = I_m and N = I_n, the method (2.5) converges to the M–P inverse. Notice that even in the case M = N = I the method (2.5) is new, since it attains sixth order of convergence with only five matrix-by-matrix multiplications. In the next section, we further study how the new method (2.5) converges to the weighted M–P inverse.

3. Theoretical analysis

Before stating the main theorem, and so as to ease its perception, it is necessary to present the following lemma for the method (2.5).

Lemma 3.1. Let A be an m × n matrix of rank r. Assume that the matrices U ∈ C^{m×m}, V ∈ C^{n×n}, U*MU = I_m, V*N⁻¹V = I_n, express the WSVD of A defined in (1.2) and (1.3). Under the conditions of Lemma 2.1, for each approximate inverse produced by (2.5), it holds that

(V⁻¹N) X_k (M⁻¹(U*)⁻¹) = [T_k 0; 0 0],   (3.1)

where

T_k = φ(T_{k−1}) for k ≥ 1,   T_0 = βD,   (3.2)

and T_k is diagonal.


Proof. We prove this lemma using the principle of mathematical induction. For the initial case, we have

(V⁻¹N) X_0 (M⁻¹(U*)⁻¹) = β(V⁻¹N) A# (M⁻¹(U*)⁻¹) = β(V⁻¹N) N⁻¹A* (MM⁻¹(U*)⁻¹)
= β(V⁻¹N) N⁻¹ V [D 0; 0 0] U* (MM⁻¹(U*)⁻¹) = [βD 0; 0 0].   (3.3)

Therefore, T_0 = βD, as defined in (3.2). Moreover, if (3.1) is satisfied for k, then for the case k + 1 we have

(V⁻¹N) X_{k+1} (M⁻¹(U*)⁻¹) = (V⁻¹N) φ(X_k) (M⁻¹(U*)⁻¹).   (3.4)

Now, using the inductive hypothesis and considering

U* M A N⁻¹ V = U* M U [D 0; 0 0] V* N⁻¹ V   (3.5)

in the middle term φ(X_k) of (3.4), we obtain

(V⁻¹N) X_{k+1} (M⁻¹(U*)⁻¹) = [T_{k+1} 0; 0 0],   (3.6)

where

T_{k+1} = T_k(6I − 15(DT_k) + 20(DT_k)² − 15(DT_k)³ + 6(DT_k)⁴ − (DT_k)⁵) = φ(T_k).   (3.7)

This establishes that φ(T_k) is diagonal and of the form (3.2). The proof is complete. □

Theorem 3.1. Assume that A is an m × n matrix whose A†_MN based on the WSVD is given by (1.4). Furthermore, assume that the initial approximation X_0 is defined by (2.1). Define the sequence of matrices X_1, X_2, ... using (2.5). Then this sequence of iterates converges to A†_MN with sixth-order convergence.

Proof. In view of (1.4), to establish this result we must show that

lim_{k→∞} (V⁻¹N) X_k (M⁻¹(U*)⁻¹) = [D⁻¹ 0; 0 0].   (3.8)

It follows from Lemma 3.1 that T_k = diag(s_1^{(k)}, s_2^{(k)}, ..., s_r^{(k)}), where

s_i^{(0)} = βσ_i   (3.9)

and

s_i^{(k+1)} = φ(s_i^{(k)}).   (3.10)

Now, the sequence generated by (3.10) is the result of applying (2.5) to the computation of the zero σ_i⁻¹ of the function f(s) = σ_i − s⁻¹, with the initial value s_i^{(0)}. It is seen that this iteration converges to σ_i⁻¹ provided 0 < s_i^{(0)} < 2/σ_i, which yields a condition on β. Thus T_k → D⁻¹, and the relation (3.8) is satisfied. Clearly, {X_k}_{k=0}^{∞} → A†_MN. To be more precise,

X_k A = N⁻¹ V [T_k 0; 0 0] U* M U [D 0; 0 0] V*.   (3.11)

Since U*MU = I_m and V*N⁻¹V = I_n, we have P⁻¹ = N⁻¹V and P = V*. Therefore,

X_k A = P⁻¹ [T_k D 0; 0 0] P = P⁻¹ [R_k 0; 0 0] P,   (3.12)

wherein R_k = T_k D = diag(q_1^{(k)}, q_2^{(k)}, ..., q_r^{(k)}). This yields

X_{k+1} A = N⁻¹ V [φ(T_k) 0; 0 0] U* M U [D 0; 0 0] V*,   (3.13)

X_{k+1} A = P⁻¹ [φ(T_k) D 0; 0 0] P.   (3.14)


Considering

R_{k+1} = φ(T_k) D = D φ(T_k) = diag(q_1^{(k+1)}, q_2^{(k+1)}, ..., q_r^{(k+1)})

for R_k = T_k D, then

R_{k+1} = [T_k(6I − 15(DT_k) + 20(DT_k)² − 15(DT_k)³ + 6(DT_k)⁴ − (DT_k)⁵)] D.

Now, we obtain

I − R_{k+1} = (I − T_k D)⁶ = (I − R_k)⁶,   (3.15)

and thus for all j, 1 ≤ j ≤ r, we have 1 − q_j^{(k+1)} = (1 − q_j^{(k)})⁶, which shows the sixth order of convergence of the presented method (2.5). The proof is complete. □

The new iterative method (2.5) reaches sixth-order convergence using five matrix-by-matrix multiplications. Therefore, if one applies the definition of the efficiency index EI = p^{1/g}, in which p and g stand for the order of convergence and the number of matrix-by-matrix multiplications, respectively, then the proposed method achieves the computational efficiency index 6^{1/5} ≈ 1.430, which is greater than both 2^{1/2} ≈ 1.414 of the method (1.6) and 6^{1/6} ≈ 1.348 of the sixth-order method in [4]. This reveals that our method is economical and could be used in practical problems. This superiority will be further illustrated in the next section.

4. Numerical analysis

In this section, some experiments are presented to demonstrate the capability of the proposed method. The programming package MATHEMATICA 8 [15] with machine precision has been used in the demonstrations. For numerical comparisons, we have used the methods (1.6) and (2.5). Note that a drawback of (1.7) is that its acceleration requires a specialized seed, which uses both the largest and the smallest singular values σ_1 and σ_r, as follows:

X_0 = (2/(σ_1² + σ_r²)) A#.   (4.1)

For (1.6) and (2.5), we have used the following simpler seed:

X_0 = (1/σ_1²) A#.   (4.2)

Using the stopping criterion ||X_{k+1} − X_k||_∞ ≤ 10⁻⁸, with the maximum number of iterations set to 100, we compare the behavior of the different methods for some randomly generated matrices. The computer specifications are Microsoft Windows XP, Intel(R) Pentium(R) 4 CPU 3.20 GHz, with 4 GB of RAM.

Example 4.1. In this test, we compute the weighted M–P inverse of 5 dense m × n = 170 × 180 matrices as follows

m = 170; n = 180; number = 5;
SeedRandom[12345];
Table[A[l] = RandomReal[{1}, {m, n}];, {l, number}];

where the five different Hermitian positive definite matrices M and N (which have also been constructed randomly) are as follows

Table[MM[l] = RandomReal[{2}, {m, m}];, {l, number}];
Table[MM[l] = Transpose[MM[l]].MM[l];, {l, number}];
Table[NN[l] = RandomReal[{3}, {n, n}];, {l, number}];
Table[NN[l] = Transpose[NN[l]].NN[l];, {l, number}];

Table 1
Results of comparisons for Example 4.1.

Matrix    (1.6) IT    (1.6) Time    (2.5) IT    (2.5) Time
#1        65          0.765         26          0.734
#2        59          0.703         24          0.687
#3        73          0.875         29          0.812
#4        67          0.781         27          0.750
#5        67          0.812         27          0.765


The results of the comparisons are reported in Table 1, wherein IT stands for the number of iterations and Time for the elapsed time in seconds. The proposed method (2.5) is superior to its competitor in terms of the number of iterations, and mostly also in terms of the elapsed time.
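The spirit of this comparison can be reproduced with a small unweighted experiment (smaller matrices, a different random generator) contrasting the iteration counts of (1.6) and (2.5) under the common seed (4.2); this harness is our own sketch, not the paper's Mathematica code:

```python
import numpy as np

def iterate_pinv(A, order6, tol=1e-8, maxit=100):
    """Run (1.6) (order6=False) or (2.5) (order6=True) with seed (4.2)."""
    X = A.conj().T / np.linalg.norm(A, 2) ** 2   # X0 = A*/sigma1^2
    I = np.eye(A.shape[0])
    for k in range(1, maxit + 1):
        T = A @ X
        if order6:
            Z = T @ (T - I)
            Xn = X @ ((2*I - T) @ (3*I - 2*T + Z) @ (I + Z))
        else:
            Xn = X @ (2*I - T)
        if np.linalg.norm(Xn - X, np.inf) <= tol:
            return Xn, k
        X = Xn
    return X, maxit

rng = np.random.default_rng(12345)
A = rng.standard_normal((40, 50))
X2, it2 = iterate_pinv(A, order6=False)
X6, it6 = iterate_pinv(A, order6=True)
```

On such random matrices the sixth-order method needs markedly fewer iterations than Schulz, consistent with the pattern in Table 1.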

5. Concluding remarks

In this paper, we have developed a new matrix iterative method for finding the weighted M–P inverse. It has been proved that the suggested scheme (2.5) reaches sixth-order convergence using five matrix–matrix multiplications per iteration, which implies the efficiency index 1.430, better than the 1.414 of the Huang–Zhang approach based on (1.6). A discussion on how to choose the initial matrix so as to reach the convergence phase has been given. Numerical experiments have also been included to illustrate the efficiency of the new method for computing A†_MN. According to the theoretical and numerical results, we conclude that the new method is efficient.

Acknowledgments

The third author was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah. He therefore acknowledges with thanks DSR technical and financial support.

References

[1] A. Ben-Israel, T.N.E. Greville, Generalized Inverses, second ed., Springer-Verlag, New York, 2003.
[2] A. Bojanczyk, N.J. Higham, H. Patel, Solving the indefinite least squares problem by hyperbolic QR factorization, SIAM J. Matrix Anal. Appl. 24 (2003) 914–931.
[3] S. Chandrasekaran, M. Gu, A.H. Sayed, A stable and efficient algorithm for the indefinite linear least-squares problem, SIAM J. Matrix Anal. Appl. 20 (1998) 354–362.
[4] H. Chen, Y. Wang, A family of higher-order convergent iterative methods for computing the Moore–Penrose inverse, Appl. Math. Comput. 218 (2011) 4012–4016.
[5] D.S. Djordjević, P.S. Stanimirović, Iterative methods for computing generalized inverses related with optimization methods, J. Aust. Math. Soc. 78 (2005) 257–272.
[6] N.J. Higham, Accuracy and Stability of Numerical Algorithms, second ed., SIAM, Philadelphia, 2002.
[7] F. Huang, X. Zhang, An improved Newton iteration for the weighted Moore–Penrose inverse, Appl. Math. Comput. 174 (2006) 1460–1486.
[8] X. Liu, Y. Yu, H. Wang, Determinantal representation of the weighted generalized inverse, Appl. Math. Comput. 208 (2009) 556–563.
[9] M.Z. Nashed, X. Chen, Convergence of Newton-like methods for singular operator equations using outer inverses, Numer. Math. 66 (1993) 235–257.
[10] V.Y. Pan, R. Schreiber, An improved Newton iteration for the generalized inverse of a matrix, with applications, SIAM J. Sci. Stat. Comput. 12 (1991) 1109–1131.
[11] M.D. Petković, P.S. Stanimirović, M.B. Tasić, Effective partitioning method for computing weighted Moore–Penrose inverse, Comput. Math. Appl. 55 (2008) 1720–1734.
[12] G. Schulz, Iterative Berechnung der reziproken Matrix, Z. Angew. Math. Mech. 13 (1933) 57–59.
[13] X. Sheng, G. Chen, The generalized weighted Moore–Penrose inverse, J. Appl. Math. Comput. 25 (2007) 407–413.
[14] T. Söderström, G.W. Stewart, On the numerical properties of an iterative method for computing the Moore–Penrose generalized inverse, SIAM J. Numer. Anal. 11 (1974) 61–74.
[15] M. Trott, The Mathematica GuideBook for Numerics, Springer Science+Business Media, New York, 2006.
[16] C.F. Van Loan, Generalizing the singular value decomposition, SIAM J. Numer. Anal. 13 (1976) 76–83.
[17] G. Wang, B. Zheng, Weighted generalized inverses of a partitioned matrix, Appl. Math. Comput. 155 (2004) 221–233.
[18] S. Wang, B. Zheng, Z. Xiong, Z. Li, The condition numbers for weighted Moore–Penrose inverse and weighted linear least squares problem, Appl. Math. Comput. 215 (2009) 197–205.
[19] Y. Wei, The representation and approximation for the weighted Moore–Penrose inverse, Appl. Math. Comput. 121 (2001) 17–28.
[20] Y. Wei, The representation and approximation for the weighted Moore–Penrose inverse in Hilbert space, Appl. Math. Comput. 136 (2003) 475–486.
[21] Y. Wei, H. Wu, J. Wei, Successive matrix squaring algorithm for parallel computing the weighted generalized inverse A†_MN, Appl. Math. Comput. 116 (2000) 289–296.