New methods for computing the Drazin-inverse solution of singular linear systems

Applied Mathematics and Computation 294 (2017) 343–352
http://dx.doi.org/10.1016/j.amc.2016.09.013
© 2016 Elsevier Inc. All rights reserved.

F. Toutounian a,b,∗, R. Buzhabadi a

a Department of Applied Mathematics, School of Mathematical Sciences, Ferdowsi University of Mashhad, Mashhad, Iran
b The Center of Excellence on Modeling and Control Systems, Ferdowsi University of Mashhad, Mashhad, Iran
∗ Corresponding author. E-mail addresses: [email protected] (F. Toutounian), [email protected], [email protected] (R. Buzhabadi).


Keywords: Drazin-inverse solution; DGMRES method; LGMRES method; GMRES-E method; Singular systems; Error vector

Abstract

The DGMRES method is an iterative method for computing the Drazin-inverse solution of consistent or inconsistent linear systems of the form $Ax = b$, where $A \in \mathbb{C}^{n\times n}$ is a singular and in general non-Hermitian matrix that has an arbitrary index. This method is generally used with restarting. But the restarting often slows down the convergence, and DGMRES often stagnates. Based on the LGMRES and GMRES-E methods, we present two new techniques for accelerating the convergence of restarted DGMRES by adding some approximate error vectors or approximate eigenvectors (corresponding to a few of the smallest eigenvalues) to the Krylov subspace. We derive the implementation of these methods and present some numerical examples to show the advantages of these methods.

1. Introduction

In this paper, we consider the problem of finding a solution of the system

$$Ax = b, \tag{1}$$

where $A \in \mathbb{R}^{n\times n}$ is a singular matrix, $b \in \mathbb{R}^n$, and $\mathrm{ind}(A)$ is arbitrary. Here $\mathrm{ind}(A)$, the index of $A$, is the size of the largest Jordan block corresponding to the zero eigenvalue of $A$. We recall that the Drazin-inverse solution of the system (1) is the vector $A^D b$, where $A^D$ is the Drazin inverse of the singular matrix $A$ [3,5]. The Drazin-inverse solution $A^D b$ is the unique solution of the equation $A^{a+1}x = A^a b$ that belongs to $\mathcal{R}(A^a)$ [24]. The Drazin inverse has various applications in the theory of finite Markov chains [5], the study of singular differential and difference equations [5], the investigation of Cesaro–Neumann iterations [12], cryptographic systems [11], iterative methods in numerical analysis [8,9], multibody system dynamics [22], and others. The problem of finding the solution of the form $A^D b$ for (1) is very common in the literature, and many different techniques have been developed in order to solve it [7–10,19–21,23,25]. In [19], Sidi developed the DGMRES method for singular systems, which is analogous to the GMRES method for nonsingular systems. In addition, in [19], the author proposed an effective mode of usage for DGMRES, denoted DGMRES(m), which is analogous to GMRES(m) and requires a fixed amount of storage for its implementation.
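To make the characterization above concrete, the following toy computation (our own illustration, not from the paper) recovers $A^D b$ for a small matrix by solving $A^{a+1}x = A^a b$ over a basis of $\mathcal{R}(A^a)$; the matrix, vector, and variable names are ours.

```matlab
% Toy illustration (our own example): A^D b is the unique solution of
% A^(a+1) x = A^a b that lies in R(A^a).
A = [0 1 0; 0 0 0; 0 0 2];     % ind(A) = 2: a 2x2 nilpotent block plus the eigenvalue 2
b = [1; 1; 1];
a = 2;
B = orth(A^a);                 % orthonormal basis of R(A^a)
c = (A^(a+1) * B) \ (A^a * b); % solve A^(a+1) (B*c) = A^a b within R(A^a)
xD = B * c                     % = A^D b = [0; 0; 0.5]
```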





In restarted DGMRES (DGMRES(m)), the method is restarted once the Krylov subspace reaches dimension m, and the current approximate solution becomes the new initial guess for the next m iterations. The restart parameter m is generally chosen small relative to n to keep storage and computation requirements reasonable. In general, restarting slows the convergence of the GMRES and DGMRES methods. However, it is possible to save some important information at the time of the restart and use it in the next cycle. For accelerating the convergence of the restarted GMRES method, Chapman and Saad [6] and Morgan [14–16] proposed augmenting the Krylov subspaces generated by restarted GMRES with spaces spanned by certain eigenvectors or Ritz vectors. Baglama and Reichel [2] proposed to augment the Krylov subspaces generated by GMRES with linear spaces that are not defined by Ritz vectors. They showed that when the linear system of equations arises from the discretization of a well-posed problem, augmentation can reduce the number of iterations required to determine an approximate solution of desired accuracy. Baker et al. [1] used an error approximation for augmenting the next approximation space and presented the LGMRES algorithm. They discussed some of its properties and illustrated that LGMRES can significantly accelerate the convergence of the restarted GMRES method.

In this paper, we propose two new methods, LDGMRES(m, k) and DGMRES-E(m, k), which are analogous to LGMRES(m, k) and GMRES-E(m, k), respectively [1,14]. For accelerating the convergence of DGMRES(m), LDGMRES(m, k) augments the next Krylov subspace with some approximate error vectors, and DGMRES-E(m, k) includes the approximate eigenvectors determined from the previous subspace in the new subspace. By numerical examples, we show the advantages of these methods.

The paper is organized as follows. In Section 2, we will give a review of the DGMRES algorithm. In Section 3, we will derive the LDGMRES(m, k) algorithm by describing an augmented DGMRES algorithm. In Section 4, we will develop the DGMRES-E(m, k) algorithm. In Section 5, the results of some numerical examples are given. Section 6 is devoted to concluding remarks.

2. DGMRES algorithm

The DGMRES method is a Krylov subspace method for computing the Drazin-inverse solution of consistent or inconsistent linear systems (1) [19,20]. In this method, there is no restriction on the matrix $A$. Thus, in general, $A$ is non-Hermitian, $a = \mathrm{ind}(A)$ is arbitrary, and the spectrum of $A$ can have any shape. Likewise, we put no restriction on the linear system $Ax = b$: the system may be consistent or inconsistent. We only assume that $\mathrm{ind}(A)$ is known. DGMRES starts with an initial vector $x_0$ and generates a sequence of vectors $x_0, x_1, \ldots,$ as

$$x_m = x_0 + \sum_{i=1}^{m-a} c_i A^{a+i-1} r_0, \qquad r_0 = b - Ax_0.$$

Then

$$r_m = b - Ax_m = r_0 - \sum_{i=1}^{m-a} c_i A^{a+i} r_0.$$

The Krylov subspace we will use is

$$\mathcal{K}_{m-a}(A, A^a r_0) = \mathrm{span}\{A^a r_0, A^{a+1} r_0, \ldots, A^{m-1} r_0\}.$$

The vector $x_m$ produced by DGMRES satisfies

$$\|A^a r_m\|_2 = \min_{x \in x_0 + \mathcal{K}_{m-a}(A, A^a r_0)} \|A^a (b - Ax)\|_2. \tag{2}$$
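As a concrete illustration of this minimization, the following MATLAB sketch carries out one restart cycle (cf. Algorithm 1 below). It is a minimal reading of steps 2–8, assuming exact arithmetic and no Arnoldi breakdown; the function name dgmres_cycle is ours.

```matlab
% Minimal sketch of one DGMRES(m) restart cycle (assumes no breakdown).
function [x, resnorm] = dgmres_cycle(A, b, x0, m, a)
  n  = length(b);
  r0 = b - A*x0;
  w  = r0;
  for t = 1:a, w = A*w; end            % w = A^a r0
  beta = norm(w);
  V = zeros(n, m+1);  V(:,1) = w/beta;
  H = zeros(m+1, m);                   % Arnoldi via modified Gram-Schmidt
  for j = 1:m
    u = A*V(:,j);
    for i = 1:j
      H(i,j) = u' * V(:,i);
      u = u - H(i,j)*V(:,i);
    end
    H(j+1,j) = norm(u);
    V(:,j+1) = u / H(j+1,j);
  end
  Hhat = H;                            % Hhat_m = Hbar_m Hbar_{m-1} ... Hbar_{m-a}
  for j = 1:a
    Hhat = Hhat * H(1:m-j+1, 1:m-j);
  end
  [Q, R] = qr(Hhat, 0);                % economy-size QR factorization (step 6)
  e1 = zeros(m+1, 1);  e1(1) = 1;
  d  = R \ (beta * (Q' * e1));         % step 7
  x  = x0 + V(:, 1:m-a) * d;           % step 8
  resnorm = beta * sqrt(max(0, 1 - norm(Q'*e1)^2));  % byproduct ||A^a r_m||
end
```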

The implementation of the DGMRES method is given by Algorithm 1. More details about the DGMRES algorithm can be found in [20]. In our discussion, we refer to the group of m iterations between successive restarts as a cycle. The restart number is denoted with a subscript: $r_i$ is the residual after $i$ cycles, or $m \times i$ iterations. During each restart cycle $i$, DGMRES(m) finds $x_{i+1} \in x_i + \mathcal{K}_{m-a}(A, A^a r_i)$ such that $A^a r_{i+1} \perp A^{a+1}\mathcal{K}_{m-a}(A, A^a r_i)$. In the sequel, some MATLAB notation is used; for instance, $\widehat{H}_{m-a}(m-a+1:m+1, 1:m-a)$ denotes the portion of $\widehat{H}_{m-a}$ with rows from $m-a+1$ to $m+1$ and columns from 1 to $m-a$. In addition, we denote a matrix $X$ with $l$ columns by $X_l$.

3. A new algorithm: LDGMRES

When an iterative approach is restarted, the current approximation space is discarded at each restart. Our technique attempts to accelerate the convergence of DGMRES(m) by retaining some of the information that is typically discarded at the time of restart. Suppose that $x$ is the true solution of problem (1). The error after the $i$th restart cycle of DGMRES(m) is denoted by $e_i$, where

$$e_i \equiv x - x_i. \tag{3}$$

If our approximation space contains the exact correction $e_i$ such that $x = x_i + e_i$, then we have solved the problem. We define

$$z_i \equiv x_i - x_{i-1}, \tag{4}$$


Algorithm 1 DGMRES(m) algorithm.
1. Choose an initial guess $x_0$ and compute $r_0 = b - Ax_0$ and $A^a r_0$.
2. Compute $\beta = \|A^a r_0\|$ and set $v_1 = \beta^{-1}(A^a r_0)$.
3. Orthogonalize the Krylov vectors $A^a r_0, A^{a+1} r_0, \ldots, A^{m+a-1} r_0$ via the Arnoldi–Gram–Schmidt process carried out like the modified Gram–Schmidt process:
   For $j = 1, \ldots, m$ do
     $u = Av_j$
     For $i = 1, \ldots, j$ do
       $h_{i,j} = \langle u, v_i \rangle$
       $u = u - h_{i,j} v_i$
     end
     $h_{j+1,j} = \|u\|$, $v_{j+1} = u/h_{j+1,j}$
   end
   (The vectors $v_1, v_2, \ldots, v_{m+1}$ obtained in this way form an orthonormal set.)
4. For $k = 1 : m$ form the matrices $\widehat{V}_k = [v_1|v_2|\ldots|v_k] \in \mathbb{R}^{n\times k}$ and
$$\bar{H}_k = \begin{bmatrix} h_{11} & h_{12} & \cdots & \cdots & h_{1k} \\ h_{21} & h_{22} & \cdots & \cdots & h_{2k} \\ 0 & h_{32} & \ddots & & \vdots \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ \vdots & & \ddots & \ddots & h_{kk} \\ 0 & \cdots & \cdots & 0 & h_{k+1,k} \end{bmatrix} \in \mathbb{R}^{(k+1)\times k}.$$
5. Form the matrix $\widehat{H}_m = \bar{H}_m \bar{H}_{m-1} \cdots \bar{H}_{m-a}$.
6. Compute the QR factorization of $\widehat{H}_m$: $\widehat{H}_m = Q_m R_m$; $Q_m \in \mathbb{R}^{(m+1)\times(m-a)}$ and $R_m \in \mathbb{R}^{(m-a)\times(m-a)}$. ($R_m$ is upper triangular.)
7. Solve the (upper triangular) system $R_m d_m = \beta(Q_m^* e_1)$, where $e_1 = [1, 0, \ldots, 0]^T$.
8. Compute $x_m = x_0 + \widehat{V}_{m-a} d_m$ (then $\|A^a r_m\| = \beta\sqrt{1 - \|Q_m^* e_1\|_2^2}$). If satisfied then stop.
9. Set $x_0 = x_m$, compute $r_0 = b - Ax_0$, and go to 2.

as the approximation to the error after the $i$th DGMRES(m) restart cycle, with $z_j \equiv 0$ for $j < 1$. From the fact that $x_i \in x_{i-1} + \mathcal{K}_{m-a}(A, A^a r_{i-1})$, we observe that $z_i \in \mathcal{K}_{m-a}(A, A^a r_{i-1})$. So, in some sense, this error approximation $z_i$ represents the space $\mathcal{K}_{m-a}(A, A^a r_{i-1})$ generated in the previous cycle and subsequently discarded. Therefore, as pointed out in [1], adding an approximation to $e_i$ (such as $z_i$) to the next approximation space $\mathcal{K}_{m-a}(A, A^a r_i)$ is a reasonable strategy. As in LGMRES(m, k) [1], the new restarted augmented DGMRES algorithm, denoted by LDGMRES(m, k), appends $k$ previous approximations to the error to the current Krylov approximation space, and at the end of restart cycle $i+1$ it finds an approximate solution of (1) in the following way:

$$x_{i+1} = x_i + q_{i+1}^{m-a-1}(A) A^a r_i + \sum_{j=i-k+1}^{i} \beta_{ij} z_j, \tag{5}$$

where the polynomial $q_{i+1}^{m-a-1}$ and the coefficients $\beta_{ij}$ are chosen such that $\|A^a r_{i+1}\|_2$ is minimized. Note that $k = 0$ corresponds to DGMRES(m). In the following section, we will describe the implementation of an augmented DGMRES algorithm, which appends $k$ given vectors $w_j$, $j = 1, 2, \ldots, k$, to the current Krylov approximation space. Then, we observe that, by defining $w_j = z_{i-j+1}$, for $j = 1, 2, \ldots, k$, the LDGMRES(m, k) algorithm is easily obtained.

3.1. Implementation of the augmented DGMRES method

Let $m$ be the dimension of the Krylov subspace, suppose that $k$ given vectors $w_j$, $j = 1, 2, \ldots, k$, are to be added to the Krylov subspace, and set $s = m + k$. Typically the number of vectors appended, $k$, is much smaller than the restart parameter $m$. Let

$$W_{s-a}^{(0)} = [v_1^{(0)}, v_2^{(0)}, \ldots, v_{m-a}^{(0)}, w_1, \ldots, w_k]$$

be the $n \times (s-a)$ matrix whose first $m-a$ columns are the orthonormalized Arnoldi vectors (the $v_i$ vectors in step 3 of the DGMRES(m) algorithm) and whose last $k$ columns are the given vectors $w_j$, $j = 1, \ldots, k$. So, we can write

$$x_{i+1} = x_i + W_{s-a}^{(0)} d, \qquad \text{for some } d \in \mathbb{R}^{s-a},$$

and we need to determine $d$ such that $\|A^a r_{i+1}\|$ is minimized. Let

$$V_{s-a+1}^{(0)} = [v_1^{(0)}, v_2^{(0)}, \ldots, v_{m-a+1}^{(0)}, \ldots, v_{s-a+1}^{(0)}]$$


be the $n \times (s-a+1)$ matrix whose first $m-a+1$ columns are Arnoldi vectors and whose last $k$ columns are formed by orthogonalizing the vectors $Aw_1, Aw_2, \ldots, Aw_k$ against the previous columns of $V_{s-a+1}^{(0)}$. Then

$$A W_{s-a}^{(0)} = V_{s-a+1}^{(0)} \bar{H}_{s-a}^{(0)}, \tag{6}$$

where $\bar{H}_{s-a}^{(0)}$ is an $(s-a+1) \times (s-a)$ upper-Hessenberg matrix whose entries are computed during the construction of the columns of $V_{s-a+1}^{(0)}$ by the Arnoldi process. Let

$$V_{s-a+l+1}^{(l)} = [v_1^{(l)}, v_2^{(l)}, \ldots, v_{s-a+l+1}^{(l)}]$$

be the $n \times (s-a+l+1)$ matrix whose first $m-a+l$ columns are equal to the first $m-a+l$ columns of $V_{s-a+l}^{(l-1)}$ (i.e., $V_{s-a+l+1}^{(l)}(1:n, 1:m-a+l) = V_{s-a+l}^{(l-1)}(1:n, 1:m-a+l)$) and whose last $k+1$ columns are formed by orthogonalizing the vectors $Av_j^{(l-1)}$, for $j = m-a+l, \ldots, s-a+l$, against the previous columns of $V_{s-a+l+1}^{(l)}$. Then we have

$$A V_{s-a+l}^{(l-1)} = V_{s-a+l+1}^{(l)} \bar{H}_{s-a+l}^{(l)}, \qquad \text{for } l = 1, \ldots, a, \tag{7}$$

where $\bar{H}_{s-a+l}^{(l)}$ is an $(s-a+l+1) \times (s-a+l)$ upper-Hessenberg matrix whose last $k+1$ columns are formed during the construction of the vectors $v_j^{(l)}$, $j = m-a+l+1, \ldots, s-a+l+1$. We observe that only the last $k+1$ columns of $V_{s-a+l+1}^{(l)}$ and $\bar{H}_{s-a+l}^{(l)}$ are new and must be constructed by the Arnoldi process. Starting with the matrices $V_{s-a+1}^{(0)}$ and $\bar{H}_{s-a}^{(0)}$, we can easily compute the orthonormal matrix $V_{s-a+l+1}^{(l)}$ and the upper-Hessenberg matrix $\bar{H}_{s-a+l}^{(l)}$, for $l = 1, \ldots, a$.

From the above discussion and $\beta = \|A^a r_i\|$, $v_1^{(0)} = \beta^{-1} A^a r_i$, we can write

$$\begin{aligned}
A^a r_{i+1} &= A^a (b - Ax_{i+1}) \\
&= A^a r_i - A^{a+1} W_{s-a}^{(0)} d \\
&= \beta v_1^{(0)} - A^a V_{s-a+1}^{(0)} \bar{H}_{s-a}^{(0)} d \\
&= \beta v_1^{(0)} - A^{a-1} V_{s-a+2}^{(1)} \bar{H}_{s-a+1}^{(1)} \bar{H}_{s-a}^{(0)} d \\
&\;\;\vdots \\
&= \beta v_1^{(0)} - V_{s+1}^{(a)} \bar{H}_s^{(a)} \cdots \bar{H}_{s-a+1}^{(1)} \bar{H}_{s-a}^{(0)} d \\
&= V_{s+1}^{(a)} (\beta e_1 - \widehat{H}_{s-a} d),
\end{aligned}$$

where $\widehat{H}_{s-a} = \bar{H}_s^{(a)} \cdots \bar{H}_{s-a+1}^{(1)} \bar{H}_{s-a}^{(0)}$ and $e_1 = [1, 0, 0, \ldots, 0]^T \in \mathbb{R}^{s+1}$. This matrix is analogous to the matrix $\widehat{H}_m$ of the DGMRES(m) algorithm (step 5) and is an $(s+1) \times (s-a)$ matrix. Let $\widehat{H}_{s-a} = Q_{s-a} R_{s-a}$, where $Q_{s-a}$ is an $(s+1) \times (s-a)$ unitary matrix and $R_{s-a}$ is an $(s-a) \times (s-a)$ upper triangular matrix; then we have

$$\|A^a r_{i+1}\|_2 = \|V_{s+1}^{(a)}(\beta e_1 - \widehat{H}_{s-a} d)\|_2 = \|\beta e_1 - Q_{s-a} R_{s-a} d\|_2.$$

The minimal solution is then found by solving the upper triangular system

$$R_{s-a} d_i = \beta Q_{s-a}^* e_1.$$

As in DGMRES [20], $\|A^a r_{i+1}\|$ is a byproduct and we have

$$\|A^a r_{i+1}\|_2 = \beta \sqrt{1 - \|Q_{s-a}^* e_1\|_2^2}.$$

Now, we can summarize one restart cycle $i$ of the augmented DGMRES(m, k) algorithm as shown in Algorithm 2. The LDGMRES(m, k) algorithm is easily derived from Algorithm 2 by changing line 1 as follows:

1. Set $w_j = z_{i-j+1}$, for $j = 1, 2, \ldots, k$.

Note that only $i$ error approximations are available at the beginning of restart cycles with $i \le k$, because $z_j = 0$ when $j < 1$. As in LGMRES, we use additional Arnoldi vectors instead of $z_j$ when $j < 1$. Therefore, the approximation space is of order $m + k$ for each cycle, and the first cycle ($i = 0$) of LDGMRES(m, k) is equivalent to the first cycle of DGMRES($m + k$).

We now look briefly at how the expense and storage of LDGMRES compare to those of the DGMRES method. The main potential advantage of LDGMRES over DGMRES is in the convergence, but when implementing LDGMRES(m, k), only $s + a \cdot k$ matrix–vector products are required per restart cycle ($m - a + k$ matrix–vector products for step 4 and $a(k+1)$ matrix–vector products for step 15 in Algorithm 2), while DGMRES(s) uses $s$ matrix–vector products (step 3 of Algorithm 1) per restart cycle. In LDGMRES(m, k), storage consists of $s + 1$ orthogonal basis vectors ($v_1, v_2, \ldots, v_{s+1}$), $k$ pairs $w_j, Aw_j$, $k+1$ vectors of length $n$ for keeping the last $k+1$ vectors of $V_{s-a+l}^{(l-1)}$ (step 13 in Algorithm 2), and the vectors of the approximate solution and the right-hand side. Therefore, the implementation of LDGMRES(m, k) requires storage for $s + 3k + 4$ vectors of length $n$, while DGMRES(s) requires storage for $s + 3$ vectors of length $n$. So, we can conclude that the extra expense of LDGMRES(m, k) in each cycle is $a \cdot k$ matrix–vector products and storage of $3k + 1$ vectors of length $n$.


Algorithm 2 One restart cycle $i$ of the augmented DGMRES(m, k) algorithm.
1. Let $w_1, w_2, \ldots, w_k$ be the given vectors.
2. Compute $r_i = b - Ax_i$, $\beta = \|A^a r_i\|$, $v_1 = \beta^{-1}(A^a r_i)$, $s = m + k$.
3. For $j = 1 : m + k - a$, do
4.   $u = Av_j$ if $j \le m - a$; otherwise $u = Aw_{j-(m-a)}$
5.   For $l = 1 : j$, do
6.     $h_{l,j} = \langle u, v_l \rangle$
7.     $u = u - h_{l,j} v_l$
8.   end
9.   $h_{j+1,j} = \|u\|$, $v_{j+1} = u/h_{j+1,j}$
10. end
11. Form $V_{s-a+1} = [v_1, v_2, \ldots, v_{m-a}, v_{m-a+1}, \ldots, v_{s-a+1}]$, $W_{s-a}^{(0)} = [v_1, v_2, \ldots, v_{m-a}, w_1, \ldots, w_k]$, $H = \{h_{l,j}\}_{1\le l\le s-a+1,\,1\le j\le s-a}$, and set $\widehat{H} = H$.
12. For $l = 1 : a$, do
13.   $W_{s-a+l}^{(1)} = V_{s-a+l}$
14.   For $j = m - a + l : m + k - a + l$, do
15.     $u = Aw_j^{(1)}$
16.     For $t = 1 : j$
17.       $h_{t,j} = \langle u, v_t \rangle$
18.       $u = u - h_{t,j} v_t$
19.     end
20.     $h_{j+1,j} = \|u\|$, $v_{j+1} = u/h_{j+1,j}$
21.   end
22.   Form $V_{s-a+l+1} = [v_1, v_2, \ldots, v_{m-a}, v_{m-a+1}, \ldots, v_{s-a+l+1}]$ and $H = \{h_{t,j}\}_{1\le t\le s-a+l+1,\,1\le j\le s-a+l}$.
23.   Compute $\widehat{H} = H \times \widehat{H}$.
24. end
25. Compute $d_i$, the minimizer of $\|\beta e_1 - \widehat{H} d\|$, by computing the QR factorization of $\widehat{H}$: $\widehat{H} = Q_{s-a} R_{s-a}$, where $Q_{s-a} \in \mathbb{R}^{(s+1)\times(s-a)}$, $R_{s-a} \in \mathbb{R}^{(s-a)\times(s-a)}$, and solving the triangular system $R_{s-a} d_i = \beta Q_{s-a}^* e_1$.
26. Compute $z_{i+1} = W_{s-a}^{(0)} d_i$.
27. Compute $x_{i+1} = x_i + z_{i+1}$ (then $\|A^a r_{i+1}\| = \beta\sqrt{1 - \|Q_{s-a}^* e_1\|_2^2}$).
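For orientation, a sketch of the LDGMRES(m, k) outer loop built on Algorithm 2 is given below; the routine name augmented_dgmres_cycle and its interface are our own illustrative assumptions, standing in for an implementation of one restart cycle.

```matlab
% Sketch of the LDGMRES(m,k) driver; augmented_dgmres_cycle is a
% hypothetical routine implementing Algorithm 2 (it is expected to pad
% with extra Arnoldi vectors when fewer than k columns of Z are given).
function x = ldgmres(A, b, x0, m, k, a, tol, maxcycles)
  x = x0;
  Z = zeros(length(b), 0);               % error approximations z_i, newest first
  w = b - A*x;  for t = 1:a, w = A*w; end
  nrm0 = norm(w);                        % ||A^a r_0||
  for i = 1:maxcycles
    [xnew, resnorm] = augmented_dgmres_cycle(A, b, x, m, k, a, Z);
    Z = [xnew - x, Z];                   % prepend z_{i+1} = x_{i+1} - x_i
    if size(Z, 2) > k, Z = Z(:, 1:k); end
    x = xnew;
    if resnorm / nrm0 <= tol, break; end % stop on ||A^a r_i|| / ||A^a r_0||
  end
end
```

With $k = 0$ (Z always empty), this driver reduces to restarted DGMRES(m).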

For LDGMRES(m, k), the following theorem can be stated.

Theorem 1. Suppose that we augment the Krylov space with $k$ error approximation vectors $z_j = x_j - x_{j-1}$, $j = i-k+1, \ldots, i$. Then

(i) $z_{i+1} \perp (A^{a+1})^T A^{a+1} \{z_j\}_{(i-k+1) \le j \le i}$;

(ii) $\cos\angle(A^a r_{i+1}, A^a r_{i-j}) = \dfrac{\|A^a r_{i+1}\|_2}{\|A^a r_{i-j}\|_2}$, for $0 \le j \le k$.

Proof. The proofs of (i) and (ii) are similar to those of Theorems 2 and 5 in [1], respectively. □

Part (ii) of Theorem 1 indicates that for LDGMRES, as for LGMRES, the progress of a cycle correlates with the skip angles; fast convergence implies large skip angles.

4. Implementation of DGMRES-E

In this section, based on GMRES-E(m, k) [14], we attempt to accelerate the convergence of DGMRES(m) by computing approximate eigenvectors corresponding to a few of the smallest eigenvalues and adding them to the subspace for DGMRES(m). The new algorithm, denoted by DGMRES-E(m, k), is similar to the augmented DGMRES(m, k) algorithm (Algorithm 2); the only difference is that, instead of augmenting the subspace with $k$ given vectors $w_j$, $j = 1, \ldots, k$, the approximate eigenvectors corresponding to the $k$ smallest eigenvalues of the matrix $A^{a+1}$, which has index one, are added to the subspace. We wish to find approximate eigenvectors from the subspace spanned by the columns of $V_{m-a}^{(a)}$, which was defined in Section 3. $V_{m-a}^{(a)}$ is the $n \times (m-a)$ matrix whose columns are the orthonormalized Arnoldi vectors. By using the relation (7) and the definition of $V_{s-a+l+1}^{(l)}$, for $l = a$, we can write

$$A V_j^{(a)} = V_{j+1}^{(a)} \bar{H}_j^{(a)}, \qquad \text{for } j = m-a, \ldots, m. \tag{8}$$


As in DGMRES, this implies that

$$A^{a+1} V_{m-a}^{(a)} = V_{m+1}^{(a)} \widehat{H}_{m-a}, \tag{9}$$

where $\widehat{H}_{m-a} = \bar{H}_m^{(a)} \bar{H}_{m-1}^{(a)} \cdots \bar{H}_{m-a}^{(a)}$. To find approximate eigenpairs of $A^{a+1}$, using the harmonic Arnoldi method [13,17,18], we solve the small generalized eigenvalue problem

$$(V_{m-a}^{(a)})^T (A^{a+1})^T (A^{a+1}) V_{m-a}^{(a)} g_j = \theta_j (V_{m-a}^{(a)})^T (A^{a+1})^T V_{m-a}^{(a)} g_j.$$

Using (9), this becomes

$$\widehat{H}_{m-a}^T \widehat{H}_{m-a}\, g_j = \theta_j\, \widetilde{H}_{m-a}^T g_j, \tag{10}$$

where $\widetilde{H}_{m-a}$ is the same as $\widehat{H}_{m-a}$ except that the last $a+1$ rows are removed. The $\theta_j$'s are called harmonic Ritz values in [18]. The harmonic Ritz vectors are $\tilde{y}_j = V_{m-a}^{(a)} g_j$, $j = 1, \ldots, m-a$. The $g_j$'s associated with the $k$ smallest $\theta_j$'s are needed, and the corresponding harmonic Ritz vectors $\tilde{y}_j$, $j = 1, \ldots, k$, are added to the next Krylov subspace. The main changes for DGMRES-E(m, k) are as follows:

(i) The following step must be added (as step 28) at the end of each cycle $i$ of the augmented DGMRES(m, k) algorithm (Algorithm 2) described in Section 3:

28. Set $\widehat{H} = H(1:m+1, 1:m)$
    For $j = 1 : a$, do
      $\widehat{H} = \widehat{H} * H(1:m-j+1, 1:m-j)$
    end
    Set $\widetilde{H} = \widehat{H}(1:m-a, 1:m-a)$
    Compute the $k$ smallest eigenpairs $(\theta_j, g_j)$ of the generalized eigenvalue problem $\widehat{H}_{m-a}^T \widehat{H}_{m-a}\, g_j = \theta_j\, \widetilde{H}_{m-a}^T g_j$; orthonormalize the $g_j$'s, first separating them into real and imaginary parts if complex, in order to form an $(m-a) \times k$ matrix. Then compute $\tilde{y}_j = V_{m-a} g_j$, $j = 1, \ldots, k$.

(ii) Line 1 of Algorithm 2 must be changed as follows:

1. Set $w_j = \tilde{y}_j$, for $j = 1, 2, \ldots, k$.

We mention that the implementation is a little different for the first cycle ($i = 0$). Standard DGMRES(s) is used, and for computing the approximate eigenvectors, instead of solving problem (10), we solve the following generalized eigenvalue problem

$$\widehat{H}_{s-a}^T \widehat{H}_{s-a}\, g_j = \theta_j\, \widetilde{H}_{s-a}^T g_j,$$

where $\widehat{H}_{s-a} = \bar{H}_s^{(0)} \bar{H}_{s-1}^{(0)} \cdots \bar{H}_{s-a}^{(0)}$ and $\widetilde{H}_{s-a}$ is the same as $\widehat{H}_{s-a}$ except that the last $a+1$ rows are removed. In this case the harmonic Ritz vectors are $\tilde{y}_j = V_{s-a}^{(0)} g_j$, $j = 1, \ldots, s-a$. Moreover, we have observed experimentally that the numerical results are better if only the real parts of the eigenvectors corresponding to the eigenvalue zero are taken. So, in all experiments we used only the real part of the vector $g_j$ for $j \le a$.

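As an illustration of how the harmonic Ritz pairs of (10) can be computed in practice, the following MATLAB fragment is a minimal sketch. It assumes Hhat holds $\widehat{H}_{m-a}$, V holds $V_{m-a}^{(a)}$, and k and a are set; it also simply takes real parts of all selected $g_j$'s, which is a simplification of the real/imaginary-part treatment described above.

```matlab
% Sketch: harmonic Ritz vectors for DGMRES-E from Hhat ((m+1) x (m-a)).
Htilde = Hhat(1:end-(a+1), :);             % drop the last a+1 rows
[G, Theta] = eig(Hhat' * Hhat, Htilde');   % generalized problem (10)
[~, idx] = sort(abs(diag(Theta)));         % k smallest harmonic Ritz values
G = real(G(:, idx(1:k)));                  % real parts only (a simplification)
[G, ~] = qr(G, 0);                         % orthonormalize the g_j's
Y = V * G;                                 % columns are the vectors y~_j to append
```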
5. Numerical results

In this section we report some numerical results obtained by executing Algorithms 1 and 2 for computing the Drazin-inverse solution of the system (1). The test examples are given to illustrate that the methods work and to demonstrate some good numerical properties. The codes are written in the programming package MATLAB and tested on a workstation with an Intel Core Duo 2.40 GHz processor. The initial vector is the zero vector for all problems. We stop the iteration when $\|A^a r_i\|_2 / \|A^a r_0\|_2 \le \varepsilon$. In the sequel, we refer to $\|A^a r_i\|_2$, $\|A^a r_i\|_2 / \|A^a r_0\|_2$, and $\|A^{a+1}\tilde{y}_j - \theta_j \tilde{y}_j\|_2$ as the residual norm, the relative residual norm, and the residual norm corresponding to the approximate eigenvalue $\theta_j$, respectively. For Examples 1 and 2, we compare the performance of DGMRES($m+k$), LDGMRES(m, k), and DGMRES-E(m, k) with equal-sized approximation spaces, for $k = 1, \ldots, 5$. In Tables 1 and 2, we give the number of cycles (Cycle) and the matrix–vector products (Mvp) required for convergence. For Examples 3 and 4, we compare the performance of these methods by using the same size subspace and the same storage. For these examples, the log of the relative residual norm, $\log_{10}(\|A^a r_i\|_2/\|A^a r_0\|_2)$, is plotted against the number of matrix–vector products. Finally, we list the sequential angles (the angle between $A^a r_{i+1}$ and $A^a r_i$) and the skip angles (the angle between $A^a r_{i+1}$ and $A^a r_{i-1}$) for Examples 1–4, showing that LDGMRES nearly always has a larger median sequential angle and median skip angle than DGMRES.

Example 1. In this example we consider a real matrix $A \in \mathbb{R}^{n\times n}$ with $\mathrm{ind}(A) = 5$ and $n = 100$. The matrix is bidiagonal with entries $0, 0, 0, 0, 0, 0.01, 0.02, \ldots, 0.95$ on the main diagonal and 0.01's on the superdiagonal. The right-hand side has all entries 1.0. For $\varepsilon = 10^{-12}$, the numerical results are presented in Table 1.
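The test problem can be set up as in the following sketch (our own construction of the matrix described above):

```matlab
% Example 1 test problem: upper-bidiagonal A with ind(A) = 5, n = 100.
n = 100;
d = [zeros(5,1); (0.01:0.01:0.95)'];              % main diagonal
A = spdiags([d, 0.01*ones(n,1)], [0, 1], n, n);   % 0.01 on the superdiagonal
b = ones(n, 1);
```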


Table 1
Cycles and matrix–vector products (Mvp) required for convergence $\|A^a r_i\|_2/\|A^a r_0\|_2 \le 10^{-12}$.

m    k    DGMRES(m)          LDGMRES(m,k)       DGMRES-E(m,k)
          Cycle     Mvp      Cycle     Mvp      Cycle     Mvp
35   0    335       11,730   –         –        –         –
34   1    –         –        37        1480     16        640
33   2    –         –        89        3994     27        1210
32   3    –         –        18        872      22        1090
31   4    –         –        20        1049     18        975
30   5    –         –        23        1300     23        1360

Table 2
Cycles and matrix–vector products (Mvp) required for convergence $\|A^a r_i\|_2/\|A^a r_0\|_2 \le 10^{-13}$.

m    k    DGMRES(m)          LDGMRES(m,k)       DGMRES-E(m,k)
          Cycle     Mvp      Cycle     Mvp      Cycle     Mvp
20   0    113       2261     –         –        –         –
19   1    –         –        39        819      39        819
18   2    –         –        41        899      41        901
17   3    –         –        45        1027     46        1056
16   4    –         –        50        1185     50        1197
15   5    –         –        58        1426     57        1421

These results indicate that LDGMRES(m, k) and DGMRES-E(m, k) are effective for this problem and that they are much better than standard DGMRES($m+k$). As we observe, this example gets considerably better results with DGMRES-E(34, 1). After 4 cycles, the first harmonic Ritz value is 8.1e−10 and the corresponding residual norm is equal to 7.4e−10.

Example 2. We consider the matrix $A$ obtained by first discretizing the Poisson equation

$$\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\right) u(x, y) = f(x, y) \qquad \text{for } (x, y) \in \Omega = [0, 1] \times [0, 1]$$

with Neumann boundary conditions

$$\frac{\partial u(x, y)}{\partial n} = \varphi(x, y) \qquad \text{on } \partial\Omega,$$

on a uniform grid of mesh size $h = 1/M$ via central differences, and then by taking the unknowns in the red–black ordering. This problem was also considered in [10,20,25]. $A$ is singular with $\mathrm{ind}(A) = 1$. As in [20,25], we construct a consistent system with known solution $\hat{s} \in \mathcal{R}(A)$ via $\hat{s} = Ay$, where $y = [0, \ldots, 0, 1]^T$. By taking $M = 63$, the number of unknowns is 4096 and the exact solution is the vector $\hat{s}$, whose components are zeros except

$$\hat{s}_{2016} = -1, \qquad \hat{s}_{2047} = -1, \qquad \hat{s}_{2048} = -2, \qquad \hat{s}_{4096} = 4.$$
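One plausible way to assemble such a test problem is sketched below. The exact discretization and boundary treatment used in the paper follow [10,20,25] and may differ in details — in particular, the nonzero components of $\hat{s}$ depend on the precise ordering — so this construction is an assumption for illustration only:

```matlab
% Sketch of an Example 2-style problem: 5-point Neumann Laplacian on a
% 64 x 64 grid (h = 1/63), reordered red-black; details are assumptions.
M = 64;                                      % grid points per direction, M^2 = 4096
e = ones(M, 1);
T = spdiags([-e, 2*e, -e], -1:1, M, M);
T(1,1) = 1;  T(M,M) = 1;                     % first-order Neumann modification
L = kron(speye(M), T) + kron(T, speye(M));   % singular, symmetric, ind(L) = 1
[I, J] = ndgrid(1:M, 1:M);
red = mod(I(:) + J(:), 2) == 0;
p = [find(red); find(~red)];                 % red points first, then black
A = L(p, p);                                 % red-black ordering
y = [zeros(M*M - 1, 1); 1];
shat = A * y;                                % known solution shat in R(A)
b = A * shat;                                % consistent right-hand side
```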

Using DGMRES($m+k$), LDGMRES(m, k), and DGMRES-E(m, k) in double-precision arithmetic with $\varepsilon = 10^{-13}$, we obtained the results presented in Table 2. As in Example 1, LDGMRES(m, k) and DGMRES-E(m, k) give significantly improved convergence over the standard DGMRES($m+k$) method with the corresponding size subspace. We also observe that the performance of the LDGMRES(m, k) and DGMRES-E(m, k) methods, in terms of cycles and matrix–vector products, is similar. Using just one error approximation vector or one approximate eigenvector (LDGMRES and DGMRES-E with $m = 19$, $k = 1$) gives the lowest number of cycles and matrix–vector products. After 5 cycles, the first harmonic Ritz value is 0.0075 and the corresponding residual norm is equal to 0.0054.

Example 3. This example is taken from [4]. The matrix $A$ is

$$A = \begin{bmatrix} 0 & 1 & & & \\ -1 & 0 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & -1 & 0 & 1 \\ & & & -1 & 0 \end{bmatrix}.$$

We assume that $n$ is odd, in which case $A$ is singular with $\mathrm{ind}(A) = 1$. We took $n = 999$ and

$$b = (1/\sqrt{2},\, 0, \ldots, 0,\, 1/\sqrt{2})^T,$$

for which $Ax = b$ is inconsistent. The results obtained for this example are presented in Table 3. LDGMRES(m, k) and DGMRES-E(m, k) using $m = 28$ and $k = 2$ are compared to DGMRES(30); thus, the same size subspaces are used.
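This test problem can be generated as in the following sketch (our own construction, matching the matrix display above):

```matlab
% Example 3 test problem: skew-symmetric tridiagonal A (singular for odd n).
n = 999;
e = ones(n, 1);
A = spdiags([-e, e], [-1, 1], n, n);   % -1 on the sub-, +1 on the superdiagonal
b = zeros(n, 1);
b([1, n]) = 1/sqrt(2);                 % b = (1/sqrt(2), 0, ..., 0, 1/sqrt(2))^T
```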


Table 3
Residual norms $\|A^a r_i\|_2$ for Example 3.

Method           m    k    s    Initial   After 5 cycles   After 10 cycles   After 20 cycles
DGMRES(m)        30   0    30   1.00      0.0037           0.0019            0.0011
LDGMRES(m,k)     28   2    30   1.00      0.0029           0.0010            1.28e−5
LDGMRES(m,k)     21   2    23   1.00      0.0046           0.0017            6.06e−4
DGMRES-E(m,k)    28   2    30   1.00      0.0029           0.0010            2.48e−5
DGMRES-E(m,k)    21   2    23   1.00      0.0047           0.0017            6.06e−4

Fig. 1. Comparison of convergence for Example 3: $\log_{10}(\|A^a r_k\|/\|A^a r_0\|)$ versus matrix–vector products for LDGMRES(28,2), DGMRES-E(28,2), and DGMRES(30).

Table 4
Residual norms $\|A^a r_i\|_2$ for Example 4.

Method           m    k    s    Initial   After 5 cycles   After 10 cycles   After 20 cycles   After 75 cycles
DGMRES(m)        15   0    15   4.30e+4   1.2829           0.3855            0.0901            1.62e−4
LDGMRES(m,k)     14   1    15   4.30e+4   0.9771           0.3136            0.0285            3.22e−7
LDGMRES(m,k)     11   1    12   4.30e+4   3.7556           0.8065            0.2571            0.0219
DGMRES-E(m,k)    14   1    15   4.30e+4   0.8997           0.3057            0.0326            5.11e−7
DGMRES-E(m,k)    11   1    12   4.30e+4   3.5946           0.7162            0.2345            0.0135

After 10 cycles, LDGMRES(28, 2) and DGMRES-E(28, 2) have the residual norm 0.0010, compared to 0.0019 for standard DGMRES(30). After cycle 20, these new methods converge more than twice as fast. See Fig. 1 for a graph of the convergence. This figure shows that LDGMRES(28, 2) and DGMRES-E(28, 2) can reach the relative residual norm of $10^{-12}$ after about 2000 matrix–vector products, whereas DGMRES(30) reaches this relative residual norm after 31,261 matrix–vector products. Next, methods requiring about the same storage are compared. LDGMRES(m, k) and DGMRES-E(m, k) with $m = 21$ and $k = 2$ (which require about the same storage as DGMRES(30)) reach the residual norm 6.06e−4 after 20 cycles, versus 0.0011 (see Table 3).

Example 4. In this example we consider a real matrix $A \in \mathbb{R}^{n\times n}$ with $\mathrm{ind}(A) = 2$ and $n = 100$. The matrix is bidiagonal with entries $0, 0, 1, 2, \ldots, 97, 0$ on the main diagonal and 1's on the superdiagonal. The right-hand side has all entries 1.0. The results obtained for this example are presented in Table 4. We observe that, when the same size subspaces are used ($m = 14$, $k = 1$), for small cycle counts (cycle = 5, 10, 20) the results obtained for the three methods are similar, but for a large cycle count (cycle = 75) the residual norm $\|A^a r_i\|_2$ attains 3.22e−7 and 5.11e−7, versus 1.62e−4. When the same storage requirements are used, DGMRES(15) is better than the new methods. See Fig. 2 for a graph of the convergence. This figure shows that LDGMRES(14, 1) and DGMRES-E(14, 1) are better. Comparing the number of matrix–vector products required to reduce the residual


Fig. 2. Comparison of convergence for Example 4: $\log_{10}(\|A^a r_k\|/\|A^a r_0\|)$ versus matrix–vector products for LDGMRES(14,1), DGMRES-E(14,1), and DGMRES(15).

Table 5
Results for DGMRES and LDGMRES: problem size, the number of cycles required for convergence $\|A^a r_i\|_2/\|A^a r_0\|_2 \le 10^{-12}$, the median sequential angle $\angle(A^a r_i, A^a r_{i-1})$, and the median skip angle $\angle(A^a r_{i+1}, A^a r_{i-1})$, for Examples 1–4.

Method         Problem      m    k    Size(n)   Cycles   Median seq angle   Median skip angle
DGMRES(m)      Example 1    35   0    100       335      2.77               1.6
DGMRES(m)      Example 2    20   0    4096      35       31.4               15.9
DGMRES(m)      Example 3    30   0    999       1042     10.8               0.01
DGMRES(m)      Example 4    15   0    100       145      25.2               4.9
LDGMRES(m,k)   Example 1    32   3    100       18       38.8               54.4
LDGMRES(m,k)   Example 2    18   2    4096      26       42.3               56.5
LDGMRES(m,k)   Example 3    28   2    999       59       38.8               54.6
LDGMRES(m,k)   Example 4    14   1    100       75       24.8               37.3

norm by a factor of $10^{-14}$ (from 4.30e+4 to below 3.96e−9), LDGMRES(14, 1) and DGMRES-E(14, 1) use 1819 and 1802 matrix–vector products, respectively, while DGMRES(15) needs 2777. Finally, from Theorem 1, we observe that LDGMRES is effective if it has large sequential angles and large skip angles. For the above examples, we present in Table 5 the number of cycles required for convergence ($\|A^a r_i\|_2/\|A^a r_0\|_2 \le 10^{-12}$), as well as the median sequential and median skip angle values. Table 5 shows that LDGMRES nearly always has a larger median sequential angle and median skip angle than DGMRES. These results indicate that the LDGMRES(m, k) algorithm can significantly improve the convergence of the DGMRES(m) algorithm. We mention that, in our experiments, we observed that LDGMRES, like LGMRES, is not as helpful when the DGMRES(m) skip angles are not small; in that case DGMRES(m) converges in a small number of cycles.
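For reference, the sequential and skip angles reported in Table 5 can be obtained from stored vectors $A^a r_i$ as in the following sketch (our own bookkeeping; the variable names Aar_i, Aar_im1, and Aar_ip1 are assumptions):

```matlab
% Angle (in degrees) between two vectors; by Theorem 1(ii) the cosine of
% the skip angle equals a ratio of residual norms, so it lies in [0, 1].
angle_deg = @(u, v) acosd(min(1, abs(u'*v) / (norm(u)*norm(v))));
seq_angle  = angle_deg(Aar_i,   Aar_im1);   % between A^a r_i and A^a r_{i-1}
skip_angle = angle_deg(Aar_ip1, Aar_im1);   % between A^a r_{i+1} and A^a r_{i-1}
```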

6. Conclusion

In this paper, we have described two methods that accelerate the convergence of DGMRES(m). For accelerating the convergence, LDGMRES(m, k) augments the next Krylov subspace with some error approximation vectors, and DGMRES-E(m, k) includes the approximate eigenvectors determined from the previous subspace in the new subspace. Harmonic Ritz vectors are used for approximating the desired eigenvectors. The new algorithms are straightforward and easy to implement. Examples show that these new methods can be much better than standard DGMRES(m), especially when the matrix–vector product is expensive. The methods are not really needed for easy problems where few restarts are used.


Acknowledgments

The authors are grateful to the anonymous referees for their comments which substantially improved the quality of this paper.

References

[1] A.H. Baker, E.R. Jessup, T. Manteuffel, A technique for accelerating the convergence of restarted GMRES, SIAM J. Matrix Anal. Appl. 26 (2005) 962–984.
[2] J. Baglama, L. Reichel, Augmented GMRES-type methods, Numer. Linear Algebra Appl. 14 (2007) 337–350.
[3] A. Ben-Israel, T.N.E. Greville, Generalized Inverses: Theory and Applications, Wiley, New York, 1974; second ed., Springer-Verlag, New York, 2003.
[4] P.N. Brown, H.F. Walker, GMRES on (nearly) singular systems, SIAM J. Matrix Anal. Appl. 18 (1997) 37–51.
[5] S.L. Campbell, C.D. Meyer Jr., Generalized Inverses of Linear Transformations, Pitman, London, 1979.
[6] A. Chapman, Y. Saad, Deflated and augmented Krylov subspace techniques, Numer. Linear Algebra Appl. 4 (1997) 43–66.
[7] J.J. Climent, M. Neumann, A. Sidi, A semi-iterative method for singular linear systems with an arbitrary index, J. Comput. Appl. Math. 87 (1997) 21–38.
[8] M. Eiermann, I. Marek, W. Niethammer, On the solution of singular linear systems of algebraic equations by semiiterative methods, Numer. Math. 53 (1988) 265–283.
[9] R.W. Freund, M. Hochbruck, On the use of two QMR algorithms for solving singular systems and applications in Markov chain modeling, Numer. Linear Algebra Appl. 1 (1994) 403–420.
[10] M. Hanke, M. Hochbruck, A Chebyshev-like semiiteration for inconsistent linear systems, Electron. Trans. Numer. Anal. 1 (1993) 89–103.
[11] R.E. Hartwig, J. Levine, Applications of the Drazin inverse to the Hill cryptographic system, part III, Cryptologia 5 (1981) 67–77.
[12] R.E. Hartwig, F. Hall, Applications of the Drazin inverse to Cesaro–Neumann iterations, in: S.L. Campbell (Ed.), Recent Applications of Generalized Inverses, vol. 66, 1982, pp. 145–195.
[13] R.B. Morgan, Computing interior eigenvalues of large matrices, Linear Algebra Appl. 154–156 (1991) 289–309.
[14] R.B. Morgan, A restarted GMRES method augmented with eigenvectors, SIAM J. Matrix Anal. Appl. 16 (1995) 1154–1171.
[15] R.B. Morgan, Implicitly restarted GMRES and Arnoldi methods for nonsymmetric systems of equations, SIAM J. Matrix Anal. Appl. 21 (2000) 1112–1135.
[16] R.B. Morgan, GMRES with deflated restarting, SIAM J. Sci. Comput. 24 (2002) 20–37.
[17] R.B. Morgan, M. Zeng, Harmonic projection methods for large non-symmetric eigenvalue problems, Numer. Linear Algebra Appl. 5 (1998) 33–55.
[18] C.C. Paige, B.N. Parlett, H.A. van der Vorst, Approximate solutions and eigenvalue bounds from Krylov subspaces, Numer. Linear Algebra Appl. 2 (1995) 115–133.
[19] A. Sidi, A unified approach to Krylov subspace methods for the Drazin-inverse solution of singular non-symmetric linear systems, Linear Algebra Appl. 298 (1999) 99–113.
[20] A. Sidi, DGMRES: A GMRES-type algorithm for Drazin-inverse solution of singular nonsymmetric linear systems, Linear Algebra Appl. 335 (2001) 189–204.
[21] A. Sidi, V. Kluzner, A bi-CG type iterative method for Drazin inverse solution of singular inconsistent non-symmetric linear systems of arbitrary index, Electron. J. Linear Algebra 6 (2000) 72–94.
[22] B. Simeon, C. Fuhrer, P. Rentrop, The Drazin inverse in multibody system dynamics, Numer. Math. 64 (1993) 521–539.
[23] F. Soleymani, P.S. Stanimirović, A higher order iterative method for computing the Drazin inverse, Sci. World J. 2013 (2013) 708647.
[24] Y. Wei, H. Wu, Additional results on index splitting for Drazin inverse of singular linear system, Electron. J. Linear Algebra 8 (2001) 83–93.
[25] J. Zhou, Y. Wei, DFOM algorithm and error analysis for projection methods for solving singular linear system, Appl. Math. Comput. 157 (2004) 313–329.