Two block triangular preconditioners for asymmetric saddle point problems


Applied Mathematics and Computation 269 (2015) 456–463


Cui-Xia Li, Shi-Liang Wu
School of Mathematics and Statistics, Anyang Normal University, Anyang 455002, PR China

Keywords: Block triangular preconditioner; Saddle point problems; Nullity; Augmentation; Krylov subspace method

Abstract
In this paper, two block triangular preconditioners for asymmetric saddle point problems with singular (1,1) block are presented. The spectral characteristics of the preconditioned matrices are discussed in detail. Theoretical analysis shows that all the eigenvalues of the preconditioned matrices are strongly clustered. Numerical experiments are reported to illustrate the efficiency of the proposed preconditioners. © 2015 Elsevier Inc. All rights reserved.

1. Introduction

Consider the following saddle point problem:

$$
\mathcal{A}\begin{pmatrix} x \\ y \end{pmatrix}
= \begin{pmatrix} A & B^T \\ C & 0 \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
= \begin{pmatrix} b \\ q \end{pmatrix} = f, \qquad (1.1)
$$

where $A \in \mathbb{R}^{n\times n}$, $B, C \in \mathbb{R}^{m\times n}$, $m \le n$. Without loss of generality, we assume that the coefficient matrix $\mathcal{A}$ is nonsingular while its (1,1) block A is singular. The form (1.1) is very important and appears in many different applications of scientific computing, such as the (linearized) Navier–Stokes equations [1], the time-harmonic Maxwell equations [2,3], and the linear programming (LP) and quadratic programming (QP) problems [4,5]. So far, a large amount of work has been devoted to solving large saddle point linear systems of the form (1.1). In particular, preconditioned Krylov subspace iterative methods are attractive and popular for such systems. A good preconditioner decreases the number of iterations required for convergence without significantly increasing the amount of computation required at each iteration. A general discussion can be found in [6]. When A is nonsingular, there exist two standard block diagonal preconditioners [7–9]:



$$
\mathcal{D}_+ = \begin{pmatrix} A & 0 \\ 0 & CA^{-1}B^T \end{pmatrix}
\quad\text{and}\quad
\mathcal{D}_- = \begin{pmatrix} A & 0 \\ 0 & -CA^{-1}B^T \end{pmatrix}, \qquad (1.2)
$$

and two standard block triangular preconditioners [7,10]:



$$
\mathcal{T}_+ = \begin{pmatrix} A & 0 \\ C & CA^{-1}B^T \end{pmatrix}
\quad\text{and}\quad
\mathcal{T}_- = \begin{pmatrix} A & 0 \\ C & -CA^{-1}B^T \end{pmatrix}. \qquad (1.3)
$$

This research was supported by NSFC (No. 11301009) and the Natural Science Foundation of Henan Province (No. 15A110007). Corresponding author. E-mail addresses: [email protected] (C.-X. Li), [email protected], [email protected] (S.-L. Wu).

http://dx.doi.org/10.1016/j.amc.2015.07.093 0096-3003/© 2015 Elsevier Inc. All rights reserved.
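The clustering induced by such exact-Schur-complement preconditioners can be checked numerically. The sketch below is an illustration only (a small random dense test problem, not from the paper, with the exact Schur complement $CA^{-1}B^T$ formed explicitly); it verifies the classical result quoted below that $\mathcal{D}_+^{-1}\mathcal{A}$ has only the three eigenvalues $1$ and $(1 \pm \sqrt{5})/2$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3
A = rng.standard_normal((n, n)) + n * np.eye(n)   # shifted so A is safely nonsingular
B = rng.standard_normal((m, n))
C = rng.standard_normal((m, n))

S = C @ np.linalg.solve(A, B.T)                   # exact Schur complement C A^{-1} B^T
Acal = np.block([[A, B.T], [C, np.zeros((m, m))]])
Dplus = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), S]])

eigs = np.linalg.eigvals(np.linalg.solve(Dplus, Acal))
targets = np.array([1.0, (1 + np.sqrt(5)) / 2, (1 - np.sqrt(5)) / 2])

# distance from each eigenvalue of D_+^{-1} A to the nearest of the three target values
dist = np.abs(eigs[:, None] - targets[None, :]).min(axis=1)
print(dist.max() < 1e-8)
```

The same three-value clustering holds here even though C ≠ B, because the preconditioned matrix satisfies the minimal-polynomial identity $(T - I)(T^2 - T - I) = 0$.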


These two block diagonal preconditioners lead to preconditioned matrices $\mathcal{D}_+^{-1}\mathcal{A}$ and $\mathcal{D}_-^{-1}\mathcal{A}$ that are diagonalizable and have only three distinct eigenvalues, $1, (1\pm\sqrt{5})/2$ and $1, (1\pm i\sqrt{3})/2$, respectively. The two block triangular preconditioners lead to two diagonalizable preconditioned matrices $\mathcal{T}_+^{-1}\mathcal{A}$ and $\mathcal{T}_-^{-1}\mathcal{A}$; the former has only two distinct eigenvalues, $\pm 1$, and the latter has only one eigenvalue, $1$.

When A is singular, it obviously cannot be inverted and the corresponding Schur complement does not exist. In this case, to make use of preconditioned Krylov subspace methods for the saddle point linear systems (1.1), one strategy is augmentation, i.e., replacing A with $A + B^TW^{-1}C$, where $W \in \mathbb{R}^{m\times m}$ is a symmetric positive definite matrix. Recently, for nonsymmetric saddle point problems whose (1,1) block has high nullity, Cao [8] introduced the augmentation block diagonal preconditioner



$$
\mathcal{D}_{aug} = \begin{pmatrix} A + B^TW^{-1}C & 0 \\ 0 & W \end{pmatrix}, \qquad (1.4)
$$

where $W \in \mathbb{R}^{m\times m}$ is nonsingular and such that $A + B^TW^{-1}C$ is invertible. It was shown that if the nullity of A is m then, as in the symmetric case, the preconditioned matrix $\mathcal{D}_{aug}^{-1}\mathcal{A}$ has two distinct eigenvalues, $\pm 1$, so a minimum residual method such as GMRES [11] is ensured to converge in at most two iterations. In the same paper, Cao presented two augmentation block triangular preconditioners:



$$
\mathcal{T}_{aug+} = \begin{pmatrix} A + B^TW^{-1}C & B^T \\ 0 & W \end{pmatrix}
\quad\text{and}\quad
\mathcal{T}_{aug-} = \begin{pmatrix} A + B^TW^{-1}C & B^T \\ 0 & -W \end{pmatrix}, \qquad (1.5)
$$

and showed that if the nullity of A is m, then the preconditioned matrices $\mathcal{T}_{aug+}^{-1}\mathcal{A}$ and $\mathcal{T}_{aug-}^{-1}\mathcal{A}$ have only three distinct eigenvalues, $1, (1\pm\sqrt{5})/2$ and $1, (1\pm i\sqrt{3})/2$, respectively. Thus, preconditioned GMRES with either augmentation block triangular preconditioner converges within three iterations.

In the last decades, many well-known preconditioners for saddle point problems have been presented in the literature. Popular preconditioners include not only the block diagonal preconditioner [7–9,21] and the block triangular preconditioner [16,17,21,23], but also the Hermitian and skew-Hermitian splitting (HSS) preconditioner [18], the complex-symmetric and skew-Hermitian splitting preconditioner [22], the alternating preconditioner [20], the primal-based penalty preconditioner [19], and so on. Both the HSS preconditioner and the alternating preconditioner require the user to choose the value of a parameter, and determining the optimal parameter is a nontrivial task [18,20]. The primal-based penalty preconditioner applies to saddle point problems in which A is symmetric positive definite and B = C; in that case a primal-based penalty subsystem needs to be solved, and in some cases its solution is as difficult as that of the original saddle point problem [19]. In practice, the block diagonal and block triangular preconditioners are often the least expensive.

Generally speaking, for many linear systems arising in practice, a well-clustered spectrum usually results in rapid convergence of preconditioned Krylov subspace methods such as CG, MINRES and GMRES [12]. Preconditioning techniques attempt to improve the spectral properties, and thereby the convergence rate, of Krylov subspace methods [13]. This paper establishes two block triangular preconditioners for the saddle point linear systems (1.1). The spectral characteristics of the preconditioned matrices are discussed in detail. Theoretical analysis shows that the eigenvalues of the preconditioned matrices are more strongly clustered than in the results of Cao [8]. Numerical experiments are reported to illustrate the efficiency of the proposed preconditioners.

The remainder of this paper is organized as follows. In Section 2, we establish the two block triangular preconditioners and analyze the spectra of the corresponding preconditioned matrices for the saddle point linear systems (1.1). Numerical experiments are given in Section 3. Finally, conclusions are drawn in Section 4.

2. Preconditioners and spectrum analysis

For our analysis, the following two lemmas are required.

Lemma 2.1 (Proposition 2.1 of [8]). If the nonsymmetric coefficient matrix

$$
\mathcal{A} = \begin{pmatrix} A & B^T \\ C & 0 \end{pmatrix}
$$

is nonsingular, then

$$
\operatorname{rank}(B) = \operatorname{rank}(C) = m, \quad
\mathcal{N}(A) \cap \mathcal{N}(C) = \{0\} \quad\text{and}\quad
\mathcal{N}(A^T) \cap \mathcal{N}(B) = \{0\},
$$

where $\mathcal{N}(\cdot)$ denotes the null space of a matrix.

Lemma 2.2 (Proposition 2.3 of [8]). If the nonsymmetric coefficient matrix $\mathcal{A}$ above is nonsingular, then the rank of the matrix A is at least n − m, and hence its nullity is at most m.


In this paper, we introduce the following augmentation block triangular preconditioners:



$$
\mathcal{P}_- = \begin{pmatrix} A + sB^TW^{-1}C & (1+s)B^T \\ 0 & -W \end{pmatrix}
\quad\text{and}\quad
\mathcal{P}_+ = \begin{pmatrix} A + sB^TW^{-1}C & (1-s)B^T \\ 0 & W \end{pmatrix},
$$

where s > 0 and $W \in \mathbb{R}^{m\times m}$ is nonsingular and such that $A + sB^TW^{-1}C$ is invertible. The following theorem provides the spectrum of the preconditioned matrix $\mathcal{P}_-^{-1}\mathcal{A}$.

Theorem 2.1. Suppose that A is singular with nullity r. The matrix $\mathcal{P}_-^{-1}\mathcal{A}$ has the two eigenvalues

$$
\lambda = 1 \quad\text{and}\quad \lambda = \frac{1}{s},
$$

with algebraic multiplicities n and r, respectively. The remaining m − r eigenvalues satisfy the relation

$$
\lambda = \frac{\mu}{s\mu + 1},
$$

where the μ are m − r generalized eigenvalues of the generalized eigenvalue problem

$$
\mu A x = B^T W^{-1} C x. \qquad (2.1)
$$

Let $\{x_i\}_{i=1}^{r}$ be a basis of $\mathcal{N}(A)$, let $\{z_i\}_{i=1}^{n-m}$ be a basis of $\mathcal{N}(C)$, and let $\{y_i\}_{i=1}^{m-r}$ be a set of linearly independent vectors that complete $\mathcal{N}(A) \cup \mathcal{N}(C)$ to a basis of $\mathbb{R}^n$. Then the vectors $[z_i^T, 0^T]^T$ (i = 1, ..., n − m), the vectors $[x_i^T, -(W^{-1}Cx_i)^T]^T$ (i = 1, ..., r) and the vectors $[y_i^T, -(W^{-1}Cy_i)^T]^T$ (i = 1, ..., m − r) are linearly independent eigenvectors associated with λ = 1, and the vectors $[x_i^T, -s(W^{-1}Cx_i)^T]^T$ (i = 1, ..., r) are linearly independent eigenvectors associated with λ = 1/s.

Proof. Let λ be an eigenvalue of $\mathcal{P}_-^{-1}\mathcal{A}$ with eigenvector $[u^T, v^T]^T$. Then



$$
\begin{pmatrix} A & B^T \\ C & 0 \end{pmatrix}
\begin{pmatrix} u \\ v \end{pmatrix}
= \lambda
\begin{pmatrix} A + sB^TW^{-1}C & (1+s)B^T \\ 0 & -W \end{pmatrix}
\begin{pmatrix} u \\ v \end{pmatrix},
$$

which can be rewritten as

$$
Au + B^Tv = \lambda(A + sB^TW^{-1}C)u + (1+s)\lambda B^Tv, \qquad (2.2)
$$
$$
Cu = -\lambda W v. \qquad (2.3)
$$

Since $\mathcal{A}$ is nonsingular, it follows that $\lambda \neq 0$. By (2.3), we get $v = -\lambda^{-1}W^{-1}Cu$. Substituting this into (2.2) yields

$$
(\lambda - 1)\lambda Au + (\lambda - 1)(s\lambda - 1)B^TW^{-1}Cu = 0. \qquad (2.4)
$$

From (2.4), obviously, $(1, [x_i^T, -(W^{-1}Cx_i)^T]^T)$ is an eigenpair of $\mathcal{P}_-^{-1}\mathcal{A}$. Similarly, by (2.4), $(1, [y_i^T, -(W^{-1}Cy_i)^T]^T)$ is an eigenpair, and, for $u \in \mathcal{N}(A)$, so is $(\frac{1}{s}, [x_i^T, -s(W^{-1}Cx_i)^T]^T)$. Assume now that $\lambda \neq 1$. Then (2.4) reduces to the generalized eigenproblem (2.1), $\mu Ax = B^TW^{-1}Cx$, and for each of the m − r generalized eigenvalues μ we get (2.5), i.e.,

$$
\lambda = \frac{\mu}{s\mu + 1}. \qquad (2.5)
$$

A specific set of linearly independent eigenvectors for λ = 1 and λ = 1/s can be readily found. If $\{x_i\}_{i=1}^{r}$ and $\{z_i\}_{i=1}^{n-m}$ are bases of $\mathcal{N}(A)$ and $\mathcal{N}(C)$, respectively, then the vectors $\{x_i\}_{i=1}^{r}$ and $\{z_i\}_{i=1}^{n-m}$ are linearly independent and together span the subspace $\mathcal{N}(A) \cup \mathcal{N}(C)$, with $\mathcal{N}(A) \cap \mathcal{N}(C) = \{0\}$. Further, the vectors $\{y_i\}_{i=1}^{m-r}$ complete $\mathcal{N}(A) \cup \mathcal{N}(C)$ to a basis of $\mathbb{R}^n$. It is not difficult to see that the vectors $[z_i^T, 0^T]^T$ (i = 1, ..., n − m), the vectors $[x_i^T, -(W^{-1}Cx_i)^T]^T$ (i = 1, ..., r) and the vectors $[y_i^T, -(W^{-1}Cy_i)^T]^T$ (i = 1, ..., m − r) are linearly independent eigenvectors associated with λ = 1, and that the vectors $[x_i^T, -s(W^{-1}Cx_i)^T]^T$ (i = 1, ..., r) are linearly independent eigenvectors associated with λ = 1/s. □

When s = 1, we easily obtain the following corollary from Theorem 2.1.

Corollary 2.1. If s = 1, then the matrix $\mathcal{P}_-^{-1}\mathcal{A}$ has one eigenvalue λ = 1 with algebraic multiplicity n + r. The remaining m − r eigenvalues satisfy the relation

$$
\lambda = \frac{\mu}{\mu + 1},
$$

where the μ are m − r generalized eigenvalues of $\mu Ax = B^TW^{-1}Cx$. From Theorem 2.1, it is not difficult to see that the higher the nullity of A, the more strongly the eigenvalues of $\mathcal{P}_-^{-1}\mathcal{A}$ are clustered. Further, if the nullity of A is m, then we have the following corollary.
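Theorem 2.1 can be illustrated numerically. The sketch below is an illustration only (small random dense test data, not from the paper; it assumes W = I, s = 2 and a (1,1) block of nullity r = 2) and counts the eigenvalue clusters of $\mathcal{P}_-^{-1}\mathcal{A}$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, r, s = 8, 3, 2, 2.0

# (1,1) block with nullity exactly r: a generic rank-(n-r) product
A1 = rng.standard_normal((n, n - r)) @ rng.standard_normal((n - r, n))
B = rng.standard_normal((m, n))
C = rng.standard_normal((m, n))
W = np.eye(m)                                   # symmetric positive definite choice

Acal = np.block([[A1, B.T], [C, np.zeros((m, m))]])
Pminus = np.block([[A1 + s * B.T @ np.linalg.solve(W, C), (1 + s) * B.T],
                   [np.zeros((m, n)), -W]])

eigs = np.linalg.eigvals(np.linalg.solve(Pminus, Acal))
n_at_one = int(np.sum(np.abs(eigs - 1.0) < 1e-6))
n_at_inv_s = int(np.sum(np.abs(eigs - 1.0 / s) < 1e-6))
print(n_at_one, n_at_inv_s)   # Theorem 2.1 predicts multiplicities n = 8 and r = 2
```

The remaining m − r = 1 eigenvalue is a generic value μ/(sμ + 1) determined by the pencil (2.1), so it falls outside both clusters.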


Corollary 2.2. If the nullity of A is m, then the matrix $\mathcal{P}_-^{-1}\mathcal{A}$ has the two eigenvalues λ = 1 and λ = 1/s, with algebraic multiplicities n and m, respectively.

Theorem 2.2. If in the saddle point-type matrix $\mathcal{A}$ we have C = B and A is positive semidefinite with nullity r (i.e., $A + A^T$ is symmetric positive semidefinite), and W is symmetric positive definite, then the matrix $\mathcal{P}_-^{-1}\mathcal{A}$ has the two eigenvalues λ = 1 and λ = 1/s, with algebraic multiplicities n and r, respectively. The remaining m − r eigenvalues satisfy

$$
\Re(\lambda) > 0 \quad\text{and}\quad |\lambda| < \frac{1}{s}.
$$

Proof. By Theorem 2.1, the matrix $\mathcal{P}_-^{-1}\mathcal{A}$ has the two distinct eigenvalues λ = 1 and λ = 1/s, with algebraic multiplicities n and r, respectively, and the remaining m − r eigenvalues satisfy $\lambda = \mu/(s\mu + 1)$. From (2.1) we obtain $\mu = a + bi$ with a > 0, and hence $\Re(\lambda) > 0$. By a simple computation,

$$
1 - s|\lambda| = 1 - \left|\frac{s\mu}{s\mu + 1}\right|
= \frac{|s\mu + 1| - |s\mu|}{|s\mu + 1|}
= \frac{|sa + 1 + sbi| - |sa + sbi|}{|s\mu + 1|}
= \frac{\sqrt{(sa+1)^2 + (sb)^2} - \sqrt{(sa)^2 + (sb)^2}}{|s\mu + 1|} > 0,
$$

which completes the proof. □

Similarly, we can obtain the spectrum of the preconditioned matrix $\mathcal{P}_+^{-1}\mathcal{A}$.

Theorem 2.3. Suppose that A is singular with nullity r. The matrix $\mathcal{P}_+^{-1}\mathcal{A}$ has the two eigenvalues

$$
\lambda = 1 \quad\text{and}\quad \lambda = -\frac{1}{s},
$$

with algebraic multiplicities n and r, respectively. The remaining m − r eigenvalues satisfy the relation

$$
\lambda = -\frac{\mu}{s\mu + 1},
$$

where the μ are m − r generalized eigenvalues of $\mu Ax = B^TW^{-1}Cx$. Let $\{x_i\}_{i=1}^{r}$, $\{z_i\}_{i=1}^{n-m}$ and $\{y_i\}_{i=1}^{m-r}$ be defined as in Theorem 2.1. Then the vectors $[z_i^T, 0^T]^T$ (i = 1, ..., n − m), the vectors $[x_i^T, (W^{-1}Cx_i)^T]^T$ (i = 1, ..., r) and the vectors $[y_i^T, (W^{-1}Cy_i)^T]^T$ (i = 1, ..., m − r) are linearly independent eigenvectors associated with λ = 1, and the vectors $[x_i^T, -s(W^{-1}Cx_i)^T]^T$ (i = 1, ..., r) are linearly independent eigenvectors associated with λ = −1/s.

Proof. The proof is similar to that of Theorem 2.1. □

Corollary 2.3. If s = 1, then the matrix $\mathcal{P}_+^{-1}\mathcal{A}$ has the two eigenvalues λ = 1 and λ = −1, with algebraic multiplicities n and r, respectively. The remaining m − r eigenvalues satisfy the relation

$$
\lambda = -\frac{\mu}{\mu + 1},
$$

where the μ are m − r generalized eigenvalues of $\mu Ax = B^TW^{-1}Cx$.

Table 1
Characteristics of Garon1.

Matrix name     n     m    nnz(A)   nnz(B)   nnz(C)
Garon/Garon1    2775  400  58,949   12,889   12,885

Table 2
Iterations for Garon1 with P− and Taug− ("IT" = number of iterations).

Preconditioner   s=1   s=2   s=3   s=4   s=5
P− (IT)          37    24    19    17    16
Taug− (IT)       76

Table 3
Iterations for Garon1 with P+ and Taug+ ("IT" = number of iterations).

Preconditioner   s=1   s=2   s=3   s=4   s=5
P+ (IT)          38    25    20    18    17
Taug+ (IT)       138

Corollary 2.4. If the nullity of A is m, then the matrix $\mathcal{P}_+^{-1}\mathcal{A}$ has the two eigenvalues λ = 1 and λ = −1/s, with algebraic multiplicities n and m, respectively. In particular, if s = 1, then $\mathcal{P}_+^{-1}\mathcal{A}$ has precisely two eigenvalues: λ = 1 of algebraic multiplicity n, and λ = −1 of algebraic multiplicity m.

Theorem 2.4. If in the saddle point-type matrix $\mathcal{A}$ we have C = B and A is positive semidefinite with nullity r, and W is symmetric positive definite, then the matrix $\mathcal{P}_+^{-1}\mathcal{A}$ has the two eigenvalues λ = 1 and λ = −1/s, with algebraic multiplicities n and r, respectively. The remaining m − r eigenvalues satisfy

$$
\Re(\lambda) < 0 \quad\text{and}\quad |\lambda| < \frac{1}{s}.
$$

3. Numerical experiments

In this section, numerical experiments are reported to demonstrate the performance of the preconditioners $\mathcal{P}_-$ and $\mathcal{P}_+$. In the numerical computations, we use preconditioned GMRES to solve the corresponding saddle point-type systems (1.1), where the right-hand side f is taken such that the exact solution is the vector of all ones. The purpose of these experiments is to investigate the influence of the eigenvalue distribution on the convergence behavior of GMRES. All computations are done with MATLAB 7.0. Iterations are terminated either when the number of iterations exceeds 500 or when the current iterate satisfies

$$
\| f - \mathcal{A}x^{(k)} \|_2 \le 10^{-6} \| f \|_2,
$$

where the initial guess is the zero vector. Based on [15], the matrix W in the augmentation block preconditioners is taken as W = (1/γ)I, where the positive parameter γ is taken as

$$
\gamma = \frac{\|A\|_1}{\|B\|_1 \|C\|_1}.
$$

Example 1 (A matrix from the UF Sparse Matrix Collection). The test matrix is Garon/Garon1, from the UF Sparse Matrix Collection; it is an ill-conditioned matrix arising from a computational fluid dynamics problem. The characteristics of the test matrix are listed in Table 1; see [14] for details. In this test, we take several different values of s. The numerical results of full GMRES with the four preconditioners Taug−, Taug+, P− and P+ applied to the systems of linear equations with coefficient matrix Garon/Garon1 are given in Tables 2 and 3, where "IT" denotes the number of iterations. From Tables 2 and 3, it is not difficult to see that the preconditioners P− and P+ are superior to the preconditioners Taug− and Taug+. That is, making use of the preconditioners P− and P+ instead of Taug− and Taug+ is feasible and effective


Table 4
Values of n, m and order of Ai.

h      n     m     Order of Ai
1/16   578   254   832
1/32   2178  1022  3200
1/64   8450  4094  12,544

Table 5
Iterations for A1 with ν = 1 and r = m/2.

h     Prec.  s=1  s=2  s=3  s=4  s=5  s=6    Prec.  IT
1/16  P−     26   20   20   14   13   13     Taug−  53
1/16  P+     27   21   21   15   14   13     Taug+  68
1/32  P−     35   25   20   19   16   15     Taug−  65
1/32  P+     37   26   21   21   16   15     Taug+  92
1/64  P−     40   30   26   20   25   17     Taug−  88
1/64  P+     44   32   27   21   26   17     Taug+  129

to solve the saddle point problems in actual implementations, where Table 2 corresponds to P− and Taug− and Table 3 corresponds to P+ and Taug+. Compared with the preconditioners P+, Taug− and Taug+, the preconditioner P− may be the 'best' choice.

Example 2 (The Oseen problem). Consider the following Oseen problem:

$$
\begin{cases}
-\nu \Delta u + \omega \cdot \nabla u + \nabla p = f, & \text{in } \Omega, \\
-\operatorname{div} u = 0, & \text{in } \Omega,
\end{cases} \qquad (3.1)
$$

with suitable boundary conditions on ∂Ω, where ω is given such that div(ω) = 0 and ν is the viscosity. In our experiments, two values of the viscosity parameter are used for the Oseen equation: ν = 1 and ν = 0.01. The test problems are leaky two-dimensional lid-driven cavity problems on the square 0 ≤ x ≤ 1, 0 ≤ y ≤ 1. Using the IFISS software [24] to discretize (3.1), we take a finite element subdivision based on a uniform grid of square elements. The mixed finite element used is the bilinear-constant velocity-pressure Q1–P0 pair with stabilization; the stabilization parameter is chosen as 1/4. The (1,1) block A of the coefficient matrix, corresponding to the discretization of the conservative term, is positive real, i.e., $A + A^T$ is symmetric positive definite. Since the matrix B produced by this package is rank deficient, we drop the first two rows of B to get a full rank matrix; correspondingly, we also drop the first two rows and columns of the (2,2) block. The matrix $\hat{\mathcal{A}}$ arising from the discretization of the Oseen equation (3.1) is of the form

$$
\hat{\mathcal{A}} = \begin{pmatrix} F_1 & 0 & B_u^T \\ 0 & F_2 & B_v^T \\ B_u & B_v & 0 \end{pmatrix}.
$$

To obtain the form (1.1), the matrices $[B_u, B_v]$ and $[B_u, B_v]^T$ are replaced by a random matrix $\hat{C}$ with the same sparsity as $[B_u, B_v]$ and a random matrix $\hat{B}^T$ with the same sparsity as $[B_u, B_v]^T$, respectively. Further, $\hat{C}(1{:}m, 1{:}m)$ and $\hat{B}(1{:}m, 1{:}m)$ are, respectively, replaced by $C_1 = \hat{C}(1{:}m, 1{:}m) - \frac{3}{2}I_m$ and $B_1 = \hat{B}(1{:}m, 1{:}m) - \frac{3}{2}I_m$, such that $C_1$ and $B_1$ are nonsingular. Defining $C_2 = \hat{C}(1{:}m, m{+}1{:}n)$ and $B_2 = \hat{B}(1{:}m, m{+}1{:}n)$, we have $C = [C_1, C_2]$ and $B = [B_1, B_2]$ with $B_1, C_1 \in \mathbb{R}^{m\times m}$ and $B_2, C_2 \in \mathbb{R}^{m\times(n-m)}$. The above strategy leads to the saddle point-type matrix

$$
\mathcal{A} = \begin{pmatrix} A & B^T \\ C & 0 \end{pmatrix}
\quad\text{with}\quad
\operatorname{rank}(C) = \operatorname{rank}(B) = m.
$$

From Lemma 2.2, noting that the nullity of A is at most m, we construct the saddle point-type matrices

$$
\mathcal{A}_i = \begin{pmatrix} A_i & B^T \\ C & 0 \end{pmatrix}, \quad i = 1, 2,
$$

where $A_i$ is constructed from A by filling its first $i \times m/2$ rows and columns with zero entries. Note that $A_i$ is semipositive real and its nullity is $i \times m/2$. We take three mesh sizes h = 1/16, 1/32, 1/64. The orders of the matrices $\mathcal{A}_i$ and the values of n and m are listed in Table 4. Tables 5–8 show the iteration numbers of GMRES with the augmentation block triangular preconditioners P+, P−, Taug+ and Taug−: Tables 5 and 6 for nullity r = m/2, and Tables 7 and 8 for nullity r = m.
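The zeroing construction of $A_i$ can be sketched as follows. This is an illustration only (a random dense stand-in for the IFISS block A, with a hypothetical helper make_Ai); it checks that zeroing the first i × m/2 rows and columns generically yields nullity exactly i × m/2:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 12, 4
A = rng.standard_normal((n, n)) + 5.0 * np.eye(n)  # dense stand-in for the (1,1) block

def make_Ai(A, i, m):
    """Zero the first i*m/2 rows and columns of A, as in the construction of A_i."""
    k = i * m // 2
    Ai = A.copy()
    Ai[:k, :] = 0.0
    Ai[:, :k] = 0.0
    return Ai

A1 = make_Ai(A, 1, m)   # nullity m/2 = 2 (generically)
A2 = make_Ai(A, 2, m)   # nullity m   = 4 (generically)
print(np.linalg.matrix_rank(A1), np.linalg.matrix_rank(A2))
```

Since the trailing principal block of A is generically nonsingular, the rank of $A_i$ equals n minus the number of zeroed rows, matching the stated nullity.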

Table 6
Iterations for A1 with ν = 0.01 and r = m/2.

h     Prec.  s=1  s=2  s=3  s=4  s=5  s=6    Prec.  IT
1/16  P−     24   19   14   12   13   9      Taug−  39
1/16  P+     26   20   15   13   14   10     Taug+  56
1/32  P−     21   23   13   13   12   12     Taug−  39
1/32  P+     22   24   14   13   12   13     Taug+  61
1/64  P−     28   20   19   18   12   13     Taug−  56
1/64  P+     32   21   21   19   13   14     Taug+  75

Table 7
Iterations for A2 with ν = 1 and r = m.

h     Prec.  s=1  s=2  s=3  s=4  s=5  s=6    Prec.  IT
1/16  P−     2    2    2    2    2    2      Taug−  3
1/16  P+     2    2    2    2    2    2      Taug+  3
1/32  P−     2    2    2    2    2    2      Taug−  3
1/32  P+     2    2    2    2    2    2      Taug+  3
1/64  P−     4    4    4    4    4    4      Taug−  6
1/64  P+     4    4    4    4    4    4      Taug+  6

Table 8
Iterations for A2 with ν = 0.01 and r = m.

h     Prec.  s=1  s=2  s=3  s=4  s=5  s=6    Prec.  IT
1/16  P−     2    2    2    2    2    2      Taug−  3
1/16  P+     2    2    2    2    2    2      Taug+  3
1/32  P−     2    2    2    2    2    2      Taug−  3
1/32  P+     2    2    2    2    2    2      Taug+  3
1/64  P−     4    4    4    4    4    4      Taug−  6
1/64  P+     4    4    4    4    4    4      Taug+  6

From Tables 5–8, one can clearly see that the higher the nullity of the (1,1) block, the better the convergence of the preconditioned GMRES. Tables 7 and 8 show that, with nullity r = m, the iteration numbers of the preconditioned GMRES are insensitive to changes in the mesh size, in the parameter s, and in the viscosity. From Tables 7 and 8, it is again evident that making use of the preconditioners P− and P+ instead of Taug− and Taug+ is feasible and effective for solving saddle point problems in actual implementations.

4. Conclusion

In this paper, we have established two block triangular preconditioners for saddle point linear systems with singular (1,1) block. The preconditioners have the attractive property of improved eigenvalue clustering when a suitable parameter is used in practice. Numerical experiments further confirm the effectiveness of the proposed preconditioners.

Acknowledgments

The authors would like to thank the anonymous referees for their helpful suggestions, which greatly improved the paper.

References

[1] H.C. Elman, Preconditioning for the steady-state Navier–Stokes equations with low viscosity, SIAM J. Sci. Comput. 20 (1999) 1299–1316.
[2] C. Greif, D. Schötzau, Preconditioners for the discretized time-harmonic Maxwell equations in mixed form, Numer. Linear Algebra Appl. 14 (2007) 281–297.
[3] C. Greif, D. Schötzau, Preconditioners for saddle point linear systems with highly singular (1,1) blocks, Electron. Trans. Numer. Anal. 22 (2006) 114–121.
[4] T. Rees, C. Greif, A preconditioner for linear systems arising from interior point optimization methods, SIAM J. Sci. Comput. 29 (2007) 1992–2007.
[5] S. Cafieri, M. D'Apuzzo, V. De Simone, D. Di Serafino, On the iterative solution of KKT systems in potential reduction software for large-scale quadratic problems, Comput. Optim. Appl. 38 (2007) 27–45.
[6] M. Benzi, G.H. Golub, J. Liesen, Numerical solution of saddle point problems, Acta Numer. 14 (2005) 1–137.
[7] I.C.F. Ipsen, A note on preconditioning nonsymmetric matrices, SIAM J. Matrix Anal. Appl. 23 (2001) 1050–1051.
[8] Z.-H. Cao, Augmentation block preconditioners for saddle point-type matrices with singular (1,1) blocks, Numer. Linear Algebra Appl. 15 (2008) 515–533.
[9] Z.-H. Cao, A note on block diagonal and constraint preconditioners for non-symmetric indefinite, Int. J. Comput. Math. 83 (2006) 383–395.
[10] Z.-H. Cao, Block triangular Schur complement preconditioners for saddle point problems and application to the Oseen equations, Technical Report, 2006.
[11] Y. Saad, M.H. Schultz, GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Stat. Comput. 7 (1986) 856–869.
[12] Y. Saad, Iterative Methods for Sparse Linear Systems, second ed., SIAM, Philadelphia, PA, 2003.
[13] A. Greenbaum, Iterative Methods for Solving Linear Systems, Frontiers in Applied Mathematics, vol. 17, SIAM, Philadelphia, 1997.


[14] UF Sparse Matrix Collection, Garon group. http://www.cise.ufl.edu/research/sparse/matrices.
[15] G.H. Golub, C. Greif, On solving block-structured indefinite linear systems, SIAM J. Sci. Comput. 24 (2003) 2076–2092.
[16] M. Benzi, J. Liu, Block preconditioning for saddle point systems with indefinite (1,1) block, Int. J. Comput. Math. 84 (2007) 1117–1129.
[17] M. Benzi, M.A. Olshanskii, An augmented Lagrangian-based approach to the Oseen problem, SIAM J. Sci. Comput. 28 (2006) 2095–2113.
[18] J.-L. Li, T.-Z. Huang, L. Li, The spectral properties of the preconditioned matrix for nonsymmetric saddle point problems, J. Comput. Appl. Math. 235 (2010) 270–285.
[19] S.-Q. Shen, T.-Z. Huang, E.-J. Zhong, Spectral properties of primal-based penalty preconditioners for saddle point problems, J. Comput. Appl. Math. 233 (2010) 2235–2244.
[20] X.-F. Peng, W. Li, An alternating preconditioner for saddle point problems, J. Comput. Appl. Math. 234 (2010) 3411–3423.
[21] S.-L. Wu, T.-Z. Huang, C.-X. Li, Modified block preconditioners for the discretized time-harmonic Maxwell equations in mixed form, J. Comput. Appl. Math. 237 (2013) 419–431.
[22] S.-L. Wu, C.-X. Li, A splitting iterative method for the discrete dynamic linear systems, J. Comput. Appl. Math. 267 (2014) 49–60.
[23] S.-L. Wu, C.-X. Li, Eigenvalue estimates of an indefinite block triangular preconditioner for saddle point problems, J. Comput. Appl. Math. 260 (2014) 349–355.
[24] D.J. Silvester, H.C. Elman, A. Ramage, IFISS: Incompressible Flow Iterative Solution Software. http://www.manchester.ac.uk/ifiss.