Applied Mathematics and Computation 243 (2014) 825–837
Splitting-based block preconditioning methods for block two-by-two matrices of real square blocks

Hui-Yin Yan, Yu-Mei Huang
School of Mathematics and Statistics, Lanzhou University, Lanzhou 730000, People's Republic of China

Abstract

Recently, Bai proposed rotated block preconditioners for block two-by-two matrices of real square blocks. These rotated block preconditioners have the product form of a scaled orthogonal matrix and a block two-by-two triangular matrix. Theoretical and numerical results have shown the superiority of these rotated block preconditioners; see Bai (2013) [10]. In this paper, inspired by the efficiency and the special structure of the rotated block preconditioners, we establish a new linear system, equivalent to the original one, in which an orthogonal matrix arises. We construct block Jacobi and block Gauss–Seidel splitting iteration methods based on the coefficient matrix of the new linear system. The convergence of these splitting iterations is also demonstrated. Then, utilizing the proposed block Jacobi and block Gauss–Seidel splittings, we put forward block preconditioners which have the product form of a scaled orthogonal matrix and a block two-by-two diagonal or block two-by-two triangular matrix. Spectral distributions of the preconditioned matrices and numerical experiments show that the proposed splitting-based block preconditioners can be quite competitive with the rotated block preconditioners when they are used to accelerate Krylov subspace iteration methods such as GMRES for solving block two-by-two linear systems. © 2014 Published by Elsevier Inc.

Keywords: Block two-by-two matrix; Splitting iteration method; Orthogonal matrix; Spectral property; Splitting-based preconditioner
1. Introduction

Consider a block two-by-two linear system of the form
$$\mathcal{A}x \equiv \begin{pmatrix} W & -T \\ T & W \end{pmatrix}\begin{pmatrix} y \\ z \end{pmatrix} = b. \tag{1.1}$$
When the matrices W and T are symmetric positive semidefinite, and at least one of them is positive definite, the linear system (1.1) becomes a special case of the generalized saddle-point problem [22,23]. In our work, we consider the case where W and T are two real square matrices. The coefficient matrix $\mathcal{A}$ is nonsingular if and only if $\mathrm{null}(W) \cap \mathrm{null}(T) = \{0\}$ and $i$ is not a generalized eigenvalue of the matrix pair $(W, T)$ (i.e., $Tx \ne iWx$ for all $x \ne 0$), where $\mathrm{null}(\cdot)$ denotes the null space of the corresponding matrix and $i = \sqrt{-1}$ is the imaginary unit. Many practical problems from scientific computing and engineering applications require the solution of a linear system with the coefficient matrix $\mathcal{A}$ in (1.1), such as distributed control problems [14] and linear quadratic control problems [41]. In addition, the linear system generated from the distributed control problem in [14] has richer properties if W and T are two real symmetric positive definite matrices. For more applications of this class of linear systems, we refer the reader to [40,2,20,9,45,15,10].

To solve the linear system (1.1) efficiently by Krylov subspace iteration methods such as GMRES, we often employ a preconditioning matrix to precondition (1.1) and solve the resulting preconditioned linear system, which typically has a more clustered spectral distribution. A number of preconditioners have been proposed for special block two-by-two coefficient matrices in the literature; for example, specialized incomplete factorization preconditioners, see, for instance, [27,28]; algebraic multilevel iteration preconditioners, see, for instance, [1,3–7,21]; block and approximate Schur complement preconditioners, see, for instance, [25,26,30,36,37,32]; splitting iteration preconditioners, see, for instance, [24,29,31,34,35,44,46]; block definite and indefinite preconditioners, see, for instance, [20,33,38,43]; block triangular preconditioners [20,39,42]; and structured preconditioners, see, for instance, [25,8]. Theoretical analyses and experimental results have demonstrated that such preconditioners greatly accelerate the convergence of the preconditioned Krylov subspace iteration methods for solving the large sparse system of linear equations (1.1). In addition, Bai and his coauthors recently proposed block preconditioners for (1.1) which have the special product form of a scaled orthogonal matrix and a block diagonal matrix or a block triangular matrix in [10,14]. One of them is the preconditioned and modified HSS (PMHSS) iteration method, which was constructed by modifying and preconditioning the HSS iteration method.

⇑ Corresponding author.
E-mail addresses: [email protected] (H.-Y. Yan), [email protected] (Y.-M. Huang).
1 Research supported in part by NSFC Grant Nos. 11101195 and 11171371.
http://dx.doi.org/10.1016/j.amc.2014.06.040
0096-3003/© 2014 Published by Elsevier Inc.
HSS is an efficient stationary splitting iteration method for solving non-Hermitian positive definite linear systems, originally proposed by Bai et al. in [17]. The corresponding PMHSS preconditioner for the matrix $\mathcal{A}$ is of the form
$$\mathcal{F}(\alpha) = \frac{\alpha+1}{\sqrt{2}\,\alpha}\, G \begin{pmatrix} \alpha W+T & 0 \\ 0 & \alpha W+T \end{pmatrix}, \qquad \text{with } G = \frac{1}{\sqrt{2}}\begin{pmatrix} I & -I \\ I & I \end{pmatrix}, \tag{1.2}$$
where α is a prescribed real constant and I is the identity matrix. We notice that $\mathcal{F}(\alpha)$ is the product of a scaled orthogonal matrix G and a block two-by-two diagonal matrix. If the matrices W and T are symmetric positive semidefinite, the convergence and the spectral properties of the PMHSS preconditioned matrix $\mathcal{F}(\alpha)^{-1}\mathcal{A}$ are established in [14]. Numerical experiments in [14] also showed that the PMHSS preconditioners can serve as effective preconditioners for Krylov subspace iteration methods. For more details, see also [12,13,16,19]. Based on the PMHSS preconditioning procedure in [14], the authors of [10] constructed the rotated block lower triangular (RBLT) and rotated block upper triangular (RBUT) preconditioning matrices, and the rotated block triangular product (RBTP) preconditioning matrix, for the block two-by-two linear system (1.1) with the coefficient matrix $\mathcal{A}$. These rotated block triangular preconditioning matrices are
$$\mathcal{L}(\alpha) = \frac{1}{\sqrt{\alpha^2+1}}\, G \begin{pmatrix} \alpha W+T & 0 \\ \alpha T-W & \alpha W+T \end{pmatrix}, \tag{1.3}$$
$$\mathcal{U}(\alpha) = \frac{1}{\sqrt{\alpha^2+1}}\, G \begin{pmatrix} \alpha W+T & W-\alpha T \\ 0 & \alpha W+T \end{pmatrix}, \tag{1.4}$$
$$\mathcal{P}(\alpha) = \frac{1}{\sqrt{\alpha^2+1}}\, G \begin{pmatrix} \alpha W+T & 0 \\ \alpha T-W & \alpha W+T \end{pmatrix} \begin{pmatrix} \alpha W+T & 0 \\ 0 & \alpha W+T \end{pmatrix}^{-1} \begin{pmatrix} \alpha W+T & W-\alpha T \\ 0 & \alpha W+T \end{pmatrix}, \tag{1.5}$$
respectively, with G as in (1.2). Here α is a real constant. We also notice that the rotated block preconditioners are products of a scaled orthogonal matrix G and block triangular matrices. These rotated preconditioning matrices can be applied not only to symmetric positive semidefinite matrices W and T, but also to nonsymmetric W or nonsymmetric T; see, for instance, [10]. Theoretical results in [10] showed that the eigenvalues of $\mathcal{L}(\alpha)^{-1}\mathcal{A}$ and $\mathcal{U}(\alpha)^{-1}\mathcal{A}$ are contained within the same union of two complex disks, and that the eigenvalues of $\mathcal{P}(\alpha)^{-1}\mathcal{A}$ are also contained within a union of two complex disks. Explicit eigenvector expressions of the preconditioned matrices $\mathcal{L}(\alpha)^{-1}\mathcal{A}$, $\mathcal{U}(\alpha)^{-1}\mathcal{A}$ and $\mathcal{P}(\alpha)^{-1}\mathcal{A}$ are also given in [10].

In this paper, inspired by the efficiency and structure of the PMHSS preconditioner and the rotated block preconditioning matrices, we introduce an orthogonal matrix into the original linear system, resulting in a new block two-by-two linear system equivalent to the original linear system (1.1). We construct block Jacobi and block Gauss–Seidel splitting iterations based on the new linear system and establish their convergence theories. Then, we propose block Jacobi and block Gauss–Seidel splitting based block preconditioners for Krylov subspace methods such as GMRES to solve (1.1). The eigenvalues and eigenvectors of these preconditioned matrices are expressed explicitly. Theoretical results show that the eigenvalues of the proposed preconditioned matrices are contained in one complex disk, while the eigenvalues of the rotated block preconditioned matrices are contained in two complex disks. Therefore, the spectral distributions of the proposed preconditioned matrices are essentially more clustered. Numerical experiments demonstrate the effectiveness of the proposed block Jacobi and block Gauss–Seidel splitting based preconditioned GMRES iteration methods for solving (1.1).
The organization of this paper is as follows. In Section 2, we introduce block Jacobi and block Gauss–Seidel splitting iterations for the linear system (1.1) and establish their convergence theories. The block Jacobi and block Gauss–Seidel splitting-based preconditioning matrices, and the computing procedures for solving the generalized residual equations with these preconditioning matrices, are described in Section 3. In Section 4, we analyze the spectral properties of the proposed splitting-based block preconditioned matrices and give their eigenvalue and eigenvector expressions explicitly. In Section 5, numerical results are presented to show the effectiveness of the proposed splitting-based block preconditioning matrices. Finally, we end the paper with concluding remarks in Section 6.

2. Splitting iteration methods

In the PMHSS and rotated block triangular preconditioning methods, the preconditioners have the product form of a scaled orthogonal matrix and a block two-by-two diagonal or triangular matrix. Inspired by the efficiency and construction of these preconditioners, we propose splitting iteration methods in this section, as well as iteration-based block preconditioners in the next section, for (1.1). Consider the orthogonal matrix
$$S = \frac{1}{\sqrt{\alpha^2+1}}\begin{pmatrix} \alpha I & I \\ -I & \alpha I \end{pmatrix},$$
where α is a real constant and I is the identity matrix. We assume that αW + T is nonsingular. Multiplying both sides of the block two-by-two linear system (1.1) by the matrix S, we obtain
$$S\mathcal{A}x = \frac{1}{\sqrt{\alpha^2+1}}\begin{pmatrix} \alpha W+T & W-\alpha T \\ \alpha T-W & \alpha W+T \end{pmatrix} x = Sb. \tag{2.7}$$
By further multiplying both sides of (2.7) by $S^{-1}$, we obtain a linear system equivalent to (1.1):
$$\mathcal{A}x = \frac{1}{\alpha^2+1}\begin{pmatrix} \alpha I & -I \\ I & \alpha I \end{pmatrix}\begin{pmatrix} \alpha W+T & W-\alpha T \\ \alpha T-W & \alpha W+T \end{pmatrix} x = b. \tag{2.8}$$
We now consider splitting iteration methods for solving (2.8). Firstly, the block Jacobi splitting of the coefficient matrix of (2.8) is
$$\mathcal{A} = M_{BJ}(\alpha) - N_{BJ}(\alpha),$$
with
$$M_{BJ}(\alpha) = \frac{1}{\alpha^2+1}\begin{pmatrix}\alpha I & -I\\ I & \alpha I\end{pmatrix}\begin{pmatrix}\alpha W+T & 0\\ 0 & \alpha W+T\end{pmatrix} \tag{2.9}$$
and
$$N_{BJ}(\alpha) = \frac{1}{\alpha^2+1}\begin{pmatrix}\alpha I & -I\\ I & \alpha I\end{pmatrix}\begin{pmatrix}0 & \alpha T-W\\ W-\alpha T & 0\end{pmatrix}.$$
The block Jacobi iteration scheme is
$$x^{(k+1)} = G_{BJ}(\alpha)x^{(k)} + M_{BJ}(\alpha)^{-1}b, \qquad k = 0, 1, 2, \ldots, \tag{2.10}$$
where $G_{BJ}(\alpha)$ is the iteration matrix of the form
$$G_{BJ}(\alpha) = M_{BJ}(\alpha)^{-1}N_{BJ}(\alpha) = \begin{pmatrix}0 & -X(\alpha)\\ X(\alpha) & 0\end{pmatrix}, \tag{2.11}$$
and $X(\alpha)$ denotes the matrix
$$X(\alpha) = (\alpha W+T)^{-1}(W-\alpha T). \tag{2.12}$$
Then the block lower-triangular Gauss–Seidel splitting of the coefficient matrix of (2.8) is
$$\mathcal{A} = M_{BLTGS}(\alpha) - N_{BLTGS}(\alpha),$$
with
$$M_{BLTGS}(\alpha) = \frac{1}{\alpha^2+1}\begin{pmatrix}\alpha I & -I\\ I & \alpha I\end{pmatrix}\begin{pmatrix}\alpha W+T & 0\\ \alpha T-W & \alpha W+T\end{pmatrix} \tag{2.13}$$
and
$$N_{BLTGS}(\alpha) = \frac{1}{\alpha^2+1}\begin{pmatrix}\alpha I & -I\\ I & \alpha I\end{pmatrix}\begin{pmatrix}0 & \alpha T-W\\ 0 & 0\end{pmatrix}.$$
The block lower-triangular Gauss–Seidel splitting iteration is of the form
$$x^{(k+1)} = G_{BLTGS}(\alpha)x^{(k)} + M_{BLTGS}(\alpha)^{-1}b, \qquad k = 0, 1, 2, \ldots, \tag{2.14}$$
where the iteration matrix is
$$G_{BLTGS}(\alpha) = M_{BLTGS}(\alpha)^{-1}N_{BLTGS}(\alpha) = \begin{pmatrix}0 & -X(\alpha)\\ 0 & -X(\alpha)^2\end{pmatrix}. \tag{2.15}$$
Similarly, the block upper-triangular Gauss–Seidel splitting of the coefficient matrix of (2.8) is
$$\mathcal{A} = M_{BUTGS}(\alpha) - N_{BUTGS}(\alpha),$$
with
$$M_{BUTGS}(\alpha) = \frac{1}{\alpha^2+1}\begin{pmatrix}\alpha I & -I\\ I & \alpha I\end{pmatrix}\begin{pmatrix}\alpha W+T & W-\alpha T\\ 0 & \alpha W+T\end{pmatrix} \tag{2.16}$$
and
$$N_{BUTGS}(\alpha) = \frac{1}{\alpha^2+1}\begin{pmatrix}\alpha I & -I\\ I & \alpha I\end{pmatrix}\begin{pmatrix}0 & 0\\ W-\alpha T & 0\end{pmatrix}.$$
The block upper-triangular Gauss–Seidel splitting iteration scheme is
$$x^{(k+1)} = G_{BUTGS}(\alpha)x^{(k)} + M_{BUTGS}(\alpha)^{-1}b, \qquad k = 0, 1, 2, \ldots, \tag{2.17}$$
with
$$G_{BUTGS}(\alpha) = M_{BUTGS}(\alpha)^{-1}N_{BUTGS}(\alpha) = \begin{pmatrix}-X(\alpha)^2 & 0\\ X(\alpha) & 0\end{pmatrix} \tag{2.18}$$
being the iteration matrix.

For the convergence of the splitting iterations (2.10), (2.14) and (2.17), we have the following results.

Theorem 2.1. Suppose that W and T are real symmetric positive semidefinite matrices, W is nonsingular, α is a positive real constant and αW + T is nonsingular. Let $\lambda_{\min}$ and $\lambda_{\max}$ be the minimum and maximum eigenvalues of the matrix $W^{-1}T$. Then for the block Jacobi splitting iteration (2.10) and the block Gauss–Seidel splitting iterations (2.14) and (2.17), it holds that:

(i) if $\lambda_{\max} \le 1$, then for
$$\alpha > \frac{1-\lambda_{\min}}{1+\lambda_{\min}},$$
we have $\rho(G_{BJ}(\alpha)) < 1$, $\rho(G_{BLTGS}(\alpha)) < 1$ and $\rho(G_{BUTGS}(\alpha)) < 1$, where here and in the sequel $\rho(\cdot)$ denotes the spectral radius of the corresponding matrix;

(ii) if $\lambda_{\max} > 1$, then for
$$\max\left\{\frac{1-\lambda_{\min}}{1+\lambda_{\min}},\, 0\right\} < \alpha < \frac{\lambda_{\max}+1}{\lambda_{\max}-1},$$
we have $\rho(G_{BJ}(\alpha)) < 1$, $\rho(G_{BLTGS}(\alpha)) < 1$ and $\rho(G_{BUTGS}(\alpha)) < 1$.

Proof. As W and T are real symmetric positive semidefinite matrices and W is nonsingular, $W^{-1}T$ is similar to $W^{-1/2}TW^{-1/2}$. Therefore, any eigenvalue λ of the matrix $W^{-1}T$ is nonnegative. Since
$$X(\alpha) = (\alpha W+T)^{-1}(W-\alpha T) = (\alpha I + W^{-1}T)^{-1}(I - \alpha W^{-1}T),$$
the eigenvalues of X(α) are given by
$$f(\lambda) = \frac{1-\alpha\lambda}{\alpha+\lambda},$$
where λ ≥ 0 is an eigenvalue of the matrix $W^{-1}T$. The derivative of f(λ) satisfies
$$f'(\lambda) = \frac{-(\alpha^2+1)}{(\alpha+\lambda)^2} < 0.$$
Therefore, f(λ) is monotonically decreasing and
$$\frac{1-\alpha\lambda_{\max}}{\alpha+\lambda_{\max}} \le \frac{1-\alpha\lambda}{\alpha+\lambda} \le \frac{1-\alpha\lambda_{\min}}{\alpha+\lambda_{\min}}$$
holds for any eigenvalue λ ≥ 0 of the matrix $W^{-1}T$, where $\lambda_{\min}$ and $\lambda_{\max}$ are the smallest and largest eigenvalues of $W^{-1}T$. Suppose that the inequalities
$$-1 < \frac{1-\alpha\lambda_{\max}}{\alpha+\lambda_{\max}} \qquad\text{and}\qquad \frac{1-\alpha\lambda_{\min}}{\alpha+\lambda_{\min}} < 1 \tag{2.19}$$
hold true. Then $\rho(X(\alpha)) < 1$. Through simple deductions, we have:

(i) if $\lambda_{\max} \le 1$, then
$$\alpha > \frac{1-\lambda_{\min}}{1+\lambda_{\min}}$$
guarantees the validity of inequality (2.19), so $\rho(X(\alpha)) < 1$;

(ii) if $\lambda_{\max} > 1$, then
$$\max\left\{\frac{1-\lambda_{\min}}{1+\lambda_{\min}},\, 0\right\} < \alpha < \frac{\lambda_{\max}+1}{\lambda_{\max}-1}$$
guarantees the validity of inequality (2.19), so $\rho(X(\alpha)) < 1$, too.

On the other hand, from the formulations (2.11), (2.15) and (2.18), we easily obtain
$$\rho(G_{BJ}(\alpha)) = \rho(X(\alpha)),\qquad \rho(G_{BLTGS}(\alpha)) = \rho(X(\alpha))^2,\qquad \rho(G_{BUTGS}(\alpha)) = \rho(X(\alpha))^2.$$
The conclusions follow immediately. □
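As an illustration of Theorem 2.1, the following sketch (again with hypothetical random SPD/PSD test matrices) computes λ_min and λ_max of W⁻¹T, places α strictly inside the admissible interval of the theorem, and verifies ρ(X(α)) < 1:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 5
B = rng.standard_normal((m, m))
W = B @ B.T + m * np.eye(m)          # symmetric positive definite
C = rng.standard_normal((m, m))
T = C @ C.T                          # symmetric positive semidefinite

lam = np.sort(np.linalg.eigvals(np.linalg.solve(W, T)).real)
lam_min, lam_max = lam[0], lam[-1]

# Admissible interval for alpha from Theorem 2.1 (finite upper bound only
# in the case lam_max > 1).
lower = max((1 - lam_min) / (1 + lam_min), 0.0)
upper = (lam_max + 1) / (lam_max - 1) if lam_max > 1 else np.inf
alpha = 0.5 * (lower + min(upper, lower + 2.0))   # a point strictly inside

X = np.linalg.solve(alpha * W + T, W - alpha * T)
rho_X = max(abs(np.linalg.eigvals(X)))
print(rho_X < 1)                     # all three splitting iterations converge
```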
Theorem 2.1 shows that the splitting iterations (2.10), (2.14) and (2.17) converge for any initial guess when the parameter α is selected appropriately.

3. Splitting-based block preconditioning matrices

Suppose that α is a positive constant and αW + T is nonsingular. By taking advantage of the analyses in Section 2, we adopt $M_{BJ}(\alpha)$, $M_{BLTGS}(\alpha)$ and $M_{BUTGS}(\alpha)$ as splitting-based block preconditioning matrices for the block two-by-two matrix $\mathcal{A}$ when Krylov subspace methods are applied to solve (1.1). We call these preconditioning matrices the block Jacobi (BJ) preconditioner, the block lower-triangular Gauss–Seidel (BLTGS) preconditioner and the block upper-triangular Gauss–Seidel (BUTGS) preconditioner, respectively. Moreover, similar to the rotated block triangular product preconditioner in [10], we construct a splitting-based block triangular product preconditioner as
$$M_{SBTP}(\alpha) = \frac{1}{\alpha^2+1}\begin{pmatrix}\alpha I & -I\\ I & \alpha I\end{pmatrix}\begin{pmatrix}\alpha W+T & 0\\ \alpha T-W & \alpha W+T\end{pmatrix}\begin{pmatrix}\alpha W+T & 0\\ 0 & \alpha W+T\end{pmatrix}^{-1}\begin{pmatrix}\alpha W+T & W-\alpha T\\ 0 & \alpha W+T\end{pmatrix}. \tag{3.20}$$
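Since the work in a preconditioned Krylov method involves only the matrix αW + T, applying any of these preconditioners is inexpensive once αW + T has been factored. A minimal dense-NumPy sketch for the BLTGS case (the helper name apply_bltgs_inv is ours, not the paper's), matching the solution procedures described below:

```python
import numpy as np

def apply_bltgs_inv(W, T, alpha, r):
    """Solve M_BLTGS(alpha) t = r via two solves with D = alpha*W + T."""
    m = W.shape[0]
    ra, rb = r[:m], r[m:]
    D = alpha * W + T
    ra_t = alpha * ra + rb                       # tilde r_a
    rb_t = -ra + alpha * rb                      # tilde r_b
    ta = np.linalg.solve(D, ra_t)
    tb = np.linalg.solve(D, (W - alpha * T) @ ta + rb_t)
    return np.concatenate([ta, tb])

# Consistency check against a directly assembled M_BLTGS(alpha).
rng = np.random.default_rng(2)
m, alpha = 4, 0.8
B = rng.standard_normal((m, m)); W = B @ B.T + m * np.eye(m)
C = rng.standard_normal((m, m)); T = C @ C.T
r = rng.standard_normal(2 * m)

I, Z = np.eye(m), np.zeros((m, m))
D = alpha * W + T
M = (np.block([[alpha * I, -I], [I, alpha * I]]) / (alpha**2 + 1)
     @ np.block([[D, Z], [alpha * T - W, D]]))
print(np.allclose(apply_bltgs_inv(W, T, alpha, r), np.linalg.solve(M, r)))
```

In a realistic implementation one would factor αW + T once (e.g., by a Cholesky or LU factorization) and reuse the factors for every application of the preconditioner.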
For convenience, we abbreviate $M_{SBTP}(\alpha)$ as SBTP in the sequel of the paper. Here we also note that the proposed splitting-based block preconditioners have the product form of a scaled orthogonal matrix S and a block diagonal or a block triangular matrix. Specifically, we find that when α = 1, the rotated block preconditioners (1.3)–(1.5) proposed in [10] coincide with $M_{BLTGS}(1)$, $M_{BUTGS}(1)$ and $M_{SBTP}(1)$, respectively. Theoretical analyses and experimental results show that the proposed splitting-based block preconditioners are more efficient than the rotated block preconditioners. The main computation in preconditioned methods is the solution of a linear system whose coefficient matrix is the preconditioner. Due to the special orthogonal, block diagonal and block triangular factors in the preconditioners $M_{BJ}(\alpha)$, $M_{BLTGS}(\alpha)$, $M_{BUTGS}(\alpha)$ and $M_{SBTP}(\alpha)$, linear systems with the block Jacobi and block Gauss–Seidel splitting-based preconditioning matrices reduce entirely to solving linear sub-systems with the coefficient matrix αW + T. Denoting the linear systems we will solve as
$$M_{BJ}(\alpha)t = r,\qquad M_{BLTGS}(\alpha)t = r,\qquad M_{BUTGS}(\alpha)t = r \qquad\text{and}\qquad M_{SBTP}(\alpha)t = r, \tag{3.21}$$
where $r = (r_a^T, r_b^T)^T$ and $t = (t_a^T, t_b^T)^T$ are the current and the generalized residual vectors, respectively, and $(\cdot)^T$ represents the transpose of a vector or a matrix, substituting (2.9), (2.13), (2.16) and (3.20) into (3.21) and setting
$$\tilde r_a = \alpha r_a + r_b, \qquad \tilde r_b = -r_a + \alpha r_b,$$
we can solve (3.21) through the following procedures:

Solve t from $M_{BJ}(\alpha)t = r$:
- solve $t_a$ from $(\alpha W+T)t_a = \tilde r_a$;
- solve $t_b$ from $(\alpha W+T)t_b = \tilde r_b$.

Solve t from $M_{BLTGS}(\alpha)t = r$:
- solve $t_a$ from $(\alpha W+T)t_a = \tilde r_a$;
- solve $t_b$ from $(\alpha W+T)t_b = (W-\alpha T)t_a + \tilde r_b$.

Solve t from $M_{BUTGS}(\alpha)t = r$:
- solve $t_b$ from $(\alpha W+T)t_b = \tilde r_b$;
- solve $t_a$ from $(\alpha W+T)t_a = (\alpha T-W)t_b + \tilde r_a$.

Solve t from $M_{SBTP}(\alpha)t = r$:
- solve $\tilde t_a$ from $(\alpha W+T)\tilde t_a = \tilde r_a$;
- solve $t_b$ from $(\alpha W+T)t_b = (W-\alpha T)\tilde t_a + \tilde r_b$;
- solve $t_a$ from $(\alpha W+T)t_a = (\alpha T-W)t_b + \tilde r_a$.

We can generally solve these linear sub-systems by triangular factorization methods. In addition, if the matrix αW + T is positive definite, they can also be solved effectively, either exactly by the Cholesky factorization or inexactly by conjugate gradient or multigrid schemes; see [15,17,18]. Compared with the PMHSS preconditioner in [14], the computing procedure of the block Jacobi preconditioner is the same as that of the PMHSS preconditioner; compared with the rotated block preconditioners in [10], the computing procedures of the block Gauss–Seidel preconditioners are the same as those of the rotated block preconditioners. We also find that the block Jacobi splitting-based preconditioner requires less computing cost than the block Gauss–Seidel splitting-based preconditioners do.

4. Spectral properties of the proposed preconditioners

In this section, we give the spectral analysis of the proposed block Jacobi and block Gauss–Seidel splitting-based preconditioners. We show the eigenvalues of the proposed preconditioned matrices in the following theorem.

Theorem 4.2. Suppose that W and T are two real m-by-m matrices, α is a real constant and αW + T is nonsingular. Let X(α) be given in (2.12) and let $\lambda_j$, j = 1, 2, …, m, be the eigenvalues of X(α). Then, for the preconditioning matrices $M_{BJ}(\alpha)$ defined in (2.9), $M_{BLTGS}(\alpha)$ defined in (2.13), $M_{BUTGS}(\alpha)$ defined in (2.16) and $M_{SBTP}(\alpha)$ defined in (3.20) for the linear system (1.1), it holds that:

(i) the eigenvalues of the block Jacobi preconditioned matrix $M_{BJ}(\alpha)^{-1}\mathcal{A}$ are $1 \pm i\lambda_j$, j = 1, 2, …, m;

(ii) the preconditioned matrices $M_{BLTGS}(\alpha)^{-1}\mathcal{A}$, $M_{BUTGS}(\alpha)^{-1}\mathcal{A}$ and $M_{SBTP}(\alpha)^{-1}\mathcal{A}$ have the same eigenvalues: 1 with multiplicity m, and the others are $1 + \lambda_j^2$, j = 1, 2, …, m.
Proof. (i) By straightforward computation, we have
$$M_{BJ}(\alpha)^{-1} = \begin{pmatrix}(\alpha W+T)^{-1} & 0\\ 0 & (\alpha W+T)^{-1}\end{pmatrix}\begin{pmatrix}\alpha I & I\\ -I & \alpha I\end{pmatrix}.$$
It immediately follows that
$$M_{BJ}(\alpha)^{-1}\mathcal{A} = \begin{pmatrix}I & X(\alpha)\\ -X(\alpha) & I\end{pmatrix}.$$
By the Schur decomposition theory, there exists a unitary matrix V(α) such that
$$(V(\alpha))^{*} X(\alpha) V(\alpha) = R(\alpha),$$
where $(\cdot)^{*}$ denotes the conjugate transpose of the corresponding matrix and R(α) is an upper-triangular matrix whose diagonal elements are the eigenvalues $\lambda_j$, j = 1, 2, …, m, of X(α). Define the unitary matrices
$$\hat V(\alpha) = \begin{pmatrix}V(\alpha) & 0\\ 0 & V(\alpha)\end{pmatrix} \qquad\text{and}\qquad Q = \frac{1}{\sqrt{2}}\begin{pmatrix}I & I\\ iI & -iI\end{pmatrix}. \tag{4.22}$$
Then Q satisfies
$$\begin{pmatrix}I & R(\alpha)\\ -R(\alpha) & I\end{pmatrix} = Q\begin{pmatrix}I+iR(\alpha) & 0\\ 0 & I-iR(\alpha)\end{pmatrix}Q^{*}$$
and
$$M_{BJ}(\alpha)^{-1}\mathcal{A} = \hat V(\alpha)\begin{pmatrix}I & R(\alpha)\\ -R(\alpha) & I\end{pmatrix}\hat V(\alpha)^{*} = (\hat V(\alpha)Q)\begin{pmatrix}I+iR(\alpha) & 0\\ 0 & I-iR(\alpha)\end{pmatrix}(\hat V(\alpha)Q)^{*}.$$
Note that $\hat V(\alpha)Q$ is also a unitary matrix. Hence $M_{BJ}(\alpha)^{-1}\mathcal{A}$ is unitarily similar to
$$\begin{pmatrix}I+iR(\alpha) & 0\\ 0 & I-iR(\alpha)\end{pmatrix}.$$
We can easily see that the eigenvalues of the block Jacobi preconditioned matrix $M_{BJ}(\alpha)^{-1}\mathcal{A}$ are $1 \pm i\lambda_j$, j = 1, 2, …, m.

(ii) Through direct computations, we obtain
$$M_{BLTGS}(\alpha)^{-1}\mathcal{A} = \begin{pmatrix}\alpha W+T & 0\\ \alpha T-W & \alpha W+T\end{pmatrix}^{-1}\begin{pmatrix}\alpha I & I\\ -I & \alpha I\end{pmatrix}\mathcal{A} = \begin{pmatrix}I & X(\alpha)\\ 0 & I+X(\alpha)^2\end{pmatrix},$$
$$M_{BUTGS}(\alpha)^{-1}\mathcal{A} = \begin{pmatrix}\alpha W+T & W-\alpha T\\ 0 & \alpha W+T\end{pmatrix}^{-1}\begin{pmatrix}\alpha I & I\\ -I & \alpha I\end{pmatrix}\mathcal{A} = \begin{pmatrix}I+X(\alpha)^2 & 0\\ -X(\alpha) & I\end{pmatrix},$$
$$M_{SBTP}(\alpha)^{-1}\mathcal{A} = \begin{pmatrix}\alpha W+T & W-\alpha T\\ 0 & \alpha W+T\end{pmatrix}^{-1}\begin{pmatrix}\alpha W+T & 0\\ 0 & \alpha W+T\end{pmatrix}\begin{pmatrix}\alpha W+T & 0\\ \alpha T-W & \alpha W+T\end{pmatrix}^{-1}\begin{pmatrix}\alpha I & I\\ -I & \alpha I\end{pmatrix}\mathcal{A} = \begin{pmatrix}I & -X(\alpha)^3\\ 0 & I+X(\alpha)^2\end{pmatrix}.$$
Obviously, the block Gauss–Seidel preconditioned matrices $M_{BLTGS}(\alpha)^{-1}\mathcal{A}$, $M_{BUTGS}(\alpha)^{-1}\mathcal{A}$ and $M_{SBTP}(\alpha)^{-1}\mathcal{A}$ have the same eigenvalues: 1 with multiplicity m, and the others are $1 + \lambda_j^2$, j = 1, 2, …, m. □

Furthermore, if the matrix X(α) can be diagonalized, the preconditioned matrices $M_{BJ}(\alpha)^{-1}\mathcal{A}$, $M_{BLTGS}(\alpha)^{-1}\mathcal{A}$, $M_{BUTGS}(\alpha)^{-1}\mathcal{A}$ and $M_{SBTP}(\alpha)^{-1}\mathcal{A}$ can be diagonalized, too; see, for instance, [10,14]. In that case, we can also obtain the eigenvectors of these preconditioned matrices.

Theorem 4.3. Suppose that W and T are two real m-by-m matrices, α is a real constant and αW + T is nonsingular. Suppose further that the matrix X(α) is diagonalizable, i.e., there exist a nonsingular matrix U(α) and a diagonal matrix Λ(α) satisfying $U(\alpha)^{-1}X(\alpha)U(\alpha) = \Lambda(\alpha)$, and denote by $\phi_j := \phi_j(\alpha)$ the jth column of U(α) and by $\lambda_j$ the jth diagonal element of Λ(α). Then, for the preconditioned matrices $M_{BJ}(\alpha)^{-1}\mathcal{A}$, $M_{BLTGS}(\alpha)^{-1}\mathcal{A}$, $M_{BUTGS}(\alpha)^{-1}\mathcal{A}$ and $M_{SBTP}(\alpha)^{-1}\mathcal{A}$, it holds that:
(i) the eigenvalues of the preconditioned matrix $M_{BJ}(\alpha)^{-1}\mathcal{A}$ are
$$t_j = 1 + i\lambda_j,\qquad t_{m+j} = 1 - i\lambda_j,\qquad j = 1, 2, \ldots, m,$$
and the corresponding eigenvectors are
$$x_j = \frac{1}{\sqrt{2}}\begin{pmatrix}\phi_j\\ i\phi_j\end{pmatrix},\qquad x_{m+j} = \frac{1}{\sqrt{2}}\begin{pmatrix}\phi_j\\ -i\phi_j\end{pmatrix},\qquad j = 1, 2, \ldots, m;$$
(ii) the eigenvalues of the preconditioned matrix $M_{BLTGS}(\alpha)^{-1}\mathcal{A}$ are
$$t_j = 1 + \lambda_j^2,\qquad t_{m+j} = 1,\qquad j = 1, 2, \ldots, m,$$
and the corresponding eigenvectors are
$$x_j = \frac{1}{\sqrt{1+\lambda_j^2}}\begin{pmatrix}\phi_j\\ \lambda_j\phi_j\end{pmatrix},\qquad x_{m+j} = \begin{pmatrix}\phi_j\\ 0\end{pmatrix},\qquad j = 1, 2, \ldots, m;$$
(iii) the eigenvalues of the preconditioned matrix $M_{BUTGS}(\alpha)^{-1}\mathcal{A}$ are
$$t_j = 1 + \lambda_j^2,\qquad t_{m+j} = 1,\qquad j = 1, 2, \ldots, m,$$
and the corresponding eigenvectors are
$$x_j = \frac{1}{\sqrt{1+\lambda_j^2}}\begin{pmatrix}\lambda_j\phi_j\\ -\phi_j\end{pmatrix},\qquad x_{m+j} = \begin{pmatrix}0\\ \phi_j\end{pmatrix},\qquad j = 1, 2, \ldots, m;$$
(iv) the eigenvalues of the preconditioned matrix $M_{SBTP}(\alpha)^{-1}\mathcal{A}$ are
$$t_j = 1 + \lambda_j^2,\qquad t_{m+j} = 1,\qquad j = 1, 2, \ldots, m,$$
and the corresponding eigenvectors are
$$x_j = \frac{1}{\sqrt{1+\lambda_j^2}}\begin{pmatrix}\lambda_j\phi_j\\ -\phi_j\end{pmatrix},\qquad x_{m+j} = \begin{pmatrix}\phi_j\\ 0\end{pmatrix},\qquad j = 1, 2, \ldots, m.$$
Proof. (i) Define the matrix
$$\hat U(\alpha) = \begin{pmatrix}U(\alpha) & 0\\ 0 & U(\alpha)\end{pmatrix}.$$
From $U(\alpha)^{-1}X(\alpha)U(\alpha) = \Lambda(\alpha)$, we know that
$$M_{BJ}(\alpha)^{-1}\mathcal{A} = (\hat U(\alpha)Q)\begin{pmatrix}I+i\Lambda(\alpha) & 0\\ 0 & I-i\Lambda(\alpha)\end{pmatrix}(\hat U(\alpha)Q)^{-1},$$
where Q is defined in (4.22) and
$$\hat U(\alpha)Q = \frac{1}{\sqrt{2}}\begin{pmatrix}U(\alpha) & U(\alpha)\\ iU(\alpha) & -iU(\alpha)\end{pmatrix}.$$
So, the results of (i) are easily obtained.

(ii) For a block two-by-two matrix
$$C = \begin{pmatrix}C_1 & C_3\\ C_4 & C_2\end{pmatrix} \tag{3.23}$$
with square diagonal blocks
$$C_i = \mathrm{diag}\bigl(c_1^{(i)}, c_2^{(i)}, \ldots, c_m^{(i)}\bigr),\qquad i = 1, 2, 3, 4,$$
it admits the spectral decomposition
$$C = F N F^{-1},\qquad N = \mathrm{diag}(N_1, N_2),\qquad F = \begin{pmatrix}F_1 & F_3\\ F_4 & F_2\end{pmatrix},$$
where
$$N_i = \mathrm{diag}\bigl(n_1^{(i)}, n_2^{(i)}, \ldots, n_m^{(i)}\bigr),\qquad i = 1, 2,$$
with
$$n_j^{(1)} = \frac{c_j^{(1)}+c_j^{(2)} + \sqrt{\bigl(c_j^{(1)}-c_j^{(2)}\bigr)^2 + 4c_j^{(3)}c_j^{(4)}}}{2},\qquad n_j^{(2)} = \frac{c_j^{(1)}+c_j^{(2)} - \sqrt{\bigl(c_j^{(1)}-c_j^{(2)}\bigr)^2 + 4c_j^{(3)}c_j^{(4)}}}{2},$$
and
$$F_1 = \mathrm{diag}\Bigl(\tfrac{1}{\sqrt{1+f_1^2}}, \tfrac{1}{\sqrt{1+f_2^2}}, \ldots, \tfrac{1}{\sqrt{1+f_m^2}}\Bigr),\qquad F_4 = \mathrm{diag}\Bigl(\tfrac{f_1}{\sqrt{1+f_1^2}}, \tfrac{f_2}{\sqrt{1+f_2^2}}, \ldots, \tfrac{f_m}{\sqrt{1+f_m^2}}\Bigr),$$
$$F_3 = \mathrm{diag}\Bigl(\tfrac{1}{\sqrt{1+g_1^2}}, \tfrac{1}{\sqrt{1+g_2^2}}, \ldots, \tfrac{1}{\sqrt{1+g_m^2}}\Bigr),\qquad F_2 = \mathrm{diag}\Bigl(\tfrac{g_1}{\sqrt{1+g_1^2}}, \tfrac{g_2}{\sqrt{1+g_2^2}}, \ldots, \tfrac{g_m}{\sqrt{1+g_m^2}}\Bigr),$$
with (assuming $c_j^{(3)} \ne 0$)
$$f_j = \frac{c_j^{(2)}-c_j^{(1)} + \sqrt{\bigl(c_j^{(1)}-c_j^{(2)}\bigr)^2 + 4c_j^{(3)}c_j^{(4)}}}{2c_j^{(3)}},\qquad g_j = \frac{c_j^{(2)}-c_j^{(1)} - \sqrt{\bigl(c_j^{(1)}-c_j^{(2)}\bigr)^2 + 4c_j^{(3)}c_j^{(4)}}}{2c_j^{(3)}}.$$
For details, see, for instance, [10]. Now consider the block lower-triangular Gauss–Seidel splitting preconditioned matrix
$$M_{BLTGS}(\alpha)^{-1}\mathcal{A} = \hat U(\alpha)\begin{pmatrix}I & \Lambda(\alpha)\\ 0 & I+\Lambda(\alpha)^2\end{pmatrix}\hat U(\alpha)^{-1}. \tag{3.24}$$
For (3.24), taking the matrix C in (3.23) as
$$C = \begin{pmatrix}I & \Lambda(\alpha)\\ 0 & I+\Lambda(\alpha)^2\end{pmatrix},$$
through direct computations we derive the eigenvectors corresponding to the eigenvalues
$$t_j = 1+\lambda_j^2,\qquad t_{m+j} = 1,\qquad j = 1, 2, \ldots, m,$$
of $M_{BLTGS}(\alpha)^{-1}\mathcal{A}$ as
$$x_j = \frac{1}{\sqrt{1+\lambda_j^2}}\begin{pmatrix}\phi_j\\ \lambda_j\phi_j\end{pmatrix},\qquad x_{m+j} = \begin{pmatrix}\phi_j\\ 0\end{pmatrix},\qquad j = 1, 2, \ldots, m.$$
(iii) Since
$$M_{BUTGS}(\alpha)^{-1}\mathcal{A} = \hat U(\alpha)\begin{pmatrix}0 & I\\ I & 0\end{pmatrix}\begin{pmatrix}I & -\Lambda(\alpha)\\ 0 & I+\Lambda(\alpha)^2\end{pmatrix}\begin{pmatrix}0 & I\\ I & 0\end{pmatrix}\hat U(\alpha)^{-1},$$
by using the decomposition (3.23) again and permuting the block rows and columns accordingly, we derive the conclusion immediately.

(iv) Because of
$$M_{SBTP}(\alpha)^{-1}\mathcal{A} = \hat U(\alpha)\begin{pmatrix}I & -\Lambda(\alpha)^3\\ 0 & I+\Lambda(\alpha)^2\end{pmatrix}\hat U(\alpha)^{-1}, \tag{3.25}$$
for (3.25), taking the matrix C in (3.23) as
$$C = \begin{pmatrix}I & -\Lambda(\alpha)^3\\ 0 & I+\Lambda(\alpha)^2\end{pmatrix},$$
we can immediately derive the conclusion. □
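The eigenvalue statement of Theorem 4.2 can also be checked numerically. A sketch with symmetric test matrices (hypothetical data, chosen so that the λ_j are real and the comparison by sorting is robust):

```python
import numpy as np

rng = np.random.default_rng(3)
m, alpha = 5, 1.3
B = rng.standard_normal((m, m)); W = B @ B.T + m * np.eye(m)
C = rng.standard_normal((m, m)); T = C @ C.T
I, Z = np.eye(m), np.zeros((m, m))
A = np.block([[W, -T], [T, W]])
D = alpha * W + T
S_fac = np.block([[alpha * I, -I], [I, alpha * I]]) / (alpha**2 + 1)
M_bltgs = S_fac @ np.block([[D, Z], [alpha * T - W, D]])

# Eigenvalues of X(alpha); real here because W is SPD and T is PSD.
lam = np.linalg.eigvals(np.linalg.solve(D, W - alpha * T)).real

expected = np.sort(np.concatenate([np.ones(m), 1 + lam**2]))
computed = np.sort(np.linalg.eigvals(np.linalg.solve(M_bltgs, A)).real)
print(np.allclose(expected, computed, atol=1e-7))   # Theorem 4.2 (ii) for BLTGS
```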
5. Numerical results

In this section, we present some numerical experiments to show the effectiveness of the proposed splitting-based block preconditioners BJ, BLTGS, BUTGS and SBTP when they are combined with the GMRES method to solve the block two-by-two linear system (1.1). We compare these computing results with those obtained by the PGMRES method when the preconditioners PMHSS (1.2), RBLT (1.3), RBUT (1.4) and RBTP (1.5) are used. In our implementations, the iteration processes of PGMRES are terminated once the current residuals satisfy
$$\frac{\|b - \mathcal{A}x^{(k)}\|_2}{\|b - \mathcal{A}x^{(0)}\|_2} \le 10^{-6}.$$
We adopt two examples, which are also used in [10], to carry out our experiments. The linear systems in both examples come from equivalent formulations of the complex linear system
$$(W + iT)x = h, \qquad \text{with } x = y + iz \ \text{and}\ h = f + ig.$$
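For orientation, a preconditioned GMRES run of the kind reported below can be sketched with SciPy; the instance here uses hypothetical small random matrices rather than the two discretized examples, and the preconditioner enters GMRES only through the action of M_BLTGS(α)⁻¹, wrapped as a LinearOperator:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(4)
m, alpha = 20, 1.0
B = rng.standard_normal((m, m)); W = B @ B.T + m * np.eye(m)
C = rng.standard_normal((m, m)); T = C @ C.T
A = np.block([[W, -T], [T, W]])
b = A @ np.ones(2 * m)
D = alpha * W + T

def bltgs_solve(r):
    # action of M_BLTGS(alpha)^{-1}: two solves with alpha*W + T
    ra, rb = r[:m], r[m:]
    ta = np.linalg.solve(D, alpha * ra + rb)
    tb = np.linalg.solve(D, (W - alpha * T) @ ta - ra + alpha * rb)
    return np.concatenate([ta, tb])

Mop = LinearOperator((2 * m, 2 * m), matvec=bltgs_solve)
x, info = gmres(A, b, M=Mop)
relres = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
print(info == 0 and relres < 1e-3)
```

In a production run D = αW + T would be factored once and reused, and sparse storage would be used throughout.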
These examples are described below.

Example 5.1 (The complex-shifted linear system [10,11]). Consider the parabolic partial differential equation
$$m(x,y,t)\frac{\partial v}{\partial t} - \Delta v = f(x,y,t),\qquad (x,y) \in \Omega := [0,1]\times[0,1],\quad t > 0, \tag{5.26}$$
with the homogeneous Dirichlet boundary condition, where Δ is the two-dimensional Laplace operator, m(x,y,t) is a uniformly positive function for (x,y) ∈ Ω and t ≥ 0, and $f(x,y,t) = f_0(x,y,t)e^{i\omega t}$ is a forcing function periodic in time. Using the ansatz $v(x,y,t) = u(x,y,t)e^{i\omega t}$, where u and v are complex-valued functions, we can reformulate (5.26) as
$$m(x,y,t)\frac{\partial u}{\partial t} - \Delta u + i\omega\, m(x,y,t)\,u = f_0(x,y,t), \tag{5.27}$$
Table 1
Iteration numbers and CPU times of PMHSS and rotated block triangular preconditioned GMRES, and of the proposed splitting-based block preconditioned GMRES, for Example 5.1. Entries are IT (CPU), with CPU in seconds.

| ω | Method | 625×625 | 900×900 | 1764×1764 | 1936×1936 | 2025×2025 |
|-----|-------|----------|----------|-----------|-----------|-----------|
| 1   | BJ    | 4 (0.19) | 3 (0.36) | 3 (1.28)  | 3 (1.50)  | 3 (1.74)  |
|     | PMHSS | 8 (0.28) | 8 (0.53) | 8 (1.75)  | 8 (2.12)  | 8 (2.40)  |
| 10² | BJ    | 12 (0.36)| 12 (0.68)| 11 (2.07) | 11 (2.41) | 11 (2.77) |
|     | PMHSS | 16 (0.39)| 16 (0.75)| 17 (2.64) | 17 (3.10) | 17 (3.64) |
| 10³ | BJ    | 15 (0.40)| 15 (0.76)| 15 (2.44) | 16 (2.98) | 16 (3.39) |
|     | PMHSS | 14 (0.39)| 14 (0.67)| 15 (2.48) | 15 (2.85) | 15 (3.35) |
| 10⁴ | BJ    | 11 (0.35)| 11 (0.56)| 13 (2.24) | 13 (2.63) | 13 (3.01) |
|     | PMHSS | 13 (0.36)| 13 (0.77)| 14 (2.38) | 14 (2.67) | 14 (3.24) |
| 1   | BLTGS | 2 (0.20) | 2 (0.32) | 2 (1.24)  | 2 (1.44)  | 2 (1.62)  |
|     | RBLT  | 5 (0.22) | 5 (0.44) | 5 (1.56)  | 5 (1.84)  | 5 (2.13)  |
| 10² | BLTGS | 7 (0.26) | 7 (0.51) | 6 (1.69)  | 6 (2.10)  | 6 (2.25)  |
|     | RBLT  | 9 (0.34) | 9 (0.59) | 10 (2.20) | 10 (2.51) | 10 (2.93) |
| 10³ | BLTGS | 9 (0.31) | 9 (0.61) | 9 (2.03)  | 9 (2.44)  | 9 (2.76)  |
|     | RBLT  | 8 (0.31) | 8 (0.57) | 9 (2.17)  | 9 (2.39)  | 9 (2.86)  |
| 10⁴ | BLTGS | 6 (0.27) | 6 (0.52) | 7 (1.84)  | 7 (2.15)  | 7 (2.49)  |
|     | RBLT  | 8 (0.34) | 8 (0.58) | 8 (1.93)  | 8 (2.28)  | 8 (2.68)  |
| 1   | BUTGS | 2 (0.19) | 2 (0.30) | 2 (1.12)  | 2 (1.38)  | 2 (1.56)  |
|     | RBUT  | 4 (0.21) | 4 (0.39) | 4 (1.38)  | 4 (1.61)  | 4 (1.88)  |
| 10² | BUTGS | 7 (0.27) | 6 (0.46) | 6 (1.61)  | 6 (1.83)  | 6 (2.13)  |
|     | RBUT  | 9 (0.31) | 9 (0.54) | 9 (1.85)  | 9 (2.18)  | 9 (2.61)  |
| 10³ | BUTGS | 8 (0.28) | 8 (0.52) | 8 (1.75)  | 8 (2.15)  | 8 (2.44)  |
|     | RBUT  | 8 (0.27) | 8 (0.51) | 8 (1.78)  | 8 (2.15)  | 8 (2.45)  |
| 10⁴ | BUTGS | 6 (0.23) | 6 (0.46) | 7 (1.69)  | 7 (1.99)  | 7 (2.28)  |
|     | RBUT  | 7 (0.26) | 8 (0.52) | 8 (1.81)  | 8 (2.20)  | 8 (2.51)  |
| 1   | SBTP  | 2 (0.20) | 2 (0.38) | 2 (1.31)  | 2 (1.64)  | 2 (1.84)  |
|     | RBTP  | 4 (0.24) | 4 (0.46) | 4 (1.64)  | 4 (2.02)  | 4 (2.27)  |
| 10² | SBTP  | 6 (0.31) | 6 (0.55) | 6 (1.99)  | 6 (2.27)  | 6 (2.68)  |
|     | RBTP  | 9 (0.39) | 9 (0.73) | 9 (2.34)  | 9 (2.80)  | 9 (3.26)  |
| 10³ | SBTP  | 8 (0.38) | 8 (0.68) | 8 (2.23)  | 8 (2.69)  | 8 (3.01)  |
|     | RBTP  | 8 (0.42) | 8 (0.68) | 8 (2.29)  | 8 (2.69)  | 8 (3.01)  |
| 10⁴ | SBTP  | 6 (0.30) | 6 (0.58) | 7 (2.07)  | 7 (2.48)  | 7 (2.84)  |
|     | RBTP  | 7 (0.34) | 8 (0.69) | 8 (2.25)  | 8 (2.68)  | 8 (3.01)  |
see [2]. Then, by discretizing (5.27) with a finite element method of bilinear elements on a uniform rectangular mesh, we obtain a complex linear system of the form
$$(K + i\omega\tau M)x = h,$$
where ω is a positive parameter, τ is the temporal discretization step-size, and $K = M + \tau L$, with $M \in \mathbb{R}^{m\times m}$ the mass matrix and $L \in \mathbb{R}^{m\times m}$ the discrete negative Laplace operator. In actual implementations, we choose
$$m(x,y,t) = 1 + \frac{1}{3}\bigl(x^2\sin(yt) + y^2\sin(xt)\bigr),$$
take the right-hand side vector h with its jth entry given by
$$[h]_j = \frac{(1-i)j}{\tau(j+1)^2},$$
and set the temporal step-size τ equal to the spatial step-size $h = \frac{1}{\sqrt{m}+1}$.
In Example 5.1, we take the matrices $W = K$ and $T = \omega\tau M$, and the vectors f, g with their jth entries given by
$$[f]_j = \frac{j}{\tau(j+1)^2},\qquad [g]_j = -[f]_j,$$
Table 2
Iteration numbers and CPU times of PMHSS and rotated block triangular preconditioned GMRES, and of the proposed splitting-based block preconditioned GMRES, for Example 5.2. Entries are IT (CPU), with CPU in seconds.

| t | Method | 625×625 | 900×900 | 1225×1225 | 1600×1600 | 2025×2025 |
|-----|-------|----------|-----------|-----------|-----------|-----------|
| 0.1 | BJ    | 7 (0.21) | 7 (0.40)  | 7 (0.78)  | 8 (1.30)  | 8 (2.36)  |
|     | PMHSS | 10 (0.22)| 10 (0.45) | 10 (0.79) | 11 (1.38) | 11 (2.34) |
| 1   | BJ    | 7 (0.22) | 7 (0.40)  | 8 (0.85)  | 8 (1.38)  | 8 (2.34)  |
|     | PMHSS | 11 (0.26)| 11 (0.47) | 11 (0.87) | 11 (1.43) | 11 (2.31) |
| 10  | BJ    | 7 (0.22) | 7 (0.42)  | 7 (0.75)  | 7 (1.39)  | 8 (2.34)  |
|     | PMHSS | 12 (0.25)| 12 (0.59) | 12 (0.87) | 13 (1.50) | 13 (2.41) |
| 100 | BJ    | 6 (0.21) | 6 (0.38)  | 6 (0.78)  | 6 (1.35)  | 6 (2.03)  |
|     | PMHSS | 7 (0.20) | 8 (0.41)  | 8 (0.75)  | 8 (1.29)  | 8 (2.09)  |
| 0.1 | BLTGS | 4 (0.20) | 4 (0.39)  | 4 (0.74)  | 4 (1.19)  | 4 (2.01)  |
|     | RBLT  | 6 (0.24) | 6 (0.44)  | 7 (0.90)  | 7 (1.45)  | 6 (2.23)  |
| 1   | BLTGS | 4 (0.21) | 4 (0.39)  | 4 (0.75)  | 5 (1.34)  | 5 (2.02)  |
|     | RBLT  | 7 (0.27) | 7 (0.51)  | 7 (0.93)  | 7 (1.44)  | 7 (2.33)  |
| 10  | BLTGS | 4 (0.21) | 4 (0.43)  | 4 (0.75)  | 4 (1.23)  | 4 (2.02)  |
|     | RBLT  | 7 (0.24) | 7 (0.49)  | 8 (0.96)  | 8 (1.51)  | 8 (2.47)  |
| 100 | BLTGS | 4 (0.19) | 4 (0.43)  | 4 (0.76)  | 4 (1.31)  | 4 (1.92)  |
|     | RBLT  | 5 (0.21) | 5 (0.43)  | 5 (0.81)  | 5 (1.31)  | 5 (2.05)  |
| 0.1 | BUTGS | 4 (0.19) | 4 (0.38)  | 4 (0.76)  | 4 (1.25)  | 4 (2.16)  |
|     | RBUT  | 6 (0.22) | 6 (0.42)  | 7 (0.80)  | 7 (1.36)  | 7 (2.15)  |
| 1   | BUTGS | 4 (0.18) | 4 (0.39)  | 4 (0.77)  | 5 (1.39)  | 5 (2.22)  |
|     | RBUT  | 7 (0.23) | 7 (0.48)  | 7 (0.83)  | 7 (1.52)  | 7 (2.43)  |
| 10  | BUTGS | 4 (0.19) | 4 (0.39)  | 4 (0.73)  | 4 (1.21)  | 4 (1.86)  |
|     | RBUT  | 7 (0.23) | 7 (0.40)  | 8 (0.85)  | 8 (1.45)  | 8 (2.22)  |
| 100 | BUTGS | 4 (0.20) | 4 (0.38)  | 4 (0.70)  | 4 (1.18)  | 4 (1.82)  |
|     | RBUT  | 5 (0.20) | 5 (0.38)  | 5 (0.71)  | 5 (1.21)  | 5 (1.92)  |
| 0.1 | SBTP  | 4 (0.22) | 4 (0.41)  | 4 (0.78)  | 4 (1.35)  | 4 (2.05)  |
|     | RBTP  | 5 (0.23) | 6 (0.51)  | 6 (0.84)  | 6 (1.50)  | 6 (2.27)  |
| 1   | SBTP  | 4 (0.22) | 4 (0.45)  | 4 (0.83)  | 4 (1.45)  | 4 (2.30)  |
|     | RBTP  | 6 (0.24) | 6 (0.47)  | 7 (1.04)  | 7 (1.57)  | 6 (2.33)  |
| 10  | SBTP  | 4 (0.22) | 4 (0.42)  | 4 (0.87)  | 4 (1.43)  | 4 (2.35)  |
|     | RBTP  | 7 (0.27) | 7 (0.53)  | 7 (1.02)  | 7 (1.72)  | 7 (2.68)  |
| 100 | SBTP  | 3 (0.20) | 3 (0.41)  | 3 (0.80)  | 3 (1.36)  | 4 (2.32)  |
|     | RBTP  | 5 (0.24) | 5 (0.47)  | 5 (0.87)  | 5 (1.52)  | 5 (2.47)  |
respectively. Then we obtain the linear system (1.1). Here W and T are real symmetric matrices, and as ωτ becomes larger, the skew-symmetric part of the coefficient matrix of the linear system (1.1) becomes more dominant. We show the advantages of the proposed splitting-based block preconditioned GMRES methods by comparing the numbers of iteration steps (IT) and the CPU times in seconds (CPU) with those obtained by the PMHSS and rotated block triangular preconditioned GMRES methods. As for the parameter α, numerical results in [14,10] showed that α = 1 in both the PMHSS preconditioned GMRES methods and the rotated triangular preconditioned GMRES methods is quite comparable to the optimal choice. Therefore, in our experiments, we choose α = 1 in the PMHSS and rotated block preconditioned GMRES methods. We determine the optimal α in the proposed splitting-based block preconditioned GMRES methods experimentally, by trial and error, i.e., the value which minimizes the iteration number and computing time is used in our experiments. In this example, we select α = 100 in all proposed splitting-based preconditioners for ω = 1, α = 2 for ω = 10², α = 1.1 for ω = 10³ and α = 0.7 for ω = 10⁴. How to choose the parameter α in the splitting-based block preconditioners theoretically is still a hard problem, which we leave for future work.

The computing results of Example 5.1 are shown in Table 1. We can see from this table that, except for ω = 10³, where the performance is almost the same as that obtained by the PMHSS and rotated triangular preconditioned GMRES methods, the proposed splitting-based preconditioners are more effective for the other values of ω. Given the high efficiency of the rotated triangular preconditioned GMRES methods, the improvements shown in Table 1 sufficiently indicate the superiority of the proposed preconditioners when Krylov subspace methods are applied to solve the linear system (1.1).
Example 5.2 (The Nonsymmetric BBC Problem [10,11]). The complex linear system is of the form (W + iT)z = h, with

W = I ⊗ L + L ⊗ I

and

T = 10(I ⊗ L_c + L_c ⊗ I) + 9(e_1 e_ℓ^T + e_ℓ e_1^T) ⊗ I,

where L = tridiag(−1 − h, 2, −1 + h) ∈ R^{ℓ×ℓ} is a tridiagonal matrix, h = t/(2(ℓ + 1)) with t a positive constant, L_c = L − e_1 e_ℓ^T − e_ℓ e_1^T, and e_1 and e_ℓ are the first and the last unit basis vectors in R^ℓ, respectively. The right-hand side vector h is taken to be h = (1 + i)(W + iT)e, with e ∈ R^m being the vector of all entries equal to 1 and m = ℓ².
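The Kronecker-product construction of W and T above can be assembled directly with SciPy; the following is a minimal sketch under one reading of the formulas (the function name `bbc_matrices` and the small size ℓ = 8 are illustrative choices, not from [10,11]).

```python
import numpy as np
import scipy.sparse as sp

def bbc_matrices(ell, t):
    """Assemble W and T of the nonsymmetric BBC problem for grid size
    ell and positive constant t (illustrative reading of the formulas)."""
    h = t / (2.0 * (ell + 1))
    I = sp.identity(ell)
    # L = tridiag(-1 - h, 2, -1 + h)
    L = sp.diags([-1.0 - h, 2.0, -1.0 + h], [-1, 0, 1], shape=(ell, ell))
    # L_c = L - e_1 e_ell^T - e_ell e_1^T  (corner modification)
    Lc = L.tolil()
    Lc[0, ell - 1] -= 1.0
    Lc[ell - 1, 0] -= 1.0
    # E = e_1 e_ell^T + e_ell e_1^T
    E = sp.lil_matrix((ell, ell))
    E[0, ell - 1] = 1.0
    E[ell - 1, 0] = 1.0
    W = sp.kron(I, L) + sp.kron(L, I)
    T = 10.0 * (sp.kron(I, Lc) + sp.kron(Lc, I)) + 9.0 * sp.kron(E, I)
    return W.tocsr(), T.tocsr()

ell = 8
W, T = bbc_matrices(ell, t=1.0)
m = ell * ell
e = np.ones(m)
# Right-hand side h = (1 + i)(W + iT)e of the complex system.
h_rhs = (1 + 1j) * ((W + 1j * T) @ e)
# The parameter h enters L asymmetrically, so W (and T) are nonsymmetric.
print("nonsymmetry of W:", abs(W - W.T).max())
```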
Example 5.2 is a nonsymmetric variant of the complex symmetric linear system constructed and implemented in [12]; see also [13,11]. For this example, by taking the vectors f and g to be
f = (W − T)e and g = (W + T)e,

respectively, we obtain the linear system (1.1). When h or t becomes larger, the coefficient matrix of the linear system (1.1) becomes more nonsymmetric. In the implementation of this example, α = 1 is chosen in both the PMHSS and rotated block triangular preconditioned GMRES methods. We set α = 0.4 in the proposed splitting-based block preconditioned GMRES methods for all experiments, having found that the computing results are best when the parameter α is around 0.4. Table 2 further shows the efficiency of the proposed preconditioners when Krylov subspace methods are applied to solve the linear system (1.1).

6. Conclusion

The block two-by-two linear system has attracted much attention, and many efficient solvers can be found in the literature. Among them, preconditioned Krylov subspace methods are the most important, and plenty of special preconditioners have been studied by researchers. In this paper, we have proposed a class of specially structured preconditioners that have the product form of a scaled orthogonal matrix and a block two-by-two diagonal or triangular matrix. These preconditioners are established from the block Jacobi and block Gauss–Seidel splittings of a new linear system that is equivalent to the original one. We have shown that the splitting iterations converge when the parameters are selected appropriately, and we have also characterized the spectral properties of the proposed preconditioned matrices. Both theoretical and numerical results indicate the superiority of the proposed splitting-based block preconditioners.

References
[1] O. Axelsson, Iterative Solution Methods, Cambridge University Press, Cambridge, 1994.
[2] O. Axelsson, A. Kucherov, Real valued iterative methods for solving complex symmetric linear systems, Numer. Linear Algebra Appl. 7 (2000) 197–218.
[3] O. Axelsson, P.S. Vassilevski, Algebraic multilevel preconditioning methods I, Numer. Math. 56 (1989) 157–177.
[4] O. Axelsson, P.S. Vassilevski, Algebraic multilevel preconditioning methods II, SIAM J. Numer. Anal. 27 (1990) 1569–1590.
[5] Z.-Z. Bai, Parallel iterative methods for large-scale systems of algebraic equations (Ph.D. Thesis), Shanghai University of Science and Technology, Shanghai, June 1993 (in Chinese).
[6] Z.-Z. Bai, A class of hybrid algebraic multilevel preconditioning methods, Appl. Numer. Math. 19 (1996) 389–399.
[7] Z.-Z. Bai, Parallel hybrid algebraic multilevel iterative methods, Linear Algebra Appl. 267 (1997) 281–315.
[8] Z.-Z. Bai, Structured preconditioners for nonsingular matrices of block two-by-two structures, Math. Comput. 75 (2006) 791–815.
[9] Z.-Z. Bai, Block preconditioners for elliptic PDE-constrained optimization problems, Computing 91 (2011) 379–395.
[10] Z.-Z. Bai, Rotated block triangular preconditioning based on PMHSS, Sci. China Math. 56 (2013) 2523–2538.
[11] Z.-Z. Bai, On preconditioned iteration methods for complex linear systems, J. Eng. Math., in press. http://dx.doi.org/10.1007/s10665-013-9670-5.
[12] Z.-Z. Bai, M. Benzi, F. Chen, Modified HSS iteration methods for a class of complex symmetric linear systems, Computing 87 (2010) 93–111.
[13] Z.-Z. Bai, M. Benzi, F. Chen, On preconditioned MHSS iteration methods for complex symmetric linear systems, Numer. Algorithms 56 (2011) 297–317.
[14] Z.-Z. Bai, M. Benzi, F. Chen, Z.-Q. Wang, Preconditioned MHSS iteration methods for a class of block two-by-two linear systems with applications to distributed control problems, IMA J. Numer. Anal. 33 (2013) 343–369.
[15] Z.-Z. Bai, F. Chen, Z.-Q. Wang, Additive block diagonal preconditioning for block two-by-two linear systems of skew-Hamiltonian coefficient matrices, Numer. Algorithms 62 (2013) 655–675.
[16] Z.-Z. Bai, G.H. Golub, C.-K. Li, Convergence properties of preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite matrices, Math. Comput. 76 (2007) 287–298.
[17] Z.-Z. Bai, G.H. Golub, M.K. Ng, Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. Appl. 24 (2003) 603–626.
[18] Z.-Z. Bai, G.H. Golub, M.K. Ng, On inexact Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, Linear Algebra Appl. 428 (2008) 413–440.
[19] Z.-Z. Bai, G.H. Golub, J.-Y. Pan, Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems, Numer. Math. 98 (2004) 1–32.
[20] Z.-Z. Bai, M.K. Ng, On inexact preconditioners for nonsymmetric matrices, SIAM J. Sci. Comput. 26 (2005) 1710–1724.
[21] Z.-Z. Bai, D.-R. Wang, A class of new hybrid algebraic multilevel preconditioning methods, Linear Algebra Appl. 260 (1997) 223–255.
[22] M. Benzi, G.H. Golub, A preconditioner for generalized saddle point problems, SIAM J. Matrix Anal. Appl. 26 (2004) 20–41.
[23] M. Benzi, G.H. Golub, J. Liesen, Numerical solution of saddle point problems, Acta Numer. 14 (2005) 1–137.
[24] J.H. Bramble, J.E. Pasciak, A.T. Vassilev, Analysis of the inexact Uzawa algorithm for saddle point problems, SIAM J. Numer. Anal. 34 (1997) 1072–1092.
[25] Z.-H. Cao, Positive stable block triangular preconditioners for symmetric saddle point problems, Appl. Numer. Math. 57 (2007) 899–910.
[26] Z.-H. Cao, Constraint Schur complement preconditioners for nonsymmetric saddle point problems, Appl. Numer. Math. 59 (2009) 151–169.
[27] I.S. Duff, N.I.M. Gould, J.K. Reid, J.A. Scott, K. Turner, The factorization of sparse symmetric indefinite matrices, IMA J. Numer. Anal. 11 (1991) 181–204.
[28] I.S. Duff, J.K. Reid, Exploiting zeros on the diagonal in the direct solution of indefinite sparse symmetric linear systems, ACM Trans. Math. Software 22 (1996) 227–257.
[29] N. Dyn, W.E. Ferguson, The numerical solution of equality constrained quadratic programming problems, Math. Comput. 41 (1983) 165–170.
[30] H.C. Elman, Preconditioners for saddle point problems arising in computational fluid dynamics, Appl. Numer. Math. 43 (2002) 75–89.
[31] H.C. Elman, G.H. Golub, Inexact and preconditioned Uzawa algorithms for saddle point problems, SIAM J. Numer. Anal. 31 (1994) 1645–1661.
[32] H.C. Elman, D.J. Silvester, A.J. Wathen, Performance and analysis of saddle point preconditioners for the discrete steady-state Navier–Stokes equations, Numer. Math. 90 (2002) 665–688.
[33] B. Fischer, A. Ramage, D.J. Silvester, A.J. Wathen, Minimum residual methods for augmented systems, BIT 38 (1998) 527–543.
[34] G.H. Golub, A.J. Wathen, An iteration for indefinite systems and its application to the Navier–Stokes equations, SIAM J. Sci. Comput. 19 (1998) 530–539.
[35] G.H. Golub, X. Wu, J.-Y. Yuan, SOR-like methods for augmented systems, BIT Numer. Math. 41 (2001) 71–85.
[36] I.C.F. Ipsen, A note on preconditioning nonsymmetric matrices, SIAM J. Sci. Comput. 23 (2001) 1050–1051.
[37] I.C.F. Ipsen, C.D. Meyer, The idea behind Krylov methods, Am. Math. Mon. 105 (1998) 889–899.
[38] C. Keller, N.I.M. Gould, A.J. Wathen, Constrained preconditioning for indefinite linear systems, SIAM J. Matrix Anal. Appl. 21 (2000) 1300–1317.
[39] A. Klawonn, Block-triangular preconditioners for saddle point problems with a penalty term, SIAM J. Sci. Comput. 19 (1998) 172–184.
[40] J.L. Lions, Optimal Control of Systems Governed by Partial Differential Equations, Springer-Verlag, Berlin, Heidelberg, 1968.
[41] V.L. Mehrmann, The Autonomous Linear Quadratic Control Problem, Springer-Verlag, Berlin, 1991.
[42] M.F. Murphy, G.H. Golub, A.J. Wathen, A note on preconditioning for indefinite linear systems, SIAM J. Sci. Comput. 21 (2000) 1969–1972.
[43] I. Perugia, V. Simoncini, Block-diagonal and indefinite symmetric preconditioners for mixed finite element formulations, Numer. Linear Algebra Appl. 7 (2000) 585–616.
[44] R.J. Plemmons, A parallel block iterative method applied to computations in structural analysis, SIAM J. Algebraic Discrete Methods 7 (1986) 337–347.
[45] T. Rees, H.S. Dollar, A.J. Wathen, Optimal solvers for PDE-constrained optimization, SIAM J. Sci. Comput. 32 (2010) 271–298.
[46] W. Zulehner, Analysis of iterative methods for saddle point problems: a unified approach, Math. Comput. 71 (2001) 479–505.