Applied Mathematics and Computation 219 (2013) 4225–4231
A generalization of parameterized inexact Uzawa method for singular saddle point problems

Guo-Feng Zhang, Shan-Shan Wang

School of Mathematics and Statistics, Lanzhou University, Lanzhou 730000, PR China
Keywords: Matrix splitting; Singular saddle point problems; Parameterized inexact Uzawa method; Semi-convergence
Abstract. In this paper we extend the parameterized inexact Uzawa method to singular saddle point problems, obtaining the generalized parameterized inexact Uzawa (GPIU) method. Theoretical analysis shows that the semi-convergence of this new method can be guaranteed by suitable choices of the iteration parameters. Numerical experiments demonstrate the feasibility and effectiveness of the GPIU method for solving singular saddle point problems. © 2012 Elsevier Inc. All rights reserved.
1. Introduction

We consider the iterative solution of the large, sparse, singular saddle point problem

\[
\mathcal{A}X := \begin{pmatrix} A & B \\ -B^{T} & 0 \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
= \begin{pmatrix} p \\ -q \end{pmatrix} =: b,
\tag{1.1}
\]

where A ∈ R^{m×m} is a symmetric positive definite matrix, B ∈ R^{m×n} is a rank-deficient matrix, and p ∈ R^m and q ∈ R^n are given vectors, with m ≥ n. The saddle point problem (1.1) arises in many application areas, such as computational fluid dynamics, mixed finite element approximation of elliptic partial differential equations, optimization, optimal control, weighted least-squares problems, electronic networks, computer graphics and others; see [2,4,11,13,15,16,22] and the references therein. Because the coefficient matrix of (1.1) is large and sparse, iterative methods are usually more attractive than direct methods. If A is symmetric positive definite and B has full column rank, many efficient iterative methods have been studied in the literature, for example the Uzawa method [7,12,14,19,24,25], the SOR-like method [6,18], the RPCG method [3,9,22], the HSS method [4,5,19] and so on; see [10] for a survey. Most often the matrix B has full column rank, but not always. If B is rank-deficient, we call the linear system (1.1) a singular saddle point problem. Solving the singular saddle point problem (1.1) effectively is important in both scientific computing and engineering applications. Many techniques have been proposed for rank-deficient saddle point problems, including the parameterized Uzawa method [24,20], the PHSS iteration method [1,8], the preconditioned minimum residual (PMINRES) method [17] and the preconditioned conjugate gradient (PCG) method [22]. In this paper, we generalize the parameterized inexact Uzawa methods to the singular saddle point problem (1.1), obtaining the generalized parameterized inexact Uzawa (GPIU) method. By choosing the involved parameters, we can recover many known iteration methods, as well as obtain new ones. The semi-convergence of these new methods is discussed in depth.
Numerical experiments are presented to demonstrate the feasibility and effectiveness of the GPIU method.
This work was supported by the National Natural Science Foundation of China (11271174).
Corresponding author: G.-F. Zhang ([email protected]).
doi: http://dx.doi.org/10.1016/j.amc.2012.10.116
The paper is organized as follows. In Section 2 we present the GPIU method for solving the singular saddle point problem (1.1). In Section 3 the semi-convergence of the GPIU method is discussed; in addition, we obtain six GPIU methods by giving six different choices of the parameter matrices. Finally, in Section 4, several numerical examples are given to show the efficiency of the GPIU method.

2. Generalized parameterized inexact Uzawa methods

For solving the singular saddle point problem (1.1), we make the matrix splitting

\[
\mathcal{A} := \begin{pmatrix} A & B \\ -B^{T} & 0 \end{pmatrix} = \mathcal{M} - \mathcal{N},
\]

where

\[
\mathcal{M} = \begin{pmatrix} P & 0 \\ -B^{T} + Q_{1} & Q_{2} \end{pmatrix},
\qquad
\mathcal{N} = \begin{pmatrix} P - A & -B \\ Q_{1} & Q_{2} \end{pmatrix},
\]

P ∈ R^{m×m} and Q_2 ∈ R^{n×n} are prescribed symmetric positive definite matrices, and Q_1 ∈ R^{n×m} is an arbitrary matrix. Then we present the following generalized parameterized inexact Uzawa (GPIU) iteration method for solving the singular saddle point problem (1.1):
\[
\begin{pmatrix} P & 0 \\ -B^{T} + Q_{1} & Q_{2} \end{pmatrix}
\begin{pmatrix} x^{(k+1)} \\ y^{(k+1)} \end{pmatrix}
=
\begin{pmatrix} P - A & -B \\ Q_{1} & Q_{2} \end{pmatrix}
\begin{pmatrix} x^{(k)} \\ y^{(k)} \end{pmatrix}
+
\begin{pmatrix} p \\ -q \end{pmatrix},
\tag{2.1}
\]
or equivalently,

\[
\begin{cases}
x^{(k+1)} = x^{(k)} + P^{-1}\bigl(p - A x^{(k)} - B y^{(k)}\bigr),\\
y^{(k+1)} = y^{(k)} + Q_{2}^{-1}\bigl(B^{T} x^{(k+1)} - q\bigr) - Q_{2}^{-1} Q_{1}\bigl(x^{(k+1)} - x^{(k)}\bigr).
\end{cases}
\tag{2.2}
\]
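For readers who wish to experiment, the two half-steps of (2.2) can be sketched in a few lines of NumPy. This is only a dense-matrix sketch under our own naming (`gpiu`, `kmax`, etc., are not from the paper); in practice the solves with P and Q_2 would themselves be cheap inexact inner iterations.

```python
import numpy as np

def gpiu(A, B, p, q, P, Q1, Q2, tol=1e-7, kmax=1000):
    """Sketch of the GPIU iteration (2.2).  P and Q2 are symmetric positive
    definite, Q1 is arbitrary; dense solves stand in for inexact inner solves."""
    m, n = B.shape
    x, y = np.zeros(m), np.zeros(n)
    denom = np.sqrt(np.linalg.norm(p)**2 + np.linalg.norm(q)**2)
    for k in range(kmax):
        r1 = p - A @ x - B @ y                 # residual of the first block row
        r2 = q - B.T @ x                       # residual of the second block row
        if np.sqrt(np.linalg.norm(r1)**2 + np.linalg.norm(r2)**2) / denom < tol:
            return x, y, k
        x_new = x + np.linalg.solve(P, r1)     # first half-step of (2.2)
        y = y + np.linalg.solve(Q2, B.T @ x_new - q) \
              - np.linalg.solve(Q2, Q1 @ (x_new - x))  # second half-step
        x = x_new
    return x, y, kmax
```

For a consistent right-hand side (p = A x_* + B y_*, q = B^T x_*), the iterates approach one of the infinitely many solutions, even though the y-component is determined only up to the null space of B.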
The iteration matrix of the GPIU iteration method (2.1) or (2.2) is given by

\[
\mathcal{T} = \begin{pmatrix} P & 0 \\ -B^{T} + Q_{1} & Q_{2} \end{pmatrix}^{-1}
\begin{pmatrix} P - A & -B \\ Q_{1} & Q_{2} \end{pmatrix}
= I - \mathcal{M}^{-1}\mathcal{A}.
\tag{2.3}
\]
Evidently, the above iteration method naturally induces a preconditioner M for the coefficient matrix A, which we call the GPIU preconditioner.

3. The semi-convergence of the GPIU method

When the matrix B is rank-deficient, the coefficient matrix A of the saddle point problem (1.1) is singular; hence, when (1.1) is consistent, it has infinitely many solutions, see [21]. In this section we discuss the semi-convergence of the GPIU method for solving the singular saddle point problem (1.1). We first recall some basic notation and concepts. Assume that a matrix A can be split into
\[
\mathcal{A} = \mathcal{M} - \mathcal{N},
\tag{3.1}
\]

with M nonsingular. Then we can construct the splitting iteration method

\[
X^{(k+1)} = \mathcal{T} X^{(k)} + c, \qquad k = 0, 1, 2, \ldots,
\tag{3.2}
\]

where T = M^{-1}N is called the iteration matrix and c = M^{-1}b. Obviously, a vector X is a solution of the linear system AX = b if and only if (I − T)X = c (see [2,8,21]). It is well known that the following three conditions, taken together, are necessary and sufficient for the semi-convergence of the iteration method (3.2) for the singular linear system AX = b:

(a) the spectral radius of the iteration matrix T equals one, i.e., ρ(T) = 1;
(b) the elementary divisors of T associated with its eigenvalue μ = 1 are linear, i.e., rank(I − T)² = rank(I − T), or equivalently index(I − T) = 1, where rank(·) and index(·) denote the rank and the index of the corresponding matrix, respectively;
(c) if μ ∈ σ(T), the spectrum of T, satisfies |μ| = 1, then μ = 1; i.e., ν(T) < 1, where

\[
\nu(\mathcal{T}) := \max\{|\mu| : \mu \in \sigma(\mathcal{T}),\ \mu \neq 1\}
\]

is called the convergence factor of the iteration scheme (3.2). As usual, we call the splitting (3.1) and its corresponding iteration matrix T semi-convergent if the iteration (3.2) is semi-convergent.
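The three conditions can be checked mechanically on a small example. The 2×2 splitting below is our own toy illustration, not taken from the paper:

```python
import numpy as np

# A singular 2x2 matrix split as A = M - N with M nonsingular (toy choice).
A = np.array([[1.0, -1.0], [-1.0, 1.0]])   # singular (rank 1)
M = np.array([[2.0, -1.0], [-1.0, 2.0]])
N = M - A                                   # here N happens to be the identity
T = np.linalg.solve(M, N)                   # iteration matrix T = M^{-1} N
I = np.eye(2)

eig = np.linalg.eigvals(T)
rho = max(abs(eig))                                   # (a) spectral radius
idx_ok = (np.linalg.matrix_rank(I - T) ==
          np.linalg.matrix_rank((I - T) @ (I - T)))   # (b) index(I - T) = 1
nu = max((abs(l) for l in eig if not np.isclose(l, 1.0)), default=0.0)  # (c)

assert np.isclose(rho, 1.0) and idx_ok and nu < 1.0   # semi-convergent splitting
```

Here T has eigenvalues 1 and 1/3, so ρ(T) = 1 is attained only at the simple eigenvalue 1 and ν(T) = 1/3 < 1.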
Lemma 3.1 (Young [23]). Consider the quadratic equation x² − δx + η = 0, where δ and η are real numbers. Both roots of the equation are less than one in modulus if and only if |η| < 1 and |δ| < 1 + η.

Theorem 3.2. Let P ∈ R^{m×m} and Q_2 ∈ R^{n×n} be symmetric positive definite, and let B ∈ R^{m×n} be column rank-deficient, with m ≥ n. Suppose that λ is an eigenvalue of the iteration matrix T and (u^T, v^T)^T ∈ R^{m+n} is a corresponding eigenvector. Then λ = 1 if and only if u = 0.

Proof. If λ = 1, then

\[
\mathcal{T}\begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} u \\ v \end{pmatrix}.
\]

Noticing that T = M^{-1}N, it follows that

\[
\mathcal{N}\begin{pmatrix} u \\ v \end{pmatrix} = \mathcal{M}\begin{pmatrix} u \\ v \end{pmatrix},
\]

or equivalently

\[
\begin{pmatrix} P - A & -B \\ Q_{1} & Q_{2} \end{pmatrix}\begin{pmatrix} u \\ v \end{pmatrix}
= \begin{pmatrix} P & 0 \\ -B^{T} + Q_{1} & Q_{2} \end{pmatrix}\begin{pmatrix} u \\ v \end{pmatrix}.
\]

Simplifying, we obtain

\[
A u + B v = 0, \qquad B^{T} u = 0.
\tag{3.3}
\]

From the first equation in (3.3) we have u = −A^{-1}Bv, which, together with the second equation in (3.3), yields B^T A^{-1} B v = 0. Because A, and hence A^{-1}, is positive definite, this gives (Bv)^T A^{-1}(Bv) = 0, so Bv = 0, and therefore u = −A^{-1}Bv = 0.

Conversely, if u = 0, then

\[
\mathcal{T}\begin{pmatrix} 0 \\ v \end{pmatrix} = \lambda \begin{pmatrix} 0 \\ v \end{pmatrix}.
\]

It follows that

\[
\begin{pmatrix} P - A & -B \\ Q_{1} & Q_{2} \end{pmatrix}\begin{pmatrix} 0 \\ v \end{pmatrix}
= \lambda \begin{pmatrix} P & 0 \\ -B^{T} + Q_{1} & Q_{2} \end{pmatrix}\begin{pmatrix} 0 \\ v \end{pmatrix},
\]

or equivalently,

\[
B v = 0, \qquad Q_{2} v = \lambda Q_{2} v.
\tag{3.4}
\]

Since v ≠ 0, we conclude that λ = 1. □
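Theorem 3.2 is easy to observe numerically: for a small GPIU iteration matrix T = M^{-1}N (built below with toy data of our own choosing, using Case I parameters P = (1/ω)A, Q_1 = 0, Q_2 = (1/t)I from Section 3), every eigenvector belonging to the eigenvalue 1 has vanishing u-part, and every other eigenvector has u ≠ 0:

```python
import numpy as np

m, n = 5, 3
A = 2.0 * np.eye(m)                       # symmetric positive definite
B = np.zeros((m, n))
B[0, 0] = B[1, 1] = 1.0
B[:, 2] = B[:, 0] + B[:, 1]               # rank(B) = 2 < n: rank-deficient
P = A / 0.8                               # P = (1/omega) A with omega = 0.8
Q1 = np.zeros((n, m))
Q2 = np.eye(n) / 0.5                      # Q2 = (1/t) I with t = 0.5

# M and N from the GPIU splitting of Section 2
M = np.block([[P, np.zeros((m, n))], [-B.T + Q1, Q2]])
N = np.block([[P - A, -B], [Q1, Q2]])
T = np.linalg.solve(M, N)

w, V = np.linalg.eig(T)
for lam, vec in zip(w, V.T):
    u = vec[:m]                           # u-part of the (unit) eigenvector
    if np.isclose(lam, 1.0):
        assert np.linalg.norm(u) < 1e-8   # lambda = 1   =>  u = 0
    else:
        assert np.linalg.norm(u) > 1e-8   # lambda != 1  =>  u != 0
```

In this example the eigenvalue 1 has multiplicity dim N(B) = 1, and its eigenvector is of the form (0; v) with Bv = 0, exactly as the proof of Theorem 3.2 predicts.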
Theorem 3.3. Let P ∈ R^{m×m} and Q_2 ∈ R^{n×n} be symmetric positive definite, and let B ∈ R^{m×n} be rank-deficient, with m ≥ n. Suppose that λ ≠ 1 is an eigenvalue of the iteration matrix T and (u^T, v^T)^T ∈ R^{m+n} is a corresponding eigenvector. Then λ satisfies the quadratic equation

\[
\lambda^{2} + \frac{b + c - 2a - s}{a}\,\lambda + \frac{a + s - b}{a} = 0,
\tag{3.5}
\]

where

\[
a := \frac{u^{*} P u}{u^{*} u} > 0, \qquad
b := \frac{u^{*} A u}{u^{*} u} > 0, \qquad
c := \frac{u^{*} B Q_{2}^{-1} B^{T} u}{u^{*} u} \ge 0, \qquad
s := \frac{u^{*} B Q_{2}^{-1} Q_{1} u}{u^{*} u}.
\]

Proof. From Theorem 3.2 we know that if λ ≠ 1, then u ≠ 0. By (2.3) we get

\[
\begin{pmatrix} P - A & -B \\ Q_{1} & Q_{2} \end{pmatrix}\begin{pmatrix} u \\ v \end{pmatrix}
= \lambda \begin{pmatrix} P & 0 \\ -B^{T} + Q_{1} & Q_{2} \end{pmatrix}\begin{pmatrix} u \\ v \end{pmatrix},
\]

or equivalently,

\[
[(1 - \lambda)P - A]\,u = Bv, \qquad
[(1 - \lambda)Q_{1} + \lambda B^{T}]\,u = (\lambda - 1)Q_{2} v.
\tag{3.6}
\]

Because Q_2 is symmetric positive definite and λ ≠ 1, the second equation gives

\[
v = \Bigl(-Q_{2}^{-1}Q_{1} + \frac{\lambda}{\lambda - 1}\,Q_{2}^{-1}B^{T}\Bigr)u.
\]

Substituting v into the first equation of (3.6) and multiplying through by (λ − 1), we obtain

\[
\lambda^{2} P u + \lambda\bigl(A u + B Q_{2}^{-1} B^{T} u - 2 P u - B Q_{2}^{-1} Q_{1} u\bigr)
+ \bigl(P u + B Q_{2}^{-1} Q_{1} u - A u\bigr) = 0.
\tag{3.7}
\]

Since u ≠ 0, left-multiplying (3.7) by u^{*} leads to

\[
\lambda^{2}\, u^{*} P u + \lambda\, u^{*}\bigl(A + B Q_{2}^{-1} B^{T} - 2P - B Q_{2}^{-1} Q_{1}\bigr)u
+ u^{*}\bigl(P + B Q_{2}^{-1} Q_{1} - A\bigr)u = 0.
\tag{3.8}
\]

As P is symmetric positive definite, u^{*}Pu ≠ 0. Dividing both sides of (3.8) by u^{*}Pu and using the notations a, b, c and s, (3.8) becomes

\[
\lambda^{2} + \frac{b + c - 2a - s}{a}\,\lambda + \frac{a + s - b}{a} = 0.
\tag{3.9}
\]
□
Theorem 3.4. Assume that A ∈ R^{m×m} is symmetric positive definite, B ∈ R^{m×n} is rank-deficient, P ∈ R^{m×m} and Q_2 ∈ R^{n×n} are symmetric positive definite, and Q_1 ∈ R^{n×m} is an arbitrary matrix such that BQ_2^{-1}Q_1 is symmetric. Then ν(T) < 1 holds if and only if the following conditions hold:

\[
s < b \quad\text{and}\quad 0 < \frac{c}{2} < 2a + s - b.
\]

Proof. Since P is symmetric positive definite and BQ_2^{-1}Q_1 is symmetric, the quantities a, b, c and s are real. By Lemma 3.1 and Eq. (3.9), both roots of (3.9) are less than one in modulus if and only if

\[
\left|\frac{a + s - b}{a}\right| < 1
\quad\text{and}\quad
\left|\frac{b + c - 2a - s}{a}\right| < 1 + \frac{a + s - b}{a},
\]

or equivalently,

\[
s < b \quad\text{and}\quad 0 < \frac{c}{2} < 2a + s - b. \qquad\square
\]
Theorem 3.5. Let A ∈ R^{m×m} be symmetric positive definite and B ∈ R^{m×n} be rank-deficient. Suppose that P ∈ R^{m×m} and Q_2 ∈ R^{n×n} are symmetric positive definite, that Q_1 ∈ R^{n×m} is an arbitrary matrix such that BQ_2^{-1}Q_1 is symmetric, and that N(B) ∩ R(Q_2^{-1}B^T A^{-1}B) = {0}. Then index(I − T) = 1. Here and in the sequel, N(·) and R(·) denote the null space and the range of the corresponding matrix, respectively.

Proof. We argue by contradiction. Suppose that rank(I − T)² < rank(I − T). Then there exists a nonzero vector α := (α_1^T, α_2^T)^T ∈ C^{m+n} such that (I − T)α ≠ 0 and (I − T)²α = 0. Since T = M^{-1}N and A = M − N, we have I − T = M^{-1}A. Let z := M^{-1}Aα = (u_1^T, u_2^T)^T ≠ 0. Then M^{-1}Az = 0, so 0 ≠ z ∈ N(M^{-1}A) ∩ R(M^{-1}A). Since z ∈ N(M^{-1}A), we see that z ∈ N(A), that is,

\[
\begin{pmatrix} A & B \\ -B^{T} & 0 \end{pmatrix}\begin{pmatrix} u_{1} \\ u_{2} \end{pmatrix} = 0,
\]

or equivalently,

\[
A u_{1} + B u_{2} = 0, \qquad B^{T} u_{1} = 0.
\tag{3.10}
\]

From the first equation in (3.10) we have u_1 = −A^{-1}Bu_2; substituting this into the second equality in (3.10) gives B^T A^{-1} B u_2 = 0. Because A, and hence A^{-1}, is positive definite, we get Bu_2 = 0. Therefore u_1 = 0 and 0 ≠ u_2 ∈ N(B). From

\[
\mathcal{M}^{-1}\mathcal{A}\alpha = \begin{pmatrix} 0 \\ u_{2} \end{pmatrix}
\]

we have

\[
\mathcal{A}\alpha = \mathcal{M}\begin{pmatrix} 0 \\ u_{2} \end{pmatrix},
\]

which can be rewritten as

\[
A\alpha_{1} + B\alpha_{2} = 0, \qquad -B^{T}\alpha_{1} = Q_{2} u_{2}.
\]

Hence α_1 = −A^{-1}Bα_2 and u_2 = Q_2^{-1}B^T A^{-1}B\,α_2, so

\[
u_{2} \in R\bigl(Q_{2}^{-1} B^{T} A^{-1} B\bigr).
\]

Then

\[
0 \neq u_{2} \in N(B) \cap R\bigl(Q_{2}^{-1} B^{T} A^{-1} B\bigr),
\]

which contradicts N(B) ∩ R(Q_2^{-1}B^T A^{-1}B) = {0}. Therefore index(I − T) = 1. This completes the proof. □
From Theorems 3.4 and 3.5, we immediately obtain the following semi-convergence result.

Theorem 3.6. Assume that the conditions of Theorem 3.5 are satisfied. Then the GPIU method (2.2) is semi-convergent to a solution of the singular saddle point problem (1.1) if and only if the following conditions hold:

\[
s < b \quad\text{and}\quad 0 < \frac{c}{2} < 2a + s - b.
\]

By choosing different parameter matrices P, Q_1 and Q_2, we can easily obtain a series of iterative methods for solving the singular saddle point problem (1.1). Some choices of the parameter matrices P, Q_1 and Q_2 are given in Table 3.1. When we choose P = (1/ω)A, Q_1 = 0 and Q_2 = (1/t)Q, where Q is an approximation to the Schur complement B^T A^{-1} B, the GPIU method reduces to the PIU method presented in [6] and studied in [24].
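As a sanity check of Theorem 3.6, the quantities ρ(T), ν(T) and index(I − T) can be verified numerically for, e.g., Case I of Table 3.1. The small test problem below is our own toy data, not from the paper:

```python
import numpy as np

m, n = 5, 3
A = 2.0 * np.eye(m)                                  # symmetric positive definite
B = np.zeros((m, n))
B[0, 0] = B[1, 1] = 1.0
B[:, 2] = B[:, 0] + B[:, 1]                          # rank(B) = 2 < n
omega, t = 0.8, 0.5                                  # Case I: P = (1/omega)A, Q1 = 0, Q2 = (1/t)I
P, Q1, Q2 = A / omega, np.zeros((n, m)), np.eye(n) / t

M = np.block([[P, np.zeros((m, n))], [-B.T + Q1, Q2]])
N = np.block([[P - A, -B], [Q1, Q2]])
T = np.linalg.solve(M, N)                            # GPIU iteration matrix
I = np.eye(m + n)

eig = np.linalg.eigvals(T)
nu = max(abs(l) for l in eig if not np.isclose(l, 1.0))   # convergence factor
assert np.isclose(max(abs(eig)), 1.0)                # (a) rho(T) = 1
assert nu < 1.0                                      # (c) nu(T) < 1
assert np.linalg.matrix_rank(I - T) == \
       np.linalg.matrix_rank((I - T) @ (I - T))      # (b) index(I - T) = 1
```

All three semi-convergence conditions of Section 3 hold, so this GPIU variant is semi-convergent on the toy problem.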
4. Numerical examples

In this section, we illustrate the feasibility and effectiveness of the GPIU iterative methods by some numerical examples. We list the number of iterations (denoted by "IT") and the norm of the relative residual vectors (denoted by "RES"). Here, "RES" is defined as

\[
\mathrm{RES} := \frac{\sqrt{\|p - A x^{(k)} - B y^{(k)}\|^{2} + \|q - B^{T} x^{(k)}\|^{2}}}{\sqrt{\|p\|^{2} + \|q\|^{2}}},
\]

with ((x^{(k)})^T, (y^{(k)})^T)^T the current approximate solution. All computations are implemented in MATLAB on a personal computer with machine precision 10^{-16}, and are terminated once the current iterate satisfies RES < 10^{-7} or the prescribed maximum number of iterations k_max = 1000 is exceeded. In the following numerical experiments, we choose the matrices P, Q_1 and Q_2 according to the six cases listed in Table 3.1.

Example 4.1. Consider the singular saddle point problem (1.1) with matrix blocks

\[
A = \begin{pmatrix} I \otimes T_{1} + T_{1} \otimes I & 0 \\ 0 & I \otimes T_{1} + T_{1} \otimes I \end{pmatrix} \in \mathbb{R}^{2l^{2} \times 2l^{2}}
\quad\text{and}\quad
B = \bigl(\widehat{B},\ b_{1},\ b_{2}\bigr) \in \mathbb{R}^{2l^{2} \times (l^{2}+2)},
\]

where

\[
T_{1} = \frac{1}{h^{2}}\,\mathrm{tridiag}(-1, 2, -1) \in \mathbb{R}^{l \times l}, \qquad
\widehat{B} = \begin{pmatrix} I \otimes F \\ F \otimes I \end{pmatrix} \in \mathbb{R}^{2l^{2} \times l^{2}},
\]

\[
b_{1} = \widehat{B}\begin{pmatrix} e \\ 0 \end{pmatrix}, \qquad
b_{2} = \widehat{B}\begin{pmatrix} 0 \\ e \end{pmatrix}, \qquad
e = (1, 1, \ldots, 1)^{T} \in \mathbb{R}^{l^{2}/2},
\]
Table 3.1 Some choices of the parameter matrices P, Q_1 and Q_2.

| Case | P | Q_1 | Q_2 |
|------|---|-----|-----|
| I | (1/ω)A | 0 | (1/t)I_n |
| II | (1/ω)diag(A) | 0 | (1/t)I_n |
| III | (1/ω)tridiag(A) | 0 | (1/t)I_n |
| IV | (1/ω)A | (s/t)B^T | (1/t)I_n |
| V | (1/ω)A | (s/t)B^T | (1/t)diag(B^T P^{-1} B) |
| VI | (1/ω)A | s Q_2 B^T | (1/t)tridiag(B^T P^{-1} B) |
Table 4.1 Numerical results of Example 4.1 (RES in units of 10^{-7}).

| Case | (ω, t, s) | IT (p=8) | RES | CPU | IT (p=16) | RES | CPU | IT (p=24) | RES | CPU |
|------|-----------|----------|-----|-----|-----------|-----|-----|-----------|-----|-----|
| I | (0.8, 0, 100) | 101 | 6.54 | 0.0931 | 87 | 5.83 | 0.4849 | 80 | 7.69 | 2.1176 |
| I | (1.6, 0, 1000) | 71 | 5.89 | 0.0791 | 64 | 8.65 | 0.3659 | 60 | 7.13 | 1.6570 |
| II | (0.8, 0, 100) | 118 | 7.23 | 0.0708 | 106 | 5.83 | 0.1127 | 100 | 9.37 | 0.3769 |
| II | (1.6, 0, 1000) | 79 | 4.59 | 0.0516 | 74 | 2.66 | 0.0890 | 71 | 6.18 | 0.3610 |
| III | (0.8, 0, 100) | 105 | 6.56 | 0.0707 | 95 | 7.74 | 0.1092 | 91 | 8.57 | 0.5565 |
| III | (1.6, 0, 1000) | 79 | 4.89 | 0.0577 | 73 | 4.64 | 0.1070 | 70 | 7.73 | 0.3171 |
| IV | (0.8, 0.8, 100) | 101 | 4.78 | 0.0986 | 87 | 5.78 | 0.4926 | 80 | 7.48 | 2.1137 |
| IV | (1.6, 1, 1000) | 71 | 9.42 | 0.0750 | 64 | 7.47 | 0.3775 | 60 | 6.45 | 1.6530 |
| V | (0.2, 0.4, 0.8) | 50 | 6.37 | 0.0651 | 56 | 7.64 | 0.2531 | 55 | 5.58 | 1.1486 |
| V | (1.6, 1, 1000) | 101 | 4.56 | 0.0938 | 101 | 7.47 | 0.4926 | 100 | 6.16 | 2.1024 |
| VI | (0.8, 0.8, 100) | 150 | 8.55 | 0.1143 | 149 | 6.67 | 0.6850 | 145 | 5.38 | 3.1711 |
| VI | (1.6, 1, 1000) | 101 | 2.45 | 0.0151 | 101 | 7.73 | 0.4957 | 100 | 8.63 | 2.1106 |
| PIU (or PPIU Case I) | τ = 0.25, ω = 0.90 | 184 | 9.25 | 0.6203 | 487 | 9.34 | 21.2042 | 849 | 9.58 | 187.4520 |
| PPIU Case II | τ = 0.25, ω = 0.90 | 81 | 9.54 | 0.3178 | 238 | 9.84 | 10.7409 | 435 | 9.32 | 95.1224 |
| MINRES | — | 72 | 6.37 | 0.2538 | 147 | 5.89 | 3.1896 | 214 | 7.75 | 8.5876 |
with

\[
F = \frac{1}{h}\,\mathrm{tridiag}(-1, 1, 0) \in \mathbb{R}^{l \times l}
\]

being a tridiagonal matrix, where ⊗ denotes the Kronecker product and h = 1/(p+1) is the discretization meshsize. This problem is a technical modification of Example 5.1 in [6]. Here, the matrix block B is an augmentation of the full-rank matrix B̂ with the linearly independent vectors b_1 and b_2, which are linear combinations of the columns of B̂; thus B is a rank-deficient matrix. We set m = 2l² and n = l² + 2. In our computations, we choose the right-hand-side vector (p^T, q^T)^T ∈ R^{m+n} such that the exact solution of the linear system (1.1) is ((x_*)^T, (y_*)^T)^T = (1, 1, …, 1)^T ∈ R^{m+n}. In this example, the GPIU methods (Cases I–VI) are compared with the MINRES method, the parameterized inexact Uzawa (PIU) method presented in [6,24] and the preconditioned PIU (PPIU) method presented in [20]. With respect to different sizes of the coefficient matrix, we list the number of iteration steps (IT) and the norm of the residual (RES) for the GPIU, PIU, PPIU and MINRES methods in Table 4.1. From Table 4.1 we see that in most cases the IT and CPU time of the GPIU method are smaller than those of the PIU, PPIU and MINRES methods; moreover, in most cases the GPIU method is more effective than the MINRES method, and MINRES is in turn more effective than the PIU and PPIU methods. To further compare the numerical behavior of the GPIU method, we next consider an incompressible steady-state Stokes problem. In the next example we compare the GPIU method only with the MINRES method: the coefficient matrix is generated by IFISS [15], which makes applying the PPIU method cumbersome, and Example 4.1 already shows that MINRES is more effective than the PIU and PPIU methods.
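For reference, the blocks of Example 4.1 can be assembled with Kronecker products along the lines below. This is a dense NumPy sketch under our own naming (`example41_blocks` is not from the paper); l must be even so that e ∈ R^{l²/2} makes sense:

```python
import numpy as np

def example41_blocks(l):
    """Assemble the blocks A and B of Example 4.1 (dense sketch; l even)."""
    h = 1.0 / (l + 1)                                        # meshsize
    I = np.eye(l)
    T1 = (2*np.eye(l) - np.eye(l, k=1) - np.eye(l, k=-1)) / h**2  # tridiag(-1,2,-1)/h^2
    F = (np.eye(l) - np.eye(l, k=-1)) / h                    # tridiag(-1,1,0)/h
    K = np.kron(I, T1) + np.kron(T1, I)
    A = np.block([[K, np.zeros((l*l, l*l))],
                  [np.zeros((l*l, l*l)), K]])                # block-diagonal, SPD
    Bhat = np.vstack([np.kron(I, F), np.kron(F, I)])         # full column rank
    e = np.ones(l*l // 2)
    z = np.zeros(l*l // 2)
    b1 = Bhat @ np.concatenate([e, z])                       # b1 = Bhat (e; 0)
    b2 = Bhat @ np.concatenate([z, e])                       # b2 = Bhat (0; e)
    B = np.hstack([Bhat, b1[:, None], b2[:, None]])          # rank-deficient
    return A, B
```

Since b_1 and b_2 lie in the range of B̂, the assembled B has rank l² although it has l² + 2 columns, reproducing the rank deficiency described above.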
Example 4.2. Consider the incompressible steady-state Stokes problem

\[
\begin{cases}
-\nu \Delta u + \mathrm{grad}\, p = f & \text{in } \Omega,\\
\mathrm{div}\, u = 0 & \text{in } \Omega,
\end{cases}
\tag{4.1}
\]

with suitable boundary conditions on ∂Ω. Here we generate the test problem (the leaky lid-driven cavity) using the IFISS software written by David Silvester, Howard Elman and Alison Ramage. The mixed finite element used is the bilinear-constant velocity-pressure Q1–P0 pair with stabilization; the stabilization parameter is chosen to be 0.25. The (1,1) block A of the coefficient matrix, corresponding to the discretization of the conservative term, is symmetric positive definite, and we set C = 0. Since the matrix B produced by the software is rank-deficient, A is singular. We assume that r = rank(B) and B = (B̂; B̃) with B̂ ∈ C^{r×n} and B̃ ∈ C^{(m−r)×n}. In our experiment, we choose uniform grids 8×8, 16×16 and 32×32.

Table 4.2 gives the numerical results of Example 4.2 when we apply the GPIU method and MINRES, respectively. From Table 4.2 we can see that the GPIU method is much more effective than MINRES with a proper choice of ω, t and s for solving the singular saddle point problem (1.1). However, the parameters in Tables 4.1 and 4.2 were not tuned, so they may not be optimal. How to choose the optimal parameters giving the fastest convergence rate is an important open problem, which will be the subject of our future work.
Table 4.2 Numerical results of Example 4.2 (RES in units of 10^{-5}).

| Case | (ω, t, s) | IT (8×8) | RES | CPU | IT (16×16) | RES | CPU | IT (32×32) | RES | CPU |
|------|-----------|----------|-----|-----|------------|-----|-----|------------|-----|-----|
| I | (0.6, 0, 10) | 34 | 9.25 | 0.0302 | 51 | 9.96 | 0.2447 | 67 | 8.48 | 5.3681 |
| I | (0.8, 0, 100) | 554 | 6.24 | 0.2711 | 15 | 9.29 | 0.0729 | 19 | 9.31 | 1.5778 |
| II | (0.8, 0, 100) | 395 | 6.24 | 0.1086 | 988 | 2.98 | 1.40 | 344 | 8.05 | 19.3740 |
| II | (1.6, 0, 1000) | 145 | 5.69 | 0.0519 | 198 | 6.65 | 0.3253 | 312 | 7.13 | 6.4912 |
| III | (0.6, 0, 10) | 295 | 9.91 | 0.0860 | 234 | 9.72 | 1.6799 | 567 | 6.88 | 21.2518 |
| III | (1.6, 0, 1000) | 151 | 5.69 | 0.0547 | 209 | 6.65 | 0.3570 | 346 | 7.13 | 7.4563 |
| IV | (0.6, 0.6, 10) | 34 | 9.19 | 0.0275 | 49 | 9.16 | 0.2444 | 65 | 9.46 | 5.0894 |
| IV | (0.8, 0.8, 100) | 548 | 6.25 | 0.2455 | 15 | 8.97 | 0.0752 | 18 | 9.85 | 1.4351 |
| V | (0.2, 0.4, 0.8) | 56 | 8.47 | 0.0219 | 49 | 8.07 | 0.2010 | 50 | 7.51 | 3.1323 |
| V | (1.6, 1, 1000) | 95 | 6.02 | 0.0423 | 94 | 5.49 | 0.3979 | 93 | 6.42 | 5.8651 |
| VI | (0.2, 0.4, 0.8) | 56 | 8.91 | 0.0378 | 49 | 9.65 | 0.5005 | 50 | 7.71 | 6.8167 |
| VI | (1.6, 1, 1000) | 94 | 9.05 | 0.0723 | 94 | 8.49 | 0.9590 | 93 | 7.12 | 12.9547 |
| MINRES | — | 348 | 5.67 | 0.2365 | 468 | 8.65 | 9.7428 | 784 | 7.54 | 46.3496 |
Acknowledgment

The authors are very much indebted to the referees for their constructive suggestions and helpful comments, which have improved the quality of the paper.

References

[1] Z.-Z. Bai, On semi-convergence of Hermitian and skew-Hermitian splitting methods for singular linear systems, Computing 89 (2010) 171–197.
[2] Z.-Z. Bai, Structured preconditioners for nonsingular matrices of block two-by-two structures, Math. Comput. 75 (2006) 791–815.
[3] Z.-Z. Bai, G.-Q. Li, Restrictively preconditioned conjugate gradient methods for systems of linear equations, IMA J. Numer. Anal. 23 (2003) 561–580.
[4] Z.-Z. Bai, G.H. Golub, Accelerated Hermitian and skew-Hermitian splitting iteration methods for saddle-point problems, IMA J. Numer. Anal. 27 (2007) 1–23.
[5] Z.-Z. Bai, G.H. Golub, J.-Y. Pan, Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems, Numer. Math. 98 (2004) 1–32.
[6] Z.-Z. Bai, B.N. Parlett, Z.-Q. Wang, On generalized successive overrelaxation methods for augmented linear systems, Numer. Math. 102 (2005) 1–38.
[7] Z.-Z. Bai, Z.-Q. Wang, On parameterized inexact Uzawa methods for generalized saddle point problems, Linear Algebra Appl. 428 (2008) 2900–2932.
[8] Z.-Z. Bai, L. Wang, J.-Y. Yuan, Weak convergence theory of quasi-nonnegative splittings for singular matrices, Appl. Numer. Math. 47 (2003) 75–89.
[9] Z.-Z. Bai, Z.-Q. Wang, Restrictive preconditioners for conjugate gradient methods for symmetric positive definite linear systems, J. Comput. Appl. Math. 187 (2006) 202–226.
[10] M. Benzi, G.H. Golub, J. Liesen, Numerical solution of saddle point problems, Acta Numer. 14 (2005) 1–137.
[11] J.T. Betts, Practical Methods for Optimal Control Using Nonlinear Programming, SIAM, Philadelphia, PA, 2001.
[12] J.H. Bramble, J.E. Pasciak, A.T. Vassilev, Analysis of the inexact Uzawa algorithm for saddle point problems, SIAM J. Numer. Anal. 34 (1997) 1072–1092.
[13] F. Brezzi, M. Fortin, Mixed and Hybrid Finite Element Methods, Springer-Verlag, New York and London, 1991.
[14] H.C. Elman, G.H. Golub, Inexact and preconditioned Uzawa algorithms for saddle point problems, SIAM J. Numer. Anal. 31 (1994) 1645–1661.
[15] H.C. Elman, A. Ramage, D.J. Silvester, Algorithm 866: IFISS, a MATLAB toolbox for modelling incompressible flow, ACM Trans. Math. Softw. 33 (2007) 1–18.
[16] H.C. Elman, D.J. Silvester, A.J. Wathen, Performance and analysis of saddle point preconditioners for the discrete steady-state Navier–Stokes equations, Numer. Math. 90 (2002) 665–688.
[17] B. Fischer, A. Ramage, D.J. Silvester, A.J. Wathen, Minimum residual methods for augmented systems, BIT Numer. Math. 38 (1998) 527–543.
[18] G.H. Golub, X. Wu, J.-Y. Yuan, SOR-like methods for augmented systems, BIT Numer. Math. 41 (2001) 71–85.
[19] Z.-H. Huang, T.-Z. Huang, Spectral properties of the preconditioned AHSS iteration method for generalized saddle point problems, Comput. Appl. Math. 29 (2010) 269–295.
[20] H.-F. Ma, N.-M. Zhang, A note on block-diagonally preconditioned PIU methods for singular saddle point problems, Int. J. Comput. Math. 88 (2011) 808–817.
[21] C.D. Meyer Jr., R.J. Plemmons, Convergent powers of a matrix with applications to iterative methods for singular linear systems, SIAM J. Numer. Anal. 14 (1977) 699–705.
[22] X. Wu, B.P.B. Silva, J.-Y. Yuan, Conjugate gradient method for rank deficient saddle point problems, Numer. Algor. 35 (2004) 139–154.
[23] D.M. Young, Iterative Solution of Large Linear Systems, Academic Press, New York, 1971.
[24] B. Zheng, Z.-Z. Bai, X. Yang, On semi-convergence of parameterized Uzawa methods for singular saddle point problems, Linear Algebra Appl. 431 (2009) 808–817.
[25] Y.-Y. Zhou, G.-F. Zhang, A generalization of parameterized inexact Uzawa method for generalized saddle point problems, Appl. Math. Comput. 215 (2009) 599–607.