Applied Mathematics and Computation 221 (2013) 89–101
On block-diagonally preconditioned accelerated parameterized inexact Uzawa method for singular saddle point problems

Zhao-Zheng Liang, Guo-Feng Zhang
School of Mathematics and Statistics, Lanzhou University, Lanzhou 730000, PR China

Keywords: Singular saddle point problems; Semi-convergence; Parameterized inexact Uzawa method; Preconditioning; Matrix splitting

Abstract. Recently, [Z.-Z. Bai, Z.-Q. Wang, On parameterized inexact Uzawa methods for generalized saddle point problems, Linear Algebra Appl. 428 (2008) 2900–2932] studied a class of parameterized inexact Uzawa (PIU) methods and proposed a generalized and modified accelerated parameterized inexact Uzawa (APIU) iteration method for solving nonsingular saddle point problems. In this paper, we further generalize this method to obtain the block-diagonally preconditioned accelerated parameterized inexact Uzawa (BDP–APIU) method for solving singular saddle point problems. Theoretical analysis shows that the semi-convergence of this new method can be guaranteed. In addition, the quasi-optimal parameters of the new method are discussed. A numerical example is given to show the feasibility and effectiveness of the new method for solving singular saddle point problems.
© 2013 Elsevier Inc. All rights reserved.
1. Introduction

We consider the iterative solution of the large, sparse and singular saddle point problem
$$\begin{pmatrix} A & B \\ -B^{T} & 0 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} b \\ -q \end{pmatrix}, \qquad (1.1)$$
where $A \in \mathbb{R}^{m\times m}$ is a symmetric positive definite matrix, $B \in \mathbb{R}^{m\times n}$ has $\operatorname{rank}(B) = r < n$ with $m > n$, and $b \in \mathbb{R}^{m}$, $q \in \mathbb{R}^{n}$ are given vectors. We denote by $A^{T}$ and $A^{*}$ the transpose and the conjugate transpose of $A$, respectively. The range and null spaces of $A$ are denoted by $\mathcal{R}(A)$ and $\mathcal{N}(A)$. In addition, $I_{n}$ is the identity matrix of order $n$.

The linear system (1.1) typically arises in scientific and engineering applications, e.g., discrete approximations of certain partial differential equations [2,15,18], constrained weighted least-squares estimation [12], interior point methods in constrained optimization [10], and so on; see [9] for a more detailed discussion of saddle point problems. When the coefficient matrix of (1.1) is nonsingular, i.e., when $B$ has full column rank, a large amount of work has been devoted to solving the linear system, including Uzawa-type methods [8,14,17,20], matrix splitting methods [3–8] and preconditioned Krylov subspace iteration methods [13,22]; see [2,9] and the references therein for more details. Although $B$ most often has full rank, this is not always the case. If $B$ is rank-deficient, the linear system (1.1) is a singular saddle point problem. There is also a large number of numerical methods for solving such singular linear systems, for example PMINRES [19], PCG [24], Uzawa-type methods [16,26,27] and HSS-like methods [1,23]. Recently, Bai and Wang [8] studied the parameterized inexact Uzawa (PIU) method and further extended it to the accelerated parameterized inexact Uzawa (APIU) iteration method for nonsingular saddle point problems:
This work was supported by the National Natural Science Foundation (11271174).
Corresponding author: G.-F. Zhang ([email protected]).
$$\begin{cases} x^{(k+1)} = x^{(k)} + \omega P^{-1}\bigl(b - Ax^{(k)} - By^{(k)}\bigr), \\ y^{(k+1)} = y^{(k)} + \tau Q^{-1}\bigl(B^{T}x^{(k)} - q\bigr) + \gamma Q^{-1}B^{T}\bigl(x^{(k+1)} - x^{(k)}\bigr), \end{cases} \qquad (1.2)$$
where $\omega$, $\tau$ and $\gamma$ with $\omega, \tau \neq 0$ are given relaxation factors, and $P \in \mathbb{R}^{m\times m}$ and $Q \in \mathbb{R}^{n\times n}$ are given nonsingular matrices. In [20], Gao and Kong generalized the PIU method and presented the block-diagonally preconditioned PIU (PPIU) method:
$$\begin{cases} x^{(k+1)} = x^{(k)} + \omega P^{-1}\bigl(P_{1}b - P_{1}Ax^{(k)} - P_{1}By^{(k)}\bigr), \\ y^{(k+1)} = y^{(k)} + \tau Q^{-1}\bigl(P_{2}B^{T}x^{(k+1)} - P_{2}q\bigr), \end{cases} \qquad (1.3)$$
where $\omega, \tau \neq 0$ are given relaxation factors, and $P, P_{1} \in \mathbb{R}^{m\times m}$ and $Q, P_{2} \in \mathbb{R}^{n\times n}$, with $P_{1}$ and $P_{2}$ positive definite, are given nonsingular matrices.

In this paper, we further generalize the APIU and PPIU methods to the block-diagonally preconditioned accelerated parameterized inexact Uzawa (BDP–APIU) method for the singular saddle point problem (1.1). The corresponding semi-convergence properties under certain conditions are studied. In addition, the quasi-optimal parameters of the new method are discussed. Numerical examples are given to show the feasibility and effectiveness of the new method for solving singular saddle point problems.

The outline of the paper is as follows. After describing the block-diagonally preconditioned accelerated parameterized inexact Uzawa method for singular saddle point problems in Section 2, we analyze the semi-convergence of this method in Section 3; the quasi-optimal parameters of the BDP–APIU method for singular saddle point problems are also discussed there. In Section 4, numerical examples are given to show the efficiency of the new algorithm. Finally, in Section 5, we give some brief concluding remarks.

2. Block-diagonally preconditioned accelerated parameterized inexact Uzawa iterative method

The singular saddle point problem (1.1) can be rewritten in the following equivalent form
$$\mathcal{A}X \equiv \begin{pmatrix} A & B \\ -B^{T} & 0 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} b \\ -q \end{pmatrix} \equiv c. \qquad (2.1)$$
Let
$$\mathcal{P}_{1} := \begin{pmatrix} P_{1} & 0 \\ 0 & P_{2} \end{pmatrix},$$
where $P_{1} \in \mathbb{R}^{m\times m}$ and $P_{2} \in \mathbb{R}^{n\times n}$ are positive definite matrices. Left preconditioning the augmented linear system (2.1) by $\mathcal{P}_{1}$, we get
$$\begin{pmatrix} P_{1} & 0 \\ 0 & P_{2} \end{pmatrix}\begin{pmatrix} A & B \\ -B^{T} & 0 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} P_{1} & 0 \\ 0 & P_{2} \end{pmatrix}\begin{pmatrix} b \\ -q \end{pmatrix},$$
that is,
$$\begin{pmatrix} P_{1}A & P_{1}B \\ -P_{2}B^{T} & 0 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} P_{1}b \\ -P_{2}q \end{pmatrix}. \qquad (2.2)$$
For the coefficient matrix of the linear system (2.2), we consider the following matrix splitting:
$$\mathcal{A}_{1} \equiv \begin{pmatrix} P_{1}A & P_{1}B \\ -P_{2}B^{T} & 0 \end{pmatrix} = \mathcal{D} - \mathcal{L} - \mathcal{U},$$
where
$$\mathcal{D} = \begin{pmatrix} P & 0 \\ 0 & Q \end{pmatrix}, \qquad \mathcal{L} = \begin{pmatrix} 0 & 0 \\ P_{2}B^{T} & 0 \end{pmatrix}, \qquad \mathcal{U} = \begin{pmatrix} P - P_{1}A & -P_{1}B \\ 0 & Q \end{pmatrix},$$
and $P \in \mathbb{R}^{m\times m}$ and $Q \in \mathbb{R}^{n\times n}$, being good approximations to $P_{1}A$ and $(P_{2}B^{T})P^{-1}(P_{1}B)$, respectively, are prescribed symmetric positive definite matrices. Let
$$\Omega = \begin{pmatrix} \omega I_{m} & 0 \\ 0 & \tau I_{n} \end{pmatrix}, \qquad \Delta = \begin{pmatrix} I_{m} & 0 \\ 0 & \gamma I_{n} \end{pmatrix},$$
where $\omega$, $\tau$ and $\gamma$ with $\omega, \tau \neq 0$ are given real numbers. Based on the above splitting, the new relaxed iteration method for the singular saddle point problem (2.1) takes the following form; we call it the block-diagonally preconditioned accelerated parameterized inexact Uzawa (BDP–APIU) method:
$$\begin{pmatrix} x^{(k+1)} \\ y^{(k+1)} \end{pmatrix} = (\mathcal{D} - \Delta\mathcal{L})^{-1}\bigl[(I - \Omega)\mathcal{D} + (\Omega - \Delta)\mathcal{L} + \Omega\,\mathcal{U}\bigr]\begin{pmatrix} x^{(k)} \\ y^{(k)} \end{pmatrix} + (\mathcal{D} - \Delta\mathcal{L})^{-1}\Omega\begin{pmatrix} P_{1}b \\ -P_{2}q \end{pmatrix},$$
or equivalently,
$$\begin{pmatrix} x^{(k+1)} \\ y^{(k+1)} \end{pmatrix} = \mathcal{T}(\omega,\tau,\gamma)\begin{pmatrix} x^{(k)} \\ y^{(k)} \end{pmatrix} + \mathcal{M}(\omega,\tau,\gamma)^{-1}\begin{pmatrix} P_{1}b \\ -P_{2}q \end{pmatrix},$$
where
$$\mathcal{T}(\omega,\tau,\gamma) \equiv (\mathcal{D} - \Delta\mathcal{L})^{-1}\bigl[(I - \Omega)\mathcal{D} + (\Omega - \Delta)\mathcal{L} + \Omega\,\mathcal{U}\bigr] = \begin{pmatrix} I_{m} - \omega P^{-1}(P_{1}A) & -\omega P^{-1}(P_{1}B) \\ Q^{-1}(P_{2}B^{T})\bigl(\tau I_{m} - \omega\gamma P^{-1}(P_{1}A)\bigr) & I_{n} - \omega\gamma Q^{-1}(P_{2}B^{T})P^{-1}(P_{1}B) \end{pmatrix}$$
is the iteration matrix of the BDP–APIU method. Let
$$\mathcal{M}(\omega,\tau,\gamma) \equiv \Omega^{-1}(\mathcal{D} - \Delta\mathcal{L}) = \begin{pmatrix} \frac{1}{\omega}P & 0 \\ -\frac{\gamma}{\tau}P_{2}B^{T} & \frac{1}{\tau}Q \end{pmatrix}$$
and
$$\mathcal{N}(\omega,\tau,\gamma) \equiv \mathcal{M}(\omega,\tau,\gamma) - \mathcal{A}_{1} = \begin{pmatrix} \frac{1}{\omega}P - P_{1}A & -P_{1}B \\ \left(1 - \frac{\gamma}{\tau}\right)P_{2}B^{T} & \frac{1}{\tau}Q \end{pmatrix}.$$
Then it is obvious that
$$\mathcal{T}(\omega,\tau,\gamma) = \mathcal{M}(\omega,\tau,\gamma)^{-1}\mathcal{N}(\omega,\tau,\gamma). \qquad (2.3)$$
The BDP–APIU method. Let $P_{1} \in \mathbb{R}^{m\times m}$ and $P_{2} \in \mathbb{R}^{n\times n}$ be positive definite matrices, and let $P \in \mathbb{R}^{m\times m}$ and $Q \in \mathbb{R}^{n\times n}$ be symmetric positive definite matrices. Given initial vectors $x^{(0)} \in \mathbb{C}^{m}$ and $y^{(0)} \in \mathbb{C}^{n}$ and the relaxation parameters $\omega$, $\tau$ and $\gamma$ with $\omega, \tau \neq 0$, for $k = 0, 1, 2, \ldots$ until the iteration sequence $\{(x^{(k)T}, y^{(k)T})^{T}\}$ converges, compute
$$\begin{cases} x^{(k+1)} = x^{(k)} + \omega P^{-1}\bigl(P_{1}b - P_{1}Ax^{(k)} - P_{1}By^{(k)}\bigr), \\ y^{(k+1)} = y^{(k)} + \tau Q^{-1}\bigl(P_{2}B^{T}x^{(k)} - P_{2}q\bigr) + \gamma Q^{-1}P_{2}B^{T}\bigl(x^{(k+1)} - x^{(k)}\bigr). \end{cases} \qquad (2.4)$$
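For concreteness, the following is a minimal NumPy sketch of iteration (2.4) with dense matrices and direct solves with $P$ and $Q$; the function name, argument order and stopping test (the relative residual RES of Section 4 with tolerance $10^{-6}$) are illustrative choices, not part of the method itself.

```python
import numpy as np

def bdp_apiu(A, B, b, q, P1, P2, P, Q, omega, tau, gamma,
             tol=1e-6, maxit=1000):
    """One possible realization of the BDP-APIU iteration (2.4).

    A (m x m), B (m x n) define the saddle point problem (1.1);
    P1, P2 are the block-diagonal preconditioner blocks and
    P, Q the symmetric positive definite splitting blocks.
    """
    m, n = B.shape
    x, y = np.zeros(m), np.zeros(n)          # zero initial guess, as in Section 4
    rhs_norm = np.sqrt(np.linalg.norm(b)**2 + np.linalg.norm(q)**2)
    for k in range(maxit):
        # x-update: x + omega * P^{-1} (P1 b - P1 A x - P1 B y)
        x_new = x + omega * np.linalg.solve(P, P1 @ (b - A @ x - B @ y))
        # y-update: y + tau * Q^{-1}(P2 B^T x - P2 q) + gamma * Q^{-1} P2 B^T (x_new - x)
        y = y + np.linalg.solve(Q, tau * (P2 @ (B.T @ x - q))
                                   + gamma * (P2 @ (B.T @ (x_new - x))))
        x = x_new
        # relative residual as defined in Section 4
        res = np.sqrt(np.linalg.norm(b - A @ x - B @ y)**2
                      + np.linalg.norm(q - B.T @ x)**2) / rhs_norm
        if res < tol:
            break
    return x, y, k + 1, res
```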
It is obvious that when $P_{1} = I_{m}$ and $P_{2} = I_{n}$ the BDP–APIU method reduces to the APIU method [8], and when $\gamma = \tau$ it reduces to the PPIU method [20].

3. Convergence analysis for the BDP–APIU method

In this section, we discuss the semi-convergence of the BDP–APIU method for solving the singular saddle point problem (2.1). We first recall some basic concepts and notation. Denote by $\sigma(A)$ and $\rho(A)$ the spectrum and the spectral radius of a matrix $A$, respectively; the rank and the index of $A$ are denoted by $\operatorname{rank}(A)$ and $\operatorname{index}(A)$. Assume that the matrix $\mathcal{A}$ can be split as $\mathcal{A} = \mathcal{M} - \mathcal{N}$ with $\mathcal{M}$ nonsingular. Then we can construct the splitting iteration method
$$x^{(k+1)} = \mathcal{T}x^{(k)} + \mathcal{M}^{-1}c, \qquad k = 0, 1, 2, \ldots, \qquad (3.1)$$
where $\mathcal{T} = \mathcal{M}^{-1}\mathcal{N}$ is the iteration matrix. It is well known that the iteration method (3.1) is semi-convergent for the singular linear system $\mathcal{A}x = c$ if and only if the following three conditions hold (see [1,11,26,27]):

(a) the spectral radius of the iteration matrix $\mathcal{T}$ is equal to one, i.e., $\rho(\mathcal{T}) = 1$;
(b) the elementary divisors associated with the eigenvalue $\lambda = 1$ of $\mathcal{T}$ are linear, i.e., $\operatorname{rank}(I - \mathcal{T})^{2} = \operatorname{rank}(I - \mathcal{T})$, or equivalently, $\operatorname{index}(I - \mathcal{T}) = 1$;
(c) if $\lambda \in \sigma(\mathcal{T})$ with $|\lambda| = 1$, then $\lambda = 1$, i.e.,
$$\vartheta(\mathcal{T}) \equiv \max\{|\lambda| : \lambda \in \sigma(\mathcal{T}),\ \lambda \neq 1\} < 1.$$
When the iteration scheme (3.1) is semi-convergent, $\vartheta(\mathcal{T})$ is called the semi-convergence factor. As usual, the splitting $\mathcal{A} = \mathcal{M} - \mathcal{N}$ and the corresponding iteration matrix $\mathcal{T}$ are called semi-convergent if the iteration (3.1) is semi-convergent.

Next we study the semi-convergence of the BDP–APIU iteration method (2.4). To obtain the semi-convergence conditions, the following lemmas are used.

Lemma 3.1 [21]. Both roots of the complex quadratic equation $\lambda^{2} + b\lambda + c = 0$ are less than one in modulus if and only if $|c| < 1$ and $|b - \bar{b}c| + |c|^{2} < 1$.
Lemma 3.2 [25]. Both roots of the real quadratic equation $\lambda^{2} + b\lambda + c = 0$ are less than one in modulus if and only if $|c| < 1$ and $|b| < 1 + c$.

Theorem 3.1. Let $P_{1} \in \mathbb{R}^{m\times m}$ and $P_{2} \in \mathbb{R}^{n\times n}$ be positive definite matrices, let $A, P \in \mathbb{R}^{m\times m}$ and $Q \in \mathbb{R}^{n\times n}$ be symmetric positive definite matrices, and let $\operatorname{rank}(B) < m$. If $\lambda$ is an eigenvalue of the iteration matrix $\mathcal{T}(\omega,\tau,\gamma)$ defined by (2.3) and $(u^{*}, v^{*})^{*}$ is a corresponding eigenvector with $u \in \mathbb{C}^{m}$ and $v \in \mathbb{C}^{n}$, then $\lambda = 1$ if and only if $u = 0$.

Proof. If $\lambda = 1$, then
$$\mathcal{T}(\omega,\tau,\gamma)\begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} u \\ v \end{pmatrix}.$$
Noticing that $\mathcal{T}(\omega,\tau,\gamma) = \mathcal{M}(\omega,\tau,\gamma)^{-1}\mathcal{N}(\omega,\tau,\gamma)$, we have
$$\mathcal{N}(\omega,\tau,\gamma)\begin{pmatrix} u \\ v \end{pmatrix} = \mathcal{M}(\omega,\tau,\gamma)\begin{pmatrix} u \\ v \end{pmatrix},$$
or equivalently,
$$\begin{pmatrix} P - \omega P_{1}A & -\omega P_{1}B \\ (\tau - \gamma)P_{2}B^{T} & Q \end{pmatrix}\begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} P & 0 \\ -\gamma P_{2}B^{T} & Q \end{pmatrix}\begin{pmatrix} u \\ v \end{pmatrix}.$$
By simplification, we then obtain
$$\omega P_{1}(Au + Bv) = 0, \qquad \tau P_{2}B^{T}u = 0.$$
Since $\omega, \tau \neq 0$ and $P_{1} \in \mathbb{R}^{m\times m}$, $P_{2} \in \mathbb{R}^{n\times n}$ are positive definite matrices, it follows that
$$Au + Bv = 0, \qquad B^{T}u = 0. \qquad (3.2)$$
From the first equation in (3.2), we have $u = -A^{-1}Bv$, which, together with the second equation in (3.2), gives $B^{T}A^{-1}Bv = 0$. Because $A$ is positive definite, we get $Bv = 0$, and hence $u = 0$.

Conversely, if $u = 0$, then
$$\mathcal{T}(\omega,\tau,\gamma)\begin{pmatrix} 0 \\ v \end{pmatrix} = \lambda\begin{pmatrix} 0 \\ v \end{pmatrix},$$
that is,
$$\begin{pmatrix} P - \omega P_{1}A & -\omega P_{1}B \\ (\tau - \gamma)P_{2}B^{T} & Q \end{pmatrix}\begin{pmatrix} 0 \\ v \end{pmatrix} = \lambda\begin{pmatrix} P & 0 \\ -\gamma P_{2}B^{T} & Q \end{pmatrix}\begin{pmatrix} 0 \\ v \end{pmatrix},$$
or equivalently,
$$\omega P_{1}Bv = 0, \qquad Qv = \lambda Qv. \qquad (3.3)$$
Since $v \neq 0$, it follows that $\lambda = 1$. $\square$
Theorem 3.2. Let $P_{1} \in \mathbb{R}^{m\times m}$ and $P_{2} \in \mathbb{R}^{n\times n}$ be positive definite matrices, let $A, P \in \mathbb{R}^{m\times m}$ and $Q \in \mathbb{R}^{n\times n}$ be symmetric positive definite matrices, and let $\operatorname{rank}(B) < m$. If $\lambda \neq 1$ is an eigenvalue of the iteration matrix $\mathcal{T}(\omega,\tau,\gamma)$ and $(u^{*}, v^{*})^{*} \in \mathbb{C}^{m+n}$ is a corresponding eigenvector, then $\lambda$ satisfies the quadratic equation
$$\lambda^{2} + \bigl(a\omega - 2 + b\omega\gamma\bigr)\lambda + \bigl(1 - a\omega + b\omega(\tau - \gamma)\bigr) = 0, \qquad (3.4)$$
where
$$a := \frac{u^{*}Au}{u^{*}P_{1}^{-1}Pu}, \qquad b := \frac{u^{*}BQ^{-1}(P_{2}B^{T})u}{u^{*}P_{1}^{-1}Pu}.$$
Proof. Firstly, since $\lambda \neq 1$, we know from Theorem 3.1 that $u \neq 0$. By (2.3) we have
$$\begin{pmatrix} P - \omega P_{1}A & -\omega P_{1}B \\ (\tau - \gamma)P_{2}B^{T} & Q \end{pmatrix}\begin{pmatrix} u \\ v \end{pmatrix} = \lambda\begin{pmatrix} P & 0 \\ -\gamma P_{2}B^{T} & Q \end{pmatrix}\begin{pmatrix} u \\ v \end{pmatrix},$$
or equivalently,
$$\begin{aligned} &(P - \omega P_{1}A)u - \omega P_{1}Bv = \lambda Pu, \\ &(\tau - \gamma)P_{2}B^{T}u + Qv = \lambda\bigl(-\gamma P_{2}B^{T}u + Qv\bigr). \end{aligned} \qquad (3.5)$$
From the second equation in (3.5), we obtain $v = \frac{\tau + (\lambda - 1)\gamma}{\lambda - 1}Q^{-1}P_{2}B^{T}u$, which, together with the first equation in (3.5), results in
$$\lambda^{2}P_{1}^{-1}Pu + \lambda\bigl(\omega\gamma BQ^{-1}P_{2}B^{T} + \omega A - 2P_{1}^{-1}P\bigr)u + \bigl(P_{1}^{-1}P - \omega A + \omega(\tau - \gamma)BQ^{-1}P_{2}B^{T}\bigr)u = 0.$$
Since $u \neq 0$, left multiplying by $u^{*}$ and using the positive definiteness of $P_{1}$ and $P$, we have
$$\lambda^{2} + \left(\omega\frac{u^{*}Au}{u^{*}P_{1}^{-1}Pu} - 2 + \omega\gamma\frac{u^{*}BQ^{-1}P_{2}B^{T}u}{u^{*}P_{1}^{-1}Pu}\right)\lambda + \left(1 - \omega\frac{u^{*}Au}{u^{*}P_{1}^{-1}Pu} + \omega(\tau - \gamma)\frac{u^{*}BQ^{-1}P_{2}B^{T}u}{u^{*}P_{1}^{-1}Pu}\right) = 0.$$
Thus, the proof is completed. $\square$

Furthermore, from the quadratic equation (3.4), we can easily obtain the following properties of the eigenvalues of the BDP–APIU iteration matrix $\mathcal{T}(\omega,\tau,\gamma)$.

Corollary 3.1. Let $P_{1} \in \mathbb{R}^{m\times m}$ and $P_{2} \in \mathbb{R}^{n\times n}$ be positive definite, let $A, P \in \mathbb{R}^{m\times m}$ and $Q \in \mathbb{R}^{n\times n}$ be symmetric positive definite, and let $\operatorname{rank}(B) < m$. Assume that $\lambda \neq 1$ is an eigenvalue of the iteration matrix $\mathcal{T}(\omega,\tau,\gamma)$ and $(u^{*}, v^{*})^{*} \in \mathbb{C}^{m+n}$ is the corresponding eigenvector. Then it holds that
$$\lambda = \frac{1}{2}\left[2 - \omega(a + b\gamma) \pm \sqrt{\bigl(2 - \omega(a + b\gamma)\bigr)^{2} - 4\bigl(1 - a\omega + b\omega(\tau - \gamma)\bigr)}\right]. \qquad (3.6)$$

Remark. We easily see that if $b = 0$ then $\lambda = 1 - a\omega$, and if $\tau = \gamma$ then
$$\lambda = \frac{1}{2}\left[2 - \omega(a + b\tau) \pm \sqrt{\bigl(2 - \omega(a + b\tau)\bigr)^{2} - 4(1 - a\omega)}\right]. \qquad (3.7)$$
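As a sanity check on (3.4) and (3.6), one can assemble $\mathcal{T}(\omega,\tau,\gamma)$ for a small instance and verify that every eigenvalue $\lambda \neq 1$ satisfies the quadratic with the scalars $a$ and $b$ computed from the $u$-block of its eigenvector. The following sketch does this with randomly generated test data (all matrix choices below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, omega, tau, gamma = 8, 5, 0.9, 0.6, 0.4

def spd(k):                                   # random symmetric positive definite matrix
    M = rng.standard_normal((k, k))
    return M @ M.T + k * np.eye(k)

A, P, P1 = spd(m), spd(m), spd(m)
Q, P2 = spd(n), spd(n)
B = rng.standard_normal((m, n))
B[:, -1] = B[:, 0]                            # make B rank-deficient

# M and N blocks of the splitting (2.3)
Mmat = np.block([[P / omega, np.zeros((m, n))],
                 [-(gamma / tau) * P2 @ B.T, Q / tau]])
Nmat = np.block([[P / omega - P1 @ A, -P1 @ B],
                 [(1 - gamma / tau) * P2 @ B.T, Q / tau]])
T = np.linalg.solve(Mmat, Nmat)

lam, V = np.linalg.eig(T)
for l, w in zip(lam, V.T):
    if abs(l - 1.0) < 1e-8:                   # skip the eigenvalue 1 (u = 0 by Theorem 3.1)
        continue
    u = w[:m]
    d = u.conj() @ np.linalg.solve(P1, P) @ u
    a = (u.conj() @ A @ u) / d
    b = (u.conj() @ B @ np.linalg.solve(Q, P2 @ B.T @ u)) / d
    # residual of the quadratic (3.4); should vanish up to round-off
    print(abs(l**2 + (a*omega - 2 + b*omega*gamma)*l
              + (1 - a*omega + b*omega*(tau - gamma))))
```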
We define
$$\Phi := \left\{ a \;\middle|\; a := a_{R} + \mathrm{i}a_{I} = \frac{u^{*}Au}{u^{*}P_{1}^{-1}Pu},\ 0 \neq u \in \mathbb{C}^{m} \right\}$$
and
$$\Psi := \left\{ b \;\middle|\; b := b_{R} + \mathrm{i}b_{I} = \frac{u^{*}BQ^{-1}P_{2}B^{T}u}{u^{*}P_{1}^{-1}Pu},\ 0 \neq u \in \mathbb{C}^{m} \right\},$$
where $a_{R}, a_{I}$ and $b_{R}, b_{I}$ denote the real and imaginary parts of $a$ and $b$, respectively. When $P_{1}^{-1}P$ is symmetric positive definite and $BQ^{-1}P_{2}B^{T}$ is symmetric positive semi-definite, we denote
$$a_{\max} := \max\{a \mid a \in \Phi\}, \quad a_{\min} := \min\{a \mid a \in \Phi\}, \quad b_{\max} := \max\{b \mid b \in \Psi\}, \quad b_{\min} := \min\{b \mid b \in \Psi\}.$$
Theorem 3.3. Let $P_{1} \in \mathbb{R}^{m\times m}$ and $P_{2} \in \mathbb{R}^{n\times n}$ be positive definite, let $A, P \in \mathbb{R}^{m\times m}$ and $Q \in \mathbb{R}^{n\times n}$ be symmetric positive definite, and let $\operatorname{rank}(B) < m$. Then $\vartheta(\mathcal{T}(\omega,\tau,\gamma)) < 1$ if one of the following conditions holds:

(1) $\omega$, $\tau$ and $\gamma$ satisfy
$$0 < \omega < \min_{a \in \Phi,\, b \in \Psi}\left\{ \frac{2\bigl[a_{R} + b_{I}(\tau - \gamma)\bigr]}{|a|^{2} + |b|^{2}(\tau - \gamma)^{2} + 2(\tau - \gamma)(a_{R}b_{R} + a_{I}b_{I})} \right\},$$
$$\max_{a \in \Phi,\, b \in \Psi}\bigl\{(\tau - \gamma)b_{R} - a_{R}\bigr\} < 0 \quad \text{and} \quad \max_{a \in \Phi,\, b \in \Psi} \tau\bigl(h_{2}\omega^{2} + h_{1}\omega + h_{0}\bigr) < 0, \qquad (3.8)$$
where
$$\begin{cases} h_{0} = 4b_{I}^{2}\tau^{2}, \\ h_{1} = 4\bigl[b_{R}(\gamma - \tau)^{2} + a_{R}\gamma\bigr]|b|^{2} - 4\bigl[b_{R}(\gamma - 2\tau) + a_{R}\bigr](a_{R}b_{R} + a_{I}b_{I}), \\ h_{2} = \bigl[2(a_{R}b_{R} + a_{I}b_{I}) + (2\gamma - \tau)|b|^{2}\bigr]\bigl[(\gamma - \tau)^{2}|b|^{2} + 2(a_{R}b_{R} + a_{I}b_{I})(\gamma - \tau) + |a|^{2}\bigr]. \end{cases}$$
(2) In addition, if $P_{1}^{-1}P$ is symmetric positive definite and $BQ^{-1}P_{2}B^{T}$ is symmetric positive semi-definite, then $\omega$, $\tau$ and $\gamma$ should satisfy
$$0 < \omega < \frac{2}{a_{\max}}, \qquad 0 < \tau < \frac{4}{\omega b_{\max}} + \frac{2(a_{\min} - a_{\max})}{b_{\max}}, \qquad \tau - \frac{a_{\min}}{b_{\max}} \leq \gamma \leq \frac{\tau}{2} + \frac{2 - \omega a_{\max}}{\omega b_{\max}}. \qquad (3.9)$$
Proof. From Corollary 3.1 and the subsequent Remark, we easily see that $\lambda = 1 - a\omega$ if $b = 0$. From Lemma 3.1, we know that the roots of the quadratic equation (3.4) corresponding to $u \neq 0$ satisfy $|\lambda| < 1$ if and only if the parameters $\omega$, $\tau$ and $\gamma$ satisfy
$$\begin{cases} |1 - a\omega| < 1, \\ |1 - a\omega + b\omega(\tau - \gamma)| < 1, \\ \bigl|(a\omega - 2 + b\omega\gamma) - (\bar{a}\omega - 2 + \bar{b}\omega\gamma)\bigl(1 - a\omega + b\omega(\tau - \gamma)\bigr)\bigr| + \bigl|1 - a\omega + b\omega(\tau - \gamma)\bigr|^{2} < 1. \end{cases} \qquad (3.10)$$
Solving the inequalities (3.10), we obtain (3.8).

If $P_{1}^{-1}P$ is symmetric positive definite and $BQ^{-1}P_{2}B^{T}$ is symmetric positive semi-definite, the coefficients of the quadratic equation (3.4) are real. Thus, by Lemma 3.2, we know that $\omega$, $\tau$ and $\gamma$ must satisfy
$$\begin{cases} |1 - a\omega| < 1, \\ |1 - a\omega + b\omega(\tau - \gamma)| < 1, \\ |a\omega - 2 + b\omega\gamma| < 2 - a\omega + b\omega(\tau - \gamma). \end{cases} \qquad (3.11)$$
By simplification, we obtain the inequalities (3.9). $\square$

Theorem 3.4. Let $P_{2} \in \mathbb{R}^{n\times n}$ be positive definite, let $A, P \in \mathbb{R}^{m\times m}$ and $Q \in \mathbb{R}^{n\times n}$ be symmetric positive definite, and let $\operatorname{rank}(B) < m$. If $\mathcal{N}(B) \cap \mathcal{R}(Q^{-1}P_{2}B^{T}A^{-1}B) = \{0\}$, then $\operatorname{index}(I - \mathcal{T}(\omega,\tau,\gamma)) = 1$.

Proof. We give the proof by contradiction. Suppose that $\operatorname{index}(I - \mathcal{T}(\omega,\tau,\gamma)) \neq 1$; then $\operatorname{rank}(I - \mathcal{T}(\omega,\tau,\gamma))^{2} < \operatorname{rank}(I - \mathcal{T}(\omega,\tau,\gamma))$, so there exists $\alpha = (\alpha_{1}^{*}, \alpha_{2}^{*})^{*} \in \mathbb{C}^{m+n}$ with $\alpha \neq 0$ such that $[I - \mathcal{T}(\omega,\tau,\gamma)]\alpha \neq 0$ and $[I - \mathcal{T}(\omega,\tau,\gamma)]^{2}\alpha = 0$. Noticing that $\mathcal{T}(\omega,\tau,\gamma) = \mathcal{M}(\omega,\tau,\gamma)^{-1}\mathcal{N}(\omega,\tau,\gamma)$ and $\mathcal{A}_{1} = \mathcal{M}(\omega,\tau,\gamma) - \mathcal{N}(\omega,\tau,\gamma)$, we have $I - \mathcal{T}(\omega,\tau,\gamma) = \mathcal{M}(\omega,\tau,\gamma)^{-1}\mathcal{A}_{1}$. Let $z = \mathcal{M}(\omega,\tau,\gamma)^{-1}\mathcal{A}_{1}\alpha = (x^{*}, y^{*})^{*} \neq 0$; then $\mathcal{M}(\omega,\tau,\gamma)^{-1}\mathcal{A}_{1}z = 0$. This leads to $0 \neq z \in \mathcal{N}(\mathcal{M}(\omega,\tau,\gamma)^{-1}\mathcal{A}_{1}) \cap \mathcal{R}(\mathcal{M}(\omega,\tau,\gamma)^{-1}\mathcal{A}_{1})$. It is easy to see that $z \in \mathcal{N}(\mathcal{A}_{1}) = \mathcal{N}(\mathcal{A})$, that is,
$$Ax + By = 0, \qquad B^{T}x = 0. \qquad (3.12)$$
From the first equation of (3.12) we know $x = -A^{-1}By$. Substituting this expression for $x$ into the second equation of (3.12), we get $B^{T}A^{-1}By = 0$. Because $A$ is positive definite, $A^{-1}$ is also positive definite, so $By = 0$. Therefore $x = 0$ and $0 \neq y \in \mathcal{N}(B)$. Since $z = \mathcal{M}(\omega,\tau,\gamma)^{-1}\mathcal{A}_{1}\alpha = (0^{*}, y^{*})^{*}$, we have $\mathcal{A}_{1}\alpha = \mathcal{M}(\omega,\tau,\gamma)(0^{*}, y^{*})^{*}$, that is,
$$\begin{pmatrix} P_{1}A\alpha_{1} + P_{1}B\alpha_{2} \\ -P_{2}B^{T}\alpha_{1} \end{pmatrix} = \begin{pmatrix} 0 \\ \frac{1}{\tau}Qy \end{pmatrix},$$
or equivalently,
$$P_{1}A\alpha_{1} + P_{1}B\alpha_{2} = 0, \qquad -P_{2}B^{T}\alpha_{1} = \frac{1}{\tau}Qy. \qquad (3.13)$$
So $y \in \mathcal{R}(Q^{-1}P_{2}B^{T}A^{-1}B)$, and thus $0 \neq y \in \mathcal{N}(B) \cap \mathcal{R}(Q^{-1}P_{2}B^{T}A^{-1}B)$, which is a contradiction. Therefore $\operatorname{index}(I - \mathcal{T}(\omega,\tau,\gamma)) = 1$. This completes the proof. $\square$
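Conditions (a)–(c) of this section, including the rank test used in Theorem 3.4, can be checked numerically for a given iteration matrix. Below is a minimal sketch, assuming $\mathcal{T}$ is available as a dense array (for instance assembled as in the earlier verification sketch); the helper name and tolerance are illustrative.

```python
import numpy as np

def semi_convergence_check(T, tol=1e-10):
    """Check conditions (a)-(c) for a given iteration matrix T (dense)."""
    eigvals = np.linalg.eigvals(T)
    rho = np.max(np.abs(eigvals))                       # condition (a): rho(T) = 1
    # condition (c): eigenvalues different from 1 must lie strictly inside the unit circle
    not_one = eigvals[np.abs(eigvals - 1.0) > tol]
    theta = np.max(np.abs(not_one)) if not_one.size else 0.0
    # condition (b): rank(I - T)^2 == rank(I - T), i.e. index(I - T) = 1
    I = np.eye(T.shape[0])
    r1 = np.linalg.matrix_rank(I - T, tol=tol)
    r2 = np.linalg.matrix_rank((I - T) @ (I - T), tol=tol)
    return {"rho": rho, "theta": theta, "index_one": r1 == r2}
```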
Combining Theorems 3.3 and 3.4, we immediately obtain the following sufficient conditions for the semi-convergence of the BDP–APIU method for solving the singular saddle point problem (1.1).
Theorem 3.5. Let $P_{1} \in \mathbb{R}^{m\times m}$ and $P_{2} \in \mathbb{R}^{n\times n}$ be positive definite, let $A, P \in \mathbb{R}^{m\times m}$ and $Q \in \mathbb{R}^{n\times n}$ be symmetric positive definite, and assume that $\operatorname{rank}(B) < m$ and $\mathcal{N}(B) \cap \mathcal{R}(Q^{-1}P_{2}B^{T}A^{-1}B) = \{0\}$. Then the BDP–APIU method for solving the singular saddle point problem (1.1) is semi-convergent if $\omega$, $\tau$ and $\gamma$ satisfy
$$0 < \omega < \min_{a \in \Phi,\, b \in \Psi}\left\{ \frac{2\bigl[a_{R} + b_{I}(\tau - \gamma)\bigr]}{|a|^{2} + |b|^{2}(\tau - \gamma)^{2} + 2(\tau - \gamma)(a_{R}b_{R} + a_{I}b_{I})} \right\}$$
and
$$\max_{a \in \Phi,\, b \in \Psi}\bigl\{(\tau - \gamma)b_{R} - a_{R}\bigr\} < 0, \qquad \max_{a \in \Phi,\, b \in \Psi} \tau\bigl(h_{2}\omega^{2} + h_{1}\omega + h_{0}\bigr) < 0,$$
with $h_{0}$, $h_{1}$ and $h_{2}$ defined as in Theorem 3.3. In addition, if $P_{1}^{-1}P$ is symmetric positive definite and $BQ^{-1}P_{2}B^{T}$ is symmetric positive semi-definite, then the BDP–APIU method for solving the singular saddle point problem (1.1) is semi-convergent if $\omega$, $\tau$ and $\gamma$ satisfy
$$0 < \omega < \frac{2}{a_{\max}}, \qquad 0 < \tau < \frac{4}{\omega b_{\max}} + \frac{2(a_{\min} - a_{\max})}{b_{\max}}, \qquad \tau - \frac{a_{\min}}{b_{\max}} \leq \gamma \leq \frac{\tau}{2} + \frac{2 - \omega a_{\max}}{\omega b_{\max}}.$$
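The real-coefficient parameter region in the second part of Theorem 3.5 is straightforward to test programmatically once estimates of $a_{\min}, a_{\max}, b_{\min}, b_{\max}$ are available. The following is a small sketch (the function name is an illustrative choice) that checks whether a given triple $(\omega, \tau, \gamma)$ lies in that region:

```python
def in_semiconvergence_region(omega, tau, gamma, a_min, a_max, b_min, b_max):
    """Sufficient conditions of Theorem 3.5 (real case: P1^{-1}P SPD, B Q^{-1} P2 B^T PSD)."""
    if not (0.0 < omega < 2.0 / a_max):
        return False
    if not (0.0 < tau < 4.0 / (omega * b_max) + 2.0 * (a_min - a_max) / b_max):
        return False
    lower = tau - a_min / b_max
    upper = tau / 2.0 + (2.0 - omega * a_max) / (omega * b_max)
    return lower <= gamma <= upper
```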
In what follows, we assume that the matrix $P_{1}^{-1}P$ is symmetric positive definite and the matrix $BQ^{-1}P_{2}B^{T}$ is symmetric positive semi-definite. In this case, we discuss the corresponding quasi-optimal semi-convergence factor of the BDP–APIU method for singular saddle point problems.

If $\gamma = \tau$, then by Corollary 3.1 the eigenvalues of the iteration matrix $\mathcal{T}(\omega,\tau)$ of the BDP–APIU method satisfy the functional relation
$$\varphi\bigl(\omega,\tau; a(x), b(x)\bigr) = \frac{1}{2}\left[2 - \omega\bigl(a(x) + \tau b(x)\bigr) \pm \sqrt{\bigl(2 - \omega(a(x) + \tau b(x))\bigr)^{2} - 4\bigl(1 - \omega a(x)\bigr)}\right],$$
where $x \in \mathbb{C}^{m}$, $x \neq 0$. In order to study the choice of the optimal iteration parameters, we define the continuous spectral radius of $\mathcal{T}(\omega,\tau)$ as
$$\rho_{c}(\mathcal{T}(\omega,\tau)) := \max_{x \in \mathbb{C}^{m}}\bigl|\varphi\bigl(\omega,\tau; a(x), b(x)\bigr)\bigr|.$$
It is easy to see that $\vartheta(\mathcal{T}(\omega,\tau)) \leq \rho_{c}(\mathcal{T}(\omega,\tau))$. However, computing $\rho_{c}(\mathcal{T}(\omega,\tau))$ is complicated and in general not feasible, as both $a(x)$ and $b(x)$ depend on the variable $x$. Instead, we introduce the quasi-spectral radius $\varrho(\mathcal{T}(\omega,\tau))$ as follows:
$$\varrho(\mathcal{T}(\omega,\tau)) := \max\bigl\{\bigl|\varphi(\omega,\tau; a, b)\bigr| : a \in [a_{\min}, a_{\max}],\ b \in [b_{\min}, b_{\max}] \cup \{0\}\bigr\}.$$
Consequently, the quasi-optimal iteration parameters are defined by
$$\{\omega_{\mathrm{opt}}, \tau_{\mathrm{opt}}\} := \arg\min_{\omega,\tau > 0} \varrho(\mathcal{T}(\omega,\tau)).$$
Obviously, it holds that
$$\vartheta(\mathcal{T}(\omega,\tau)) \leq \rho_{c}(\mathcal{T}(\omega,\tau)) \leq \varrho(\mathcal{T}(\omega,\tau)).$$
Therefore, the quasi-optimal iteration parameters minimize an upper bound of the exact semi-convergence factor. Thus, if $\gamma = \tau$, by making use of Theorem 3.1 in [8], we immediately obtain the following results about the quasi-optimal iteration parameters and the corresponding quasi-optimal semi-convergence factor of the BDP–APIU method.
Theorem 3.6. Let $P_{1} \in \mathbb{R}^{m\times m}$ and $P_{2} \in \mathbb{R}^{n\times n}$ be positive definite, let $A, P \in \mathbb{R}^{m\times m}$ and $Q \in \mathbb{R}^{n\times n}$ be symmetric positive definite, and let $\operatorname{rank}(B) < m$. Assume that $P_{1}^{-1}P$ is symmetric positive definite and $BQ^{-1}P_{2}B^{T}$ is symmetric positive semi-definite. It follows that

(1) if $\tau \leq \tau_{0}$, then
$$\varrho(\mathcal{T}(\omega,\tau)) = \begin{cases} \sqrt{1 - a_{\min}\omega} & \text{for } 0 < \omega < \omega_{-}(\tau), \\ \frac{1}{2}\Bigl[2 - a_{\max}\omega - \tau\omega b_{\min} + \sqrt{(2 - a_{\max}\omega - \tau\omega b_{\min})^{2} - 4(1 - a_{\max}\omega)}\Bigr] & \text{for } \omega_{-}(\tau) \leq \omega \leq \omega_{0}(\tau), \\ \frac{1}{2}\Bigl[\tau\omega b_{\max} + a_{\max}\omega - 2 + \sqrt{(\tau\omega b_{\max} + a_{\max}\omega - 2)^{2} - 4(1 - a_{\max}\omega)}\Bigr] & \text{for } \omega_{0}(\tau) \leq \omega \leq \frac{2}{a_{\max}}; \end{cases}$$

(2) if $\tau_{0} < \tau < \frac{2(2 - \omega a_{\max})}{\omega b_{\max}}$, then
$$\varrho(\mathcal{T}(\omega,\tau)) = \begin{cases} \sqrt{1 - a_{\min}\omega} & \text{for } 0 < \omega < \omega_{+}(\tau), \\ \frac{1}{2}\Bigl[\tau\omega b_{\max} + a_{\max}\omega - 2 + \sqrt{(\tau\omega b_{\max} + a_{\max}\omega - 2)^{2} - 4(1 - a_{\max}\omega)}\Bigr] & \text{for } \omega_{+}(\tau) \leq \omega < \frac{2}{a_{\max}}, \end{cases}$$
where $a_{\min}, a_{\max}$ and $b_{\min}, b_{\max}$ denote the smallest and largest nonzero eigenvalues of the matrices $P^{-1}(P_{1}A)$ and $Q^{-1}(P_{2}B^{T})P^{-1}(P_{1}B)$, respectively. Moreover, the quasi-optimal iteration parameters $\omega_{\mathrm{opt}}$ and $\tau_{\mathrm{opt}}$ are given by
$$\omega_{\mathrm{opt}} = \frac{4}{(b_{\min} + b_{\max})\tau_{0} + 2a_{\max}} \qquad \text{and} \qquad \tau_{\mathrm{opt}} = \tau_{0}.$$
The corresponding quasi-optimal semi-convergence factor of the BDP–APIU method is
$$\vartheta(\mathcal{T}(\omega_{\mathrm{opt}}, \tau_{\mathrm{opt}})) = \sqrt{1 - \frac{4a_{\min}}{(b_{\min} + b_{\max})\tau_{0} + 2a_{\max}}}.$$
Here, $\tau_{0}$ is the positive root of the cubic equation $\tau^{3} + f\tau^{2} + g\tau + w = 0$ such that $\omega_{0}(\tau_{0}) = \omega_{+}(\tau_{0})$, with
$$f := \frac{2(a_{\max} - 2a_{\min})}{b_{\min} + b_{\max}}, \qquad g := \frac{a_{\min}(a_{\min} - 2a_{\max})}{b_{\min}b_{\max}}, \qquad w := \frac{2a_{\min}^{2}a_{\max}}{b_{\min}b_{\max}(b_{\min} + b_{\max})},$$
and $\omega_{-}(\tau)$, $\omega_{+}(\tau)$, $\omega_{0}(\tau)$ are defined by
$$\begin{cases} \omega_{-}(\tau) = \dfrac{(a_{\max} + \tau b_{\min})^{2} - (a_{\min} - a_{\max})^{2} + 4\tau b_{\min}a_{\min} - |a_{\min} - \tau b_{\min}|\sqrt{(a_{\min} - 2a_{\max})^{2} + \tau b_{\min}(\tau b_{\min} - 6a_{\min} + 4a_{\max})}}{2a_{\min}(a_{\max} + \tau b_{\min})^{2}}, \\[6pt] \omega_{+}(\tau) = \dfrac{(a_{\max} + \tau b_{\max})^{2} - (a_{\min} - a_{\max})^{2} + 4\tau b_{\max}a_{\min} - |a_{\min} - \tau b_{\max}|\sqrt{(a_{\min} - 2a_{\max})^{2} + \tau b_{\max}(\tau b_{\max} - 6a_{\min} + 4a_{\max})}}{2a_{\min}(a_{\max} + \tau b_{\max})^{2}}, \\[6pt] \omega_{0}(\tau) = \dfrac{4}{\tau(b_{\min} + b_{\max}) + 2a_{\max}}. \end{cases}$$
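A small helper can turn estimates of $a_{\min}, a_{\max}, b_{\min}, b_{\max}$ into the quasi-optimal parameters of Theorem 3.6: it forms the cubic coefficients $f$, $g$, $w$, selects the positive real root at which $\omega_{0}$ and $\omega_{+}$ coincide, and evaluates $\omega_{\mathrm{opt}}$ and the quasi-optimal factor. This is only a sketch of the formulas above (the extreme eigenvalues themselves must be estimated by the caller, and the function name is illustrative):

```python
import numpy as np

def quasi_optimal_parameters(a_min, a_max, b_min, b_max):
    """Quasi-optimal (omega, tau) and semi-convergence factor from Theorem 3.6."""

    def omega0(t):
        return 4.0 / (t * (b_min + b_max) + 2.0 * a_max)

    def omega_plus(t):
        s = a_max + t * b_max
        disc = (a_min - 2.0 * a_max)**2 + t * b_max * (t * b_max - 6.0 * a_min + 4.0 * a_max)
        num = (s**2 - (a_min - a_max)**2 + 4.0 * t * b_max * a_min
               - abs(a_min - t * b_max) * np.sqrt(disc))
        return num / (2.0 * a_min * s**2)

    # cubic tau^3 + f tau^2 + g tau + w = 0 from Theorem 3.6
    f = 2.0 * (a_max - 2.0 * a_min) / (b_min + b_max)
    g = a_min * (a_min - 2.0 * a_max) / (b_min * b_max)
    w = 2.0 * a_min**2 * a_max / (b_min * b_max * (b_min + b_max))
    roots = [r.real for r in np.roots([1.0, f, g, w])
             if abs(r.imag) < 1e-10 and r.real > 0]
    # tau0 is the positive root at which omega0 and omega_plus coincide
    tau0 = min(roots, key=lambda t: abs(omega0(t) - omega_plus(t)))
    omega_opt = omega0(tau0)
    factor = np.sqrt(1.0 - a_min * omega_opt)
    return omega_opt, tau0, factor
```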
4. Numerical experiments

In this section, we use an example to show the numerical advantages of the BDP–APIU method defined in (2.4) over the PIU, APIU [8], PPIU [16,20] and MINRES methods for solving singular saddle point problems. All computations are implemented in MATLAB (version 8.0.0.783 (R2012b)) with machine precision $10^{-16}$ on a personal computer with a Genuine Intel(R) Dual-Core CPU T4300 2.09 GHz and 1.93 GB memory. We report the number of iterations (denoted by 'IT'), the elapsed CPU time in seconds (denoted by 'CPU') and the norm of the relative residual vectors (denoted by 'RES'). Here, 'RES' is defined as
$$\mathrm{RES} := \frac{\sqrt{\|b - Ax^{(k)} - By^{(k)}\|_{2}^{2} + \|q - B^{T}x^{(k)}\|_{2}^{2}}}{\sqrt{\|b\|_{2}^{2} + \|q\|_{2}^{2}}},$$
with $((x^{(k)})^{T}, (y^{(k)})^{T})^{T}$ being the current approximate solution. In our experiments, all runs are started with the zero initial vector, and the iteration is terminated once the current iterate satisfies $\mathrm{RES} < 10^{-6}$ or the number of iteration steps exceeds 1000 (this case is denoted by the symbol '–').

Example. Consider the singular saddle point problem (1.1) with the coefficient sub-matrices ([5,26,27]) given below.
Table 4.1
Choices of the matrices P, Q, P₁ and P₂ for Cases I–III.

Case   P    Q                   P₁     P₂
I      Â    diag(BᵀA⁻¹B)        I_m    I_n
II     P̂    diag(BᵀA⁻¹B)        Â⁻¹    I_n
III    P̂    tridiag(Q̂)          Â⁻¹    I_n
Table 4.2
Semi-convergence factors of the APIU, PPIU and BDP–APIU methods: for each p = 8, 16, 24, 32, 40 the parameters (ω, τ, γ) (experimentally optimal, or theoretically quasi-optimal where marked by ⋄) and the corresponding value of ϑ(T) are reported for the APIU (Case I), PPIU (Cases II and III) and BDP–APIU (Cases II and III) methods.
Table 4.3
Numerical results for the PIU, APIU, PPIU, BDP–APIU and MINRES methods (p = 8).

Case            (ω, τ, γ)             IT    CPU      RES
PIU I           (0.90, 0.25, 0.25)    183   0.0453   9.45 × 10⁻⁷
APIU I          (0.94, 0.56, 0.35)    179   0.0419   7.84 × 10⁻⁷
PPIU II         (0.90, 0.25, 0.25)     89   0.0176   9.47 × 10⁻⁷
PPIU II         (0.97, 0.55, 0.55)⋄    77   0.0161   6.62 × 10⁻⁷
PPIU II         (0.98, 0.51, 0.51)     78   0.0156   9.83 × 10⁻⁷
BDP–APIU II     (0.94, 0.56, 0.35)     87   0.0143   6.16 × 10⁻⁷
BDP–APIU II     (0.98, 0.42, 0.36)     78   0.0126   5.87 × 10⁻⁷
BDP–APIU II     (0.99, 0.31, 0.42)     69   0.0114   8.96 × 10⁻⁷
PPIU III        (0.96, 0.64, 0.64)     82   0.0166   9.89 × 10⁻⁷
PPIU III        (0.95, 0.65, 0.65)⋄    80   0.0160   9.27 × 10⁻⁷
PPIU III        (0.97, 0.68, 0.68)     76   0.0155   9.32 × 10⁻⁷
BDP–APIU III    (0.97, 0.78, 0.66)     56   0.0113   8.87 × 10⁻⁷
BDP–APIU III    (0.99, 0.76, 0.39)     52   0.0108   8.22 × 10⁻⁷
BDP–APIU III    (0.89, 0.68, 0.43)     60   0.0124   7.67 × 10⁻⁷
MINRES          –                      73   0.0132   9.20 × 10⁻⁷
$$A = \begin{pmatrix} I\otimes T + T\otimes I & 0 \\ 0 & I\otimes T + T\otimes I \end{pmatrix} \in \mathbb{R}^{2p^{2}\times 2p^{2}} \qquad \text{and} \qquad B = \bigl(\widehat{B}\ \ \widetilde{B}\bigr) = \bigl(\widehat{B}\ \ b_{1}\ \ b_{2}\bigr) \in \mathbb{R}^{2p^{2}\times(p^{2}+2)},$$
where
$$T = \frac{1}{h^{2}}\operatorname{tridiag}(-1,2,-1) \in \mathbb{R}^{p\times p}, \qquad \widehat{B} = \begin{pmatrix} I\otimes F \\ F\otimes I \end{pmatrix} \in \mathbb{R}^{2p^{2}\times p^{2}}, \qquad b_{1} = \widehat{B}\begin{pmatrix} e \\ 0 \end{pmatrix}, \qquad b_{2} = \widehat{B}\begin{pmatrix} 0 \\ e \end{pmatrix}, \qquad e = (1,1,\ldots,1)^{T} \in \mathbb{R}^{p^{2}/2},$$
Table 4.4
Numerical results for the PIU, APIU, PPIU, BDP–APIU and MINRES methods (p = 16).

Case            (ω, τ, γ)             IT    CPU      RES
PIU I           (0.85, 0.34, 0.34)    486   1.7998   9.32 × 10⁻⁷
APIU I          (0.94, 0.56, 0.35)    447   1.6809   9.84 × 10⁻⁷
PPIU II         (0.85, 0.34, 0.34)    258   0.1232   9.81 × 10⁻⁷
PPIU II         (1.01, 0.51, 0.51)⋄   194   0.0820   8.82 × 10⁻⁷
PPIU II         (0.98, 0.50, 0.50)    199   0.0851   9.01 × 10⁻⁷
BDP–APIU II     (0.94, 0.56, 0.35)    230   0.0939   8.03 × 10⁻⁷
BDP–APIU II     (1.08, 0.75, 0.53)    185   0.0877   9.22 × 10⁻⁷
BDP–APIU II     (1.02, 0.72, 0.54)    194   0.0835   9.12 × 10⁻⁷
PPIU III        (0.96, 0.57, 0.57)    171   0.0886   9.73 × 10⁻⁷
PPIU III        (0.95, 0.58, 0.58)⋄   165   0.0841   9.78 × 10⁻⁷
PPIU III        (0.96, 0.62, 0.62)    141   0.0804   8.76 × 10⁻⁷
BDP–APIU III    (1.01, 0.79, 0.31)    113   0.0639   9.57 × 10⁻⁷
BDP–APIU III    (1.01, 0.78, 0.30)    114   0.0718   9.50 × 10⁻⁷
BDP–APIU III    (0.99, 0.80, 0.24)    107   0.0620   7.96 × 10⁻⁷
MINRES          –                     148   0.3184   7.20 × 10⁻⁷
Table 4.5
Numerical results for the PIU, APIU, PPIU, BDP–APIU and MINRES methods (p = 24).

Case            (ω, τ, γ)             IT    CPU       RES
PIU I           (0.85, 0.34, 0.34)    848   13.5330   9.69 × 10⁻⁷
APIU I          (0.92, 0.38, 0.25)    822   13.0964   9.98 × 10⁻⁷
PPIU II         (0.85, 0.34, 0.34)    431    0.7031   9.23 × 10⁻⁷
PPIU II         (0.98, 0.52, 0.52)⋄   352    0.5494   9.78 × 10⁻⁷
PPIU II         (1.01, 0.51, 0.51)    350    0.5420   9.26 × 10⁻⁷
BDP–APIU II     (0.92, 0.38, 0.25)    425    0.6408   9.97 × 10⁻⁷
BDP–APIU II     (1.01, 0.63, 0.56)    340    0.5254   9.45 × 10⁻⁷
BDP–APIU II     (1.00, 0.64, 0.58)    339    0.5191   9.71 × 10⁻⁷
PPIU III        (0.97, 0.52, 0.52)    282    0.4896   9.05 × 10⁻⁷
PPIU III        (0.98, 0.54, 0.54)⋄   274    0.4592   9.56 × 10⁻⁷
PPIU III        (0.99, 0.55, 0.55)    270    0.4471   9.46 × 10⁻⁷
BDP–APIU III    (0.99, 0.81, 0.30)    185    0.3145   9.71 × 10⁻⁷
BDP–APIU III    (0.98, 0.82, 0.32)    189    0.3197   8.15 × 10⁻⁷
BDP–APIU III    (1.01, 0.84, 0.37)    182    0.3052   9.56 × 10⁻⁷
MINRES          –                     215    2.0619   8.10 × 10⁻⁷
with
$$F = \frac{1}{h}\operatorname{tridiag}(-1,1,0) \in \mathbb{R}^{p\times p}$$
being a tridiagonal matrix. Here $\otimes$ denotes the Kronecker product and $h = \frac{1}{p+1}$ is the discretization meshsize. This problem is a technical modification of Example 5.1 in [5] and has later been used in many papers. The matrix block $B$ is an augmentation of the full-rank matrix $\widehat{B}$ with the linearly independent vectors $b_{1}$ and $b_{2}$, so $B$ is rank-deficient. For this example, we have $m = 2p^{2}$ and $n = p^{2} + 2$; hence, the total number of variables is $m + n = 3p^{2} + 2$. In our computations, we choose the right-hand-side vector $(b^{T}, q^{T})^{T} \in \mathbb{R}^{m+n}$ such that the exact solution of the linear system (1.1) is $(x_{*}^{T}, y_{*}^{T})^{T} = (1, 1, \ldots, 1)^{T} \in \mathbb{R}^{m+n}$.

We choose $P_{1}$ as an approximate inverse of $A$, $P$ as an approximation of $P_{1}A$, and $P_{2}$ such that $P_{2}^{-1}Q$ is an approximation of $B^{T}(P_{1}^{-1}P)^{-1}B$. We test five cases for the BDP–APIU method, which can reduce to the PIU, APIU and PPIU methods according to different choices of the matrices $P$, $Q$, $P_{1}$ and $P_{2}$ and of the parameters $\omega$, $\tau$ and $\gamma$, and we compare with the MINRES method simultaneously. The choices of the matrices $P$, $Q$, $P_{1}$ and $P_{2}$ are listed in Table 4.1. The matrix $\widehat{P}$ denotes the approximation of $P_{1}A$ formed from the identity and the $\pm p$th diagonals of $P_{1}A$, that is, $\widehat{P} = I_{m} + \operatorname{diag}(\operatorname{diag}(P_{1}A, -p), -p) + \operatorname{diag}(\operatorname{diag}(P_{1}A, p), p)$. The matrix $\widehat{Q} = \operatorname{Diag}(\widehat{B}^{T}\widehat{A}^{-1}\widehat{B},\ \widetilde{B}^{T}\widetilde{B})$, where $\widehat{A} = \operatorname{tridiag}(A)$.
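The test matrices are easy to assemble with Kronecker products. The sketch below builds $A$ and the rank-deficient $B$ of this example in SciPy sparse format for even $p$, as used in the experiments; the preconditioner blocks of Table 4.1 are not included, and the function name is an illustrative choice.

```python
import numpy as np
import scipy.sparse as sp

def build_test_problem(p):
    """Assemble A (2p^2 x 2p^2) and the rank-deficient B (2p^2 x (p^2+2)); p even."""
    h = 1.0 / (p + 1)
    I = sp.identity(p, format="csr")
    T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(p, p)) / h**2
    F = sp.diags([-1.0, 1.0], [-1, 0], shape=(p, p)) / h
    ITTI = sp.kron(I, T) + sp.kron(T, I)               # I (x) T + T (x) I
    A = sp.block_diag((ITTI, ITTI), format="csr")
    Bhat = sp.vstack([sp.kron(I, F), sp.kron(F, I)], format="csr")   # full column rank
    e = np.ones(p**2 // 2)
    zero = np.zeros(p**2 // 2)
    b1 = Bhat @ np.concatenate([e, zero])              # b1 = Bhat (e; 0)
    b2 = Bhat @ np.concatenate([zero, e])              # b2 = Bhat (0; e)
    B = sp.hstack([Bhat, sp.csr_matrix(b1).T, sp.csr_matrix(b2).T], format="csr")
    return A, B
```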
Table 4.6
Numerical results for the PIU, APIU, PPIU, BDP–APIU and MINRES methods (p = 32).

Case            (ω, τ, γ)              IT      CPU      RES
PIU I           (0.85, 0.21, 0.21)     >1000   –        –
APIU I          (0.87, 0.42, 0.37)     >1000   –        –
PPIU II         (0.85, 0.21, 0.21)       733   2.3283   9.87 × 10⁻⁷
PPIU II         (0.98, 0.54, 0.54)⋄      536   1.6772   9.95 × 10⁻⁷
PPIU II         (0.96, 0.56, 0.56)       532   1.6116   9.40 × 10⁻⁷
BDP–APIU II     (0.89, 0.46, 0.35)       611   1.8425   9.61 × 10⁻⁷
BDP–APIU II     (1.01, 0.65, 0.57)       515   1.5654   8.33 × 10⁻⁷
BDP–APIU II     (0.98, 0.64, 0.53)       527   1.6860   8.58 × 10⁻⁷
PPIU III        (0.96, 0.53, 0.53)       395   1.3246   9.86 × 10⁻⁷
PPIU III        (0.98, 0.54, 0.54)⋄      388   1.3021   9.80 × 10⁻⁷
PPIU III        (0.95, 0.52, 0.52)       402   1.3308   9.90 × 10⁻⁷
BDP–APIU III    (1.13, 0.89, 0.24)       236   0.8126   9.26 × 10⁻⁷
BDP–APIU III    (1.12, 0.90, 0.20)       224   0.7637   9.63 × 10⁻⁷
BDP–APIU III    (1.01, 0.84, 0.25)       230   0.7921   9.46 × 10⁻⁷
MINRES          –                        289   8.0572   9.98 × 10⁻⁷
Fig. 4.1. Iteration curves (relative residual ‖r_k‖₂/‖r_0‖₂ versus iteration step k) of the APIU, PPIU and BDP–APIU methods for p = 16 (left) and p = 24 (right).
Table 4.7
Numerical results for the PPIU, BDP–APIU and MINRES methods (p = 40).

Case            (ω, τ, γ)             IT    CPU       RES
PPIU II         (0.98, 0.53, 0.53)    725    3.8074   9.50 × 10⁻⁷
PPIU II         (0.98, 0.54, 0.54)⋄   718    3.7814   9.60 × 10⁻⁷
PPIU II         (0.99, 0.53, 0.53)    721    3.8148   9.97 × 10⁻⁷
BDP–APIU II     (1.15, 0.90, 0.23)    648    3.3990   9.55 × 10⁻⁷
BDP–APIU II     (1.14, 0.95, 0.24)    663    3.5266   9.10 × 10⁻⁷
BDP–APIU II     (1.15, 0.93, 0.26)    670    3.5292   9.18 × 10⁻⁷
PPIU III        (0.99, 0.53, 0.53)    508    3.0130   9.66 × 10⁻⁷
PPIU III        (0.98, 0.54, 0.54)⋄   503    2.9313   9.62 × 10⁻⁷
PPIU III        (0.97, 0.54, 0.54)    504    2.8923   9.83 × 10⁻⁷
BDP–APIU III    (1.15, 0.90, 0.23)    293    1.7650   9.38 × 10⁻⁷
BDP–APIU III    (1.15, 0.95, 0.24)    287    1.6862   9.88 × 10⁻⁷
BDP–APIU III    (1.14, 0.94, 0.26)    290    1.7307   8.56 × 10⁻⁷
MINRES          –                     337   22.6851   9.96 × 10⁻⁷
In Table 4.2, we list the semi-convergence factors of the APIU, PPIU and BDP–APIU methods for different $\omega$, $\tau$, $\gamma$ and $p$. From Table 4.2 we see that the semi-convergence factors of the BDP–APIU method are somewhat smaller than those of the APIU and the PPIU methods.
Fig. 4.2. Iteration curves (relative residual ‖r_k‖₂/‖r_0‖₂ versus iteration step k) of the PPIU and BDP–APIU methods for p = 32 (left) and p = 40 (right).
The parameters $\omega$, $\tau$ and $\gamma$ for the five cases are chosen to be experimentally optimal, except for those marked by '⋄', which are the theoretically quasi-optimal values, as listed in Tables 4.3–4.7. We compare the BDP–APIU method with the PIU, APIU, PPIU and MINRES methods; the numerical results are listed in Tables 4.3–4.7. From Tables 4.3–4.7 we see that, with suitably chosen parameters, the BDP–APIU methods with $P$, $Q$, $P_{1}$ and $P_{2}$ chosen as listed in Table 4.1 quickly produce approximate solutions of high quality. The BDP–APIU method of Case III performs best: it always outperforms the other tested methods in iteration steps, CPU time and residual errors, and it is also much better than the MINRES method. In order to better understand the numerical results, the convergence histories of the corresponding methods, with the iteration parameters $\omega$, $\tau$ and $\gamma$ chosen optimally based on Table 4.2, are shown in Figs. 4.1 and 4.2; they further demonstrate the advantage of the BDP–APIU III method over the other methods in terms of iteration steps. In the figure legends, the APIU, PPIU and BDP–APIU methods are abbreviated by the first letters of their names.

5. Conclusions

By combining an implicit block-diagonal preconditioning technique with the accelerated parameterized inexact Uzawa framework, we have further generalized the PIU method to the BDP–APIU method for singular saddle point problems. The semi-convergence of this method and the choice of the optimal parameters have been discussed. Numerical examples show the feasibility and effectiveness of the BDP–APIU method for solving singular saddle point problems. However, how to choose more efficient optimal parameters in the general case of the BDP–APIU method still needs further investigation.

References

[1] Z.-Z. Bai, On semi-convergence of Hermitian and skew-Hermitian splitting methods for singular linear systems, Computing 89 (2010) 171–197.
[2] Z.-Z. Bai, Structured preconditioners for nonsingular matrices of block two-by-two structures, Math. Comput. 75 (2006) 791–815.
[3] Z.-Z. Bai, G.H. Golub, Accelerated Hermitian and skew-Hermitian splitting iteration methods for saddle-point problems, IMA J. Numer. Anal. 27 (2007) 1–23.
[4] Z.-Z. Bai, G.H. Golub, C.-K. Li, Convergence properties of preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semi-definite matrices, Math. Comput. 76 (2007) 287–298.
[5] Z.-Z. Bai, G.H. Golub, J.-Y. Pan, Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semi-definite linear systems, Numer. Math. 98 (2004) 1–32.
[6] Z.-Z. Bai, G.H. Golub, M.K. Ng, Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. Appl. 24 (2003) 603–626.
[7] Z.-Z. Bai, B.N. Parlett, Z.-Q. Wang, On generalized successive overrelaxation methods for augmented linear systems, Numer. Math. 102 (2005) 1–38.
[8] Z.-Z. Bai, Z.-Q. Wang, On parameterized inexact Uzawa methods for generalized saddle point problems, Linear Algebra Appl. 428 (2008) 2900–2932.
[9] M. Benzi, G.H. Golub, J. Liesen, Numerical solution of saddle point problems, Acta Numer. 14 (2005) 1–137.
[10] L. Bergamaschi, J. Gondzio, G. Zilli, Preconditioning indefinite systems in interior point methods for optimization, Comput. Optim. Appl. 28 (2004) 149–171.
[11] A. Berman, R.J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, Academic Press, New York, 1979.
[12] A. Bjorck, Numerical Methods for Least Squares Problems, SIAM, Philadelphia, PA, 1996.
[13] J. Bramble, J. Pasciak, A preconditioned technique for indefinite systems resulting from mixed approximations of elliptic problems, Math. Comput. 50 (1988) 1–17.
[14] J.H. Bramble, J.E. Pasciak, A.T. Vassilev, Analysis of the inexact Uzawa algorithm for saddle point problems, SIAM J. Numer. Anal. 34 (1997) 1072–1092.
[15] F. Brezzi, M. Fortin, Mixed and Hybrid Finite Element Methods, Springer-Verlag, New York and London, 1991.
[16] J.-L. Li, T.-Z. Huang, Semi-convergence analysis of the inexact Uzawa method for singular saddle point problems, Revista Da La 53 (2012) 61–70.
[17] H.C. Elman, G.H. Golub, Inexact and preconditioned Uzawa algorithms for saddle point problems, SIAM J. Numer. Anal. 31 (1994) 1645–1661.
[18] H.C. Elman, D.J. Silvester, A.J. Wathen, Finite Elements and Fast Iterative Solvers, Numerical Mathematics and Scientific Computation, Oxford University Press, Oxford, 2005.
[19] B. Fischer, R. Ramage, D.J. Silvester, A.J. Wathen, Minimum residual methods for augmented systems, BIT Numer. Math. 38 (1998) 527–543.
[20] N. Gao, X. Kong, Block diagonally preconditioned PIU methods of saddle point problems, Appl. Math. Comput. 216 (2010) 1880–1887.
[21] J.J.H. Miller, On the location of zeros of certain classes of polynomials with applications to numerical analysis, J. Inst. Math. Appl. 8 (1971) 397–406.
[22] T. Rusten, R. Winther, A preconditioned iterative method for saddle point problems, SIAM J. Matrix Anal. Appl. 13 (1992) 887–904.
[23] S.-S. Wang, G.-F. Zhang, Preconditioned AHSS iteration method for singular saddle point problems, Numer. Algorithm (2012), http://dx.doi.org/10.1007/s11075-012-9638-y (online).
[24] X. Wu, B.P.B. Silva, J.-Y. Yuan, Conjugate gradient method for rank deficient saddle point problems, Numer. Algorithm 35 (2004) 139–154.
[25] D.M. Young, Iterative Solution of Large Linear Systems, Academic Press, New York, 1971.
[26] G.-F. Zhang, S.-S. Wang, A generalization of parameterized inexact Uzawa method for singular saddle point problems, Appl. Math. Comput. 219 (2013) 4225–4231.
[27] B. Zheng, Z.-Z. Bai, X. Yang, On semi-convergence of parameterized Uzawa methods for singular saddle point problems, Linear Algebra Appl. 431 (2009) 808–817.