Accepted Manuscript — Journal of Computational and Applied Mathematics
PII: S0377-0427(16)30059-0
DOI: http://dx.doi.org/10.1016/j.cam.2016.02.012
Received 9 May 2015; revised 10 October 2015

SIMPLE-like preconditioners for saddle point problems from the steady Navier-Stokes equations

Zhao-Zheng Liang, Guo-Feng Zhang*

School of Mathematics and Statistics, Lanzhou University, Lanzhou 730000, P.R. China

This work was supported by the National Natural Science Foundation of China (No. 11271174, 11511130015).
*Corresponding author. Email addresses: [email protected] (Zhao-Zheng Liang), [email protected] (Guo-Feng Zhang)

Preprint submitted to Elsevier, October 10, 2015

Abstract

The semi-implicit method for pressure-linked equations (SIMPLE) preconditioner is further generalized to a class of SIMPLE-like (SL) preconditioners for solving saddle point problems from the steady Navier-Stokes equations. The SL preconditioners can also be viewed as a generalization of the relaxed deteriorated PSS (RDPSS) preconditioner proposed by Cao et al. [J. Comput. Appl. Math. 273 (2015) 41-60]. Convergence analysis of the corresponding SL iteration is presented and the optimal iteration parameter is obtained by minimizing the spectral radius of the SL iteration matrix. Moreover, Krylov subspace acceleration of the SL preconditioning is studied. The SL preconditioned saddle point matrix is analyzed, and results on the eigenvalue and eigenvector distributions and the minimal polynomial are derived. Numerical experiments on "leaky" two-dimensional lid-driven cavity problems are given, which show the advantages of the SL preconditioned GMRES over the DPSS and RDPSS preconditioned ones for solving saddle point problems.

Keywords: SIMPLE preconditioner, spectral analysis, saddle point problems, convergence, preconditioning, Navier-Stokes equation
2000 MSC: 65F10, 65F15, 65F50

1. Introduction

Consider the incompressible Navier-Stokes equations of the form
$$
\begin{cases}
-\nu\Delta u + u\cdot\nabla u + \nabla p = f, & \text{in } \Omega,\\
\nabla\cdot u = 0, & \text{in } \Omega,\\
u = g, & \text{on } \partial\Omega,
\end{cases}
\tag{1.1}
$$
where Ω ⊂ R² (or R³) is an open bounded domain with boundary ∂Ω. Here, ν > 0 is the viscosity parameter, ∆ is the vector Laplacian, ∇ is the gradient, ∇·u is the divergence of u, f is a given external force field and g is the Dirichlet boundary data. The goal is to find the unknown velocity and pressure fields u and p. The convective term u·∇u makes this system nonlinear. Linearization of the Navier-Stokes system (1.1) by the Picard fixed-point iteration results in a sequence of Oseen problems of the form
$$
\begin{cases}
-\nu\Delta u + w\cdot\nabla u + \nabla p = f, & \text{in } \Omega,\\
\nabla\cdot u = 0, & \text{in } \Omega,\\
u = g, & \text{on } \partial\Omega,
\end{cases}
\tag{1.2}
$$


where the divergence-free field w is the velocity field obtained from the previous Picard iteration step. Spatial discretization of the Oseen equations (1.2) using a Ladyzhenskaya-Babuška-Brezzi (LBB) stable finite element method [23] results in saddle point problems of the following structure:
$$
\mathcal{A}_+ X = \begin{bmatrix} A & B^T \\ B & 0 \end{bmatrix}\begin{bmatrix} u \\ p \end{bmatrix} = \begin{bmatrix} f \\ g \end{bmatrix} \equiv c_+,
\tag{1.3}
$$
where now u and p represent the discrete velocity and pressure, respectively, A is the discretization of the diffusion and convection terms, $B^T$ is the discrete gradient, B is the discrete divergence, and f and g contain forcing and boundary terms. In general, A ∈ R^{n×n} is a positive real matrix, i.e., its symmetric part is positive definite, B ∈ R^{m×n} (m < n) is a matrix of full rank, f ∈ R^n and g ∈ R^m. Negating the second block row of (1.3) yields the equivalent nonsymmetric saddle point problem
$$
\mathcal{A} X = \begin{bmatrix} A & B^T \\ -B & 0 \end{bmatrix}\begin{bmatrix} u \\ p \end{bmatrix} = \begin{bmatrix} f \\ -g \end{bmatrix} \equiv c.
\tag{1.4}
$$
The nonsymmetric formulation (1.4) is especially natural when A is nonsymmetric but positive real. In fact, $\mathcal{A}$ is then positive stable, i.e., the eigenvalues of $\mathcal{A}$ have positive real parts; see [8]. This can be advantageous when using certain Krylov subspace methods, like GMRES. The augmented linear system (1.3) or (1.4) arises in a variety of scientific computing and engineering applications, for example, computational fluid dynamics, mixed finite element approximation of elliptic PDEs, and weighted and equality constrained least squares estimation. See [9, 22, 32] and the references therein for a broad overview of applications and numerical solution techniques for saddle point problems.

Due to the large and sparse structure of the coefficient matrix, iteration methods are attractive for solving the saddle point problem (1.3) or (1.4). A large number of efficient methods have been presented in the literature, such as Uzawa-type methods [6, 7, 13, 14, 20], Hermitian and skew-Hermitian splitting methods [1-4], SIMPLE-type methods [18, 28, 32, 37, 38] and so forth. Meanwhile, Krylov subspace methods [33, 34] are considered more efficient in general. However, they are not competitive without good preconditioners to speed up their convergence. Important and efficient preconditioners include block and approximate Schur complement preconditioners [27, 29, 30, 36], constraint preconditioners [5, 19, 26], augmented Lagrangian preconditioners [10, 12, 16, 24, 25], HSS preconditioners [8, 11, 15, 17, 31, 35] and so on.

In [15], based on the factorized form of the deteriorated PSS (DPSS) preconditioner
$$
P_{\mathrm{DPSS}} = \frac{1}{2\alpha}\begin{bmatrix} \alpha I + A & 0 \\ 0 & \alpha I \end{bmatrix}\begin{bmatrix} \alpha I & B^T \\ -B & \alpha I \end{bmatrix}
\tag{1.5}
$$
for the nonsymmetric saddle point problem (1.4), Cao et al. proposed the RDPSS preconditioner
$$
P_{\mathrm{RDPSS}} = \frac{1}{\alpha}\begin{bmatrix} A & 0 \\ 0 & \alpha I \end{bmatrix}\begin{bmatrix} \alpha I & B^T \\ -B & 0 \end{bmatrix} = \begin{bmatrix} A & \frac{1}{\alpha}AB^T \\ -B & 0 \end{bmatrix},
\tag{1.6}
$$
which is much closer to the saddle point matrix $\mathcal{A}$ than $P_{\mathrm{DPSS}}$. Here, α > 0 and I denotes the identity matrix of proper size. Convergence of the corresponding RDPSS iteration is analyzed and the optimal parameter which minimizes the spectral radius of the iteration matrix is derived in [15]. Besides, some spectral properties of the RDPSS preconditioned matrix $P_{\mathrm{RDPSS}}^{-1}\mathcal{A}$ are also described there.

In [28], the SIMPLE preconditioner for the saddle point problem (1.3) was presented with the following structure:
$$
P_{\mathrm{SIMPLE}} = \begin{bmatrix} A & AD^{-1}B^T \\ B & 0 \end{bmatrix},
\tag{1.7}
$$
with D = diag(A) being the diagonal part of A. Eigenvalue analysis of the preconditioned matrix $P_{\mathrm{SIMPLE}}^{-1}\mathcal{A}_+$ has been presented in [28], together with some eigenvalue bounds and an estimate of the spectral condition number.
We see that the RDPSS preconditioner (1.6) is somewhat similar to the SIMPLE preconditioner (1.7).


In this paper, a new generalized variant of the RDPSS preconditioner (1.6), which resembles the SIMPLE preconditioner (1.7), is presented for the nonsymmetric saddle point problem (1.4). We call it the SIMPLE-like (SL) preconditioner; it is given by
$$
P_{\mathrm{SL}} = \begin{bmatrix} A & \frac{1}{\alpha}AQ^{-1}B^T \\ -B & 0 \end{bmatrix},
\tag{1.8}
$$
with α > 0 and Q an approximation of A. Obviously, this preconditioner reduces to the RDPSS preconditioner when Q = I. Convergence properties of the corresponding SL iteration are analyzed and the optimal iteration parameter is obtained. Moreover, spectral properties of the SL preconditioned matrix $P_{\mathrm{SL}}^{-1}\mathcal{A}$ are studied. Compared with the RDPSS preconditioner, the SL preconditioner can yield a faster convergence rate and denser eigenvalue distributions with suitable choices of the preconditioning matrix Q.

The remainder of this paper is organized as follows. In Section 2, convergence of the SL iteration for solving the saddle point problem (1.4) is analyzed and the optimal iteration parameter is described. In Section 3, spectral properties of the SL preconditioned matrix are analyzed. In Section 4, the SL preconditioner is used to accelerate GMRES and its implementation is given. In Section 5, numerical experiments on "leaky" lid-driven cavity problems are presented to show the effectiveness of the SL preconditioned GMRES. Finally, in Section 6 we end this paper with some brief conclusions.

Throughout this paper, we denote by x* the conjugate transpose of the vector x. I is the identity matrix of proper order. The minimum and maximum eigenvalues of a symmetric matrix H are denoted by λmin(H) and λmax(H), respectively. For a matrix A, ρ(A) and σ(A) denote its spectral radius and spectrum, respectively. The real part of a complex number c is denoted by Re(c). The imaginary unit is denoted by ı.

2. Analysis of the SL iteration

In this section, we analyze convergence properties of the SL iteration method. The choice of the optimal iteration parameter is also studied. The SL preconditioner $P_{\mathrm{SL}}$ defined in (1.8) can be induced by the matrix splitting
$$
\mathcal{A} = P_{\mathrm{SL}} - R_{\mathrm{SL}} = \begin{bmatrix} A & \frac{1}{\alpha}AQ^{-1}B^T \\ -B & 0 \end{bmatrix} - \begin{bmatrix} 0 & (\frac{1}{\alpha}A - Q)Q^{-1}B^T \\ 0 & 0 \end{bmatrix},
$$
which results in the following SL iteration method.

Method 2.1. (The SL iteration method) Given initial vectors x^{(0)} ∈ R^n, y^{(0)} ∈ R^m and a positive parameter α, for k = 0, 1, 2, ... until the iteration sequence {((x^{(k)})^T, (y^{(k)})^T)^T} converges to the exact solution of the augmented linear system (1.4), compute
$$
\begin{bmatrix} A & \frac{1}{\alpha}AQ^{-1}B^T \\ -B & 0 \end{bmatrix}\begin{bmatrix} x^{(k+1)} \\ y^{(k+1)} \end{bmatrix} = \begin{bmatrix} 0 & (\frac{1}{\alpha}A - Q)Q^{-1}B^T \\ 0 & 0 \end{bmatrix}\begin{bmatrix} x^{(k)} \\ y^{(k)} \end{bmatrix} + \begin{bmatrix} f \\ -g \end{bmatrix}.
\tag{2.1}
$$
The SL iteration (2.1) can be rewritten in the fixed-point form $X^{(k+1)} = T_{\mathrm{SL}}X^{(k)} + P_{\mathrm{SL}}^{-1}c$, where $T_{\mathrm{SL}} = P_{\mathrm{SL}}^{-1}R_{\mathrm{SL}}$ is the iteration matrix. Now, we discuss convergence of the SL iteration method.
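Before turning to the analysis, we note that Method 2.1 is straightforward to prototype. The following MATLAB sketch is an illustration only (it forms $P_{\mathrm{SL}}$ and $R_{\mathrm{SL}}$ explicitly, which one would avoid in a serious implementation); it assumes matrices A, B, Q, vectors f, g, a parameter alpha and a sweep count kmax are already in scope:

```matlab
% Prototype of the SL iteration (2.1); assumes A (n x n), B (m x n),
% Q (n x n), right-hand sides f, g, alpha > 0 and kmax in scope.
n   = size(A,1);  m = size(B,1);
ABt = (1/alpha)*(A*(Q\B'));                    % (1/alpha) A Q^{-1} B^T
PSL = [A, ABt; -B, sparse(m,m)];               % P_SL from (1.8)
RSL = [sparse(n,n), ABt - B'; sparse(m,n+m)];  % R_SL = P_SL - saddle matrix
X   = zeros(n+m,1);  c = [f; -g];
for k = 1:kmax                                 % Method 2.1
    X = PSL \ (RSL*X + c);                     % X^{k+1} = P_SL^{-1}(R_SL X^k + c)
end
x = X(1:n);  y = X(n+1:end);                   % velocity and pressure iterates
```

Here the identity $(\frac{1}{\alpha}A - Q)Q^{-1}B^T = \frac{1}{\alpha}AQ^{-1}B^T - B^T$ is used to form the upper-right block of $R_{\mathrm{SL}}$.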

Lemma 2.2. Let A ∈ R^{n×n} be a nonsymmetric positive definite matrix, B ∈ R^{m×n} be a matrix of full rank and Q ∈ R^{n×n} be a symmetric positive definite matrix. If µ is any eigenvalue of $(BQ^{-1}B^T)^{-1}BA^{-1}B^T$, then Re(µ) > 0.

Proof. Let µ be an eigenvalue of the matrix $(BQ^{-1}B^T)^{-1}BA^{-1}B^T$ and u ∈ C^m be a corresponding eigenvector. Then we have
$$
BA^{-1}B^Tu = \mu BQ^{-1}B^Tu.
$$
Since u ≠ 0, taking the scalar product of the above equation with u and dividing by u*u, we obtain α₁ + ıβ₁ = µα₂, with
$$
\alpha_1 + \imath\beta_1 = \frac{u^*BA^{-1}B^Tu}{u^*u}, \qquad \alpha_2 = \frac{u^*BQ^{-1}B^Tu}{u^*u}.
$$

Since B is of full rank and u ≠ 0, we have $B^Tu ≠ 0$. So α_i > 0 (i = 1, 2), as A and Q are, respectively, nonsymmetric and symmetric positive definite. Thus, it is obvious that Re(µ) > 0.

Now, we present convergence results for the SL iteration method.

Theorem 2.3. Assume the conditions of Lemma 2.2 hold. Denote the upper and lower bounds of the real parts of the eigenvalues of $(BQ^{-1}B^T)^{-1}BA^{-1}B^T$ by γmax and γmin, respectively, and the upper bound of the absolute values of the imaginary parts of these eigenvalues by ηmax. Then the SL iteration (2.1) is convergent if α satisfies
$$
0 < \alpha < \begin{cases}
\dfrac{2\gamma_{\min}}{\gamma_{\min}^2+\eta_{\max}^2}, & \text{if } \eta_{\max} \ge \sqrt{\gamma_{\max}\gamma_{\min}},\\[2mm]
\dfrac{2\gamma_{\max}}{\gamma_{\max}^2+\eta_{\max}^2}, & \text{if } \eta_{\max} < \sqrt{\gamma_{\max}\gamma_{\min}}.
\end{cases}
\tag{2.2}
$$

Proof. From (1.8), by simple calculations, we have
$$
P_{\mathrm{SL}}^{-1} = \begin{bmatrix} A^{-1} - Q^{-1}B^T(BQ^{-1}B^T)^{-1}BA^{-1} & -Q^{-1}B^T(BQ^{-1}B^T)^{-1} \\ \alpha(BQ^{-1}B^T)^{-1}BA^{-1} & \alpha(BQ^{-1}B^T)^{-1} \end{bmatrix}.
$$
Then the iteration matrix is
$$
T_{\mathrm{SL}} = \begin{bmatrix} A^{-1} - Q^{-1}B^T(BQ^{-1}B^T)^{-1}BA^{-1} & -Q^{-1}B^T(BQ^{-1}B^T)^{-1} \\ \alpha(BQ^{-1}B^T)^{-1}BA^{-1} & \alpha(BQ^{-1}B^T)^{-1} \end{bmatrix}\begin{bmatrix} 0 & (\frac{1}{\alpha}A - Q)Q^{-1}B^T \\ 0 & 0 \end{bmatrix}
= \begin{bmatrix} 0 & -A^{-1}B^T + Q^{-1}B^T(BQ^{-1}B^T)^{-1}BA^{-1}B^T \\ 0 & I - \alpha(BQ^{-1}B^T)^{-1}BA^{-1}B^T \end{bmatrix}.
\tag{2.3}
$$
From (2.3), we know that the spectral radius of the SL iteration matrix is
$$
\rho(T_{\mathrm{SL}}) = \max_{1\le i\le m}\sqrt{(1-\alpha\gamma_i)^2 + \alpha^2\eta_i^2},
$$
where µ_i = γ_i + ıη_i (i = 1, 2, ..., m) are the eigenvalues of $(BQ^{-1}B^T)^{-1}BA^{-1}B^T$. To ensure convergence of the SL iteration, it must hold that ρ(T_SL) < 1. By Lemma 2.2, we have γ_i > 0. Simple calculations then give the conditions in (2.2).

Based on the convergence condition (2.2), we have the following result about the optimal α that minimizes the spectral radius of the SL iteration matrix (2.3); it follows by discussions similar to those for Theorem 2.2 in [15].

Theorem 2.4. Assume the conditions of Theorem 2.3 hold. Then the optimal α, which minimizes the spectral radius of the SL iteration matrix (2.3), is
$$
\alpha_{\mathrm{opt}} = \begin{cases}
\dfrac{\gamma_{\min}}{\gamma_{\min}^2+\eta_{\max}^2}, & \text{if } \eta_{\max} \ge \sqrt{\gamma_{\max}\gamma_{\min}},\\[2mm]
\dfrac{2}{\gamma_{\max}+\gamma_{\min}}, & \text{if } \eta_{\max} < \sqrt{\gamma_{\max}\gamma_{\min}}.
\end{cases}
$$
The corresponding optimal spectral radius is
$$
\rho(T_{\mathrm{SL}}) = \begin{cases}
\dfrac{\eta_{\max}}{\sqrt{\gamma_{\min}^2+\eta_{\max}^2}}, & \text{if } \eta_{\max} \ge \sqrt{\gamma_{\max}\gamma_{\min}},\\[2mm]
\dfrac{\sqrt{(\gamma_{\max}-\gamma_{\min})^2+4\eta_{\max}^2}}{\gamma_{\max}+\gamma_{\min}}, & \text{if } \eta_{\max} < \sqrt{\gamma_{\max}\gamma_{\min}}.
\end{cases}
$$
Based upon the proof of Theorem 2.3, when A and Q are symmetric, we can derive the following much simpler convergence condition for the SL iteration method, which is easy to use in practical applications.

Corollary 2.5. Let A, Q ∈ R^{n×n} be symmetric positive definite matrices and B ∈ R^{m×n} be a matrix of full rank. Then $(BQ^{-1}B^T)^{-1}BA^{-1}B^T$ has a real and positive spectrum. Let µmax and µmin be its largest and smallest eigenvalues. Then the SL iteration method is convergent if
$$
0 < \alpha < \frac{2}{\mu_{\max}}.
$$
The optimal α, which minimizes the spectral radius ρ(T_SL), is
$$
\alpha_{\mathrm{opt}} = \frac{2}{\mu_{\max}+\mu_{\min}},
$$
and the corresponding optimal spectral radius is
$$
\rho(T_{\mathrm{SL}}) = \frac{\mu_{\max}-\mu_{\min}}{\mu_{\max}+\mu_{\min}}.
$$
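As a quick numerical illustration of Corollary 2.5, with made-up eigenvalues µmin = 1/2 and µmax = 2 (not taken from the test problems of Section 5):
$$
\alpha_{\mathrm{opt}} = \frac{2}{\mu_{\max}+\mu_{\min}} = \frac{2}{2.5} = 0.8, \qquad
\rho(T_{\mathrm{SL}}) = \frac{\mu_{\max}-\mu_{\min}}{\mu_{\max}+\mu_{\min}} = \frac{1.5}{2.5} = 0.6,
$$
so the stationary SL iteration contracts the error by roughly a factor of 0.6 per sweep, and any α ∈ (0, 2/µmax) = (0, 1) yields convergence.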

3. Spectral properties of the SL preconditioned matrix

In this section, spectral properties of the preconditioned matrix $P_{\mathrm{SL}}^{-1}\mathcal{A}$ are analyzed. Bounds on its eigenvalues and the corresponding eigenvector distributions are established. Besides, properties of the minimal polynomial of $P_{\mathrm{SL}}^{-1}\mathcal{A}$ are also discussed, which is instructive for the implementation of the Krylov subspace acceleration. From (2.3), we have
$$
P_{\mathrm{SL}}^{-1}\mathcal{A} = I - T_{\mathrm{SL}} = \begin{bmatrix} I & A^{-1}B^T - Q^{-1}B^T(BQ^{-1}B^T)^{-1}BA^{-1}B^T \\ 0 & \alpha(BQ^{-1}B^T)^{-1}BA^{-1}B^T \end{bmatrix}.
\tag{3.1}
$$
This immediately leads to the following results.

Theorem 3.1. Let A be a positive real matrix, B ∈ R^{m×n} be a matrix of full rank and Q ∈ R^{n×n} be an approximation of A. Then the eigenvalues of $P_{\mathrm{SL}}^{-1}\mathcal{A}$ defined in (3.1) are 1 with multiplicity at least n. The remaining eigenvalues are of the form αµ_i (i = 1, 2, ..., m), where the µ_i are eigenvalues of the matrix $(BQ^{-1}B^T)^{-1}BA^{-1}B^T$.

Remark 3.2. From Lemma 2.2, we know that the eigenvalues of $(BQ^{-1}B^T)^{-1}BA^{-1}B^T$ all have positive real parts when Q is symmetric positive definite. Thus the eigenvalues of $P_{\mathrm{SL}}^{-1}\mathcal{A}$ lie in a box in the right half of the complex plane, which may result in faster convergence of the Krylov subspace acceleration. In particular, when A and Q are both symmetric positive definite, the eigenvalues of $P_{\mathrm{SL}}^{-1}\mathcal{A}$ are located in a positive real interval.

Remark 3.3. Assume that A and Q are both symmetric positive definite. Consider the generalized eigenvalue problem $BA^{-1}B^Tu = \lambda BQ^{-1}B^Tu$ and denote $\tilde u = B^Tu$. Then it holds that
$$
\lambda_{\min}\big((BQ^{-1}B^T)^{-1}BA^{-1}B^T\big) = \min_{u\neq 0}\frac{u^TBA^{-1}B^Tu}{u^TBQ^{-1}B^Tu} = \min_{\tilde u\neq 0}\frac{\tilde u^TA^{-1}\tilde u}{\tilde u^TQ^{-1}\tilde u} = \lambda_{\min}(QA^{-1}).
$$
Similarly, we have $\lambda_{\max}((BQ^{-1}B^T)^{-1}BA^{-1}B^T) = \lambda_{\max}(QA^{-1})$. Thus the eigenvalues of $(BQ^{-1}B^T)^{-1}BA^{-1}B^T$ are located in the interval $[\lambda_{\min}(QA^{-1}), \lambda_{\max}(QA^{-1})]$.

In the following, the eigenvector distributions and an upper bound for the degree of the minimal polynomial of $P_{\mathrm{SL}}^{-1}\mathcal{A}$ are presented, which are instructive for the Krylov subspace acceleration.

Theorem 3.4. Assume the conditions of Theorem 3.1 hold. Then the preconditioned matrix $P_{\mathrm{SL}}^{-1}\mathcal{A}$ has n + i + j linearly independent eigenvectors. There are

(1) n eigenvectors of the form $[u^T\ 0^T]^T$ that correspond to the eigenvalue 1;
(2) i (1 ≤ i ≤ m) eigenvectors of the form $[u^T\ v^T]^T$, with $(\frac{1}{\alpha}A - Q)Q^{-1}B^Tv = 0$, that correspond to the eigenvalue 1;
(3) j (1 ≤ j ≤ m) eigenvectors of the form $[u^T\ v^T]^T$ that correspond to the nonunit eigenvalues.

Proof. Let θ and $[u^T\ v^T]^T$ be an eigenpair of $P_{\mathrm{SL}}^{-1}\mathcal{A}$. Then
$$
\begin{bmatrix} A & B^T \\ -B & 0 \end{bmatrix}\begin{bmatrix} u \\ v \end{bmatrix} = \theta\begin{bmatrix} A & \frac{1}{\alpha}AQ^{-1}B^T \\ -B & 0 \end{bmatrix}\begin{bmatrix} u \\ v \end{bmatrix}.
$$
This is equivalent to
$$
(1-\theta)Au = \Big(\frac{\theta}{\alpha}A - Q\Big)Q^{-1}B^Tv
\tag{3.2}
$$
and (1 - θ)Bu = 0, which implies that either θ = 1 or Bu = 0.

If θ = 1, (3.2) reduces to
$$
\Big(\frac{1}{\alpha}A - Q\Big)Q^{-1}B^Tv = 0,
\tag{3.3}
$$
which is satisfied when v = 0. So there are n linearly independent eigenvectors of the form $[u^T\ 0^T]^T$ that correspond to the eigenvalue 1. If v ≠ 0, there are i (1 ≤ i ≤ m) linearly independent eigenvectors of the form $[u^T\ v^T]^T$ with v satisfying (3.3).

If θ ≠ 1, from (3.2) we have $u = \frac{1}{1-\theta}A^{-1}(\frac{\theta}{\alpha}A - Q)Q^{-1}B^Tv$. Substituting this into Bu = 0, we get
$$
\alpha BA^{-1}B^Tv = \theta BQ^{-1}B^Tv.
$$
It is easy to see that v ≠ 0; otherwise $u = \frac{1}{1-\theta}A^{-1}(\frac{\theta}{\alpha}A - Q)Q^{-1}B^Tv = 0$, which contradicts $[u^T\ v^T]^T$ being an eigenvector. Then there are j (1 ≤ j ≤ m) linearly independent eigenvectors of the form $[u^T\ v^T]^T$ that correspond to nonunit eigenvalues.

Thus the eigenvectors are obtained. Next, we prove that they are linearly independent. Let
$$
a^{(1)} = [a_1^{(1)}, \cdots, a_n^{(1)}]^T, \qquad a^{(2)} = [a_1^{(2)}, \cdots, a_i^{(2)}]^T, \qquad a^{(3)} = [a_1^{(3)}, \cdots, a_j^{(3)}]^T
$$
be three vectors such that
$$
\begin{bmatrix} u_1^{(1)} & \cdots & u_n^{(1)} \\ 0 & \cdots & 0 \end{bmatrix}\begin{bmatrix} a_1^{(1)} \\ \vdots \\ a_n^{(1)} \end{bmatrix}
+ \begin{bmatrix} u_1^{(2)} & \cdots & u_i^{(2)} \\ v_1^{(2)} & \cdots & v_i^{(2)} \end{bmatrix}\begin{bmatrix} a_1^{(2)} \\ \vdots \\ a_i^{(2)} \end{bmatrix}
+ \begin{bmatrix} u_1^{(3)} & \cdots & u_j^{(3)} \\ v_1^{(3)} & \cdots & v_j^{(3)} \end{bmatrix}\begin{bmatrix} a_1^{(3)} \\ \vdots \\ a_j^{(3)} \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \end{bmatrix},
\tag{3.4}
$$
where the first matrix arises from the case θ_k = 1 (k = 1, ..., n), the second from the case θ_k = 1 (k = 1, ..., i) and the third from the case θ_k ≠ 1 (k = 1, ..., j). To show that the n + i + j eigenvectors are linearly independent, the $a^{(k)}$ (k = 1, 2, 3) should all be zero vectors. Multiplying both sides of (3.4) from the left by $P_{\mathrm{SL}}^{-1}\mathcal{A}$, we have
$$
\begin{bmatrix} u_1^{(1)} & \cdots & u_n^{(1)} \\ 0 & \cdots & 0 \end{bmatrix}\begin{bmatrix} a_1^{(1)} \\ \vdots \\ a_n^{(1)} \end{bmatrix}
+ \begin{bmatrix} u_1^{(2)} & \cdots & u_i^{(2)} \\ v_1^{(2)} & \cdots & v_i^{(2)} \end{bmatrix}\begin{bmatrix} a_1^{(2)} \\ \vdots \\ a_i^{(2)} \end{bmatrix}
+ \begin{bmatrix} u_1^{(3)} & \cdots & u_j^{(3)} \\ v_1^{(3)} & \cdots & v_j^{(3)} \end{bmatrix}\begin{bmatrix} \theta_1 a_1^{(3)} \\ \vdots \\ \theta_j a_j^{(3)} \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \end{bmatrix}.
\tag{3.5}
$$
Subtracting (3.4) from (3.5) gives
$$
\begin{bmatrix} u_1^{(3)} & \cdots & u_j^{(3)} \\ v_1^{(3)} & \cdots & v_j^{(3)} \end{bmatrix}\begin{bmatrix} (\theta_1-1)a_1^{(3)} \\ \vdots \\ (\theta_j-1)a_j^{(3)} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.
$$
Since θ_k ≠ 1 and the columns of the coefficient matrix are j linearly independent vectors, we have $a_k^{(3)} = 0$ (k = 1, ..., j). Then we easily obtain $a_k^{(2)} = 0$ (k = 1, ..., i), as the $v_k^{(2)}$ are linearly independent. Thus, (3.4) simplifies to
$$
\begin{bmatrix} u_1^{(1)} & \cdots & u_n^{(1)} \\ 0 & \cdots & 0 \end{bmatrix}\begin{bmatrix} a_1^{(1)} \\ \vdots \\ a_n^{(1)} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.
$$
As the $u_k^{(1)}$ (k = 1, ..., n) are linearly independent, we have $a_k^{(1)} = 0$, k = 1, ..., n.

Theorem 3.5. Assume the conditions of Theorem 2.3 hold. Let l (1 ≤ l ≤ m) be the degree of the minimal polynomial of the matrix $\alpha(BQ^{-1}B^T)^{-1}BA^{-1}B^T - I$. Then the degree of the minimal polynomial of the matrix $P_{\mathrm{SL}}^{-1}\mathcal{A}$ is at most l + 1.

Proof. We know that $P_{\mathrm{SL}}^{-1}\mathcal{A}$ takes the form
$$
P_{\mathrm{SL}}^{-1}\mathcal{A} = I + \begin{bmatrix} 0 & \Theta_2 \\ 0 & \Theta_1 \end{bmatrix} \equiv I + \Upsilon,
$$

where $\Theta_1 = \alpha(BQ^{-1}B^T)^{-1}BA^{-1}B^T - I$ and $\Theta_2 = A^{-1}B^T - Q^{-1}B^T(BQ^{-1}B^T)^{-1}BA^{-1}B^T$. Note that, for any positive integer k, we have
$$
\Upsilon^k = \begin{bmatrix} 0 & \Theta_2 \\ 0 & \Theta_1 \end{bmatrix}^k = \begin{bmatrix} 0 & \Theta_2\Theta_1^{k-1} \\ 0 & \Theta_1^k \end{bmatrix}.
$$
It follows that if $\hat p(t) = \alpha_0 + \alpha_1 t + \cdots + \alpha_k t^k$ is a polynomial and $p(t) = t\hat p(t)$, then
$$
p(\Upsilon) = \begin{bmatrix} 0 & \Theta_2\hat p(\Theta_1) \\ 0 & \Theta_1\hat p(\Theta_1) \end{bmatrix}.
$$
Therefore, if $p^*(t)$ is the minimal polynomial of Θ₁, then $p(t) = tp^*(t)$ is an annihilating polynomial of Υ. Because $P_{\mathrm{SL}}^{-1}\mathcal{A} = I + \Upsilon$, it immediately follows that the degree of the minimal polynomial of $P_{\mathrm{SL}}^{-1}\mathcal{A}$ is at most l + 1.

Remark 3.6. Based on Theorem 3.5 and the results in [33], we know that the dimension of the Krylov subspace $\mathcal{K}(P_{\mathrm{SL}}^{-1}\mathcal{A}, c)$ is at most l + 1. Thus GMRES with the SL preconditioner for solving the saddle point problem (1.4) will terminate in at most l + 1 steps.

4. Implementation aspects

From the discussions in Section 2, we notice that when the condition number of $(BQ^{-1}B^T)^{-1}BA^{-1}B^T$ is too large, even with the optimal choice of α, the convergence of the stationary iteration (2.1) is typically too slow for it to be competitive. In practical implementations, we therefore use the SL splitting as a preconditioner for a Krylov subspace method, e.g., GMRES. The linear system (1.4) is equivalent to the preconditioned system
$$
P_{\mathrm{SL}}^{-1}\mathcal{A}X = P_{\mathrm{SL}}^{-1}c.
\tag{4.1}
$$
It is easy to verify that
$$
P_{\mathrm{SL}} = \frac{1}{\alpha}\begin{bmatrix} A & 0 \\ -B & BQ^{-1}B^T \end{bmatrix}\begin{bmatrix} \alpha I & Q^{-1}B^T \\ 0 & I \end{bmatrix};
$$
indeed, multiplying out the two factors gives $\frac{1}{\alpha}\begin{bmatrix} \alpha A & AQ^{-1}B^T \\ -\alpha B & 0 \end{bmatrix}$, which is exactly (1.8).

Since the pre-factor 1/α has no effect on the preconditioned system (4.1), at each step of the application of the preconditioned GMRES (PGMRES) with the SL preconditioner, we need to solve a linear system of the form
$$
\begin{bmatrix} A & 0 \\ -B & BQ^{-1}B^T \end{bmatrix}\begin{bmatrix} \alpha I & Q^{-1}B^T \\ 0 & I \end{bmatrix}z = r
$$
for a given r. Then we have the following algorithm for the application of the SL preconditioner.

Algorithm 4.1. For a given residual vector $r = (r_1^T, r_2^T)^T$, we can compute the update vector $z = (z_1^T, z_2^T)^T$ by the following steps:
(1) solve $Au_1 = r_1$;
(2) solve $(BQ^{-1}B^T)z_2 = Bu_1 + r_2$;
(3) solve $Qu_2 = B^Tz_2$;
(4) $z_1 = \frac{1}{\alpha}(u_1 - u_2)$.

From Algorithm 4.1, we see that three subsystems with coefficient matrices A, $BQ^{-1}B^T$ and Q have to be solved at each iteration step. If we choose Q to be a diagonal or tridiagonal approximation of A, then $BQ^{-1}B^T$ also has a sparse structure. Direct methods such as LU factorization, or inexact inner solves such as GMRES, can then be used to solve the corresponding linear systems. In particular, when A and Q are symmetric positive definite, the Cholesky factorization or the conjugate gradient (CG) method can be used.
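A direct MATLAB transcription of Algorithm 4.1 might look as follows. This is a sketch only: the sub-solves are done here by backslash, whereas in the experiments of Section 5 they are done by Cholesky/LU factorization with AMD reordering, and in practice $BQ^{-1}B^T$ and all factors would be formed once and reused across GMRES steps:

```matlab
function z = apply_SL(A, B, Q, alpha, r)
% Carry out steps (1)-(4) of Algorithm 4.1 for a residual r = [r1; r2];
% the immaterial scalar pre-factor 1/alpha of P_SL is omitted, as noted
% in the text.
n  = size(A,1);
r1 = r(1:n);  r2 = r(n+1:end);
u1 = A \ r1;                   % (1) solve A u1 = r1
S  = B*(Q\B');                 % B Q^{-1} B^T (factor once in practice)
z2 = S \ (B*u1 + r2);          % (2) solve (B Q^{-1} B^T) z2 = B u1 + r2
u2 = Q \ (B'*z2);              % (3) solve Q u2 = B^T z2
z1 = (u1 - u2)/alpha;          % (4)
z  = [z1; z2];
end
```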

Now we analyze the numerical stability and convergence behavior of GMRES. From the characterization of Krylov subspace methods, it follows that the kth residual of GMRES can be expressed in polynomial form as $r_k = c - \mathcal{A}X_k = p_k(\mathcal{A})r_0$, with $p_k(0) = 1$ and $p_k \in \Pi_k$, the set of polynomials of degree not greater than k. GMRES implicitly generates the polynomial $p_k$ for which $\|r_k\|$ is minimal. It is well known that if $\mathcal{A}$ is diagonalizable with eigenvector matrix W, then
$$
\frac{\|r_k\|}{\|r_0\|} \le \kappa_2(W)\min_{p_k\in\Pi_k,\,p_k(0)=1}\ \max_{\lambda\in\sigma(\mathcal{A})}|p_k(\lambda)|,
$$
where $\kappa_2(W) = \|W\|\cdot\|W^{-1}\|$ is the spectral condition number of W. In some cases, this upper bound may be used to predict the true reduction of the GMRES residuals. When the eigenvalues of $\mathcal{A}$ lie in an ellipse in the right half-plane, which holds for the results in this paper, the rate of convergence can be bounded. If the eigenvalues of $\mathcal{A}$ appear in complex conjugate pairs, the ellipse has its center d on the real axis, say d > 0. We denote the focal points by d − c and d + c and the intersections of the ellipse with the real axis by d − a and d + a. For this situation, the asymptotic convergence factor is defined as
$$
\frac{a + \sqrt{a^2 - c^2}}{d + \sqrt{d^2 - c^2}}.
\tag{4.2}
$$
See [33] for more detailed discussions. When preconditioners are used in GMRES, the asymptotic convergence factor (4.2) depends on the distribution of the eigenvalues of the preconditioned matrix. For the SL preconditioner (1.8), in the case of symmetric positive definite A and Q, this distribution can be measured by the spectral condition number, which is the quotient of the largest and smallest eigenvalues in absolute value. However, in the case that A is nonsymmetric, this condition number is not a good estimate, owing to the large number of eigenvalues equal to one. Note that the number of iterations of GMRES is at most equal to the number of distinct eigenvalues if they are all simple. We therefore use an effective spectral condition number κ, instead of the true spectral condition number, to measure the distribution of the eigenvalues: κ is the quotient of the largest and smallest eigenvalues in absolute value that are not equal to one. This technique is also used in [18]. Let µ_i (i = 1, 2, ..., m) be the eigenvalues of $(BQ^{-1}B^T)^{-1}BA^{-1}B^T$. Then from Theorem 3.1, we have
$$
\kappa(P_{\mathrm{SL}}^{-1}\mathcal{A}) = \frac{\alpha\max|\mu_i|}{\alpha\min|\mu_i|} = \frac{\max|\mu_i|}{\min|\mu_i|}.
$$
This analysis shows that $\kappa(P_{\mathrm{SL}}^{-1}\mathcal{A})$ becomes much better when max|µ_i| is close to min|µ_i|, i.e., when Q is a good approximation to A.


5. Numerical experiments

In this section, we provide numerical experiments on linear systems arising from the Oseen problem (1.2). Our aim is to compare the performance of the SL preconditioner with the DPSS and RDPSS preconditioners. The experiments have been performed using GMRES with these preconditioners. All the sub-linear systems arising in the application of the preconditioners are solved by Cholesky or LU factorization in combination with AMD reordering. As for the preconditioning matrix Q in the SL preconditioner, we consider the following two choices:
• Case I: Q = diag(A);
• Case II: Q = tridiag(H) with H = ½(A + Aᵀ).
The test problem is a "leaky" two-dimensional lid-driven cavity problem. We discretize the Oseen problem (1.2) with the Q2-Q1 finite element method on uniform or stretched l × l grids. In the computations, we employ the driver function "navier_testproblem" of the IFISS software package [21] with default options except for the mesh size. The derived linear systems are structured as in (1.4) with n = 2(l + 1)² and m = ¼(l + 2)², for a total of (9/4)l² + 5l + 3 unknowns. All computations are performed in MATLAB (version 8.3.0.532 (R2014a)) on a personal computer with a 2.60 GHz central processing unit (Intel(R) Core(TM) i5-4210) and 4.00G memory. Starting with zero initial guesses, we compare the tested methods in terms of the number of iteration steps (denoted by 'IT') and the elapsed CPU time in seconds (denoted by 'CPU'). All iteration processes are terminated when the current residuals satisfy
$$
\mathrm{RES} := \frac{\sqrt{\|f - Ax^{(k)} - B^Ty^{(k)}\|_2^2 + \|g - Bx^{(k)}\|_2^2}}{\sqrt{\|f\|_2^2 + \|g\|_2^2}} < 10^{-6},
$$
with $((x^{(k)})^T, (y^{(k)})^T)^T$ being the current approximate solution.
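In MATLAB, a preconditioned solve of this kind can be sketched as follows. This is an illustration, not the exact IFISS-based driver used here; apply_SL is the routine sketched in Section 4, and Amat and c denote the assembled saddle point matrix and right-hand side of (1.4). Note also that MATLAB's gmres measures the preconditioned relative residual, so its stopping test differs slightly from RES above:

```matlab
% Illustration: Amat = [A, B'; -B, sparse(m,m)], c = [f; -g].
tol = 1e-6;  maxit = 1000;
Mfun = @(r) apply_SL(A, B, Q, alpha, r);          % applies P_SL^{-1} (up to 1/alpha)
[X, flag, relres, iter] = gmres(Amat, c, [], tol, maxit, Mfun);
```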

The numerical results of the tested methods on the uniform and stretched grids are reported in Tables 1-4 and 5-8, respectively. Here, 'DPSS' and 'RDPSS' denote GMRES with the DPSS and RDPSS preconditioners, respectively. Analogously, the SL preconditioned GMRES with the preconditioning matrix Q chosen as Case I and Case II is denoted by 'SL-I' and 'SL-II', respectively. GMRES without preconditioning is denoted by 'Non'. In Tables 1-3 and 5-7, problems of different sizes and viscosity parameters ν are used, and each tested method is run with the iteration parameters α = 0.01, 0.1 and 1. Meanwhile, in Tables 4 and 8, the parameters αopt are the theoretically optimal ones based on Theorem 2.4. Here, '–' means the tested method did not converge within the prescribed maximum of k_max = 1000 iterations.

From these tables, we can see significant improvements in the performance of GMRES with all preconditioners. Meanwhile, the SL and RDPSS preconditioners lead to much better numerical results than the DPSS preconditioner. We observe from Tables 1-3 and 5-7 that the SL and RDPSS preconditioned GMRES need fewer iteration steps and less CPU time compared with the DPSS preconditioned one. Besides, we also observe that their performance is not sensitive to the iteration parameter α, whereas GMRES with the DPSS preconditioner is unduly parameter dependent. Comparing the performance of the SL and RDPSS preconditioned GMRES, we see that the SL preconditioned GMRES with Q chosen as Case I performs better than the RDPSS preconditioned GMRES; it appears to offer advantages in terms of both iteration steps and runtime. As for the SL preconditioned GMRES with Q chosen as Case II, it may outperform all other methods in terms of iteration steps, but it becomes more time consuming with increasing problem size. The reason is that the involved sub-linear system $BQ^{-1}B^Ty = b$ is much harder to solve when Q is chosen as Case II, as this leads to a denser coefficient matrix. Moreover, its poor performance for problems on large stretched grids is owing to the loss of symmetric dominance of A.

In order to further investigate the dependence of the RDPSS and SL preconditioned GMRES on the iteration parameter α, we plot in Figure 1 their iteration counts as α varies from 0.001 to 1. Here, we choose the case ν = 0.1 on 32 × 32 grids. From this figure, we see that the SL preconditioned GMRES needs fewer iteration steps than the RDPSS preconditioned GMRES. What is more, their iteration counts do not change dramatically with α. Thus, there may exist a fairly wide range of values of α that produce similarly fast convergence.

The eigenvalue distributions of the original matrix $\mathcal{A}$ and of the preconditioned matrices for all the tested methods are shown in Figures 2-4. The extreme eigenvalues and condition numbers of these matrices are listed in Table 9. Meanwhile, the convergence histories of the corresponding PGMRES methods are shown in Figure 5. Here, we again choose the case ν = 0.1 on 32 × 32 grids. 'RDPSS' denotes the RDPSS preconditioned matrices, and the SL preconditioned ones with the preconditioning matrix Q chosen as Case I and Case II are denoted by 'SL-I' and 'SL-II', respectively. For all the preconditioners, the parameters α are the optimal ones chosen as in Table 4 or 8. In Table 9, γmax and γmin denote, respectively, the upper and lower bounds of the real parts of the eigenvalues of the related matrix, and ηmax denotes the upper bound of the absolute values of the imaginary parts of those eigenvalues. κ is the effective spectral condition number defined in Section 4.

From Figures 2-4, we see that all the preconditioned matrices have much denser spectrum distributions compared with the original matrices. Note that the spectra of the original matrices are also bounded, but they are much closer to the origin, which might delay convergence; see the numerical results of GMRES without preconditioning in Tables 4 and 8. Moreover, the spectrum distributions of the SL preconditioned matrices are more clustered than those of the RDPSS preconditioned ones. The former are better separated from zero, which in turn leads to faster convergence of PGMRES, as can be seen from the numerical results in Tables 4 and 8. The results of Table 9 agree with the numerical results in Tables 4 and 8 and Figures 2-5. The SL preconditioned matrices have tighter extreme eigenvalue distributions and smaller condition numbers than the original matrices and the RDPSS preconditioned matrices. Thus the SL preconditioned GMRES is well defined and has better numerical performance than GMRES without preconditioning and the RDPSS preconditioned GMRES.

Table 1: Numerical results of PGMRES for uniform grids (ν = 0.01).

Grid                32 × 32                     64 × 64                     128 × 128
α            0.01     0.1       1        0.01     0.1       1        0.01      0.1       1
DPSS   IT      42      137      399        93      368      856       206       914       –
       CPU  0.1623   0.9569   8.0655    2.7633  15.6052  94.1069   23.2811  320.0132      –
RDPSS  IT      52       60       64        76       86       93       103       120      139
       CPU  0.2216   0.2645   0.2871    1.3757   1.6204   1.7928    9.2892   11.4096  13.6078
SL-I   IT      36       44       49        46       61       63        61        79       77
       CPU  0.1421   0.1865   0.2121    0.7994   1.0737   1.1122    5.2938    6.9173   6.7923
SL-II  IT      33       41       45        42       54       57        54        70       68
       CPU  0.1955   0.2363   0.2601    1.3627   1.5103   1.6121   16.8545   16.5465  15.1968

6. Conclusions

We have proposed and analyzed a class of SIMPLE-like (SL) preconditioners for solving saddle point problems from the steady Navier-Stokes equations. The new preconditioners include the RDPSS preconditioner proposed in [15] as a special case. Convergence analysis of the corresponding SL iteration method and the selection of the optimal iteration parameter are given. Spectral analysis of the SL preconditioned matrix is presented, including bounds on the eigenvalues, the distribution of the corresponding eigenvectors and its minimal polynomial. Numerical experiments on the "leaky" two-dimensional lid-driven cavity problem are used to show its advantages over the DPSS and RDPSS preconditioners.


Table 2: Numerical results of PGMRES for uniform grids (ν = 0.1).

Grid                32 × 32                     64 × 64                     128 × 128
α            0.01     0.1       1        0.01     0.1       1        0.01      0.1       1
DPSS   IT      40       70      203        55      160      467        98       236      989
       CPU  0.1618   0.3464   1.9703    0.9839   3.7812  25.3184    8.7591    8.6679  373.8501
RDPSS  IT      26       31       34        40       46       51        59        63       77
       CPU  0.1008   0.1181   0.1283    0.7112   0.8198   0.8814    5.1424    5.4762   6.6272
SL-I   IT      24       28       30        34       41       42        50        55       59
       CPU  0.1002   0.1092   0.1179    0.5973   0.7114   0.7263    4.3202    4.7845   5.1354
SL-II  IT      21       24       27        30       36       30        44        50       51
       CPU  0.1505   0.1545   0.1772    1.1793   1.1998   1.1931   15.5499   15.7271  15.9122

Table 3: Numerical results of PGMRES for uniform grids (ν = 1).

Grid                32 × 32                     64 × 64                     128 × 128
α            0.01     0.1       1        0.01     0.1       1        0.01      0.1       1
DPSS   IT      44       50      132        65       89      265        76       112      504
       CPU  0.2856   0.2212   0.8995    1.0978   1.6814   8.4163    6.5216   10.1362  99.3014
RDPSS  IT      23       26       28        36       38       40        52        52       56
       CPU  0.1605   0.1004   0.1099    0.6239   0.6493   0.6807    4.4792    4.4327   4.6849
SL-I   IT      25       26       28        36       38       43        49        51       65
       CPU  0.1145   0.1061   0.1105    0.6292   0.6475   0.7528    4.1803    4.3627   5.5702
SL-II  IT      21       22       24        33       34       37        44        46       56
       CPU  0.1695   0.1447   0.1549    1.1408   1.1578   1.1889   14.9657   15.1361  16.3128


Figure 1: Iteration steps for SL and RDPSS methods with varying α for 32 × 32 grids (ν = 0.1).


Table 4: Numerical results of GMRES and PGMRES with optimal parameters for uniform grids.

Grid                  16 × 16                       32 × 32                       64 × 64
ν              0.01      0.1       1        0.01      0.1       1         0.01      0.1       1
Non    IT       248       124      157       521       636      700          –       284      349
       CPU    0.0243    0.5103   1.0213    8.7303   39.5201  40.8375         –      3.4619   4.7825
RDPSS  αopt  4.10e-03   0.0563   0.4614   2.86e-04   0.0146   0.1198    3.91e-05  6.19e-04   0.0302
       IT        29        19       17        23        27       26         22        30       38
       CPU    0.0243    0.0322   0.0302    0.0903    0.1009   0.1071     0.4125    0.5062   0.7187
SL-I   αopt   0.0779    0.1075   0.0980    0.0053    0.0295   0.0253    4.57e-04   0.0013   0.0064
       IT        31        18       16        23        25       25         22        28       37
       CPU    0.0294    0.0287   0.0351    0.0853    0.0955   0.1109     0.4321    0.5105   0.7457
SL-II  αopt   0.1139    0.1537   0.1509    0.0085    0.0455   0.0407    9.68e-04   0.0023   0.0104
       IT        28        16       15        20        23       22         27        27       33
       CPU    0.0341    0.0376   0.0352    0.1413    0.1497   0.1678     0.9961    0.9848   1.1902

Table 5: Numerical results of PGMRES for stretched grids (ν = 0.01).

Grid                32 × 32                     64 × 64                     128 × 128
α            0.01     0.1       1        0.01     0.1       1        0.01      0.1       1
DPSS   IT      32      101      401        57      302       –        136       867       –
       CPU  0.1221   0.6107   8.2552    1.0019  11.1828     –      11.0712  152.2219      –
RDPSS  IT      37       49       56        52       72       81        63        83      103
       CPU  0.1516   0.1982   0.2336    0.9036   1.3651   1.4584   10.5969   13.7179  16.2173
SL-I   IT      24       30       34        32       42       48        43        57       61
       CPU  0.0962   0.1193   0.1193    0.5699   0.7384   0.8521    7.3003    9.6872   9.9483
SL-II  IT      24       39       46        77      142      158       154       364      436
       CPU  0.1635   0.2118   0.2934    2.7475   4.7715   5.2693   46.3043  115.9819 148.8143



Table 6: Numerical results of PGMRES for stretched grids (ν = 0.1).

Grid                32 × 32                     64 × 64                     128 × 128
α            0.01     0.1       1        0.01     0.1       1        0.01      0.1       1
DPSS   IT      41       79      333        67      255      990       146       573       –
       CPU  0.1975   0.4132   5.3926    1.1572   8.0346  133.0228  27.2852  131.4725      –
RDPSS  IT      36       32       36        51       56       59        75        79       80
       CPU  0.1396   0.1217   0.1364    0.8602   0.9732   0.9945   12.7767   13.6293  13.6665
SL-I   IT      21       25       29        37       38       37        50        53       62
       CPU  0.0843   0.0977   0.1138    0.6447   0.6742   0.6443    8.4356    9.1227  10.7012
SL-II  IT      18       23       26        80       99      106       228       340      354
       CPU  0.1435   0.1471   0.1741    2.5266   3.1742   3.4213   69.2017  104.0383 110.4306

Table 7: Numerical results of PGMRES for stretched grids (ν = 1).

Grid                32 × 32                     64 × 64                     128 × 128
α            0.01     0.1       1        0.01     0.1       1        0.01      0.1       1
DPSS   IT      65       91      272       124      258      737       181       612       –
       CPU  0.3072   0.4953   3.2254    2.5416   8.0959  65.8169   35.0339  192.7699      –
RDPSS  IT      36       39       38        70       70       69       117       110      110
       CPU  0.1378   0.1485   0.1496    1.2213   1.2505   1.2018   20.9962   19.2573  19.3117
SL-I   IT      23       24       28        34       39       45        51        52       76
       CPU  0.0909   0.1222   0.1072    0.5927   0.6704   0.7725    8.5301    8.7742  13.1077
SL-II  IT      28       25       27        81       93       97       305       307      307
       CPU  0.1575   0.1678   0.1745    2.6792   3.1211   3.2647   67.1044   71.2677  82.7895


Figure 2: The eigenvalue distributions of the original matrices A for 32 × 32 grids (ν = 0.1).


Table 8: Numerical results of GMRES and PGMRES with optimal parameters for stretched grids.

Grid                  16 × 16                       32 × 32                       64 × 64
ν              0.01      0.1       1        0.01      0.1       1         0.01      0.1       1
Non    IT       207       151      255       522       535      866          –         –        –
       CPU    1.3174    0.6468   2.0578    9.8754   10.2099  22.1571         –         –        –
RDPSS  αopt   0.0016    0.0953   0.8165   1.76e-03   0.0311   0.2680    1.55e-04   0.0015   0.0826
       IT        23        17       20        28        31       38         22        30       70
       CPU    0.0304    0.0234   0.0381    0.1037    0.1132   0.1535     0.4526    0.5136   2.2162
SL-I   αopt   0.1188    0.1537   0.1385    0.0436    0.0464   0.0405     0.0020    0.0059   0.0113
       IT        22        16       16        27        24       24         22        31       34
       CPU    0.0287    0.0205   0.0255    0.1016    0.0946   0.1031     0.4892    0.5632   0.7023
SL-II  αopt   0.1629    0.2297   0.2442    0.0264    0.0722   0.0722     0.0016    0.0190   0.0189
       IT        25        15       14        31        22       25         33        82       83
       CPU    0.0475    0.0277   0.0373    0.1822    0.1472   0.1599     0.8621    2.7549   2.7219

Table 9: The extreme eigenvalues and condition numbers of the original and preconditioned matrices for 32 × 32 grids (ν = 0.1).

                     Uniform grids                         Stretched grids
Case       γmax     γmin     ηmax        κ        γmax      γmin      ηmax        κ
Non       1.0000   0.0017   0.0724   575.6935    3.9831   4.04e-04   0.1742   9.85e+03
RDPSS     1.9830   0.0146   0.1480   136.2006    1.9827    0.0185    0.1359   107.1463
SL-I      1.9709   0.0276   0.2298    71.8912    1.9567    0.0417    0.1538    46.9040
SL-II     1.9567   0.0437   0.2814    45.2066    1.9619    0.0368    0.1138    53.3220


Figure 3: The eigenvalue distributions of the preconditioned matrices for 32 × 32 uniform grids (ν = 0.1).




Figure 4: The eigenvalue distributions of the preconditioned matrices for 32 × 32 stretched grids (ν = 0.1).


Figure 5: The convergence curves of PGMRES for 32 × 32 grids (ν = 0.1).

References

[1] Z.-Z. Bai, Optimal parameters in the HSS-like methods for saddle-point problems, Numer. Linear Algebra Appl. 16(6) (2009) 447–479.
[2] Z.-Z. Bai, G.H. Golub, J.-Y. Pan, Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems, Numer. Math. 98(1) (2004) 1–32.
[3] Z.-Z. Bai, G.H. Golub, Accelerated Hermitian and skew-Hermitian splitting iteration methods for saddle-point problems, IMA J. Numer. Anal. 27(1) (2007) 1–23.
[4] Z.-Z. Bai, G.H. Golub, L.-Z. Lu, J.-F. Yin, Block triangular and skew-Hermitian splitting methods for positive-definite linear systems, SIAM J. Sci. Comput. 26(3) (2005) 844–863.
[5] Z.-Z. Bai, M.K. Ng, Z.-Q. Wang, Constraint preconditioners for symmetric indefinite matrices, SIAM J. Matrix Anal. Appl. 31(2) (2009) 410–433.
[6] Z.-Z. Bai, B.N. Parlett, Z.-Q. Wang, On generalized successive overrelaxation methods for augmented linear systems, Numer. Math. 102(1) (2005) 1–38.
[7] Z.-Z. Bai, Z.-Q. Wang, On parameterized inexact Uzawa methods for generalized saddle point problems, Linear Algebra Appl. 428(11) (2008) 2900–2932.
[8] M. Benzi, G.H. Golub, A preconditioner for generalized saddle point problems, SIAM J. Matrix Anal. Appl. 26(1) (2004) 20–41.
[9] M. Benzi, G.H. Golub, J. Liesen, Numerical solution of saddle point problems, Acta Numer. 14 (2005) 1–137.
[10] M. Benzi, M.A. Olshanskii, An augmented Lagrangian-based approach to the Oseen problem, SIAM J. Sci. Comput. 28 (2006) 2095–2113.
[11] M. Benzi, V. Simoncini, On the eigenvalues of a class of saddle point matrices, Numer. Math. 103(2) (2006) 173–196.
[12] M. Benzi, Z. Wang, Analysis of augmented Lagrangian-based preconditioners for the steady incompressible Navier-Stokes equations, SIAM J. Sci. Comput. 33 (2011) 2761–2784.
[13] J.H. Bramble, J.E. Pasciak, A.T. Vassilev, Analysis of the inexact Uzawa algorithm for saddle point problems, SIAM J. Numer. Anal. 34(3) (1997) 1072–1092.
[14] J.H. Bramble, J.E. Pasciak, A.T. Vassilev, Uzawa type algorithms for nonsymmetric saddle point problems, Math. Comput. 69(230) (2000) 667–689.
[15] Y. Cao, J.-L. Dong, Y.-M. Wang, A relaxed deteriorated PSS preconditioner for nonsymmetric saddle point problems from the steady Navier-Stokes equation, J. Comput. Appl. Math. 273 (2015) 41–60.
[16] Y. Cao, M.-Q. Jiang, Y.-L. Zheng, A splitting preconditioner for saddle point problems, Numer. Linear Algebra Appl. 18(5) (2011) 875–895.
[17] Y. Cao, L.-Q. Yao, M.-Q. Jiang, A relaxed HSS preconditioner for saddle point problems from meshfree discretization, J. Comput. Math. 31(4) (2013) 398–421.
[18] A.C. de Niet, F.W. Wubs, Two preconditioners for saddle point problems in fluid flows, Int. J. Numer. Meth. Fluids 54(4) (2007) 355–377.
[19] H.C. Elman, Preconditioning for the steady-state Navier-Stokes equations with low viscosity, SIAM J. Sci. Comput. 20(4) (1999) 1299–1316.
[20] H.C. Elman, G.H. Golub, Inexact and preconditioned Uzawa algorithms for saddle point problems, SIAM J. Numer. Anal. 31(6) (1994) 1645–1661.
[21] H.C. Elman, A. Ramage, D.J. Silvester, Algorithm 866: IFISS, a Matlab toolbox for modelling incompressible flow, ACM Trans. Math. Software 33(2) (2007) Article 14.
[22] J.H. Ferziger, M. Perić, Computational Methods for Fluid Dynamics, Springer, Berlin, 1996.
[23] R. Glowinski, O. Pironneau, Finite Element Methods for Navier-Stokes Equations, Springer, Berlin, 1986.
[24] G.H. Golub, C. Greif, On solving block-structured indefinite linear systems, SIAM J. Sci. Comput. 24 (2003) 2076–2092.
[25] X. He, M. Neytcheva, S.S. Capizzano, On an augmented Lagrangian-based preconditioning of Oseen type problems, BIT Numer. Math. 51(4) (2011) 865–888.
[26] C. Keller, N.I. Gould, A.J. Wathen, Constraint preconditioning for indefinite linear systems, SIAM J. Matrix Anal. Appl. 21(4) (2000) 1300–1317.
[27] P. Krzyzanowski, On block preconditioners for nonsymmetric saddle point problems, SIAM J. Sci. Comput. 23(1) (2001) 157–169.
[28] C.-G. Li, C. Vuik, Eigenvalue analysis of the SIMPLE preconditioning for incompressible flow, Numer. Linear Algebra Appl. 11 (2004) 511–523.
[29] M.F. Murphy, G.H. Golub, A.J. Wathen, A note on preconditioning for indefinite linear systems, SIAM J. Sci. Comput. 21(6) (2000) 1969–1972.
[30] Y. Notay, A new analysis of block preconditioners for saddle point problems, SIAM J. Matrix Anal. Appl. 35(1) (2014) 143–173.
[31] J.-Y. Pan, M.K. Ng, Z.-Z. Bai, New preconditioners for saddle point problems, Appl. Math. Comput. 172(2) (2006) 762–771.
[32] S.V. Patankar, Numerical Heat Transfer and Fluid Flow, McGraw-Hill, New York, 1980.
[33] Y. Saad, Iterative Methods for Sparse Linear Systems, SIAM, Philadelphia, PA, 2003.
[34] Y. Saad, M.H. Schultz, GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Stat. Comput. 7(3) (1986) 856–869.
[35] V. Simoncini, M. Benzi, Spectral properties of the Hermitian and skew-Hermitian splitting preconditioner for saddle point problems, SIAM J. Matrix Anal. Appl. 26(2) (2004) 377–389.
[36] V. Simoncini, Block triangular preconditioners for symmetric saddle-point problems, Appl. Numer. Math. 49(1) (2004) 63–80.
[37] C. Vuik, G. Segal, A comparison of preconditioners for incompressible Navier-Stokes solvers, Int. J. Numer. Meth. Fluids 57(12) (2008) 1731–1751.
[38] C. Vuik, G. Segal, G.P. Boerstoel, The Krylov accelerated SIMPLE(R) method for flow problems in industrial furnaces, Int. J. Numer. Meth. Fluids 33(7) (2000) 1027–1040.