Relaxed block upper–lower triangular preconditioner for generalized saddle point problems from the incompressible Navier–Stokes equations

Ya-Jing Li a, Xin-Yun Zhu b, Hong-Tao Fan a

a College of Science, Northwest A&F University, Yangling, Shaanxi 712100, PR China
b Department of Mathematics, University of Texas of the Permian Basin, Odessa, TX 79762, USA

Journal of Computational and Applied Mathematics 364 (2020) 112329, https://doi.org/10.1016/j.cam.2019.06.045

Article history: Received 29 November 2018; received in revised form 7 April 2019.

Keywords: Generalized saddle point problems; relaxed block upper–lower triangular preconditioner; matrix splitting; eigenvalue distribution; Navier–Stokes equations

Abstract

Based on a new matrix splitting of the original coefficient matrix, a relaxed block upper–lower triangular (RBULT) preconditioner is proposed for the solution of generalized saddle point problems arising from the incompressible Navier–Stokes equations. The distinct advantage of this preconditioner is that it is computationally cheaper to implement than the modified dimensional split (MDS) preconditioner, the generalized relaxed splitting (GRS) preconditioner, and the modified relaxed splitting (MRS) preconditioner when used with Krylov subspace methods. The eigenvalue distribution and an upper bound on the degree of the minimal polynomial of the preconditioned matrix are given. Finally, numerical results are provided to demonstrate the validity of the theoretical analysis; they indicate that the RBULT preconditioner is not only competitive with other state-of-the-art preconditioners but also more effective.

1. Introduction

The Navier–Stokes equations are the mass, momentum and energy conservation expressions for Newtonian fluids, i.e., fluids that obey a linear relationship between viscous stress and strain. These equations are used to model incompressible or compressible, low- or high-speed, and inviscid or viscous flows. In many situations, analytic solutions of these equations are not available or the equations may not be directly solvable [1], so providing accurate numerical solutions is both relevant and very useful. Numerical solutions of the incompressible Navier–Stokes equations are in greater demand than ever before as the field of computational fluid dynamics (CFD) increases its impact as an engineering tool. Problems that can be addressed by the incompressible Navier–Stokes equations include low-speed flows in aerodynamics, internal flows in propulsion, and even problems in biomedical fluid analysis. For more problems governed by other classical equations, see [2,3]. In this study, we provide an efficient numerical method for solving the incompressible Navier–Stokes equations governing the flow of a viscous Newtonian fluid [4]:

$$
\begin{cases}
\dfrac{\partial u}{\partial t} - \nu\Delta u + (u\cdot\nabla)u + \nabla p = f & \text{on } \Omega\times(0,T],\\
\operatorname{div} u = 0 & \text{on } \Omega\times[0,T],\\
u = g & \text{on } \partial\Omega\times[0,T],\\
u(x,0) = u_0(x) & \text{on } \Omega,
\end{cases}
$$


where $\Omega \subset \mathbb{R}^d$ ($d = 2, 3$) is an open bounded domain with boundary $\partial\Omega$, $[0, T]$ is the time interval, $u = u(x,t)$ and $p = p(x,t)$ are the unknown velocity and pressure fields, $\nu$ is the kinematic viscosity, $\Delta$ is the vector Laplacian, $\nabla$ is the gradient, div the divergence, and $f$, $g$ and $u_0$ are given functions. After implicit time discretization and linearization of the Navier–Stokes system by the Picard fixed-point iteration, we obtain a sequence of Oseen problems. Discretization of the Oseen problems using finite element methods results in large, sparse generalized saddle point systems of the form:

$$\tilde{\mathcal{A}}\tilde{x} \equiv \begin{bmatrix} A & B^T\\ -B & C \end{bmatrix}\begin{bmatrix} u\\ p \end{bmatrix} = \begin{bmatrix} f\\ -g \end{bmatrix} \equiv \tilde{b},$$

where $u$ and $p$ represent the discrete velocity and pressure, respectively. $A$ denotes the discretization of the diffusion, convection and time-dependent terms; $A$ is block diagonal, e.g., $A = \operatorname{diag}(A_1, A_2)$ in 2D. $B^T$ is the discrete gradient, $B$ denotes the (negative) discrete divergence, $C$ is a stabilization matrix, and $f$ and $g$ contain forcing and boundary terms. If the discretization satisfies the LBB ('inf-sup') stability condition, no pressure stabilization is required and we can take $C = 0$. If the LBB condition is not satisfied, the stabilization matrix $C \ne 0$ is symmetric positive semidefinite, and the actual choice of $C$ depends on the particular finite element pair being used. In this paper, we are interested in solving linear systems arising from the 2D linearized Navier–Stokes equations with the following structure [4–6]:



$$\mathcal{A}u \equiv \begin{bmatrix} A_1 & 0 & B_1^T\\ 0 & A_2 & B_2^T\\ -B_1 & -B_2 & C \end{bmatrix}\begin{bmatrix} u_1\\ u_2\\ p \end{bmatrix} = \begin{bmatrix} f_1\\ f_2\\ -g \end{bmatrix} \equiv b, \tag{1.1}$$

where $A_1 \in \mathbb{R}^{n_1\times n_1}$ and $A_2 \in \mathbb{R}^{n_2\times n_2}$ are nonsymmetric positive definite matrices, $B_1 \in \mathbb{R}^{m\times n_1}$ and $B_2 \in \mathbb{R}^{m\times n_2}$ have full row rank, and $C \in \mathbb{R}^{m\times m}$ is a symmetric positive semi-definite matrix. Under these conditions, the system (1.1) is nonsingular and can be viewed as a special case of the generalized saddle point problem. Generalized saddle point problems occur in various scientific and engineering applications, such as constrained optimization, mixed finite element analysis of elliptic PDEs, and constrained least squares problems; for detailed elaborations see [2,7–9]. In the past several decades, tremendous effort has been invested in the development of methods for the fast solution of generalized saddle point problems. Most of the previous work has aimed at developing efficient preconditioners for Krylov subspace methods; see [7] for a comprehensive survey. In this paper, we focus on preconditioned Krylov subspace methods, especially preconditioned GMRES [10]. Several well-known preconditioners have been proposed for generalized saddle point problems, such as HSS preconditioners [11], block-based preconditioners [12], constraint preconditioners [13], dimensional split preconditioners [5,6], and augmented Lagrangian-based preconditioners [14]. Although a variety of splitting-based preconditioners are available in the literature for problems such as the generalized Stokes problem and the rotation form of the Navier–Stokes equations, most of them are not well suited for the standard (convection) form; see [5,15]. Recently, to solve the linear system (1.1) with $C = 0$, Benzi et al. [5] first split the coefficient matrix $\mathcal{A}$ of (1.1) into the following two parts:
$$\mathcal{A} = \begin{bmatrix} A_1 & 0 & B_1^T\\ 0 & A_2 & B_2^T\\ -B_1 & -B_2 & 0 \end{bmatrix} = \begin{bmatrix} A_1 & 0 & B_1^T\\ 0 & 0 & 0\\ -B_1 & 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 & 0\\ 0 & A_2 & B_2^T\\ 0 & -B_2 & 0 \end{bmatrix} \equiv \bar{A}_1 + \bar{A}_2. \tag{1.2}$$
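To make the block structure concrete, the following minimal NumPy sketch (ours, not from the paper; random dense blocks of illustrative size stand in for the discretized operators) assembles the coefficient matrix of (1.1) with C = 0 and checks the splitting (1.2):

```python
import numpy as np

# Toy stand-ins for the blocks of (1.1): A1, A2 nonsymmetric positive
# definite, B1, B2 full row rank (generically true for random blocks).
rng = np.random.default_rng(0)
n1, n2, m = 8, 8, 4
A1 = np.eye(n1) + 0.1 * rng.standard_normal((n1, n1))
A2 = np.eye(n2) + 0.1 * rng.standard_normal((n2, n2))
B1 = rng.standard_normal((m, n1))
B2 = rng.standard_normal((m, n2))
Z = np.zeros

# Coefficient matrix of (1.1) with C = 0.
A = np.block([[A1,          Z((n1, n2)), B1.T],
              [Z((n2, n1)), A2,          B2.T],
              [-B1,         -B2,         Z((m, m))]])

# Splitting (1.2): Abar1 carries the (A1, B1) "dimension", Abar2 the (A2, B2) one.
Abar1 = np.block([[A1,          Z((n1, n2)), B1.T],
                  [Z((n2, n1)), Z((n2, n2)), Z((n2, m))],
                  [-B1,         Z((m, n2)),  Z((m, m))]])
Abar2 = np.block([[Z((n1, n1)), Z((n1, n2)), Z((n1, m))],
                  [Z((n2, n1)), A2,          B2.T],
                  [Z((m, n1)),  -B2,         Z((m, m))]])
assert np.allclose(A, Abar1 + Abar2)
```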

Based on the splitting (1.2), a dimensional splitting (DS) preconditioner [5] and a relaxed dimensional factorization (RDF) preconditioner [4] were obtained. More recently, to further improve the efficiency of both the DS and the RDF preconditioner for solving the system (1.1), Tan et al. [16] proposed a relaxed splitting (RS) preconditioner. The greatest advantage of the RS preconditioner is that, in practice, it can be implemented in an easier and more economical manner than the DS or RDF preconditioner (see [16]). These advantages prompted Cao et al. [17] and Fan et al. [18] to generalize this result to $C \ne 0$ in the development of the generalized relaxed splitting (GRS) preconditioner and the modified relaxed splitting (MRS) preconditioner for the system (1.1). The GRS and MRS preconditioners have merit in several regards. One is that the structure of these two preconditioners provides a convenient way to analyze the spectral distribution of the preconditioned matrix. Another is that, when using Krylov subspace methods, we need only solve two subsystems of linear equations, with coefficient matrices like $A_1$ and $\begin{pmatrix} A_2 & B_2^T\\ -B_2 & \alpha I + C \end{pmatrix}$, while for the modified dimensional split (MDS) preconditioner [6] we need to solve two subsystems with coefficient matrices $\begin{pmatrix} \alpha I + A_1 & B_1^T\\ -B_1 & \alpha I \end{pmatrix}$ and $\begin{pmatrix} \alpha I + A_2 & B_2^T\\ -B_2 & \alpha I + C \end{pmatrix}$. Here and in the following discussion, $\alpha$ is a positive real number and $I$ represents the identity matrix of the proper dimension. This implies that less computational cost is required in using GRS and MRS than the MDS preconditioner. However, we find that when combined with Krylov subspace methods, for both the GRS and the MRS preconditioner the subsystem of linear equations with coefficient matrix $\begin{pmatrix} A_2 & B_2^T\\ -B_2 & \alpha I + C \end{pmatrix}$ can still be difficult and time consuming to solve; see [18]. To further reduce the computational cost, by considering a new class of matrix splitting


techniques, we establish a relaxed block upper–lower triangular (RBULT) preconditioner for solving the system (1.1). We would like to emphasize that although the idea we use is very simple and the theoretical proofs are straightforward compared with many recently developed relaxed matrix splitting preconditioners [19–21], the proposed method has great advantages and represents a marked improvement, both theoretically and experimentally. The greatest advantages of the RBULT preconditioner are that (a) when using Krylov subspace methods, only three subsystems of linear equations, with coefficient matrices $A_1$, $A_2$ and $\alpha I + C$, respectively, need to be solved, thus avoiding the difficulty of inverting many coefficient matrices; (b) the size of the matrix $C$ is typically very small, and the products between the matrices $B_1^T$ and $B_1$ (or $B_2^T$ and $B_2$) appearing in the MRS, GRS, and MDS preconditioning schemes are avoided, which implies that the computational cost can be further reduced compared with those preconditioners; (c) it can be shown by theoretical analysis that in practical applications the eigenvalues of the preconditioned matrix corresponding to the RBULT method are more tightly clustered when an appropriately large parameter $\alpha$ is selected, a fact that is confirmed experimentally. Finally, we investigate the eigenvalue distribution and an upper bound on the degree of the minimal polynomial of the preconditioned matrix. The numerical results presented in this paper further validate the efficiency of the RBULT preconditioner as compared with other state-of-the-art preconditioners.

2. The relaxed block upper–lower triangular preconditioner

Firstly, we split the coefficient matrix $\mathcal{A}$ as follows:
$$\mathcal{A} = \begin{bmatrix} A_1 & 0 & B_1^T\\ 0 & A_2 & B_2^T\\ -B_1 & -B_2 & C \end{bmatrix} = \begin{bmatrix} A_1 & 0 & 0\\ 0 & A_2 & 0\\ -B_1 & -B_2 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 & B_1^T\\ 0 & 0 & B_2^T\\ 0 & 0 & C \end{bmatrix} \equiv \bar{B}_1 + \bar{B}_2. \tag{2.1}$$

It is not difficult to see that the matrices $\alpha I + \bar{B}_1$ and $\alpha I + \bar{B}_2$ are nonsingular. By application of the alternating direction iteration method, the following block upper–lower triangular (BULT) fixed-point iteration method is obtained:

$$\begin{cases}
(\alpha I + \bar{B}_1)u^{(k+\frac{1}{2})} = (\alpha I - \bar{B}_2)u^{(k)} + b,\\
(\alpha I + \bar{B}_2)u^{(k+1)} = (\alpha I - \bar{B}_1)u^{(k+\frac{1}{2})} + b.
\end{cases} \tag{2.2}$$

Eliminating $u^{(k+\frac{1}{2})}$ from the above iteration scheme, we rewrite (2.2) as $u^{(k+1)} = T_\alpha u^{(k)} + d$, where $d$ is a certain vector and the iteration matrix $T_\alpha$ is of the form $T_\alpha = (\alpha I + \bar{B}_2)^{-1}(\alpha I - \bar{B}_1)(\alpha I + \bar{B}_1)^{-1}(\alpha I - \bar{B}_2)$. In fact, the above iteration scheme can be seen as being induced by the splitting $\mathcal{A} = \bar{\bar{B}} - \bar{\bar{C}}$, where
$$\bar{\bar{B}} = \frac{1}{2\alpha}(\alpha I + \bar{B}_1)(\alpha I + \bar{B}_2) \quad\text{and}\quad \bar{\bar{C}} = \frac{1}{2\alpha}(\alpha I - \bar{B}_1)(\alpha I - \bar{B}_2). \tag{2.3}$$

Here the matrix $\bar{\bar{B}}$ is called a preconditioner. Generally, the convergence of the iteration $u^{(k+1)} = T_\alpha u^{(k)} + d$ is too slow for it to be used directly in solving generalized saddle point problems. However, the preconditioner produced by this iteration scheme is often used to accelerate the convergence of Krylov subspace methods. Making use of a similar technique (i.e., deleting the shift term $\alpha I$ from the submatrices $\alpha I + A_1$ and $\alpha I + A_2$ in the sub-blocks of the corresponding product matrix), we propose a relaxed block upper–lower triangular (RBULT) preconditioner for generalized saddle point problems of the form:

$$P_{RBULT} = \frac{1}{\alpha}\begin{bmatrix} A_1 & 0 & 0\\ 0 & A_2 & 0\\ -B_1 & -B_2 & \alpha I \end{bmatrix}\begin{bmatrix} \alpha I & 0 & B_1^T\\ 0 & \alpha I & B_2^T\\ 0 & 0 & \alpha I + C \end{bmatrix}. \tag{2.4}$$

By comparison with the original system, we have

$$P_{RBULT} - \mathcal{A} = \begin{bmatrix} 0 & 0 & \frac{1}{\alpha}A_1B_1^T - B_1^T\\ 0 & 0 & \frac{1}{\alpha}A_2B_2^T - B_2^T\\ 0 & 0 & \alpha I - \frac{1}{\alpha}B_1B_1^T - \frac{1}{\alpha}B_2B_2^T \end{bmatrix}. \tag{2.5}$$
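As a sanity check on (2.4) and (2.5), the following hedged NumPy sketch (random toy blocks; the sizes and the value of α are arbitrary) forms $P_{RBULT}$ from its two triangular factors and verifies that $P_{RBULT} - \mathcal{A}$ is nonzero only in the last block column, with exactly the blocks displayed in (2.5):

```python
import numpy as np

rng = np.random.default_rng(2)
n1 = n2 = 8; m = 4; alpha = 2.0
A1 = np.eye(n1) + 0.1 * rng.standard_normal((n1, n1))
A2 = np.eye(n2) + 0.1 * rng.standard_normal((n2, n2))
B1 = rng.standard_normal((m, n1)); B2 = rng.standard_normal((m, n2))
G = rng.standard_normal((m, m)); C = G @ G.T          # symmetric positive semi-definite
Z = np.zeros

A = np.block([[A1, Z((n1, n2)), B1.T],
              [Z((n2, n1)), A2, B2.T],
              [-B1, -B2, C]])
L = np.block([[A1, Z((n1, n2)), Z((n1, m))],
              [Z((n2, n1)), A2, Z((n2, m))],
              [-B1, -B2, alpha * np.eye(m)]])
U = np.block([[alpha * np.eye(n1), Z((n1, n2)), B1.T],
              [Z((n2, n1)), alpha * np.eye(n2), B2.T],
              [Z((m, n1)), Z((m, n2)), alpha * np.eye(m) + C]])
P = (L @ U) / alpha                 # P_RBULT, the product in (2.4)

R = P - A                           # should be exactly the matrix in (2.5)
n = n1 + n2
assert np.allclose(R[:, :n], 0.0)                                        # zero block columns
assert np.allclose(R[:n1, n:], A1 @ B1.T / alpha - B1.T)                 # (1,3) block
assert np.allclose(R[n1:n, n:], A2 @ B2.T / alpha - B2.T)                # (2,3) block
assert np.allclose(R[n:, n:], alpha * np.eye(m) - (B1 @ B1.T + B2 @ B2.T) / alpha)
```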

When combining Krylov subspace methods such as GMRES with the RBULT preconditioner to solve generalized saddle point problems, the solution of the following system of linear equations is required at each step:
$$P_{RBULT}\begin{bmatrix} z_1\\ z_2\\ z_3 \end{bmatrix} = \begin{bmatrix} r_1\\ r_2\\ r_3 \end{bmatrix}. \tag{2.6}$$


This can be accomplished by the following algorithm (a hedged Python sketch of this application step follows the MDS, GRS, and MRS schemes below):

RBULT algorithm:
(1) Solve $A_1y_1 = r_1$;
(2) Solve $A_2y_2 = r_2$;
(3) Solve $(\alpha I + C)z_3 = r_3 + B_1y_1 + B_2y_2$;
(4) $z_1 = y_1 - \frac{1}{\alpha}B_1^Tz_3$;
(5) $z_2 = y_2 - \frac{1}{\alpha}B_2^Tz_3$.

From the RBULT algorithm above, we can see that the key feature of the RBULT preconditioning scheme is that the action of the preconditioning matrix $P_{RBULT}$ requires the solution of three linear sub-systems with coefficient matrices $A_1$, $A_2$ and $\alpha I + C$, thus avoiding the difficulty of inverting the matrix $\alpha I + C$ in the MDS and GRS schemes and the matrix $A_2$ in the MRS scheme before solving the subsystems at each step of the Krylov subspace method. In contrast, the action of the MDS preconditioning matrix $P_{MDS}$ requires the solution of two linear sub-systems with coefficient matrix $\alpha I + C$ and another two linear sub-systems with coefficient matrices $\alpha I + A_2 + B_2^T(\alpha I + C)^{-1}B_2$ and $\alpha I + A_1 + \frac{1}{\alpha}B_1^TB_1$. Similarly, the action of the GRS preconditioning matrix $P_{GRS}$ requires the solution of two linear sub-systems with coefficient matrix $\alpha I + C$ and another two linear sub-systems with coefficient matrices $A_1$ and $A_2 + B_2^T(\alpha I + C)^{-1}B_2$. The action of the MRS preconditioning matrix $P_{MRS}$ requires the solution of three linear sub-systems with coefficient matrices $A_1$, $A_2$, and $\alpha I + C + B_2A_2^{-1}B_2^T$, respectively. In addition, as emphasized above, the size of the matrix $C$ is typically very small, and the products between the matrices $B_1^T$ and $B_1$ (or $B_2^T$ and $B_2$) in the MRS, GRS, and MDS preconditioning schemes are avoided, which implies that the computational cost can be reduced dramatically compared with the MRS, GRS, and MDS preconditioners. Therefore, the RBULT preconditioner is easier to apply and implement than the other three preconditioners for the generalized saddle point problem (1.1). To further clarify this point, we describe the MDS, GRS, and MRS preconditioning schemes below:

MDS algorithm:
(1) Solve $(\alpha I + A_1 + \frac{1}{\alpha}B_1^TB_1)z_1 = r_1 - \frac{1}{\alpha}B_1^Tr_3$;
(2) Solve $(\alpha I + C)y_1 = r_3 + B_1z_1$;
(3) Solve $(\alpha I + A_2 + B_2^T(\alpha I + C)^{-1}B_2)z_2 = r_2 - B_2^Ty_1$;
(4) Solve $(\alpha I + C)z_3 = r_3 + B_1z_1 + B_2z_2$.

GRS algorithm:
(1) Solve $A_1y_1 = r_1$;
(2) Solve $(\alpha I + C)y_2 = r_3 + B_1y_1$;
(3) Solve $(A_2 + B_2^T(\alpha I + C)^{-1}B_2)z_2 = r_2 - B_2^Ty_2$;
(4) Solve $(\alpha I + C)z_3 = r_3 + B_2z_2$;
(5) $z_1 = y_1 - \frac{1}{\alpha}B_1^Tz_3$.

MRS algorithm:
(1) Solve $A_1y_1 = r_1$;
(2) Solve $A_2y_2 = r_2$;
(3) Solve $(\alpha I + C + B_2A_2^{-1}B_2^T)z_3 = r_3 + B_1y_1 + B_2y_2$;
(4) $z_1 = y_1 - \frac{1}{\alpha}B_1^Tz_3$;
(5) $z_2 = y_2 - A_2^{-1}B_2^Tz_3$.
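The RBULT application step admits a very direct implementation. The following is a minimal Python/SciPy sketch (function and variable names are ours, not the paper's); the final assertion checks $P_{RBULT}z = r$ against the explicit product in (2.4):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def rbult_apply(A1_lu, A2_lu, aC_lu, B1, B2, alpha, r1, r2, r3):
    """Compute z = P_RBULT^{-1} r via steps (1)-(5) of the RBULT algorithm."""
    y1 = lu_solve(A1_lu, r1)                          # (1) A1 y1 = r1
    y2 = lu_solve(A2_lu, r2)                          # (2) A2 y2 = r2
    z3 = lu_solve(aC_lu, r3 + B1 @ y1 + B2 @ y2)      # (3) (alpha I + C) z3 = ...
    z1 = y1 - (B1.T @ z3) / alpha                     # (4)
    z2 = y2 - (B2.T @ z3) / alpha                     # (5)
    return z1, z2, z3

# Toy data: random stand-ins with the block sizes of (1.1).
rng = np.random.default_rng(1)
n1 = n2 = 6; m = 3; alpha = 2.0
A1 = np.eye(n1) + 0.1 * rng.standard_normal((n1, n1))
A2 = np.eye(n2) + 0.1 * rng.standard_normal((n2, n2))
B1 = rng.standard_normal((m, n1)); B2 = rng.standard_normal((m, n2))
G = rng.standard_normal((m, m)); C = G @ G.T
A1_lu, A2_lu = lu_factor(A1), lu_factor(A2)
aC_lu = lu_factor(alpha * np.eye(m) + C)

r1, r2, r3 = rng.standard_normal(n1), rng.standard_normal(n2), rng.standard_normal(m)
z1, z2, z3 = rbult_apply(A1_lu, A2_lu, aC_lu, B1, B2, alpha, r1, r2, r3)

# Check against P_RBULT assembled from the two factors in (2.4).
L = np.block([[A1, np.zeros((n1, n2)), np.zeros((n1, m))],
              [np.zeros((n2, n1)), A2, np.zeros((n2, m))],
              [-B1, -B2, alpha * np.eye(m)]])
U = np.block([[alpha * np.eye(n1), np.zeros((n1, n2)), B1.T],
              [np.zeros((n2, n1)), alpha * np.eye(n2), B2.T],
              [np.zeros((m, n1)), np.zeros((m, n2)), alpha * np.eye(m) + C]])
P = (L @ U) / alpha
assert np.allclose(P @ np.concatenate([z1, z2, z3]), np.concatenate([r1, r2, r3]))
```

Note that the three LU factorizations are computed once and reused across iterations; per application only triangular solves and matrix–vector products with $B_1$, $B_2$ remain.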

3. Spectral property of the preconditioned matrix $P_{RBULT}^{-1}\mathcal{A}$

In this section, we study the spectral distribution and the degree of the minimal polynomial of the preconditioned matrix $P_{RBULT}^{-1}\mathcal{A}$.

Theorem 3.1. Let the RBULT preconditioner $P_{RBULT}$ be defined as in (2.4). Then the preconditioned matrix $P_{RBULT}^{-1}\mathcal{A}$ has an eigenvalue 1 with multiplicity at least $n_1 + n_2$, and the remaining eigenvalues $\lambda$ satisfy the generalized eigenvalue problem
$$(C + B_1A_1^{-1}B_1^T + B_2A_2^{-1}B_2^T)x = \lambda(\alpha I + C)x. \tag{3.1}$$

Proof. Using the equalities (2.4) and (2.5), we have
$$P_{RBULT}^{-1}\mathcal{A} = I - P_{RBULT}^{-1}(P_{RBULT} - \mathcal{A}) = I - \alpha\begin{bmatrix} \alpha I & 0 & B_1^T\\ 0 & \alpha I & B_2^T\\ 0 & 0 & \alpha I + C \end{bmatrix}^{-1}\begin{bmatrix} A_1 & 0 & 0\\ 0 & A_2 & 0\\ -B_1 & -B_2 & \alpha I \end{bmatrix}^{-1}\begin{bmatrix} 0 & 0 & \frac{1}{\alpha}A_1B_1^T - B_1^T\\ 0 & 0 & \frac{1}{\alpha}A_2B_2^T - B_2^T\\ 0 & 0 & \alpha I - \frac{1}{\alpha}B_1B_1^T - \frac{1}{\alpha}B_2B_2^T \end{bmatrix}. \tag{3.2}$$


Since
$$\begin{bmatrix} \alpha I & 0 & B_1^T\\ 0 & \alpha I & B_2^T\\ 0 & 0 & \alpha I + C \end{bmatrix}^{-1} = \begin{bmatrix} \frac{1}{\alpha}I & 0 & -\frac{1}{\alpha}B_1^T(\alpha I + C)^{-1}\\ 0 & \frac{1}{\alpha}I & -\frac{1}{\alpha}B_2^T(\alpha I + C)^{-1}\\ 0 & 0 & (\alpha I + C)^{-1} \end{bmatrix} \tag{3.3}$$

and
$$\begin{bmatrix} A_1 & 0 & 0\\ 0 & A_2 & 0\\ -B_1 & -B_2 & \alpha I \end{bmatrix}^{-1} = \begin{bmatrix} A_1^{-1} & 0 & 0\\ 0 & A_2^{-1} & 0\\ \frac{1}{\alpha}B_1A_1^{-1} & \frac{1}{\alpha}B_2A_2^{-1} & \frac{1}{\alpha}I \end{bmatrix}, \tag{3.4}$$

therefore
$$P_{RBULT}^{-1}\mathcal{A} = \begin{bmatrix} I & 0 & \hat{\Pi}\\ 0 & I & \tilde{\Pi}\\ 0 & 0 & \Xi \end{bmatrix}. \tag{3.5}$$

Since the eigenvalues of the preconditioned matrix $P_{RBULT}^{-1}\mathcal{A}$ do not depend on the submatrices $\hat{\Pi}$ and $\tilde{\Pi}$, we only need an expression for $\Xi$. Here $\Xi = (\alpha I + C)^{-1}(C + B_1A_1^{-1}B_1^T + B_2A_2^{-1}B_2^T)$. As we can see above, the preconditioned matrix $P_{RBULT}^{-1}\mathcal{A}$ has at least $n = n_1 + n_2$ eigenvalues equal to 1, and the rest of the eigenvalues satisfy the generalized eigenvalue problem
$$(\alpha I + C)^{-1}(C + B_1A_1^{-1}B_1^T + B_2A_2^{-1}B_2^T)x = \lambda x, \tag{3.6}$$
namely,
$$(C + B_1A_1^{-1}B_1^T + B_2A_2^{-1}B_2^T)x = \lambda(\alpha I + C)x. \tag{3.7}$$ □
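Theorem 3.1 is easy to illustrate numerically. In this hedged sketch (ours; random toy blocks, sizes illustrative), the spectrum of $P_{RBULT}^{-1}\mathcal{A}$ is compared, as a multiset, with $n_1 + n_2$ ones together with the eigenvalues of $\Xi$:

```python
import numpy as np

rng = np.random.default_rng(3)
n1 = n2 = 7; m = 4; alpha = 2.5
A1 = np.eye(n1) + 0.1 * rng.standard_normal((n1, n1))
A2 = np.eye(n2) + 0.1 * rng.standard_normal((n2, n2))
B1 = rng.standard_normal((m, n1)); B2 = rng.standard_normal((m, n2))
G = rng.standard_normal((m, m)); C = G @ G.T

A = np.block([[A1, np.zeros((n1, n2)), B1.T],
              [np.zeros((n2, n1)), A2, B2.T],
              [-B1, -B2, C]])
L = np.block([[A1, np.zeros((n1, n2)), np.zeros((n1, m))],
              [np.zeros((n2, n1)), A2, np.zeros((n2, m))],
              [-B1, -B2, alpha * np.eye(m)]])
U = np.block([[alpha * np.eye(n1), np.zeros((n1, n2)), B1.T],
              [np.zeros((n2, n1)), alpha * np.eye(n2), B2.T],
              [np.zeros((m, n1)), np.zeros((m, n2)), alpha * np.eye(m) + C]])
P = (L @ U) / alpha

ev = np.linalg.eigvals(np.linalg.solve(P, A))         # spectrum of P^{-1} A
Xi = np.linalg.solve(alpha * np.eye(m) + C,
                     C + B1 @ np.linalg.solve(A1, B1.T)
                       + B2 @ np.linalg.solve(A2, B2.T))
expected = np.concatenate([np.ones(n1 + n2), np.linalg.eigvals(Xi)])

key = lambda v: np.lexsort((v.imag, v.real))          # sort complex values by (Re, Im)
assert np.allclose(ev[key(ev)], expected[key(expected)], atol=1e-6)
```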

Remark 3.1. Compared with the generalized eigenvalue problem $(C + B_1A_1^{-1}B_1^T + B_2A_2^{-1}B_2^T)x = \lambda(\beta I + C + B_2A_2^{-1}B_2^T)x$ of the MRS method with $\beta > 0$, the generalized eigenvalue problem of the RBULT method is simpler and can be expected to be more easily solved. Also, it is expected that the eigenvalues of the preconditioned matrix $P_{RBULT}^{-1}\mathcal{A}$ corresponding to the RBULT method are more tightly clustered than those of the preconditioned matrices $P_{MRS}^{-1}\mathcal{A}$, $P_{GRS}^{-1}\mathcal{A}$, $P_{MDS}^{-1}\mathcal{A}$ and $P_{HSS}^{-1}\mathcal{A}$. To illustrate this point, in the Appendix we further analyze the eigenvalues of these preconditioned matrices and give a detailed derivation of the upper and lower bounds of the eigenvalue distribution of the RBULT preconditioned matrix. Discussions of the preconditioned matrices $P_{MRS}^{-1}\mathcal{A}$ and $P_{GRS}^{-1}\mathcal{A}$ can be carried out similarly and are omitted.

If the generalized saddle point problem reduces to its special case with $C = 0$, then a result similar to Theorem 3.1 can be obtained, as given below.

Corollary 3.1. Let the RBULT preconditioner $P_{RBULT}$ be defined as in (2.4) with $C = 0$. Then the preconditioned matrix $P_{RBULT}^{-1}\mathcal{A}$ with $C = 0$ has an eigenvalue 1 with multiplicity at least $n_1 + n_2$, and the remaining eigenvalues $\lambda$ satisfy the generalized eigenvalue problem
$$(B_1A_1^{-1}B_1^T + B_2A_2^{-1}B_2^T)x = \lambda\alpha x. \tag{3.8}$$

Remark 3.2. Compared with the generalized eigenvalue problem $(B_1A_1^{-1}B_1^T + B_2A_2^{-1}B_2^T)x = \lambda(\beta I + B_2A_2^{-1}B_2^T)x$ of the MRS method with $C = 0$, the generalized eigenvalue problem of the RBULT method is simpler and can be expected to be more easily solved.

More specifically, when $C = 0$, $A_1 = A_2$ and $B_1 = B_2$, the result of Corollary 3.1 can be simplified as follows; see [22].

Corollary 3.2. Let the RBULT preconditioner $P_{RBULT}$ be defined as in (2.4) with $C = 0$, $A_1 = A_2$ and $B_1 = B_2$. Then the preconditioned matrix $P_{RBULT}^{-1}\mathcal{A}$ has three distinct eigenvalues, 1 and $1 \pm i$, where $i$ denotes the imaginary unit; the multiplicity of 1 is $2n_1$ (= $2n_2$), and the multiplicity of $1 \pm i$ is $m$.

Following an approach analogous to that in [17,18], we now characterize the degree of the minimal polynomial of the RBULT preconditioned matrix $P_{RBULT}^{-1}\mathcal{A}$.

Theorem 3.2. The degree of the minimal polynomial of the RBULT preconditioned matrix $P_{RBULT}^{-1}\mathcal{A}$ is at most $m + 1$. Thus, the dimension of the Krylov subspace $\mathcal{K}(P_{RBULT}^{-1}\mathcal{A}, b)$ is at most $m + 1$.


Proof. Using (3.5), the preconditioned matrix $P_{RBULT}^{-1}\mathcal{A}$ can be written as
$$P_{RBULT}^{-1}\mathcal{A} = \begin{bmatrix} I & \Pi\\ 0 & \Xi \end{bmatrix}, \tag{3.9}$$
where $\Pi = \begin{pmatrix} \hat{\Pi}\\ \tilde{\Pi} \end{pmatrix} \in \mathbb{R}^{n\times m}$ and $\Xi \in \mathbb{R}^{m\times m}$. Let $\lambda_i$ ($i = 1,\ldots,m$) be the eigenvalues of $\Xi$. Then the $\lambda_i$ are also eigenvalues of the preconditioned matrix $P_{RBULT}^{-1}\mathcal{A}$. From (3.9), the characteristic polynomial of the preconditioned matrix $P_{RBULT}^{-1}\mathcal{A}$, evaluated at $P_{RBULT}^{-1}\mathcal{A}$ itself, is
$$(P_{RBULT}^{-1}\mathcal{A} - I)^n\prod_{i=1}^{m}(P_{RBULT}^{-1}\mathcal{A} - \lambda_i I).$$

Because the $\lambda_i$ are the eigenvalues of the matrix $\Xi$, using the Hamilton–Cayley theorem we have
$$\prod_{i=1}^{m}(\Xi - \lambda_i I_{m\times m}) = 0.$$

Thus, the polynomial
$$P_{m+1}(P_{RBULT}^{-1}\mathcal{A}) \triangleq (P_{RBULT}^{-1}\mathcal{A} - I)\prod_{i=1}^{m}(P_{RBULT}^{-1}\mathcal{A} - \lambda_i I) = \begin{pmatrix} 0_{n\times n} & \Pi\cdot\prod_{i=1}^{m}(\Xi - \lambda_i I_{m\times m})\\ 0_{m\times n} & (\Xi - I)\prod_{i=1}^{m}(\Xi - \lambda_i I_{m\times m}) \end{pmatrix} = 0. \tag{3.10}$$
Therefore, the degree of the minimal polynomial of the preconditioned matrix $P_{RBULT}^{-1}\mathcal{A}$ is at most $m + 1$. From [10], we know that the degree of the minimal polynomial is equal to the dimension of the corresponding Krylov subspace $\mathcal{K}(P_{RBULT}^{-1}\mathcal{A}, b)$ (for general $b$). Consequently, the dimension of the Krylov subspace $\mathcal{K}(P_{RBULT}^{-1}\mathcal{A}, b)$ is also at most $m + 1$. □
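Theorem 3.2 can be observed directly: the Krylov matrix of $P_{RBULT}^{-1}\mathcal{A}$ generated from a random vector has rank at most $m + 1$, so full GMRES terminates within $m + 1$ steps. A hedged NumPy sketch (toy random blocks as in the earlier sketches):

```python
import numpy as np

rng = np.random.default_rng(4)
n1 = n2 = 10; m = 4; alpha = 3.0
A1 = np.eye(n1) + 0.1 * rng.standard_normal((n1, n1))
A2 = np.eye(n2) + 0.1 * rng.standard_normal((n2, n2))
B1 = rng.standard_normal((m, n1)); B2 = rng.standard_normal((m, n2))
G = rng.standard_normal((m, m)); C = G @ G.T
A = np.block([[A1, np.zeros((n1, n2)), B1.T],
              [np.zeros((n2, n1)), A2, B2.T],
              [-B1, -B2, C]])
L = np.block([[A1, np.zeros((n1, n2)), np.zeros((n1, m))],
              [np.zeros((n2, n1)), A2, np.zeros((n2, m))],
              [-B1, -B2, alpha * np.eye(m)]])
U = np.block([[alpha * np.eye(n1), np.zeros((n1, n2)), B1.T],
              [np.zeros((n2, n1)), alpha * np.eye(n2), B2.T],
              [np.zeros((m, n1)), np.zeros((m, n2)), alpha * np.eye(m) + C]])
P = (L @ U) / alpha

M = np.linalg.solve(P, A)               # preconditioned matrix
b = rng.standard_normal(n1 + n2 + m)
cols = [b]
for _ in range(m + 2):                  # a few more powers than strictly needed
    cols.append(M @ cols[-1])
K = np.column_stack(cols)
print(np.linalg.matrix_rank(K), "<=", m + 1)   # Krylov dimension vs. bound
```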

3.1. Theoretical information for determining the parameter α

From Theorem 3.1 we see that the spectral radius of the RBULT preconditioned matrix depends on the parameter $\alpha$. Thus, it is necessary to determine how to select the parameter $\alpha$ in practice. We find, however, that the parameter $\alpha$ is related to the eigenvalues of the matrix $(\alpha I + C)^{-1}(C + B_1A_1^{-1}B_1^T + B_2A_2^{-1}B_2^T)$, and it is difficult to compute the theoretically optimal parameter $\alpha$ for large-scale matrices $A_1$, $A_2$, $B_1$, $B_2$ and $C$. There are some ways to estimate this parameter in practice. In fact, in [23] the authors give a way to choose the parameter $\alpha$, i.e., $\alpha = \frac{\|B_1B_1^T + B_2B_2^T\|_2}{\|C\|_2^2}$. However, we then need to compute

the maximum singular values of the matrices $B_1B_1^T + B_2B_2^T$ and $C$, which is not practical and is very expensive when the dimension of these matrices is large. Therefore it is necessary to approximate the parameter $\alpha$ for use in practical computations.

Table 1. GMRES(20) numerical results for the Oseen problem with ν = 0.1.

| Grids | | RBULT | MRS | GRS | MDS | HSS |
|---|---|---|---|---|---|---|
| 16 × 16 | (α, β) | (2100, –) | (10, 1) | (1, –) | (0.03, –) | (0.06, –) |
| | IT | 5 | 13 | 21 | 23 | 48 |
| | CPU | 0.0788 | 0.1790 | 0.3110 | 0.3177 | 0.8556 |
| 32 × 32 | (α, β) | (270, –) | (1, 0.008) | (1.5, –) | (0.02, –) | (0.033, –) |
| | IT | 8 | 11 | 24 | 22 | 117 |
| | CPU | 0.3518 | 0.4975 | 1.0412 | 1.0626 | 11.7048 |
| 64 × 64 | (α, β) | (30, –) | (1, 0.008) | (0.5, –) | (0.0004, –) | (0.005, –) |
| | IT | 8 | 12 | 20 | 22 | 262 |
| | CPU | 1.1042 | 2.5356 | 3.7707 | 3.8048 | 137.2467 |
| 128 × 128 | (α, β) | (10, –) | (1, 1×10⁻⁶) | (0.3, –) | (2×10⁻⁵, –) | (4×10⁻⁶, –) |
| | IT | 11 | 25 | 35 | 34 | 475 |
| | CPU | 4.0100 | 10.9852 | 18.5095 | 20.7562 | 262.6086 |

Table 2. GMRES(20) numerical results for the Oseen problem with ν = 0.05.

| Grids | | RBULT | MRS | GRS | MDS | HSS |
|---|---|---|---|---|---|---|
| 16 × 16 | (α, β) | (2500, –) | (8, 1) | (10, –) | (0.02, –) | (0.033, –) |
| | IT | 6 | 13 | 21 | 23 | 47 |
| | CPU | 0.0809 | 0.1698 | 0.2627 | 0.2701 | 0.8199 |
| 32 × 32 | (α, β) | (300, –) | (1000, 10) | (20, –) | (0.002, –) | (0.002, –) |
| | IT | 8 | 15 | 21 | 22 | 105 |
| | CPU | 0.7937 | 1.1165 | 1.4702 | 1.4930 | 8.4210 |
| 64 × 64 | (α, β) | (35, –) | (10, 0.08) | (4, –) | (0.0005, –) | (0.005, –) |
| | IT | 8 | 13 | 23 | 24 | 260 |
| | CPU | 1.1223 | 3.7755 | 6.9657 | 7.0663 | 138.1838 |
| 128 × 128 | (α, β) | (15, –) | (1000, 105) | (0.2, –) | (5×10⁻⁶, –) | (1×10⁻⁶, –) |
| | IT | 12 | 28 | 36 | 37 | 492 |
| | CPU | 4.5129 | 17.1822 | 24.3450 | 29.8324 | 296.5758 |

Based on the algebraic estimation technique [15], we may expect $P_{RBULT}$ in (2.5) to be as close to $\mathcal{A}$ as possible, i.e., $R_{RBULT} \triangleq P_{RBULT} - \mathcal{A} \approx 0$. Under this assumption, the RBULT preconditioned matrix will have a clustered eigenvalue distribution. To this end, we define the function $\Phi(\alpha) = \|R_{RBULT}(\alpha)\|_F^2$ with respect to $\alpha$, where $\|\cdot\|_F$ denotes the Frobenius norm, and choose the parameter $\alpha$ by minimizing $\Phi(\alpha)$. Explicitly, denoting $W = \alpha I - \frac{1}{\alpha}B_1B_1^T - \frac{1}{\alpha}B_2B_2^T$,
$$
\begin{aligned}
\Phi(\alpha) &= \|R_{RBULT}(\alpha)\|_F^2 = \operatorname{tr}\big(R_{RBULT}(\alpha)R_{RBULT}^*(\alpha)\big)\\
&= \operatorname{tr}\Big[\Big(\tfrac{1}{\alpha}A_1B_1^T - B_1^T\Big)\Big(\tfrac{1}{\alpha}B_1A_1^T - B_1\Big)\Big] + \operatorname{tr}\Big[\Big(\tfrac{1}{\alpha}A_2B_2^T - B_2^T\Big)\Big(\tfrac{1}{\alpha}B_2A_2^T - B_2\Big)\Big] + \operatorname{tr}\big(WW^T\big)\\
&= \tfrac{1}{\alpha^2}\operatorname{tr}(A_1B_1^TB_1A_1^T) - \tfrac{1}{\alpha}\operatorname{tr}(A_1B_1^TB_1) - \tfrac{1}{\alpha}\operatorname{tr}(B_1^TB_1A_1^T) + \operatorname{tr}(B_1^TB_1)\\
&\quad + \tfrac{1}{\alpha^2}\operatorname{tr}(A_2B_2^TB_2A_2^T) - \tfrac{1}{\alpha}\operatorname{tr}(A_2B_2^TB_2) - \tfrac{1}{\alpha}\operatorname{tr}(B_2^TB_2A_2^T) + \operatorname{tr}(B_2^TB_2)\\
&\quad + \alpha^2 m - 2\operatorname{tr}(B_1B_1^T) - 2\operatorname{tr}(B_2B_2^T)\\
&\quad + \tfrac{1}{\alpha^2}\big[\operatorname{tr}(B_1B_1^TB_1B_1^T) + \operatorname{tr}(B_1B_1^TB_2B_2^T) + \operatorname{tr}(B_2B_2^TB_1B_1^T) + \operatorname{tr}(B_2B_2^TB_2B_2^T)\big],
\end{aligned}
$$
where tr(·) and the superscript $T$ denote the trace and transpose of a matrix, respectively. Moreover, differentiating with respect to $\alpha$, we obtain the function
$$
\begin{aligned}
\varphi(\alpha) = \frac{\partial\Phi(\alpha)}{\partial\alpha} &= -\frac{2}{\alpha^3}\operatorname{tr}(A_1B_1^TB_1A_1^T) + \frac{1}{\alpha^2}\operatorname{tr}(A_1B_1^TB_1) + \frac{1}{\alpha^2}\operatorname{tr}(B_1^TB_1A_1^T)\\
&\quad - \frac{2}{\alpha^3}\operatorname{tr}(A_2B_2^TB_2A_2^T) + \frac{1}{\alpha^2}\operatorname{tr}(A_2B_2^TB_2) + \frac{1}{\alpha^2}\operatorname{tr}(B_2^TB_2A_2^T) + 2\alpha m\\
&\quad - \frac{2}{\alpha^3}\operatorname{tr}(B_1B_1^TB_1B_1^T) - \frac{2}{\alpha^3}\operatorname{tr}(B_1B_1^TB_2B_2^T) - \frac{2}{\alpha^3}\operatorname{tr}(B_2B_2^TB_1B_1^T) - \frac{2}{\alpha^3}\operatorname{tr}(B_2B_2^TB_2B_2^T).
\end{aligned}
$$

Notice that the stationary point of $\Phi(\alpha)$ is a root of $\varphi(\alpha) = 0$. Since the parameter in the RBULT preconditioning method must be positive, solutions of $\varphi(\alpha) = 0$ that are complex or negative are not considered. If more than one solution is positive, then by a simple comparison the global minimizer of $\Phi(\alpha)$ among them is the desired value. Through numerical experiments (see Tables 1 and 2), we found that the positive parameter $\alpha$ obtained by this algebraic estimation technique is very effective. In addition, Fourier-based analysis of the iteration operator can provide insight into the choice of the optimal parameter and yield estimates of the asymptotic convergence rate in terms of the discretization parameter; see [4,24]. Using Fourier analysis to determine the optimal parameter of the RBULT method will be the focus of our future work.
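The stationary-point condition is cheap to evaluate. The following hedged sketch (ours, with toy blocks) implements $\varphi(\alpha)$ via the trace formula and locates its positive root with `scipy.optimize.brentq`; the bracket is valid here because $\varphi(\alpha) \to -\infty$ as $\alpha \to 0^+$ and $\varphi(\alpha) \to +\infty$ as $\alpha \to \infty$ whenever the quartic trace terms are positive:

```python
import numpy as np
from scipy.optimize import brentq

def phi(alpha, A1, A2, B1, B2):
    """phi(alpha) = d/d(alpha) of Phi(alpha) = ||P_RBULT(alpha) - A||_F^2."""
    t = np.trace
    m = B1.shape[0]
    S1, S2 = B1 @ B1.T, B2 @ B2.T
    quart = (t(A1 @ B1.T @ B1 @ A1.T) + t(A2 @ B2.T @ B2 @ A2.T)
             + t(S1 @ S1) + t(S1 @ S2) + t(S2 @ S1) + t(S2 @ S2))
    quad = (t(A1 @ B1.T @ B1) + t(B1.T @ B1 @ A1.T)
            + t(A2 @ B2.T @ B2) + t(B2.T @ B2 @ A2.T))
    return -2.0 * quart / alpha**3 + quad / alpha**2 + 2.0 * alpha * m

rng = np.random.default_rng(5)
n1 = n2 = 8; m = 4
A1 = np.eye(n1) + 0.1 * rng.standard_normal((n1, n1))
A2 = np.eye(n2) + 0.1 * rng.standard_normal((n2, n2))
B1 = rng.standard_normal((m, n1)); B2 = rng.standard_normal((m, n2))

alpha_star = brentq(lambda a: phi(a, A1, A2, B1, B2), 1e-3, 1e3)
print("estimated parameter alpha:", alpha_star)
```

For the small toy problem the estimate can be cross-checked against a finite difference of $\|P_{RBULT}(\alpha) - \mathcal{A}\|_F^2$ computed directly from (2.5).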


4. Numerical experiments

In this section, we provide a numerical example to demonstrate the effectiveness of the proposed preconditioner. We consider the generalized saddle point problem arising from the discretization of the 2D linearized steady-state Navier–Stokes equations, i.e., the steady Oseen equation of the form

$$\begin{cases}
-\nu\Delta u + \omega\cdot\nabla u + \nabla p = f,\\
\nabla\cdot u = 0,
\end{cases} \quad\text{in } \Omega, \tag{4.1}$$

where $\Omega$ is a bounded domain, $\nu > 0$ is the viscosity, and $\omega$ is the wind (convection) field. The vector field $u$ stands for the velocity and $p$ represents the pressure. We use the IFISS software package developed by Elman et al. [25] to generate discretizations of the ''regularized'' two-dimensional lid-driven cavity problem for the Oseen equation (4.1). The mixed finite element used here is the bilinear pressure Q1–P0 pair with local stabilization, where 0.25 is used as the stabilization parameter. We use GMRES(20) in conjunction with the preconditioners GRS [17] and MDS [6]. We also compare the results of the GRS and MDS preconditioners with those of the Hermitian and skew-Hermitian splitting (HSS) preconditioner [11]. All runs are started from the initial zero vector and terminated if the current iterate satisfies $\|b - \mathcal{A}u_k\|_2/\|b\|_2 < 10^{-6}$ or if the prescribed iteration number $\kappa_{\max} = 1000$ is exceeded. In actual computations, the subsystems of linear equations arising in the applications of the preconditioners are solved by LU factorization. The symbols IT and CPU stand for the iteration counts and total CPU time, respectively. All experiments were run on a PC using MATLAB 2012a under the Windows 7 operating system. To implement all the preconditioners mentioned above efficiently, we need to choose the parameter $\alpha$ appropriately, since the analytic determination of the parameter that results in the fastest convergence of the preconditioned GMRES iteration appears to be quite a difficult problem, especially for the Oseen problem. In Table 1, we report numerical results of exact solvers for the Oseen problem with $\nu = 0.1$ on uniform grids of increasing size. From this table, we see that the iteration counts and the elapsed CPU times of the RBULT preconditioned GMRES method are much smaller than those of the MRS, GRS, MDS, and HSS preconditioned GMRES methods. The reason may be that the relaxed block upper–lower triangular iteration requires considerably less computational work per application than the others, which in turn confirms the theoretical analysis presented in Section 2. To better show the advantage of the RBULT preconditioner, we also examined the case $\nu = 0.05$; a similar result can be seen in Table 2. It is worth noting that with a relatively large parameter $\alpha$, the superiority of the RBULT preconditioner becomes more and more apparent. In order to further illustrate its advantage over the other preconditioners, the eigenvalue distributions of the original and the five preconditioned matrices are shown in Figs. 1 and 2. It can be observed that the eigenvalues of the preconditioned matrix $P_{RBULT}^{-1}\mathcal{A}$ are more tightly clustered around one than those of $P_{MRS}^{-1}\mathcal{A}$, $P_{GRS}^{-1}\mathcal{A}$, $P_{MDS}^{-1}\mathcal{A}$ and $P_{HSS}^{-1}\mathcal{A}$. Therefore, the relaxed block upper–lower triangular iteration is very effective as a preconditioner for solving generalized saddle point problems.
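The experimental setup can be reproduced in outline with SciPy's restarted GMRES, passing the RBULT application (the five-step algorithm of Section 2) as the preconditioner M. This hedged sketch uses a random toy saddle point system rather than the IFISS Oseen matrices; all names and sizes are ours:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(6)
n1 = n2 = 40; m = 20; alpha = 30.0
A1 = np.eye(n1) + 0.1 * rng.standard_normal((n1, n1))
A2 = np.eye(n2) + 0.1 * rng.standard_normal((n2, n2))
B1 = rng.standard_normal((m, n1)); B2 = rng.standard_normal((m, n2))
G = rng.standard_normal((m, m)); C = 0.25 * G @ G.T
A = np.block([[A1, np.zeros((n1, n2)), B1.T],
              [np.zeros((n2, n1)), A2, B2.T],
              [-B1, -B2, C]])

# Factor the three small coefficient matrices once, up front.
A1_lu, A2_lu = lu_factor(A1), lu_factor(A2)
aC_lu = lu_factor(alpha * np.eye(m) + C)

def apply_rbult_inv(r):
    r1, r2, r3 = r[:n1], r[n1:n1 + n2], r[n1 + n2:]
    y1, y2 = lu_solve(A1_lu, r1), lu_solve(A2_lu, r2)
    z3 = lu_solve(aC_lu, r3 + B1 @ y1 + B2 @ y2)
    return np.concatenate([y1 - B1.T @ z3 / alpha, y2 - B2.T @ z3 / alpha, z3])

N = n1 + n2 + m
Mop = LinearOperator((N, N), matvec=apply_rbult_inv)   # left preconditioner
b = rng.standard_normal(N)
x, info = gmres(A, b, M=Mop, restart=20)               # GMRES(20), as in the tables
print(info, np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```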

5. Conclusion

We have presented the RBULT preconditioning method for solving the generalized saddle point problem. The concrete preconditioning scheme of the RBULT method was discussed, and the spectral distribution and the degree of the minimal polynomial of the preconditioned matrix were analyzed. Furthermore, we provided theoretical guidance for determining $\alpha$. Finally, we derived upper and lower bounds on the eigenvalues of the preconditioned matrix $P_{RBULT}^{-1}\mathcal{A}$ to illustrate that, in practical applications, the eigenvalues of the preconditioned matrix $P_{RBULT}^{-1}\mathcal{A}$ are more tightly clustered than those of the preconditioned matrices $P_{MRS}^{-1}\mathcal{A}$, $P_{GRS}^{-1}\mathcal{A}$, $P_{MDS}^{-1}\mathcal{A}$ and $P_{HSS}^{-1}\mathcal{A}$. Future research will include further analysis of the preconditioned iteration, such as using Fourier analysis to estimate the optimal value of the relaxation parameter $\alpha$, eigenvector distributions of the preconditioned matrix $P_{RBULT}^{-1}\mathcal{A}$, and extension to the 3D case.

Acknowledgments

The authors are very much indebted to the anonymous referees and the editor for their constructive comments and valuable suggestions, which greatly improved the original manuscript of this paper. This work was supported by the National Natural Science Foundation of China (Nos. 11701456, 11801452, 11571004), the Fundamental Research Project of Natural Science in Shaanxi Province-General Project (Youth) (Grant Nos. 2019JQ-415, 2019JQ-196), the Fundamental Research Funds for the Central Universities (Nos. 2452017219, 2452018017), and the Innovation and Entrepreneurship Training Program for College Students of Shaanxi Province (S201910712132).


Fig. 1. The eigenvalue distribution of the original and five preconditioned matrices (32 × 32 uniform grids, ν = 0.1).

Fig. 2. The eigenvalue distribution of the original and five preconditioned matrices (32 × 32 uniform grids, ν = 0.05).

Appendix

It is known that the convergence of a Krylov subspace method is influenced by the spectral distribution of the preconditioned matrix. In general, a favorable convergence rate is often associated with a clustering of most of the eigenvalues around 1 and away from zero [10]. Next, to illustrate and compare the eigenvalue distribution of the RBULT method, we analyze the eigenvalues of the preconditioned matrices $P_{RBULT}^{-1}\mathcal{A}$, $P_{MRS}^{-1}\mathcal{A}$, $P_{GRS}^{-1}\mathcal{A}$, $P_{MDS}^{-1}\mathcal{A}$ and $P_{HSS}^{-1}\mathcal{A}$. Through calculation, we obtain that the preconditioned matrices $P_{RBULT}^{-1}\mathcal{A}$, $P_{MRS}^{-1}\mathcal{A}$ [18], and $P_{GRS}^{-1}\mathcal{A}$ [17],
$$P_{RBULT}^{-1}\mathcal{A} = \begin{bmatrix} I & 0 & \hat{\Pi}\\ 0 & I & \tilde{\Pi}\\ 0 & 0 & \Xi \end{bmatrix},\qquad P_{MRS}^{-1}\mathcal{A} = \begin{bmatrix} I & 0 & \Delta_1\\ 0 & I & \Delta_2\\ 0 & 0 & \Delta_3 \end{bmatrix},\qquad P_{GRS}^{-1}\mathcal{A} = \begin{bmatrix} I & 0 & \Theta_1\\ 0 & I & \Theta_2\\ 0 & 0 & \Theta_3 \end{bmatrix}, \tag{A.1}$$


all share the eigenvalue 1 with multiplicity at least $n_1 + n_2$, and their remaining eigenvalues satisfy, respectively, the following generalized eigenvalue problems:
$$(C + B_1A_1^{-1}B_1^T + B_2A_2^{-1}B_2^T)x = \lambda(\alpha I + C)x, \tag{A.2}$$
$$(C + B_1A_1^{-1}B_1^T + B_2A_2^{-1}B_2^T)x = \lambda_1(\alpha I + C + B_2A_2^{-1}B_2^T)x, \tag{A.3}$$
$$\big(C + B_1A_1^{-1}B_1^T + B_2\hat{A}_2^{-1}B_2^T - B_2\hat{A}_2^{-1}B_2^T(\alpha I + C)^{-1}(C + B_1A_1^{-1}B_1^T)\big)x = \lambda_2(\alpha I + C)x, \tag{A.4}$$

where $\hat{A}_2 = A_2 + B_2^T(\alpha I + C)^{-1}B_2$. The preconditioned matrices $P_{MDS}^{-1}\mathcal{A}$ [6] and $P_{HSS}^{-1}\mathcal{A}$ [11] are, respectively, of the forms
$$P_{MDS}^{-1}\mathcal{A} = \begin{bmatrix} N^{-1} & 0 & -\frac{1}{\alpha}N^{-1}B_1\\ -M^{-1}B_2(\alpha I + C)^{-1}B_1^TN^{-1} & M^{-1} & -M^{-1}B_2(\alpha I + C)^{-1}\big(I - \frac{1}{\alpha}B_1^TN^{-1}B_1\big)\\ PB_1^TN^{-1} & (\alpha I + C)^{-1}B_2^T & P\big(I - \frac{1}{\alpha}B_1^TN^{-1}B_1\big) \end{bmatrix}\times\begin{bmatrix} A_1 & 0 & B_1\\ 0 & A_2 & B_2\\ -B_1^T & -B_2^T & C \end{bmatrix}, \tag{A.5}$$
where $M = \alpha I + A_2 + B_2(\alpha I + C)^{-1}B_2^T$, $N = \alpha I + A_1 + \frac{1}{\alpha}B_1B_1^T$, and $P$ denotes an $m\times m$ block whose precise form is immaterial for the present comparison (note that in (A.5) and (A.6) the blocks $B_i$ and $B_i^T$ appear with roles interchanged relative to (1.1)), and
$$P_{HSS}^{-1}\mathcal{A} = \frac{1}{2\alpha}\begin{bmatrix} W_1A_1 - (\alpha I + H_1)B_1B_1^T & -(\alpha I + H_1)B_1B_2^T & W_1B_1 + (\alpha I + H_1)B_1C\\ -(\alpha I + H_2)B_2B_1^T & W_2A_2 - (\alpha I + H_2)B_2B_2^T & (\alpha I + H_2)(\alpha I + S_2)B_2 + (\alpha I + H_2)B_2C\\ -\alpha B_1^TA_1 - \alpha^2B_1^T & -\alpha B_2^TA_2 - \alpha^2B_2^T & -\alpha B_1^TB_1 - \alpha B_2^TB_2 + \alpha^2C \end{bmatrix}, \tag{A.6}$$

where $W_1 = (\alpha I + H_1)(\alpha I + S_1)$ and $W_2 = (\alpha I + H_2)(\alpha I + S_2)$. The structure of the preconditioned matrices $P_{MDS}^{-1}\mathcal{A}$ and $P_{HSS}^{-1}\mathcal{A}$ is very complex, and their eigenvalues are not as easy to characterize and compare mathematically as those of the first three preconditioned matrices. Moreover, the remaining eigenvalues of $P_{RBULT}^{-1}\mathcal{A}$, $P_{MRS}^{-1}\mathcal{A}$ and $P_{GRS}^{-1}\mathcal{A}$ depend on the specific problem and on the parameter $\alpha$, which makes it difficult to calculate each of the remaining eigenvalues of the corresponding matrices concretely and thus to show directly that the eigenvalues of $P_{RBULT}^{-1}\mathcal{A}$ are more tightly clustered; it is therefore necessary to verify this numerically on practical problems, see [5,11,17]. It is noteworthy that although we cannot directly calculate all eigenvalues satisfying the generalized eigenvalue problems (A.2)–(A.4), by analyzing the upper and lower bounds of these eigenvalues we can give an approximate interval for their distribution. Next, we give the bounds for the remaining eigenvalues of the preconditioned matrix $P_{RBULT}^{-1}\mathcal{A}$ in detail. The techniques used here can also be exploited to discuss the other two methods similarly; for brevity, we omit these similar cases.

Corollary A.1. Under the conditions of Theorem 3.1, if we assume that $|\varsigma| \le \varsigma_{\max}$ and $|\xi| \le \xi_{\max}$, and $\varsigma + \xi \ne 0$, then

$\lambda = 1$ or $\lambda$ satisfies
$$\triangle_{\min} \le \operatorname{Re}(\lambda) \le \triangle_{\max}, \qquad |\operatorname{Im}(\lambda)| \le \Upsilon_{\max},$$
where $\operatorname{Re}(\lambda)$ and $\operatorname{Im}(\lambda)$ are the real and imaginary parts of $\lambda$, and
$$\triangle_{\min} = \frac{\lambda_{\min}(H_1) + \lambda_{\min}(H_2) + \lambda_{\min}(C)}{\alpha + \lambda_{\max}(C)}, \tag{A.7}$$
$$\triangle_{\max} = \frac{\lambda_{\max}(H_1) + \lambda_{\max}(H_2) + \lambda_{\max}(C)}{\alpha + \lambda_{\min}(C)}, \tag{A.8}$$
$$\Upsilon_{\max} = \frac{\varsigma_{\max} + \xi_{\max}}{\alpha + \lambda_{\min}(C)}. \tag{A.9}$$
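The bounds (A.7)–(A.9) are cheap to check numerically. In this hedged sketch (ours; toy random blocks, with $H_1$, $H_2$ taken as the symmetric parts $\frac{1}{2}B_i(A_i^{-1} + A_i^{-T})B_i^T$ used in the proof below), the real parts of the non-unit eigenvalues of $P_{RBULT}^{-1}\mathcal{A}$ fall inside $[\triangle_{\min}, \triangle_{\max}]$:

```python
import numpy as np

rng = np.random.default_rng(7)
n1 = n2 = 8; m = 4; alpha = 5.0
A1 = np.eye(n1) + 0.1 * rng.standard_normal((n1, n1))
A2 = np.eye(n2) + 0.1 * rng.standard_normal((n2, n2))
B1 = rng.standard_normal((m, n1)); B2 = rng.standard_normal((m, n2))
G = rng.standard_normal((m, m)); C = G @ G.T

A1inv, A2inv = np.linalg.inv(A1), np.linalg.inv(A2)
H1 = 0.5 * B1 @ (A1inv + A1inv.T) @ B1.T        # symmetric part of B1 A1^{-1} B1^T
H2 = 0.5 * B2 @ (A2inv + A2inv.T) @ B2.T
lam = np.linalg.eigvalsh                         # ascending eigenvalues of symmetric input
d_min = (lam(H1)[0] + lam(H2)[0] + lam(C)[0]) / (alpha + lam(C)[-1])
d_max = (lam(H1)[-1] + lam(H2)[-1] + lam(C)[-1]) / (alpha + lam(C)[0])

# Non-unit eigenvalues of P_RBULT^{-1} A are those of Xi (Theorem 3.1).
Xi = np.linalg.solve(alpha * np.eye(m) + C,
                     C + B1 @ A1inv @ B1.T + B2 @ A2inv @ B2.T)
ev = np.linalg.eigvals(Xi)
print(d_min <= ev.real.min() + 1e-12, ev.real.max() <= d_max + 1e-12)
```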

Proof. Denote $\mu + \varsigma i := \frac{x^*B_1A_1^{-1}B_1^Tx}{x^*x}$ ($\mu > 0$), $\eta + \xi i := \frac{x^*B_2A_2^{-1}B_2^Tx}{x^*x}$ ($\eta > 0$), $c := \frac{x^*Cx}{x^*x}$ ($c \ge 0$), and let $H_1 = \frac{1}{2}B_1\big(A_1^{-1} + (A_1^{-1})^T\big)B_1^T$ and $H_2 = \frac{1}{2}B_2\big(A_2^{-1} + (A_2^{-1})^T\big)B_2^T$. Then

$$\lambda_{\min}(H_1) \le \mu \le \lambda_{\max}(H_1), \qquad \lambda_{\min}(H_2) \le \eta \le \lambda_{\max}(H_2),$$
where $\lambda_{\min}(H)$ and $\lambda_{\max}(H)$ represent the minimum and maximum eigenvalues of the symmetric matrix $H$, respectively. Since the matrices $B_1$ and $B_2$ have full row rank and $C$ is symmetric positive semi-definite, the Rayleigh quotients $a := \frac{x^*B_1B_1^Tx}{x^*x}$ and $b := \frac{x^*B_2B_2^Tx}{x^*x}$ satisfy
$$0 < \lambda_{\min}(B_1B_1^T) \le a \le \lambda_{\max}(B_1B_1^T), \qquad 0 < \lambda_{\min}(B_2B_2^T) \le b \le \lambda_{\max}(B_2B_2^T), \tag{A.10}$$
and
$$0 \le \lambda_{\min}(C) \le c \le \lambda_{\max}(C). \tag{A.11}$$

Multiplying the equality (3.7) from the left by $x^*$ with $x \ne 0$, we obtain that $\lambda$ satisfies
$$\lambda = \frac{x^*Cx + x^*B_1A_1^{-1}B_1^Tx + x^*B_2A_2^{-1}B_2^Tx}{\alpha x^*x + x^*Cx}. \tag{A.12}$$
Using the above notation, when $\varsigma + \xi \ne 0$, (A.12) reduces to
$$\lambda = \frac{\mu + \eta + c + (\varsigma + \xi)i}{\alpha + c}. \tag{A.13}$$
Furthermore, we have
$$|\varsigma| \le \varsigma_{\max} \quad\text{and}\quad |\xi| \le \xi_{\max}. \tag{A.14}$$

Then from (A.13) and (A.14) we obtain the upper and lower bounds of the eigenvalues of the preconditioned matrix $P_{RBULT}^{-1}\mathcal{A}$. This completes the proof. □

By observing the three formulas (A.7)–(A.9), we find that, for solving the special generalized saddle point problems obtained by discretization of the incompressible Navier–Stokes equations, we should choose an appropriately large parameter $\alpha$ so as to make the remaining eigenvalues of the preconditioned matrix $P_{RBULT}^{-1}\mathcal{A}$ more tightly clustered and thereby greatly improve the efficiency of the RBULT preconditioned Krylov subspace method.

References

[1] S.E. Rogers, D. Kwak, Steady and unsteady solutions of the incompressible Navier–Stokes equations, AIAA J. 29 (1991) 603–610.
[2] F. Mirzaee, S. Alipour, N. Samadyar, A numerical approach for solving weakly singular partial integro-differential equations via two-dimensional orthonormal Bernstein polynomials with the convergence analysis, Numer. Methods Partial Differential Equations (2018) http://dx.doi.org/10.1002/num.22316.
[3] F. Mirzaee, N. Samadyar, S. Alipour, Numerical solution of high order linear complex differential equations via complex operational matrix method, SeMA J. (2018) http://dx.doi.org/10.1007/s40324-018-0151-7.
[4] M. Benzi, M.K. Ng, Q. Niu, Z. Wang, A relaxed dimensional factorization preconditioner for the incompressible Navier–Stokes equations, J. Comput. Phys. 230 (2011) 6185–6202.
[5] M. Benzi, X.-P. Guo, A dimensional split preconditioner for Stokes and linearized Navier–Stokes equations, Appl. Numer. Math. 61 (2011) 66–76.
[6] Y. Cao, L.-Q. Yao, M.-Q. Jiang, A modified dimensional split preconditioner for generalized saddle point problems, J. Comput. Appl. Math. 250 (2013) 70–82.
[7] M. Benzi, G.H. Golub, J. Liesen, Numerical solution of saddle point problems, Acta Numer. 14 (2005) 1–137.
[8] H.C. Elman, D.J. Silvester, A.J. Wathen, Finite Elements and Fast Iterative Solvers, Oxford University Press, New York, 2006.
[9] C. Greif, E. Moulding, D. Orban, Bounds on eigenvalues of matrices arising from interior-point methods, SIAM J. Optim. 24 (2014) 49–83.
[10] Y. Saad, Iterative Methods for Sparse Linear Systems, second ed., SIAM, Philadelphia, PA, 2003.
[11] Z.-Z. Bai, G.H. Golub, M.K. Ng, Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. Appl. 24 (2003) 603–626.
[12] X.-N. Wu, G.H. Golub, J.A. Cuminato, J.-Y. Yuan, Symmetric-triangular decomposition and its applications Part II: Preconditioners for indefinite systems, BIT 48 (2008) 139–162.
[13] Z.-Z. Bai, M.K. Ng, Z.-Q. Wang, Constraint preconditioners for symmetric indefinite matrices, SIAM J. Matrix Anal. Appl. 31 (2009) 410–433.
[14] M. Benzi, Z. Wang, Analysis of augmented Lagrangian-based preconditioners for the steady incompressible Navier–Stokes equations, SIAM J. Sci. Comput. 33 (2011) 2761–2784.
[15] Y.-M. Huang, A practical formula for computing optimal parameters in the HSS iteration methods, J. Comput. Appl. Math. 255 (2014) 142–149.
[16] N.-B. Tan, T.-Z. Huang, Z.-J. Hu, A relaxed splitting preconditioner for the incompressible Navier–Stokes equations, J. Appl. Math. (2012) 402490, 12 pages.
[17] Y. Cao, S.-X. Miao, Y.-S. Cui, A relaxed splitting preconditioner for generalized saddle point problems, Comput. Appl. Math. 34 (2015) 865–879.
[18] H.-T. Fan, X.-Y. Zhu, A modified relaxed splitting preconditioner for generalized saddle point problems from the incompressible Navier–Stokes equations, Appl. Math. Lett. 55 (2016) 18–26.
[19] Y.-F. Ke, C.-F. Ma, An inexact modified relaxed splitting preconditioner for the generalized saddle point problems from the incompressible Navier–Stokes equations, Numer. Algorithms 75 (2017) 1103–1121.
[20] Y.-F. Ke, C.-F. Ma, A new relaxed splitting preconditioner for the generalized saddle point problems from the incompressible Navier–Stokes equations, Comput. Appl. Math. 37 (2018) 515–524.
[21] S.-W. Zhou, A.-L. Yang, Y.-J. Wu, A relaxed block-triangular splitting preconditioner for generalized saddle-point problems, Int. J. Comput. Math. 94 (2017) 1609–1623.
[22] J.-T. Zhou, Q. Niu, Substructure preconditioners for a class of structured linear systems of equations, Math. Comput. Model. 52 (2010) 1547–1553.
[23] G.H. Golub, C. Greif, On solving block-structured indefinite linear systems, SIAM J. Sci. Comput. 24 (2003) 2076–2092.
[24] M.J. Gander, Q. Niu, Y.-X. Xu, Analysis of a new dimension-wise splitting iteration with selective relaxation for saddle point problems, BIT 56 (2016) 441–465.
[25] H.C. Elman, A. Ramage, D.J. Silvester, Algorithm 866: IFISS, a MATLAB toolbox for modelling incompressible flow, ACM Trans. Math. Softw. 33 (2007) 14.
Ke, C.-F. Ma, An inexact modified relaxed splitting preconditioner for the generalized saddle point problems from the incompressible Navier–Stokes equations, Numer. Algorithms 75 (2017) 1103–1121. [20] Y.-F. Ke, C.-F. Ma, A new relaxed splitting preconditioner for the generalized saddle point problems from the incompressible Navier–Stokes equations, Comput. Appl. Math. 37 (2018) 515–524. [21] S.-W. Zhou, A.-L. Yang, Y.-J. Wu, A relaxed block-triangular splitting preconditioner for generalized saddle-point problems, Int. J. Comput. Math. 94 (2017) 1609–1623. [22] J.-T. Zhou, Q. Niu, Substructure preconditioners for a class of structured linear systems of equations, Math. Comput. Model. 52 (2010) 1547–1553. [23] G.H. Golub, C. Greif, On solving block-structured indefinite linear systems, SIAM J. Sci. Comput. 24 (2003) 2076–2092. [24] M.J. Gander, Q. Niu, Y.-X. Xu, Analysis of a new dimension-wise splitting iteration with selective relaxation for saddle point problems, BIT 56 (2016) 441–465. [25] H.C. Elman, A. Ramage, D.J. Silvester, Algorithm 866: IFISS, a matlab toolbox for modelling incompressible flow, ACM Trans. Math. Softw. 33 (2007) 14.