Journal of Computational and Applied Mathematics 335 (2018) 185–206
An adaptive BDDC algorithm in variational form for mortar discretizations

Jie Peng(a,1), Shi Shu(a,b,1), Junxian Wang(a,*)

(a) School of Mathematics and Computational Science, Xiangtan University, Xiangtan 411105, China
(b) Hunan Key Laboratory for Computation and Simulation in Science and Engineering, Xiangtan University, Xiangtan 411105, China
Article info

Article history: Received 28 April 2017; received in revised form 8 November 2017.
MSC: 65N30; 65F10; 65N55.
Keywords: Elliptic problems; Mortar methods; BDDC algorithm; Adaptive primal constraints.
Abstract

A balancing domain decomposition by constraints (BDDC) algorithm with adaptive primal constraints in variational form is introduced and analyzed for high-order mortar discretizations of two-dimensional elliptic problems with highly varying and random coefficients. Vector-valued auxiliary spaces and operators with essential properties are defined to describe the variational algorithm, and the coarse space is formed by using a transformation operator on each interface. Compared with the adaptive BDDC algorithms for conforming Galerkin approximations, our algorithm is simpler, because the mortar method considered in this paper involves no continuity constraints at subdomain vertices. The condition number of the preconditioned system is proved to be bounded above by the product of a user-defined tolerance and a constant which depends only on the maximum number of interfaces per subdomain, and is independent of the mesh size and the contrast of the given coefficients. Numerical results show the robustness and efficiency of the algorithm for various model problems.

© 2017 Elsevier B.V. All rights reserved.
1. Introduction

Mortar methods were first introduced by Bernardi, Maday and Patera [1,2] as discretization techniques based on domain decomposition. These techniques are widely applied in many fields of scientific and engineering computation, such as multi-physics models, coupling schemes with different discretizations, and problems with non-matching grids [3-5].

Balancing domain decomposition by constraints (BDDC) algorithms, introduced by Dohrmann [6], are variants of the balancing Neumann-Neumann algorithms for solving Schur complement systems. These algorithms have been extended to the discrete systems arising from various discretization methods, such as conforming Galerkin [7-9], discontinuous Galerkin [10,11], and mortar methods [12-14]. However, these BDDC algorithms require a strong assumption on the coefficients in each subdomain to achieve good performance. To enhance robustness, the selection of good primal constraints should be problem-dependent, which has led to adaptive algorithms for choosing primal constraints [15]. Generalized eigenvalue problems posed on the local problems of each interface shared by two subdomains are used to choose primal constraints adaptively [16-27]. In the work of Klawonn, Radtke and Rheinbach [22], an adaptive coarse space for the dual-primal finite element tearing and interconnecting (FETI-DP) and BDDC methods is obtained by solving generalized eigenvalue problems associated with the edge Schur complements and mass matrices. Another class of eigenvalue problems is introduced to construct the coarse spaces for BDDC
* Corresponding author.
E-mail addresses: [email protected] (J. Peng), [email protected] (S. Shu), [email protected] (J. Wang).
1 These authors contributed equally to this work and should be considered co-first authors.
https://doi.org/10.1016/j.cam.2017.11.031
0377-0427/© 2017 Elsevier B.V. All rights reserved.
algorithms in [16,19], whose eigenvalue problems are defined using the edge Schur complements and parts of the subdomain Schur complements. Recently, eigenvalue problems based on the parallel sum (see [28]) have attracted considerable attention. In Pechstein and Dohrmann [17], eigenvalue problems of this type were first introduced to select the primal constraints for BDDC algorithms, and [18,20,21,23,25-27] extended them to elliptic problems discretized with conforming finite element methods, staggered discontinuous Galerkin methods, and isogeometric analysis, and to vector field problems discretized with Raviart-Thomas finite elements. However, most of the available literature on adaptive BDDC algorithms is in algebraic form (i.e. matrices and vectors), and adaptive BDDC algorithms for mortar discretizations have not previously been discussed.

In this paper, an adaptive BDDC algorithm in variational form for high-order mortar discretizations of two-dimensional elliptic problems with highly varying and random coefficients is introduced and analyzed. Based on a vector-valued function space, we derive the Schur complement variational problem for the Lagrange multiplier variable. Then, scaling operators and transformation operators with essential properties are defined, and the transformation operators are constructed via generalized eigenvalue problems involving the parallel sum. Further, in contrast to the BDDC (or adaptive BDDC) algorithms in variational form [8,29,15,22], by introducing some auxiliary spaces and operators, we arrive at a preconditioned adaptive BDDC algorithm in variational form for mortar discretizations. Compared with conforming Galerkin approximations, we emphasize that the mortar method considered in this paper involves no continuity constraints at subdomain vertices, which simplifies our algorithm considerably.
Using the properties of the involved operators, we prove that the condition number of the adaptive BDDC preconditioned system is bounded by $C\Theta$, where $C$ is a constant depending only on the maximum number of interfaces per subdomain, and $\Theta$ is a given tolerance. Finally, numerical results for various model problems show the robustness of the proposed algorithms and verify the theoretical estimate in both geometrically conforming and nonconforming partitions. In particular, the algorithm with deluxe scaling matrices achieves better computational efficiency than that with multiplicity scaling matrices.

In the following, we introduce some definitions. Assume that $V$ and $W$ are Hilbert spaces and $U = V \oplus W$. The operators $R_V^U : U \to V$ and $I_V^U : V \to U$, called the restriction operator and the interpolation operator respectively, are defined by
$$R_V^U u = v, \quad \forall u = v + w \in U, \ \text{where } v \in V,\ w \in W, \tag{1.1}$$
and
$$I_V^U v = v, \quad \forall v \in V. \tag{1.2}$$
For a given linear operator $L$ from the Hilbert space $U$ to the Hilbert space $V$, the operator $L^T : V \to U$ is defined by $(L^T v, u) = (v, Lu)$, $\forall u \in U$, $v \in V$.

Assume that $V$ is a Hilbert space with inner product $(\cdot,\cdot)_V$. Then for any $\vec{u} = (u_1, u_2, \ldots, u_n)^T$ and $\vec{v} = (v_1, v_2, \ldots, v_m)^T$, where $u_i \in V$ ($i = 1,\ldots,n$) and $v_j \in V$ ($j = 1,\ldots,m$), we define
$$(\vec{u}, \vec{v}^T)_V = \begin{pmatrix} (u_1, v_1)_V & (u_1, v_2)_V & \cdots & (u_1, v_m)_V \\ (u_2, v_1)_V & (u_2, v_2)_V & \cdots & (u_2, v_m)_V \\ \vdots & \vdots & \ddots & \vdots \\ (u_n, v_1)_V & (u_n, v_2)_V & \cdots & (u_n, v_m)_V \end{pmatrix}. \tag{1.3}$$
For any $\vec{\alpha} = (\alpha_1, \ldots, \alpha_n)^T \in \mathbb{R}^n$ and $\vec{\beta} = (\beta_1, \ldots, \beta_m)^T \in \mathbb{R}^m$, it is easy to verify that
$$(\vec{\alpha}^T \vec{u}, (\vec{\beta}^T \vec{v})^T)_V = \vec{\alpha}^T (\vec{u}, \vec{v}^T)_V \, \vec{\beta}. \tag{1.4}$$
The rest of this paper is organized as follows. In Section 2, we present the discretization of a second order elliptic problem with mortar finite elements and its corresponding Schur complement system associated with a vector-valued function space. Some auxiliary spaces and a proper space decomposition are presented in Section 3, while the adaptive BDDC algorithm is introduced in Section 4. The condition number bound of the preconditioned system is analyzed in Section 5, and various numerical experiments are presented in Section 6. Finally, a conclusion is given in Section 7.

2. Model problem and Schur complement system

Consider the following elliptic problem: find $u \in H_0^1(\Omega)$ such that
$$a(u, v) = \langle f, v \rangle, \quad \forall v \in H_0^1(\Omega), \tag{2.1}$$
where
$$a(u, v) = \int_\Omega (\rho \nabla u \cdot \nabla v + \varepsilon u v)\,dx, \quad \langle f, v \rangle = \int_\Omega f v \,dx, \tag{2.2}$$
and $\Omega \subset \mathbb{R}^2$ is a bounded polygonal domain, $f \in L^2(\Omega)$, and the bounded coefficients $\varepsilon \ge 0$ and $\rho \ge \rho_{\min} > 0$ may be random and have high contrast in $\Omega$.
Fig. 1. An example of the sets $\mathcal{E}$ (both black and white), $\mathcal{M}$ (black) and $E_i$ in a geometrically nonconforming partition.
In the following, we define a mortar discrete problem of (2.1) based on a nonoverlapping domain decomposition. We decompose the given region $\Omega$ into polygonal subdomains $\Omega_i$ ($i = 1,\ldots,N$), which satisfy
$$\bar{\Omega} = \bigcup_{k=1}^{N} \bar{\Omega}_k \quad \text{with } \Omega_i \cap \Omega_j = \emptyset,\ i \neq j,$$
where $\partial\Omega_i \cap \partial\Omega_j$ ($i \neq j$) is either empty, a vertex, or a common edge; let $d_i$ be the diameter of $\Omega_i$. Each subdomain $\Omega_i$ is associated with a regular, quasi-uniform triangulation $\mathcal{T}_i$, whose mesh size is denoted by $h_i$. Denote the $P_s$ Lagrange finite element space associated with $\mathcal{T}_i$ by
$$X(\Omega_i) = \{v \in C(\bar{\Omega}_i) : v|_\tau \in P_s,\ \forall \tau \in \mathcal{T}_i,\ v|_{\partial\Omega_i \cap \partial\Omega} = 0\},$$
where $P_s$ denotes the set of all polynomials of degree at most $s$, and $s \in \mathbb{Z}^+$.

If $\partial\Omega_i \cap \partial\Omega_j$ ($i \neq j$) is a common edge, we call it an interface. Each interface $\partial\Omega_i \cap \partial\Omega_j$ ($i \neq j$) is associated with a one-dimensional triangulation, inherited either from $\mathcal{T}_i$ or from $\mathcal{T}_j$. Since the triangulations in different subdomains are independent of each other, they generally do not match at the interfaces. For convenience, we denote the interface by $\Gamma_{ij}$ when its triangulation is inherited from $\mathcal{T}_i$, and by $\Gamma_{ji}$ when it is inherited from $\mathcal{T}_j$. Further, let $\Gamma = \cup \Gamma_{ij}$, and let $M$ denote the number of interfaces. Define
$$\mathcal{E} := \{\Gamma_{ij} : 1 \le i \le N,\ j \in E_i\}, \quad \mathcal{M} := \{\Gamma_{ij} \in \mathcal{E} : h_i \ge h_j\},$$
where $E_i := \{j : \partial\Omega_i \cap \partial\Omega_j \text{ is a common edge},\ 1 \le j \le N,\ j \neq i\}$. Fig. 1 illustrates the elements of the sets $\mathcal{E}$, $\mathcal{M}$ and $E_i$, where we suppose that $h_i \ge h_j$ for each $\Gamma_{ij}$ with $i < j$. The interfaces in $\mathcal{M}$ are called the nonmortars, and those of $\mathcal{E} \setminus \mathcal{M}$ the mortars (see [4]). The discrete Lagrange multiplier space will be associated with the nonmortars. Since there is a one-to-one correspondence between the elements $\Gamma_{ij}$ of $\mathcal{M}$ and the interfaces, we can denote $\mathcal{M}$ by $\{F_k : k = 1,\ldots,M\}$, where $F_k$ is the interface with global index $k$. For any given subdomain $\Omega_i$, let
$$\mathcal{M}_i := \{k : F_k \subset \partial\Omega_i \setminus \partial\Omega,\ 1 \le k \le M\}. \tag{2.3}$$
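The selection rule defining $\mathcal{M}$ (the side with the coarser mesh provides the nonmortar) can be illustrated with a few lines of code. The subdomain pairs and mesh sizes below are made-up data, not taken from the paper's experiments:

```python
# Illustrative sketch of the nonmortar selection rule in the definition of M:
# the side with the larger mesh size h provides the nonmortar.
h = {1: 0.25, 2: 0.125, 3: 0.125, 4: 0.0625}   # made-up mesh size h_i per subdomain
edges = [(1, 2), (1, 3), (2, 4), (3, 4)]        # pairs sharing a common edge

# E holds both orientations Gamma_ij and Gamma_ji; M keeps Gamma_ij with h_i >= h_j
E_set = edges + [(j, i) for (i, j) in edges]
M_set = [(i, j) for (i, j) in E_set if h[i] >= h[j]]

# With no ties in h among neighbors, each interface contributes exactly one nonmortar
assert len(M_set) == len(edges)
assert len({frozenset(p) for p in M_set}) == len(edges)
```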
Let $M(F_k)$ denote the corresponding standard Lagrange multiplier space with respect to the nonmortar edge $F_k$ (see [1,2,12]). Define the extended spaces of $X(\Omega_i)$ and $M(F_k)$ by $X^{(i)} = E^{(i)}(X(\Omega_i))$ and $M_{F_k} = E_k(M(F_k))$, where the trivial extension operators $E^{(i)}$ and $E_k$ satisfy
$$E^{(i)} v = \begin{cases} v & \text{on } \bar{\Omega}_i \\ 0 & \text{on } \bar{\Omega} \setminus \bar{\Omega}_i \end{cases}, \ \forall v \in X(\Omega_i), \quad \text{and} \quad E_k \lambda = \begin{cases} \lambda & \text{on } \bar{F}_k \\ 0 & \text{on } \Gamma \setminus \bar{F}_k \end{cases}, \ \forall \lambda \in M(F_k).$$
Denote the direct sums of $X^{(i)}$ ($i = 1,\ldots,N$) and $M_{F_k}$ ($k = 1,\ldots,M$) by $X_h = \oplus_{i=1}^N X^{(i)}$ and $M_h = \oplus_{k=1}^M M_{F_k}$, respectively. Let $V_h$ denote the vector-valued function space $X_h \times M_h$. The mortar finite element approximation of problem (2.1) is as follows (for the case of the $P_1$ Lagrange finite element space, see [4]).
Find $(u, \lambda) \in V_h$ such that
$$\begin{cases} \tilde{a}(u, v) + b(v, \lambda) = \langle f, v \rangle, & \forall v \in X_h, \\ b(u, q) = 0, & \forall q \in M_h, \end{cases} \tag{2.4}$$
where
$$\tilde{a}(u, v) = \sum_{i=1}^N \tilde{a}_i(u, v), \quad \tilde{a}_i(u, v) = \int_{\Omega_i} (\rho \nabla u \cdot \nabla v + \varepsilon u v)\,dx, \tag{2.5}$$
and
$$b(v, \mu) = \sum_{i=1}^N b_i(v, \mu), \quad b_i(v, \mu) = \sum_{\Gamma_{ir} \subset \partial\Omega_i \setminus \partial\Omega} \int_{\Gamma_{ir}} (\sigma_{ir}\, v|_{\Omega_i}\, \mu)\,dS, \quad \sigma_{ir} = \begin{cases} 1, & i < r, \\ -1, & i > r. \end{cases} \tag{2.6}$$
Remark 2.1. When $\Omega_i$ is an internal subdomain and $\varepsilon = 0$, the bilinear form $\tilde{a}_i(\cdot,\cdot)$, which is symmetric positive semi-definite, can be regularized and transformed into a symmetric positive definite (SPD) form (see [30]). Therefore, we always assume that $\tilde{a}_i(\cdot,\cdot)$ ($i = 1,\ldots,N$) are coercive on $X_h$.

The Schur complement system of (2.4) is
$$S\lambda = g, \tag{2.7}$$
where $S = \bar{B}\bar{A}^{-1}\bar{B}^T$ and $g = \bar{B}\bar{A}^{-1}f$; here the operators $\bar{A} : X_h \to X_h$ and $\bar{B} : X_h \to M_h$ are such that
$$(\bar{A}u, v) = \tilde{a}(u, v), \ \forall u, v \in X_h \quad \text{and} \quad (\bar{B}v, \lambda) = b(v, \lambda), \ \forall v \in X_h,\ \lambda \in M_h.$$
In order to discuss the adaptive BDDC preconditioner in variational form for solving the mortar discretization (2.4), restricted to the coupling of $P_s$ Lagrange finite elements, we need to derive the corresponding variational problem of the scalar Schur complement system (2.7) on a vector-valued function space. For any $\mathbf{u} = (u, \lambda)$, $\mathbf{v} = (v, q) \in V_h$, we introduce a bilinear form
$$A(\mathbf{u}, \mathbf{v}) = \sum_{i=1}^N A_i(\mathbf{u}, \mathbf{v}), \quad \text{where } A_i(\mathbf{u}, \mathbf{v}) = \tilde{a}_i(u, v) + b_i(v, \lambda) + b_i(u, q) \ \text{for } 1 \le i \le N; \tag{2.8}$$
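The elimination behind (2.7) is easy to reproduce in a small algebraic analogue. The sketch below uses stand-in matrices (a random SPD $\bar{A}$ and a random full-rank $\bar{B}$, not an actual mortar assembly) and cross-checks the Schur complement solve against the full saddle-point solve of the form (2.4):

```python
import numpy as np

# Algebraic sketch of (2.7): eliminate u from a saddle-point system of the
# form (2.4) to obtain S*lam = g. A_bar and B_bar are stand-in matrices.
rng = np.random.default_rng(1)
n_u, n_lam = 8, 3
M = rng.standard_normal((n_u, n_u))
A_bar = M @ M.T + n_u * np.eye(n_u)         # SPD, plays the role of a~(.,.)
B_bar = rng.standard_normal((n_lam, n_u))   # full rank with probability one
f = rng.standard_normal(n_u)

# Schur complement and right-hand side: S = B A^{-1} B^T, g = B A^{-1} f
S = B_bar @ np.linalg.solve(A_bar, B_bar.T)
g = B_bar @ np.linalg.solve(A_bar, f)
lam = np.linalg.solve(S, g)

# Cross-check against the full solve of [[A, B^T], [B, 0]] [u; lam] = [f; 0]
K = np.block([[A_bar, B_bar.T], [B_bar, np.zeros((n_lam, n_lam))]])
sol = np.linalg.solve(K, np.concatenate([f, np.zeros(n_lam)]))
assert np.allclose(sol[n_u:], lam)
```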
With the bilinear form (2.8), the saddle point problem (2.4) is equivalent to the following variational problem: find $\mathbf{u} \in V_h$ such that
$$A(\mathbf{u}, \mathbf{v}) = \langle \mathbf{f}, \mathbf{v} \rangle_{L^2}, \quad \forall \mathbf{v} \in V_h, \tag{2.9}$$
where
$$\mathbf{f} = (f, 0), \quad \langle \mathbf{f}, \mathbf{v} \rangle_{L^2} = \langle f, v \rangle, \quad \forall \mathbf{v} = (v, q) \in V_h, \tag{2.10}$$
and $\langle \cdot,\cdot \rangle$ is defined in (2.2).

Define the vector-valued function spaces
$$V_I = \oplus_{i=1}^N V_I^{(i)}, \quad V_I^{(i)} = \{(v, 0) : v \in X^{(i)}\}, \quad i = 1,\ldots,N, \tag{2.11}$$
and
$$V_{F_k} = \{(0, \lambda) : \lambda \in M_{F_k}\}, \quad k = 1,\ldots,M. \tag{2.12}$$
For any given subdomain $\Omega_i$, let
$$V^{(i)} = (\oplus_{k \in \mathcal{M}_i} V_{F_k}) \oplus V_I^{(i)}, \tag{2.13}$$
where $\mathcal{M}_i$ is defined in (2.3).

For any interface $F_k$, using the multiplier basis functions $\{\phi_l^k\}_{l=1}^{n_k}$ of $M_{F_k}$, where $n_k = \dim(M_{F_k})$, we can define a function vector
$$\Phi^k = (\boldsymbol{\phi}_1^k, \ldots, \boldsymbol{\phi}_{n_k}^k)^T, \tag{2.14}$$
where the $l$th vector-valued function $\boldsymbol{\phi}_l^k = (\varphi_l^k, \phi_l^k) \in V_I \oplus V_{F_k}$ satisfies
$$A(\boldsymbol{\phi}_l^k, \mathbf{v}) = 0, \quad \forall \mathbf{v} \in V_I. \tag{2.15}$$
Utilizing the vector $\Phi^k$ defined in (2.14), we can define the vector-valued function spaces
$$\hat{W} = \oplus_{k=1}^M W_k, \quad \text{where } W_k = \mathrm{span}\{\boldsymbol{\phi}_1^k, \ldots, \boldsymbol{\phi}_{n_k}^k\}. \tag{2.16}$$
It can be proved that the restriction of $-A(\cdot,\cdot)$, with $A(\cdot,\cdot)$ defined in (2.8), to $\hat{W}$ is SPD. Since the BDDC algorithms were originally designed for SPD systems, for ease of exposition we redefine
$$A(\cdot,\cdot) := -A(\cdot,\cdot), \quad \mathbf{f} := -\mathbf{f}, \quad A_i(\cdot,\cdot) := -A_i(\cdot,\cdot), \quad 1 \le i \le N. \tag{2.17}$$
Then the variational form of the Schur complement system for (2.9) can be expressed as: find $\hat{\mathbf{w}} = (w, \lambda) \in \hat{W}$ such that
$$A(\hat{\mathbf{w}}, \hat{\mathbf{v}}) = \langle \mathbf{f}, \hat{\mathbf{v}} \rangle_{L^2}, \quad \forall \hat{\mathbf{v}} \in \hat{W}. \tag{2.18}$$
Obviously, the second component of the solution of the above variational problem, i.e. $\lambda$, is also the solution of the Schur complement system (2.7).

Let $\hat{S} : \hat{W} \to \hat{W}$ be the Schur complement operator defined by
$$(\hat{S}\hat{\mathbf{u}}, \hat{\mathbf{v}}) = A(\hat{\mathbf{u}}, \hat{\mathbf{v}}), \quad \forall \hat{\mathbf{u}}, \hat{\mathbf{v}} \in \hat{W}. \tag{2.19}$$
We can rewrite (2.18) as
$$\hat{S}\hat{\mathbf{w}} = Q_{\hat{W}}\mathbf{f}, \tag{2.20}$$
where $Q_{\hat{W}} : (L^2(\Omega), L^2(\Gamma)) \to \hat{W}$ is the $L^2$ projection operator.

The BDDC preconditioner developed in recent years is an effective preconditioning technique for solving the Schur complement system [6]. The action of this preconditioner can be split into several independent local problems and one global coarse problem, and the residual is updated through a weighted average of these solutions. The local and global problems are obtained by dividing the interface unknowns into dual and primal unknowns, whose corresponding basis functions are called dual and primal basis functions, respectively. The primal basis functions are energy-minimizing basis functions associated with the primal unknowns. The adaptive BDDC method is an advanced BDDC method using a transformation of basis, in which the selection of the primal unknowns depends adaptively on a given tolerance; these primal unknowns are usually selected via generalized eigenvalue problems associated with the local problems [15]. It is worth pointing out that the variational form is as popular as the matrix representation for describing and analyzing BDDC methods [8,29,15,22]. In this paper, we construct and analyze our adaptive BDDC algorithm in variational form.

In the following sections, we extend the adaptive BDDC algorithm to the Schur complement system (2.20) of the mortar discretization. We emphasize that the mortar method considered in this paper involves no continuity constraints at subdomain vertices, which simplifies our algorithm considerably compared with conforming Galerkin approximations.

3. Some auxiliary spaces and a transformation of basis

The main feature of the adaptive BDDC method is the construction of a set of adaptive primal bases that accommodates the local structure of the coefficients in the model problem (2.1). This can be achieved by introducing some auxiliary spaces and constructing a new set of basis functions of the space $\hat{W}$ (or the spaces $\{W_k\}$). In this section, we give a detailed description of the definition of these auxiliary spaces and the construction of the new bases. Additionally, we introduce a partially coupled space for our adaptive BDDC preconditioner and make some other preparations for the theoretical analysis.

3.1. Auxiliary spaces
We always assume that $F_k$ is the interface shared by $\Omega_i$ and $\Omega_j$. For $\nu = i, j$, let $V_I^{(\nu)}$, $V_{F_k}$ and $V^{(\nu)}$ be the spaces defined in (2.11), (2.12) and (2.13), respectively. Using the basis functions $\{\phi_l^k\}_{l=1}^{n_k}$ of $M_{F_k}$, the vectors
$$\Phi^{k,\nu} = (\boldsymbol{\phi}_1^{k,\nu}, \ldots, \boldsymbol{\phi}_{n_k}^{k,\nu})^T \quad \text{and} \quad \bar{\Phi}^{k,\nu} = (\bar{\boldsymbol{\phi}}_1^{k,\nu}, \ldots, \bar{\boldsymbol{\phi}}_{n_k}^{k,\nu})^T \tag{3.1}$$
can be defined similarly to $\Phi^k$ in (2.14), where $\boldsymbol{\phi}_l^{k,\nu} = (\varphi_l^{k,\nu}, \phi_l^k) \in V_I^{(\nu)} \oplus V_{F_k}$ and $\bar{\boldsymbol{\phi}}_l^{k,\nu} = (\bar{\varphi}_l^{k,\nu}, \bar{\psi}_l^{k,\nu}) \in V^{(\nu)}$ satisfy
$$A_\nu(\boldsymbol{\phi}_l^{k,\nu}, \mathbf{v}) = 0, \quad \forall \mathbf{v} \in V_I^{(\nu)}, \quad l = 1,\ldots,n_k, \tag{3.2}$$
and
$$\begin{cases} A_\nu(\bar{\boldsymbol{\phi}}_l^{k,\nu}, \mathbf{v}) = 0, & \forall \mathbf{v} \in V^{(\nu)} \setminus V_{F_k}, \\ \bar{\psi}_l^{k,\nu}|_{F_k} = \phi_l^k, \end{cases} \quad l = 1,\ldots,n_k. \tag{3.3}$$
Define the auxiliary vector-valued function spaces ($\nu = i, j$)
$$W_k^{(\nu)} = \mathrm{span}\{\boldsymbol{\phi}_1^{k,\nu}, \ldots, \boldsymbol{\phi}_{n_k}^{k,\nu}\}, \quad \bar{W}_k^{(\nu)} = \mathrm{span}\{\bar{\boldsymbol{\phi}}_1^{k,\nu}, \ldots, \bar{\boldsymbol{\phi}}_{n_k}^{k,\nu}\}, \tag{3.4}$$
$$Z_k^{(\nu)} = \mathrm{span}\{\bar{\boldsymbol{\phi}}_1^{k,\nu} - \boldsymbol{\phi}_1^{k,\nu}, \ldots, \bar{\boldsymbol{\phi}}_{n_k}^{k,\nu} - \boldsymbol{\phi}_{n_k}^{k,\nu}\}. \tag{3.5}$$
Using (3.1), (3.2) and (3.3), we obtain the following orthogonality relations:
$$A_\nu(\mathbf{w}, \mathbf{v}) = 0, \quad \forall \mathbf{w} \in \bar{W}_k^{(\nu)},\ \mathbf{v} \in U, \quad \nu = i, j, \tag{3.6}$$
where $U = Z_k^{(\nu)}$ or $W_m^{(\nu)}$, $m \in \mathcal{M}_\nu$ and $m \neq k$.

3.2. A transformation of basis

In order to give a new set of basis functions for the spaces $W_k$ ($k = 1,\ldots,M$), we first introduce the scaling operators on each interface $F_k$ ($k = 1,\ldots,M$).

Let $D_{F_k}^{(\nu)} : U \to U$ ($\nu = i, j$) be the scaling operator, where $U = W_k$, $W_k^{(i)}$ or $W_k^{(j)}$, such that for all $\mathbf{w} = \vec{w}^T \Psi$ with $\vec{w} \in \mathbb{R}^{n_k}$ and $\Psi = \Phi^k$, $\Phi^{k,i}$ or $\Phi^{k,j}$, we have
$$D_{F_k}^{(\nu)} \mathbf{w} = \vec{w}^T (\vec{D}_{F_k}^{(\nu)})^T \Psi, \tag{3.7}$$
where the $n_k \times n_k$ scaling matrix $\vec{D}_{F_k}^{(\nu)}$ is nonsingular, and
$$D_{F_k}^{(i)} + D_{F_k}^{(j)} = I, \quad \text{where } I \text{ is the identity operator.} \tag{3.8}$$
Two of the most frequently used formulas for the scaling matrices $\vec{D}_{F_k}^{(\nu)}$ ($\nu = i, j$) are (see [16,31])
$$\vec{D}_{F_k}^{(i)} = \tfrac{1}{2}\vec{I}, \quad \vec{D}_{F_k}^{(j)} = \tfrac{1}{2}\vec{I}, \tag{3.9}$$
and
$$\vec{D}_{F_k}^{(i)} = (\vec{S}_{F_k}^{(i)} + \vec{S}_{F_k}^{(j)})^{-1}\vec{S}_{F_k}^{(i)}, \quad \vec{D}_{F_k}^{(j)} = (\vec{S}_{F_k}^{(i)} + \vec{S}_{F_k}^{(j)})^{-1}\vec{S}_{F_k}^{(j)}, \tag{3.10}$$
where $\vec{I}$ denotes the $n_k \times n_k$ identity matrix, and
$$\vec{S}_{F_k}^{(\nu)} = (a_{l,m}^{(\nu)})_{n_k \times n_k}, \quad a_{l,m}^{(\nu)} = A_\nu(\boldsymbol{\phi}_m^{k,\nu}, \boldsymbol{\phi}_l^{k,\nu}), \quad l, m = 1,\ldots,n_k, \quad \nu = i, j. \tag{3.11}$$
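As a quick numerical check, both choices of scaling matrices in (3.9) and (3.10) satisfy the partition-of-unity property (3.8). The sketch below uses made-up SPD matrices in the roles of $\vec{S}_{F_k}^{(i)}$ and $\vec{S}_{F_k}^{(j)}$:

```python
import numpy as np

# Sketch of the scaling matrices (3.9) and (3.10) for one interface F_k,
# using stand-in SPD matrices S_i, S_j in the roles of (3.11).
rng = np.random.default_rng(2)
nk = 4
Mi = rng.standard_normal((nk, nk)); S_i = Mi @ Mi.T + nk * np.eye(nk)
Mj = rng.standard_normal((nk, nk)); S_j = Mj @ Mj.T + nk * np.eye(nk)

# (3.9): each side gets weight 1/2
D_i_mult = 0.5 * np.eye(nk)
D_j_mult = 0.5 * np.eye(nk)

# (3.10): D_i = (S_i + S_j)^{-1} S_i, D_j = (S_i + S_j)^{-1} S_j
D_i_del = np.linalg.solve(S_i + S_j, S_i)
D_j_del = np.linalg.solve(S_i + S_j, S_j)

# Both choices satisfy the partition-of-unity property (3.8): D_i + D_j = I
assert np.allclose(D_i_mult + D_j_mult, np.eye(nk))
assert np.allclose(D_i_del + D_j_del, np.eye(nk))
```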
The matrices defined in (3.9) and (3.10) are usually called multiplicity scaling matrices and deluxe scaling matrices, respectively.

Using the function spaces $W_k$, $W_k^{(\nu)}$ and $\bar{W}_k^{(\nu)}$ ($\nu = i, j$) defined in (2.16) and (3.4), and the scaling operators defined above, we now define a transformation operator, which is of great significance for giving a new set of basis functions of $W_k$ and is also crucial for selecting a set of primal bases.

For any given positive real number $\Theta$, we define a linear transformation operator $T_{F_k} : U \to U$ ($U = W_k$, $W_k^{(\nu)}$ or $\bar{W}_k^{(\nu)}$, $\nu = i, j$) such that for each $\mathbf{w} = \vec{w}^T \Psi$ with $\vec{w} \in \mathbb{R}^{n_k}$ and $\Psi = \Phi^k$, $\Phi^{k,\nu}$ or $\bar{\Phi}^{k,\nu}$ ($\nu = i, j$), we have
$$T_{F_k}\mathbf{w} = \vec{w}^T (\vec{T}_{F_k})^T \Psi, \tag{3.12}$$
where the $n_k \times n_k$ transformation matrix $\vec{T}_{F_k}$ is nonsingular and satisfies the following condition: there exist two integers $n_\Delta^k, n_\Pi^k \in [0, n_k]$ with $n_\Delta^k + n_\Pi^k = n_k$ such that for any given $\vec{w}_\Delta = (w_1, \ldots, w_{n_\Delta^k})^T \in \mathbb{R}^{n_\Delta^k}$ and $\vec{w}_\Pi = (v_1, \ldots, v_{n_\Pi^k})^T \in \mathbb{R}^{n_\Pi^k}$, the following inequality holds:
$$A_i(D_{F_k}^{(j)}\mathbf{w}_{k,\Delta}^{(i)}, D_{F_k}^{(j)}\mathbf{w}_{k,\Delta}^{(i)}) + A_j(D_{F_k}^{(i)}\mathbf{w}_{k,\Delta}^{(j)}, D_{F_k}^{(i)}\mathbf{w}_{k,\Delta}^{(j)}) \le \Theta\, A_i(\bar{\mathbf{w}}_{k,\Delta}^{(i)} + \bar{\mathbf{w}}_{k,\Pi}^{(i)},\ \bar{\mathbf{w}}_{k,\Delta}^{(i)} + \bar{\mathbf{w}}_{k,\Pi}^{(i)}), \tag{3.13}$$
where
$$\bar{\mathbf{w}}_{k,\Delta}^{(i)} = \sum_{l=1}^{n_\Delta^k} w_l T_{F_k}\bar{\boldsymbol{\phi}}_l^{k,i}, \quad \bar{\mathbf{w}}_{k,\Pi}^{(i)} = \sum_{l=1}^{n_\Pi^k} v_l T_{F_k}\bar{\boldsymbol{\phi}}_{n_\Delta^k + l}^{k,i}, \quad \mathbf{w}_{k,\Delta}^{(\nu)} = \sum_{l=1}^{n_\Delta^k} w_l T_{F_k}\boldsymbol{\phi}_l^{k,\nu}, \quad \nu = i, j. \tag{3.14}$$
We now give a way to construct the linear operator $T_{F_k}$. Using the bilinear forms $A_\nu(\cdot,\cdot)$ defined in (2.8) and the basis functions $\{\bar{\boldsymbol{\phi}}_l^{k,\nu}\}_{l=1}^{n_k}$ defined in (3.1), we define two matrices via
$$\vec{\bar{S}}_{F_k}^{(\nu)} = (b_{l,m}^{(\nu)})_{n_k \times n_k}, \quad b_{l,m}^{(\nu)} = A_\nu(\bar{\boldsymbol{\phi}}_m^{k,\nu}, \bar{\boldsymbol{\phi}}_l^{k,\nu}), \quad l, m = 1,\ldots,n_k, \quad \nu = i, j, \tag{3.15}$$
and their parallel sum (see [28])
$$\vec{\bar{S}}_{F_k}^{(i)} : \vec{\bar{S}}_{F_k}^{(j)} = \vec{\bar{S}}_{F_k}^{(i)}(\vec{\bar{S}}_{F_k}^{(i)} + \vec{\bar{S}}_{F_k}^{(j)})^{\dagger}\vec{\bar{S}}_{F_k}^{(j)},$$
where $(\vec{\bar{S}}_{F_k}^{(i)} + \vec{\bar{S}}_{F_k}^{(j)})^{\dagger}$ is a pseudo-inverse of the matrix $\vec{\bar{S}}_{F_k}^{(i)} + \vec{\bar{S}}_{F_k}^{(j)}$.

Since $\vec{\bar{S}}_{F_k}^{(\nu)}$ ($\nu = i, j$) are both SPD, $\vec{\bar{S}}_{F_k}^{(i)} : \vec{\bar{S}}_{F_k}^{(j)}$ is also SPD and satisfies
$$\vec{\bar{S}}_{F_k}^{(i)} : \vec{\bar{S}}_{F_k}^{(j)} \le \vec{\bar{S}}_{F_k}^{(\nu)}, \quad \nu = i, j. \tag{3.16}$$
We introduce a generalized eigenvalue problem (see [17,20,24,25])
$$\Big((\vec{D}_{F_k}^{(i)})^T \vec{S}_{F_k}^{(j)} \vec{D}_{F_k}^{(i)} + (\vec{D}_{F_k}^{(j)})^T \vec{S}_{F_k}^{(i)} \vec{D}_{F_k}^{(j)}\Big)\vec{v} = \lambda\, \vec{\bar{S}}_{F_k}^{(i)} : \vec{\bar{S}}_{F_k}^{(j)}\, \vec{v}, \tag{3.17}$$
where $\vec{v} \in \mathbb{R}^{n_k}$, $\vec{D}_{F_k}^{(\nu)}$ ($\nu = i, j$) are the scaling matrices, and $\vec{S}_{F_k}^{(\nu)}$ ($\nu = i, j$) are defined in (3.11). Let
$$\lambda_1 \le \lambda_2 \le \cdots \le \lambda_{n_\Delta^k} \le \Theta \le \lambda_{n_\Delta^k + 1} \le \cdots \le \lambda_{n_k} \tag{3.18}$$
be the eigenvalues of (3.17), where $n_\Delta^k$ is a non-negative integer, and $\Theta$ is the given tolerance in (3.13). Denote the $n_k \times n_k$ transformation matrix
$$\vec{T}_{F_k} = (\vec{T}_\Delta^{F_k}, \vec{T}_\Pi^{F_k}), \tag{3.19}$$
where
$$\vec{T}_\Delta^{F_k} := (\vec{v}_1, \ldots, \vec{v}_{n_\Delta^k}), \quad \vec{T}_\Pi^{F_k} := (\vec{v}_{n_\Delta^k + 1}, \ldots, \vec{v}_{n_k}), \tag{3.20}$$
and $\vec{v}_l$ ($l = 1,\ldots,n_k$) are the generalized eigenvectors of (3.17) corresponding to $\lambda_l$, which satisfy
$$\vec{v}_l^T \Big((\vec{D}_{F_k}^{(i)})^T \vec{S}_{F_k}^{(j)} \vec{D}_{F_k}^{(i)} + (\vec{D}_{F_k}^{(j)})^T \vec{S}_{F_k}^{(i)} \vec{D}_{F_k}^{(j)}\Big)\vec{v}_m = \vec{v}_l^T\, \vec{\bar{S}}_{F_k}^{(i)} : \vec{\bar{S}}_{F_k}^{(j)}\, \vec{v}_m = 0, \quad \text{if } l \neq m. \tag{3.21}$$
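The construction (3.17)-(3.20) can be sketched numerically. The block below is a hedged illustration with made-up SPD matrices standing in for (3.11) and (3.15); it solves the generalized eigenproblem with the parallel sum on the right-hand side and splits the eigenvectors by the tolerance $\Theta$:

```python
import numpy as np
from scipy.linalg import eigh

# Sketch of the adaptive selection (3.17)-(3.20) with stand-in SPD matrices.
def spd(n, seed):
    M = np.random.default_rng(seed).standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

nk, Theta = 6, 10.0
S_i, S_j = spd(nk, 10), spd(nk, 11)      # roles of (3.11)
Sb_i, Sb_j = spd(nk, 12), spd(nk, 13)    # roles of (3.15)

D_i = np.linalg.solve(S_i + S_j, S_i)    # deluxe scaling (3.10)
D_j = np.linalg.solve(S_i + S_j, S_j)

# Parallel sum Sb_i : Sb_j; both matrices are SPD here, so a plain solve
# replaces the pseudo-inverse.
par = Sb_i @ np.linalg.solve(Sb_i + Sb_j, Sb_j)
par = 0.5 * (par + par.T)                # symmetrize against round-off
# The parallel sum is below each summand in the Loewner order, cf. (3.16)
assert np.all(np.linalg.eigvalsh(Sb_i - par) > -1e-8)

lhs = D_i.T @ S_j @ D_i + D_j.T @ S_i @ D_j   # left-hand side of (3.17)
lhs = 0.5 * (lhs + lhs.T)

evals, evecs = eigh(lhs, par)            # ascending generalized eigenvalues
n_delta = int(np.sum(evals <= Theta))
# Dual / primal eigenvector blocks, cf. (3.20)
T_delta, T_pi = evecs[:, :n_delta], evecs[:, n_delta:]
assert T_delta.shape[1] + T_pi.shape[1] == nk
```

The eigenvectors returned by `scipy.linalg.eigh` are orthogonal with respect to the right-hand side matrix, which is exactly the orthogonality used in (3.21).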
Lemma 3.1. The operator $T_{F_k}$ defined in (3.12) and (3.19) satisfies (3.13).

Proof. For the special choice $\mathbf{w} = \boldsymbol{\phi}_l^{k,\nu}$ ($l = 1,\ldots,n_\Delta^k$) in (3.12), it is easy to see that $\vec{w} = (\delta_{l,1}, \ldots, \delta_{l,n_k})^T$, where $\delta_{l,m}$ ($m = 1,\ldots,n_k$) is the Kronecker delta. From this, together with (3.19) and (3.20), it follows that
$$(T_{F_k}\boldsymbol{\phi}_1^{k,\nu}, \ldots, T_{F_k}\boldsymbol{\phi}_{n_\Delta^k}^{k,\nu})^T = (\vec{T}_\Delta^{F_k})^T \Phi^{k,\nu}, \quad \nu = i, j. \tag{3.22}$$
Similarly,
$$(T_{F_k}\bar{\boldsymbol{\phi}}_1^{k,i}, \ldots, T_{F_k}\bar{\boldsymbol{\phi}}_{n_\Delta^k}^{k,i})^T = (\vec{T}_\Delta^{F_k})^T \bar{\Phi}^{k,i}, \quad (T_{F_k}\bar{\boldsymbol{\phi}}_{n_\Delta^k+1}^{k,i}, \ldots, T_{F_k}\bar{\boldsymbol{\phi}}_{n_k}^{k,i})^T = (\vec{T}_\Pi^{F_k})^T \bar{\Phi}^{k,i}. \tag{3.23}$$
Then, we can rewrite the functions in (3.14) as
$$\bar{\mathbf{w}}_{k,\Delta}^{(i)} = \vec{w}_\Delta^T (\vec{T}_\Delta^{F_k})^T \bar{\Phi}^{k,i}, \quad \bar{\mathbf{w}}_{k,\Pi}^{(i)} = \vec{w}_\Pi^T (\vec{T}_\Pi^{F_k})^T \bar{\Phi}^{k,i}, \quad \mathbf{w}_{k,\Delta}^{(\nu)} = \vec{w}_\Delta^T (\vec{T}_\Delta^{F_k})^T \Phi^{k,\nu}, \quad \nu = i, j. \tag{3.24}$$
With these preparations, inequality (3.13) can be proved as follows:
$$\begin{aligned}
& A_i(D_{F_k}^{(j)}\mathbf{w}_{k,\Delta}^{(i)}, D_{F_k}^{(j)}\mathbf{w}_{k,\Delta}^{(i)}) + A_j(D_{F_k}^{(i)}\mathbf{w}_{k,\Delta}^{(j)}, D_{F_k}^{(i)}\mathbf{w}_{k,\Delta}^{(j)}) \\
&\quad = A_i\big(\vec{w}_\Delta^T (\vec{T}_\Delta^{F_k})^T (\vec{D}_{F_k}^{(j)})^T \Phi^{k,i},\ \vec{w}_\Delta^T (\vec{T}_\Delta^{F_k})^T (\vec{D}_{F_k}^{(j)})^T \Phi^{k,i}\big) + A_j\big(\vec{w}_\Delta^T (\vec{T}_\Delta^{F_k})^T (\vec{D}_{F_k}^{(i)})^T \Phi^{k,j},\ \vec{w}_\Delta^T (\vec{T}_\Delta^{F_k})^T (\vec{D}_{F_k}^{(i)})^T \Phi^{k,j}\big) \\
&\quad = \vec{w}_\Delta^T (\vec{T}_\Delta^{F_k})^T (\vec{D}_{F_k}^{(j)})^T A_i(\Phi^{k,i}, (\Phi^{k,i})^T)\, \vec{D}_{F_k}^{(j)} \vec{T}_\Delta^{F_k} \vec{w}_\Delta + \vec{w}_\Delta^T (\vec{T}_\Delta^{F_k})^T (\vec{D}_{F_k}^{(i)})^T A_j(\Phi^{k,j}, (\Phi^{k,j})^T)\, \vec{D}_{F_k}^{(i)} \vec{T}_\Delta^{F_k} \vec{w}_\Delta \\
&\quad = \vec{w}_\Delta^T (\vec{T}_\Delta^{F_k})^T \big((\vec{D}_{F_k}^{(j)})^T \vec{S}_{F_k}^{(i)} \vec{D}_{F_k}^{(j)} + (\vec{D}_{F_k}^{(i)})^T \vec{S}_{F_k}^{(j)} \vec{D}_{F_k}^{(i)}\big) \vec{T}_\Delta^{F_k} \vec{w}_\Delta \\
&\quad \le \Theta\, \vec{w}_\Delta^T (\vec{T}_\Delta^{F_k})^T (\vec{\bar{S}}_{F_k}^{(i)} : \vec{\bar{S}}_{F_k}^{(j)}) \vec{T}_\Delta^{F_k} \vec{w}_\Delta \\
&\quad \le \Theta\, (\vec{T}_\Delta^{F_k}\vec{w}_\Delta + \vec{T}_\Pi^{F_k}\vec{w}_\Pi)^T (\vec{\bar{S}}_{F_k}^{(i)} : \vec{\bar{S}}_{F_k}^{(j)}) (\vec{T}_\Delta^{F_k}\vec{w}_\Delta + \vec{T}_\Pi^{F_k}\vec{w}_\Pi) \\
&\quad \le \Theta\, (\vec{T}_\Delta^{F_k}\vec{w}_\Delta + \vec{T}_\Pi^{F_k}\vec{w}_\Pi)^T\, \vec{\bar{S}}_{F_k}^{(i)}\, (\vec{T}_\Delta^{F_k}\vec{w}_\Delta + \vec{T}_\Pi^{F_k}\vec{w}_\Pi) \\
&\quad = \Theta\, (\vec{T}_\Delta^{F_k}\vec{w}_\Delta + \vec{T}_\Pi^{F_k}\vec{w}_\Pi)^T A_i(\bar{\Phi}^{k,i}, (\bar{\Phi}^{k,i})^T)(\vec{T}_\Delta^{F_k}\vec{w}_\Delta + \vec{T}_\Pi^{F_k}\vec{w}_\Pi) \\
&\quad = \Theta\, A_i\big((\vec{T}_\Delta^{F_k}\vec{w}_\Delta + \vec{T}_\Pi^{F_k}\vec{w}_\Pi)^T \bar{\Phi}^{k,i},\ ((\vec{T}_\Delta^{F_k}\vec{w}_\Delta + \vec{T}_\Pi^{F_k}\vec{w}_\Pi)^T \bar{\Phi}^{k,i})^T\big) \\
&\quad = \Theta\, A_i(\bar{\mathbf{w}}_{k,\Delta}^{(i)} + \bar{\mathbf{w}}_{k,\Pi}^{(i)},\ \bar{\mathbf{w}}_{k,\Delta}^{(i)} + \bar{\mathbf{w}}_{k,\Pi}^{(i)}),
\end{aligned}$$
where the first equality uses (3.7) and (3.24); the second uses (1.4); the third uses (1.3) and (3.11); the first and second inequalities use (3.17), (3.18), (3.21) and the fact that $\vec{\bar{S}}_{F_k}^{(i)} : \vec{\bar{S}}_{F_k}^{(j)}$ is SPD; the last inequality uses (3.16); and the final three equalities follow from (1.3), (3.15), (1.4) and (3.24). This completes the proof of (3.13). □

Using the linear operator $T_{F_k}$ defined in (3.12) and (3.13), we can obtain a new set of basis functions of $W_k$ as follows:
$$\{\hat{\boldsymbol{\phi}}_l^k := T_{F_k}\boldsymbol{\phi}_l^k : 1 \le l \le n_k\}. \tag{3.25}$$
Based on the basis functions described above, we can decompose the space $W_k$ as
$$W_k = W_{k,\Delta} \oplus W_{k,\Pi}, \tag{3.26}$$
where
$$W_{k,\Delta} = \mathrm{span}\{\hat{\boldsymbol{\phi}}_1^k, \ldots, \hat{\boldsymbol{\phi}}_{n_\Delta^k}^k\} \quad \text{and} \quad W_{k,\Pi} = \mathrm{span}\{\hat{\boldsymbol{\phi}}_{n_\Delta^k+1}^k, \ldots, \hat{\boldsymbol{\phi}}_{n_k}^k\}. \tag{3.27}$$
Then, using (2.16) and (3.26), a decomposition of the space $\hat{W}$ can be obtained as
$$\hat{W} = \oplus_{k=1}^M W_{k,\Delta} \oplus W_\Pi, \tag{3.28}$$
where the coarse-level, primal variable space is
$$W_\Pi = \oplus_{k=1}^M W_{k,\Pi}. \tag{3.29}$$
Since the adaptive BDDC preconditioner is built on a partially coupled space, in which the unknowns are strongly coupled at the primal unknowns and decoupled at the dual unknowns, we introduce this space next.

3.3. A partially coupled space
Similarly, using the linear operator $T_{F_k}$, we can obtain a new set of basis functions for the auxiliary spaces $W_k^{(\nu)}$ ($\nu = i, j$) defined in (3.4):
$$\{\tilde{\boldsymbol{\phi}}_l^{k,\nu} := T_{F_k}\boldsymbol{\phi}_l^{k,\nu} : 1 \le l \le n_k\}. \tag{3.30}$$
Then, we decompose $W_k^{(\nu)}$ into
$$W_k^{(\nu)} = W_{k,\Delta}^{(\nu)} \oplus W_{k,\Pi}^{(\nu)}, \tag{3.31}$$
where
$$W_{k,\Delta}^{(\nu)} = \mathrm{span}\{\tilde{\boldsymbol{\phi}}_1^{k,\nu}, \ldots, \tilde{\boldsymbol{\phi}}_{n_\Delta^k}^{k,\nu}\}, \quad W_{k,\Pi}^{(\nu)} = \mathrm{span}\{\tilde{\boldsymbol{\phi}}_{n_\Delta^k+1}^{k,\nu}, \ldots, \tilde{\boldsymbol{\phi}}_{n_k}^{k,\nu}\}. \tag{3.32}$$
Using $W_\Pi$ defined in (3.29) and $W_{k,\Delta}^{(i)}$ ($k \in \mathcal{M}_i$, $1 \le i \le N$) defined in (3.32), we arrive at a partially coupled space
$$\tilde{W} = \tilde{W}_\Delta \oplus W_\Pi, \tag{3.33}$$
where
$$\tilde{W}_\Delta = \oplus_{i=1}^N W_\Delta^{(i)}, \quad W_\Delta^{(i)} = \oplus_{k \in \mathcal{M}_i} W_{k,\Delta}^{(i)}, \quad i = 1,\ldots,N. \tag{3.34}$$
In order to estimate the condition number bound of the preconditioned system, we will introduce a new set of basis functions of the auxiliary spaces $\bar{W}_k^{(\nu)}$ ($\nu = i, j$) and then give a decomposition of them.
3.4. A new basis of $\bar{W}_k^{(\nu)}$ and its decomposition

Using the linear operator $T_{F_k}$, a new set of basis functions for $\bar{W}_k^{(\nu)}$ ($\nu = i, j$) can be expressed as
$$\{\tilde{\bar{\boldsymbol{\phi}}}_l^{k,\nu} := T_{F_k}\bar{\boldsymbol{\phi}}_l^{k,\nu} : 1 \le l \le n_k\}. \tag{3.35}$$
Then $\bar{W}_k^{(\nu)}$ can be decomposed as
$$\bar{W}_k^{(\nu)} = \bar{W}_{k,\Delta}^{(\nu)} \oplus \bar{W}_{k,\Pi}^{(\nu)}, \tag{3.36}$$
where
$$\bar{W}_{k,\Delta}^{(\nu)} = \mathrm{span}\{\tilde{\bar{\boldsymbol{\phi}}}_1^{k,\nu}, \ldots, \tilde{\bar{\boldsymbol{\phi}}}_{n_\Delta^k}^{k,\nu}\}, \quad \bar{W}_{k,\Pi}^{(\nu)} = \mathrm{span}\{\tilde{\bar{\boldsymbol{\phi}}}_{n_\Delta^k+1}^{k,\nu}, \ldots, \tilde{\bar{\boldsymbol{\phi}}}_{n_k}^{k,\nu}\}. \tag{3.37}$$
In the next section, we present the adaptive BDDC preconditioner for solving the Schur complement system (2.20) by using the above spaces and decompositions, and derive the operator form of the preconditioned system.

4. Adaptive BDDC preconditioner
Firstly, we introduce a bilinear form and the corresponding SPD operator on $\tilde{W}$.

For any given subdomain $\Omega_i$ ($i = 1,\ldots,N$) and its interface $F_k$ ($k \in \mathcal{M}_i$), let $T_k^{(i)} : W_k \to W_k^{(i)}$ and $T_{(i)}^k : W_k^{(i)} \to W_k$ be the linear basis transformation operators such that
$$T_k^{(i)}\hat{\boldsymbol{\phi}}_l^k = \tilde{\boldsymbol{\phi}}_l^{k,i}, \quad T_{(i)}^k\tilde{\boldsymbol{\phi}}_l^{k,i} = \hat{\boldsymbol{\phi}}_l^k, \quad l = 1,\ldots,n_k, \tag{4.1}$$
where the basis functions $\{\hat{\boldsymbol{\phi}}_l^k\}_{l=1}^{n_k}$ and $\{\tilde{\boldsymbol{\phi}}_l^{k,i}\}_{l=1}^{n_k}$ are defined in (3.25) and (3.30), respectively.

For any given $\tilde{\mathbf{u}}, \tilde{\mathbf{v}} \in \tilde{W}$, by the definition (3.33) of $\tilde{W}$, we have
$$\tilde{\boldsymbol{\zeta}} = \sum_{i=1}^N \sum_{k \in \mathcal{M}_i} \tilde{\boldsymbol{\zeta}}_{k,\Delta}^{(i)} + \sum_{k=1}^M \tilde{\boldsymbol{\zeta}}_{k,\Pi}, \quad \tilde{\boldsymbol{\zeta}} = \tilde{\mathbf{u}}, \tilde{\mathbf{v}}, \tag{4.2}$$
where $\tilde{\boldsymbol{\zeta}}_{k,\Delta}^{(i)} \in W_{k,\Delta}^{(i)}$ and $\tilde{\boldsymbol{\zeta}}_{k,\Pi} \in W_{k,\Pi}$.

Using (4.1) and (4.2), define a bilinear form on $\tilde{W}$ via
$$\tilde{A}(\tilde{\mathbf{u}}, \tilde{\mathbf{v}}) = \sum_{i=1}^N A_i(\tilde{\mathbf{u}}^{(i)}, \tilde{\mathbf{v}}^{(i)}), \quad \forall \tilde{\mathbf{u}}, \tilde{\mathbf{v}} \in \tilde{W}, \tag{4.3}$$
where
$$\tilde{\boldsymbol{\zeta}}^{(i)} = \sum_{k \in \mathcal{M}_i} (\tilde{\boldsymbol{\zeta}}_{k,\Delta}^{(i)} + T_k^{(i)}\tilde{\boldsymbol{\zeta}}_{k,\Pi}), \quad \tilde{\boldsymbol{\zeta}} = \tilde{\mathbf{u}}, \tilde{\mathbf{v}}. \tag{4.4}$$
Then, an SPD operator $\tilde{S} : \tilde{W} \to \tilde{W}$ can be defined as
$$(\tilde{S}\tilde{\mathbf{u}}, \tilde{\mathbf{v}}) = \tilde{A}(\tilde{\mathbf{u}}, \tilde{\mathbf{v}}), \quad \forall \tilde{\mathbf{u}}, \tilde{\mathbf{v}} \in \tilde{W}. \tag{4.5}$$
To derive the algorithm, we also need to introduce some further operators.

For each subdomain $\Omega_i$ ($i = 1,\ldots,N$), using the scaling operators $D_{F_k}^{(i)}$ ($k \in \mathcal{M}_i$) and the linear basis transformation operators $T_{(i)}^k$ ($k \in \mathcal{M}_i$) defined in (3.7) and (4.1), respectively, a linear operator $R_{\Delta,\Gamma}^{(i)} : W_\Delta^{(i)} \to \hat{W}$ can be defined by
$$R_{\Delta,\Gamma}^{(i)} = \sum_{k \in \mathcal{M}_i} D_{F_k}^{(i)} T_{(i)}^k R_{k,\Delta}^{(i)}, \tag{4.6}$$
where $R_{k,\Delta}^{(i)} := R_{W_{k,\Delta}^{(i)}}^{W_\Delta^{(i)}}$ is the restriction operator defined in (1.1) (with $U = W_\Delta^{(i)}$ and $V = W_{k,\Delta}^{(i)}$).

Using (4.6), we can easily verify that
$$R_{\Delta,\Gamma}^{(i)}\mathbf{w} = D_{F_k}^{(i)} T_{(i)}^k \mathbf{w}, \quad \forall \mathbf{w} \in W_{k,\Delta}^{(i)}, \quad k \in \mathcal{M}_i, \quad i = 1,\ldots,N, \tag{4.7}$$
and for any given $\mathbf{w} \in W_{k,\Delta}$ ($k = 1,\ldots,M$), from (4.7), (4.1) and (3.8), we have
$$\mathbf{w} = R_{\Delta,\Gamma}^{(i)} T_k^{(i)}\mathbf{w} + R_{\Delta,\Gamma}^{(j)} T_k^{(j)}\mathbf{w}, \tag{4.8}$$
where $i$ and $j$ are the indices of the subdomains satisfying $F_k = \partial\Omega_i \cap \partial\Omega_j$.

The operator $R_{\Pi,\Gamma} : W_\Pi \to \tilde{W}$ is defined by
$$R_{\Pi,\Gamma} = I_\Pi - \tilde{S}_\Delta^{-1}\tilde{S}I_\Pi, \tag{4.9}$$
where $I_\Pi := I_{W_\Pi}^{\tilde{W}}$ is the interpolation operator defined in (1.2) (with $V = W_\Pi$ and $U = \tilde{W}$), and
$$\tilde{S}_\Delta^{-1} = \sum_{i=1}^N I_\Delta^{(i)}\big((I_\Delta^{(i)})^T \tilde{S} I_\Delta^{(i)}\big)^{-1}(I_\Delta^{(i)})^T, \tag{4.10}$$
where $I_\Delta^{(i)} := I_{W_\Delta^{(i)}}^{\tilde{W}}$ ($i = 1,\ldots,N$) are the interpolation operators defined in (1.2) (with $V = W_\Delta^{(i)}$ and $U = \tilde{W}$).

With the above preparations, we can present the adaptive BDDC preconditioned operator $M_{BDDC}^{-1} : \hat{W} \to \hat{W}$ for solving the Schur complement system (2.20) in variational form as follows.
Algorithm 4.1 (Adaptive BDDC Preconditioner). Given $\mathbf{g} \in \hat{W}$, the action $\mathbf{u}_g = M_{BDDC}^{-1}\mathbf{g} \in \hat{W}$ is defined via the following four steps.

Step 1. Find
$$\mathbf{u}_{\Delta,a} = \sum_{i=1}^N R_{\Delta,\Gamma}^{(i)}\mathbf{u}_a^{\Delta,i} \in \hat{W}, \tag{4.11}$$
where $\mathbf{u}_a^{\Delta,i} \in W_\Delta^{(i)}$ ($i = 1,\ldots,N$) satisfy
$$A_i(\mathbf{u}_a^{\Delta,i}, \mathbf{v}) = ((R_{\Delta,\Gamma}^{(i)})^T\mathbf{g}, \mathbf{v}), \quad \forall \mathbf{v} \in W_\Delta^{(i)}, \tag{4.12}$$
and the operator $R_{\Delta,\Gamma}^{(i)}$ is defined in (4.6).

Step 2. Find $\mathbf{u}_\Pi \in W_\Pi$ such that
$$\tilde{A}(R_{\Pi,\Gamma}\mathbf{u}_\Pi, R_{\Pi,\Gamma}\mathbf{v}) = (\mathbf{g}, \mathbf{v}) - \tilde{A}\Big(\sum_{i=1}^N \mathbf{u}_a^{\Delta,i}, \mathbf{v}\Big), \quad \forall \mathbf{v} \in W_\Pi, \tag{4.13}$$
where the operator $R_{\Pi,\Gamma}$ is defined in (4.9).

Step 3. Find
$$\mathbf{u}_{\Delta,b} = \sum_{i=1}^N R_{\Delta,\Gamma}^{(i)}\mathbf{u}_b^{\Delta,i} \in \hat{W}, \tag{4.14}$$
where $\mathbf{u}_b^{\Delta,i} \in W_\Delta^{(i)}$ ($i = 1,\ldots,N$) satisfy
$$A_i(\mathbf{u}_b^{\Delta,i}, \mathbf{v}) = -A_i(\mathbf{u}_\Pi, \mathbf{v}), \quad \forall \mathbf{v} \in W_\Delta^{(i)}. \tag{4.15}$$
Step 4. Compute
$$\mathbf{u}_g = \mathbf{u}_{\Delta,a} + \mathbf{u}_\Pi + \mathbf{u}_{\Delta,b}. \tag{4.16}$$
In the following, we will derive the expressions for u∆,a , u∆,b and uΠ in Algorithm 4.1. By using the definition of S˜ and (4.12), we have ,i ˜ ∆ )−1 (R∆,Γ )T g . u∆ = ((I∆ )T SI a (i)
(i)
(i)
(4.17)
Combining (4.17) and (4.11), we obtain u∆,a =
N ∑
(i)
(i)
(i)
(i)
˜ ∆ )−1 (R∆,Γ )T g . R∆,Γ ((I∆ )T SI
(4.18)
i=1
Similar to the derivation of (4.18), we find u∆,b = −
N ∑
(i)
(i)
(i)
(i)
˜ ∆ )−1 (I∆ )T SI ˜ Π uΠ . R∆,Γ ((I∆ )T SI
(4.19)
i=1
Using (4.17), we have
˜ (g , v) − A(
N ∑ i=1
ua∆,i , v) = ((IˆΠ )T g , v) − (
N ∑
,i ˜ ∆ u∆ (IΠ )T SI a , v) = (R0 g , v), ∀v ∈ WΠ
i=1
(i)
(4.20)
J. Peng et al. / Journal of Computational and Applied Mathematics 335 (2018) 185–206
195
where R0 = (IˆΠ )T −
N ∑
(i)
(i)
(i)
(i)
˜ ∆ ((I∆ )T SI ˜ ∆ )−1 (R∆,Γ )T , (IΠ )T SI
(4.21)
i=1
ˆ W ˆ ). here IˆΠ := IW is the interpolation operator defined in (1.2) (where V = WΠ and U = W Π It is easy to know that
˜ Π ,Γ uΠ , RΠ ,Γ v) = (FΠ Π uΠ , v), ∀v ∈ WΠ , A(R
(4.22)
where
˜ Π ,Γ . FΠΠ = (RΠ ,Γ )T SR
(4.23)
Inserting (4.20) and (4.22) into (4.13), and since v ∈ WΠ is arbitrary, it implies −1 uΠ = FΠΠ R0 g .
(4.24) −1
Substituting (4.17), (4.19) and (4.24) into (4.16), we can arrive at the expression of MBDDC defined in Algorithm 4.1 as follows: −1 MBDDC =
N ∑
(i)
(i)
(i)
(i)
1 ˜ ∆ )−1 (R∆,Γ )T + RT0 FΠ−Π R∆,Γ ((I∆ )T SI R0 .
(4.25)
i=1
−1 In order to bound the condition number of MBDDC , we need to rewrite the preconditioned operator in a more concise form than (4.25). ˜ ∆ , by using (3.34), there exists Firstly, we give the equivalent expression of FΠ Π defined in (4.23). For any given g˜ ∆ ∈ W a decomposition
g˜ ∆ =
N ∑
(i)
(i)
(i)
(i)
I∆ g˜ ∆ , where g˜ ∆ ∈ W∆ .
(4.26)
i=1
From (4.10) and (4.26), one has
$$\tilde S_\Delta^{-1} \tilde S \tilde g_\Delta = \sum_{i=1}^{N} \sum_{j=1}^{N} I_\Delta^{(i)} ((I_\Delta^{(i)})^T \tilde S I_\Delta^{(i)})^{-1} (I_\Delta^{(i)})^T \tilde S I_\Delta^{(j)} \tilde g_\Delta^{(j)}. \tag{4.27}$$
By the support property of the functions in $W_\Delta^{(l)}$ ($l = 1, \ldots, N$), it follows that
$$(I_\Delta^{(i)})^T \tilde S I_\Delta^{(j)} = 0, \quad i \neq j. \tag{4.28}$$
Using (4.28) and (4.26), we derive from (4.27) that
$$\tilde S_\Delta^{-1} \tilde S \tilde g_\Delta = \sum_{i=1}^{N} I_\Delta^{(i)} ((I_\Delta^{(i)})^T \tilde S I_\Delta^{(i)})^{-1} (I_\Delta^{(i)})^T \tilde S I_\Delta^{(i)} \tilde g_\Delta^{(i)} = \sum_{i=1}^{N} I_\Delta^{(i)} \tilde g_\Delta^{(i)} = \tilde g_\Delta. \tag{4.29}$$
Note that $\tilde S_\Delta^{-1} \tilde S \tilde g \in \tilde W_\Delta$ for any $\tilde g \in \tilde W$; then (4.29) implies
$$(\tilde S_\Delta^{-1} \tilde S)^2 = \tilde S_\Delta^{-1} \tilde S. \tag{4.30}$$
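The idempotency (4.30) is the statement that $\tilde S_\Delta^{-1}\tilde S$ is the $\tilde S$-orthogonal projection onto the subspace solved exactly. A toy numerical check, with a coordinate injection $E$ standing in for $I_\Delta$ and a random SPD matrix standing in for $\tilde S$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 5
X = rng.standard_normal((n, n))
S = X @ X.T + n * np.eye(n)                # SPD stand-in for S~
E = np.eye(n)[:, :m]                       # toy injection of the Delta coordinates

# S_Delta^{-1} = E (E^T S E)^{-1} E^T, the exact subspace solver.
S_delta_inv = E @ np.linalg.inv(E.T @ S @ E) @ E.T

# P = S_Delta^{-1} S is the S-orthogonal projection onto range(E).
P = S_delta_inv @ S
```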
From (4.9), (4.23) and (4.30), together with the symmetry of $\tilde S$ and $\tilde S_\Delta^{-1}$, we get the equivalent form of $F_{\Pi\Pi}$ as follows
$$\begin{aligned} F_{\Pi\Pi} &= (I_\Pi - \tilde S_\Delta^{-1}\tilde S I_\Pi)^T \tilde S (I_\Pi - \tilde S_\Delta^{-1}\tilde S I_\Pi)\\ &= (I_\Pi)^T \tilde S (I_\Pi - \tilde S_\Delta^{-1}\tilde S I_\Pi) - (I_\Pi)^T \tilde S (\tilde S_\Delta^{-1}\tilde S - (\tilde S_\Delta^{-1}\tilde S)^2) I_\Pi\\ &= (I_\Pi)^T \tilde S I_\Pi - (I_\Pi)^T \tilde S \tilde S_\Delta^{-1} \tilde S I_\Pi. \end{aligned} \tag{4.31}$$
Next, we can derive the expression of the inverse operator of $\tilde S$ as
$$\tilde S^{-1} = \tilde S_\Delta^{-1} + \tilde S_\Delta^{-1} \tilde S I_\Pi F_{\Pi\Pi}^{-1} (I_\Pi)^T \tilde S \tilde S_\Delta^{-1} - \tilde S_\Delta^{-1} \tilde S I_\Pi F_{\Pi\Pi}^{-1} (I_\Pi)^T - I_\Pi F_{\Pi\Pi}^{-1} (I_\Pi)^T \tilde S \tilde S_\Delta^{-1} + I_\Pi F_{\Pi\Pi}^{-1} (I_\Pi)^T. \tag{4.32}$$
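In a toy coordinate splitting the identity (4.32) is the classical block-factorization inverse, with $F_{\Pi\Pi}$ reducing to the Schur complement; a quick numerical check under those illustrative assumptions (coordinate injections instead of the mortar operators):

```python
import numpy as np

rng = np.random.default_rng(2)
n_d, n_p = 5, 3
n = n_d + n_p
X = rng.standard_normal((n, n))
S = X @ X.T + n * np.eye(n)            # SPD stand-in for S~

I_pi = np.eye(n)[:, n_d:]              # toy injection of the Pi coordinates
E_d = np.eye(n)[:, :n_d]               # toy injection of the Delta coordinates
Sd_inv = E_d @ np.linalg.inv(E_d.T @ S @ E_d) @ E_d.T

# F_PiPi from (4.31); in this splitting it is the Schur complement.
F = I_pi.T @ S @ I_pi - I_pi.T @ S @ Sd_inv @ S @ I_pi
F_inv = np.linalg.inv(F)

# Right-hand side of (4.32), term by term.
S_inv = (Sd_inv
         + Sd_inv @ S @ I_pi @ F_inv @ I_pi.T @ S @ Sd_inv
         - Sd_inv @ S @ I_pi @ F_inv @ I_pi.T
         - I_pi @ F_inv @ I_pi.T @ S @ Sd_inv
         + I_pi @ F_inv @ I_pi.T)
```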
In fact, according to the definition (3.33) of $\tilde W$, we only need to check that the operator $B :=$ right-hand side of (4.32) satisfies
$$B \tilde S \tilde g_\Delta = \tilde g_\Delta, \quad \forall \tilde g_\Delta \in \tilde W_\Delta, \tag{4.33}$$
and
$$B \tilde S \tilde g_\Pi = \tilde g_\Pi, \quad \forall \tilde g_\Pi \in W_\Pi. \tag{4.34}$$
From (4.29) and (4.32), it is easy to see that (4.33) holds. By using (4.31) and (4.32), we find
$$B \tilde S \tilde g_\Pi = \tilde S_\Delta^{-1} \tilde S I_\Pi \tilde g_\Pi + \tilde S_\Delta^{-1} \tilde S I_\Pi F_{\Pi\Pi}^{-1} \big((I_\Pi)^T \tilde S \tilde S_\Delta^{-1} \tilde S I_\Pi - (I_\Pi)^T \tilde S I_\Pi\big) \tilde g_\Pi - I_\Pi F_{\Pi\Pi}^{-1} \big((I_\Pi)^T \tilde S \tilde S_\Delta^{-1} \tilde S I_\Pi - (I_\Pi)^T \tilde S I_\Pi\big) \tilde g_\Pi = \tilde g_\Pi, \quad \forall \tilde g_\Pi \in W_\Pi,$$
then (4.34) holds. □

In order to present a concise form of $M_{\rm BDDC}^{-1}$, we need to introduce an averaging operator $E_D: \tilde W \to \hat W$ as
$$E_D = \sum_{i=1}^{N} R_{\Delta,\Gamma}^{(i)} R_\Delta^{(i)} + R_\Pi, \tag{4.35}$$
where the operator $R_{\Delta,\Gamma}^{(i)}$ is defined in (4.6), and $R_\Delta^{(i)} := R_{W_\Delta^{(i)}}^{\tilde W}$ ($i = 1, \ldots, N$) and $R_\Pi := R_{W_\Pi}^{\tilde W}$ are both restriction operators defined in (1.1). By using the definitions of the restriction operator and the interpolation operator, it is easy to verify that
$$R_\Pi I_\Pi = I, \quad R_\Delta^{(i)} I_\Pi = 0, \quad i = 1, \ldots, N, \tag{4.36}$$
$$R_\Delta^{(i)} I_\Delta^{(j)} = \delta_{i,j} I, \quad i, j = 1, \ldots, N, \tag{4.37}$$
where $I$ is the identity operator. Using the above-mentioned preparations, we have the following theorem.

Theorem 4.1. A concise form of the adaptive BDDC preconditioned operator can be written as
$$M_{\rm BDDC}^{-1} = E_D \tilde S^{-1} E_D^T, \tag{4.38}$$
where $\tilde S^{-1}$ and $E_D$ are defined in (4.32) and (4.35), respectively.
Proof. From (4.35) and the interpolation operator $\hat I_\Pi$, we have
$$E_D \tilde S^{-1} E_D^T = \Big( \sum_{i=1}^{N} R_{\Delta,\Gamma}^{(i)} R_\Delta^{(i)} + \hat I_\Pi R_\Pi \Big) \tilde S^{-1} \Big( \sum_{j=1}^{N} (R_\Delta^{(j)})^T (R_{\Delta,\Gamma}^{(j)})^T + (R_\Pi)^T (\hat I_\Pi)^T \Big) = M_1 + M_2 + M_2^T + M_3, \tag{4.39}$$
where
$$M_1 = \sum_{i=1}^{N} R_{\Delta,\Gamma}^{(i)} R_\Delta^{(i)} \tilde S^{-1} \sum_{j=1}^{N} (R_\Delta^{(j)})^T (R_{\Delta,\Gamma}^{(j)})^T, \quad M_2 = \sum_{i=1}^{N} R_{\Delta,\Gamma}^{(i)} R_\Delta^{(i)} \tilde S^{-1} (R_\Pi)^T (\hat I_\Pi)^T, \quad M_3 = \hat I_\Pi R_\Pi \tilde S^{-1} (R_\Pi)^T (\hat I_\Pi)^T. \tag{4.40}$$
Using the above expression of $M_1$, together with (4.32) and the second equation of (4.36), we obtain
$$M_1 = F \sum_{j=1}^{N} (R_\Delta^{(j)})^T (R_{\Delta,\Gamma}^{(j)})^T + F \tilde S I_\Pi F_{\Pi\Pi}^{-1} (I_\Pi)^T \tilde S F^T, \tag{4.41}$$
where
$$F = \sum_{i=1}^{N} R_{\Delta,\Gamma}^{(i)} R_\Delta^{(i)} \tilde S_\Delta^{-1}.$$
Inserting (4.10) into (4.41) and using (4.37), we can rewrite the above operator $F$ as
$$F = \sum_{i=1}^{N} \sum_{j=1}^{N} R_{\Delta,\Gamma}^{(i)} R_\Delta^{(i)} I_\Delta^{(j)} ((I_\Delta^{(j)})^T \tilde S I_\Delta^{(j)})^{-1} (I_\Delta^{(j)})^T = \sum_{i=1}^{N} R_{\Delta,\Gamma}^{(i)} ((I_\Delta^{(i)})^T \tilde S I_\Delta^{(i)})^{-1} (I_\Delta^{(i)})^T. \tag{4.42}$$
Similarly, we can get the expressions of $M_2$ and $M_3$ as follows
$$M_2 = -F \tilde S I_\Pi F_{\Pi\Pi}^{-1} (\hat I_\Pi)^T, \quad M_3 = \hat I_\Pi F_{\Pi\Pi}^{-1} (\hat I_\Pi)^T. \tag{4.43}$$
Further, substituting (4.40) and (4.43) into (4.39), and using (4.42), (4.37) and (4.21), we have
$$\begin{aligned} E_D \tilde S^{-1} E_D^T &= F \sum_{j=1}^{N} (R_\Delta^{(j)})^T (R_{\Delta,\Gamma}^{(j)})^T + \big(\hat I_\Pi - F \tilde S I_\Pi\big) F_{\Pi\Pi}^{-1} \big((\hat I_\Pi)^T - (I_\Pi)^T \tilde S F^T\big)\\ &= \sum_{i=1}^{N} R_{\Delta,\Gamma}^{(i)} ((I_\Delta^{(i)})^T \tilde S I_\Delta^{(i)})^{-1} (R_{\Delta,\Gamma}^{(i)})^T + R_0^T F_{\Pi\Pi}^{-1} R_0; \end{aligned}$$
combining this with (4.25) completes the proof of (4.38). □

In order to derive the required expressions for the preconditioned system of (2.20), we now present the equivalent form of $\hat S$. Let $I_\Gamma$ be the natural injection from $\hat W$ into $\tilde W$ (see [8]) such that for each $w_\Pi \in W_\Pi$, $w_{k,\Delta} \in W_{k,\Delta}$ ($k = 1, \ldots, M$), we have
$$I_\Gamma w_\Pi = w_\Pi, \quad I_\Gamma w_{k,\Delta} = T_k^{(i)} w_{k,\Delta} + T_k^{(j)} w_{k,\Delta}, \tag{4.44}$$
where $T_k^{(\nu)}$ ($\nu = i, j$) are defined in (4.1), and $i$ and $j$ are the indices of the subdomains which satisfy $F_k = \partial\Omega_i \cap \partial\Omega_j$. From (4.44) and the definition (3.7) of the scaling operator $D_{F_k}^{(\nu)}$ ($k = 1, \ldots, M$, $\nu = i, j$), we can easily prove that
$$I_\Gamma D_{F_k}^{(\nu)} w = D_{F_k}^{(\nu)} I_\Gamma w, \quad \forall w \in W_{k,\Delta}. \tag{4.45}$$
For any given $\hat u, \hat v \in \hat W$, using the decomposition (3.28) of $\hat W$, we have
$$\hat\zeta = \sum_{k=1}^{M} (\hat\zeta_{k,\Delta} + \hat\zeta_{k,\Pi}), \quad \hat\zeta = \hat u, \hat v \in \hat W, \tag{4.46}$$
where $\hat\zeta_{k,\Delta} \in W_{k,\Delta}$, $\hat\zeta_{k,\Pi} \in W_{k,\Pi}$. From (4.44) and (4.46), we find
$$I_\Gamma \hat\zeta = \sum_{i=1}^{N} \sum_{k \in M_i} T_k^{(i)} \hat\zeta_{k,\Delta} + \sum_{k=1}^{M} \hat\zeta_{k,\Pi}, \quad \hat\zeta = \hat u, \hat v. \tag{4.47}$$
Furthermore, combining (2.8), (2.19) and (4.46) yields
$$(\hat S \hat u, \hat v) = A(\hat u, \hat v) = \sum_{i=1}^{N} A_i(\hat u^{(i)}, \hat v^{(i)}), \tag{4.48}$$
where
$$\hat\zeta^{(i)} = \sum_{k \in M_i} T_k^{(i)} (\hat\zeta_{k,\Delta} + \hat\zeta_{k,\Pi}), \quad \hat\zeta = \hat u, \hat v. \tag{4.49}$$
Using (4.5), (4.47) and (4.48), it is easy to derive that
$$\hat S = I_\Gamma^T \tilde S I_\Gamma. \tag{4.50}$$
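The subassembly identity (4.50) can be seen in the smallest possible toy example: two subdomains sharing one interface unknown. $\tilde S$ is block diagonal over the subdomains (the interface unknown is duplicated), $I_\Gamma$ injects a global vector into the duplicated representation, and $I_\Gamma^T \tilde S I_\Gamma$ performs the assembly. The matrices below are illustrative stand-ins, not the paper's Schur complements.

```python
import numpy as np

S1 = np.array([[2.0, -1.0], [-1.0, 2.0]])   # local dofs (a, c) of subdomain 1
S2 = np.array([[2.0, -1.0], [-1.0, 2.0]])   # local dofs (c, b) of subdomain 2
S_tilde = np.block([[S1, np.zeros((2, 2))],
                    [np.zeros((2, 2)), S2]])

# Global dofs (a, c, b) -> duplicated dofs (a, c, c, b).
I_gamma = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])

# Subassembly: the shared entry receives contributions from both sides.
S_hat = I_gamma.T @ S_tilde @ I_gamma
```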
Using (4.38) and (4.50), we present the preconditioned system of (2.20) as
$$\hat G = E_D \tilde S^{-1} E_D^T I_\Gamma^T \tilde S I_\Gamma. \tag{4.51}$$
We will give the condition number bound of $\hat G$ in the next section.

5. Analysis of the condition number

First of all, we want to estimate the minimum eigenvalue of $\hat G$. For this reason, we firstly derive the following partition of unity condition.

Lemma 5.1. Let $I_\Gamma: \hat W \to \tilde W$ be the natural injection operator defined in (4.44) and $E_D: \tilde W \to \hat W$ be the averaging operator defined in (4.35); then
$$E_D I_\Gamma = I, \tag{5.1}$$
where $I: \hat W \to \hat W$ is the identity operator.
Proof. For any $\hat u \in \hat W$, by using (3.28), we have the decomposition
$$\hat u = \sum_{k=1}^{M} \hat u_{k,\Delta} + \hat u_\Pi, \tag{5.2}$$
where $\hat u_{k,\Delta} \in W_{k,\Delta}$, $\hat u_\Pi \in W_\Pi$. From the definition (4.44) of $I_\Gamma$ and (5.2), we have
$$I_\Gamma \hat u = \sum_{i=1}^{N} \sum_{k \in M_i} T_k^{(i)} \hat u_{k,\Delta} + \hat u_\Pi, \tag{5.3}$$
where the linear basis transformation operator $T_k^{(i)}$ is defined in (4.1). By the definitions (4.35) of $E_D$, (1.1) of $R_\Delta^{(i)}$ ($i = 1, \ldots, N$) and $R_\Pi$, together with (5.3), the property (4.8) of $R_{\Delta,\Gamma}^{(l)}$ ($l = 1, \ldots, N$) and (5.2), it follows that
$$E_D I_\Gamma \hat u = \sum_{i=1}^{N} R_{\Delta,\Gamma}^{(i)} \sum_{k \in M_i} T_k^{(i)} \hat u_{k,\Delta} + \hat u_\Pi = \sum_{k=1}^{M} \big( R_{\Delta,\Gamma}^{(i)} T_k^{(i)} \hat u_{k,\Delta} + R_{\Delta,\Gamma}^{(j)} T_k^{(j)} \hat u_{k,\Delta} \big) + \hat u_\Pi = \sum_{k=1}^{M} \hat u_{k,\Delta} + \hat u_\Pi = \hat u,$$
where we have used the assumption that each interface $F_k = \partial\Omega_i \cap \partial\Omega_j$ in the second equality. From this, and noting that $\hat u \in \hat W$ is arbitrary, the proof of (5.1) is completed. □

By using Lemma 5.1, an argument similar to Lemma 3.4 in [8] gives the following lemma.
Lemma 5.2. The minimum eigenvalue of the preconditioned system $\hat G$ satisfies
$$\lambda_{\min}(\hat G) \geq 1. \tag{5.4}$$
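The partition-of-unity condition of Lemma 5.1 can be checked in the same two-subdomain toy setting used before: $I_\Gamma$ duplicates the shared unknown, and $E_D$ recombines the two copies with weights $d_1 + d_2 = 1$ (illustrative stand-ins for the scaling operators $D_{F_k}$). A useful by-product is that the jump operator $P_D = I - I_\Gamma E_D$ introduced below is then automatically a projection.

```python
import numpy as np

d1, d2 = 0.3, 0.7                      # any weights with d1 + d2 = 1
I_gamma = np.array([[1.0, 0.0, 0.0],   # (a, c, b) -> (a, c, c, b)
                    [0.0, 1.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])
E_D = np.array([[1.0, 0.0, 0.0, 0.0],  # (a, c1, c2, b) -> (a, d1*c1 + d2*c2, b)
                [0.0, d1, d2, 0.0],
                [0.0, 0.0, 0.0, 1.0]])

# Jump operator: P_D^2 = P_D follows directly from E_D I_gamma = I.
P_D = np.eye(4) - I_gamma @ E_D
```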
Then, we are in a position to derive an upper bound for the maximum eigenvalue of $\hat G$. Let $P_D: \tilde W \to \tilde W$ be the jump operator defined by
$$P_D = I - I_\Gamma E_D. \tag{5.5}$$
A conversion process similar to that for the maximum eigenvalue of $\hat G$ in an algebraic framework (see [19], Lemma 3.1 and Lemma 3.2) shows that
$$\lambda_{\max}(\hat G) \leq \lambda_{\max}(G_d), \tag{5.6}$$
where the operator $G_d = P_D^T \tilde S P_D \tilde S^{-1}$. Note that $G_d$ and $\tilde S^{-1} P_D^T \tilde S P_D$ share the same set of nonzero eigenvalues. From this and using (5.6), the symmetry of $\tilde S^{-1} P_D^T \tilde S P_D$ with respect to the bilinear form $\tilde A(\cdot,\cdot)$ and the definition (4.5) of $\tilde S$, we can arrive at
$$\lambda_{\max}(\hat G) \leq \max_{\tilde w \in \tilde W \setminus \{0\}} \frac{\tilde A(\tilde S^{-1} P_D^T \tilde S P_D \tilde w, \tilde w)}{\tilde A(\tilde w, \tilde w)}.$$
Further, we have the following lemma.

Lemma 5.3. The maximum eigenvalue of the preconditioned system $\hat G$ satisfies
$$\lambda_{\max}(\hat G) \leq \max_{\tilde w \in \tilde W \setminus \{0\}} \frac{\tilde A(P_D \tilde w, P_D \tilde w)}{\tilde A(\tilde w, \tilde w)}. \tag{5.7}$$
For any given $\tilde w \in \tilde W$, we can derive the decomposition formula of $P_D \tilde w$. Using the definition (3.33) of $\tilde W$, we have
$$\tilde w = \sum_{i=1}^{N} \sum_{k \in M_i} w_{k,\Delta}^{(i)} + w_\Pi, \quad \text{where } w_{k,\Delta}^{(i)} \in W_{k,\Delta}^{(i)}, \ w_\Pi = \sum_{k=1}^{M} w_{k,\Pi} \in W_\Pi. \tag{5.8}$$
By the definition (5.5) of $P_D$, (4.35) of $E_D$, (1.1) of $R_\Pi$ and (4.44) of $I_\Gamma$, and using the decomposition (5.8), $P_D \tilde w$ can be rewritten as follows:
$$P_D \tilde w = \tilde w - I_\Gamma E_D \tilde w = \tilde w - I_\Gamma \Big( \sum_{i=1}^{N} R_{\Delta,\Gamma}^{(i)} R_\Delta^{(i)} + R_\Pi \Big) \tilde w = \sum_{i=1}^{N} \sum_{k \in M_i} w_{k,\Delta}^{(i)} + w_\Pi - I_\Gamma \sum_{i=1}^{N} R_{\Delta,\Gamma}^{(i)} R_\Delta^{(i)} \tilde w - w_\Pi = \sum_{i=1}^{N} \sum_{k \in M_i} w_{k,\Delta}^{(i)} - I_\Gamma \sum_{i=1}^{N} R_{\Delta,\Gamma}^{(i)} R_\Delta^{(i)} \tilde w. \tag{5.9}$$
Using (5.8), the definition (1.1) of $R_\Delta^{(i)}$ ($i = 1, \ldots, N$), (4.6) of $R_{\Delta,\Gamma}^{(i)}$, the properties (4.7) and (4.45), and the definition (4.44) of $I_\Gamma$, we find the second term in (5.9) satisfies
$$I_\Gamma \sum_{i=1}^{N} R_{\Delta,\Gamma}^{(i)} R_\Delta^{(i)} \tilde w = I_\Gamma \sum_{i=1}^{N} R_{\Delta,\Gamma}^{(i)} \sum_{k \in M_i} w_{k,\Delta}^{(i)} = \sum_{i=1}^{N} \sum_{k \in M_i} D_{F_k}^{(i)} I_\Gamma T_k^{(i)} w_{k,\Delta}^{(i)} = \sum_{i=1}^{N} \sum_{k \in M_i} D_{F_k}^{(i)} \big( w_{k,\Delta}^{(i)} + T^{(i,j)} w_{k,\Delta}^{(i)} \big), \tag{5.10}$$
where we assume that $F_k = \partial\Omega_i \cap \partial\Omega_j$ ($k \in M_i$, $1 \leq i \leq N$), the linear basis transformation operator $T_k^{(i)}$ is defined in (4.1), and $T^{(i,j)}: W_k^{(i)} \to W_k^{(j)}$ is the linear basis transformation operator satisfying
$$T^{(i,j)} \tilde\varphi_l^{k,i} = \tilde\varphi_l^{k,j}, \quad l = 1, \ldots, n_k. \tag{5.11}$$
Substituting (5.10) into (5.9), and using the property of $D_{F_k}^{(\nu)}$ ($\nu = i, j$) in (3.8), we can obtain the decomposition of $P_D \tilde w$ as follows:
$$\begin{aligned} P_D \tilde w &= \sum_{i=1}^{N} \sum_{k \in M_i} \Big( (D_{F_k}^{(i)} + D_{F_k}^{(j)}) w_{k,\Delta}^{(i)} - D_{F_k}^{(i)} \big( w_{k,\Delta}^{(i)} + T^{(i,j)} w_{k,\Delta}^{(i)} \big) \Big)\\ &= \sum_{i=1}^{N} \sum_{k \in M_i} \big( D_{F_k}^{(j)} w_{k,\Delta}^{(i)} - D_{F_k}^{(i)} T^{(i,j)} w_{k,\Delta}^{(i)} \big)\\ &= \sum_{i=1}^{N} \sum_{k \in M_i} D_{F_k}^{(j)} \big( w_{k,\Delta}^{(i)} - T^{(j,i)} w_{k,\Delta}^{(j)} \big). \end{aligned} \tag{5.12}$$
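In the matching-grid case the basis transformations $T^{(i,j)}$ reduce to the identity, and (5.12) says the jump on the side-$i$ copy of an interface is $d_j (w_i - w_j)$ with $d_i + d_j = 1$. A toy check with a single shared unknown carried independently by both sides (the weights are illustrative stand-ins for the scaling operators):

```python
import numpy as np

d_i, d_j = 0.4, 0.6
w_i, w_j = 2.0, 5.0                    # the two independent interface copies
w = np.array([w_i, w_j])

# Averaging and injection restricted to the duplicated unknown.
E_D = np.array([[d_i, d_j]])           # E_D w = d_i*w_i + d_j*w_j
I_gamma = np.array([[1.0], [1.0]])     # duplicate the averaged value
P_D_w = w - (I_gamma @ E_D @ w)

# Decomposition (5.12) with T^{(i,j)} = identity.
expected = np.array([d_j * (w_i - w_j), d_i * (w_j - w_i)])
```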
Using Lemma 5.3 and the above decomposition (5.12), we can derive the following lemma:

Lemma 5.4. For a given tolerance $\Theta > 0$, the maximum eigenvalue of the adaptive BDDC preconditioned system $\hat G$ satisfies
$$\lambda_{\max}(\hat G) \leq C\Theta, \tag{5.13}$$
where $C = 2C_F^2$ and $C_F = \max_i \{|M_i|\}$; here $|M_i|$ denotes the number of interfaces on $\partial\Omega_i$.
Proof. According to (5.7), in order to give the proof of (5.13), we only need to show that
$$\max_{\tilde w \in \tilde W \setminus \{0\}} \frac{\tilde A(P_D \tilde w, P_D \tilde w)}{\tilde A(\tilde w, \tilde w)} \leq C\Theta.$$
In view of the definitions (4.3), (4.4) of the bilinear form $\tilde A(\cdot,\cdot)$, and the decompositions (5.8) and (5.12), it is equivalent to show that
$$\sum_{i=1}^{N} A_i\big( (P_D \tilde w)^{(i)}, (P_D \tilde w)^{(i)} \big) \leq C\Theta \sum_{i=1}^{N} A_i(\tilde w^{(i)}, \tilde w^{(i)}), \tag{5.14}$$
where
$$(P_D \tilde w)^{(i)} = \sum_{k \in M_i} D_{F_k}^{(j)} \big( w_{k,\Delta}^{(i)} - \tilde w_{k,\Delta}^{(i)} \big), \quad \tilde w^{(i)} = \sum_{k \in M_i} \big( w_{k,\Delta}^{(i)} + w_{k,\Pi}^{(i)} \big), \tag{5.15}$$
here $\tilde w_{k,\Delta}^{(i)} = T^{(j,i)} w_{k,\Delta}^{(j)}$ and $w_{k,\Pi}^{(i)} = T_k^{(i)} w_{k,\Pi}$.

Firstly, by (5.15), the inequality $A_i\big(\sum_{l=1}^{J} \alpha_l, \sum_{l=1}^{J} \alpha_l\big) \leq J \sum_{l=1}^{J} A_i(\alpha_l, \alpha_l)$, $C_F = \max_i \{|M_i|\}$ and the essential properties (3.13), we have
$$\begin{aligned} \sum_{i=1}^{N} A_i\big( (P_D \tilde w)^{(i)}, (P_D \tilde w)^{(i)} \big) &= \sum_{i=1}^{N} A_i\Big( \sum_{k \in M_i} D_{F_k}^{(j)} (w_{k,\Delta}^{(i)} - \tilde w_{k,\Delta}^{(i)}), \sum_{k \in M_i} D_{F_k}^{(j)} (w_{k,\Delta}^{(i)} - \tilde w_{k,\Delta}^{(i)}) \Big)\\ &\leq C_F \sum_{i=1}^{N} \sum_{k \in M_i} A_i\big( D_{F_k}^{(j)} w_{k,\Delta}^{(i)} - D_{F_k}^{(j)} \tilde w_{k,\Delta}^{(i)}, D_{F_k}^{(j)} w_{k,\Delta}^{(i)} - D_{F_k}^{(j)} \tilde w_{k,\Delta}^{(i)} \big)\\ &\leq 2C_F \sum_{i=1}^{N} \sum_{k \in M_i} \big( A_i(D_{F_k}^{(j)} w_{k,\Delta}^{(i)}, D_{F_k}^{(j)} w_{k,\Delta}^{(i)}) + A_i(D_{F_k}^{(j)} \tilde w_{k,\Delta}^{(i)}, D_{F_k}^{(j)} \tilde w_{k,\Delta}^{(i)}) \big)\\ &= 2C_F \sum_{i=1}^{N} \sum_{k \in M_i} \big( A_i(D_{F_k}^{(j)} w_{k,\Delta}^{(i)}, D_{F_k}^{(j)} w_{k,\Delta}^{(i)}) + A_j(D_{F_k}^{(i)} \tilde w_{k,\Delta}^{(j)}, D_{F_k}^{(i)} \tilde w_{k,\Delta}^{(j)}) \big)\\ &\leq 2C_F \Theta \sum_{i=1}^{N} \sum_{k \in M_i} A_i\big( \bar w_{k,\Delta}^{(i)} + \bar w_{k,\Pi}^{(i)}, \bar w_{k,\Delta}^{(i)} + \bar w_{k,\Pi}^{(i)} \big), \end{aligned} \tag{5.16}$$
where
$$\bar w_{k,\Delta}^{(i)} = \bar T_k^{(i)} w_{k,\Delta}^{(i)} \in \bar W_{k,\Delta}^{(i)}, \quad \bar w_{k,\Pi}^{(i)} = \bar T_k^{(i)} w_{k,\Pi}^{(i)} \in \bar W_{k,\Pi}^{(i)}, \tag{5.17}$$
here the linear basis transformation operator $\bar T_k^{(i)}: W_k^{(i)} \to \bar W_k^{(i)}$ is defined by
$$\bar T_k^{(i)} \tilde\varphi_l^{k,i} = \tilde{\bar\varphi}_l^{k,i}, \quad l = 1, \ldots, n_k,$$
and the basis functions $\{\tilde{\bar\varphi}_l^{k,i}\}_{l=1}^{n_k}$ are defined in (3.35).

Secondly, for each $k \in M_i$, by using (5.15), we obtain
$$\begin{aligned} \tilde w^{(i)} &= w_{k,\Delta}^{(i)} + w_{k,\Pi}^{(i)} + \sum_{m \in M_i,\, m \neq k} \big( w_{m,\Delta}^{(i)} + w_{m,\Pi}^{(i)} \big)\\ &= \Big( \bar w_{k,\Delta}^{(i)} - (\bar w_{k,\Delta}^{(i)} - w_{k,\Delta}^{(i)}) \Big) + \Big( \bar w_{k,\Pi}^{(i)} - (\bar w_{k,\Pi}^{(i)} - w_{k,\Pi}^{(i)}) \Big) + \sum_{m \in M_i,\, m \neq k} \big( w_{m,\Delta}^{(i)} + w_{m,\Pi}^{(i)} \big)\\ &= \big( \bar w_{k,\Delta}^{(i)} + \bar w_{k,\Pi}^{(i)} \big) + w_1 + w_2, \end{aligned} \tag{5.18}$$
where $w_1 = \sum_{m \in M_i,\, m \neq k} (w_{m,\Delta}^{(i)} + w_{m,\Pi}^{(i)})$ and $w_2 = -(\bar w_{k,\Delta}^{(i)} - w_{k,\Delta}^{(i)}) - (\bar w_{k,\Pi}^{(i)} - w_{k,\Pi}^{(i)})$. Obviously,
$$\bar w_{k,\Delta}^{(i)} + \bar w_{k,\Pi}^{(i)} \in \bar W_k^{(i)}, \quad w_1 \in \bigoplus_{m \in M_i,\, m \neq k} W_m^{(i)}, \tag{5.19}$$
and from (5.17) and the definition (3.5) of $Z_k^{(i)}$, we know that
$$w_2 \in Z_k^{(i)}. \tag{5.20}$$
Fig. 2. The checkerboard distribution of the local problem size in a geometrically conforming partition.
Using (5.18), (5.19), (5.20) and the orthogonality condition (3.6), we have
$$\begin{aligned} A_i(\tilde w^{(i)}, \tilde w^{(i)}) &= A_i\big( \bar w_{k,\Delta}^{(i)} + \bar w_{k,\Pi}^{(i)} + w_1 + w_2, \bar w_{k,\Delta}^{(i)} + \bar w_{k,\Pi}^{(i)} + w_1 + w_2 \big)\\ &= A_i\big( \bar w_{k,\Delta}^{(i)} + \bar w_{k,\Pi}^{(i)}, \bar w_{k,\Delta}^{(i)} + \bar w_{k,\Pi}^{(i)} \big) + A_i(w_1 + w_2, w_1 + w_2)\\ &\geq A_i\big( \bar w_{k,\Delta}^{(i)} + \bar w_{k,\Pi}^{(i)}, \bar w_{k,\Delta}^{(i)} + \bar w_{k,\Pi}^{(i)} \big) \end{aligned} \tag{5.21}$$
for all $k \in M_i$. Finally, the estimate (5.14) follows from (5.16) and (5.21). □

By the results of Lemma 5.2 and Lemma 5.4, we can obtain the following theorem.

Theorem 5.1. For a given tolerance $\Theta > 0$, the condition number of the adaptive BDDC preconditioned system $\hat G$ satisfies
$$\kappa(\hat G) \leq C\Theta, \tag{5.22}$$
where $C$ is a constant depending only on the maximum number of interfaces per subdomain.
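Extreme eigenvalues of a preconditioned operator can be estimated from the PCG coefficients via the associated Lanczos tridiagonal matrix; this is one standard way to produce values like the $\lambda_{\min}$, $\lambda_{\max}$ and $\kappa$ columns reported in the tables of the next section. The sketch below is a generic numpy implementation of that technique, not the authors' Matlab code.

```python
import numpy as np

def pcg_with_eig_estimates(A, b, M_inv, n_iter):
    """Run PCG and return the solution together with estimates of the extreme
    eigenvalues of M^{-1}A, recovered from the Lanczos tridiagonal matrix
    built out of the PCG step lengths (alpha) and update factors (beta)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv @ r
    p = z.copy()
    rz_old = r @ z
    alphas, betas = [], []
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rz_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        z = M_inv @ r
        rz_new = r @ z
        beta = rz_new / rz_old
        alphas.append(alpha)
        betas.append(beta)
        p = z + beta * p
        rz_old = rz_new
        if np.sqrt(abs(rz_new)) < 1e-14:
            break
    k = len(alphas)
    T = np.zeros((k, k))
    for i in range(k):
        T[i, i] = 1.0 / alphas[i] + (betas[i - 1] / alphas[i - 1] if i > 0 else 0.0)
        if i + 1 < k:
            T[i, i + 1] = T[i + 1, i] = np.sqrt(betas[i]) / alphas[i]
    ritz = np.linalg.eigvalsh(T)
    return x, ritz.min(), ritz.max()
```

For a diagonal test matrix with eigenvalues 1 through 10 and no preconditioning, the Ritz values after a full Krylov cycle reproduce the extreme eigenvalues to high accuracy.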
6. Numerical results

In this section, we present some numerical results of our adaptive BDDC algorithm for solving the Schur complement system (2.20). The region $\Omega = (0,1)^2$ is decomposed into $N$ geometrically conforming or nonconforming square subdomains. Each subdomain is divided into a uniform triangular mesh with $n$ or $\beta n$ elements in each direction, distributed in a checkerboard pattern ($\beta \neq 1$ means the grids are non-matching); see Fig. 2 for the case with geometrically conforming subdomains. The PCG iteration is stopped when the relative residual has been reduced by a factor of $10^{-10}$. For each interface, we emphasize that the nonmortar side is the one whose subdomain has the larger mesh size. The algorithm is implemented in Matlab and run on a machine with an Intel(R) Xeon(R) CPU E5-2650 v2 (2.60 GHz) and 96 GB of memory. Numerical experiments show that the eigenvalues $\lambda$ of the generalized eigenvalue problems (3.17) satisfy $\lambda \geq 1$; hence, in our adaptive BDDC algorithm, we set the tolerance $\Theta = 1 + \min\{\log(n), \log(\beta n)\}$ for a given mesh partition. Additionally, we fix the transformation matrices $\vec T_{F_k}$ on each interface $F_k$ as defined in (3.19). Therefore, the algorithm is uniquely determined by the scaling matrices $\vec D_{F_k}^{(\nu)}$ ($\nu = i, j$) on each interface. In the following experiments, we denote the algorithms with $\vec D_{F_k}^{(\nu)}$ ($\nu = i, j$, $k = 1, \ldots, M$) defined in (3.9) and (3.10) by M1 and M2, respectively. We investigate the robustness of these methods through several important parameters: Iter (number of iterations), $\lambda_{\min}$ (minimum eigenvalue), $\lambda_{\max}$ (maximum eigenvalue), $\kappa$ (condition number), pnum (number of primal unknowns), ppnum (proportion of primal unknowns among the total number of dofs), and time (time spent on the PCG solver and the eigenvalue problems).

Firstly, we present some numerical results with geometrically conforming square subdomains. Without loss of generality, we assume that the mesh parameter $\beta \neq 1$ and the space $X_h$ is associated with the $P_2$ Lagrange finite element space.

Example 6.1. Consider model problem (2.1) with $\rho(x) = 1$ and $\varepsilon(x) = 1$ for all $\Omega_i$.

In Table 1, the results are presented for increasing $n$ with a fixed subdomain partition ($N = 3^2$) for Example 6.1 with the mesh parameter $\beta = 1/2$. We observe that the total number of primal unknowns is the same in both methods and independent of $n$, and that M2 needs fewer iterations than M1.

Example 6.2. Consider model problem (2.1) with $\varepsilon(x) = 1$ for all $\Omega_i$ and $\rho(x)$ with channel patterns as shown in Fig. 3.
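The adaptive selection described above, keeping generalized eigenvectors whose eigenvalues exceed the tolerance $\Theta$ as primal constraints, can be sketched generically as follows. This is a toy stand-in using dense numpy routines; the matrix pencil of the paper's interface eigenproblem (3.17) is not reproduced here, and the function name is illustrative.

```python
import numpy as np

def adaptive_primal_constraints(A, B, theta):
    """Solve the generalized eigenvalue problem A v = lambda B v (A, B SPD)
    and return eigenpairs with lambda > theta; these eigenvectors play the
    role of adaptively selected primal constraints."""
    L = np.linalg.cholesky(B)
    Linv = np.linalg.inv(L)
    C = Linv @ A @ Linv.T           # reduced standard problem: C y = lambda y
    lam, Y = np.linalg.eigh(C)      # eigenvalues in ascending order
    V = Linv.T @ Y                  # back-transform: A v = lambda B v
    sel = lam > theta
    return lam[sel], V[:, sel]
```

For instance, with $A = \mathrm{diag}(1, 2, 5, 50)$, $B = I$ and $\theta = 4$, only the two eigenpairs with eigenvalues 5 and 50 are retained, so two primal constraints would be added for that interface.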
Fig. 3. N = 32 with one channels (left) and three channels (right) in each subdomain: blue (ρ (x) = 1) and red (ρ (x) = η). (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Table 1
Performance of the two methods for Example 6.1.

| n  | Method | Iter | λmin   | λmax   | pnum |
|----|--------|------|--------|--------|------|
| 12 | M1     | 9    | 1.0014 | 1.5148 | 16   |
|    | M2     | 6    | 1.0001 | 1.3076 | 16   |
| 24 | M1     | 9    | 1.0018 | 1.6696 | 16   |
|    | M2     | 6    | 1.0000 | 1.4564 | 16   |
| 48 | M1     | 9    | 1.0024 | 1.8275 | 16   |
|    | M2     | 7    | 1.0000 | 1.6177 | 16   |
Table 2
Performance for Example 6.2 with fixed N = 3^2, β = 1/2 and η = 10^3.

| Channel | n  | Method | Iter | λmin   | λmax   | pnum | Time |
|---------|----|--------|------|--------|--------|------|------|
| One     | 12 | M1     | 9    | 1.0000 | 1.4122 | 34   | 0.21 |
|         |    | M2     | 9    | 1.0001 | 2.9506 | 16   | 0.21 |
|         | 24 | M1     | 8    | 1.0001 | 1.5060 | 34   | 0.55 |
|         |    | M2     | 9    | 1.0000 | 2.9579 | 16   | 0.57 |
|         | 48 | M1     | 9    | 1.0001 | 1.6317 | 34   | 2.13 |
|         |    | M2     | 9    | 1.0001 | 2.9668 | 16   | 3.00 |
| Three   | 42 | M1     | 10   | 1.0001 | 3.8197 | 66   | 1.67 |
|         |    | M2     | 11   | 1.0000 | 2.9666 | 16   | 2.68 |
|         | 56 | M1     | 11   | 1.0001 | 3.9869 | 64   | 3.01 |
|         |    | M2     | 11   | 1.0000 | 2.9183 | 16   | 3.69 |
|         | 70 | M1     | 11   | 1.0001 | 4.0407 | 64   | 4.22 |
|         |    | M2     | 11   | 1.0000 | 2.9701 | 16   | 5.94 |
We fix $N = 3^2$, $\beta = 1/2$ and $\eta = 10^3$; the results for Example 6.2 are presented in Table 2. We note that the total number of primal unknowns in M1 increases as more channels are introduced, but it is nearly independent of the local problem size in both methods. In particular, M2 chooses fewer primal unknowns than M1. The two methods give nearly the same iteration counts; since M2 is more expensive to implement, it carries a larger CPU-time overhead than M1. In Fig. 4, we plot $C = \kappa/\Theta$ for M1 and M2 with varying $n$ for the constant and channel $\rho(x)$, where $\Theta = 1 + \log(0.5n)$. It is easy to see that the constant $C$ in Theorem 5.1 is independent of $n$. For the case with three channels and $n = 42$, we present the numerical results for varying $\eta$ in Table 3. We can see that the two methods are both robust with respect to $\eta$. In particular, as $\eta$ increases, the number of primal unknowns of M2 stays the same.

Example 6.3. Consider model problem (2.1) with $\varepsilon(x) = 1$ for all $\Omega_i$ and $\rho(x) = 10^r$, where $r$ is chosen randomly from $(-3, 3)$ for each grid element, as shown in Fig. 5.

For a given $\beta = 3/2$, we present the numerical results of both methods for increasing $n$ with a fixed $N = 3^2$ in Table 4, where the average number of primal unknowns per interface is given in parentheses. For M1, the number of adaptive primal unknowns is more than 50% of the total interface unknowns. It is worth pointing out that M2 has a significant advantage in iteration counts and selects about 2 primal unknowns per interface as $n$ increases, which shows that M2 is more robust and efficient for highly random coefficients than M1. In Table 5, the two methods are tested for highly varying and random $\rho(x)$ by increasing $N$ with a fixed $n = 24$. We observe a similar performance to the previous case.
J. Peng et al. / Journal of Computational and Applied Mathematics 335 (2018) 185–206
203
Fig. 4. κ/(1 + log(0.5n)) of M1 and M2 with varying n for constant and channel ρ (x).
Fig. 5. The coefficient for random ρ (x) from 10−3 to 103 with N = 42 , n = 18 and β = 1.
Table 3
Performance for Example 6.2 with varying η in three channels, and fixed n = 42, N = 3^2, β = 1/2.

| η    | Method | Iter | λmin   | λmax   | pnum |
|------|--------|------|--------|--------|------|
| 10   | M1     | 11   | 1.0008 | 1.9610 | 16   |
|      | M2     | 9    | 1.0001 | 1.9325 | 16   |
| 10^2 | M1     | 15   | 1.0001 | 3.9311 | 34   |
|      | M2     | 10   | 1.0000 | 2.7299 | 16   |
| 10^3 | M1     | 10   | 1.0001 | 3.8197 | 66   |
|      | M2     | 11   | 1.0000 | 2.9666 | 16   |
| 10^4 | M1     | 9    | 1.0001 | 1.5811 | 70   |
|      | M2     | 11   | 1.0000 | 2.9953 | 16   |
| 10^5 | M1     | 9    | 1.0001 | 1.6025 | 70   |
|      | M2     | 12   | 1.0000 | 3.0008 | 16   |
Table 4
Performance for Example 6.3 by increasing n with a fixed N = 3^2 and β = 3/2.

| n  | Method | Iter | λmin   | λmax   | pnum        | ppnum  |
|----|--------|------|--------|--------|-------------|--------|
| 12 | M1     | 19   | 1.0000 | 3.3817 | 183 (15.25) | 66.30% |
|    | M2     | 12   | 1.0003 | 2.0596 | 18 (1.50)   | 6.52%  |
| 24 | M1     | 22   | 1.0001 | 4.1523 | 371 (30.92) | 65.78% |
|    | M2     | 14   | 1.0008 | 3.0392 | 21 (1.75)   | 3.72%  |
| 48 | M1     | 24   | 1.0003 | 4.8344 | 650 (54.17) | 57.02% |
|    | M2     | 15   | 1.0009 | 3.2978 | 19 (1.58)   | 1.67%  |
Example 6.4. Consider model problem (2.1) with $\rho(x) = 1$ for all $\Omega_i$ and $\varepsilon(x) = 10^r$, where $r$ is chosen randomly from $(-3, 3)$ for each grid element. Set the mesh parameter $\beta = 1/2$. In Tables 6 and 7, the results for Example 6.4 are presented by increasing $n$ with a fixed subdomain partition ($N = 3^2$) and by increasing $N$ with a fixed $n = 24$, respectively. From these tables we can see that the two methods are both robust, and M2 still performs better than M1. Next, similar results for Example 6.3 are presented for geometrically nonconforming partitions.
Table 5
Performance for Example 6.3 by increasing N with a fixed n = 24 and β = 3/2.

| N   | Method | Iter | λmin   | λmax   | pnum         | ppnum  |
|-----|--------|------|--------|--------|--------------|--------|
| 4^2 | M1     | 23   | 1.0001 | 4.1516 | 703 (29.29)  | 62.32% |
|     | M2     | 16   | 1.0008 | 3.1044 | 48 (2.00)    | 4.26%  |
| 5^2 | M1     | 22   | 1.0001 | 4.1453 | 1190 (29.75) | 63.30% |
|     | M2     | 17   | 1.0004 | 3.1094 | 86 (2.15)    | 4.57%  |
| 6^2 | M1     | 22   | 1.0001 | 4.1702 | 1829 (30.48) | 64.86% |
|     | M2     | 19   | 1.0005 | 3.9451 | 136 (2.27)   | 4.82%  |
Table 6
Performance for Example 6.4 by increasing n with a fixed N = 3^2 and β = 1/2.

| n  | Method | Iter | λmin   | λmax   | pnum        | ppnum  |
|----|--------|------|--------|--------|-------------|--------|
| 12 | M1     | 15   | 1.0000 | 2.7059 | 97 (8.08)   | 73.48% |
|    | M2     | 10   | 1.0005 | 1.8198 | 22 (1.83)   | 16.67% |
| 24 | M1     | 19   | 1.0000 | 3.3827 | 188 (15.67) | 68.12% |
|    | M2     | 12   | 1.0003 | 2.6333 | 20 (1.67)   | 7.25%  |
| 48 | M1     | 22   | 1.0002 | 4.1467 | 339 (28.25) | 60.11% |
|    | M2     | 14   | 1.0006 | 2.4754 | 19 (1.58)   | 3.37%  |
Table 7
Performance for Example 6.4 by increasing N with a fixed n = 24 and β = 1/2.

| N   | Method | Iter | λmin   | λmax   | pnum        | ppnum  |
|-----|--------|------|--------|--------|-------------|--------|
| 4^2 | M1     | 19   | 1.0000 | 3.4726 | 390 (16.25) | 70.65% |
|     | M2     | 14   | 1.0005 | 3.8050 | 47 (1.96)   | 8.51%  |
| 5^2 | M1     | 19   | 1.0000 | 3.4552 | 646 (16.15) | 70.22% |
|     | M2     | 14   | 1.0005 | 2.4679 | 84 (2.10)   | 9.13%  |
| 6^2 | M1     | 20   | 1.0000 | 3.7355 | 948 (15.80) | 68.70% |
|     | M2     | 16   | 1.0006 | 3.0482 | 131 (2.18)  | 9.49%  |
Fig. 6. A geometrically nonconforming partition, N = 18, n = 8 and β = 2.
For a given geometrically nonconforming square partition with $N = 18$ (see Fig. 6), we assume without loss of generality that the space $X_h$ is associated with the $P_1$ Lagrange finite element space; the results for Example 6.3 with increasing $n$ and $\beta = 2$ are shown in Table 8. Finally, in order to investigate the effect of the order $s$ associated with $X_h = \oplus_{i=1}^{N} X^{(i)}$ and $M_h = \oplus_{k=1}^{M} M_{F_k}$ on the condition number of the preconditioned systems, we consider a $4 \times 4$ uniform subdomain partition with matching grids and the three distributions of the order $s$ of the polynomial functions associated with $X^{(i)}$ ($i = 1, \ldots, 16$) shown in Fig. 7, where we specify $M_{F_k}$ by the following rules: let $F_k = \partial\Omega_i \cap \partial\Omega_j$; (1) if $X^{(i)}$ and $X^{(j)}$ are both $P_1$ (or $P_2$) Lagrange finite element spaces, then $M_{F_k}$ is the corresponding standard $P_1$ (or $P_2$) Lagrange multiplier space; (2) if $X^{(i)}$ and $X^{(j)}$ are different spaces, then $M_{F_k}$ is specified as the standard $P_1$ Lagrange multiplier space (e.g. some interfaces in Fig. 7(b)). For ease of description, we denote the preconditioned systems corresponding to Fig. 7(a), (b) and (c) by $\hat G_l$ ($l = 1, 2, 3$), respectively. In Tables 9 and 10, our algorithms are tested separately for constant coefficients and random coefficients with increasing $n$; the condition numbers $\kappa(\hat G_l)$ ($l = 1, 2, 3$) and the tolerance $\Theta := 1 + \log(n)$ are listed. The numerical results show that the $\kappa(\hat G_l)$ ($l = 1, 2, 3$) are all less than $2\Theta$, which coincides with the theoretical results, and that the condition numbers of the preconditioned systems depend only weakly on the order $s$.
Table 8
Performance of the methods with a geometrically nonconforming subdomain partition.

| n  | Method | Iter | λmin   | λmax   | pnum        | ppnum  |
|----|--------|------|--------|--------|-------------|--------|
| 8  | M1     | 16   | 1.0000 | 2.6274 | 131 (3.45)  | 67.53% |
|    | M2     | 15   | 1.0004 | 2.2196 | 49 (1.29)   | 25.26% |
| 16 | M1     | 20   | 1.0000 | 3.4079 | 233 (6.13)  | 54.69% |
|    | M2     | 17   | 1.0004 | 3.1529 | 59 (1.55)   | 13.85% |
| 32 | M1     | 22   | 1.0001 | 4.1848 | 390 (10.26) | 43.82% |
|    | M2     | 19   | 1.0003 | 3.5826 | 60 (1.58)   | 6.74%  |
| 64 | M1     | 25   | 1.0002 | 4.9182 | 698 (18.37) | 38.39% |
|    | M2     | 18   | 1.0006 | 3.3565 | 57 (1.50)   | 3.14%  |
Table 9
The condition number for Example 6.1 by increasing n.

| n  | Method | κ(Ĝ1)  | κ(Ĝ2)  | κ(Ĝ3)  | Θ      |
|----|--------|--------|--------|--------|--------|
| 12 | M1     | 1.3298 | 1.6300 | 1.3828 | 3.4849 |
|    | M2     | 1.2561 | 1.4455 | 1.6091 |        |
| 24 | M1     | 1.5151 | 1.7989 | 1.9311 | 4.1781 |
|    | M2     | 1.4129 | 1.6242 | 1.7992 |        |
| 48 | M1     | 1.7046 | 2.0086 | 2.1000 | 4.8712 |
|    | M2     | 1.5830 | 1.8193 | 2.0488 |        |
Table 10
The condition number for Example 6.3 by increasing n.

| n  | Method | κ(Ĝ1)  | κ(Ĝ2)  | κ(Ĝ3)  | Θ      |
|----|--------|--------|--------|--------|--------|
| 12 | M1     | 3.4763 | 3.3896 | 3.3996 | 3.4849 |
|    | M2     | 3.9410 | 3.9316 | 2.4833 |        |
| 24 | M1     | 4.1397 | 4.1109 | 4.8642 | 4.1781 |
|    | M2     | 3.3890 | 3.3196 | 3.3203 |        |
| 48 | M1     | 4.8194 | 4.8001 | 4.8117 | 4.8712 |
|    | M2     | 3.2739 | 2.8744 | 3.9841 |        |

Fig. 7. Three distributions (a), (b), (c) of the polynomial functions associated with Xh.
Remark 6.1. For $\varepsilon = 0$, we can obtain similar results by using the regularization techniques in [30].

From all the experimental results above, we find that the condition numbers confirm our theoretical estimate. The two methods are both robust for the constant and channel $\rho(x)$, but for highly varying and random coefficients, M2 shows better performance than M1.

7. Conclusions

In this paper, we develop an adaptive BDDC algorithm in variational form for high-order mortar discretizations by introducing some vector-valued auxiliary spaces and operators with essential properties. Since there are no continuity constraints at subdomain vertices in the mortar method involved in this paper, it simplifies the construction of the primal
unknowns. We show that the condition number of the preconditioned system is bounded by a given tolerance, which is used to construct the transformation operators for selecting the coarse basis functions. Numerical results are presented to verify the robustness and efficiency of the proposed approaches. Furthermore, we can extend this method to the three-dimensional (3D) case by designing a special generalized eigenvalue problem for each edge shared by more than two subdomains.

Acknowledgments

The authors would like to thank Dr. Hyea Hyun Kim and Eric T. Chung for their helpful suggestions. This work is supported by the National Natural Science Foundation of China (Grant Nos. 11571293, 11201398, 11601462), the Hunan Provincial Natural Science Foundation of China (Grant No. 2016JJ2129), the General Project of the Hunan Provincial Education Department of China (Grant No. 17C1527), and the Hunan Provincial Civil-Military Integration Industrial Development Project "Adaptive Multilevel Solver and Its Application in ICF Numerical Simulation".

References

[1] C. Bernardi, Y. Maday, A.T. Patera, Domain decomposition by the mortar element method, in: H.G. Kaper, M. Garbey, G.W. Pieper (Eds.), Asymptotic and Numerical Methods for Partial Differential Equations with Critical Parameters, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1993, pp. 269–286.
[2] C. Bernardi, Y. Maday, A.T. Patera, A new nonconforming approach to domain decomposition: The mortar element method, in: H. Brezis, J.-L. Lions (Eds.), Nonlinear Partial Differential Equations and their Applications, Longman Scientific & Technical, Harlow, UK, 1994, pp. 13–51.
[3] Y. Achdou, Y. Maday, O.B. Widlund, Iterative substructuring preconditioners for mortar element methods in two dimensions, SIAM J. Numer. Anal. 36 (1999) 551–580.
[4] B.I. Wohlmuth, A mortar finite element method using dual spaces for the Lagrange multiplier, SIAM J. Numer. Anal. 38 (3) (2000) 989–1012.
[5] B.I. Wohlmuth, Discretization Methods and Iterative Solvers Based on Domain Decomposition, Springer Science & Business Media, 2001, p. 17.
[6] C.R. Dohrmann, A preconditioner for substructuring based on constrained energy minimization, SIAM J. Sci. Comput. 25 (1) (2003) 246–258.
[7] J. Li, O.B. Widlund, FETI-DP, BDDC, and block Cholesky methods, Internat. J. Numer. Methods Engrg. 66 (2006) 250–271.
[8] S.C. Brenner, L.-Y. Sung, BDDC and FETI-DP without matrices or vectors, Comput. Methods Appl. Mech. Engrg. 196 (2007) 1429–1435.
[9] X. Tu, Three-level BDDC in two dimensions, Internat. J. Numer. Methods Engrg. 69 (1) (2007) 33–59.
[10] M. Dryja, J. Galvis, M. Sarkis, BDDC methods for discontinuous Galerkin discretization of elliptic problems, J. Complexity 23 (2007) 715–739.
[11] C. Canuto, L.F. Pavarino, A.B. Pieri, BDDC preconditioners for continuous and discontinuous Galerkin methods using spectral/hp elements with variable local polynomial degree, IMA J. Numer. Anal. 34 (3) (2014) 879–903.
[12] H.H. Kim, M. Dryja, O.B. Widlund, A BDDC method for mortar discretizations using a transformation of basis, SIAM J. Numer. Anal. 47 (1) (2008) 136–157.
[13] H.H. Kim, A BDDC algorithm for mortar discretization of elasticity problems, SIAM J. Numer. Anal. 46 (4) (2008) 2090–2111.
[14] H.H. Kim, X. Tu, A three-level BDDC algorithm for mortar discretizations, SIAM J. Numer. Anal. 47 (2) (2009) 1576–1600.
[15] J. Mandel, B. Sousedík, Adaptive selection of face coarse degrees of freedom in the BDDC and FETI-DP iterative substructuring methods, Comput. Methods Appl. Mech. Engrg. 196 (8) (2007) 1389–1399.
[16] C.R. Dohrmann, C. Pechstein, Constraint and weight selection algorithms for BDDC, talk by Dohrmann at the Domain Decomposition Methods in Science and Engineering XXI conference, Rennes, France, June 2012. http://www.numa.uni-linz.ac.at/clemens/dohrmann-pechstein-dd21-talk.pdf.
[17] C. Pechstein, C.R. Dohrmann, Modern domain decomposition solvers: BDDC, deluxe scaling, and an algebraic approach, talk by Pechstein at RICAM, Linz, Austria, December 2013. http://people.ricam.oeaw.ac.at/c.pechstein/pechstein-bddc2013.pdf.
[18] D.-S. Oh, O.B. Widlund, S. Zampini, C.R. Dohrmann, BDDC algorithms with deluxe scaling and adaptive selection of primal constraints for Raviart–Thomas vector fields, Math. Comp. (2017). http://dx.doi.org/10.1090/mcom/3254 (in press).
[19] H.H. Kim, E.T. Chung, A BDDC algorithm with enriched coarse spaces for two-dimensional elliptic problems with oscillatory and high contrast coefficients, Multiscale Model. Simul. 13 (2) (2015) 571–593.
[20] H.H. Kim, E. Chung, J. Wang, BDDC and FETI-DP preconditioners with adaptive coarse spaces for three-dimensional elliptic problems with oscillatory and high contrast coefficients, J. Comput. Phys. 349 (2017) 191–214.
[21] S. Zampini, PCBDDC: a class of robust dual-primal methods in PETSc, SIAM J. Sci. Comput. 38 (5) (2016) S282–S306.
[22] A. Klawonn, P. Radtke, O. Rheinbach, Adaptive coarse spaces for BDDC with a transformation of basis, in: Domain Decomposition Methods in Science and Engineering XXII, Springer International Publishing, 2016, pp. 301–309.
[23] J.G. Calvo, O.B. Widlund, An adaptive choice of primal constraints for BDDC domain decomposition algorithms, Electron. Trans. Numer. Anal. 45 (2016) 524–544.
[24] C. Pechstein, C.R. Dohrmann, A unified framework for adaptive BDDC, Tech. Report RICAM-Report 2016-20, Johann Radon Institute for Computational and Applied Mathematics (RICAM), Austrian Academy of Sciences, Linz, Austria.
[25] H.H. Kim, E.T. Chung, C. Xu, A BDDC algorithm with adaptive primal constraints for staggered discontinuous Galerkin approximation of elliptic problems with highly oscillating coefficients, J. Comput. Appl. Math. 311 (2017) 599–617.
[26] L.B. Da Veiga, L.F. Pavarino, S. Scacchi, O.B. Widlund, S. Zampini, Adaptive selection of primal constraints for isogeometric BDDC deluxe preconditioners, SIAM J. Sci. Comput. 39 (1) (2017) A281–A302.
[27] S. Zampini, X. Tu, Multilevel balancing domain decomposition by constraints deluxe algorithms with adaptive coarse spaces for flow in porous media, SIAM J. Sci. Comput. 39 (4) (2017) A1389–A1415.
[28] W.N. Anderson Jr., D.J. Duffin, Series and parallel addition of matrices, J. Math. Anal. Appl. 26 (1969) 576–594.
[29] L.B. Da Veiga, C. Chinosi, C. Lovadina, L.F. Pavarino, Robust BDDC preconditioners for Reissner–Mindlin plate bending problems and MITC elements, SIAM J. Numer. Anal. 47 (6) (2010) 4214–4238.
[30] Q. Hu, A regularized domain decomposition method with Lagrange multiplier, Adv. Comput. Math. 26 (2007) 367–401.
[31] D.J. Rixen, C. Farhat, A simple and efficient extension of a class of substructure based preconditioners to heterogeneous structural mechanics problems, Internat. J. Numer. Methods Engrg. 44 (4) (1999) 489–516.