Journal of Computational and Applied Mathematics 271 (2014) 180–194
Parallel subspace correction methods for nearly singular systems

Jinbiao Wu^a, Hui Zheng^b,∗

^a HEDPS, CAPT & LMAM, School of Mathematical Sciences, Peking University, Beijing 100871, China
^b School of Mathematical Sciences, Peking University, Beijing 100871, China
Article history: Received 12 November 2012; received in revised form 2 April 2014.
MSC: 65N55; 65N12; 65F10; 65N30.
Keywords: Parallel subspace correction method; Preconditioned conjugate gradient; Nearly singular system
Abstract

In this paper we consider parallel subspace correction (PSC) methods for nearly singular systems. We apply the PSC methods as preconditioners when we solve the nearly singular systems by the conjugate gradient method. Our focus is to estimate the condition number of the preconditioned systems. We deduce parameter independent estimates on the PSC preconditioners for nearly singular systems, under appropriate assumptions on the subspace decomposition. The main assumption is that the kernel of the singular part of the system can be decomposed into a sum of local kernel subspaces. Our estimates can be applied to actual problems, and two examples are analyzed in this paper. One is the elliptic problem with large jumps in the coefficients; the other is the planar nearly incompressible elasticity problem with the Scott–Vogelius finite element discretization. We prove that the related parallel multilevel methods for both examples converge uniformly with respect to the coefficients and the mesh size.

© 2014 Elsevier B.V. All rights reserved.
1. Introduction

In this paper we consider the parameter dependent problem: find u ∈ V, the unique solution to

Au = (ω1A1 + ω2A2)u = f,    (1)

where V is a finite dimensional Hilbert space with inner product (·,·) and f ∈ V, ω1 and ω2 are two positive parameters, A1 and A2 : V → V are symmetric and positive semidefinite, and A : V → V is symmetric and positive definite. When the ratio ω1/ω2 is very small and A2 is singular, this parameter dependent problem represents a nearly singular system. Problems of the form (1) appear in many kinds of actual problems, such as elasticity problems [1–3], Maxwell equations [4,5], two-phase flows [6,7], and so on. The elliptic problem with strongly discontinuous coefficients [8,9] and the planar nearly incompressible elasticity problem [10], which are considered in this paper, also fall into the category of parameter dependent problems, or nearly singular systems.

Numerous researchers focus on designing efficient algorithms with parallel architectures for solving linear systems, due to their applicability to many scientific and engineering problems. Parallel computing is the dominant mode of computation on multi-core systems and CPU–GPU heterogeneous systems. To take advantage of the power of parallel computers, the design and analysis of parallel algorithms are necessary. The motivation of this paper is to develop efficient parallel algorithms for the parameter dependent problem (1).
∗ Corresponding author. Tel.: +86 13683314850. E-mail addresses: [email protected] (J. Wu), [email protected] (H. Zheng).
http://dx.doi.org/10.1016/j.cam.2014.04.012
Many iterative methods, including multigrid methods and domain decomposition methods, can be classified into two groups, namely successive subspace correction (SSC) and parallel subspace correction (PSC) methods, by using the notions of subspace decomposition and subspace correction [11]. SSC methods have been considered for solving nearly singular systems in [12,13]. Following the general framework of PSC methods developed in [11,14], we apply the PSC methods to nearly singular systems. More precisely, we solve the nearly singular problems by the preconditioned conjugate gradient method, with the PSC methods as preconditioners. Our focus is to estimate the condition number of the preconditioned systems. We deduce parameter independent estimates on the PSC methods for (1), under appropriate assumptions on the space decomposition. The main assumption is that the kernel of the singular part of the system can be decomposed into a sum of local kernel subspaces; see Section 3 for more specific explanations of this assumption.

Our estimates can be applied to actual problems, and in this paper we consider two examples. The BPX-type multilevel preconditioners [15], which can be regarded as an important class of PSC methods, are considered for solving the discretized systems of equations. The first example is the elliptic problem with large jumps in the coefficients, which has been discussed in [8,9,16–18]. We construct a modified BPX preconditioner and prove the uniform convergence of the corresponding preconditioned conjugate gradient method. Within our framework, we can also obtain an estimate on the convergence rate of the conjugate gradient method with the ordinary BPX preconditioner, which improves the related result in [8]. The second example is the planar nearly incompressible elasticity problem with the Scott–Vogelius finite element discretization [10]. We prove the uniform convergence of the conjugate gradient method with the BPX-type preconditioner (which can be regarded as a multilevel additive Schwarz method), with respect to the parameters and the mesh size.

The rest of this paper is organized as follows. In Section 2, we introduce our notation and the preliminary results on parallel subspace correction methods. In Section 3, we give the theory on the parallel subspace correction methods for the parameter dependent problems. In Section 4, we apply our theory to the elliptic problem with large jumps in the coefficients. In Section 5, we consider the BPX-type method for the planar nearly incompressible elasticity problem with the Scott–Vogelius finite element discretization.

2. Preliminary

In this section we first clarify our notation, then introduce the parallel subspace correction (PSC) method and the preconditioned conjugate gradient (PCG) method with the PSC method as the preconditioner. To estimate the convergence rate of the PCG method, we give the related estimates on the condition number of the preconditioned system.

2.1. Notation

Suppose V is a finite dimensional Hilbert space with the inner product (·, ·). A linear operator T : V → V is self-adjoint if (Tu, v) = (u, Tv) for all u, v ∈ V. T is symmetric and positive definite (SPD) if T is self-adjoint and (Tu, u) > 0 for all u ∈ V\{0}. T is symmetric and positive semidefinite (SPSD) if T is self-adjoint and (Tu, u) ≥ 0 for all u ∈ V. When T is SPD, a new inner product (·, ·)T is defined by
(u, v)T = (Tu, v),  ∀u, v ∈ V.
Similarly, when T is SPSD, a semi inner product (·, ·)T is defined by
(u, v)T = (Tu, v),  ∀u, v ∈ V.
For the problem (1), A is SPD, and A1 and A2 are SPSD. We can define the inner product (·, ·)A and the semi-inner products (·, ·)Ai (i = 1, 2), respectively. The corresponding norm ∥·∥A and semi-norms |·|Ai (i = 1, 2) are then defined by

∥u∥A = (u, u)A^{1/2},  ∀u ∈ V,

and

|u|Ai = (u, u)Ai^{1/2},  ∀u ∈ V.
In this paper we also use the standard notation of Sobolev spaces. Let Ω be an open and bounded domain; the L²(Ω) norm, H¹(Ω) norm, and H¹(Ω) semi-norm are denoted by ∥·∥0,Ω, ∥·∥1,Ω, and |·|1,Ω respectively; the L²(Ω) inner product, H¹(Ω) semi-inner product, and H²(Ω) semi-inner product are denoted by (·,·)0,Ω, (·,·)1,Ω, and (·,·)2,Ω respectively. Additionally, we write x ≲ y if there exists a generic positive constant C1, independent of any parameters related to the mesh, the spaces and especially the coefficients, such that x ≤ C1 y. Similarly, we write x ≳ y if there exists a generic positive constant C2, independent of the same parameters, such that x ≥ C2 y. We write x ≃ y if x ≲ y and x ≳ y.

2.2. Parallel subspace correction methods and preconditioned conjugate gradient methods

Following the framework of subspace decomposition and subspace correction developed in [11], we assume that V is decomposed into a sum of subspaces {Wi}_{i=1}^J such that V = Σ_{i=1}^J Wi. For any 1 ≤ i ≤ J, we define the projection Qi : V → Wi by

(Qi v, wi) = (v, wi),  ∀v ∈ V, wi ∈ Wi.
The subspace operator Ai : Wi → Wi is defined by

(Ai vi, wi) = (Avi, wi),  ∀vi, wi ∈ Wi.
We can regard Ai as the restriction of A to Wi.

Assume that we have a linear subspace operator Ri : Wi → Wi for each 1 ≤ i ≤ J. Ri is called the subspace solver and can be regarded as an approximation of Ai^{-1}. When Ri = Ai^{-1}, we say the subspace solver is exact. The PSC algorithm as a solver for the problem Au = f is given as follows.

Algorithm 2.1. Let the initial guess u0 ∈ V be given.
  for l = 1, 2, . . .
    r = f − Au^{l−1}
    for i = 1, . . . , J
      ei = Ri Qi r
    endfor
    u^l = u^{l−1} + Σ_{i=1}^J ei
  endfor

In a certain sense, ei = Ri Qi r approximately solves the subspace variational problem: find vi ∈ Wi such that

(vi, wi)A = (r, wi),  ∀wi ∈ Wi.    (2)

If Ri = Ai^{-1}, then ei = Ai^{-1} Qi r is the unique solution satisfying

(ei, wi)A = (r, wi),  ∀wi ∈ Wi.

Algorithm 2.1 can be written as u^l = u^{l−1} + B(f − Au^{l−1}), where

B = Σ_{i=1}^J Ri Qi.
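For concreteness, the following is a minimal NumPy sketch of Algorithm 2.1 in matrix form. It is our own illustration, not part of the abstract setting above: we assume each subspace Wi is spanned by the columns of a matrix Ps[i], and we use exact subspace solvers Ri = Ai^{-1}.

```python
import numpy as np

def psc_solve(A, f, u0, Ps, num_iters=100):
    """Algorithm 2.1 with exact subspace solvers R_i = A_i^{-1}.

    A  : SPD matrix representing the operator A
    Ps : list of matrices; the columns of Ps[i] span the subspace W_i
    """
    u = u0.copy()
    for _ in range(num_iters):
        r = f - A @ u                 # global residual, computed once
        e = np.zeros_like(u)
        for P in Ps:                  # the J corrections only read r,
            Ai = P.T @ A @ P          # so this loop can run in parallel
            e += P @ np.linalg.solve(Ai, P.T @ r)   # e_i = A_i^{-1} Q_i r
        u = u + e
    return u
```

With the coordinate decomposition of R^n discussed next, one sweep of this loop is exactly a Jacobi iteration.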
Notice that B is SPD if each Ri is SPD; we always assume each Ri is SPD in this paper.

Algorithm 2.1 represents many actual algorithms. For example, when V = R^n with the subspace decomposition

R^n = Σ_{i=1}^n span{ti},

where ti is the ith column of the identity matrix, and the exact subspace solvers Ri = Ai^{-1} are used, Algorithm 2.1 is the Jacobi iterative method. We know that the Jacobi iterative method is not convergent for some SPD problems; hence Algorithm 2.1 is not practical as a solver for general SPD problems. Note that both B and A are SPD with respect to the inner product (·, ·); therefore BA is SPD with respect to the inner product (·, ·)A or (·, ·)B^{-1}. We can thus regard B as a preconditioner, and we have the following PCG algorithm.
Algorithm 2.2 (PCG). Apply the conjugate gradient method to the preconditioned system BAu = Bf, with respect to the inner product (·, ·)B^{-1}.

Let uk be the kth iterate of Algorithm 2.2; then we have the well-known estimate

∥uk − u∥A ≤ 2 ( (√κ(BA) − 1) / (√κ(BA) + 1) )^k ∥u0 − u∥A,

where κ(BA) is the spectral condition number of BA. Now our aim is to estimate κ(BA), and we have κ(BA) = λmax(BA)(λmin(BA))^{-1}. Consequently we only need to estimate (λmin(BA))^{-1} and λmax(BA) respectively. For simplicity we introduce the notation

Λ1 = (λmin(BA))^{-1},  Λ2 = λmax(BA).
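In practice, Algorithm 2.2 can be realized with any standard PCG routine by supplying the action of B as the preconditioner. Below is a minimal sketch with SciPy; the matrices Ps and the exact subspace solves are the same illustrative assumptions as in the previous sketch.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def pcg_solve(A, f, Ps):
    """Algorithm 2.2: CG on BAu = Bf, with the PSC preconditioner B
    supplied to SciPy's cg through the M argument."""
    n = A.shape[0]

    def apply_B(r):
        # B r = sum_i P_i (P_i^T A P_i)^{-1} P_i^T r  (exact subspace solves)
        return sum(P @ np.linalg.solve(P.T @ A @ P, P.T @ r) for P in Ps)

    M = LinearOperator((n, n), matvec=apply_B)
    u, info = cg(A, f, M=M)           # info == 0 signals convergence
    return u
```

The estimate above shows why this pays off: the iteration count is governed by κ(BA) rather than κ(A).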
2.3. Estimates on the condition number

In this subsection we give the preliminary estimates on Λ1 and Λ2. The following identity on B^{-1} (see [11,14,19]) is important for our estimate.

Lemma 2.1.

(B^{-1}v, v) = inf_{vi∈Wi, Σ_{i=1}^J vi=v} Σ_{i=1}^J (Ri^{-1}vi, vi).    (3)

Proof. By setting vi = ui + wi with ui = Ri Qi B^{-1}v, we get that

inf_{vi∈Wi, Σ_{i=1}^J vi=v} Σ_{i=1}^J (Ri^{-1}vi, vi) = inf_{wi∈Wi, Σ_{i=1}^J wi=0} Σ_{i=1}^J (Ri^{-1}(ui + wi), ui + wi)
  = (B^{-1}v, v) + inf_{wi∈Wi, Σ_{i=1}^J wi=0} ( 2 Σ_{i=1}^J (B^{-1}v, wi) + Σ_{i=1}^J (Ri^{-1}wi, wi) )
  = (B^{-1}v, v),

since Σ_{i=1}^J (B^{-1}v, wi) = (B^{-1}v, Σ_{i=1}^J wi) = 0 and each (Ri^{-1}wi, wi) ≥ 0, so the infimum is attained at wi = 0. □

First we assume that Ri^{-1} = Ai (1 ≤ i ≤ J), namely the exact subspace solvers are applied. With this assumption, we have

(B^{-1}v, v) = inf_{vi∈Wi, Σ_{i=1}^J vi=v} Σ_{i=1}^J (vi, vi)A.    (4)
Lemma 2.2.

Λ1 = sup_{v∈V} ( inf_{vi∈Wi, Σ_{i=1}^J vi=v} Σ_{i=1}^J (vi, vi)A ) / (v, v)A,    (5)

Λ2 = sup_{vi∈Wi} ( Σ_{i=1}^J vi, Σ_{i=1}^J vi )A / ( Σ_{i=1}^J (vi, vi)A ).    (6)

Proof. For Λ1 = (λmin(BA))^{-1}, we have

(λmin(BA))^{-1} = λmax(A^{-1}B^{-1}) = sup_{v∈V} (A^{-1}B^{-1}v, v)A / (v, v)A = sup_{v∈V} (B^{-1}v, v) / (v, v)A.

(5) follows by (4). Similarly, we obtain for Λ2 = λmax(BA),

λmax(BA) = sup_{v∈V} (BAv, v)B^{-1} / (v, v)B^{-1} = sup_{v∈V} (v, v)A / ( inf_{vi∈Wi, Σ_{i=1}^J vi=v} Σ_{i=1}^J (vi, vi)A )
  = sup_{v∈V} sup_{vi∈Wi, Σ_{i=1}^J vi=v} (v, v)A / Σ_{i=1}^J (vi, vi)A = sup_{vi∈Wi} ( Σ_{i=1}^J vi, Σ_{i=1}^J vi )A / Σ_{i=1}^J (vi, vi)A. □
For the general case where the subspace solvers Ri are not exact, we impose the assumption that

β(vi, vi)A ≤ (Ri^{-1}vi, vi) ≤ α(vi, vi)A,  ∀vi ∈ Wi,    (7)
where α and β are two positive constants. Similar to (5) and (6), we have

Λ1 ≤ α sup_{v∈V} ( inf_{vi∈Wi, Σ_{i=1}^J vi=v} Σ_{i=1}^J (vi, vi)A ) / (v, v)A

and

Λ2 ≤ (1/β) sup_{vi∈Wi} ( Σ_{i=1}^J vi, Σ_{i=1}^J vi )A / ( Σ_{i=1}^J (vi, vi)A ).

Since α and β are constants, we again only need to estimate the right hand sides of (5) and (6).

In what follows, we always assume that Ri = Ai^{-1}, namely the exact subspace solvers are applied. With this assumption, the preconditioner B can be defined by the following algorithm. Notice that here we still assume that V is decomposed into a sum of subspaces {Wi}_{i=1}^J such that

V = Σ_{i=1}^J Wi.    (8)
Algorithm 2.3 (PSC). For any given f ∈ V,
  for i = 1, . . . , J
    ei ∈ Wi is the unique solution to (ei, wi)A = (f, wi), ∀wi ∈ Wi
  endfor
  Bf = Σ_{i=1}^J ei.
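A direct transcription of Algorithm 2.3, with a small sanity check (our own illustration): for the coordinate decomposition of R^n from Section 2.2, B reduces to the Jacobi preconditioner diag(A)^{-1}.

```python
import numpy as np

def psc_preconditioner(A, f, Ps):
    """Algorithm 2.3: Bf = sum_i e_i, where e_i solves the subspace
    problem (e_i, w)_A = (f, w) for all w in W_i (exact solves)."""
    return sum(P @ np.linalg.solve(P.T @ A @ P, P.T @ f) for P in Ps)

# Sanity check: the nodal decomposition of R^n gives the Jacobi preconditioner.
n = 5
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD tridiagonal
Ps = [np.eye(n)[:, [i]] for i in range(n)]
f = np.arange(1.0, n + 1.0)
assert np.allclose(psc_preconditioner(A, f, Ps), f / np.diag(A))
```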
We observe that the preconditioner B is determined by the space decomposition (8).

3. Main assumptions and theorems

In this section we apply Algorithm 2.2, with the preconditioner B defined by Algorithm 2.3, to the parameter dependent problem (1), and deduce the related estimates. First we introduce the null space of the operator A2:
N = {u ∈ V | (u, v)A2 = 0, ∀v ∈ V}.

In general, the space N is defined as the kernel of a singular operator, A1 or A2. If both A1 and A2 are singular, N is defined as the kernel of the (relatively) dominant operator, for example, the operator discretized from higher order derivatives or the operator with the larger parameter. Without loss of generality, we always assume that the space N is defined as the kernel of A2. We then define the A-orthogonal complement of N:
N⊥ = {u ∈ V | (u, v)A = 0, ∀v ∈ N} = {u ∈ V | (u, v)A1 = 0, ∀v ∈ N}.

Next we define the local null space Ni and its orthogonal complement Ni⊥ (⊂ Wi):

Ni = {ui ∈ Wi | (ui, vi)A2 = 0, ∀vi ∈ Wi} = N ∩ Wi,
Ni⊥ = {ui ∈ Wi | (ui, vi)A = 0, ∀vi ∈ Ni}.

We continue to estimate Λ1 and Λ2 by (5) and (6). First we estimate Λ1 = (λmin(BA))^{-1}. To obtain the parameter independent estimate, we make the following assumption (A) on the space decomposition (8), which is similar to that in [12,13]. The assumption is that any c ∈ N can be decomposed into a sum of terms in the Ni, namely

(A) N = Σ_{i=1}^J (N ∩ Wi) = Σ_{i=1}^J Ni.

We provide an example to illustrate why we impose the assumption (A). Suppose that A = ω1 I + A2 and ω1 ≪ 1; then we have

A^{-1} = (ω1 I + A2)^{-1} = A2† + ω1^{-1} IN − ω1 (A2†)² (I + ω1 A2†)^{-1},    (9)

where IN denotes the orthogonal projection from V onto N and A2† denotes the Moore–Penrose generalized inverse [20,21] of A2, which satisfies A2† A2 = A2 A2† = I − IN (noting that A2 is SPSD). (9) indicates that an appropriate preconditioner of A should be spectrally equivalent to A2† on the range of A2, and spectrally equivalent to ω1^{-1} I on the null space of A2. So N should be specially handled, explicitly or implicitly; for example, some Wi may be subspaces of N, or some Wi may be ''large'' enough to include some functions in N.
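The identity (9) is easy to confirm numerically. The sketch below is our own check: it builds a random SPSD A2 with a two-dimensional kernel and verifies (9) using NumPy's Moore–Penrose pseudoinverse.

```python
import numpy as np

rng = np.random.default_rng(0)
n, omega1 = 6, 1e-3
M = rng.standard_normal((n, n - 2))
A2 = M @ M.T                                 # SPSD, dim ker(A2) = 2
A = omega1 * np.eye(n) + A2

A2_dag = np.linalg.pinv(A2)                  # Moore-Penrose inverse of A2
I_N = np.eye(n) - A2_dag @ A2                # orthogonal projection onto N
rhs = (A2_dag + I_N / omega1
       - omega1 * A2_dag @ A2_dag @ np.linalg.inv(np.eye(n) + omega1 * A2_dag))
assert np.allclose(np.linalg.inv(A), rhs, rtol=1e-8, atol=1e-8)  # identity (9)
```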
Table 1
Comparison of κ(B1A) and κ(B2A) with different ω1.

ω1        1      10⁻¹   10⁻²       10⁻³       10⁻⁴       10⁻⁵
κ(B1A)    3.00   21.0   2.01×10²   2.00×10³   2.00×10⁴   2.00×10⁵
κ(B2A)    1.00   1.75   1.97       2.00       2.00       2.00
We now provide a simple numerical experiment to compare the condition numbers of the preconditioned systems when (A) is valid or not. Let A = ω1A1 + A2, where

A1 = I = [ 1 0; 0 1 ],  A2 = [ 1 −1; −1 1 ],

V = R², and N = span{(1, 1)ᵗ}. We consider two subspace decompositions:

V = R² = span{(1, 0)ᵗ} + span{(0, 1)ᵗ} = W1 + W2    (10)

and

V = R² = span{(1, 0)ᵗ} + span{(0, 1)ᵗ} + span{(1, 1)ᵗ} = W1 + W2 + W3.    (11)

In fact, Algorithm 2.3 with the decomposition (10) corresponds to the Jacobi iteration. We have

N1 + N2 = (N ∩ W1) + (N ∩ W2) = {0} + {0} = {0} ≠ N

and

N1 + N2 + N3 = {0} + {0} + N = N.

Hence the decomposition (10) does not satisfy (A), while the decomposition (11) does. We denote the preconditioners corresponding to (10) and (11) by B1 and B2 respectively, which are given explicitly as follows:
B1 = 1/(ω1 + 1) [ 1 0; 0 1 ],

B2 = [ 1/(ω1 + 1) + 1/(2ω1)    1/(2ω1)
       1/(2ω1)                 1/(ω1 + 1) + 1/(2ω1) ].

The results on κ(B1A) and κ(B2A) with different ω1 are shown in Table 1.
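The entries of Table 1 can be reproduced with a few lines of NumPy (our own check of this example):

```python
import numpy as np

A1 = np.eye(2)
A2 = np.array([[1.0, -1.0], [-1.0, 1.0]])
for omega1 in [1.0, 1e-1, 1e-2, 1e-3, 1e-4, 1e-5]:
    A = omega1 * A1 + A2
    B1 = np.eye(2) / (omega1 + 1.0)               # preconditioner from (10)
    B2 = B1 + np.ones((2, 2)) / (2.0 * omega1)    # preconditioner from (11)
    kappas = []
    for B in (B1, B2):
        eigs = np.linalg.eigvals(B @ A).real      # BA is SPD in (., .)_A
        kappas.append(eigs.max() / eigs.min())
    print(f"omega1 = {omega1:.0e}: kappa(B1 A) = {kappas[0]:.2f}, "
          f"kappa(B2 A) = {kappas[1]:.2f}")
```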
Through the above example, we can see the importance of (A) for the condition number of the preconditioned systems. We now present our main estimate on Λ1.

Theorem 3.1. Assume (A) holds, and S ⊂ V satisfies V = N + S stably: for any v ∈ V, there exist c ∈ N and w ∈ S such that v = c + w and

(c, c)A + (w, w)A ≲ (v, v)A.    (12)
Then we have that

Λ1 ≲ sup_{w∈S} ( inf_{wi∈Wi, Σ_{i=1}^J wi=w} Σ_{i=1}^J (ω1(wi, wi)A1 + ω2(wi, wi)A2) ) / ( ω1(w, w)A1 + ω2(w, w)A2 )
   + sup_{c∈N} ( inf_{ci∈Ni, Σ_{i=1}^J ci=c} Σ_{i=1}^J (ci, ci)A1 ) / (c, c)A1
   = I + II.    (13)
Proof. For any given v ∈ V, we have w ∈ S and c ∈ N such that v = c + w and (c, c)A + (w, w)A ≲ (v, v)A. Then we obtain

inf_{vi∈Wi, Σ_{i=1}^J vi=v} Σ_{i=1}^J (vi, vi)A / (v, v)A
  ≤ inf_{wi∈Wi, Σ wi=w; ci∈Ni, Σ ci=c} Σ_{i=1}^J (wi + ci, wi + ci)A / (v, v)A
  ≲ inf_{wi∈Wi, Σ wi=w} Σ_{i=1}^J (wi, wi)A / (w, w)A + inf_{ci∈Ni, Σ ci=c} Σ_{i=1}^J (ci, ci)A / (c, c)A,

where we apply (c, c)A + (w, w)A ≲ (v, v)A in the last inequality. Since (c, c)A = ω1(c, c)A1 and (ci, ci)A = ω1(ci, ci)A1 for c ∈ N and ci ∈ Ni, (13) follows. □

The above estimate is similar to that for the SSC method in [12]. If S = N⊥, (12) is satisfied automatically. (12) allows us to split Λ1 into two terms according to N and S respectively, and (A) allows us to express the term according to N using only N and the local kernel subspaces Ni.
Next we estimate Λ2 = λmax(BA). Here we consider each local kernel subspace Ni and its orthogonal complement Ni⊥. We define

µ1 = sup_{wi∈Ni⊥} ( Σ_{i=1}^J wi, Σ_{i=1}^J wi )A / ( Σ_{i=1}^J (wi, wi)A )    (14)

and

µ2 = sup_{ci∈Ni} ( Σ_{i=1}^J ci, Σ_{i=1}^J ci )A / ( Σ_{i=1}^J (ci, ci)A );    (15)

then we have
Theorem 3.2. Λ2 ≤ 2 max{µ1, µ2}.

Proof. For any vi ∈ Wi (1 ≤ i ≤ J), we write vi = wi + ci with wi ∈ Ni⊥ and ci ∈ Ni; then

( Σ_{i=1}^J vi, Σ_{i=1}^J vi )A ≤ 2 ( Σ_{i=1}^J wi, Σ_{i=1}^J wi )A + 2 ( Σ_{i=1}^J ci, Σ_{i=1}^J ci )A
  ≤ 2µ1 Σ_{i=1}^J (wi, wi)A + 2µ2 Σ_{i=1}^J (ci, ci)A
  ≤ 2 max{µ1, µ2} Σ_{i=1}^J (wi + ci, wi + ci)A
  = 2 max{µ1, µ2} Σ_{i=1}^J (vi, vi)A,

where the third step uses the A-orthogonality (wi, ci)A = 0. By (6), Λ2 ≤ 2 max{µ1, µ2}. □

(14) and (15) are similar to (4.3) in [11]. We use (14) and (15) instead of the original estimate (6) because from them it is easy to obtain estimates independent of the parameters ω1 and ω2: µ2 in (15) is obviously independent of ω1 and ω2, and for the planar nearly incompressible elasticity problem in Section 5 it is not difficult to prove that µ1 in (14) is also independent of ω1 and ω2, because of the local inf–sup condition (36).

4. BPX-type preconditioners for jump coefficient problems

In this section we consider how to apply our framework to elliptic problems with large jumps in the coefficients. We consider the following second order elliptic problem with piecewise constant coefficients:
−∇·(ω∇u) = f  in Ω,
u = 0  on ∂Ω,    (16)

where Ω ⊂ R² is a polygonal domain or Ω ⊂ R³ is a polyhedral domain. Suppose Ω1 and Ω2 are two subdomains of Ω satisfying

Ω1 ∩ Ω2 = ∅,  Ω1 ∪ Ω2 = Ω.

Moreover, we suppose ω = ωi in Ωi, where ω1 and ω2 are two positive constants. Thus (16) represents a jump coefficient problem with two subdomains.

Now we define the finite element spaces and multigrid settings. Assume that we have a nested sequence of quasi-uniform triangulations Tk = {τk^i} (1 ≤ k ≤ L) such that each τk^i ∈ Tk satisfies τk^i ⊂ Ω1 or τk^i ⊂ Ω2. The mesh size of the triangulation Tk is denoted by hk, and hk ≃ γ^k for k = 1, . . . , L, where γ ∈ (0, 1) is a constant independent of k. The continuous piecewise linear finite element space associated with the triangulation Tk is denoted by Vk. Then we obtain nested finite element spaces V1 ⊂ · · · ⊂ Vk ⊂ · · · ⊂ VL.
Fig. 1. Two subdomains with a boundary cross point.
V is the finite element space on the finest grid, namely V = VL. The linear finite element approximation of (16) reads: find u ∈ V such that

(u, v)A = ω1(u, v)A1 + ω2(u, v)A2 = (f, v)0,Ω,  ∀v ∈ V,    (17)

where

(u, v)Ai = ∫_{Ωi} ∇u · ∇v dx,  i = 1, 2.

We assume ω1 ≤ ω2, and define the kernel space N as the null space of A2.

We consider the preconditioned conjugate gradient method with BPX-type preconditioners for solving (17). We construct a modified BPX preconditioner and prove that the preconditioned conjugate gradient method with this preconditioner is uniformly convergent with respect to the parameters and the mesh size. In [8], the convergence rate of the conjugate gradient method with the ordinary BPX preconditioner is proved to be bounded by 1 − C/(|log h| + 1) by means of the weighted L² projection, where h is the mesh size and C is independent of the parameters and the mesh size. The techniques developed in this paper and [9] rely on the discrete harmonic extension and the L² projection without weight. We can prove that the convergence rate of the conjugate gradient method with the ordinary BPX preconditioner is bounded by 1 − C, where C is independent of the parameters and the mesh size.

We present some preliminary results on the problem [9]. First we introduce two spaces,

V^b = {v ∈ V | supp(v) ⊂ Ω1},  G = { v|Ω2 : v ∈ V }.

Then let us define the discrete harmonic extension. For any given g ∈ G, the function v ∈ V is called the discrete harmonic extension of g if v|Ω2 = g and v satisfies

(v, φ)A1 = 0,  ∀φ ∈ V^b.    (18)
It is not difficult to see that the above definition is well posed. In what follows, we denote by Eg the discrete harmonic extension of a given function g ∈ G. By the discrete harmonic extension, we define a subspace of V by

M = { Eg | g ∈ G, ∫_{Ω2} g dx = 0 },

and we can prove the following equivalence between |·|A2 and ∥·∥1,Ω on M [9].

Lemma 4.1. |·|A2 is equivalent to ∥·∥1,Ω on M.

The analysis on M relies heavily on this norm equivalence.

The configuration of Ω1 and Ω2 may vary, and we only give a typical case to demonstrate how to estimate Λ1 and Λ2 for the PSC preconditioner defined by Algorithm 2.3. We suppose that the boundary of Ω intersects the boundary of Ω2 at exactly one point, as shown in Fig. 1. Assume that {p0} = ∂Ω ∩ ∂Ω2, where p0 is a node of the coarsest triangulation T1 and thus a node of the finest triangulation TL. We now define a function ṽ ∈ V = VL by ṽ = Eg, where g ∈ G and, when Ω is three dimensional,

g(x) = 0 if x = p0,  g(x) = 1 if x is a node of TL in Ω2 and x ≠ p0,    (19)

while, when Ω is two dimensional,

g(x) = log(dist(p0, x) + h) − log h  if x is a node of TL in Ω2.    (20)

Functions such as ṽ are also called ''singular functions'', because from our viewpoint they cause the deterioration of the convergence rate.
Then we define Ṽ = span{ṽ}, and split V into

V = N + Ṽ + M.    (21)

By Lemma 4.2 in [9], this space splitting is stable; namely, for any v ∈ V, there exist c ∈ N = V^b, ṽ ∈ Ṽ and w ∈ M such that v = c + ṽ + w and

(c, c)A + (ṽ, ṽ)A + (w, w)A ≲ (v, v)A.    (22)

Similar to Theorem 3.1, we can deduce the following estimate on Λ1 for the PSC preconditioner defined by Algorithm 2.3.
Theorem 4.1. Assume that (A) and (22) hold. Then we have that

Λ1 ≲ sup_{w∈M} ( inf_{wi∈Wi, Σ_{i=1}^J wi=w} Σ_{i=1}^J (ω1(wi, wi)A1 + ω2(wi, wi)A2) ) / ( ω1(w, w)A1 + ω2(w, w)A2 )
   + sup_{ṽ∈Ṽ} ( inf_{ṽi∈Wi, Σ_{i=1}^J ṽi=ṽ} Σ_{i=1}^J (ω1(ṽi, ṽi)A1 + ω2(ṽi, ṽi)A2) ) / ( ω1(ṽ, ṽ)A1 + ω2(ṽ, ṽ)A2 )
   + sup_{c∈N} ( inf_{ci∈Ni, Σ_{i=1}^J ci=c} Σ_{i=1}^J (ci, ci)A1 ) / (c, c)A1
   = I + II + III.

Now we give the BPX-type preconditioners for the problem (17). Define the nodal basis functions φk^i, where φk^i equals 1 at the ith inner node and 0 at the other nodes of the triangulation Tk. Then we have the following decomposition of the finite element space Vk:
Vk = Σ_{i=1}^{nk} Vk^i,    (23)

where nk is the number of inner nodes of Tk and Vk^i = span{φk^i}. The ordinary BPX preconditioner is then defined by Algorithm 2.3 with the space decomposition

V = Σ_{k=1}^L Σ_{i=1}^{nk} Vk^i.    (24)
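In matrix form, the exact subspace solve on each one-dimensional space Vk^i costs a single division by a diagonal entry of the level-k stiffness matrix, so the action of the ordinary BPX preconditioner (24) can be sketched as follows. The prolongation matrices P_k from level k to the finest level are an assumption of this illustration.

```python
import numpy as np

def bpx_apply(f, A, Ps):
    """Ordinary BPX: B f = sum_k P_k diag(A_k)^{-1} P_k^T f, i.e.
    Algorithm 2.3 applied to the one-dimensional spaces V_k^i in (24).

    A  : stiffness matrix on the finest level
    Ps : prolongation matrices P_1, ..., P_L to the finest level
         (P_L is the identity); A_k = P_k^T A P_k is the level-k
         Galerkin stiffness matrix.
    """
    Bf = np.zeros_like(f)
    for P in Ps:
        diag_Ak = np.einsum('ij,jk,ki->i', P.T, A, P)   # diag(P^T A P)
        Bf += P @ (P.T @ f / diag_Ak)   # exact solves on span{phi_k^i}
    return Bf
```

The modified preconditioner of (25), introduced next, simply adds the single extra term ((f, ṽ)/(ṽ, ṽ)A) ṽ to Bf.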
Induced by Theorem 4.1, we construct a modified BPX preconditioner defined by Algorithm 2.3 with the space decomposition

V = Ṽ + Σ_{k=1}^L Σ_{i=1}^{nk} Vk^i.    (25)
Compared with the ordinary BPX preconditioner, the modified BPX preconditioner only needs to perform one additional subspace correction on Ṽ. According to Algorithm 2.3, if Ṽ = span{ṽ}, we only need to add one more term tṽ, where t ∈ R satisfies t(ṽ, ṽ)A = (f, ṽ).

Now we apply Theorem 4.1 to the modified BPX preconditioner (the PSC preconditioner defined by Algorithm 2.3 with the decomposition (25)). It is not difficult to check the assumption (A). Obviously we have II ≤ 1 for the decomposition (25). Next we consider the term I. For any w ∈ M, we decompose w into

w = 0 + Σ_{k=1}^L vk = 0 + Σ_{k=1}^L Σ_{i=1}^{nk} vk^i,
where vk ∈ Vk, vk^i ∈ Vk^i, and

vk = Σ_{i=1}^{nk} vk^i = (QVk − QVk−1)w,  1 ≤ k ≤ L,

where QVk is the L² projection onto Vk and QV0 = 0. We can get

Σ_{i=1}^{nk} (vk^i, vk^i)A1 ≲ |QVk w − QVk−1 w|²1,Ω.    (26)

The proof of (26) is similar to that in Section 5 of [13]. We note that the main difference is that here we apply the H₀¹(Ω)-norm on Ω instead of the A1-norm to QVk w − QVk−1 w, because the Q-projection is the global projection on the whole domain Ω.
By the estimate on QVk (see [11,14]), we get

Σ_{k=1}^L |QVk w − QVk−1 w|²1,Ω ≲ |w|²1,Ω;    (27)

then by (26) and (27), we have

Σ_{k=1}^L Σ_{i=1}^{nk} (vk^i, vk^i)A1 ≲ Σ_{k=1}^L |QVk w − QVk−1 w|²1,Ω ≲ |w|²1,Ω.    (28)

The above result concerns A1 and Ω1; a similar conclusion holds for A2 and Ω2, i.e., we have

Σ_{k=1}^L Σ_{i=1}^{nk} (vk^i, vk^i)A2 ≲ |w|²1,Ω.

By ω1 ≤ ω2 and Lemma 4.1, we have

I ≲ ( ω1|w|²1,Ω + ω2|w|²1,Ω ) / ( ω1(w, w)A1 + ω2(w, w)A2 ) ≲ |w|²1,Ω / (w, w)A2 ≲ 1.    (29)
Finally we consider the term III. For any c ∈ N, we decompose c into

c = 0 + Σ_{k=1}^L ck = 0 + Σ_{k=1}^L Σ_{i=1}^{nk} ck^i,  ck ∈ N ∩ Vk, ck^i ∈ N ∩ Vk^i,

where

c1 = Σ_{i=1}^{n1} c1^i = QV1∩V^b c,  ck = Σ_{i=1}^{nk} ck^i = (QVk∩V^b − QVk−1∩V^b)c,  k ≥ 2,

and QVk∩V^b is the L² projection onto Vk ∩ V^b. Restricting the problem to Ω1, similarly to (28), we get that

Σ_{k=1}^L Σ_{i=1}^{nk} (ck^i, ck^i)A1 ≲ (c, c)A1.

By the above inequality, we obtain

III ≲ ( Σ_{k=1}^L Σ_{i=1}^{nk} (ck^i, ck^i)A1 ) / (c, c)A1 ≲ 1.

The conclusion Λ1 ≲ 1 follows from I ≲ 1, II ≤ 1 and III ≲ 1.

We then consider the estimate on Λ2 for the modified BPX preconditioner by (6). For any ṽ ∈ Ṽ and vk^i ∈ Vk^i, we want to prove
( ṽ + Σ_{k=1}^L Σ_{i=1}^{nk} vk^i, ṽ + Σ_{k=1}^L Σ_{i=1}^{nk} vk^i )A ≲ (ṽ, ṽ)A + Σ_{k=1}^L Σ_{i=1}^{nk} (vk^i, vk^i)A.    (30)

It is sufficient to prove

( Σ_{k=1}^L Σ_{i=1}^{nk} vk^i, Σ_{k=1}^L Σ_{i=1}^{nk} vk^i )A ≲ Σ_{k=1}^L Σ_{i=1}^{nk} (vk^i, vk^i)A.    (31)
When k ≤ l, by the strengthened Cauchy–Schwarz inequality [14], we have

∫_{Ω1} ∇vk^i · ∇vl^j dx ≲ γ^{l−k} hl^{-1} ∥∇vk^i∥0,Ω1 ∥vl^j∥0,Ω1,

and because vl^j is a basis function on the triangulation Tl with mesh size hl, the inverse inequality

hl^{-1} ∥vl^j∥0,Ω1 ≲ ∥∇vl^j∥0,Ω1

holds; then we have

∫_{Ω1} ∇vk^i · ∇vl^j dx ≲ γ^{l−k} ∥∇vk^i∥0,Ω1 ∥∇vl^j∥0,Ω1;
namely,

(vk^i, vl^j)A1 ≲ γ^{l−k} (vk^i, vk^i)A1^{1/2} (vl^j, vl^j)A1^{1/2},

and consequently

(vk^i, vl^j)A1 ≲ γ^{l−k} ( (vk^i, vk^i)A1 + (vl^j, vl^j)A1 ).

Similarly we have

(vk^i, vl^j)A2 ≲ γ^{l−k} ( (vk^i, vk^i)A2 + (vl^j, vl^j)A2 ),

and then

(vk^i, vl^j)A ≲ γ^{l−k} ( (vk^i, vk^i)A + (vl^j, vl^j)A ).

By the above inequality, we have

( Σ_{k=1}^L Σ_{i=1}^{nk} vk^i, Σ_{k=1}^L Σ_{i=1}^{nk} vk^i )A ≲ Σ_{k=1}^L ( Sk(γ) Σ_{i=1}^{nk} (vk^i, vk^i)A ),

where

Sk(γ) = Σ_{l=1}^k γ^{k−l} + Σ_{l=k+1}^L γ^{l−k} ≤ 2/(1 − γ) ≲ 1.
We thus prove (31), and get Λ2 ≲ 1.

Under other configurations of Ω1 and Ω2, similar conclusions can be achieved. Furthermore, this conclusion can be extended to the general case with more than two subdomains. More details can be found in [9]; adding some special one-dimensional spaces like Ṽ (no more than the number of subdomains) to the space decomposition is important for our analysis.

We note that Algorithm 2.3 with the decomposition V = Σ_{k=1}^L Σ_{i=1}^{nk} Vk^i is exactly the ordinary BPX method. Adding some one-dimensional spaces like Ṽ to the decomposition V = Σ_{k=1}^L Σ_{i=1}^{nk} Vk^i, we get the modified BPX preconditioner accordingly. The computing cost of this modification is small. We can prove the uniform convergence of the conjugate gradient method with the modified BPX preconditioner.

We also consider the conjugate gradient method with the ordinary BPX preconditioner. From the above analysis, we can still obtain Λ2 ≲ 1. By restricting BA to the subspace that does not contain the spaces like Ṽ and by the Courant–Fischer min–max theorem, we can conclude that BA has at most n small eigenvalues which cannot be uniformly bounded below, where n is the number of subdomains. Therefore, we get the convergence estimate of the conjugate gradient method with the ordinary BPX preconditioner [8,9]:
∥u − uk∥A ≤ 2 ( Π_{i=1}^n (λmax/λi − 1) ) ( (√(λmax/λn+1) − 1) / (√(λmax/λn+1) + 1) )^{k−n} ∥u − u0∥A,    (32)

where uk is the kth iterate and λ1 ≤ · · · ≤ λn ≤ λn+1 are the smallest n + 1 eigenvalues of BA.

We present a numerical example to support our estimates. The example, which has been computed in [8,9,16], is to find u ∈ V such that

Σ_{i=1}^3 ∫_{Ωi} ωi ∇u · ∇v dx = (f, v)0,Ω,  ∀v ∈ V,

where V ⊂ H₀¹(Ω) is the corresponding finite element space, the coefficients are ω2 = ω3 = 1 and ω1 = ε, and Ω2 = (0.25, 0.5) × (0.25, 0.5) × (0.25, 0.5), Ω3 = (0.5, 0.75) × (0.5, 0.75) × (0.5, 0.75) and Ω1 = Ω\(Ω2 ∪ Ω3) with Ω = (0, 1) × (0, 1) × (0, 1), as shown in Fig. 2. The mesh size of the coarsest grid T1 is h1 = 2⁻². We solve the problem exactly on V1, the finite element space corresponding to the coarsest grid. Within the framework of parallel subspace correction, we consider the conjugate gradient method with the ordinary BPX preconditioner. We also consider the conjugate gradient method with the modified BPX preconditioner, which performs an additional subspace correction on a special one-dimensional space.

We perform the numerical experiments with the stopping criterion ∥rk∥0,Ω/∥r0∥0,Ω < 10⁻¹², where rk denotes the residual after the kth iteration. The numbers of iterations with different ε and mesh size h for the two methods are given in Tables 2 and 3, respectively. We also compute the condition numbers of the preconditioned systems for the modified BPX preconditioner; the results are shown in Table 4. Their behavior is in agreement with the theoretical estimates. We can see that our numerical results are robust with respect to the parameters. The behavior of our numerical results with respect to the mesh size is similar to that of the condition numbers given in [15,22], where the typical BPX preconditioner for Poisson-like equations is considered and the condition number is proved to be uniformly bounded [11].
Fig. 2. A model of three subdomains.

Table 2
Iterations needed for the ordinary BPX preconditioner with different ε and mesh size.
h \ ε   10⁻¹  10⁻²  10⁻³  10⁻⁴  10⁻⁵  10⁻⁶  10⁻⁷  10⁻⁸
2⁻³     23    26    27    27    27    27    27    27
2⁻⁴     28    35    37    37    37    37    37    37
2⁻⁵     32    37    44    45    45    45    45    45
2⁻⁶     34    44    49    52    52    52    52    52
2⁻⁷     36    47    53    57    57    57    57    57
2⁻⁸     39    49    63    62    63    63    63    63
Table 3
Iterations needed for the modified BPX preconditioner with different ε and mesh size.

h \ ε   10⁻¹  10⁻²  10⁻³  10⁻⁴  10⁻⁵  10⁻⁶  10⁻⁷  10⁻⁸
2⁻³     23    26    27    27    27    27    27    27
2⁻⁴     28    31    32    31    30    30    30    30
2⁻⁵     31    35    36    35    35    35    35    35
2⁻⁶     33    38    39    38    38    38    38    38
2⁻⁷     36    41    44    42    41    41    42    42
2⁻⁸     37    43    47    46    46    45    45    45

Table 4
Condition number of BA for the modified BPX preconditioner with different ε and mesh size.

h \ ε   10⁻¹  10⁻²   10⁻³   10⁻⁴   10⁻⁵   10⁻⁶   10⁻⁷   10⁻⁸
2⁻³     4.00  4.72   5.14   5.18   5.19   5.19   5.19   5.19
2⁻⁴     5.00  6.76   7.31   7.17   7.15   7.15   7.15   7.15
2⁻⁵     5.93  8.90   9.89   9.33   9.22   9.21   9.21   9.21
2⁻⁶     6.57  10.60  12.51  11.43  11.17  11.14  11.13  11.13
2⁻⁷     6.80  11.93  14.91  13.48  12.94  12.87  12.87  12.87
2⁻⁸     7.20  12.98  16.88  15.46  14.53  14.40  14.39  14.39
5. PSC for high-order finite element discretizations of the nearly incompressible elasticity problem

In this section we consider the planar nearly incompressible elasticity problem with a high-order finite element discretization: find u ∈ V such that

(u, v)A = ω1(u, v)A1 + ω2(u, v)A2 = (f, v),  ∀v ∈ V,    (33)

where f ∈ V and

(u, v)A1 = ∫_Ω e(u) : e(v) dx,  (u, v)A2 = ∫_Ω div u div v dx,

where Ω ⊂ R² is a polygonal domain and V ⊂ (H₀¹(Ω))² is the Scott–Vogelius finite element space [23]; e(u) denotes the symmetric gradient ½(∇u + (∇u)ᵀ) and div is the divergence operator. When ω1 ≪ ω2, (33) is a finite element discretization of the planar nearly incompressible elasticity problem. We construct a multilevel additive Schwarz method, and prove the uniform convergence of the conjugate gradient method with the BPX-type preconditioner with respect to the parameters and the mesh size.

Now we introduce the finite element and multigrid settings. Assume that we have a nested sequence of quasi-uniform triangulations Tk = {τk^i} of Ω. The mesh size of level k is denoted by hk, and hk ≃ γ^k for k = 1, . . . , L, where γ ∈ (0, 1) is a constant independent of k. In the following we always assume that m ≥ 4. For 1 ≤ k ≤ L, we define

Pk^{m,0}(Ω) = {v | v ∈ C⁰(Ω), and v is a polynomial of degree ≤ m on τk^i for all i},
P0,k^{m,0}(Ω) = {v | v ∈ Pk^{m,0}(Ω), and v = 0 on ∂Ω}.
We define the finite element spaces as

Vk = P0,k^{m,0}(Ω) × P0,k^{m,0}(Ω).

Then we have nested finite element spaces V1 ⊂ · · · ⊂ Vk ⊂ · · · ⊂ VL and V = VL. For each inner node xk^i (1 ≤ i ≤ nk) of the triangulation Tk, we denote by Ωk^i the patch of xk^i; then we can define the local space Vk^i = P0,k^{m,0}(Ωk^i) × P0,k^{m,0}(Ωk^i), where the definition of P0,k^{m,0}(Ωk^i) is similar to that of P0,k^{m,0}(Ω). Obviously

Vk = Σ_{i=1}^{nk} Vk^i;

we then have the following multilevel decomposition of V:

V = Σ_{k=1}^L Σ_{i=1}^{nk} Vk^i.    (34)

Algorithm 2.3 with the decomposition (34) gives a BPX-type multilevel preconditioner. We apply our framework to estimate the convergence rate of the conjugate gradient method with this multilevel preconditioner. Following the notation in Section 3, we define the kernel space
N = {u ∈ V | (u, v)A2 = 0, ∀v ∈ V},

the local kernel spaces

Nk^i = {u ∈ Vk^i | (u, v)A2 = 0, ∀v ∈ Vk^i} = N ∩ Vk^i,

and their A-orthogonal complements N⊥ and (Nk^i)⊥. To describe the null spaces, we introduce the following spaces:

Pk^{m+1,1}(Ω) = {v ∈ C¹(Ω) | v is a polynomial of degree ≤ m + 1 on τk^i for all i},
P0,k^{m+1,1}(Ω) = {v ∈ Pk^{m+1,1}(Ω) | v and the normal derivative of v are 0 on ∂Ω}.

We denote Sk = P0,k^{m+1,1}(Ω) and Sk^i = P0,k^{m+1,1}(Ωk^i), where the definition of P0,k^{m+1,1}(Ωk^i) is similar to that of P0,k^{m+1,1}(Ω). Then we have [23]

S1 ⊂ · · · ⊂ Sk ⊂ · · · ⊂ SL,  Sk = Σ_{i=1}^{nk} Sk^i,  N = curl SL,  Nk^i = curl Sk^i.

Then we see that the assumption (A) is valid for the decomposition (34):

N = curl SL = Σ_{k=1}^L Σ_{i=1}^{nk} curl Sk^i = Σ_{k=1}^L Σ_{i=1}^{nk} Nk^i.

Next we introduce an important conclusion from [23]. A vertex (either in Ω or on ∂Ω) is called singular if all the edges of Tk meeting at this vertex fall on two straight lines. When x is not a singular vertex, we define θi (1 ≤ i ≤ l) to be the consecutive angles in the triangulation Tk around x. Then we define Rk(x) = max{|θi + θi+1 − π| | i = 1, . . . , l, θl+1 = θ1}, and Rk(Ω) = min{Rk(x) | x is not a singular vertex}.
When Rk(Ω) ≥ δ > 0 (1 ≤ k ≤ L) for some positive constant δ, we have

∥div v∥0,Ω ≳ ∥v∥1,Ω,  ∀v ∈ N⊥,    (35)

and

∥div v∥0,Ωk^i ≳ ∥v∥1,Ωk^i,  ∀v ∈ (Nk^i)⊥, ∀i, k.    (36)

The two inequalities (35) and (36) are from [23]. We note that it is not difficult for the triangulations to satisfy Rk(Ω) ≥ δ > 0 (1 ≤ k ≤ L), so we assume (35) and (36) are valid.

Now we apply Theorem 3.1 to estimate Λ1 for Algorithm 2.3 with the decomposition (34). (A) is valid for the decomposition (34), and (12) is satisfied automatically by setting S = N⊥. First we consider the term I in (13). For any v ∈ N⊥ with v = Σ_{k=1}^L Σ_{i=1}^{nk} vk^i, vk^i ∈ Vk^i, by (35) we have
( ω1 Σ_{k=1}^L Σ_{i=1}^{nk} (vk^i, vk^i)A1 + ω2 Σ_{k=1}^L Σ_{i=1}^{nk} (vk^i, vk^i)A2 ) / ( ω1(v, v)A1 + ω2(v, v)A2 )
  ≲ ( (ω1 + ω2) Σ_{k=1}^L Σ_{i=1}^{nk} (vk^i, vk^i)1,Ω ) / ( (ω1 + ω2)(v, v)1,Ω )
  = Σ_{k=1}^L Σ_{i=1}^{nk} (vk^i, vk^i)1,Ω / (v, v)1,Ω,    (37)

where we apply that (w, w)A2 ≲ (w, w)A1 ≃ (w, w)1,Ω for any w ∈ V. By an inequality similar to (28), we have

Σ_{k=1}^L Σ_{i=1}^{nk} (vk^i, vk^i)1,Ω ≲ (v, v)1,Ω,    (38)

where vk ∈ Vk, vk^i ∈ Vk^i, and

vk = Σ_{i=1}^{nk} vk^i = (QVk − QVk−1)v,  1 ≤ k ≤ L,

where QVk is the L² projection onto Vk and QV0 = 0. I ≲ 1 follows by (37) and (38).

We then estimate the term II in (13). For any c ∈ N and any decomposition c = Σ_{k=1}^L Σ_{i=1}^{nk} ck^i, ck^i ∈ Nk^i, there exist w ∈ SL and wk^i ∈ Sk^i such that c = curl w and ck^i = curl wk^i. We have
k=1 i=1
. (w, w)2,Ω L nk i For any w ∈ SL , we decompose w = i=1 wk similarly to that in [10] where k =1 II . sup
w∈SL
wki ∈Ski ,
inf
k,i
wki =w
wk = PSk w − PSk−1 w,
wk =
nk
(39)
wki ,
i=1
where PSk is the H01 orthogonal projection from SL to Sk , PS0 = 0. By the technique developed in [10,11,13], we can deduce that nk L (wki , wki )2,Ω . (w, w)2,Ω .
(40)
k=1 i=1
We get II ≲ 1 by (39) and (40).

Next we estimate Λ2. First we estimate µ1 in (14). When vk^i ∈ (Nk^i)⊥, vl^j ∈ (Nl^j)⊥ and k ≤ l, by the strengthened Cauchy–Schwarz inequality and (36), we have

(vk^i, vl^j)A2 ≲ γ^{l−k} (vk^i, vk^i)A2^{1/2} (vl^j, vl^j)1,Ω^{1/2}
  ≲ γ^{l−k} ( (vk^i, vk^i)A2 + (vl^j, vl^j)1,Ω )
  ≲ γ^{l−k} ( (vk^i, vk^i)A2 + (vl^j, vl^j)A2 ).

Similarly we have

(vk^i, vl^j)A1 ≲ γ^{l−k} ( (vk^i, vk^i)A1 + (vl^j, vl^j)A1 ),

and then

(vk^i, vl^j)A ≲ γ^{l−k} ( (vk^i, vk^i)A + (vl^j, vl^j)A );

therefore
µ1 ≤ max_k Sk(γ) = max_k ( Σ_{l=1}^k γ^{k−l} + Σ_{l=k+1}^L γ^{l−k} ) ≤ 2/(1 − γ) ≲ 1.
The estimate of µ2 in (15) is similar, and we can obtain µ2 ≲ 1. Then we have Λ2 ≲ 1 by Theorem 3.2.

The parallel subspace correction method defined by Algorithm 2.3 with the decomposition (34) is a multilevel additive Schwarz method. We have thus proved the uniform convergence of this BPX-type method for the planar nearly incompressible elasticity problem.

Acknowledgment

The authors were partially supported by NSFC 10971006 and NSFC 91130011. The first author was also supported by NSFC 91130001.

References

[1] L. Boulaajine, S. Nicaise, L. Paquet, Rafilipojaona, Dual mixed finite element methods for the elasticity problem with Lagrange multipliers, J. Comput. Appl. Math. 221 (2008) 234–260.
[2] T. Lin, X. Zhang, Linear and bilinear immersed finite elements for planar elasticity interface problems, J. Comput. Appl. Math. 236 (2012) 4681–4699.
[3] J. Schöberl, Multigrid methods for a parameter dependent problem in primal variables, Numer. Math. 84 (1999) 97–119.
[4] Z. Chen, Q. Du, J. Zou, Finite element methods with matching and nonmatching meshes for Maxwell equations with discontinuous coefficients, SIAM J. Numer. Anal. 37 (2000) 1542–1570.
[5] J. Jones, B. Lee, A multigrid method for variable coefficient Maxwell's equations, SIAM J. Sci. Comput. 27 (2006) 1689–1708.
[6] S.B. Hazra, K. Steiner, Computation of dilute two-phase flow in a pump, J. Comput. Appl. Math. 203 (2007) 444–460.
[7] S. Tiwari, J. Kuhnert, Modeling of two-phase flows with surface tension by finite pointset method (FPM), J. Comput. Appl. Math. 203 (2007) 376–386.
[8] J. Xu, Y. Zhu, Uniform convergent multigrid methods for elliptic problems with strongly discontinuous coefficients, Math. Models Methods Appl. Sci. 18 (2008) 77–105.
[9] H. Zheng, J. Wu, Convergence analysis on multigrid methods for elliptic problems with large jumps in coefficients, 2012, Preprint.
[10] Y.-J. Lee, J. Wu, J. Chen, Robust multigrid methods for the planar linear elasticity problems, Numer. Math. 113 (2009) 473–496.
[11] J. Xu, Iterative methods by space decomposition and subspace correction, SIAM Rev. 34 (1992) 581–613.
[12] Y.-J. Lee, J. Wu, J. Xu, L. Zikatanov, Robust subspace correction methods for nearly singular systems, Math. Models Methods Appl. Sci. 17 (2007) 1937–1963.
[13] Y.-J. Lee, J. Wu, J. Xu, L. Zikatanov, A sharp convergence estimate of the method of subspace corrections for singular systems, Math. Comp. 77 (2008) 831–850.
[14] J. Xu, Multilevel Finite Element Methods, Lecture Notes, 2004.
[15] J.H. Bramble, J.E. Pasciak, J. Xu, Parallel multilevel preconditioners, Math. Comp. 55 (1990) 1–22.
[16] L. Chen, M. Holst, J. Xu, Y. Zhu, Local multilevel preconditioners for elliptic equations with jump coefficients on bisection grids, 2010, Preprint.
[17] P. Oswald, On the robustness of the BPX-preconditioner with respect to jumps in the coefficients, Math. Comp. 68 (1999) 633–650.
[18] Y. Zhu, Domain decomposition preconditioners for elliptic equations with jump coefficients, Numer. Linear Algebra Appl. 15 (2008) 271–289.
[19] J. Xu, L. Zikatanov, The method of alternating projections and the method of subspace corrections in Hilbert space, J. Amer. Math. Soc. 15 (2002) 573–597.
[20] H.W. Engl, M. Hanke, A. Neubauer, Regularization of Inverse Problems, Kluwer Academic Publishers Group, 1996.
[21] J. Wu, Y.-J. Lee, J. Xu, L. Zikatanov, Convergence analysis on iterative methods for semidefinite systems, J. Comput. Math. 26 (2008) 797–815.
[22] P. Oswald, On a BPX-preconditioner for P1 elements, Computing 51 (1993) 125–133.
[23] L.R. Scott, M. Vogelius, Norm estimates for a maximal right inverse of the divergence operator in spaces of piecewise polynomials, RAIRO Modél. Math. Anal. Numér. 19 (1985) 111–143.