Computers and Mathematics with Applications 68 (2014) 2262–2276
An optimization based domain decomposition method for PDEs with random inputs

Jangwoon Lee a, Jeehyun Lee b,∗, Yoongu Hwang b

a Department of Mathematics, University of Mary Washington, Fredericksburg, VA 22401, United States
b Department of Computational Science and Engineering, Yonsei University, Seoul 120-749, Republic of Korea
Article history: Available online 7 August 2014

Keywords: Domain decomposition; Stochastic elliptic equation; Stochastic optimal control; Karhunen–Loève expansion; Finite element method
Abstract

An optimization-based domain decomposition method for stochastic elliptic partial differential equations is presented. The main idea of the method is a constrained optimization problem for which the minimization of an appropriate functional forces the solutions on the two subdomains to agree on the interface; the constraints are the stochastic partial differential equations. The existence of optimal solutions for the stochastic optimal control problem is shown, as is the convergence to the exact solution of the given problem. We prove the existence of a Lagrange multiplier and derive an optimality system from which solutions of the domain decomposition problem may be determined. Finite element approximations to the solutions of the optimality system are defined and analyzed with respect to both the spatial and random parameter spaces. Then, the results of some numerical experiments are given to confirm the theoretical error estimates. © 2014 Elsevier Ltd. All rights reserved.
1. Introduction

The domain decomposition method (DDM) based on optimization strategies has been used extensively in the last few decades; e.g., see [1–5] and the references therein. However, all of these works considered partial differential equations (PDEs) whose input data were given exactly. Many real-world model problems in engineering and science, however, involve incompletely known input data; for example, collected data may be unavailable for some materials used in a problem, or knowledge of the boundary may be insufficient. To express these circumstances in a model, that is, to develop a model that better describes a natural or physical phenomenon, randomness from probability theory has been used to represent the incomplete input data, leading to stochastic PDEs (SPDEs). Indeed, SPDEs have become powerful tools for modeling complicated physical and engineering phenomena, and there has been increasing interest in solving SPDEs by the stochastic Galerkin finite element method and/or the (parallel) DDM (see the papers [6–13] or the web site http://www.ddm.org). Nevertheless, to the best of our knowledge, no papers have focused on a priori error estimates for DDM finite element (FE) solutions to SPDEs obtained with an optimization-based DDM. In this work, therefore, we extend optimization-based DDM techniques to SPDEs and focus on developing a priori error estimates for DDM FE solutions. The problem we consider is the standard model problem, a second-order elliptic SPDE with the homogeneous Dirichlet boundary condition:
−∇ · (k(x, ω)∇u(x, ω)) = f(x, ω)   in D,
u(x, ω) = 0   on ∂D,   (1.1)

∗ Corresponding author.
E-mail addresses: [email protected] (J. Lee), [email protected] (J. Lee), [email protected] (Y. Hwang).
http://dx.doi.org/10.1016/j.camwa.2014.07.021
0898-1221/© 2014 Elsevier Ltd. All rights reserved.
Fig. 1. A subdivision of the domain into two subdomains.
where D ⊂ R^d is a convex bounded polygonal domain, ∂D is the boundary of D, and k, f : D × Ω → R are stochastic functions. To analyze the stochastic elliptic problem (1.1) mathematically and computationally using the optimization-based DDM, we first consider subdomain problems and show that the solutions of the related problems posed on subdomains converge to the solution of the original problem. Then we use the Karhunen–Loève (KL) expansion (see, e.g., [14]) to transform the stochastic problems with finite-dimensional information into deterministic high-dimensional problems, deriving a priori error estimates for the solutions of the transformed problems. After that, we apply the theory of Brezzi–Rappaz–Raviart (see, e.g., [15,16]) to develop a priori error estimates for DDM solutions defined on the subdomains satisfying the interface transmission conditions, which is our main result. Finally, we construct a computational algorithm for the stochastic elliptic problems and confirm the optimal rates of convergence in our new theoretical error estimate using numerical examples. We now mention some of the works on SPDEs that inspired this paper. The work [7] described and analyzed the Monte Carlo method and the Galerkin finite element method for a linear stochastic elliptic problem, including a comparison of their computational work. In [9], the authors analyzed a stochastic optimal control problem subject to an elliptic SPDE with a Neumann boundary condition; the control was of the deterministic, boundary-value type. In [10], finite element methods for a standard stochastic elliptic PDE-constrained optimal control problem were considered; the Wiener–Itô chaos expansion for the diffusion coefficient was the main analytical tool, and the results were obtained in continuously differentiable solution spaces, which require stronger regularity assumptions than is usual in the literature.
On the other hand, the paper [12] presented L²(Γ; H¹₀(D)) convergence results for optimal distributed control problems based on the KL expansion, with both the feasibility and the efficiency of the rigorous error analysis demonstrated via numerical examples. We also make some remarks about the optimization-based DDM literature. The work [1] proposed a new DDM for deterministic elliptic PDEs with the Dirichlet boundary condition; the authors showed the existence of an optimal solution for their optimization problem and studied a parallelizable gradient method for the DDM solution, presenting the results of some numerical experiments. The work [2] introduced a non-overlapping DDM for the solution of multiobjective optimization problems constrained by deterministic elliptic PDEs; a gradient method was defined and used for the optimization problem, with a theoretical convergence result for the method. The paper [5] extended the results of [1] to time-dependent problems, illustrating how the algorithms and analysis of the optimization-based DDM carry over to such problems. The outline of the paper is as follows. In Section 2, we introduce some notation and provide some preliminary results for SPDEs. Then, we give a precise restatement of the optimization problem and prove that an optimal solution exists. In Section 3, we study the convergence of the optimal solution as the penalty parameter β → 0. In Section 4, we show the existence of a Lagrange multiplier and derive an optimality system of equations from which the optimal solution may be determined. In Section 5, we transform the stochastic problem into a high-dimensional deterministic problem and analyze its finite element approximations. In Section 6, we study the finite element approximations of the solution of the optimality system and then estimate the error with respect to both the spatial and random parameter spaces. Finally, in Section 7, we give the results of some numerical experiments.
2. Optimal control problem

2.1. The model problem

For our model problem, we use a set of outcomes Ω, a σ-algebra of events F ⊂ 2^Ω, and a probability measure P : F → [0, 1], and we let (Ω, F, P) be a complete probability space. On this space, we seek a stochastic function u : D × Ω → R such that the following stochastic equations hold almost surely (i.e., for almost every ω ∈ Ω):
−∇ · (k(x, ω)∇u(x, ω)) = f(x, ω)   in D,
u(x, ω) = 0   on ∂D,   (2.1)
where D ⊂ R^d is a convex bounded polygonal domain, ∂D is the boundary of D, and k, f : D × Ω → R are stochastic functions that have bounded continuous covariance functions. Here ∇ denotes differentiation with respect to x ∈ D only, not with respect to ω ∈ Ω. In order to solve the stochastic problem (2.1) by a domain decomposition technique, we partition D into two open subdomains D1 and D2 such that D̄ = D̄1 ∪ D̄2. The interface between the two subdomains is denoted by ∂D0, so that ∂D0 = D̄1 ∩ D̄2. Let ∂D1 = D̄1 ∩ ∂D and ∂D2 = D̄2 ∩ ∂D (see, e.g., Fig. 1 for the case d = 2).
We wish to determine the solution of (2.1) by solving the following equations with mixed Dirichlet and Neumann boundary conditions almost surely:
−∇ · (k(x, ω)∇u1(x, ω)) = f(x, ω)   in D1,
u1(x, ω) = 0   on ∂D1,   (2.2)
k(x, ω) ∂u1/∂n1 = g(x, ω)   on ∂D0;

−∇ · (k(x, ω)∇u2(x, ω)) = f(x, ω)   in D2,
u2(x, ω) = 0   on ∂D2,   (2.3)
k(x, ω) ∂u2/∂n2 = −g(x, ω)   on ∂D0,

where, for i = 1, 2, ∂/∂ni denotes the derivative in the direction of the outer normal to Di.
To ensure the existence and uniqueness of the solutions of (2.1), (2.2), and (2.3), we assume that there are positive constants m and M such that m ≤ k(x, ω) ≤ M
a.e. (x, ω) ∈ D × Ω .
We note that the solutions u1 and u2 of (2.2) and (2.3) posed on the subdomains may fail to match the solution u of (2.1), because u1 ≠ u2 on the interface ∂D0 for an arbitrary choice of g; i.e., u1 ≠ u|D1 and u2 ≠ u|D2. However, there is a choice of g for which the solutions of the subdomain problems (2.2) and (2.3) coincide with the solution of the original problem (2.1): if

g = k ∂u/∂n1 |_{∂D0} = −k ∂u/∂n2 |_{∂D0},

then u = ui on the corresponding subdomain for i = 1, 2. Therefore, our goal is to find a function g such that u1 = u2 along the interface ∂D0. One way to achieve this goal is to minimize the expectation of the functional

J(u1, u2) = E[ (1/2) ∫_{∂D0} |u1 − u2|² ds ]   (2.4)

over a set of appropriate functions g. In order to regularize the problem, we penalize J(·, ·) by a measure of the size of the control. That is, instead of minimizing (2.4), we minimize

Jβ(u1, u2, g) = E[ (1/2) ∫_{∂D0} |u1 − u2|² ds + (β/2) ∫_{∂D0} |g|² ds ],   (2.5)

where β is a constant that weights the relative importance of the two terms in (2.5). In summary, we find the solution of the stochastic problem (2.1) from the solutions of the subdomain problems (2.2) and (2.3) using the DDM based on the following optimization problem:

minimize Jβ(u1, u2, g) over appropriate functions g subject to (2.2) and (2.3).   (2.6)
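As a concrete reading of (2.5), the functional can be approximated by a sample average over realizations ω combined with a quadrature rule along the interface. A minimal numpy sketch (the interface parametrization, the traces u1, u2, g, and all numbers are illustrative assumptions, chosen so that the exact value is known):

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 1e-2
s = np.linspace(0.0, 1.0, 101)       # parametrization of the interface
w = np.full_like(s, s[1] - s[0])     # trapezoid quadrature weights in s
w[0] = w[-1] = (s[1] - s[0]) / 2

def J_beta(u1, u2, g):
    """Sample-average approximation of (2.5):
    E[ 1/2 * int |u1-u2|^2 ds + beta/2 * int |g|^2 ds ]."""
    mismatch = 0.5 * ((u1 - u2) ** 2 @ w)   # per-sample interface mismatch
    control = 0.5 * beta * (g ** 2 @ w)     # per-sample control penalty
    return float(np.mean(mismatch + control))

# Hypothetical interface traces: u1 - u2 = Z*sin(pi*s), g = Z, Z ~ N(0, 1),
# so the exact value is 1/2 * 1/2 + beta/2 * 1 = 0.25 + beta/2.
Z = rng.standard_normal((2000, 1))
u1 = Z * np.sin(np.pi * s)
u2 = np.zeros_like(u1)
g = Z * np.ones_like(s)
print(J_beta(u1, u2, g))             # close to 0.255 up to sampling error
```

The sample mean plays the role of the expectation E and the quadrature in s plays the role of the interface integral.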
2.2. Stochastic function spaces and notations

Throughout this paper, we use the standard notation for the Sobolev spaces H^r(D) with norm ∥·∥_{H^r(D)}, where r is a real number. We denote the inner product on H^r by (·, ·)_{H^r}, and we use C as a generic constant. We will make use of the subspaces

H¹₀(D) = {v ∈ H¹(D) | v = 0 on ∂D}   and   H¹_{∂i}(Di) = {v ∈ H¹(Di) | v = 0 on ∂Di}.
We define the stochastic Sobolev spaces

L²(Ω; H^r(D)) = {v : D × Ω → R | ∥v∥_{L²(Ω;H^r(D))} < ∞},

where

∥v∥²_{L²(Ω;H^r(D))} = ∫_Ω ∥v∥²_{H^r(D)} dP = E∥v∥²_{H^r(D)}.

For simplicity, we write H^r(D) for L²(Ω; H^r(D)), H¹₀(D) for L²(Ω; H¹₀(D)), and H¹_{∂i}(Di) for L²(Ω; H¹_{∂i}(Di)).
Inner products and bilinear forms on H⁰(D) (= L²(D)) are defined by

[u, v]_D = E ∫_D uv dx,   a[u, v]_D = E ∫_D k∇u · ∇v dx.

Then a weak formulation corresponding to (2.1) is given as follows: Find u ∈ H¹₀(D) such that

a[u, v]_D = [f, v]_D   ∀v ∈ H¹₀(D).   (2.7)
Theorem 2.1. Let f ∈ L²(D). Then there exists a unique solution of (2.7) in H¹₀(D).

Proof. Use the Lax–Milgram lemma (cf. [17]).
We also have a weak formulation corresponding to (2.2) and (2.3): Find u1 ∈ H¹_{∂1}(D1) and u2 ∈ H¹_{∂2}(D2) such that

a[u1, v]_{D1} = [f, v]_{D1} + [g, v]_{∂D0}   ∀v ∈ H¹_{∂1}(D1),   (2.8)
a[u2, v]_{D2} = [f, v]_{D2} − [g, v]_{∂D0}   ∀v ∈ H¹_{∂2}(D2).   (2.9)

Theorem 2.2. Let f ∈ L²(D) and g ∈ L²(∂D0). Then there exists a unique solution to each of (2.8) and (2.9). Moreover, for i = 1, 2, there exists a constant C > 0 such that

∥ui∥_{H¹(Di)} ≤ C (∥f∥_{L²(Di)} + ∥g∥_{L²(∂D0)}).   (2.10)
Proof. Note that the bilinear forms a[·, ·]_{Di}, i = 1, 2, are coercive and continuous, and use the Lax–Milgram lemma (cf. [17]). We refer the reader to [6,9] for details.

2.3. The existence of an optimal solution

We examine the existence of an optimal solution that minimizes Jβ(·, ·, ·) in (2.5). Let the admissibility set for our problem be defined by

Uad = {(u1, u2, g) ∈ H¹_{∂1}(D1) × H¹_{∂2}(D2) × L²(∂D0) | (2.8) and (2.9) are satisfied and Jβ(u1, u2, g) < ∞}.   (2.11)
Then (û1, û2, ĝ) ∈ Uad is called an optimal solution if there exists ϵ > 0 such that

Jβ(û1, û2, ĝ) ≤ Jβ(u1, u2, g)   (2.12)

for all (u1, u2, g) ∈ Uad satisfying

∥u1 − û1∥_{H¹(D1)} + ∥u2 − û2∥_{H¹(D2)} + ∥g − ĝ∥_{L²(∂D0)} ≤ ϵ.

We now state an existence and uniqueness theorem for an optimal solution in Uad to the problem (2.6).

Theorem 2.3. There exists a unique optimal solution (û1, û2, ĝ) ∈ Uad.

Proof. Clearly, Uad is not empty. Let {(u1^(n), u2^(n), g^(n))} be a minimizing sequence in Uad; i.e.,

lim_{n→∞} Jβ(u1^(n), u2^(n), g^(n)) = inf_{Uad} Jβ(u1, u2, g).

Then {g^(n)} is uniformly bounded from (2.11), and hence so are {u1^(n)} and {u2^(n)} from (2.10). Thus, there exists a subsequence {(u1^(ni), u2^(ni), g^(ni))} such that

u1^(ni) ⇀ û1 in H¹_{∂1}(D1),
u2^(ni) ⇀ û2 in H¹_{∂2}(D2),
g^(ni) ⇀ ĝ in L²(∂D0)

for some (û1, û2, ĝ) ∈ Uad. Passing to the limit, we see that (û1, û2, ĝ) satisfies (2.8) and (2.9). Also, the fact that the functional Jβ(·, ·, ·) is lower semicontinuous implies that

inf_{Uad} Jβ(u1, u2, g) = lim inf_{n→∞} Jβ(u1^(ni), u2^(ni), g^(ni)) ≥ Jβ(û1, û2, ĝ).

Hence, inf_{Uad} Jβ(u1, u2, g) = Jβ(û1, û2, ĝ), so we conclude that (û1, û2, ĝ) is an optimal solution. Uniqueness follows from the convexity of the functional and the admissibility set and the linearity of the constraints.
3. Convergence with vanishing penalty parameter

In this section, we show that, as β → 0, the optimal solution converges to the solution of (2.7). Let u^ex denote the solution of (2.7) and

g^ex = k ∂u^ex/∂n1 |_{∂D0} = −k ∂u^ex/∂n2 |_{∂D0}.
Theorem 3.1. For each β > 0, let (u1^β, u2^β, g^β) ∈ Uad denote an optimal solution satisfying (2.11) and (2.12). Let u1^ex = u^ex|_{D1∪∂D0} and u2^ex = u^ex|_{D2∪∂D0}. Then, as β → 0, u1^β ⇀ u1^ex in H¹_{∂1}(D1) and u2^β ⇀ u2^ex in H¹_{∂2}(D2).

Proof. Suppose that {(u1^β, u2^β, g^β)} is a sequence of optimal solutions and that β → 0. Then we have

Jβ(u1^β, u2^β, g^β) ≤ Jβ(u1^ex, u2^ex, g^ex)   ∀β,

or

E[ (1/2) ∫_{∂D0} |u1^β − u2^β|² ds + (β/2) ∫_{∂D0} |g^β|² ds ] ≤ E[ (β/2) ∫_{∂D0} |g^ex|² ds ]   ∀β.

We note that ∥g^β∥_{L²(∂D0)} is uniformly bounded in L²(∂D0) and that ∥u1^β − u2^β∥_{L²(∂D0)} → 0 as β → 0. We also note that both ∥u1^β∥_{H¹(D1)} and ∥u2^β∥_{H¹(D2)} are uniformly bounded by (2.10). Hence, there is a subsequence that converges weakly to some (u1*, u2*, g*) ∈ H¹_{∂1}(D1) × H¹_{∂2}(D2) × L²(∂D0) as β → 0, and ∥u1^β − u2^β∥_{L²(∂D0)} → 0 yields u1* = u2* a.e. (x, ω) ∈ ∂D0 × Ω. By passing to the limit, we obtain

a[u1*, v]_{D1} = [f, v]_{D1} + [g*, v]_{∂D0}   ∀v ∈ H¹_{∂1}(D1),
a[u2*, v]_{D2} = [f, v]_{D2} − [g*, v]_{∂D0}   ∀v ∈ H¹_{∂2}(D2).

If we define u* ∈ H¹₀(D) by

u* = { u1* in D1 ∪ ∂D0,
       u2* in D2 ∪ ∂D0 },

then

a[u*, v]_D = [f, v]_D   ∀v ∈ H¹₀(D),   (3.1)

and by the uniqueness of the solution of (2.7), u* = u^ex. Thus, our convergence results follow.
4. The stochastic optimality system

In this section, we use the Lagrange multiplier rule to derive an optimality system of equations from which solutions of the optimization problem (2.11) and (2.12) may be determined. For a problem with a deterministic PDE constraint, the existence of a Lagrange multiplier is well known (see, e.g., [18]). For a problem with an SPDE constraint, however, we must first check that a proper Lagrange multiplier exists before we use this technique to obtain an optimality system of stochastic equations. Thus, here we verify the existence of a Lagrange multiplier for our minimization problem and derive a stochastic optimality system (see [16] as a reference for this section).

We first recall some relevant results. Let Θ, X, and Y be reflexive Banach spaces with norms ∥·∥_Θ, ∥·∥_X, and ∥·∥_Y, respectively, and let Θ*, X*, and Y* denote their dual spaces. We consider the functional J to be minimized, built from functionals F on X and E on Θ:

J(v, z) = F(v) + E(z)   ∀(v, z) ∈ X × Θ.

We define the map M : X × Θ → X by

M(v, z) = v + TN(v) + TK(z)   ∀(v, z) ∈ X × Θ,

where K : Θ → Y and T : Y → X are bounded linear operators and N : X → Y is a differentiable map. Then we consider the constrained minimization problem:

min_{(v,z)∈X×Θ} J(v, z)   subject to   M(v, z) = 0.

We now state our assumptions, used to show the existence of a Lagrange multiplier and to derive a stochastic optimality system:

(Assumption I) v → J(v, z) is Fréchet differentiable for any z ∈ Θ;
(Assumption II) v → M(v, z) is Fréchet differentiable for any z ∈ Θ;
(Assumption III) z → E(z) is convex;
(Assumption IV) the Fréchet derivative N′ maps X into Z;
(Assumption V) Z is compactly embedded into Y.
We then state the following result, which gives the existence of a Lagrange multiplier (Theorem 2.5 in [16]).

Theorem 4.1. Assume that the above assumptions hold. Suppose there exists an optimal solution (u, g) ∈ X × Θ and that the mapping z → E(z) is Fréchet differentiable on Θ. Then there exists µ ∈ X*, not identically zero, such that

⟨Ju(u, g), w⟩ − ⟨µ, (I + TN′(u)) · w⟩ = 0   ∀w ∈ X   (4.1)

and

⟨E′(g), z⟩ − ⟨µ, TKz⟩ = 0   ∀z ∈ Θ.   (4.2)
We now prove the existence of a Lagrange multiplier and derive an optimality system for our minimization problem. We define the spaces X = H¹_{∂1}(D1) × H¹_{∂2}(D2), Y = (H¹_{∂1}(D1))* × (H¹_{∂2}(D2))*, Θ = L²(∂D0), and Z = {0}. Then clearly Z ↩→↩→ Y. We define the continuous linear operator T ∈ L(Y; X) as follows: for f = (f1, f2) ∈ Y, Tf = u = (u1, u2) ∈ X is the unique solution of

a[u1, v]_{D1} = [f1, v]_{D1}   ∀v ∈ H¹_{∂1}(D1),
a[u2, v]_{D2} = [f2, v]_{D2}   ∀v ∈ H¹_{∂2}(D2).

We also define the (differentiable) mapping N : X → Y by

N(u) = −(f|_{D1∪∂D0}, f|_{D2∪∂D0})

or, equivalently,

⟨N(u), v⟩ = −[f, v1]_{D1} − [f, v2]_{D2}   ∀v = (v1, v2) ∈ X,

and define K : Θ → Y by

Kg = (−g, g)

or, equivalently,

⟨Kg, v⟩ = −[g, v1]_{∂D0} + [g, v2]_{∂D0}   ∀v = (v1, v2) ∈ X.

Then it is not hard to see that the constraint equations (2.8) and (2.9) can be expressed as

u + TN(u) + TKg = 0.   (4.3)

With the obvious definitions for F(·) and E(·),

F(u) = E[ (1/2) ∫_{∂D0} |u1 − u2|² ds ],   E(g) = E[ (β/2) ∫_{∂D0} |g|² ds ],

we also see that the functional (2.5) can be expressed as

J(u, g) = F(u) + E(g).

We are now in a position to verify the assumptions of Theorem 4.1. It is obvious that (Assumption I) and (Assumption II) are valid; (Assumption III) is satisfied because the mapping g → E(g) = (β/2)∥g∥²_{L²(∂D0)} is convex; (Assumption IV) and (Assumption V) hold since N′(u) · v = 0 ∈ Z ↩→↩→ Y for all u, v ∈ X; and finally we note that the mapping g → E(g) is Fréchet differentiable on Θ. Having verified all assumptions, we may apply Theorem 4.1 to conclude that there exists a Lagrange multiplier ξ = T*µ ∈ X such that (4.1), (4.2), and (4.3) form a stochastic optimality system. We now state the theorem, in the present context, as follows.

Theorem 4.2. Let (u1, u2, g) ∈ H¹_{∂1}(D1) × H¹_{∂2}(D2) × L²(∂D0) denote an optimal solution that minimizes (2.5) subject to (2.8) and (2.9). Then there exists a nonzero Lagrange multiplier (ξ1, ξ2) ∈ H¹_{∂1}(D1) × H¹_{∂2}(D2) such that

a[ξ1, v]_{D1} − [u1 − u2, v]_{∂D0} = 0   ∀v ∈ H¹_{∂1}(D1),   (4.4)
a[ξ2, v]_{D2} + [u1 − u2, v]_{∂D0} = 0   ∀v ∈ H¹_{∂2}(D2),   (4.5)

and

β[z, g]_{∂D0} + [z, ξ1]_{∂D0} − [z, ξ2]_{∂D0} = 0   ∀z ∈ L²(∂D0).   (4.6)
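The optimality system (4.4)–(4.6) suggests a simple gradient iteration for a discretized problem: solve the two state equations for the current g, solve the two adjoint (multiplier) equations, and update g using the residual of (4.6). A minimal 1D sketch for a single deterministic realization with k = 1, f = 1 on D = (0, 1) and interface at x = 0.4 (the mesh sizes, step size, and iteration count are illustrative assumptions):

```python
import numpy as np

def subdomain(a, b, n, dirichlet_left):
    """P1 stiffness matrix and load for -u'' = 1 on (a, b) with a Dirichlet
    condition at one end; returns A, F, and the interface dof index."""
    h = (b - a) / n
    A = np.zeros((n + 1, n + 1))
    F = np.full(n + 1, h); F[0] = F[-1] = h / 2
    for e in range(n):
        A[e:e + 2, e:e + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
    if dirichlet_left:                  # keep dofs 1..n, interface = last
        return A[1:, 1:], F[1:], n - 1
    return A[:-1, :-1], F[:-1], 0       # keep dofs 0..n-1, interface = first

gamma, beta, step = 0.4, 1e-4, 0.5      # interface location, penalty, step size
A1, F1, i1 = subdomain(0.0, gamma, 16, True)
A2, F2, i2 = subdomain(gamma, 1.0, 24, False)
e1 = np.zeros(len(F1)); e1[i1] = 1.0    # interface "delta" in subdomain 1
e2 = np.zeros(len(F2)); e2[i2] = 1.0    # interface "delta" in subdomain 2

g = 0.0
for _ in range(200):
    u1 = np.linalg.solve(A1, F1 + g * e1)       # state problem (2.2)
    u2 = np.linalg.solve(A2, F2 - g * e2)       # state problem (2.3)
    jump = u1[i1] - u2[i2]
    xi1 = np.linalg.solve(A1, jump * e1)        # adjoint problem (4.4)
    xi2 = np.linalg.solve(A2, -jump * e2)       # adjoint problem (4.5)
    g -= step * (beta * g + xi1[i1] - xi2[i2])  # residual of (4.6)

print(g, abs(u1[i1] - u2[i2]))
```

With β small, g approaches the exact interface flux u′(0.4) = (1 − 2·0.4)/2 = 0.1 and the interface mismatch u1 − u2 vanishes, consistent with Theorem 3.1.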
5. The high-dimensional elliptic PDEs

In this section, we turn the SPDE into a high-dimensional deterministic PDE under some appropriate assumptions and then analyze the resulting equation using traditional techniques to obtain finite element solutions.

5.1. Karhunen–Loève expansions

We first recall the well-known Karhunen–Loève (KL) expansion, a tool used to approximate stochastic functions. For the KL expansion, we assume that our stochastic function k(x, ω) : D × Ω → R has a bounded continuous covariance function and that

E[ ∫_D k²(x, ω) dx ] < ∞.
Then we state the following result (see [19]).

Theorem 5.1. The covariance function can be decomposed as

C(x1, x2) = Σ_{n≥0} λn φn(x1) φn(x2),

where (λn, φn(x)) are solutions of the eigenvalue problem

∫_D C(x1, x2) φn(x1) dx1 = λn φn(x2),

and the eigenfunctions {φn(x)} are orthogonal and form a complete set.

Note that we have

k(x, ω) = k̄(x) + Σ_{n≥0} √λn φn(x) Zn(ω),

where k̄(x) = E[k(x, ω)] and {Zn(ω)} are random variables such that

E[Zn] = 0,   E[Zn Zm] = δnm.
This expansion is called the KL expansion of k(x, ω).

5.2. Finite dimensional information

We consider the following equations:

a[u1, v]_{D1} = [f, v]_{D1}   ∀v ∈ H¹_{∂1}(D1),   (5.1)
a[u2, v]_{D2} = [f, v]_{D2}   ∀v ∈ H¹_{∂2}(D2).   (5.2)

We assume here that our stochastic function k(x, ω) has the truncated KL expansion

k(x, ω) = k̄(x) + Σ_{n=1}^N √λn φn(x) Zn(ω).   (5.3)
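Numerically, the truncated expansion (5.3) can be computed by discretizing the covariance operator and taking its leading eigenpairs. A minimal numpy sketch (the exponential covariance model, correlation length, grid, and mean field are all illustrative assumptions):

```python
import numpy as np

# Discretize D = (0,1) and an exponential covariance C(x1,x2) = exp(-|x1-x2|/L).
x = np.linspace(0.0, 1.0, 200)
h = x[1] - x[0]
L = 0.5
C = np.exp(-np.abs(x[:, None] - x[None, :]) / L)

# Nystrom discretization of the eigenvalue problem
#   int_D C(x1, x2) phi_n(x1) dx1 = lambda_n phi_n(x2):
# eigenpairs of the symmetric matrix C * h.
lam, phi = np.linalg.eigh(C * h)
idx = np.argsort(lam)[::-1]                     # largest eigenvalues first
lam, phi = lam[idx], phi[:, idx] / np.sqrt(h)   # L2(D)-orthonormal modes

# Truncated KL expansion (5.3) with N terms and iid Z_n ~ N(0, 1):
N = 10
kbar = np.ones_like(x)                  # mean field k_bar, an assumption
rng = np.random.default_rng(1)
Z = rng.standard_normal(N)
k_sample = kbar + (phi[:, :N] * np.sqrt(lam[:N])) @ Z

# The leading N modes capture most of the variance sum(lam):
print(lam[:N].sum() / lam.sum())
```

Since the eigenvalues decay, N is chosen so that the retained fraction of Σλn is close to 1; this is the practical meaning of the truncation in (5.3).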
Similarly, we assume that the other input function f(x, ω) has a finite KL expansion. We note that the assumption (5.3) is reasonable in practical applications because, in many realistic models, only finitely many (mutually independent) random variables are needed to express the source of randomness. Also, based on Mercer's theorem, one may prove the convergence of the truncated control problem (see, e.g., [7]). As in Section 2, we assume here that there exist m, M > 0 such that

m ≤ k̄(x) + Σ_{n=1}^N √λn φn(x) Zn(ω) ≤ M   a.e. (x, ω) ∈ D × Ω.   (5.4)

Concerning the condition (5.4) on k(x, ω), as a practical example, k could have a log-normal distribution (see [8] for a discussion of the ellipticity condition for k(x, ω) when log k(x, ω) is a Gaussian field). For n = 1, 2, . . . , N, we now assume that each Zn(Ω) ≡ Γn ⊂ R is bounded and that each Zn has a density function ρn : Γn → R⁺. We also use the joint density function ρ(y) of (Z1, Z2, . . . , ZN) for y ∈ Γ ≡ ∏_{n=1}^N Γn ⊂ R^N. Under these
circumstances, we can express the solution of (2.8) and (2.9) using finitely many random variables (see, e.g., [6,7,20]); we have the high-dimensional deterministic variational formulation of (2.8) and (2.9) as follows:

∫_Γ ρ(y) ∫_{Di} k(x, y)∇ui(x, y) · ∇v(x, y) dx dy = ∫_Γ ρ(y) ∫_{Di} f(x, y) v(x, y) dx dy   (5.5)

for i = 1, 2. Thus, the solution of SPDEs can be found by solving deterministic PDEs, and the finite element method can be used for stochastic problems.

5.3. Function spaces and notations

We introduce Sobolev spaces for our high-dimensional problem as follows:

L²(Γ; H^r(D)) = {v : D × Γ → R | ∥v∥_{L²(Γ;H^r(D))} < ∞},

where

∥v∥²_{L²(Γ;H^r(D))} = ∫_Γ ρ ∥v∥²_{H^r(D)} dy = E∥v∥²_{H^r(D)}.
For simplicity, we write H^r(D) for L²(Γ; H^r(D)), H¹₀(D) for L²(Γ; H¹₀(D)), and H¹_{∂i}(Di) for L²(Γ; H¹_{∂i}(Di)), i = 1, 2. Note that the above Sobolev space is a Hilbert space with the inner product

(u, v)_D = ∫_Γ ρ ∫_D ∇u · ∇v dx dy.

We now define the Banach space for the solution of our high-dimensional elliptic PDE:

C^p(Γ; H) = {v : D × Γ → R | ∥v∥_{C^p(Γ;H)} < ∞}

with the norm

∥v∥_{C^p(Γ;H)} = ∥v∥_{C(Γ;H)} + Σ_{j=1}^N Σ_{k=1}^{pj} ∥∂^k_{yj} v∥_{C(Γ;H)},

where p = (p1, p2, . . . , pN), H is a Hilbert space, and ∥v∥_{C(Γ;H)} = sup_{y∈Γ} ∥v(·, y)∥_H.

5.4. Finite element spaces

Let X1^h and X2^h denote FE spaces such that X1^h ⊂ H¹_{∂1}(D1) and X2^h ⊂ H¹_{∂2}(D2). We assume that the following approximation property holds: for all φ ∈ H²(Di), there is C > 0 such that

inf_{φ^h∈Xi^h} ∥φ − φ^h∥_{H¹(Di)} ≤ Ch∥φ∥_{H²(Di)},   i = 1, 2,   (5.6)
where h > 0 is the maximum grid size. Next, we partition Γ into a finite number of disjoint boxes Bi in R^N as follows:

Γ = ∪_{i∈I} Bi = ∪_{i∈I} ∏_{j=1}^N (a_i^j, b_i^j),

where (a_i^j, b_i^j) ⊂ Γj; we define δ = max{ |b_i^j − a_i^j|/2 : 1 ≤ j ≤ N, i ∈ I }. Let Y^δ ⊂ L²(Γ) be the FE space of piecewise polynomials of degree at most pj in each direction yj, with p = (p1, p2, . . . , pN). Then we have the following interpolation property (see [21]): for all ψ ∈ C^{p+1}(Γ),

inf_{ψ^δ∈Y^δ} ∥ψ − ψ^δ∥_{L²(Γ)} ≤ δ^γ Σ_{j=1}^N ∥∂^{pj+1}_{yj} ψ∥_{L²(Γ)} / (pj + 1)!,   (5.7)

where γ = min_{1≤j≤N} {pj + 1} for δ between 0 and 1. We then define the FE space on Di × Γ for i = 1, 2:

Vi^{hδ} = {v^{hδ} ∈ L²(Di × Γ) | v^{hδ} ∈ span(φ^h ψ^δ : φ^h ∈ Xi^h, ψ^δ ∈ Y^δ)}.

We now denote by R^h the H¹-projection from H¹(Di) onto Xi^h and by P^δ the L²-projection from L²(Γ) onto Y^δ: for each φ ∈ H¹(Di) and each ψ ∈ L²(Γ),

(R^h φ, φ^h)_{H¹(Di)} = (φ, φ^h)_{H¹(Di)}   ∀φ^h ∈ Xi^h,
(P^δ ψ, ψ^δ)_{L²(Γ)} = (ψ, ψ^δ)_{L²(Γ)}   ∀ψ^δ ∈ Y^δ.
It follows from (5.6) and (5.7) that, for all φ ∈ H²(Di) and ψ ∈ C^{p+1}(Γ),

∥φ − R^h φ∥_{H¹(Di)} ≤ Ch∥φ∥_{H²(Di)},   i = 1, 2,
∥ψ − P^δ ψ∥_{L²(Γ)} ≤ δ^γ Σ_{j=1}^N ∥∂^{pj+1}_{yj} ψ∥_{L²(Γ)} / (pj + 1)!.

By using the last two inequalities, for all u ∈ C^{p+1}(Γ; H²(Di)), we have

inf_{u^{hδ}∈Vi^{hδ}} ∥u − u^{hδ}∥_{H¹(Di)} ≤ C ( h∥u∥_{H²(Di)} + δ^γ Σ_{j=1}^N ∥∂^{pj+1}_{yj} u∥_{H¹(Di)} / (pj + 1)! ).   (5.8)
5.5. Error estimates on the high-dimensional problem

We solve the high-dimensional deterministic problem (5.5), whose weak formulation is as follows: Find ui ∈ C^{p+1}(Γ; H¹_{∂i}(Di)) such that

a[ui, v]_{Di} = [f, v]_{Di}   ∀v ∈ C^{p+1}(Γ; H¹_{∂i}(Di));   (5.9)

its FE weak formulation is given by: Find ui^{hδ} ∈ Vi^{hδ} such that

a[ui^{hδ}, v^{hδ}]_{Di} = [f^{hδ}, v^{hδ}]_{Di}   ∀v^{hδ} ∈ Vi^{hδ}   (5.10)

for i = 1, 2. We note that the following elliptic regularity property holds: for any y ∈ Γ, if f(·, y) ∈ L²(D), then u(·, y) ∈ H²(D), and there exists C > 0 such that

∥u(·, y)∥_{H²(D)} ≤ C∥f(·, y)∥_{L²(D)}.   (5.11)

Then we obtain the following result.

Theorem 5.2. Let f ∈ C^{p+1}(Γ; L²(D)), let ui be the solution of (5.9), and let ui^{hδ} be the solution of (5.10) for i = 1, 2. Then there exists C > 0 such that

∥ui − ui^{hδ}∥_{H¹(Di)} ≤ C(h + δ^γ) K ∥f∥_{L²(D)},   (5.12)

where K = Σ_{j=1}^N max{1, Σ_{k=0}^{pj+1} ∥φj(x)∥^k_{L∞(D)} ∥∂^{pj+1−k}_{yj} f(·, y)∥_{L²(D)}}.

Proof. Note that if f ∈ C^{p+1}(Γ; L²(D)) and φj(x) ∈ L∞(D), then, using an inductive argument, for all j = 1, 2, . . . , N and any y ∈ Γ, there exists C > 0 such that

∥∂^{pj+1}_{yj} u(·, y)∥_{H¹(Di)} / (pj + 1)! ≤ C Σ_{k=0}^{pj+1} ∥φj(x)∥^k_{L∞(D)} ∥∂^{pj+1−k}_{yj} f(·, y)∥_{L²(D)}.

Then, from (5.8) together with the last inequality and the regularity property (5.11), we obtain the estimate (5.12).
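For each fixed parameter value y ∈ Γ, the problem behind (5.9) and (5.10) is an ordinary deterministic elliptic equation, so it can be solved with standard finite elements. A minimal 1D P1 sketch for one fixed y (the coefficient model, mesh size, and right-hand side are illustrative assumptions):

```python
import numpy as np

def solve_fixed_y(kappa, f, n=64):
    """P1 finite elements for -(kappa u')' = f on (0,1), u(0) = u(1) = 0,
    with kappa and f functions of x for one fixed parameter value y."""
    x = np.linspace(0.0, 1.0, n + 1)
    h = x[1] - x[0]
    xm = 0.5 * (x[:-1] + x[1:])         # element midpoints
    ke = kappa(xm) / h                  # element conductivities
    A = np.zeros((n + 1, n + 1))
    for e in range(n):
        A[e:e + 2, e:e + 2] += ke[e] * np.array([[1.0, -1.0], [-1.0, 1.0]])
    F = np.zeros(n + 1)                 # midpoint-rule load vector
    F[1:] += f(xm) * h / 2
    F[:-1] += f(xm) * h / 2
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A[1:-1, 1:-1], F[1:-1])
    return x, u

# One fixed y: k(x, y) = 1 + 0.3*y*sin(pi*x), an illustrative truncated
# KL-type coefficient, with f = 1.
y = 0.7
x, u = solve_fixed_y(lambda t: 1.0 + 0.3 * y * np.sin(np.pi * t),
                     lambda t: np.ones_like(t))
print(u.max())
```

For k ≡ 1 and f ≡ 1 the computed nodal values reproduce x(1 − x)/2, which gives a quick correctness check; sweeping or tensorizing such solves over y is what the spaces Vi^{hδ} formalize.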
6. Finite element approximations of the optimality system

In this section, we derive error estimates for the discrete approximations of the optimality system. The FE approximations of the solutions of the optimality system (2.8), (2.9) and (4.4)–(4.6) are defined as follows. Seek u1^{hδ} ∈ V1^{hδ}, u2^{hδ} ∈ V2^{hδ}, ξ1^{hδ} ∈ V1^{hδ}, and ξ2^{hδ} ∈ V2^{hδ} such that

a[u1^{hδ}, v^{hδ}]_{D1} = [f, v^{hδ}]_{D1} − (1/β)[ξ1^{hδ} − ξ2^{hδ}, v^{hδ}]_{∂D0}   ∀v^{hδ} ∈ V1^{hδ},   (6.1)
a[u2^{hδ}, v^{hδ}]_{D2} = [f, v^{hδ}]_{D2} + (1/β)[ξ1^{hδ} − ξ2^{hδ}, v^{hδ}]_{∂D0}   ∀v^{hδ} ∈ V2^{hδ},   (6.2)
a[ξ1^{hδ}, v^{hδ}]_{D1} = [u1^{hδ} − u2^{hδ}, v^{hδ}]_{∂D0}   ∀v^{hδ} ∈ V1^{hδ},   (6.3)
a[ξ2^{hδ}, v^{hδ}]_{D2} = −[u1^{hδ} − u2^{hδ}, v^{hδ}]_{∂D0}   ∀v^{hδ} ∈ V2^{hδ}.   (6.4)
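Since everything is linear, (6.1)–(6.4) can also be assembled and solved as one coupled linear system in (u1, u2, ξ1, ξ2). A minimal 1D sketch for a single deterministic realization with k = 1, f = 1 and interface at x = 0.4 (mesh sizes and β are illustrative assumptions; interface integrals reduce to point values in 1D):

```python
import numpy as np

def stiffness_load(a, b, n):
    """P1 FE stiffness matrix and load vector for -u'' = 1 on (a, b)."""
    h = (b - a) / n
    A = np.zeros((n + 1, n + 1))
    F = np.full(n + 1, h); F[0] = F[-1] = h / 2
    for e in range(n):
        A[e:e + 2, e:e + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
    return A, F

gamma, beta = 0.4, 1e-6
A1, F1 = stiffness_load(0.0, gamma, 16)
A1, F1 = A1[1:, 1:], F1[1:]            # remove Dirichlet dof at x = 0
A2, F2 = stiffness_load(gamma, 1.0, 24)
A2, F2 = A2[:-1, :-1], F2[:-1]         # remove Dirichlet dof at x = 1

m1, m2 = len(F1), len(F2)
e1 = np.zeros(m1); e1[-1] = 1.0        # interface dof of subdomain 1
e2 = np.zeros(m2); e2[0] = 1.0         # interface dof of subdomain 2

# Block system for (U1, U2, Xi1, Xi2) corresponding to (6.1)-(6.4).
Z = np.zeros((m1, m2))
K = np.block([
    [A1, Z, np.outer(e1, e1) / beta, -np.outer(e1, e2) / beta],
    [Z.T, A2, -np.outer(e2, e1) / beta, np.outer(e2, e2) / beta],
    [-np.outer(e1, e1), np.outer(e1, e2), A1, Z],
    [np.outer(e2, e1), -np.outer(e2, e2), Z.T, A2],
])
sol = np.linalg.solve(K, np.concatenate([F1, F2, np.zeros(m1 + m2)]))
U1, U2 = sol[:m1], sol[m1:m1 + m2]
print(abs(U1[-1] - U2[0]), U1[-1])     # small interface jump; u(0.4) = 0.12
```

The interface jump is of order β, and for small β the recovered values approach the global solution u = x(1 − x)/2 of the undecomposed problem.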
6.1. Description of the Brezzi–Rappaz–Raviart theory

We obtain error estimates for the FE approximations through the use of the Brezzi–Rappaz–Raviart (BRR) theory; see [15]. For the sake of completeness, we state the BRR theory and its hypotheses here. We consider a nonlinear problem: Find ψ ∈ X such that

ψ + T G(ψ) = 0,   (6.5)

where T ∈ L(Y; X) and G : X → Y is a C² mapping, with X and Y Banach spaces. Note that ψ is a regular solution of (6.5) if (6.5) is satisfied and (I + T Gψ(ψ)) : X → X is an isomorphism. Let X^h ⊂ X be a FE space and T^h ∈ L(Y; X^h) be a discretization of the operator T. Then the approximation problem for (6.5) reads: Find ψ^h ∈ X^h such that

ψ^h + T^h G(ψ^h) = 0.   (6.6)

We assume that there exists a Banach space Z ↩→ Y such that

Gψ(ψ) ∈ L(X; Z)   ∀ψ ∈ X.   (6.7)

We also assume the following two limit properties:

lim_{h→0} ∥(T^h − T)ω∥_X = 0   ∀ω ∈ Y   (6.8)

and

lim_{h→0} ∥T^h − T∥_{L(Z;X)} = 0.   (6.9)

Note that if Z ↩→↩→ Y (i.e., Z is compactly embedded into Y), then (6.9) follows from (6.8).

Theorem 6.1 (Brezzi–Rappaz–Raviart Theorem). Let X and Y be Banach spaces. Assume that G is a C² mapping from X into Y and that D²G is bounded on all bounded sets of X. Assume that (6.7)–(6.9) hold and that ψ is a regular solution of (6.5). Then there exist a neighborhood O of the origin in X and, for h ≤ h0 small enough, a unique ψ^h ∈ X^h such that ψ^h is a regular solution of (6.6). Moreover, there exists a constant C > 0, independent of h, such that

∥ψ^h − ψ∥_X ≤ C∥(T^h − T)G(ψ)∥_X.   (6.10)
6.2. Error estimates for the discrete optimality system

In order to use the BRR theory, we first fit our stochastic optimality system and its discrete counterpart into the BRR framework. Let

X = H¹_{∂1}(D1) × H¹_{∂2}(D2) × H¹_{∂1}(D1) × H¹_{∂2}(D2),
Y = (H¹_{∂1}(D1))* × (H¹_{∂2}(D2))* × (H¹_{∂1}(D1))* × (H¹_{∂2}(D2))*,
Z = L²(∂D0) × L²(∂D0) × L²(∂D0) × L²(∂D0),

and

X^{hδ} = V1^{hδ} × V2^{hδ} × V1^{hδ} × V2^{hδ}.

Note that the embedding Z ⊂ Y is compact. We define the operator T ∈ L(Y, X) as follows: for (σ1, σ2, η1, η2) ∈ Y and (u1, u2, ξ1, ξ2) ∈ X,

T(σ1, σ2, η1, η2) = (u1, u2, ξ1, ξ2)

if and only if

a[ui, v]_{Di} = [σi, v]_{Di}   ∀v ∈ H¹_{∂i}(Di)   (6.11)

and

a[ξi, v]_{Di} = [ηi, v]_{Di}   ∀v ∈ H¹_{∂i}(Di)   (6.12)

for i = 1, 2. Also, we define the discrete operator T^{hδ} ∈ L(Y, X^{hδ}): for (σ1, σ2, η1, η2) ∈ Y and (u1^{hδ}, u2^{hδ}, ξ1^{hδ}, ξ2^{hδ}) ∈ X^{hδ},

T^{hδ}(σ1, σ2, η1, η2) = (u1^{hδ}, u2^{hδ}, ξ1^{hδ}, ξ2^{hδ})
if and only if

a[ui^{hδ}, v^{hδ}]_{Di} = [σi, v^{hδ}]_{Di}   ∀v^{hδ} ∈ Vi^{hδ}

and

a[ξi^{hδ}, v^{hδ}]_{Di} = [ηi, v^{hδ}]_{Di}   ∀v^{hδ} ∈ Vi^{hδ}

for i = 1, 2. We now define the mapping G : X → Y: for (σ1, σ2, η1, η2) ∈ Y and (u1, u2, ξ1, ξ2) ∈ X,

G(u1, u2, ξ1, ξ2) = (σ1, σ2, η1, η2)

if and only if

[σ1, v]_{D1} = −[f, v]_{D1} + (1/β)[ξ1 − ξ2, v]_{∂D0}   ∀v ∈ H¹_{∂1}(D1),
[σ2, v]_{D2} = −[f, v]_{D2} − (1/β)[ξ1 − ξ2, v]_{∂D0}   ∀v ∈ H¹_{∂2}(D2),
[η1, v]_{D1} = −[u1 − u2, v]_{∂D0}   ∀v ∈ H¹_{∂1}(D1),
[η2, v]_{D2} = [u1 − u2, v]_{∂D0}   ∀v ∈ H¹_{∂2}(D2).

Then, by the definition of the operators T, T^{hδ}, and G, it is clear that the reduced optimality system (2.8), (2.9), and (4.4)–(4.6) and the discrete stochastic optimality system (6.1)–(6.4) can be written as

(u1, u2, ξ1, ξ2) + T G(u1, u2, ξ1, ξ2) = 0   (6.13)

and

(u1^{hδ}, u2^{hδ}, ξ1^{hδ}, ξ2^{hδ}) + T^{hδ} G(u1^{hδ}, u2^{hδ}, ξ1^{hδ}, ξ2^{hδ}) = 0,

respectively. We now proceed to verify all the assumptions of Theorem 6.1. Denote the Fréchet derivative of G with respect to (u1, u2, ξ1, ξ2) by DG(u1, u2, ξ1, ξ2). Then for (u1, u2, ξ1, ξ2) ∈ X, we obtain

DG(u1, u2, ξ1, ξ2) · (ũ1, ũ2, ξ̃1, ξ̃2) = (σ1, σ2, η1, η2)

if and only if

[σ1, v]_{D1} = (1/β)[ξ̃1 − ξ̃2, v]_{∂D0}   ∀v ∈ H¹_{∂1}(D1),
[σ2, v]_{D2} = −(1/β)[ξ̃1 − ξ̃2, v]_{∂D0}   ∀v ∈ H¹_{∂2}(D2),
[η1, v]_{D1} = −[ũ1 − ũ2, v]_{∂D0}   ∀v ∈ H¹_{∂1}(D1),
[η2, v]_{D2} = [ũ1 − ũ2, v]_{∂D0}   ∀v ∈ H¹_{∂2}(D2).
Note that $DG(u_1, u_2, \xi_1, \xi_2) \in \mathcal{L}(X; Z)$, that $G$ is twice continuously differentiable, and that $D^2G$ is bounded on all bounded sets of $X$.

Lemma 6.2. Let $\sigma_i \in (\widetilde{H}^1_{\partial_i}(D_i))^*$, let $u_i \in \widetilde{H}^1_{\partial_i}(D_i)$ be the solution of
$$a[u_i, v]_{D_i} = [\sigma_i, v]_{D_i} \quad \forall v \in \widetilde{H}^1_{\partial_i}(D_i), \qquad (6.14)$$
and let $u_i^{h\delta} \in V_i^{h\delta}$ be the solution of
$$a[u_i^{h\delta}, v^{h\delta}]_{D_i} = [\sigma_i, v^{h\delta}]_{D_i} \quad \forall v^{h\delta} \in V_i^{h\delta} \qquad (6.15)$$
for $i = 1, 2$. Then we have
$$\|u_i - u_i^{h\delta}\|_{\widetilde{H}^1(D_i)} \to 0 \quad \text{as } h, \delta \to 0 \text{ for } i = 1, 2.$$

Proof. Let $\epsilon > 0$ be given and $\sigma_i \in (\widetilde{H}^1_{\partial_i}(D_i))^*$. We fix $y$ and, abusing notation, put $\sigma_i = \sigma_i(\cdot, y)$; then $\sigma_i \in (H^1_{\partial_i}(D_i))^*$. Thus, there is a sequence of $C^\infty$-functions $\{\sigma_{ik}\} \subset L^2(D)$ such that $\sigma_{ik} \to \sigma_i$ in $(H^1_{\partial_i}(D_i))^*$; i.e., there exists $k_0$ such that
$$\|\sigma_i - \sigma_{ik_0}\|_{(H^1_{\partial_i}(D_i))^*} < \epsilon \quad \text{for } i = 1, 2.$$
Now consider the following equations:
$$a[u_{ik_0}, v]_{D_i} = [\sigma_{ik_0}, v]_{D_i} \quad \forall v \in \widetilde{H}^1_{\partial_i}(D_i) \qquad (6.16)$$
and
$$a[u_{ik_0}^{h\delta}, v^{h\delta}]_{D_i} = [\sigma_{ik_0}, v^{h\delta}]_{D_i} \quad \forall v^{h\delta} \in V_i^{h\delta} \qquad (6.17)$$
for $i = 1, 2$. From (6.14) and (6.16) and from (6.15) and (6.17), there exists $C > 0$ such that
$$\|u_i - u_{ik_0}\|_{H^1(D_i)} < C\|\sigma_i - \sigma_{ik_0}\|_{(H^1_{\partial_i}(D_i))^*}$$
and
$$\|u_i^{h\delta} - u_{ik_0}^{h\delta}\|_{H^1(D_i)} < C\|\sigma_i - \sigma_{ik_0}\|_{(H^1_{\partial_i}(D_i))^*}$$
for $i = 1, 2$. Hence, we have
$$\|u_i - u_i^{h\delta}\|_{H^1(D_i)} \le \|u_i - u_{ik_0}\|_{H^1(D_i)} + \|u_{ik_0} - u_{ik_0}^{h\delta}\|_{H^1(D_i)} + \|u_{ik_0}^{h\delta} - u_i^{h\delta}\|_{H^1(D_i)} \le 2C\epsilon + \|u_{ik_0} - u_{ik_0}^{h\delta}\|_{H^1(D_i)}$$
for $i = 1, 2$. On the other hand, because $\sigma_{ik_0} \in L^2(D)$, Theorem 5.2 yields
$$\|u_{ik_0} - u_{ik_0}^{h\delta}\|_{H^1(D_i)} \le C(h + \delta^\gamma)K\|\sigma_{ik_0}\|_{L^2(D)},$$
where $K = \sum_{j=1}^{N}\sum_{k=0}^{p_j+1} \max\{1, \|\varphi_j(x)\|^{p_j+1-k}_{L^\infty(D)}\}\,\|\partial^k_{y_j}\sigma_{ik_0}\|_{L^2(D)}$. Thus, from the last two inequalities, by letting $h, \delta \to 0$, we obtain $\|u_i - u_i^{h\delta}\|_{H^1(D_i)} < (2C + 1)\epsilon$ for $i = 1, 2$. Because $\epsilon$ is arbitrary and the last inequality holds for all $y \in \Gamma$, this completes the proof. $\square$
For any $(\sigma_1, \sigma_2, \eta_1, \eta_2) \in Y$,
$$\|(T - T^{h\delta})(\sigma_1, \sigma_2, \eta_1, \eta_2)\|^2_X = \|u_1 - u_1^{h\delta}\|^2_{\widetilde{H}^1(D_1)} + \|\xi_1 - \xi_1^{h\delta}\|^2_{\widetilde{H}^1(D_1)} + \|u_2 - u_2^{h\delta}\|^2_{\widetilde{H}^1(D_2)} + \|\xi_2 - \xi_2^{h\delta}\|^2_{\widetilde{H}^1(D_2)}.$$
Hence, from the lemma above, we have
$$\|(T - T^{h\delta})(\sigma_1, \sigma_2, \eta_1, \eta_2)\|_X \to 0 \quad \text{as } h, \delta \to 0.$$
Then (6.9) follows from the compact embedding $Z \subset Y$. Note that a solution of (6.13) is regular owing to the linearity and well-posedness of (6.11) and (6.12). We have now verified all of the assumptions of Theorem 6.1, so we obtain the following result.

Theorem 6.3. Let $(u_1, u_2, \xi_1, \xi_2) \in X$ be the solution of the optimality system (2.8), (2.9), and (4.4)–(4.6), and let $(u_1^{h\delta}, u_2^{h\delta}, \xi_1^{h\delta}, \xi_2^{h\delta}) \in X^{h\delta}$ be the solution of the discrete optimality system (6.1)–(6.4). Then we have
$$\|u_1 - u_1^{h\delta}\|_{\widetilde{H}^1(D_1)} + \|\xi_1 - \xi_1^{h\delta}\|_{\widetilde{H}^1(D_1)} + \|u_2 - u_2^{h\delta}\|_{\widetilde{H}^1(D_2)} + \|\xi_2 - \xi_2^{h\delta}\|_{\widetilde{H}^1(D_2)} \le C(h + \delta^\gamma)\|f\|_{\widetilde{L}^2(D)}.$$
7. Numerical experiments

The solutions of the optimization problem may be determined by solving the optimality system (2.8), (2.9), and (4.4)–(4.6). The optimality system is a two-point boundary value problem in which initial conditions are specified for the state system and terminal conditions are specified for the adjoint system. To achieve a parallelizable algorithm, we must decouple the subdomain problems; to make the individual subdomain problems tractable, we should uncouple the state and adjoint systems as well. In this paper, we consider a simple gradient method that effects both of these uncouplings. Given an initial guess $g^{(0)}$, let
$$g^{(n+1)} = g^{(n)} - \frac{\alpha}{\beta}\frac{dJ_\beta(g^{(n)})}{dg} \quad \text{for } n = 0, 1, 2, \ldots,$$
where $\alpha/\beta$ is a step size. The first derivative of $J_\beta$ yields, for $n = 0, 1, 2, \ldots$,
$$g^{(n+1)} = (1 - \alpha)g^{(n)} - \frac{\alpha}{\beta}\bigl(\xi_1^{(n)} - \xi_2^{(n)}\bigr)\big|_{\partial D_0},$$
where $\xi_1^{(n)}$ and $\xi_2^{(n)}$ are determined from (4.4) and (4.5), respectively. The algorithm based on this iteration is as follows.
Table 1
δ = 1/32.

h       L2-error    L2-rate    H1-error    H1-rate
1/2     1.819e-01   –          5.368e-01   –
1/4     4.706e-02   1.95       2.800e-01   0.94
1/8     1.187e-02   1.99       1.414e-01   0.99
1/16    2.973e-03   2.00       7.089e-02   1.00
1/32    7.448e-04   2.00       3.547e-02   1.00
Algorithm 7.1.
1. Choose $g^{(0)}$.
2. For $n = 0, 1, 2, \ldots$:
(a) determine $u_1^{(n)}$ and $u_2^{(n)}$ from
$$a[u_1^{(n)}, v]_{D_1} = [f, v]_{D_1} + [g^{(n)}, v]_{\partial D_0} \quad \forall v \in \widetilde{H}^1_{\partial_1}(D_1),$$
$$a[u_2^{(n)}, v]_{D_2} = [f, v]_{D_2} - [g^{(n)}, v]_{\partial D_0} \quad \forall v \in \widetilde{H}^1_{\partial_2}(D_2);$$
(b) determine $\xi_1^{(n)}$ and $\xi_2^{(n)}$ from
$$a[\xi_1^{(n)}, v]_{D_1} - [u_1^{(n)} - u_2^{(n)}, v]_{\partial D_0} = 0 \quad \forall v \in \widetilde{H}^1_{\partial_1}(D_1),$$
$$a[\xi_2^{(n)}, v]_{D_2} + [u_1^{(n)} - u_2^{(n)}, v]_{\partial D_0} = 0 \quad \forall v \in \widetilde{H}^1_{\partial_2}(D_2);$$
(c) determine $g^{(n+1)}$ from
$$g^{(n+1)} = (1 - \alpha)g^{(n)} - \frac{\alpha}{\beta}\bigl(\xi_1^{(n)} - \xi_2^{(n)}\bigr)\big|_{\partial D_0}.$$
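To make the structure of the iteration concrete, here is a minimal 1-D prototype of Algorithm 7.1 for a single fixed realization $Z(\omega) = z$ (so that $k = \mu + \sigma z$ is a deterministic constant), using piecewise linear finite elements on each subdomain. The sample value $z$, the mesh size, the step size $\alpha$, and the stopping tolerance below are our own illustrative choices, not the settings used in the experiments that follow.

```python
# 1-D prototype of Algorithm 7.1 for one fixed sample z of the random input,
# so k = mu + sigma*z is a constant; P1 finite elements on each subdomain.
import numpy as np

mu, sigma, z = 1.0, 0.5, 0.3           # one realization of the coefficient
k = mu + sigma * z
f = lambda x: 12.0 * x * (x - 1.0)     # right-hand side from the paper
u_ex = lambda x: x**3 * (2.0 - x) / k  # exact solution for this realization

n, h = 64, 1.0 / 64                    # elements and mesh size per subdomain
beta, alpha = 1.0e-3, 0.25e-3          # penalty and step size (our choices)

def matrix(free_first):
    """P1 stiffness matrix: one end Dirichlet, the interface end free."""
    A = (k / h) * (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
    j = 0 if free_first else n - 1
    A[j, j] = k / h                    # interface node touches only one element
    return A

x1 = np.linspace(h, 1.0, n)            # D1 unknowns; last node is the interface
x2 = np.linspace(1.0, 2.0 - h, n)      # D2 unknowns; first node is the interface
A1, A2 = matrix(False), matrix(True)
e1 = np.zeros(n); e1[-1] = 1.0         # unit vectors at the interface nodes
e2 = np.zeros(n); e2[0] = 1.0
b1 = h * f(x1); b1[-1] = 0.5 * h * f(1.0)   # lumped load vectors
b2 = h * f(x2); b2[0] = 0.5 * h * f(1.0)

g, u1, u2 = 0.0, np.zeros(n), np.zeros(n)
for it in range(200):
    u1_new = np.linalg.solve(A1, b1 + g * e1)   # step 2(a): Neumann data +g
    u2_new = np.linalg.solve(A2, b2 - g * e2)   #            Neumann data -g
    m = u1_new[-1] - u2_new[0]                  # interface mismatch u1(1) - u2(1)
    xi1 = np.linalg.solve(A1, m * e1)           # step 2(b): adjoints driven by m
    xi2 = np.linalg.solve(A2, -m * e2)
    g = (1.0 - alpha) * g - (alpha / beta) * (xi1[-1] - xi2[0])  # step 2(c)
    change = max(np.abs(u1_new - u1).max(), np.abs(u2_new - u2).max())
    u1, u2 = u1_new, u2_new
    if change < 1e-10:                          # stopping criterion on u
        break

err = max(np.abs(u1 - u_ex(x1)).max(), np.abs(u2 - u_ex(x2)).max())
print(f"iterations: {it + 1}, mismatch: {abs(u1[-1] - u2[0]):.1e}, error: {err:.1e}")
```

As in the analysis, the interface mismatch at convergence is $O(\beta)$ rather than zero, since matching is enforced only through the penalized functional; the two subdomain solves in each step are independent and could run in parallel.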
In practice, a stopping criterion should be added in order to terminate the iteration. In our numerical experiments, we used the stopping criterion
$$\|\text{change in successive values of } u\| < 10^{-5}.$$
As we can see from the algorithm above, the parallelizability is obvious, and the major computational costs of the algorithm are in Steps 2(a) and 2(b), in which the determinations of $u_i^{(n)}$ and $\xi_i^{(n)}$ for $i = 1, 2$ are uncoupled. Of course, Algorithm 7.1 may not be the best method for solving the optimality system. However, we are still interested in using it because the convergence of the gradient method can be proved theoretically under some appropriate assumptions (see, e.g., [22]).

We now give some preliminary computational results. Let the domain $D = (0, 2)$ be divided into two subdomains $D_1 = (0, 1)$ and $D_2 = (1, 2)$ with the interface $\partial D_0 = \{1\}$. The finite element spaces $X_i^h$, $i = 1, 2$, and $Y^\delta$ are chosen to consist of continuous, piecewise linear polynomials. The computations were carried out for the problem with $k = \mu + \sigma Z(\omega)$ and $f = 12x(x - 1)$, which has the exact solution
$$u_{ex}(x, \omega) = \frac{x^3(2 - x)}{\mu + \sigma Z(\omega)},$$
where $\mu = 1$, $\sigma = 0.5$, and $Z$ is a random variable uniformly distributed on $[-1, 1]$.

In the experiments with the gradient Algorithm 7.1, we choose $\beta$ small so that the functional (2.5) is not overpenalized; we then choose $\alpha$ so that the gradient method converges while we still obtain good agreement with the exact solution. In our calculations, for a fixed value of $\beta = 10^{-3}$, we used a different step size $\alpha/\beta$ for each $\delta$ to obtain convergence; e.g., we used $\alpha/\beta = 10^{-(n+3)}$ when $\delta = 2^{-n}$ for $n = 0, 1, 2, \ldots$.

We first verify the convergence rate with respect to $h$, the spatial mesh size, for a fixed small value of $\delta$, the mesh size in stochastic space. To this end, we used $\delta = 1/32$ so that the rate of convergence with respect to $h$ is not compromised. The $L^2$-relative errors between the exact solution and the FE solutions based on the DDM with the gradient method were computed for the mesh sizes $h = 1/2$, $1/4$, $1/8$, $1/16$, and $1/32$; they are presented in Table 1 and graphed in Fig. 2. The rates of convergence of the FE solutions, found by comparing the consecutive errors on pairs of grids, are also presented in Table 1. Similarly, $H^1$-relative errors and $H^1$-rates were calculated, and the results are reported in Table 1 and Fig. 2. As the results show, the $H^1$-rates of convergence approach the rate obtained in Theorem 6.3 as $h$ decreases; the $L^2$-rates are exactly what we expect for piecewise linear FE spaces when $h = 1/16$ and $h = 1/32$.

In a second experiment (see Table 2 and Fig. 3), numerical calculations were performed with a fixed small value $h = 2^{-14}$ and various values of $\delta$ to illustrate the convergence rate in stochastic space. Table 2 and Fig. 3 provide the $L^2$-relative errors, $L^2$-rates, $H^1$-relative errors, and $H^1$-rates for various mesh sizes $\delta$.
We observe that the $H^1$-rate of convergence approaches the rate obtained in Theorem 6.3, and the $L^2$-rate is exactly what we expect for spaces of piecewise polynomials of degree at most one.
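The rates reported in the tables are obtained by comparing the errors on consecutive grids: since the mesh size halves from one row to the next, rate = log2(e_coarse/e_fine). A sketch with the $L^2$-errors of Table 1 (the numbers are copied from the table):

```python
# Compute convergence rates from errors on successively halved grids:
# rate = log2(e_coarse / e_fine).  Errors below are the L2 column of Table 1.
import math

l2_errors = [1.819e-1, 4.706e-2, 1.187e-2, 2.973e-3, 7.448e-4]  # h = 1/2, ..., 1/32
l2_rates = [math.log2(coarse / fine)
            for coarse, fine in zip(l2_errors, l2_errors[1:])]
print([round(r, 2) for r in l2_rates])   # -> [1.95, 1.99, 2.0, 2.0]
```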
Table 2
h = 1/2^14.

δ       L2-error    L2-rate    H1-error    H1-rate
1       3.497e-02   –          3.497e-02   –
1/2     1.029e-02   1.76       1.029e-02   1.76
1/4     2.675e-03   1.94       2.676e-03   1.94
1/8     6.705e-04   2.00       6.741e-04   1.99
Fig. 2. Convergence of DDM solutions.
Fig. 3. Convergence of DDM solutions.
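Because $Z$ is uniform on $[-1, 1]$, statistics of $u_{ex}$ are available in closed form; for instance $E[1/(\mu + \sigma Z)] = \frac{1}{2\sigma}\ln\frac{\mu+\sigma}{\mu-\sigma}$, which equals $\ln 3$ for $\mu = 1$, $\sigma = 0.5$. This makes inexpensive sanity checks of the computed statistics possible. A small Monte Carlo sketch (the evaluation point, sample size, and seed are our own choices):

```python
# Monte Carlo check of the exact mean of u_ex at one spatial point; for Z
# uniform on [-1,1], E[1/(mu + sigma*Z)] = ln((mu+sigma)/(mu-sigma))/(2*sigma),
# which is ln 3 for mu = 1, sigma = 0.5.
import math
import numpy as np

mu, sigma = 1.0, 0.5
rng = np.random.default_rng(0)
Z = rng.uniform(-1.0, 1.0, size=200_000)         # samples of the random input

x = 0.7                                          # evaluation point in D
u_samples = x**3 * (2.0 - x) / (mu + sigma * Z)  # samples of u_ex(x, .)
mean_mc = u_samples.mean()
mean_exact = x**3 * (2.0 - x) * math.log(3.0)    # closed-form mean at x
err = abs(mean_mc - mean_exact)
```

With 200,000 samples the Monte Carlo standard error is well below 10^-3, so the sample mean should match the closed-form value to roughly three digits.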
Remark 7.2. As an ongoing project, we have calculated the variances of the random solutions; these show convergence slower than the rates obtained in this paper. Computational results on the convergence of the variance of the random solutions are given in the Appendix. More numerical experiments and theoretical convergence results, including rates, will be addressed in a follow-up paper.

Acknowledgments

The authors thank the anonymous referees for various useful comments that helped us improve the presentation of this paper. The work of Jangwoon Lee was partially supported by the Jepson Fellowship and Sabbatical Leave Programs at the University of Mary Washington; the work of Jeehyun Lee was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2013R1A1A2058848).
Table 3
Convergence results for the variance with a fixed δ = 1/32.

h       L2-error   L2-rate
1/2     2.83e-01   –
1/4     7.30e-02   1.96
1/8     1.86e-02   1.97
1/16    4.68e-03   2.00
1/32    1.17e-03   2.00
Table 4
Convergence results for the variance with a fixed h = 1/2^14.

δ       L2-error   L2-rate
1       5.18e-01   –
1/2     4.39e-01   0.24
1/4     3.26e-01   0.43
1/8     2.34e-01   0.48
1/16    1.66e-01   0.50
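Applying the same consecutive-grid computation to the $L^2$-errors of Table 4 shows the δ-rates for the variance tending to one half, i.e., the slower convergence mentioned in Remark 7.2 (a sketch; the numbers are copied from the table):

```python
# delta-rates for the variance (L2 column of Table 4); the observed order
# tends toward 1/2 rather than the rates seen for the solution itself.
import math

var_errors = [5.18e-1, 4.39e-1, 3.26e-1, 2.34e-1, 1.66e-1]  # delta = 1, ..., 1/16
var_rates = [math.log2(coarse / fine)
             for coarse, fine in zip(var_errors, var_errors[1:])]
print([round(r, 2) for r in var_rates])   # -> [0.24, 0.43, 0.48, 0.5]
```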
Appendix. Numerical results for Remark 7.2

See Tables 3 and 4.

References

[1] M. Gunzburger, H. Kwon, J. Peterson, An optimization based domain decomposition method for partial differential equations, Comput. Math. Appl. 37 (1999) 77–93.
[2] M. Gunzburger, J. Lee, A domain decomposition method for optimization problems for partial differential equations, Comput. Math. Appl. 40 (2000) 177–192.
[3] W. Guo, L.S. Hou, Generalizations and accelerations of Lions' nonoverlapping domain decomposition method for linear elliptic PDE, SIAM J. Numer. Anal. 41 (2003) 2056–2080.
[4] L.S. Hou, J. Lee, A Robin–Robin non-overlapping domain decomposition method for an elliptic boundary control problem, Int. J. Numer. Anal. Model. 8 (2011) 443–465.
[5] J. Lee, An optimization-based domain decomposition method for parabolic equations, Appl. Math. Comput. 175 (2006) 1644–1656.
[6] I. Babuska, P. Chatzipantelidis, On solving elliptic stochastic partial differential equations, Comput. Methods Appl. Mech. Engrg. 191 (2002) 4093–4122.
[7] I. Babuska, R. Tempone, G.E. Zouraris, Galerkin finite element approximations of stochastic elliptic partial differential equations, SIAM J. Numer. Anal. 42 (2) (2004) 800–825.
[8] J. Galvis, M. Sarkis, Approximating infinity-dimensional stochastic Darcy equations without uniform ellipticity, SIAM J. Numer. Anal. 47 (2009) 3624–3651.
[9] M. Gunzburger, H.-C. Lee, J. Lee, Error estimates of stochastic optimal Neumann boundary control problems, SIAM J. Numer. Anal. 49 (4) (2011) 1532–1552.
[10] L.S. Hou, J. Lee, H. Manouzi, Finite element approximations of stochastic optimal control problems constrained by stochastic elliptic PDEs, J. Math. Anal. Appl. 384 (2011) 87–103.
[11] C. Jin, X.-C. Cai, C. Li, Parallel domain decomposition methods for stochastic elliptic equations, SIAM J. Sci. Comput. 29 (2007) 2096–2114.
[12] H.-C. Lee, J. Lee, A stochastic Galerkin method for stochastic control problems, Commun. Comput. Phys. 14 (2013) 77–106.
[13] K. Zhang, R. Zhang, Y. Yin, S. Yu, Domain decomposition methods for linear and semilinear elliptic stochastic partial differential equations, Appl. Math. Comput. 195 (2008) 630–640.
[14] R.G. Ghanem, P.D. Spanos, Stochastic Finite Elements: A Spectral Approach, Springer-Verlag, 1991.
[15] F. Brezzi, J. Rappaz, P. Raviart, Finite-dimensional approximation of nonlinear problems. Part I: Branches of nonsingular solutions, Numer. Math. 36 (1980) 1–25.
[16] M.D. Gunzburger, L.S. Hou, Finite-dimensional approximation of a class of constrained nonlinear optimal control problems, SIAM J. Control Optim. 34 (1996) 1001–1043.
[17] P. Ciarlet, The Finite Element Method for Elliptic Problems, North-Holland, Amsterdam, 1978.
[18] L.C. Evans, Partial Differential Equations, American Mathematical Society, Providence, RI, 1998.
[19] R. Courant, D. Hilbert, Methods of Mathematical Physics, Interscience, New York, 1953.
[20] M.K. Deb, I. Babuska, J.T. Oden, Solution of stochastic partial differential equations using Galerkin finite element techniques, Comput. Methods Appl. Mech. Engrg. 190 (2001) 6359–6372.
[21] S.C. Brenner, L.R. Scott, The Mathematical Theory of Finite Element Methods, second ed., Springer, New York, 2002.
[22] D. Keyes, J. Xu, Domain Decomposition Methods in Scientific and Engineering Computing, AMS, Providence, RI, 1994.