Error analysis of finite element approximations of the optimal control problem for stochastic Stokes equations with additive white noise

Youngmi Choi a, Hyung-Chun Lee b,*

a College of Liberal Arts, Anyang University, Anyang 14028, Republic of Korea
b Department of Mathematics, Ajou University, Suwon 16499, Republic of Korea

Applied Numerical Mathematics (www.elsevier.com/locate/apnum)

Article history: Received 15 April 2017; Received in revised form 21 January 2018; Accepted 1 March 2018.
Keywords: Stochastic Stokes equations; Optimal control; White noise; BRR theory

Abstract. We consider finite element approximations of solutions of optimal control problems for stochastic Stokes equations whose forcing term is perturbed by white noise. To obtain the most efficient deterministic optimal control, we set up the cost functional as proposed in [20]. Error estimates are established for the fully coupled optimality system using Green's functions and Brezzi–Rappaz–Raviart theory. Numerical examples are presented to illustrate the theoretical results. © 2018 IMACS. Published by Elsevier B.V. All rights reserved.

1. Introduction

Optimal control problems for partial differential equations have been studied widely (see [1,9,10,15,14] and the references therein). Recently there has been increased interest in the mathematical analysis and computation of stochastic partial differential equations (see [2–4,8,11,18,22,23] and the references therein). In [6], numerical solutions of the stochastic Stokes equations driven by white noise perturbed forcing terms were studied using finite element methods. We use those results to analyze our optimal control problems for the stochastic Stokes equations. The type of optimal control problem we consider is as follows: given a white noise Ẇ and a deterministic function U, we seek an optimal state field u* and an optimal control field f* such that a given cost functional J(u, f; U) is minimized over states u and controls f satisfying the stochastic Stokes equations

$$
\begin{aligned}
-\nu \Delta u + \nabla p &= f + \sigma \dot W && \text{in } D,\\
\operatorname{div} u &= 0 && \text{in } D, \qquad (1.1)\\
u &= 0 && \text{on } \partial D.
\end{aligned}
$$

* Corresponding author. E-mail addresses: [email protected] (Y. Choi), [email protected] (H.-C. Lee).
https://doi.org/10.1016/j.apnum.2018.03.002
0168-9274/© 2018 IMACS. Published by Elsevier B.V. All rights reserved.


Here, D is a convex polygon in R² or a convex polyhedron in R³, u is the velocity of the fluid flow, p is the pressure, f is a control, ν is the viscosity constant, σ is a positive continuous function on D, and Ẇ = (Ẇ₁, ..., Ẇ_d) is the white noise such that

$$ E[\dot W_j(x)\,\dot W_j(x')] = \delta(x - x'), \qquad x, x' \in D, \quad j = 1, \ldots, d, $$

where δ denotes the usual Dirac delta function and E the expectation. We assume that Ẇ_i and Ẇ_j, i ≠ j, i, j = 1, ..., d, are independent. We also assume that p satisfies the zero mean constraint ∫_D p dx = 0. The objective of this optimal control problem is to seek state variables u and p and a control f which minimize the expectation of the L²-norm distance between u and U subject to (1.1). In practice, the desired state U and the control f are deterministic. To find a deterministic control, one could first set up the stochastic control problem and then take the average of the stochastic control; however, the resulting computations are complicated and time-consuming. The optimality system of an optimal control problem is usually much more complicated than the state equations alone, so the convergence analysis for an optimal control problem for Stokes equations with random inputs is much more involved than that for the Stokes equations with random inputs themselves. In [20], an efficient method for determining an applicable deterministic control f was proposed. To obtain a more accurate solution of the stochastic optimal control problem, we set up the control problem as follows: seek a stochastic optimal state (u*, p*) and a deterministic optimal control f* such that the deterministic cost functional



$$ J(u, f; U) = E\!\left[\frac{1}{2}\int_D |u - U|^2\,dx\right] + \frac{\alpha}{2}\int_D |f|^2\,dx \qquad (1.2) $$

is minimized, subject to u, p and f satisfying the stochastic Stokes equations (1.1). The second term in the cost functional (1.2) is added to limit the size of the control. The positive penalty parameter α can be used to adjust the relative importance of the two terms in the functional. The plan of the paper is as follows. In the remainder of this section we introduce weak solutions of (1.1) using Green's functions and the approximation errors for the stochastic Stokes equations with discretized white noise forcing terms. In Section 2, we define an optimal solution of the optimal control problem for the stochastic Stokes equations with continuous white noise and an approximate solution of the optimal control problem obtained by discretizing the white noise. We show that each optimization problem admits an optimal solution, and we derive the optimality system of equations via Gâteaux differentiability. In Section 3, we construct finite element approximations of the stochastic Stokes equations and carry out the error analysis using Green's functions and Brezzi–Rappaz–Raviart theory. Finally, in Section 4, we present numerical simulation results using the method and algorithm constructed in the preceding sections.

1.1. Weak solutions

In this subsection we use the Green's functions of the Stokes equations to define the weak solution of (1.1). We first fix some standard notation used throughout the rest of the paper. For v = (v₁, ..., v_n)ᵀ ∈ Rⁿ, denote |v| = (v₁² + ⋯ + v_n²)^{1/2}. We use ‖·‖ to denote the norm in L²(D) and ‖·‖_k the usual norm in the Sobolev space H^k(D) for

k ∈ R. The Green's functions for the Stokes equations are given by G = (G_ij)_{d×d} and H = (H_j)_{1×d} for u and p, respectively (see, e.g., [19]), where

$$ G_{ij}(x,y) = -\frac{1}{4\pi\nu}\left(\delta_{ij}\ln\frac{1}{|x-y|} + \frac{(x_i-y_i)(x_j-y_j)}{|x-y|^2}\right) + g^1_{ij}(x,y), \qquad i,j = 1,2, $$
$$ H_j(x,y) = \frac{\partial}{\partial x_j} D_j(x,y), \qquad D_j(x,y) = \frac{1}{2\pi}\ln\frac{1}{|x-y|} + h^1_j(x,y), \qquad j = 1,2, $$

for d = 2 and

$$ G_{ij}(x,y) = -\frac{1}{8\pi\nu}\left(\frac{\delta_{ij}}{|x-y|} + \frac{(x_i-y_i)(x_j-y_j)}{|x-y|^3}\right) + g^2_{ij}(x,y), \qquad i,j = 1,2,3, $$
$$ H_j(x,y) = \frac{\partial}{\partial x_j} D_j(x,y), \qquad D_j(x,y) = \frac{1}{4\pi}\frac{1}{|x-y|} + h^2_j(x,y), \qquad j = 1,2,3, $$

for d = 3. Here g^k_ij and h^k_j are piecewise differentiable functions on D̄. To simplify the presentation we assume that σ is a constant function on D. We define the weak solution of (1.1) as follows:

$$ u(x) = \int_D G(x,y)\,f(y)\,dy + \sigma \int_D G(x,y)\,dW(y) \qquad (1.3) $$

and

$$ p(x) = \int_D H(x,y)\,f(y)\,dy + \sigma\,\operatorname{div} \int_D D(x,y)\,dW(y), \qquad (1.4) $$

where the stochastic integral is defined in Itô's sense. From Itô's isometry, it is easy to see that u ∈ (L²(D))^d almost surely. We will treat p as a function in H⁻¹(D). In fact it can easily be shown that

$$ E\|p\|_{-1}^2 \le C\|D\|^2. $$

We state the properties concerning the regularity of the Green's functions from [6].

Lemma 1.1. There exists a constant C such that if |y − z| is sufficiently small, then



$$ \int_D |G(x,y) - G(x,z)|^2\,dx \le C\,|\ln(|y-z|)|\,|y-z|^2, \qquad (1.5) $$
$$ \int_D |D(x,y) - D(x,z)|^2\,dx \le C\,|\ln(|y-z|)|\,|y-z|^2 \qquad (1.6) $$

for d = 2 and

$$ \int_D |G(x,y) - G(x,z)|^2\,dx \le C\,|y-z|, \qquad (1.7) $$
$$ \int_D |D(x,y) - D(x,z)|^2\,dx \le C\,|y-z| \qquad (1.8) $$

for d = 3.

1.2. Discretization of white noise and error estimates

In this subsection we define an approximate solution of (1.1) by discretizing the white noise Ẇ. First we introduce a discretization for the white noise. Let {T_s} be a family of triangulations of D, where s ∈ (0,1) is the mesh size. We assume that the family is quasi-uniform, i.e., there exist positive constants ρ₁ and ρ₂ such that

$$ \rho_1 s \le R^{\mathrm{inr}}_T < R^{\mathrm{cir}}_T \le \rho_2 s \qquad \forall T \in T_s,\ \forall\, 0 < s < 1, $$

where R^inr_T and R^cir_T are the inradius and the circumradius of T, respectively. Write

$$ \xi^j_T = \frac{1}{\sqrt{|T|}}\int_T 1\,dW_j(x), \qquad j = 1, \ldots, d, $$

for each triangle T ∈ T_s, where |T| denotes the area of T. It is well known that {ξ^j_T}_{T∈T_s} is a family of independent identically distributed normal random variables with mean 0 and variance 1 (see [25]). Then the piecewise constant approximation to Ẇ_j(x) is given by

$$ \dot W^s_j(x) = \sum_{T \in T_s} |T|^{-1/2}\,\xi^j_T\,\chi_T(x), $$

where χ_T is the characteristic function of T. It is apparent that Ẇ_s = (Ẇ^s_1, ..., Ẇ^s_d) ∈ (L²(D))^d almost surely. However, we have the following estimate from [7], which shows that ‖Ẇ_s‖ is unbounded as s → 0.

Lemma 1.2. Let ‖Ẇ_s‖² = ‖Ẇ^s_1‖² + ⋯ + ‖Ẇ^s_d‖². Then there exist positive constants C₁ and C₂ independent of s such that

$$ C_1 s^{-k} \le E\|\dot W_s\|^2 \le C_2 s^{-k}, $$

where k = 2 or 3 for d = 2 or 3, respectively.
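The scaling in Lemma 1.2 can be observed directly: since each cell carries an independent N(0,1) variable scaled by |T|^{−1/2}, the expected squared L² norm equals the number of cells, which grows like s^{−2} in two dimensions. A minimal sketch (square cells on (0,1)² standing in for the triangles of T_s — an illustrative assumption, not the paper's triangulation; NumPy assumed):

```python
import numpy as np

def sample_white_noise(s, rng):
    """Piecewise-constant white noise on a uniform partition of (0,1)^2 into
    square cells of side s (squares stand in for the triangles T of T_s).
    On each cell the value is |T|^{-1/2} * xi_T with xi_T ~ N(0,1), i.i.d."""
    n = int(round(1.0 / s))
    area = s * s                       # |T|
    xi = rng.standard_normal((n, n))   # one xi_T per cell
    return xi / np.sqrt(area)

def l2_norm_sq(W_s, s):
    # ||W_s||^2 = sum over cells of (cell value)^2 * |T|
    return float(np.sum(W_s ** 2) * s * s)

rng = np.random.default_rng(0)
s = 1.0 / 16
norms = [l2_norm_sq(sample_white_noise(s, rng), s) for _ in range(200)]
mean_norm_sq = float(np.mean(norms))   # close to the cell count s^{-2} = 256
```

The sample mean of ‖Ẇ_s‖² stays near s^{−2} = 256, matching the two-sided bound of Lemma 1.2 with k = 2.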

In the sequel, we will omit "almost surely" except where we wish to emphasize it. Now we consider the approximation problem for (1.1) with the discretized white noise forcing term Ẇ_s:

$$
\begin{aligned}
-\nu \Delta u_s + \nabla p_s &= f + \sigma \dot W_s && \text{in } D,\\
\operatorname{div} u_s &= 0 && \text{in } D, \qquad (1.9)\\
u_s &= 0 && \text{on } \partial D.
\end{aligned}
$$

Similarly to (1.3) and (1.4), we define the weak solution of (1.9) as follows:

$$ u_s(x) = \int_D G(x,y)\,f(y)\,dy + \sigma \int_D G(x,y)\,dW_s(y) \qquad (1.10) $$

and

$$ p_s(x) = \int_D H(x,y)\,f(y)\,dy + \sigma\,\operatorname{div} \int_D D(x,y)\,dW_s(y). \qquad (1.11) $$

We have the following estimates concerning the bounds for u_s and p_s.

Lemma 1.3. Given f ∈ (L²(D))^d, there exists a positive constant C independent of s such that

$$ E\big[\|u_s\|_2^2 + \|p_s\|_1^2\big] \le C s^{-k}, \qquad (1.12) $$

where k = 2 or 3 for d = 2 or 3, respectively.

Proof. From the standard estimates for the Stokes equations (see, e.g., [13], [24]) and Lemma 1.2, we have

$$ E\big[\|u_s\|_2^2 + \|p_s\|_1^2\big] \le C_1 E\big[\|f\|^2 + \|\dot W_s\|^2\big] \le C_1(\|f\|^2 + C_2 s^{-k}) \le C_1 C_3 (1 + s^{-k}) \le 2 C_1 C_3 s^{-k} \le C s^{-k}, \qquad (1.13) $$

where C₃ = max(‖f‖², C₂) and k = 2 or 3 for d = 2 or 3, respectively. □

For the errors u − u_s and p − p_s, we have the following estimates.

Proposition 1.4. Let (u, p) and (u_s, p_s) be the solutions of (1.1) and (1.9), respectively. Then there exists a constant C such that

$$ E\big[\|u - u_s\|^2 + \|p - p_s\|_{-1}^2\big] \le C\,|\ln s|\,s^2 \qquad (1.14) $$

for d = 2 and

$$ E\big[\|u - u_s\|^2 + \|p - p_s\|_{-1}^2\big] \le C s \qquad (1.15) $$

for d = 3.

Proof. The proof of this proposition can be found in Theorem 1 of [6]. Here we briefly sketch it for the completeness of the paper. We only prove (1.14); the proof of (1.15) is similar. Using Itô's isometry and the Hölder inequality, we have

$$
\begin{aligned}
E\|u - u_s\|^2
&= \sigma^2\, E\left[\int_D \Big| \int_D G(x,y)\,dW(y) - \int_D G(x,y)\,dW_s(y) \Big|^2 dx\right]\\
&= \sigma^2\, E\left[\int_D \Big| \sum_{T \in T_s} \int_T G(x,y)\,dW(y) - \sum_{T \in T_s} |T|^{-1}\!\int_T G(x,z)\,dz \int_T 1\,dW(y) \Big|^2 dx\right]\\
&= \sigma^2\, E\left[\int_D \Big| \sum_{T \in T_s} \int_T |T|^{-1}\!\int_T \big(G(x,y) - G(x,z)\big)\,dz\,dW(y) \Big|^2 dx\right]\\
&= \sigma^2 \int_D \sum_{T \in T_s} \int_T \Big| |T|^{-1}\!\int_T \big(G(x,y) - G(x,z)\big)\,dz \Big|^2 dy\,dx.
\end{aligned}
$$

From the Hölder inequality and (1.5) we obtain, for a constant C′ > 0,

$$ E\|u - u_s\|^2 \le C'\,|D|\,|\ln s|\,s^2. \qquad (1.16) $$

Next we estimate E‖p − p_s‖²₋₁. Let φ ∈ H¹₀(D). Using integration by parts and the same procedure as in the estimate of E‖u − u_s‖², we obtain, for a constant C″ > 0,

$$ E\|p - p_s\|_{-1}^2 \le C''\,|D|\,|\ln s|\,s^2. \qquad (1.17) $$

Thus, from (1.16) and (1.17) with C = max(C′, C″) × |D|,

$$ E\big[\|u - u_s\|^2 + \|p - p_s\|_{-1}^2\big] \le C\,|\ln s|\,s^2. \qquad (1.18) $$

The proof of (1.14) is complete. □

2. The optimal control problem

2.1. The optimization problem with white noise Ẇ

Let u ∈ (L²(D))^d and p ∈ H⁻¹(D) denote the state variables defined in (1.3) and (1.4), and let f ∈ (L²(D))^d denote the distributed control. We formulate the optimal control problem in the following terms:

(P) Find (u, p, f) such that the functional (1.2) is minimized subject to (1.1).

With J(u, p, f) given by (1.2), the admissibility set U_ad is defined by

$$ U_{ad} = \{(u, p, f) \in (L^2(D))^d \times H^{-1}(D) \times (L^2(D))^d : J(u, p, f) < \infty \text{ and } (u, p, f) \text{ satisfies (1.1) almost surely}\}. \qquad (2.1) $$

Then (û, p̂, f̂) ∈ U_ad is called an optimal solution of J(u, p, f) if there exists ε > 0 such that

$$ J(\hat u, \hat p, \hat f) \le J(u, p, f) \quad \forall (u, p, f) \in U_{ad} \text{ satisfying } \|\hat u - u\| + \|\hat p - p\|_{-1} + \|\hat f - f\| \le \varepsilon. \qquad (2.2) $$

The optimal control problem can now be formulated as a constrained minimization problem in a Hilbert space:

$$ \min_{(u, p, f) \in U_{ad}} J(u, p, f). \qquad (2.3) $$

The existence and uniqueness of an optimal solution of (2.3) is easily proven using standard arguments in the following theorem (see, e.g., [21]).

Theorem 2.1. There is a solution (û, p̂, f̂) ∈ U_ad of the optimization problem (2.3) almost surely.

Proof. We first note that U_ad is clearly not empty; e.g., (u, p, 0) ∈ U_ad. Let f⁽ⁿ⁾ be a minimizing sequence for the optimal control problem and set u⁽ⁿ⁾ = u(f⁽ⁿ⁾), p⁽ⁿ⁾ = p(f⁽ⁿ⁾). Then (u⁽ⁿ⁾, p⁽ⁿ⁾, f⁽ⁿ⁾) ∈ U_ad for all n and satisfies

$$ \lim_{n \to \infty} J(u^{(n)}, p^{(n)}, f^{(n)}) = \inf_{(u, p, f) \in U_{ad}} J(u, p, f) \quad \text{almost surely}. $$

The sequence f⁽ⁿ⁾ is uniformly bounded in (L²(D))^d by (2.1), and the corresponding (u⁽ⁿ⁾, p⁽ⁿ⁾) is uniformly bounded in (L²(D))^d × H⁻¹(D) by the definition of the weak solutions (1.3) and (1.4), almost surely. We may then extract subsequences, still denoted by (u⁽ⁿ⁾, p⁽ⁿ⁾, f⁽ⁿ⁾), such that

$$ f^{(n)} \rightharpoonup \hat f \ \text{in } (L^2(D))^d, \qquad p^{(n)} \rightharpoonup \hat p \ \text{in } H^{-1}(D), \qquad u^{(n)} \rightharpoonup \hat u \ \text{in } (L^2(D))^d $$

for some (û, p̂, f̂) ∈ (L²(D))^d × H⁻¹(D) × (L²(D))^d. We may then easily pass to the limit in (1.3) and (1.4). Now, by the weak lower semi-continuity of J(·,·,·), we conclude that (û, p̂, f̂) is an optimal solution, i.e.,

$$ \inf_{(u, p, f) \in U_{ad}} J(u, p, f) = \liminf_{n \to \infty} J(u^{(n)}, p^{(n)}, f^{(n)}) = J(\hat u, \hat p, \hat f). $$

Thus, we have shown that an optimal solution belonging to U_ad exists. Finally, the uniqueness of the optimal solution follows from the convexity of the functional and the linearity of the constraint equations. □

The first-order optimality condition associated with problem (P) can be derived via the Gâteaux derivative. We show that the optimal solution must satisfy the following first-order necessary condition (see, e.g., [1]).

Theorem 2.2. Let f ∈ (L²(D))^d. The mapping f ↦ (u(f), p(f)), defined as the weak solution (1.3) and (1.4), has a Gâteaux derivative d(u(f), p(f))/df · g in every direction g in (L²(D))^d. Furthermore, (ṽ, q̃) := d(u(f), p(f))/df · g is the solution of the problem

$$
\begin{aligned}
-\nu \Delta \tilde v + \nabla \tilde q &= g && \text{in } D,\\
\operatorname{div} \tilde v &= 0 && \text{in } D, \qquad (2.4)\\
\tilde v &= 0 && \text{on } \partial D.
\end{aligned}
$$

Finally, (ṽ, q̃) ∈ (H¹₀(D) ∩ H²(D))^d × (H¹(D) ∩ L²₀(D)).

Proof. It is immediate from the linearity of (1.3) and (1.4). □

Theorem 2.3. If (û, p̂, f̂) is an optimal solution for (P), then the equality

$$ \alpha \hat f + E[\hat v] = 0 \qquad (2.5) $$

holds, where v̂ is the adjoint state, i.e., the solution of the adjoint problem

$$
\begin{aligned}
-\nu \Delta v + \nabla q &= u - U && \text{in } D,\\
\operatorname{div} v &= 0 && \text{in } D, \qquad (2.6)\\
v &= 0 && \text{on } \partial D.
\end{aligned}
$$

Proof. Let (û, p̂, f̂) be an optimal solution. The Gâteaux derivative of the functional J in the direction g is given by

$$
\frac{dJ(u(\hat f), p(\hat f), \hat f)}{df}
= \frac{d}{d\epsilon} J\big(\hat u(\hat f + \epsilon g), \hat p(\hat f + \epsilon g), \hat f + \epsilon g\big)\Big|_{\epsilon = 0}
= E\!\left[\int_D (\hat u(\hat f) - U) \cdot \frac{d\hat u(\hat f)}{df}\, g\,dx\right] + \alpha \int_D \hat f \cdot g\,dx. \qquad (2.7)
$$

Since (û, p̂, f̂) is an optimal solution, dJ(û(f̂), p̂(f̂), f̂)/df should be zero. Thus, we have

$$ E\!\left[\int_D (\hat u - U) \cdot \tilde v\,dx\right] + \alpha \int_D \hat f \cdot g\,dx = 0 \qquad \forall g \in (L^2(D))^d \qquad (2.8) $$

with u(f̂) = û and p(f̂) = p̂. We note that v, v̂ and ṽ lie in divergence-free spaces. Then, from (2.6) and integration by parts (see, e.g., Section 1.2 of [1]), we have

$$
E\!\left[\int_D (\hat u - U) \cdot \tilde v\,dx\right]
= E\!\left[\int_D (-\nu\Delta \hat v) \cdot \tilde v\,dx\right]
= E\!\left[\int_D (-\nu\Delta \tilde v) \cdot \hat v\,dx\right]
= E\!\left[\int_D g \cdot \hat v\,dx\right]
= \int_D E[\hat v] \cdot g\,dx, \qquad (2.9)
$$

where the pressure terms vanish against the divergence-free fields. Since g ∈ (L²(D))^d is arbitrary, we obtain (2.5) from (2.8) and (2.9). □

From the above theorem, we conclude that solving the minimization problem (2.3) is equivalent to solving the following optimality system of equations:

$$
\left\{
\begin{aligned}
&-\nu \Delta u + \nabla p = f + \sigma \dot W \ \text{in } D, \quad \nabla\cdot u = 0 \ \text{in } D, \quad u = 0 \ \text{on } \partial D,\\
&-\nu \Delta v + \nabla q = u - U \ \text{in } D, \quad \nabla\cdot v = 0 \ \text{in } D, \quad v = 0 \ \text{on } \partial D,\\
&\alpha f + E[v] = 0 \ \text{in } D.
\end{aligned}
\right. \qquad (2.10)
$$

Using the optimality condition f = −E[v]/α, we obtain the following reduced optimality system: seek (u, p, v, q) such that

$$
\left\{
\begin{aligned}
&-\nu \Delta u + \nabla p = -\frac{E[v]}{\alpha} + \sigma \dot W \ \text{in } D,\\
&\nabla\cdot u = 0 \ \text{in } D, \quad u = 0 \ \text{on } \partial D,\\
&-\nu \Delta v + \nabla q = u - U \ \text{in } D,\\
&\nabla\cdot v = 0 \ \text{in } D, \quad v = 0 \ \text{on } \partial D.
\end{aligned}
\right. \qquad (2.11)
$$

2.2. The optimization problem with discretized white noise Ẇ_s

Let u_s ∈ (H¹₀(D))^d and p_s ∈ L²₀(D) := {q ∈ L²(D) : ∫_D q(x)dx = 0} denote the state variables, and let f ∈ (L²(D))^d denote the distributed control. The state and control variables are constrained to satisfy the system (1.9), recast into the weak form with bilinear forms a and b:

$$
\begin{aligned}
\nu\,a(u_s, w) + b(w, p_s) &= (f, w) + (\sigma \dot W_s, w) && \forall w \in (H^1_0(D))^d,\\
b(u_s, r) &= 0 && \forall r \in L^2_0(D), \qquad (2.12)
\end{aligned}
$$

where the two bilinear forms a and b are defined by

$$ a(v, w) = (\nabla v, \nabla w) \qquad \forall v, w \in (H^1_0(D))^d \qquad (2.13) $$

and

$$ b(v, q) = -(\operatorname{div} v, q) \qquad \forall v \in (H^1_0(D))^d,\ q \in L^2_0(D). \qquad (2.14) $$

We set the cost functional

$$ J(u_s, f_s; U) = E\!\left[\frac{1}{2}\int_D |u_s - U|^2\,dx\right] + \frac{\alpha}{2}\int_D |f_s|^2\,dx. \qquad (2.15) $$

We formulate the optimal control problem in the following terms:

(Ps) Find (u_s, p_s, f_s) such that the functional (2.15) is minimized subject to (2.12).

With J(u_s, p_s, f_s) given by (2.15), the admissibility set U_ad is defined by

$$ U_{ad} = \{(u_s, p_s, f_s) \in (H^1_0(D))^d \times L^2_0(D) \times (L^2(D))^d : J(u_s, p_s, f_s) < \infty \text{ and } (u_s, p_s, f_s) \text{ satisfies (2.12)}\}. $$

Then (û_s, p̂_s, f̂_s) ∈ U_ad is called an optimal solution if there exists ε > 0 such that

$$ J(\hat u_s, \hat p_s, \hat f_s) \le J(u_s, p_s, f_s) \qquad \forall (u_s, p_s, f_s) \in U_{ad} \qquad (2.16) $$

satisfying

$$ \|\hat u_s - u_s\|_1 + \|\hat p_s - p_s\| + \|\hat f_s - f_s\| < \varepsilon. $$

The optimal control problem can now be formulated as a constrained minimization problem in a Hilbert space:

$$ \min_{(u_s, p_s, f_s) \in U_{ad}} J(u_s, p_s, f_s). \qquad (2.17) $$

The existence and uniqueness of an optimal solution of (2.17) is easily proven using standard arguments in the following theorem (see, e.g., [21]).

Theorem 2.4. Given Ẇ_s ∈ (L²(D))^d, there is a solution (û_s, p̂_s, f̂_s) ∈ U_ad of the optimization problem (2.17) almost surely.

Proof. We first note that U_ad is clearly not empty; e.g., (u_s, p_s, 0) ∈ U_ad. Let f_s⁽ⁿ⁾ be a minimizing sequence for the optimal control problem and set u_s⁽ⁿ⁾ = u_s(f_s⁽ⁿ⁾), p_s⁽ⁿ⁾ = p_s(f_s⁽ⁿ⁾). Then (u_s⁽ⁿ⁾, p_s⁽ⁿ⁾, f_s⁽ⁿ⁾) ∈ U_ad for all n and satisfies

$$ \lim_{n \to \infty} J(u_s^{(n)}, p_s^{(n)}, f_s^{(n)}) = \inf_{(u_s, p_s, f_s) \in U_{ad}} J(u_s, p_s, f_s) \quad \text{almost surely}. $$

The sequence f_s⁽ⁿ⁾ is uniformly bounded in (L²(D))^d by (2.16). Also, using the bound ‖u_s‖₁ + ‖p_s‖ ≤ C(‖f_s‖ + ‖Ẇ_s‖), almost surely, the corresponding sequence (u_s⁽ⁿ⁾, p_s⁽ⁿ⁾) is uniformly bounded in (H¹₀(D))^d × L²₀(D), almost surely. With this setting of function spaces the remaining part of the proof is the same as in Theorem 2.1. □

Remark 2.5. We note that an optimal solution (û_s, p̂_s, f̂) is not bounded as s → 0. Indeed, we see easily that ‖u_s‖₁ + ‖p_s‖ ≤ ‖u_s‖₂ + ‖p_s‖₁ ≤ C(‖f‖ + ‖Ẇ_s‖) almost surely.

The first-order optimality condition associated with problem (Ps) can be derived via the Gâteaux derivative. We show that the optimal solution must satisfy the following first-order necessary condition (see, e.g., [1]).

Theorem 2.6. Let f ∈ (L²(D))^d. The mapping f ↦ (u_s(f), p_s(f)), defined as the solution of (2.12), has a Gâteaux derivative d(u_s(f), p_s(f))/df · g in every direction g in (L²(D))^d. Furthermore, (ṽ_s, q̃_s) := d(u_s(f), p_s(f))/df · g is the solution of the problem

$$
\begin{aligned}
\nu\,a(\tilde v_s, w) + b(w, \tilde q_s) &= (g, w) && \forall w \in (H^1_0(D))^d,\\
b(\tilde v_s, \zeta) &= 0 && \forall \zeta \in L^2_0(D). \qquad (2.18)
\end{aligned}
$$

Finally, (ṽ_s, q̃_s) ∈ (H¹₀(D))^d × L²₀(D).

Proof. It is immediate from the linearity of (2.12). □

Theorem 2.7. If (u_s, p_s, f) is an optimal solution for (Ps), then the equality

$$ \alpha(f, g) = -(E[v_s], g) \qquad \forall g \in (L^2(D))^d \qquad (2.19) $$

holds, where v_s is the adjoint state, i.e., the solution of the adjoint problem

$$
\begin{aligned}
\nu\,a(v_s, z) + b(z, q_s) &= (u_s - U, z) && \forall z \in (H^1_0(D))^d,\\
b(v_s, \zeta) &= 0 && \forall \zeta \in L^2_0(D). \qquad (2.20)
\end{aligned}
$$

Proof. The proof is the same as in Theorem 2.3. □

From the above theorem, we conclude that solving the minimization problem (2.17) is equivalent to solving the following optimality system of equations:

$$
\left\{
\begin{aligned}
\nu\,a(u_s, w) + b(w, p_s) &= (f, w) + (\sigma \dot W_s, w) && \forall w \in (H^1_0(D))^d,\\
b(u_s, r) &= 0 && \forall r \in L^2_0(D),\\
\nu\,a(v_s, z) + b(z, q_s) &= (u_s - U, z) && \forall z \in (H^1_0(D))^d,\\
b(v_s, \zeta) &= 0 && \forall \zeta \in L^2_0(D),\\
\alpha(f, g) + (E[v_s], g) &= 0 && \forall g \in (L^2(D))^d.
\end{aligned}
\right. \qquad (2.21)
$$

Using the optimality condition f = −E[v_s]/α, we obtain the following reduced optimality system: seek (u_s, p_s, v_s, q_s) ∈ (H¹₀(D))^d × L²₀(D) × (H¹₀(D))^d × L²₀(D) such that

$$
\left\{
\begin{aligned}
\nu\,a(u_s, w) + b(w, p_s) &= \Big(-\frac{E[v_s]}{\alpha}, w\Big) + (\sigma \dot W_s, w) && \forall w \in (H^1_0(D))^d,\\
b(u_s, r) &= 0 && \forall r \in L^2_0(D),\\
\nu\,a(v_s, z) + b(z, q_s) &= (u_s - U, z) && \forall z \in (H^1_0(D))^d,\\
b(v_s, \zeta) &= 0 && \forall \zeta \in L^2_0(D).
\end{aligned}
\right. \qquad (2.22)
$$

Integration by parts may be used to show that the system (2.22) constitutes a weak formulation of the problem

$$
\left\{
\begin{aligned}
&-\nu \Delta u_s + \nabla p_s = -\frac{E[v_s]}{\alpha} + \sigma \dot W_s \ \text{in } D,\\
&\nabla\cdot u_s = 0 \ \text{in } D, \quad u_s = 0 \ \text{on } \partial D,\\
&-\nu \Delta v_s + \nabla q_s = u_s - U \ \text{in } D,\\
&\nabla\cdot v_s = 0 \ \text{in } D, \quad v_s = 0 \ \text{on } \partial D.
\end{aligned}
\right. \qquad (2.23)
$$

2.3. Error estimates for E[u − u_s] and E[v − v_s]

Theorem 2.8. Let (u, v) and (u_s, v_s) be the solutions of (2.11) and (2.22), respectively. Then there exists a constant C such that

$$ E\big[\|u - u_s\|^2 + \|v - v_s\|^2\big] \le C\,|\ln s|\,s^2 \qquad (2.24) $$

for d = 2 and

$$ E\big[\|u - u_s\|^2 + \|v - v_s\|^2\big] \le C s \qquad (2.25) $$

for d = 3.

Proof. We only prove (2.24); the proof of (2.25) is similar. We subtract (2.23) from (2.11) to obtain

$$
\left\{
\begin{aligned}
&-\nu \Delta (u - u_s) + \nabla (p - p_s) = -\frac{E[v - v_s]}{\alpha} + \sigma (\dot W - \dot W_s) \ \text{in } D,\\
&\nabla\cdot (u - u_s) = 0 \ \text{in } D, \quad u - u_s = 0 \ \text{on } \partial D,\\
&-\nu \Delta (v - v_s) + \nabla (q - q_s) = u - u_s \ \text{in } D,\\
&\nabla\cdot (v - v_s) = 0 \ \text{in } D, \quad v - v_s = 0 \ \text{on } \partial D.
\end{aligned}
\right. \qquad (2.26)
$$

Then u − u_s and v − v_s are the solutions of the above system. From the third and fourth equations of (2.26) and the theory of the Stokes equations (see Remark 5.3 in [12]), one knows that E‖v − v_s‖ ≤ C₁ E‖u − u_s‖. Thus, by some calculations and from Proposition 1.4, we easily have

$$ E\big[\|u - u_s\|^2 + \|v - v_s\|^2\big] \le C_2\, E\|u - u_s\|^2 \le C\,|\ln s|\,s^2. \qquad (2.27) $$

□

3. Finite element approximations

3.1. Finite element discretizations

We consider numerical approximations of (2.22) by the finite element method. First we choose a family of finite dimensional subspaces V_h ⊂ (H¹(D))^d and Q_h ⊂ L²(D). We let V_h⁰ = V_h ∩ (H¹₀(D))^d and Q_h⁰ = Q_h ∩ L²₀(D). These families are parameterized by a parameter h that tends to zero; commonly, h is chosen to be some measure of the grid size. First, we assume the approximation properties: there exists a constant C, independent of h, v_s^h, and q_s^h, such that

$$ \inf_{v_s^h \in V_h} \|v_s - v_s^h\|_r \le C h^{2-r} \|v_s\|_2, \qquad r = 0, 1, \quad \forall v_s \in (H^2(D))^d, $$
$$ \inf_{q_s^h \in Q_h} \|q_s - q_s^h\|_0 \le C h \|q_s\|_1 \qquad \forall q_s \in H^1(D). $$
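The h^{2−r} rates above are the standard interpolation rates for piecewise-polynomial spaces. They can be observed in a one-dimensional analogue with piecewise-linear interpolation — a sketch only, not the Taylor–Hood setting of the paper (NumPy assumed; the test function sin is an illustrative choice):

```python
import numpy as np

def interp_errors(f, fp, h):
    """L2 and H1-seminorm errors of piecewise-linear interpolation of f on [0,1]
    with mesh size h, via fine-grid quadrature (1D analogue of the estimate
    inf ||v - v_h||_r <= C h^{2-r} ||v||_2)."""
    x = np.arange(0.0, 1.0 + h / 2, h)        # coarse interpolation nodes
    xf = np.linspace(0.0, 1.0, 4001)          # fine quadrature grid
    dx = xf[1] - xf[0]
    e = f(xf) - np.interp(xf, x, f(x))        # pointwise interpolation error
    l2 = np.sqrt(np.sum(e ** 2) * dx)
    slopes = np.diff(f(x)) / h                # derivative of the interpolant
    idx = np.minimum((xf / h).astype(int), len(slopes) - 1)
    h1 = np.sqrt(np.sum((fp(xf) - slopes[idx]) ** 2) * dx)
    return l2, h1

e_coarse = interp_errors(np.sin, np.cos, 0.1)
e_fine = interp_errors(np.sin, np.cos, 0.05)
# halving h reduces the L2 error by ~4 = 2^2 and the H1 error by ~2 = 2^1
```

The observed ratios between the two mesh sizes reproduce the exponents 2 − r for r = 0 and r = 1.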

Next, it is well known that the Taylor–Hood pair (V_h, Q_h) satisfies the inf-sup (or LBB) condition, i.e., there exists a constant C, independent of h, such that

$$ \inf_{0 \ne q_s^h \in Q_h} \sup_{0 \ne v_s^h \in V_h} \frac{b(v_s^h, q_s^h)}{\|v_s^h\|_1 \|q_s^h\|} \ge C. $$

The finite element approximation of (2.22) is then as follows: find u_s^h ∈ V_h, p_s^h ∈ Q_h, v_s^h ∈ V_h and q_s^h ∈ Q_h such that

$$
\left\{
\begin{aligned}
\nu\,a(u_s^h, w^h) + b(w^h, p_s^h) &= \Big(-\frac{E[v_s^h]}{\alpha}, w^h\Big) + (\sigma \dot W_s, w^h) && \forall w^h \in V_h^0,\\
b(u_s^h, r^h) &= 0 && \forall r^h \in Q_h^0,\\
\nu\,a(v_s^h, z^h) + b(z^h, q_s^h) &= (u_s^h - U, z^h) && \forall z^h \in V_h^0,\\
b(v_s^h, \zeta^h) &= 0 && \forall \zeta^h \in Q_h^0.
\end{aligned}
\right. \qquad (3.1)
$$

3.2. Error estimates for the optimality system

We obtain an error estimate for the finite element approximation using the Brezzi–Rappaz–Raviart theory [5]. For this purpose, we show how to cast problems (2.22) and (3.1) in the respective canonical forms

$$ F(\lambda, \psi) \equiv \psi + T G(\lambda, \psi) = 0 \qquad (3.2) $$

and

$$ F_h(\lambda, \psi^h) \equiv \psi^h + T_h G(\lambda, \psi^h) = 0. \qquad (3.3) $$

Let λ = 1/ν. We set X = (H¹₀(D))^d × L²₀(D) × (H¹₀(D))^d × L²₀(D) and Y = (H⁻¹(D))^d × (L²(D))^d × (H⁻¹(D))^d. We define the linear operator T ∈ L(Y; X) as follows: T(η, ξ, τ) = (u_s, p_s, v_s, q_s) for (η, ξ, τ) ∈ Y and (u_s, p_s, v_s, q_s) ∈ X if and only if

$$
\begin{aligned}
a(u_s, w) + b(w, p_s) &= (\eta, w) + (\xi, w) && \forall w \in (H^1_0(D))^d,\\
b(u_s, r) &= 0 && \forall r \in L^2_0(D),\\
a(v_s, z) + b(z, q_s) &= (\tau, z) && \forall z \in (H^1_0(D))^d,\\
b(v_s, \zeta) &= 0 && \forall \zeta \in L^2_0(D).
\end{aligned}
$$

We set X_h = V_h⁰ × Q_h⁰ × V_h⁰ × Q_h⁰. We define the discrete operator T_h ∈ L(Y; X_h) as follows: T_h(η, ξ, τ) = (u_s^h, p_s^h, v_s^h, q_s^h) for (η, ξ, τ) ∈ Y and (u_s^h, p_s^h, v_s^h, q_s^h) ∈ X_h if and only if

$$
\begin{aligned}
a(u_s^h, w^h) + b(w^h, p_s^h) &= (\eta, w^h) + (\xi, w^h) && \forall w^h \in V_h^0,\\
b(u_s^h, r^h) &= 0 && \forall r^h \in Q_h^0,\\
a(v_s^h, z^h) + b(z^h, q_s^h) &= (\tau, z^h) && \forall z^h \in V_h^0,\\
b(v_s^h, s^h) &= 0 && \forall s^h \in Q_h^0.
\end{aligned}
$$

Let Λ denote a compact subset of R₊. Next, we define G: Λ × X → Y as follows:

$$ G(\lambda, (u_s, p_s, v_s, q_s)) = \lambda\,\big(\alpha^{-1} E[v_s],\ -\sigma \dot W_s,\ U - u_s\big). $$

It is easily seen that the reduced optimality system (2.22) is equivalent to

$$ (u_s, \lambda p_s, v_s, \lambda q_s) + T G(\lambda, (u_s, \lambda p_s, v_s, \lambda q_s)) = 0 \qquad (3.4) $$

and that the discrete optimality system (3.1) is equivalent to

$$ (u_s^h, \lambda p_s^h, v_s^h, \lambda q_s^h) + T_h G(\lambda, (u_s^h, \lambda p_s^h, v_s^h, \lambda q_s^h)) = 0. \qquad (3.5) $$

We define the space Z = (L²(D))^d × (L²(D))^d × (L²(D))^d. Then clearly this space is continuously embedded into Y. Denote the Fréchet derivative of G(λ, (u, p, v, q)) with respect to (u, p, v, q) by DG(λ, (u, p, v, q)) or G_{(u,p,v,q)}(λ, (u, p, v, q)). Then for (u_s^h, p_s^h, v_s^h, q_s^h) ∈ X_h we obtain

$$ DG(\lambda, (u_s^h, p_s^h, v_s^h, q_s^h)) \cdot (\tilde u_s^h, \tilde p_s^h, \tilde v_s^h, \tilde q_s^h) = \lambda\,\big(\alpha^{-1} E[\tilde v_s^h],\ 0,\ -\tilde u_s^h\big) \qquad \forall (\tilde u_s^h, \tilde p_s^h, \tilde v_s^h, \tilde q_s^h) \in X_h. $$

Proposition 3.1. DG(λ, (u_s, p_s, v_s, q_s)) ∈ L(X; Z) for all (u_s, p_s, v_s, q_s) ∈ X.

Proof. It is clear that

$$ \|DG(\lambda, (u_s, p_s, v_s, q_s)) \cdot (\tilde u_s, \tilde p_s, \tilde v_s, \tilde q_s)\|_Z^2 = \|\alpha^{-1} E[\tilde v_s]\|^2 + \|\tilde u_s\|^2 \le C\big(\|\tilde u_s\|_1^2 + \|\tilde v_s\|_1^2\big). $$

Therefore, DG(λ, (u_s, p_s, v_s, q_s)) ∈ L(X; Z) for all (u_s, p_s, v_s, q_s) ∈ X. □

Proposition 3.2. G is twice continuously differentiable and D²G is bounded on all bounded sets of X.

Proof. For any (u_s, p_s, v_s, q_s) ∈ X,

$$ D^2 G(\lambda, (u_s, p_s, v_s, q_s)) \cdot (\tilde u_s, \tilde p_s, \tilde v_s, \tilde q_s) = (0, 0, 0) \qquad \forall (\tilde u_s, \tilde p_s, \tilde v_s, \tilde q_s) \in X. $$

Thus, it is easy to show that D²G is well defined, continuous, and bounded on all bounded sets of X. □

Proposition 3.3. For any (η, ξ, τ) ∈ Y, ‖(T − T_h)(η, ξ, τ)‖_X → 0 as h → 0.

Proof.

$$
\begin{aligned}
\|(T - T_h)(\eta, \xi, \tau)\|_X^2
&= \|(u_s - u_s^h,\ p_s - p_s^h,\ v_s - v_s^h,\ q_s - q_s^h)\|_X^2\\
&= \|u_s - u_s^h\|_1^2 + \|p_s - p_s^h\|^2 + \|v_s - v_s^h\|_1^2 + \|q_s - q_s^h\|^2\\
&\le C h^2 \big(\|\eta\|^2 + \|\xi\|^2 + \|\tau\|^2\big).
\end{aligned}
$$

Hence ‖(T − T_h)(η, ξ, τ)‖_X → 0 as h → 0. □

Proposition 3.4. ‖T − T_h‖_{L(Z,X)} → 0 as h → 0.

Proof. Note that for (η, ξ, τ) ∈ Z, we have

$$ \|(T - T_h)(\eta, \xi, \tau)\|_X^2 \le C h^2 \big(\|\eta\|^2 + \|\xi\|^2 + \|\tau\|^2\big) = C h^2 \|(\eta, \xi, \tau)\|_Z^2. $$

Thus, we obtain

$$ \|T - T_h\|_{L(Z,X)}^2 = \sup_{\|(\eta, \xi, \tau)\|_Z \ne 0} \frac{\|(T - T_h)(\eta, \xi, \tau)\|_X^2}{\|(\eta, \xi, \tau)\|_Z^2} \le C h^2 \to 0 $$

as h → 0. □

Proposition 3.5. {(λ, ψ(λ)) : λ ∈ Λ} is a branch of nonsingular (regular) solutions of (3.2).

Proof. The system

$$
\left\{
\begin{aligned}
a(\check u_s, \tilde v_s) + \lambda\,b(\tilde v_s, \check p_s) - \lambda\Big(-\frac{E[\check v_s]}{\alpha}, \tilde v_s\Big) &= (w, \tilde v_s) && \forall \tilde v_s \in (H^1_0(D))^d,\\
b(\check u_s, \tilde q_s) &= (l, \tilde q_s) && \forall \tilde q_s \in L^2_0(D),\\
a(\check v_s, \tilde u_s) + \lambda\,b(\tilde u_s, \check q_s) - \lambda(\check u_s - U, \tilde u_s) &= (z, \tilde u_s) && \forall \tilde u_s \in (H^1_0(D))^d,\\
b(\check v_s, \tilde p_s) &= (m, \tilde p_s) && \forall \tilde p_s \in L^2_0(D)
\end{aligned}
\right.
$$

has a unique solution (ǔ_s, p̌_s, v̌_s, q̌_s) ∈ X for every w, z ∈ (H⁻¹(D))^d and l, m ∈ L²(D). □

Propositions 3.1–3.5 verify all of the assumptions of the following theorem. Thus, we obtain the results below.

Theorem 3.6. Let F(λ, ψ) = 0 denote the abstract form (3.2) and assume that {(λ, ψ(λ)) | λ ∈ Λ} is a branch of regular solutions of (3.2). Furthermore, assume that T ∈ L(Y, X), that G is a C² map Λ × X → Y such that all second derivatives of G are bounded on bounded subsets of Λ × X, and that there exists a space Z ⊂ Y, with continuous embedding, such that D_ψ G(λ, ψ) ∈ L(X, Z) for all λ ∈ Λ and ψ ∈ X. Assume also that the approximate problem (3.3) is such that

$$ \lim_{h \to 0} \|(T - T_h) g\|_X = 0 \qquad (3.6) $$

for all g ∈ Y and

$$ \lim_{h \to 0} \|T - T_h\|_{L(Z,X)} = 0. \qquad (3.7) $$

Then:

1. there exists a neighborhood O of the origin in X and, for h sufficiently small, a unique C² function λ ↦ ψ^h(λ) ∈ X_h such that {(λ, ψ^h(λ)) | λ ∈ Λ} is a branch of regular solutions of the discrete problem (3.3) and ψ(λ) − ψ^h(λ) ∈ O for all λ ∈ Λ;
2. for all λ ∈ Λ we have

$$ \|\psi(\lambda) - \psi^h(\lambda)\|_X \le C \|(T - T_h) G(\lambda, \psi(\lambda))\|_X. \qquad (3.8) $$

Therefore, from (3.8), we obtain the following error estimate:

$$ \|u_s - u_s^h\|_1 + \|p_s - p_s^h\| + \|v_s - v_s^h\|_1 + \|q_s - q_s^h\| \le C h \big(\|u_s\|_2 + \|p_s\|_1 + \|v_s\|_2 + \|q_s\|_1\big). \qquad (3.9) $$

Remark 3.7. The error estimate (3.9) can also be written as the following L²(D) error estimate:

$$ \|u_s - u_s^h\| + \|p_s - p_s^h\|_{-1} + \|v_s - v_s^h\| + \|q_s - q_s^h\|_{-1} \le C h^2 \big(\|u_s\|_2 + \|p_s\|_1 + \|v_s\|_2 + \|q_s\|_1\big). \qquad (3.10) $$

The convergence rates of the deterministic case are recovered since the control f_s^h takes care of the stochastic effect from the additive white noise Ẇ.

Theorem 3.8. The solution (u, p, v, q) of the optimality system (2.10) and the finite element solution (u_s^h, p_s^h, v_s^h, q_s^h) of the optimality system (3.1) satisfy the following error estimate:

$$
\begin{aligned}
&E\big(\|u - u_s^h\|^2 + \|p - p_s^h\|_{-1}^2 + \|v - v_s^h\|^2 + \|q - q_s^h\|_{-1}^2\big)\\
&\quad \le 2 E\big(\|u - u_s\|^2 + \|p - p_s\|_{-1}^2 + \|v - v_s\|^2 + \|q - q_s\|_{-1}^2\big)
 + 2 E\big(\|u_s - u_s^h\|^2 + \|p_s - p_s^h\|_{-1}^2 + \|v_s - v_s^h\|^2 + \|q_s - q_s^h\|_{-1}^2\big)\\
&\quad \le C\,|\ln s|\,s^2 + C h^4 \big(\|u_s\|_2^2 + \|p_s\|_1^2 + \|v_s\|_2^2 + \|q_s\|_1^2\big)\\
&\quad \le C_1\,|\ln s|\,s^2 + C_2\,h^4 s^{-2}. \qquad (3.11)
\end{aligned}
$$

Remark 3.9. In Theorem 3.8, taking s = h we have

E(u − uhs 2 +  p − phs 2−1 + v − vhs 2 + q − qhs 2−1 ) ≤ C | ln h|h2 4. Numerical experiments In this section we will present numerical experiments using the finite element method and discretized white noise described in Sections 2 and 3. We construct the finite dimensional subspaces Vh and Q h using the Taylor–Hood method (see page 177, [13]). The computational results are obtained by using FreeFEM++ code [17]. We will compute both cases of the control problems of deterministic and stochastic Stokes equations. We consider the optimality system with D = (0, 1) × (0, 1), the viscous constant ν = 1 and the desired velocity is set as in [8], U(x, y ) = (U 1 (x, y ), U 2 (x, y )), where

$$U_1 = 10\, \frac{d}{dy}\big(\phi(x)\phi(y)\big) \quad \text{and} \quad U_2 = -10\, \frac{d}{dx}\big(\phi(x)\phi(y)\big)$$


and

$$\phi(z) = (1 - z)^2\big(1 - \cos(0.8\pi z)\big).$$

The numerical algorithm consists of three steps.
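Before turning to the algorithm, note that U is ten times the curl of the scalar stream function φ(x)φ(y), so it is divergence-free by construction: div U = 10 φ′(x)φ′(y) − 10 φ′(x)φ′(y) = 0. A quick numerical sanity check (a sketch with hypothetical helper names, not part of the paper's FreeFEM++ computation):

```python
import math

def phi(z):
    # phi(z) = (1 - z)^2 (1 - cos(0.8*pi*z)) as defined above
    return (1 - z) ** 2 * (1 - math.cos(0.8 * math.pi * z))

def psi(x, y):
    # stream function whose curl (times 10) is the desired velocity U
    return phi(x) * phi(y)

def U(x, y, h=1e-5):
    # central differences for U1 = 10 d(psi)/dy and U2 = -10 d(psi)/dx
    U1 = 10.0 * (psi(x, y + h) - psi(x, y - h)) / (2 * h)
    U2 = -10.0 * (psi(x + h, y) - psi(x - h, y)) / (2 * h)
    return U1, U2

def div_U(x, y, h=1e-4):
    # div U = dU1/dx + dU2/dy; should vanish up to discretization error
    return ((U(x + h, y)[0] - U(x - h, y)[0]) / (2 * h)
            + (U(x, y + h)[1] - U(x, y - h)[1]) / (2 * h))
```

Evaluating `div_U` at interior points of D returns values on the order of the finite difference error, confirming the incompressibility of the target field.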


Step 1. For m = 1, …, M, generate samples

$$\dot W_s^m = \sum_{T \in \mathcal{T}_h} |T|^{-1/2}\, \xi_T^m\, \chi_T(x)$$

of the discretized white noise Ẇ_s by generating samples {ξ_T^m}_{T ∈ T_h} of {ξ_T}_{T ∈ T_h}, where M is the sample size.

Step 2. For m = 1, …, M, solve (3.1), with Ẇ_s replaced by Ẇ_s^m, by the finite element method to obtain the realizations (u_s^{h,m}, p_s^{h,m}, v_s^{h,m}, q_s^{h,m}, f_s^{h,m}).

Step 3. Evaluate the statistics E(v(u_s^h, p_s^h, v_s^h, q_s^h, f_s^h)) (for example, if v(u, p) = |u|², the statistic is the second moment of u) using the Monte Carlo method:

$$E\big(v(u_s^h, p_s^h, v_s^h, q_s^h, f_s^h)\big) \approx \frac{1}{M} \sum_{m=1}^{M} v\big(u_s^{h,m}, p_s^{h,m}, v_s^{h,m}, q_s^{h,m}, f_s^{h,m}\big).$$
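Steps 1 and 3 can be sketched abstractly as follows. The names below are illustrative placeholders, not the paper's actual implementation: `areas` stands for the element areas |T| of the triangulation, and `solve` for the finite element solve of (3.1) for one noise realization:

```python
import math
import random

def sample_white_noise(areas, rng):
    """Step 1: one sample of the discretized white noise; on each
    element T the value is |T|^(-1/2) * xi_T with xi_T ~ N(0, 1)."""
    return [rng.gauss(0.0, 1.0) / math.sqrt(aT) for aT in areas]

def monte_carlo_mean(solve, areas, M, rng):
    """Step 3: E[v(...)] ~ (1/M) * sum over M realizations, where
    `solve` returns the statistic v(...) for one noise sample."""
    total = None
    for _ in range(M):
        stats = solve(sample_white_noise(areas, rng))
        total = stats if total is None else [t + s for t, s in zip(total, stats)]
    return [t / M for t in total]
```

For instance, with v(u) = |u|² the routine returns a Monte Carlo estimate of the second moment of u.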

In Step 1, to generate Gaussian random numbers, we use the polar form of the Box–Muller transformation, which is both faster and numerically more robust than the basic form of the transformation. The algorithmic description of the polar form is:

Algorithm 1: Polar rejection.
1: repeat
2:   x ← V₁, y ← V₂   (V₁, V₂ uniform random numbers on (−1, 1))
3:   d ← x² + y²
4: until 0 < d < 1
5: f ← √(−2(ln d)/d)
6: return (f · x, f · y)
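A direct transcription of Algorithm 1 (a sketch, not the authors' code):

```python
import math
import random

def polar_box_muller(rng=random):
    """Polar (rejection) form of the Box-Muller transformation:
    returns a pair of independent standard normal samples."""
    while True:
        x = rng.uniform(-1.0, 1.0)   # x <- V1
        y = rng.uniform(-1.0, 1.0)   # y <- V2
        d = x * x + y * y
        if 0.0 < d < 1.0:            # accept only points inside the unit disk
            break
    f = math.sqrt(-2.0 * math.log(d) / d)
    return f * x, f * y
```

The rejection step makes the accepted point uniform on the unit disk, so no trigonometric calls are needed; this is the reason the polar form is faster than the basic transformation.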

In Step 2, the discretized optimality systems are often too large to solve directly in practice. Thus, in addition to solving the optimality systems directly, i.e., as a single coupled system, we also consider a gradient iterative method. At each iteration, the three equations within (2.21) are solved sequentially. For a functional J(u, f), define T(k) := J(u(k), f(k)), where u(k) is the solution of the state equation for a given f(k), or in practice, the approximate finite element solution of that equation. Here k is the iteration number of the gradient algorithm, and τ denotes a prescribed tolerance used to test the convergence of the functional. The gradient method for minimizing T(k) may then be described as follows.

(a) initialization:
(1) choose τ and f_s^h(0); set k = 0 and ℓ = 1;
(2) solve for u_s^{h,m}(0), m = 1, …, M, from B(u_s^{h,m}(0), w^h) = (f_s^h(0) + σẆ_s^m, w^h) ∀ w^h ∈ V₀^h;
(3) evaluate T(0);
(b) main loop:
(4) set k = k + 1 and compute E(u_s^h(k − 1));
(5) solve for v_s^{h,m}(k), m = 1, …, M, from B(v_s^{h,m}(k), z^h) = (E(u_s^h(k − 1)) − U, z^h) ∀ z^h ∈ V₀^h, and then compute E(v_s^h(k));
(6) update f_s^h(k) = f_s^h(k − 1) − ℓ (α f_s^h(k − 1) + E(v_s^h(k)));
(7) solve for u_s^{h,m}(k), m = 1, …, M, from B(u_s^{h,m}(k), w^h) = (f_s^h(k) + σẆ_s^m, w^h) ∀ w^h ∈ V₀^h;
(8) evaluate T(k);
(9) if T(k) ≥ T(k − 1), set ℓ = 0.5 ℓ and go to (6); otherwise continue;
(10) if |T(k) − T(k − 1)| / T(k) > τ, set ℓ = 1.5 ℓ and go to (4); otherwise, stop.

Here B denotes a weak form of the Stokes operator. The convergence of the gradient algorithm can be proved in a way similar to [15].

Now we consider the control problem for the deterministic Stokes equations. Since EẆ = 0 and the problem is linear, (E[u], E[p], E[f]) is the optimal solution of the deterministic control problem for the Stokes equations without white noise, which is the optimal solution of the expectation of problem (1.2) and (1.1). Thus, we seek (u, p, f) that minimizes the functional (4.1) below subject to the steady-state Stokes equations (4.2).
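The adaptive step-length logic of steps (4)–(10) above can be sketched in the abstract. Here `cost` and `grad` are placeholders for evaluating T(k) (via the Monte Carlo state solves) and for assembling the gradient α f_s^h + E(v_s^h); the halving/growth factors 0.5 and 1.5 follow steps (9) and (10):

```python
def gradient_minimize(cost, grad, f0, ell=1.0, tau=1e-12, max_iter=2000):
    """Sketch of the adaptive-step gradient loop of steps (4)-(10)."""
    f = list(f0)
    T_prev = cost(f)
    for _ in range(max_iter):
        g = grad(f)                                     # steps (4)-(5): assemble the gradient
        trial = [fi - ell * gi for fi, gi in zip(f, g)]  # step (6): update the control
        T = cost(trial)                                 # steps (7)-(8): re-solve, evaluate T(k)
        if T >= T_prev:
            ell *= 0.5                                  # step (9): functional grew -> halve step
            continue
        rel = abs(T - T_prev) / max(abs(T), 1e-300)     # guard against T = 0
        f, T_prev = trial, T
        if rel <= tau:                                  # step (10): converged -> stop
            break
        ell *= 1.5                                      # step (10): accepted -> enlarge step
    return f, T_prev
```

On a simple quadratic model such as J(f) = ½(f₁² + 10 f₂²) the loop drives the cost toward zero, illustrating how the step length adapts to the curvature without a line search.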


Table 4.1. Convergence rates for ‖u_h − U‖²_{L²(D)}, ‖f_h‖²_{L²(D)}, and J_D(u_h, p_h, f_h) w.r.t. h.

 1/h   ‖u_h − U‖²_{L²(D)}   C.R.   ‖f_h‖²_{L²(D)}   C.R.   J_D(u_h, p_h, f_h)   C.R.
   4   5.443440e−04          –     1.026782e+03      –     5.408808e−03          –
   8   5.204033e−04         2.82   1.009214e+03     3.25   5.306433e−03         3.29
  16   5.171399e−04         2.58   1.007294e+03     3.89   5.295826e−03         3.51
  32   5.165338e−04         3.69   1.007164e+03     3.98   5.294878e−03         3.82
 EoL   5.164831e−04          –     1.007155e+03      –     5.294017e−03          –

Fig. 4.1. The realization of the white noise Ẇ_s: vector form (left) and |Ẇ_s| := √((Ẇ_{s1})² + (Ẇ_{s2})²) (right), where h = 1/16.

$$J_D(u, p, f) = \frac{1}{2} \int_D |u - U|^2\, dx + \frac{\alpha}{2} \int_D |f|^2\, dx \tag{4.1}$$

subject to the steady-state Stokes equations:

$$-\Delta u + \nabla p = f \ \ \text{in } D, \qquad \operatorname{div} u = 0 \ \ \text{in } D, \qquad u = 0 \ \ \text{on } \partial D. \tag{4.2}$$
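For reference, and consistent with the adjoint pair (v, q) and the control update α f + E(v) used in step (6) of the gradient algorithm, the first-order optimality system for (4.1)–(4.2) has the standard form (a sketch of the well-known derivation, cf. [15]):

```latex
\begin{aligned}
&-\Delta u + \nabla p = f, \quad \operatorname{div} u = 0 \ \text{in } D, \quad u = 0 \ \text{on } \partial D,
  && \text{(state)}\\
&-\Delta v + \nabla q = u - U, \quad \operatorname{div} v = 0 \ \text{in } D, \quad v = 0 \ \text{on } \partial D,
  && \text{(adjoint)}\\
&\alpha f + v = 0 \ \text{in } D.
  && \text{(optimality)}
\end{aligned}
```

Eliminating f via the last equation couples the state and adjoint systems, which is exactly the coupled system the gradient iteration decouples.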

From now on, we set α = 10⁻⁵ and s = h, but we still use both h and s for convenience. Since the exact solution of the optimal control problem (4.1)–(4.2) is not known, we gauge the behavior of ‖u_h − U‖, because u_h comes close to U as h → 0 for small α. This does not mean that u_h → U as h → 0; i.e., there is a gap between u_h and U. Thus we estimate the value EoL := lim_{h→0} ‖u_h − U‖ using Aitken's extrapolation technique [16] and observe the behavior of | ‖u_h − U‖ − EoL |. The convergence rate of finite element approximations of the deterministic optimal control problem for the Stokes equations is well known to be O(h²). In Table 4.1 we list ‖u_h − U‖²_{L²(D)} and its convergence rate (C.R.), ‖f_h‖²_{L²(D)} and its convergence rate, and the cost J_D(u_h, p_h, f_h) and its convergence rate, with respect to h.

Before solving the optimal control problem for the stochastic case, we present some random velocities with white noise and the target velocity U in Figs. 4.1 and 4.3, respectively. We want to control these random velocities toward the target velocity U. We set σ = 0.1. We first observe, as shown in Fig. 4.2, that the variance/standard deviation of the velocity u = (u₁, u₂) is quite small (of order 10⁻⁴), which indicates that we can perform Monte Carlo simulations with relatively small sample sizes. Indeed, it is well known that, for a given random variable X, the convergence rate for the evaluation of EX by Monte Carlo simulation is O(SD(X)/√M), where SD(X) stands for the standard deviation of X and M is the sample size. So the sample size in our problem should be O(1/(h²|ln h|)) so that the Monte Carlo error matches the finite element error. In our Monte Carlo simulations, sufficient numbers of samples have been used: the sample size is increased 16-fold each time the mesh size is halved.
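The extrapolated limit EoL and the reported convergence rates can be computed as follows (a sketch; the helper names are ours, not from the paper). Aitken's Δ² method estimates the limit of a linearly convergent sequence from three consecutive terms, and the observed rate when h is halved is the base-2 logarithm of the ratio of successive errors:

```python
import math

def aitken(a0, a1, a2):
    """Aitken's Delta^2 extrapolation: estimate the limit of a linearly
    convergent sequence from three consecutive terms."""
    denom = (a2 - a1) - (a1 - a0)
    return a2 - (a2 - a1) ** 2 / denom

def rate(e_coarse, e_fine):
    """Observed convergence rate between successive mesh halvings."""
    return math.log(e_coarse / e_fine) / math.log(2.0)
```

For a sequence aₙ = L + c·rⁿ the extrapolation is exact: aitken(3.0, 2.25, 2.0625) recovers the limit 2.0, and rate(4e-2, 1e-2) gives 2.0, the rate of an O(h²) error under mesh halving.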


Fig. 4.2. The standard deviations of |u_s^h| = √((u_{s1}^h)² + (u_{s2}^h)²) and of the deterministic control |f_s^h| = √((f_{s1}^h)² + (f_{s2}^h)²), where h = 1/16.

Table 4.2. Convergence rates for E‖u_s^h − U‖²_{L²(D)}, E‖f_s^h‖²_{L²(D)}, and J(u_s^h, p_s^h, f_s^h) w.r.t. h.

 1/h      M      E‖u_s^h − U‖²_{L²(D)}   C.R.   E‖f_s^h‖²_{L²(D)}   C.R.   J(u_s^h, p_s^h, f_s^h)   C.R.
   4      16     5.446686e−04             –     1.026587e+03         –     5.405272e−03              –
   8     256     5.208362e−04            2.70   1.009237e+03        3.27   5.306605e−03             3.14
  16    4096     5.171661e−04            2.67   1.007296e+03        4.98   5.295061e−03             3.58
  32   65536     5.165838e−04            2.76   1.007234e+03        4.98   5.294099e−03             3.58
 EoL     –       5.164020e−04             –     –                    –     5.294012e−03              –

Table 4.3. Convergence rates for E‖u_s^h‖²_{L²(D)}, E‖v_s^h‖²_{L²(D)}, and E‖p_s^h‖²_{L²(D)} w.r.t. h.

 1/h      M      E‖u_s^h‖²_{L²(D)}   C.R.   E‖v_s^h‖²_{L²(D)}   C.R.   E‖p_s^h‖²_{L²(D)}   C.R.
   4      16     3.299667e−01         –     1.026764e−07         –     2.065234e+00         –
   8     256     3.357624e−01        3.69   1.009244e−07        3.24   1.928897e+00        2.74
  16    4096     3.362171e−01        3.91   1.007317e−07        3.76   1.907914e+00        3.01
  32   65536     3.362474e−01        3.91   1.007174e−07        3.76   1.905311e+00        3.01
 EoL     –       3.362495e−01         –     1.007163e−07         –     1.904942e+00         –

In Table 4.2, we list the numerical approximations and convergence rates of E‖u_s^h − U‖²_{L²(D)}, E‖f_s^h‖²_{L²(D)}, and J(u_s^h, p_s^h, f_s^h) with respect to h. In Table 4.3, we list the numerical approximations and convergence rates of E‖u_s^h‖²_{L²(D)}, E‖v_s^h‖²_{L²(D)}, and E‖p_s^h‖²_{L²(D)} with respect to h.

Fig. 4.3 shows the desired state U in vector form and |U| := √(U₁² + U₂²), where h = 1/16, and Fig. 4.4 shows the optimal control with white noise |f_s^h| + |Ẇ_s| and the state |u_s^h|, where h = 1/16.

5. Conclusion

In this article, we have studied an optimal control problem governed by the two-dimensional stochastic Stokes equations with additive white noise. The white noise is discretized by piecewise constant polynomials. Then, to obtain one of the most efficient deterministic optimal controls, we set up the cost functional as proposed in [20]. We investigated the errors of the finite element approximation of the optimality system. The convergence rates of the deterministic case are recovered because the control f_s^h takes care of the stochastic effect from the additive white noise Ẇ. Numerical tests were carried out and the results are presented to validate our theorems. Ongoing work includes a comparison study of, and finite element approximations for, different matching cost functionals for optimal control problems of stochastic Navier–Stokes equations with additive white noise and uncertainty in the viscosity term. We also plan to investigate multilevel Monte Carlo methods for optimal control problems of stochastic Navier–Stokes equations.


Fig. 4.3. The desired state: vector form (left) and |U| := √(U₁² + U₂²) (right), where h = 1/16.

Fig. 4.4. The optimal control with white noise |f_s^h| + |Ẇ_s| and the state E|u_s^h|, where h = 1/16.

Acknowledgements

The corresponding author's research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF-2016R1D1A1B03932219).

References

[1] F. Abergel, R. Temam, On some control problems in fluid mechanics, Theor. Comput. Fluid Dyn. 1 (1990) 303–325.
[2] I. Babuska, P. Chatzipantelidis, On solving elliptic stochastic partial differential equations, Comput. Methods Appl. Mech. Eng. (2002).
[3] I. Babuska, R. Tempone, G.E. Zouraris, Galerkin finite element approximations of stochastic elliptic partial differential equations, SIAM J. Numer. Anal. 42 (2) (2004) 800–825.
[4] I. Babuska, R. Tempone, G.E. Zouraris, Solving elliptic boundary value problems with uncertain coefficients by the finite element method: the stochastic formulation, Comput. Methods Appl. Mech. Eng. 194 (2005) 1251–1294.
[5] F. Brezzi, J. Rappaz, P. Raviart, Finite-dimensional approximation of nonlinear problems. Part 1: branches of nonsingular solutions, Numer. Math. 36 (1980) 1–25.
[6] Y. Cao, Z. Chen, M.D. Gunzburger, Error analysis of finite element approximations of the stochastic Stokes equations, Adv. Comput. Math. (2010) 1–16.
[7] Y. Cao, H. Yang, L. Yin, Finite element methods for semilinear elliptic stochastic partial differential equations, Numer. Math. 106 (2) (2007) 181–198.


[8] P. Chen, A. Quarteroni, G. Rozza, Multilevel and weighted reduced basis method for stochastic optimal control problems constrained by Stokes equations, Numer. Math. (2016) 67–102.
[9] Y. Choi, H.-C. Lee, S.D. Kim, Analysis and computations of least-squares method for optimal control problems for the Stokes equations, J. Korean Math. Soc. 46 (5) (2009).
[10] Y. Choi, H.-C. Lee, B.-C. Shin, A least-square/penalty method for distributed optimal control problems for Stokes equations, Comput. Math. Appl. 53 (11) (2007) 1672–1685.
[11] R.G. Ghanem, P.D. Spanos, Stochastic Finite Elements: A Spectral Approach, Springer-Verlag, 1991.
[12] V. Girault, P. Raviart, Finite Element Methods for Navier–Stokes Equations, Springer, Berlin, 1986.
[13] V. Girault, P.-A. Raviart, Finite Element Methods for Navier–Stokes Equations, Springer, Berlin, 1986.
[14] M.D. Gunzburger, L.S. Hou, Finite-dimensional approximation of a class of constrained nonlinear optimal control problems, SIAM J. Control Optim. 34 (3) (1996) 1001–1043.
[15] M. Gunzburger, S. Manservisi, Analysis and approximation of the velocity tracking problem for Navier–Stokes flows with distributed control, SIAM J. Numer. Anal. 37 (5) (2000) 1481–1512.
[16] W. Hager, Applied Numerical Linear Algebra, Prentice Hall, New Jersey, 1988.
[17] F. Hecht, New development in freefem++, J. Numer. Math. 20 (3–4) (2012) 251–265.
[18] L. Hou, J. Lee, H. Manouzi, Finite element approximations of stochastic optimal control problems constrained by stochastic elliptic PDEs, J. Math. Anal. Appl. 384 (2011) 87–103.
[19] O. Ladyzhenskaya, The Mathematical Theory of Viscous Incompressible Flow, 2nd edition, Gordon and Breach, New York, 1969.
[20] H.-C. Lee, M. Gunzburger, Comparison of approaches for random PDE optimization problems based on different matching functionals, Comput. Math. Appl. 73 (8) (2017) 1657–1672.
[21] J.L. Lions, Optimal Control of Systems Governed by Partial Differential Equations, Springer, New York, 1971.
[22] H. Manouzi, A finite element approximation of linear stochastic PDEs driven by multiplicative white noise, Int. J. Comput. Math. 85 (3–4) (2008) 527–546.
[23] G. Stefanou, The stochastic finite element method: past, present, and future, Comput. Methods Appl. Mech. Eng. 198 (9–12) (2009) 1031–1051.
[24] R. Temam, Navier–Stokes Equations and Nonlinear Functional Analysis, SIAM, Philadelphia, 1983.
[25] J. Walsh, An introduction to stochastic partial differential equations, in: Lecture Notes in Mathematics, vol. 1180, Springer, New York, 1986, pp. 265–439.