A random regularized approximate solution of the inverse problem for Burgers’ equation

Statistics and Probability Letters xx (xxxx) xxx–xxx

Contents lists available at ScienceDirect

Statistics and Probability Letters
journal homepage: www.elsevier.com/locate/stapro

Erkan Nane a, Nguyen Hoang Tuan b, Nguyen Huy Tuan b,*

a Department of Mathematics and Statistics, Auburn University, Auburn, USA
b Applied Analysis Research Group, Faculty of Mathematics and Statistics, Ton Duc Thang University, Ho Chi Minh City, Viet Nam

Article info

Article history: Received 26 February 2017; Received in revised form 24 June 2017; Accepted 23 August 2017; Available online xxxx.

Keywords: Inverse problem; Burgers equation; Random approximation.

Abstract

In this paper, we find a regularized approximate solution for an inverse problem for Burgers' equation. The solution of the inverse problem for Burgers' equation is ill-posed, i.e., the solution does not depend continuously on the data. The approximate solution is the solution of a regularized equation with randomly perturbed coefficients and randomly perturbed final value and source functions. To find the regularized solution, we use the modified quasi-reversibility method associated with the truncated expansion method with nonparametric regression. We also investigate the convergence rate. © 2017 Elsevier B.V. All rights reserved.

1. Introduction

In this work, we consider a backward in time problem for the 1-D Burgers equation

$$
\begin{cases}
u_t - \big(A(x,t)u_x\big)_x = u u_x + G(x,t), & (x,t) \in \Omega \times (0,T),\\
u(x,t) = 0, & x \in \partial\Omega,\\
u(x,T) = H(x), & x \in \Omega,
\end{cases}
\tag{1.1}
$$

where Ω = (0, π). Here the coefficient A(x, t) is a C¹-smooth function and A(x, t) ≥ a > 0. Burgers' equation is a fundamental partial differential equation occurring in various areas of applied mathematics, such as fluid mechanics, nonlinear acoustics, gas dynamics, and traffic flow (Kukavica, 2007). One can see that the term (A(x, t)u_x)_x reduces to Δu = u_{xx} if A = 1. However, one cannot use spectral methods to study the operator (A(x, t)u_x)_x, so problem (1.1) is challenging. The second observation is that for the equation u_t − (A(x, t)u_x)_x = f(u, u_x) with A(x, t) deterministic and f(u, u_x) = f(u), the problem is a consequence of Theorem 4.1 in our recent paper (Kirane et al., 2017a). However, if A(x, t) is randomly perturbed and f(u, u_x) depends on both u and u_x, the problem becomes more challenging still. Until now, the backward Burgers equation with randomly perturbed data has not been studied; hence this paper is the first study of Burgers' equation backward in time in the random setting. The gradient term in uu_x on the right-hand side makes Burgers' equation more difficult to study: we need to find an approximate function for uu_x, and this task is nontrivial. This paper is a continuation of our study of backward problems in the two recent papers (Kirane et al., 2017a, b). In those papers the main equations did not have random coefficients. The paper (Kirane et al., 2017a) does not consider a random operator, and the paper (Kirane et al., 2017b) considers the simple coefficient A(x, t) = A(t) and the source

* Corresponding author. E-mail address: [email protected] (N.H. Tuan).

https://doi.org/10.1016/j.spl.2017.08.014
0167-7152/© 2017 Elsevier B.V. All rights reserved.

Please cite this article in press as: Nane E., et al., A random regularized approximate solution of the inverse problem for Burgers’ equation. Statistics and Probability Letters (2017), https://doi.org/10.1016/j.spl.2017.08.014.


function is u − u³. Hence, one can see that the Burgers equation considered here is more difficult, since the gradient term appears on the right-hand side and the coefficient A(x, t) depends on both x and t. It is known that the backward problem mentioned above is ill-posed in general (Kukavica, 2007), i.e., a solution does not always exist, and when it does exist it does not depend continuously on the given data. In fact, a small perturbation of a physical measurement may produce a large error in the corresponding solution, which makes numerical computation troublesome; hence a regularization is needed. There are two well-known difficulties in studying the backward Burgers equation. First, because of the given form of the coefficient A(x, t) in the main equation (1.1), the solution of problem (1.1) cannot be transformed into a nonlinear integral equation, so classical spectral methods cannot be applied. Second, the presence of the gradient term u_x on the right-hand side makes Burgers' equation harder to study. Although there is a limited number of results on the backward problem for Burgers' equation (Carasso, 1977; Hao et al., 2015), there are as yet no results on regularizing the problem in the random case. As is well known, measurements are always given at a discrete set of points and contain errors. These errors may be generated from controllable or uncontrollable sources. In the first case the error is often deterministic; if the errors are generated from uncontrollable sources such as wind, rain, or humidity, then the model is random. Methods used for deterministic cases cannot be applied directly to the random case. In this paper, we consider the following model:

$$
\tilde H(x_k) = H(x_k) + \sigma_k \epsilon_k, \qquad \tilde G_k(t) = G(x_k, t) + \vartheta\, \xi_k(t), \qquad k = \overline{1,n}, \tag{1.2}
$$

and

$$
\tilde A_k(t) = A(x_k, t) + \overline{\vartheta}\, \xi_k(t), \qquad k = \overline{1,n}, \tag{1.3}
$$

where x_k = π(2k−1)/(2n) and the ϵ_k are unknown independent random errors with ϵ_k ∼ N(0, 1). Moreover, σ_k, ϑ, ϑ̄ are unknown positive constants which are bounded by a positive constant V_max, i.e., 0 ≤ σ_k < V_max for all k = 1, …, n, and the ξ_k(t) are Brownian motions. The noises ϵ_k, ξ_k(t) are mutually independent. Our task is to reconstruct the initial datum u(x, 0).

We now describe the organization of the paper and our methods. We prove some preliminary results in Section 2, and we state and prove our main result in Section 3. The existence and uniqueness of a solution of Eq. (1.1) is an open problem, and we do not investigate it here; for the inverse problem we assume that the solution of Burgers' equation (1.1) exists. In that case the solution is not stable. In this paper we approximate the backward in time problem for the 1-D Burgers equation (1.1) by the solution of a regularized equation with randomly perturbed data, Eq. (2.13). The random perturbation in Eq. (2.13) is explained in Eqs. (1.2), (1.3), (2.13) and (2.14).
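As an illustration of the sampling model (1.2)–(1.3), the following Python sketch simulates noisy observations on the design points x_k = π(2k−1)/(2n). The concrete choices of H, G, σ_k, ϑ and the Euler discretization of the Brownian motions ξ_k are ours, made only for demonstration; they are not prescribed by the paper.

```python
import numpy as np

# Sketch of the discrete noisy data model (1.2)-(1.3).
rng = np.random.default_rng(0)

n = 50                                   # number of design points
m = 200                                  # time steps on [0, T]
T = 1.0
t = np.linspace(0.0, T, m)
x = np.pi * (2 * np.arange(1, n + 1) - 1) / (2 * n)   # x_k = pi(2k-1)/(2n)

H = np.sin(x)                            # a smooth final datum H(x), our choice
G = np.outer(np.ones(n), np.cos(t))      # a smooth source G(x_k, t), our choice

sigma = 0.05 * np.ones(n)                # noise levels, bounded by V_max
vartheta = 0.05

# H_tilde(x_k) = H(x_k) + sigma_k * eps_k with eps_k ~ N(0, 1)
H_tilde = H + sigma * rng.standard_normal(n)

# xi_k(t): independent Brownian motions, approximated by cumulative sums
# of Gaussian increments with variance dt (Euler scheme).
dt = T / (m - 1)
increments = np.sqrt(dt) * rng.standard_normal((n, m - 1))
xi = np.concatenate([np.zeros((n, 1)), np.cumsum(increments, axis=1)], axis=1)

# G_tilde_k(t) = G(x_k, t) + vartheta * xi_k(t)
G_tilde = G + vartheta * xi
```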

2. Some notation

We first introduce notation, and then state our main results in this paper. We define the Dirichlet–Laplacian

$$
\mathcal{A} f(x) := -\Delta f(x) = -\frac{\partial^2 f(x)}{\partial x^2}. \tag{2.4}
$$

Since $\mathcal{A}$ is a linear, densely defined, self-adjoint and positive definite elliptic operator on the connected bounded domain Ω = (0, π) with Dirichlet boundary condition, the eigenvalues of $\mathcal{A}$ satisfy

$$
\lambda_0 = 0 < \lambda_1 \le \lambda_2 \le \lambda_3 \le \cdots \le \lambda_p \le \cdots,
$$

with λ_p = p² → ∞ as p → ∞. The corresponding eigenfunctions are denoted by $\varphi_p(x) = \sqrt{2/\pi}\,\sin(px)$. Thus the eigenpairs (λ_p, φ_p), p = 0, 1, 2, …, satisfy

$$
\begin{cases}
\Delta \varphi_p(x) = -\lambda_p \varphi_p(x), & x \in \Omega,\\
\varphi_p(x) = 0, & x \in \partial\Omega.
\end{cases}
$$

The functions φ_p are normalized so that $\{\varphi_p\}_{p=0}^{\infty}$ is an orthonormal basis of L²(Ω). Defining

$$
H^{\gamma}(\Omega) = \Big\{ v \in L^2(\Omega) : \sum_{p=0}^{\infty} \lambda_p^{2\gamma}\,\big|\langle v, \varphi_p\rangle\big|^2 < +\infty \Big\},
$$

where ⟨·, ·⟩ is the inner product in L²(Ω), then H^γ(Ω) is a Hilbert space equipped with the norm

$$
\|v\|_{H^{\gamma}(\Omega)} = \Big( \sum_{p=1}^{\infty} \lambda_p^{2\gamma}\,\big|\langle v, \varphi_p\rangle\big|^2 \Big)^{1/2}.
$$
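The eigenfunctions above can be checked numerically. The following sketch (our own illustration, using trapezoidal quadrature) verifies that $\varphi_p(x) = \sqrt{2/\pi}\sin(px)$ is an orthonormal family in L²(0, π):

```python
import numpy as np

# Numerical sanity check: the Dirichlet eigenfunctions
# phi_p(x) = sqrt(2/pi) * sin(p x) on Omega = (0, pi) are orthonormal.
x = np.linspace(0.0, np.pi, 4001)

def phi(p, x):
    """Eigenfunction of -d^2/dx^2 on (0, pi) with eigenvalue p^2."""
    return np.sqrt(2.0 / np.pi) * np.sin(p * x)

# Gram matrix of <phi_p, phi_q> computed with the trapezoidal rule.
P = 6
gram = np.array([[np.trapz(phi(p, x) * phi(q, x), x)
                  for q in range(1, P + 1)] for p in range(1, P + 1)])
```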

Now, we describe the main idea of our method. First, we approximate H and G by the functions Ĥ_{β_n} and Ĝ_{β_n} defined in Theorem 2.1. Next, our task is to find an approximating operator for ∇(A(x, t)∇·). We will not approximate the time-dependent operator ∇(A(x, t)∇·) directly, as introduced in Lattès and Lions (1967). Instead, we introduce a new approach: we define the unbounded, time-independent operator P of Lemma 2.1, approximate P by a bounded operator P_{ρ_n}, and replace ∇(A(x, t)∇·) by the new approximate operator ∇(Â_{β_n}(x, t)∇·) − P + P_{ρ_n} in order to establish the regularized problem (2.13). Since this new operator is elliptic and bounded, we can show that the regularized problem (2.13) is well posed. Here β_n satisfies lim_{n→∞} β_n = +∞, and we choose ρ_n depending on β_n suitably to obtain the convergence rate. Next, we state the following results that will be used in this paper.

Theorem 2.1 (Theorem 2.1 in Kirane et al., 2017a). For any n ∈ ℕ define the set

$$
W_{\beta_n} = \big\{ p \in \mathbb{N} : p \le \sqrt{\beta_n} \big\}, \tag{2.5}
$$

where β_n satisfies lim_{n→+∞} β_n = +∞.

For given n and β_n we define functions approximating H and G as follows:

$$
\hat H_{\beta_n}(x) = \sum_{p\in W_{\beta_n}} \Big[\frac{\pi}{n}\sum_{k=1}^{n} \tilde H(x_k)\,\psi_p(x_k)\Big]\psi_p(x), \qquad
\hat G_{\beta_n}(x,t) = \sum_{p\in W_{\beta_n}} \Big[\frac{\pi}{n}\sum_{k=1}^{n} \tilde G_k(t)\,\psi_p(x_k)\Big]\psi_p(x). \tag{2.6}
$$

Let us choose µ_0 > 1/2. If H ∈ H^{µ_0}(Ω) and G ∈ L^∞(0, T; H^{µ_0}(Ω)), then the following estimates hold:

$$
\mathbf{E}\big\|\hat H_{\beta_n} - H\big\|^2_{L^2(\Omega)} \le C(\mu_0, H)\,\sqrt{\beta_n}\, n^{-4\mu_0} + 4\beta_n^{-\mu_0}\,\|H\|^2_{H^{\mu_0}(\Omega)},
$$
$$
\mathbf{E}\big\|\hat G_{\beta_n}(\cdot,t) - G(\cdot,t)\big\|^2_{L^\infty(0,T;L^2(\Omega))} \le C(\mu_0, G)\,\sqrt{\beta_n}\, n^{-4\mu_0} + 4\beta_n^{-\mu_0}\,\|G\|^2_{L^\infty(0,T;H^{\mu_0}(\Omega))},
$$

where

$$
C(\mu_0, H) = 8\pi V_{\max}^2\,\frac{2\pi^{1/2}}{\Gamma(1/2)} + \frac{16 C_{\mu_0}^2 \pi^{1/2}}{\Gamma(1/2)}\,\|H\|^2_{H^{\mu_0}(\Omega)}
\quad\text{and}\quad
C(\mu_0, G) = 8\pi V_{\max}^2\,\frac{2\pi^{1/2}}{\Gamma(1/2)} + \frac{16 C_{\mu_0}^2 \pi^{1/2}}{\Gamma(1/2)}\,\|G\|^2_{L^\infty(0,T;H^{\mu_0}(\Omega))}.
$$

Corollary 2.1 (Corollary 2.1 in Kirane et al., 2017a). Let H, G be as in Theorem 2.1. Then the term

$$
\mathbf{E}\big\|\hat H_{\beta_n} - H\big\|^2_{L^2(\Omega)} + T\,\mathbf{E}\big\|\hat G_{\beta_n} - G\big\|^2_{L^\infty(0,T;L^2(\Omega))}
$$

is of order $\max\big(\sqrt{\beta_n}\, n^{-4\mu_0},\ \beta_n^{-\mu_0}\big)$.
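The estimator (2.6) is a truncated sine-series regression. The following Python sketch (our own illustration; the true datum H, the noise level, and the parameter values are chosen by us for demonstration) evaluates Ĥ_{β_n} from noisy samples on the design points and measures its L² error:

```python
import numpy as np

# Sketch of the truncated-series regression estimator (2.6):
# coefficients (pi/n) * sum_k H_tilde(x_k) psi_p(x_k), kept only for
# frequencies p <= sqrt(beta_n).
rng = np.random.default_rng(1)

def psi(p, x):
    return np.sqrt(2.0 / np.pi) * np.sin(p * x)

def H_hat(H_tilde, x_design, beta_n, x_eval):
    """Evaluate the estimator H_hat_{beta_n} at the points x_eval."""
    n = len(x_design)
    est = np.zeros_like(x_eval)
    for p in range(1, int(np.sqrt(beta_n)) + 1):
        coeff = (np.pi / n) * np.sum(H_tilde * psi(p, x_design))
        est += coeff * psi(p, x_eval)
    return est

n = 400
x_design = np.pi * (2 * np.arange(1, n + 1) - 1) / (2 * n)
H = lambda x: np.sin(x) + 0.5 * np.sin(2 * x)     # smooth final datum (ours)
H_tilde = H(x_design) + 0.05 * rng.standard_normal(n)

x_eval = np.linspace(0.0, np.pi, 1001)
approx = H_hat(H_tilde, x_design, beta_n=25.0, x_eval=x_eval)
err = np.sqrt(np.trapz((approx - H(x_eval)) ** 2, x_eval))   # L^2 error
```

With n = 400 samples and β_n = 25 (five retained frequencies), the noise averages out in each coefficient and the L² error stays small.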

Lemma 2.1. Define the following space of functions:

$$
Z_{\gamma,B}(\Omega) := \Big\{ f \in L^2(\Omega) : \sum_{p=1}^{\infty} p^{2+2\gamma}\, e^{2Bp^2}\,\big\langle f, \psi_p\big\rangle^2_{L^2(\Omega)} < +\infty \Big\}, \tag{2.7}
$$

for any γ ≥ 0 and B ≥ 0. Define also the operator P = A_1Δ, and define P_{ρ_n} by

$$
P_{\rho_n}(v) = A_1 \sum_{p \le \sqrt{\rho_n/A_1}} p^2\,\big\langle v(x), \psi_p\big\rangle_{L^2(\Omega)}\,\psi_p, \tag{2.8}
$$

for any function v ∈ L²(Ω). Then for any v ∈ L²(Ω),

$$
\|P_{\rho_n}(v)\|_{L^2(\Omega)} \le \rho_n\,\|v\|_{L^2(\Omega)}, \tag{2.9}
$$

and for v ∈ Z_{γ,TA_1}(Ω),

$$
\|Pv - P_{\rho_n}v\|_{L^2(\Omega)} \le A_1\,\rho_n^{-\gamma}\, e^{-T\rho_n}\,\|v\|_{Z_{\gamma,TA_1}(\Omega)}. \tag{2.10}
$$
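In the sine basis, P_{ρ_n} acts diagonally: it multiplies the p-th coefficient of v by A_1 p² for p ≤ √(ρ_n/A_1) and discards the rest, so the bound (2.9) follows from Parseval's identity and is easy to check numerically. The sketch below (our own; the values of A_1, ρ_n and the random coefficient vector are arbitrary) does exactly that:

```python
import numpy as np

# Numerical check of the bound (2.9): the truncated operator P_{rho_n}
# scales coefficient p by A_1 * p^2 <= rho_n for the retained frequencies,
# so its operator norm on L^2 is at most rho_n.
rng = np.random.default_rng(2)

A1 = 2.0
rho_n = 50.0
cutoff = int(np.sqrt(rho_n / A1))        # largest retained frequency

p = np.arange(1, 201)
coeffs = rng.standard_normal(200)         # coefficients <v, psi_p>, p = 1..200
multiplier = np.where(p <= cutoff, A1 * p ** 2, 0.0)
P_rho_v = multiplier * coeffs             # coefficients of P_{rho_n} v

# By Parseval, the L^2 norms equal the Euclidean norms of the coefficients.
ratio = np.linalg.norm(P_rho_v) / np.linalg.norm(coeffs)
```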


Proof. First, for any v ∈ L²(Ω), we have

$$
\|P_{\rho_n}(v)\|^2_{L^2(\Omega)} = A_1^2 \sum_{p \le \sqrt{\rho_n/A_1}} p^4\,\big\langle v(x), \psi_p\big\rangle^2_{L^2(\Omega)}
\le \rho_n^2 \sum_{p \le \sqrt{\rho_n/A_1}} \big\langle v(x), \psi_p\big\rangle^2_{L^2(\Omega)} = \rho_n^2\,\|v\|^2_{L^2(\Omega)}, \tag{2.11}
$$

and

$$
\|Pv - P_{\rho_n}(v)\|^2_{L^2(\Omega)} = A_1^2 \sum_{p > \sqrt{\rho_n/A_1}} p^{-4\gamma}\, e^{-2TA_1 p^2}\, p^{4+4\gamma}\, e^{2TA_1 p^2}\,\big\langle v(x), \psi_p\big\rangle^2_{L^2(\Omega)}
$$
$$
\le A_1^2\,\rho_n^{-2\gamma}\, e^{-2T\rho_n} \sum_{p > \sqrt{\rho_n/A_1}} p^{4+4\gamma}\, e^{2TA_1 p^2}\,\big\langle v(x), \psi_p\big\rangle^2_{L^2(\Omega)}
= A_1^2\,\rho_n^{-2\gamma}\, e^{-2T\rho_n}\,\|v\|^2_{Z_{\gamma,TA_1}(\Omega)}. \tag{2.12}
$$

□

Now, we can assume that Â_{β_n}(x, t), A(x, t) ≤ A_0 for all (x, t) ∈ Ω × (0, T), and we choose A_1 > A_0. We describe our regularized problem by defining the following:

$$
\begin{cases}
\dfrac{\partial \tilde U_{\rho_n,\beta_n}}{\partial t} - \Big(\hat A_{\beta_n}(x,t)\,\dfrac{\partial \tilde U_{\rho_n,\beta_n}}{\partial x}\Big)_x - P\tilde U_{\rho_n,\beta_n} + P_{\rho_n}\tilde U_{\rho_n,\beta_n}
= F^{0}_{\hat Q_n}\Big(\tilde U_{\rho_n,\beta_n},\, \dfrac{\partial \tilde U_{\rho_n,\beta_n}}{\partial x}\Big) + \hat G_{\beta_n}(x,t), & 0 < t < T,\\[4pt]
\tilde U_{\rho_n,\beta_n}(x,t) = 0, & x \in \partial\Omega,\\[2pt]
\tilde U_{\rho_n,\beta_n}(x,T) = \hat H_{\beta_n}(x). &
\end{cases}
\tag{2.13}
$$

Here Â_{β_n} is defined by

$$
\hat A_{\beta_n}(x,t) = \sum_{p\in W_{\beta_n}} \Big[\frac{\pi}{n}\sum_{k=1}^{n} \tilde A_k(t)\,\psi_p(x_k)\Big]\psi_p(x), \tag{2.14}
$$

where $\psi_p(x) = \sqrt{2/\pi}\,\sin(px)$. As noted above, the function F(u, u_x) = uu_x in the first equation of problem (1.1) is a locally Lipschitz function; it is approximated by the function $F^{0}_{\hat Q_n}\big(\tilde U_{\rho_n,\beta_n}, \partial \tilde U_{\rho_n,\beta_n}/\partial x\big)$ in the first equation of problem (2.13), where

$$
F^{0}_{\hat Q_n}(v, \hat v) := V\hat V
$$

and

$$
V := \begin{cases} -\hat Q_n, & \text{if } v \in (-\infty, -\hat Q_n),\\ v, & \text{if } v \in [-\hat Q_n, \hat Q_n],\\ \hat Q_n, & \text{if } v \in (\hat Q_n, +\infty), \end{cases}
\qquad
\hat V := \begin{cases} -\hat Q_n, & \text{if } \hat v \in (-\infty, -\hat Q_n),\\ \hat v, & \text{if } \hat v \in [-\hat Q_n, \hat Q_n],\\ \hat Q_n, & \text{if } \hat v \in (\hat Q_n, +\infty). \end{cases}
\tag{2.15}
$$

Here Q̂_n is an increasing function of n with lim_{n→+∞} Q̂_n = +∞, and for sufficiently large n we have

$$
\hat Q_n \ge \max\big( \|u\|_{L^\infty((0,T);L^2(\Omega))},\ \|u_x\|_{L^\infty((0,T);L^2(\Omega))} \big).
$$

We show that $F^{0}_{\hat Q_n}$ is a globally Lipschitz function in the following lemma.

Lemma 2.2. For any (v, v̂) ∈ ℝ², (w, ŵ) ∈ ℝ²,

$$
\big|F^{0}_{\hat Q_n}(v, \hat v) - F^{0}_{\hat Q_n}(w, \hat w)\big| \le \hat Q_n\big(|v - w| + |\hat v - \hat w|\big). \tag{2.16}
$$

Proof. We divide the proof into five cases.

Case 1. If max{v, v̂} < −Q̂_n and max{w, ŵ} < −Q̂_n, then it is easy to see that $F^{0}_{\hat Q_n}(v, \hat v) - F^{0}_{\hat Q_n}(w, \hat w) = 0$.

Case 2. If max{v, v̂} < −Q̂_n ≤ max{w, ŵ} ≤ Q̂_n, then, using the triangle inequality, we get the following.

(a) If −Q̂_n ≤ w, ŵ ≤ Q̂_n, then
$$
\big|F^{0}_{\hat Q_n}(v, \hat v) - F^{0}_{\hat Q_n}(w, \hat w)\big| = \big|\hat Q_n^2 - w\hat w\big| = \big|\hat Q_n(\hat Q_n + w) - w(\hat Q_n + \hat w)\big|
\le \hat Q_n|w + \hat Q_n| + |w|\,|\hat w + \hat Q_n|
\le \hat Q_n\big(|w + \hat Q_n| + |\hat w + \hat Q_n|\big) \le \hat Q_n\big(|w - v| + |\hat w - \hat v|\big).
$$

(b) If w < −Q̂_n ≤ ŵ ≤ Q̂_n, then
$$
\big|F^{0}_{\hat Q_n}(v, \hat v) - F^{0}_{\hat Q_n}(w, \hat w)\big| = \big|\hat Q_n^2 + \hat Q_n\hat w\big| = \hat Q_n\big|\hat Q_n + \hat w\big|
\le \hat Q_n|\hat v - \hat w| \le \hat Q_n\big(|v - w| + |\hat v - \hat w|\big).
$$

(c) If ŵ < −Q̂_n ≤ w ≤ Q̂_n, then
$$
\big|F^{0}_{\hat Q_n}(v, \hat v) - F^{0}_{\hat Q_n}(w, \hat w)\big| = \big|\hat Q_n^2 + \hat Q_n w\big| = \hat Q_n\big|\hat Q_n + w\big|
\le \hat Q_n|v - w| \le \hat Q_n\big(|v - w| + |\hat v - \hat w|\big).
$$

Case 3. If max{v, v̂} < −Q̂_n < Q̂_n ≤ max{w, ŵ}, then
$$
\big|F^{0}_{\hat Q_n}(v, \hat v) - F^{0}_{\hat Q_n}(w, \hat w)\big| = \big|\hat Q_n^2 - \hat Q_n^2\big| = 0.
$$

Case 4. If −Q̂_n < max{v, v̂}, max{w, ŵ} ≤ Q̂_n, then
$$
\big|F^{0}_{\hat Q_n}(v, \hat v) - F^{0}_{\hat Q_n}(w, \hat w)\big| = \big|v\hat v - w\hat w\big| = \big|(v - w)\hat v + w(\hat v - \hat w)\big|
\le |\hat v|\,|v - w| + |w|\,|\hat v - \hat w| \le \hat Q_n\big(|v - w| + |\hat v - \hat w|\big).
$$

Case 5. The case max{v, v̂} > Q̂_n and max{w, ŵ} > Q̂_n is proved similarly to the four cases above.

Combining all the cases above completes the proof of Lemma 2.2. □
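The cutoff nonlinearity (2.15) simply clamps both arguments to [−Q̂_n, Q̂_n] and multiplies the results, so the Lipschitz bound (2.16) can also be probed numerically. The sketch below (our own; the value of Q̂_n and the random test pairs are arbitrary) checks the bound on a large random sample:

```python
import numpy as np

# Sketch of the cutoff nonlinearity (2.15): V clamps its argument to
# [-Q_n, Q_n], and F0(v, vhat) = V * Vhat. We then spot-check the global
# Lipschitz bound (2.16) on random pairs (illustrative, not a proof).
rng = np.random.default_rng(3)
Q_n = 3.0

def F0(v, vhat, Q=Q_n):
    V = np.clip(v, -Q, Q)
    Vhat = np.clip(vhat, -Q, Q)
    return V * Vhat

pairs = 10.0 * rng.standard_normal((10000, 4))  # columns: v, vhat, w, what
v, vhat, w, what = pairs.T
lhs = np.abs(F0(v, vhat) - F0(w, what))
rhs = Q_n * (np.abs(v - w) + np.abs(vhat - what))
```

The bound holds because clamping is 1-Lipschitz and both clamped factors are bounded by Q̂_n, exactly as Lemma 2.2 argues case by case.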

3. Regularized solutions for the backward problem for Burgers' equation

Our main result in this paper is stated as follows.

Theorem 3.1. Let the functions H ∈ H^{µ_0}(Ω) and A, G ∈ L^∞(0, T; H^{µ_0}(Ω)), for µ_0 > 1/2. Then problem (2.13) has a unique solution $\tilde U_{\rho_n,\beta_n} \in C([0,T]; L^2(\Omega))$. Assume that problem (1.1) has a unique solution $u \in L^\infty\big(0, T; Z_{\gamma,TA_1}(\Omega)\big)$. Let us choose Q̂_n such that

$$
\lim_{n\to+\infty} \exp\Big(\frac{16|\hat Q_n|^2 T}{A_1 - A_0}\Big)\,\max\Big(e^{2\rho_n T}\beta_n^{1/2} n^{-4\mu_0},\; e^{2\rho_n T}\beta_n^{-\mu_0},\; \rho_n^{-2\gamma}\Big) = 0. \tag{3.17}
$$

Then, for n large enough, $\mathbf{E}\|\tilde U_{\rho_n,\beta_n}(\cdot,t) - u(\cdot,t)\|^2_{L^2(\Omega)}$ is of order

$$
\exp\Big(\frac{16|\hat Q_n|^2 T}{A_1 - A_0}\Big)\, e^{-2\kappa_n t}\,\max\Big(e^{2\rho_n T}\beta_n^{1/2} n^{-4\mu_0},\; e^{2\rho_n T}\beta_n^{-\mu_0},\; \rho_n^{-2\gamma}\Big). \tag{3.18}
$$
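Condition (3.17) leaves the choice of β_n, ρ_n and Q̂_n open. One admissible choice — our own, not taken from the paper — is β_n = n, ρ_n = (µ_0/(4T)) ln n and |Q̂_n|² = ((A_1 − A_0)/(16T)) ln ln n, for which the exponential prefactor grows only like ln n while every term inside the max decays. The sketch below evaluates the resulting bound (3.18) at t = 0 (so the factor e^{−2κ_n t} is 1) for increasing n, with illustrative values µ_0 = γ = T = 1, A_1 = 2, A_0 = 1:

```python
import numpy as np

# Numerical illustration (our own parameter choice, not the paper's) that
# the error bound (3.18) tends to zero under condition (3.17).
mu0, gamma, T = 1.0, 1.0, 1.0
A1, A0 = 2.0, 1.0

def bound(n):
    beta = float(n)
    rho = mu0 / (4 * T) * np.log(n)                 # rho_n = (mu0/(4T)) ln n
    Q2 = (A1 - A0) / (16 * T) * np.log(np.log(n))   # |Q_n|^2 = c * ln ln n
    prefactor = np.exp(16 * Q2 * T / (A1 - A0))     # equals ln n here
    terms = (np.exp(2 * rho * T) * np.sqrt(beta) * n ** (-4 * mu0),
             np.exp(2 * rho * T) * beta ** (-mu0),
             rho ** (-2 * gamma))
    return prefactor * max(terms)

values = [bound(10 ** k) for k in (3, 6, 9, 12)]
```

With this choice the slowest term is ρ_n^{−2γ} = ((µ_0/(4T)) ln n)^{−2γ}, so the convergence is only logarithmic in n, which is typical for severely ill-posed backward problems.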

Proof. Denote

$$
B(x,t) = A_1 - A(x,t), \qquad B_{\beta_n}(x,t) = A_1 - \hat A_{\beta_n}(x,t). \tag{3.19}
$$

The first equation of problem (1.1) can be written as

$$
\frac{\partial u}{\partial t} + \Big(B_{\beta_n}(x,t)\,\frac{\partial u}{\partial x}\Big)_x
= u u_x + \Big(\big(B_{\beta_n}(x,t) - B(x,t)\big)\,\frac{\partial u}{\partial x}\Big)_x + A_1\Delta u + G(x,t), \tag{3.20}
$$


and the first equation of problem (2.13) can be rewritten as

$$
\frac{\partial \tilde U_{\rho_n,\beta_n}}{\partial t} + \Big(B_{\beta_n}(x,t)\,\frac{\partial \tilde U_{\rho_n,\beta_n}}{\partial x}\Big)_x
= F^{0}_{\hat Q_n}\Big(\tilde U_{\rho_n,\beta_n}, \frac{\partial \tilde U_{\rho_n,\beta_n}}{\partial x}\Big)
+ \Big(\big(B_{\beta_n}(x,t) - B(x,t)\big)\,\frac{\partial \tilde U_{\rho_n,\beta_n}}{\partial x}\Big)_x
+ P_{\rho_n}\tilde U_{\rho_n,\beta_n} + \hat G_{\beta_n}(x,t). \tag{3.21}
$$

For κ_n > 0, we put

$$
Y_{\rho_n,\beta_n}(x,t) = e^{\kappa_n(t-T)}\big[\tilde U_{\rho_n,\beta_n}(x,t) - u(x,t)\big].
$$

Then the last two equations and a simple computation give

$$
\frac{\partial Y_{\rho_n,\beta_n}}{\partial t} + \Big(B_{\beta_n}\,\frac{\partial Y_{\rho_n,\beta_n}}{\partial x}\Big)_x - \kappa_n Y_{\rho_n,\beta_n}
= P_{\rho_n} Y_{\rho_n,\beta_n} - e^{\kappa_n(t-T)}\big(P_{\rho_n} - P\big)u
- e^{\kappa_n(t-T)}\Big(\big(B_{\beta_n}(x,t) - B(x,t)\big)\,\frac{\partial u}{\partial x}\Big)_x
$$
$$
\quad + e^{\kappa_n(t-T)}\Big[F^{0}_{\hat Q_n}\Big(\tilde U_{\rho_n,\beta_n}, \frac{\partial \tilde U_{\rho_n,\beta_n}}{\partial x}\Big) - u u_x\Big]
+ e^{\kappa_n(t-T)}\big[\hat G_{\beta_n}(x,t) - G(x,t)\big],
$$

with $Y_{\rho_n,\beta_n}|_{\partial\Omega} = 0$ and $Y_{\rho_n,\beta_n}(x,T) = \hat H_{\beta_n}(x) - H(x)$.

By taking the inner product of both sides of the last equality with $Y_{\rho_n,\beta_n}$ and noting the identity

$$
\int_\Omega \Big(B_{\beta_n}\,\frac{\partial Y_{\rho_n,\beta_n}}{\partial x}\Big)_x Y_{\rho_n,\beta_n}\,dx
= -\int_\Omega B_{\beta_n}(x,t)\,\Big|\frac{\partial Y_{\rho_n,\beta_n}}{\partial x}\Big|^2 dx,
$$

one deduces that

$$
\frac{1}{2}\frac{d}{dt}\|Y_{\rho_n,\beta_n}(\cdot,t)\|^2_{L^2(\Omega)}
- \int_\Omega B_{\beta_n}(x,t)\,\Big|\frac{\partial Y_{\rho_n,\beta_n}}{\partial x}\Big|^2 dx
- \kappa_n\|Y_{\rho_n,\beta_n}(\cdot,t)\|^2_{L^2(\Omega)}
= \tilde J_{12,n} + \tilde J_{13,n} + \tilde J_{14,n} + \tilde J_{15,n} + \tilde J_{16,n},
$$

where

$$
\tilde J_{12,n} := \big\langle P_{\rho_n} Y_{\rho_n,\beta_n}, Y_{\rho_n,\beta_n}\big\rangle_{L^2(\Omega)}, \qquad
\tilde J_{13,n} := e^{\kappa_n(t-T)}\big\langle \big(P_{\rho_n} - P\big)u, Y_{\rho_n,\beta_n}\big\rangle_{L^2(\Omega)},
$$
$$
\tilde J_{14,n} := -e^{\kappa_n(t-T)}\Big\langle \Big(\big(B_{\beta_n} - B\big)\,\frac{\partial u}{\partial x}\Big)_x, Y_{\rho_n,\beta_n}\Big\rangle_{L^2(\Omega)}, \qquad
\tilde J_{15,n} := e^{\kappa_n(t-T)}\Big\langle F^{0}_{\hat Q_n}\Big(\tilde U_{\rho_n,\beta_n}, \frac{\partial \tilde U_{\rho_n,\beta_n}}{\partial x}\Big) - u u_x, Y_{\rho_n,\beta_n}\Big\rangle_{L^2(\Omega)},
$$
$$
\tilde J_{16,n} := e^{\kappa_n(t-T)}\big\langle \hat G_{\beta_n}(\cdot,t) - G(\cdot,t), Y_{\rho_n,\beta_n}\big\rangle_{L^2(\Omega)}.
$$

For $\tilde J_{12,n}$, we have

$$
\big|\tilde J_{12,n}\big| \le \|P_{\rho_n} Y_{\rho_n,\beta_n}\|_{L^2(\Omega)}\,\|Y_{\rho_n,\beta_n}(\cdot,t)\|_{L^2(\Omega)} \le \rho_n\,\|Y_{\rho_n,\beta_n}(\cdot,t)\|^2_{L^2(\Omega)}, \tag{3.22}
$$

where we used inequality (2.9). For $\tilde J_{13,n}$, using the Cauchy–Schwarz inequality and (2.10), we have the upper bound

$$
\big|\tilde J_{13,n}\big| \le \frac{1}{2}\,e^{2\kappa_n(t-T)}\,A_1^2\,\rho_n^{-2\gamma}\,e^{-2T\rho_n}\,\|u\|^2_{L^\infty(0,T;Z_{\gamma,TA_1}(\Omega))} + \frac{1}{2}\,\|Y_{\rho_n,\beta_n}(\cdot,t)\|^2_{L^2(\Omega)}. \tag{3.23}
$$


The Cauchy–Schwarz inequality leads to the following estimate, after integration by parts:

$$
\big|\tilde J_{14,n}\big| = \Big| e^{\kappa_n(t-T)}\Big\langle \big(B_{\beta_n}(x,t) - B(x,t)\big)\,\frac{\partial u}{\partial x},\, \frac{\partial Y_{\rho_n,\beta_n}}{\partial x}\Big\rangle_{L^2(\Omega)}\Big|
\le \frac{e^{2\kappa_n(t-T)}}{2(A_1 - A_0)}\,\|B_{\beta_n}(\cdot,t) - B(\cdot,t)\|^2_{L^2(\Omega)}\,\|u(\cdot,t)\|^2_{H^1_0(\Omega)}
+ \frac{A_1 - A_0}{2}\,\Big\|\frac{\partial Y_{\rho_n,\beta_n}}{\partial x}\Big\|^2_{L^2(\Omega)}.
$$

For $\tilde J_{15,n}$, we note that $F^{0}_{\hat Q_n}(u, u_x) = u u_x$, and thanks to (2.16) we obtain

$$
\Big\|F^{0}_{\hat Q_n}\Big(\tilde U_{\rho_n,\beta_n}, \frac{\partial \tilde U_{\rho_n,\beta_n}}{\partial x}\Big) - u u_x\Big\|_{L^2(\Omega)}
= \Big\|F^{0}_{\hat Q_n}\Big(\tilde U_{\rho_n,\beta_n}, \frac{\partial \tilde U_{\rho_n,\beta_n}}{\partial x}\Big) - F^{0}_{\hat Q_n}(u, u_x)\Big\|_{L^2(\Omega)}
$$
$$
\le \hat Q_n\Big(\big\|\tilde U_{\rho_n,\beta_n} - u\big\|_{L^2(\Omega)} + \Big\|\frac{\partial \tilde U_{\rho_n,\beta_n}}{\partial x} - u_x\Big\|_{L^2(\Omega)}\Big)
= e^{\kappa_n(T-t)}\,\hat Q_n\Big(\|Y_{\rho_n,\beta_n}\|_{L^2(\Omega)} + \Big\|\frac{\partial Y_{\rho_n,\beta_n}}{\partial x}\Big\|_{L^2(\Omega)}\Big)
\le 2\,e^{\kappa_n(T-t)}\,\hat Q_n\,\Big\|\frac{\partial Y_{\rho_n,\beta_n}}{\partial x}\Big\|_{L^2(\Omega)},
$$

where we note that $\|Y_{\rho_n,\beta_n}\|_{L^2(\Omega)} \le \|\partial Y_{\rho_n,\beta_n}/\partial x\|_{L^2(\Omega)}$. This implies that

$$
\big|\tilde J_{15,n}\big| = \Big| e^{\kappa_n(t-T)}\Big\langle F^{0}_{\hat Q_n}\Big(\tilde U_{\rho_n,\beta_n}, \frac{\partial \tilde U_{\rho_n,\beta_n}}{\partial x}\Big) - u u_x,\, Y_{\rho_n,\beta_n}\Big\rangle_{L^2(\Omega)}\Big|
$$
$$
\le e^{2\kappa_n(t-T)}\,\frac{A_1 - A_0}{8|\hat Q_n|^2}\,\Big\|F^{0}_{\hat Q_n}\Big(\tilde U_{\rho_n,\beta_n}, \frac{\partial \tilde U_{\rho_n,\beta_n}}{\partial x}\Big) - u u_x\Big\|^2_{L^2(\Omega)}
+ \frac{8|\hat Q_n|^2}{A_1 - A_0}\,\|Y_{\rho_n,\beta_n}(\cdot,t)\|^2_{L^2(\Omega)}
\le \frac{A_1 - A_0}{2}\,\Big\|\frac{\partial Y_{\rho_n,\beta_n}}{\partial x}\Big\|^2_{L^2(\Omega)}
+ \frac{8|\hat Q_n|^2}{A_1 - A_0}\,\|Y_{\rho_n,\beta_n}(\cdot,t)\|^2_{L^2(\Omega)}.
$$

The term $\big|\tilde J_{16,n}\big|$ can be bounded by

$$
\big|\tilde J_{16,n}\big| = \big| e^{\kappa_n(t-T)}\big\langle \hat G_{\beta_n}(\cdot,t) - G(\cdot,t),\, Y_{\rho_n,\beta_n}\big\rangle_{L^2(\Omega)}\big|
\le \frac{1}{2}\,e^{2\kappa_n(t-T)}\,\big\|\hat G_{\beta_n}(\cdot,t) - G(\cdot,t)\big\|^2_{L^2(\Omega)} + \frac{1}{2}\,\|Y_{\rho_n,\beta_n}\|^2_{L^2(\Omega)}.
$$

Combining all the previous estimates, we get

$$
\frac{d}{dt}\|Y_{\rho_n,\beta_n}(\cdot,t)\|^2_{L^2(\Omega)}
- 2\int_\Omega B_{\beta_n}(x,t)\,\Big|\frac{\partial Y_{\rho_n,\beta_n}}{\partial x}\Big|^2 dx
- 2\kappa_n\|Y_{\rho_n,\beta_n}(\cdot,t)\|^2_{L^2(\Omega)}
$$
$$
\ge -2\rho_n\,\|Y_{\rho_n,\beta_n}(\cdot,t)\|^2_{L^2(\Omega)}
- e^{2\kappa_n(t-T)}\,A_1^2\,\rho_n^{-2\gamma}\,e^{-2T\rho_n}\,\|u\|^2_{L^\infty(0,T;Z_{\gamma,TA_1}(\Omega))}
- \|Y_{\rho_n,\beta_n}(\cdot,t)\|^2_{L^2(\Omega)}
- \frac{\|B_{\beta_n}(\cdot,t) - B(\cdot,t)\|^2_{L^2(\Omega)}}{A_1 - A_0}\,\|u(\cdot,t)\|^2_{H^1_0(\Omega)}
$$
$$
\quad - 2(A_1 - A_0)\,\Big\|\frac{\partial Y_{\rho_n,\beta_n}}{\partial x}\Big\|^2_{L^2(\Omega)}
- \frac{16|\hat Q_n|^2}{A_1 - A_0}\,\|Y_{\rho_n,\beta_n}(\cdot,t)\|^2_{L^2(\Omega)}
- e^{2\kappa_n(t-T)}\,\big\|\hat G_{\beta_n}(\cdot,t) - G(\cdot,t)\big\|^2_{L^2(\Omega)}
- \|Y_{\rho_n,\beta_n}\|^2_{L^2(\Omega)}.
$$


Taking the integral from t to T, a simple calculation yields

$$
\|Y_{\rho_n,\beta_n}(\cdot,T)\|^2_{L^2(\Omega)} - \|Y_{\rho_n,\beta_n}(\cdot,t)\|^2_{L^2(\Omega)}
+ \int_t^T \Big( A_1^2\,\rho_n^{-2\gamma}\,e^{-2T\rho_n}\,\|u\|^2_{L^\infty(0,T;Z_{\gamma,TA_1}(\Omega))}
+ \frac{\|B_{\beta_n}(\cdot,s) - B(\cdot,s)\|^2_{L^2(\Omega)}}{A_1 - A_0}\,\|u\|^2_{L^\infty((0,T);H^1_0(\Omega))} \Big)\,ds
$$
$$
\ge 2\int_t^T\!\!\int_\Omega \big(B_{\beta_n}(x,s) - (A_1 - A_0)\big)\,\Big|\frac{\partial Y_{\rho_n,\beta_n}}{\partial x}\Big|^2 dx\,ds
+ \int_t^T \Big(2\kappa_n - 2\rho_n - \frac{16|\hat Q_n|^2}{A_1 - A_0} - 2\Big)\,\|Y_{\rho_n,\beta_n}(\cdot,s)\|^2_{L^2(\Omega)}\,ds
- T\,e^{2\kappa_n(t-T)}\,\big\|\hat G_{\beta_n} - G\big\|^2_{L^\infty(0,T;L^2(\Omega))}
$$
$$
\ge \int_t^T \Big(2\kappa_n - 2\rho_n - \frac{16|\hat Q_n|^2}{A_1 - A_0} - 2\Big)\,\|Y_{\rho_n,\beta_n}(\cdot,s)\|^2_{L^2(\Omega)}\,ds
- T\,e^{2\kappa_n(t-T)}\,\big\|\hat G_{\beta_n} - G\big\|^2_{L^\infty(0,T;L^2(\Omega))},
$$

where we used the fact that

$$
B_{\beta_n}(x,s) = A_1 - \hat A_{\beta_n}(x,s) \ge A_1 - A_0.
$$

Let us choose κ_n = ρ_n; then we obtain

$$
e^{2\kappa_n(t-T)}\,\mathbf{E}\|\tilde U_{\rho_n,\beta_n}(\cdot,t) - u(\cdot,t)\|^2_{L^2(\Omega)}
\le \mathbf{E}\|\hat H_{\beta_n} - H\|^2_{L^2(\Omega)}
+ T A_1^2\,\rho_n^{-2\gamma}\,e^{-2T\rho_n}\,\|u\|^2_{L^\infty(0,T;Z_{\gamma,TA_1}(\Omega))}
+ T\,\mathbf{E}\big\|\hat G_{\beta_n} - G\big\|^2_{L^\infty(0,T;L^2(\Omega))}
$$
$$
\quad + \frac{\mathbf{E}\|B_{\beta_n} - B\|^2_{L^\infty(0,T;L^2(\Omega))}}{A_1 - A_0}\,\|u\|^2_{L^\infty((0,T);H^1_0(\Omega))}
+ \Big(\frac{16|\hat Q_n|^2}{A_1 - A_0} + 2\Big)\int_t^T e^{2\kappa_n(s-T)}\,\mathbf{E}\|\tilde U_{\rho_n,\beta_n}(\cdot,s) - u(\cdot,s)\|^2_{L^2(\Omega)}\,ds.
$$

Multiplying both sides of the last inequality by $e^{2\kappa_n T}$, we obtain

$$
e^{2\kappa_n t}\,\mathbf{E}\|\tilde U_{\rho_n,\beta_n}(\cdot,t) - u(\cdot,t)\|^2_{L^2(\Omega)}
\le e^{2\kappa_n T}\,\mathbf{E}\|\hat H_{\beta_n} - H\|^2_{L^2(\Omega)}
+ T A_1^2\,\rho_n^{-2\gamma}\,\|u\|^2_{L^\infty(0,T;Z_{\gamma,TA_1}(\Omega))}
+ T\,e^{2\kappa_n T}\,\mathbf{E}\big\|\hat G_{\beta_n} - G\big\|^2_{L^\infty(0,T;L^2(\Omega))}
$$
$$
\quad + e^{2\kappa_n T}\,\frac{\mathbf{E}\|B_{\beta_n} - B\|^2_{L^\infty(0,T;L^2(\Omega))}}{A_1 - A_0}\,\|u\|^2_{L^\infty((0,T);H^1_0(\Omega))}
+ \Big(\frac{16|\hat Q_n|^2}{A_1 - A_0} + 2\Big)\int_t^T e^{2s\kappa_n}\,\mathbf{E}\|\tilde U_{\rho_n,\beta_n}(\cdot,s) - u(\cdot,s)\|^2_{L^2(\Omega)}\,ds.
$$

Applying Gronwall's inequality, we deduce that

$$
\mathbf{E}\|\tilde U_{\rho_n,\beta_n}(\cdot,t) - u(\cdot,t)\|^2_{L^2(\Omega)}
\le \exp\Big(\frac{16|\hat Q_n|^2 (T - t)}{A_1 - A_0} + 2(T - t)\Big)\, e^{-2\kappa_n t}\, B', \tag{3.24}
$$

where

$$
B' = e^{2\kappa_n T}\,\mathbf{E}\|\hat H_{\beta_n} - H\|^2_{L^2(\Omega)}
+ T A_1^2\,\rho_n^{-2\gamma}\,\|u\|^2_{L^\infty(0,T;Z_{\gamma,TA_1}(\Omega))}
+ T\,e^{2\kappa_n T}\,\mathbf{E}\big\|\hat G_{\beta_n} - G\big\|^2_{L^\infty(0,T;L^2(\Omega))}
+ e^{2\kappa_n T}\,\frac{\mathbf{E}\|B_{\beta_n} - B\|^2_{L^\infty(0,T;L^2(\Omega))}}{A_1 - A_0}\,\|u\|^2_{L^\infty((0,T);H^1_0(\Omega))}.
$$

Thanks to Theorem 2.1, the quantity

$$
\mathbf{E}\big\|\hat H_{\beta_n} - H\big\|^2_{L^2(\Omega)}
+ T\,\mathbf{E}\big\|\hat G_{\beta_n} - G\big\|^2_{L^\infty(0,T;L^2(\Omega))}
+ \mathbf{E}\|B_{\beta_n} - B\|^2_{L^\infty(0,T;L^2(\Omega))}
$$


is of order $\max\big(\beta_n^{1/2} n^{-4\mu_0},\ \beta_n^{-\mu_0}\big)$ for any µ_0 > 1/2. This, together with (3.24), implies that $\mathbf{E}\|\tilde U_{\rho_n,\beta_n}(\cdot,t) - u(\cdot,t)\|^2_{L^2(\Omega)}$ is of order

$$
\exp\Big(\frac{16|\hat Q_n|^2 T}{A_1 - A_0}\Big)\, e^{-2\kappa_n t}\,\max\Big(e^{2\rho_n T}\beta_n^{1/2} n^{-4\mu_0},\; e^{2\rho_n T}\beta_n^{-\mu_0},\; \rho_n^{-2\gamma}\Big). \qquad\square \tag{3.25}
$$

Remark 3.1. (1) In Theorem 3.1, to obtain the error estimate we require strong assumptions on u. This is a limitation of Theorem 3.1, since not many functions u satisfy these conditions, and the conditions are difficult to observe and check in practice. To remove this limitation, we need to find a new estimator; obtaining a convergence rate under weak assumptions on u is a nontrivial problem. In future work, we plan to apply the method in Tuan et al. (2017) to present regularization results under a weaker assumption on u, for example u ∈ C([0, T]; L²(Ω)).

(2) Using the ideas in Kirane et al. (2017a), our error estimates also give the norm of the difference between the regularized solution and the sought solution in H¹(Ω).

Acknowledgments

The authors wish to thank the handling editor and the anonymous referees for their helpful comments on this paper.

References

Carasso, A., 1977. Computing small solutions of Burgers' equation backwards in time. J. Math. Anal. Appl. 59 (1), 169–209.
Eubank, R.L., 1999. Nonparametric Regression and Spline Smoothing, second ed. Statistics: Textbooks and Monographs, vol. 157. Marcel Dekker, New York.
Hao, D.N., Duc, N.V., Thang, N.V., 2015. Stability estimates for Burgers-type equations backward in time. J. Inverse Ill-Posed Probl. 23 (1), 41–49.
Kirane, M., Nane, E., Tuan, N.H., 2017a. On a backward problem for multidimensional Ginzburg–Landau equation with random data. Submitted for publication. URL: https://arxiv.org/abs/1702.03024.
Kirane, M., Nane, E., Tuan, N.H., 2017b. Regularized solutions for some backward nonlinear partial differential equations with statistical data. Submitted for publication. URL: https://arxiv.org/abs/1701.08459.
Kirsch, A., 2011. An Introduction to the Mathematical Theory of Inverse Problems, second ed. Applied Mathematical Sciences, vol. 120. Springer, New York.
Kukavica, I., 2007. Log–log convexity and backward uniqueness. Proc. Amer. Math. Soc. 135, 2415–2421.
Lattès, R., Lions, J.-L., 1967. Méthode de Quasi-réversibilité et Applications. Dunod, Paris.
Trong, D.D., Khanh, T.D., Tuan, N.H., Minh, N.D., 2015. Nonparametric regression in a statistical modified Helmholtz equation using the Fourier spectral regularization. Statistics 49 (2), 267–290.
Tuan, N.H., Kirane, M., Bessem, S., Au, V.V., 2017. A new Fourier truncated regularization method for semilinear backward parabolic problems. Acta Appl. Math. 148, 143–155.
Tuan, N.H., Nane, E., 2017. Inverse source problem for time fractional diffusion with discrete random noise. Statist. Probab. Lett. 120, 126–134.
