A globally convergent numerical method for a coefficient inverse problem for a parabolic equation

Journal of Computational and Applied Mathematics 289 (2015) 153–172

Onur Baysal
Department of Mathematics and Computer Science, İzmir University, İzmir, Turkey

Article history: Received 1 September 2014; received in revised form 28 January 2015.

Keywords: Multidimensional coefficient inverse problem; Parabolic equation; Globally convergent algorithm; Beilina–Klibanov method

Abstract. In this work a Multidimensional Coefficient Inverse Problem (MCIP) for a parabolic PDE with data resulting from a single measurement event is considered. This measured data is obtained using a single position of the point source. The most important property of the method presented here is that, even though we do not need any advance knowledge of a small neighborhood of the solution, we still obtain points in this neighborhood. This is the reason why the numerical algorithm for this method is called globally convergent. In the literature, a globally convergent numerical method for an MCIP with single measurement data has previously been considered in the book (Beilina and Klibanov, 2012) and some other publications of Beilina and Klibanov. In those publications a globally convergent algorithm for an MCIP for a hyperbolic PDE was developed. Here we modify their technique to prove the global convergence property of their method for the parabolic case. © 2015 Elsevier B.V. All rights reserved.

1. Introduction

Multidimensional Coefficient Inverse Problems (MCIPs) have been a popular area of research over the last several decades, because they have many applications. MCIPs are generally difficult problems, since they are both nonlinear and ill-posed. In realistic applications the minimal amount of available information can be seen as a third complicating factor: in this case only a single measurement event is considered, and the data are not redundant. The nonlinearity and the ill-posedness combined cause very serious challenges in numerical treatments of MCIPs. Indeed, conventional least squares cost functionals for MCIPs suffer from the phenomenon of multiple local minima and ravines. This means that any method of minimization of such a functional must start in a small neighborhood of the exact solution, i.e. one needs an a priori available good first guess for this solution. However, such a guess is rarely available in applications. It is, of course, impossible to invent a numerical method that would satisfy two requirements simultaneously: (a) to deliver a good approximation of the exact solution regardless of the availability of a good first guess, and (b) to work for all MCIPs. However, it might be possible to invent a method which satisfies requirement (a) for at least one MCIP. The latter was done in the works of Beilina and Klibanov in 2008–2014 [1–8]. Furthermore, their method was completely verified on experimental data [3–8]. Recently, Chow and Zou [9] have studied the Beilina–Klibanov method numerically for an MCIP for a hyperbolic equation which is different from the one studied in [1–8]. Rephrasing the concept of global convergence of Beilina and Klibanov, we call a numerical method for an MCIP globally convergent if there is a rigorous guarantee of delivering at least some points in a small neighborhood of the exact solution without any advance knowledge of this neighborhood.
Of course, the stability of this method should also be a part of that rigorous guarantee.

E-mail address: [email protected]
doi: 10.1016/j.cam.2015.02.029
0377-0427/© 2015 Elsevier B.V. All rights reserved.


However, the Beilina–Klibanov method was developed only for MCIPs for a hyperbolic equation. This method was also extended to an MCIP for an elliptic equation; see, e.g., [10] for the work on experimental data. Thus, the goal of the current paper is to extend their technique to the case of an MCIP for a parabolic equation. The critical point of our study is to estimate the so-called tail function first. Next, we present the corresponding approximate mathematical model, and finally we prove the global convergence property of this model.

The outline of the paper is as follows. In Section 2 we introduce the MCIP considered here. In Section 3 we consider some properties of the Laplace transform of the solution of the forward problem. In Sections 4 and 5 the layer stripping procedure and the corresponding algorithm are presented, respectively. In Section 6 we give some estimates in L2 and Hölder norms of the tail function. Using these estimates, the global convergence theorem is proved in Section 7. Results are summarized in Section 8.

2. Statements of forward and inverse problems

Consider the Cauchy problem for the following parabolic equation:



$$u_t = D\Delta u - a(x)u \quad \text{in } \mathbb{R}^3\times(0,\infty), \qquad u(x,0) = \delta(x-x_0) \quad \text{for } x\in\mathbb{R}^3. \tag{1}$$

We assume that D = const. > 0. We also assume that the function a ∈ C^α(ℝ³) and that a(x) has compact support. Here and below C^{k+α} and C^{2k+α, k+α/2} are Hölder spaces, where k ≥ 0 is an integer and α ∈ [0, 1) [11]. The solution u(x,t) of the problem (1) is understood in the classical sense of the fundamental solution of a parabolic equation; see, e.g., [12]. Eq. (1) appears in medical optical imaging applications [13]. In this case the function u(x,t) represents the light amplitude with the location of the light source x = x₀, a(x) = μ_a(x) ≥ 0 is the absorption coefficient of the medium, and D = const. = 1/(3μ′_s) > 0 is the diffusion coefficient, which is inversely proportional to the reduced scattering coefficient μ′_s. Experimental works show that the diffusion coefficient of light can be assumed constant, since it changes slowly in human tissue [14]. Let Ω ⊂ ℝ³ be a bounded domain with boundary ∂Ω ∈ C³, and let b = const. > 0. We assume that the absorption coefficient a(x) of (1) is such that



$$a(x)\in[0,b], \quad a\in C^{\alpha}(\mathbb{R}^3), \quad \alpha\in(0,1), \tag{2}$$
$$a(x) = 0 \quad \text{for } x\in\mathbb{R}^3\setminus\Omega. \tag{3}$$

If a(x) ≡ b = const. ≥ 0, then the solution u_b(x,t) of problem (1) is

$$u_b(x,t) = \frac{1}{(4\pi Dt)^{3/2}}\exp\left(-\frac{|x-x_0|^2}{4Dt} - bt\right). \tag{4}$$
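As a quick numerical sanity check (a sketch, not from the paper; the coefficients D, b and the sample point below are arbitrary choices), one can verify by finite differences that (4) indeed satisfies u_t = DΔu − bu away from x₀:

```python
import math

# Finite-difference check that the closed-form solution (4) satisfies
# u_t = D*laplace(u) - b*u away from x0.  D, b, x, t are arbitrary sample values.
D, b = 1.0, 0.4
x0 = (0.0, 0.0, 0.0)

def u_b(x, t):
    r2 = sum((xi - x0i) ** 2 for xi, x0i in zip(x, x0))
    return math.exp(-r2 / (4 * D * t) - b * t) / (4 * math.pi * D * t) ** 1.5

x, t, d = (0.5, 0.2, 0.1), 0.3, 1e-4

u_t = (u_b(x, t + d) - u_b(x, t - d)) / (2 * d)      # central difference in t
lap = sum(                                            # three 1D second differences
    (u_b(tuple(xi + d * (i == k) for i, xi in enumerate(x)), t)
     - 2 * u_b(x, t)
     + u_b(tuple(xi - d * (i == k) for i, xi in enumerate(x)), t)) / d ** 2
    for k in range(3))

residual = u_t - (D * lap - b * u_b(x, t))
print(abs(residual))  # close to zero, up to finite-difference error
```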

On the other hand, it follows from (2) and (3) that there exists a unique solution u of the forward problem (1) such that the function (u − u₀) ∈ C^{2+α,1+α/2}(ℝ³ × [0, T]), ∀T > 0 [11]. Moreover, we know from Theorem 11 of Chapter 2 of [12] that the fundamental solution of a general parabolic equation is positive for t > 0; hence u(x,t) > 0 for t > 0. We now pose the following inverse problem.

Coefficient Inverse Problem (CIP): Let u(x,t) be the solution of (1), where the unknown absorption coefficient a(x) satisfies conditions (2) and (3). Suppose that D = const. > 0 is a known diffusion coefficient. Determine a(x) in (1) for x ∈ Ω, assuming that the following function g(x,t) is known for a fixed source position x₀ ∉ Ω̄:

$$u(x,t) = g(x,t), \quad \forall(x,t)\in\partial\Omega\times(0,\infty). \tag{5}$$

Here the function g(x,t) represents the time dependent measurements at the boundary of the domain Ω. In practice this function is measured by detectors at a discrete set of points. As is usually assumed in CIPs, these measurements can be interpolated over the surface ∂Ω. Thus, we can assume that the function g(x,t) is known on the entire surface; see, e.g., a similar situation for the case of real measurements in [3–8]. We do not discuss here a specific way of performing that interpolation. Still, we stress that the function g is given with an error; therefore, our method should be stable with respect to this error. Its stability actually follows from the global convergence Theorem 4: see the parameter σ there. However, for brevity, we formulate this stability in terms of the Laplace transform of the function g, keeping in mind that the Laplace transform of g is stable with respect to small variations of g; see (57). We show in the next section that the kernel of the Laplace transform of the solution of (1) decays rapidly as t tends to infinity. Hence, the values of g(x,t) can be omitted for t > T for a sufficiently large finite number T > 0, and so the assumption of the infinite time interval in (1) and (5) is not a serious restriction. A global uniqueness result for this inverse problem is a long-standing open question due to the δ-function in the initial condition in (1). On the other hand, one can consider the following approximation of the δ-function for a sufficiently small number ϵ > 0:

$$u_\epsilon(x,0) = \frac{1}{(\sqrt{\pi}\,\epsilon)^3}\exp\left(-\frac{|x-x_0|^2}{\epsilon^2}\right).$$

In this case the uniqueness theorem is valid [1,15]. Hence, we assume uniqueness of our CIP for our purpose.


In the following section we give a transformation procedure that reduces the inverse problem considered above to a Dirichlet boundary value problem.

3. The transformation procedure for the parabolic case

Below we use the numbers $\underline{s}$, $\overline{s}$ with $0 < \underline{s} < \overline{s}$; they should be sufficiently large. In computational practice, at least for the hyperbolic case, they are chosen numerically [1]. In fact, we use the asymptotic behavior (31) of a certain function with respect to s in order to get the first approximation for the tail function in (62) and (63). Thus, as is always the case with asymptotic behaviors, it is hard to establish analytically precise boundaries for the parameters $\underline{s}$, $\overline{s}$. Nevertheless, it was shown numerically in [9] that images are quite stable with respect to variations of $\overline{s}$, as long as this parameter is sufficiently large; however, images degrade significantly for rather small values of $\overline{s}$ [9]. Consider the Laplace transform of the solution of the Cauchy problem (1):

$$w(x,s) = \mathcal{L}u(x,t) = \int_0^\infty u(x,t)\exp(-s^2 t)\,dt, \quad \forall s\ge\underline{s} = \text{const.} > 0. \tag{6}$$

Here the parameter s is called the pseudo frequency, and we assume that the number $\underline{s}$ is sufficiently large, so that the integral in (6) converges absolutely, together with the corresponding integrals for the derivatives of the function u(x,t). We obtain from (1) and (6) that the function w is the solution of the following problem:

$$D\Delta w - (a(x)+s^2)w = -\delta(x-x_0), \quad x\in\mathbb{R}^3,\ \forall s\ge\underline{s}, \tag{7}$$
$$\lim_{|x|\to\infty} w(x,s) = 0. \tag{8}$$

Fulfillment of the condition (8) follows from Lemma 2. Let ν = const. > 0. Consider the integral formula in Section 4.5 of [16]:

$$\int_0^\infty t^{\nu-1} e^{-\omega/(4t)-s^2 t}\,dt = 2\left(\frac{\omega}{4s^2}\right)^{\nu/2} K_\nu\!\left(s\sqrt{\omega}\right), \tag{9}$$

where K_ν is the Macdonald function. Suppose that a(x) ≡ b = const. ≥ 0. Then, using the fact that the well known formula (4) represents the fundamental solution of the heat equation, as well as formula (9), we obtain that the solution of the problem (7), (8) can be written in this case as

$$w_b(x,s) = \mathcal{L}u_b(x,t) = \frac{1}{4\pi D|x-x_0|}\exp\left(-\frac{|x-x_0|\sqrt{s^2+b}}{\sqrt{D}}\right). \tag{10}$$
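The pair (4), (10) can be cross-checked numerically: applying the transform (6) to u_b by quadrature should reproduce the closed form (10). A minimal sketch (the sample values r, s, D, b are arbitrary choices):

```python
import math

D, b = 1.0, 0.5          # arbitrary sample coefficients

def u_b(t, r):
    # fundamental solution (4) with constant absorption b, r = |x - x0|
    return math.exp(-r * r / (4 * D * t) - b * t) / (4 * math.pi * D * t) ** 1.5

def w_b_closed(s, r):
    # closed-form Laplace image (10)
    return math.exp(-r * math.sqrt(s * s + b) / math.sqrt(D)) / (4 * math.pi * D * r)

def w_b_quadrature(s, r, T=10.0, n=200_000):
    # midpoint rule for (6); the integrand is negligible beyond t = T
    dt = T / n
    return sum(u_b((k + 0.5) * dt, r) * math.exp(-s * s * (k + 0.5) * dt) * dt
               for k in range(n))

r, s = 1.2, 2.0
num, exact = w_b_quadrature(s, r), w_b_closed(s, r)
print(num, exact)  # agree to several digits
```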

It is well known that the function w₀(x,s) is the fundamental solution of the problem (7), (8) corresponding to the case a(x) ≡ 0 [17]. Hence, in the general case of a function a(x) ≢ const. satisfying conditions (2) and (3), and for x₀ ∉ Ω̄, the solution of the problem (7), (8) is understood as w(x,s) = w₀(x,s) + w̄(x,s), where the function w̄(x,s) ∈ C^{2+α}(Ḡ), ∀s ≥ $\underline{s}$, for any bounded domain G ⊂ ℝ³, is the solution of the following problem:

$$D\Delta\overline{w} - (a(x)+s^2)\overline{w} = a(x)w_0, \quad x\in\mathbb{R}^3,\ \forall s\ge\underline{s}, \tag{11}$$
$$\lim_{|x|\to\infty}\overline{w}(x,s) = 0, \quad \forall s\ge\underline{s}. \tag{12}$$

Since x₀ ∉ Ω and a(x) = 0 for x ∈ ℝ³ ∖ Ω, the right hand side of Eq. (11) does not have a singularity at {x = x₀}, and it belongs to C^α(ℝ³), ∀s ≥ $\underline{s}$. The fact that the solution w̄(x,s) ∈ C^{2+α}(ℝ³), ∀s ≥ $\underline{s}$, of the problem (11), (12) exists and is unique follows from Lemma 2 and Theorem 1. Below we use the following notation for the Hölder norm:

$$|f|_{k+\alpha} := \|f\|_{C^{k+\alpha}(\overline{\Omega})}, \quad \forall f\in C^{k+\alpha}(\overline{\Omega}).$$

Lemma 1. Let Ω, Ω₁ ⊂ ℝ³ be two bounded domains such that

$$\Omega\subset\Omega_1, \quad \partial\Omega\cap\partial\Omega_1=\emptyset, \quad \partial\Omega,\ \partial\Omega_1\in C^3, \quad x_0\notin\overline{\Omega}_1. \tag{13}$$

Let the function f(x) be such that f ∈ C^α(Ω̄₁) and f(x) = 0 for x ∈ Ω₁ ∖ Ω. For s > 0 define the function

$$z(x,s) = \int_{\Omega_1} w_0(x+x_0-\xi,\,s)\,f(\xi)\,d\xi. \tag{14}$$

Then z(x,s) ∈ C^{2+α}(ℝ³), ∀s > 0. In addition, there exists a constant C = C(s, Ω, Ω₁) > 0, independent of f, such that ‖z(x,s)‖_{C^{2+α}(ℝ³)} ≤ C|f|_α, ∀s > 0, and the function z(x,s) satisfies

$$D\Delta z - s^2 z = -f \quad \text{in } \mathbb{R}^3, \qquad \lim_{|x|\to\infty} z(x,s) = 0, \quad \forall s>0. \tag{15}$$


Proof. For brevity, we assume in this proof that x₀ = 0; the case x₀ ≠ 0 is completely similar. It follows from (10) that lim_{|x|→∞} w₀(x − ξ, s) = 0 for all ξ ∈ Ω̄₁. Hence, by (14), lim_{|x|→∞} z(x,s) = 0. Now we prove that the function z(x,s) satisfies the rest of condition (15) for the case f ∈ C¹(Ω̄₁). Let x = (x₁, x₂, x₃). If x ∉ Ω̄, then the expression in the integral in (14) does not have a singularity at {x = ξ}. Hence, in this case

$$D\Delta z - s^2 z = \int_{\Omega_1}\left(D\Delta w_0 - s^2 w_0\right)(x-\xi,s)\,f(\xi)\,d\xi = \int_{\Omega_1} 0\cdot f(\xi)\,d\xi = 0.$$

This means that Eq. (15) is valid for x ∉ Ω̄. Consider now the case x ∈ Ω. Using integration by parts and the fact that the function w_{0x_i}(x−ξ,s) has a weak singularity at {x = ξ}, we obtain

$$z_{x_i}(x,s) = \int_{\Omega_1} w_{0x_i}(x-\xi,s)\,f(\xi)\,d\xi = \int_{\Omega_1} w_0(x-\xi,s)\,f_{\xi_i}(\xi)\,d\xi.$$

Hence,

$$z_{x_i x_i}(x,s) = \int_{\Omega_1} w_{0x_i}(x-\xi,s)\,f_{\xi_i}(\xi)\,d\xi. \tag{16}$$

Let ε > 0 be a sufficiently small number. Denote B_ε(x) = {ξ : |x − ξ| < ε} and S_ε(x) = {ξ : |x − ξ| = ε}. Then, using Gauss' formula, we obtain from (16)

$$\begin{aligned}
\Delta z(x,s) &= \sum_{i=1}^{3}\int_{\Omega_1} w_{0x_i}(x-\xi,s)\,f_{\xi_i}(\xi)\,d\xi
= \lim_{\varepsilon\to0}\sum_{i=1}^{3}\int_{\Omega_1\setminus B_\varepsilon} w_{0x_i}(x-\xi,s)\,f_{\xi_i}(\xi)\,d\xi\\
&= -\lim_{\varepsilon\to0}\sum_{i=1}^{3}\int_{S_\varepsilon(x)} w_{0\xi_i}(x-\xi,s)\,f(\xi)\,dS_\varepsilon
+ \lim_{\varepsilon\to0}\int_{\Omega_1\setminus B_\varepsilon}\Delta_x w_0(x-\xi,s)\,f(\xi)\,d\xi\\
&= -\lim_{\varepsilon\to0}\int_{S_\varepsilon(x)}\frac{\partial w_0}{\partial n_\xi}(x-\xi,s)\,f(\xi)\,dS_\varepsilon
+ \lim_{\varepsilon\to0}\int_{\Omega_1\setminus B_\varepsilon}\Delta_x w_0(x-\xi,s)\,f(\xi)\,d\xi,
\end{aligned} \tag{17}$$

where n_ξ is the normal vector to the sphere S_ε(x) pointing towards its center x. Hence, ∂/∂n_ξ = −∂/∂r, where r = |x − ξ|. Consider the first limit in the last line of (17). We have

$$\frac{\partial}{\partial r}\left(\frac{\exp\left(-\frac{sr}{\sqrt{D}}\right)}{4\pi Dr}\right)
= -\frac{\exp\left(-\frac{sr}{\sqrt{D}}\right)}{4\pi Dr^2} - \frac{s\exp\left(-\frac{sr}{\sqrt{D}}\right)}{4\pi D^{3/2}r}.$$

Hence, for ξ ∈ S_ε(x),

$$\frac{\partial w_0}{\partial n_\xi}(x-\xi,s) = \frac{1}{4\pi D\varepsilon^2}\exp\left(-\frac{s\varepsilon}{\sqrt{D}}\right)\left(1+O(\varepsilon)\right), \quad \varepsilon\to0.$$

Hence, in spherical coordinates with the center at {x},

$$\begin{aligned}
-\lim_{\varepsilon\to0}\int_{S_\varepsilon(x)}\frac{\partial w_0}{\partial n_\xi}(x-\xi,s)\,f(\xi)\,dS_\varepsilon
&= -\frac{1}{4\pi D}\lim_{\varepsilon\to0}\int_0^{2\pi}\!\!\int_0^{\pi} f(x_1+\varepsilon\cos\varphi\sin\theta,\ x_2+\varepsilon\sin\varphi\sin\theta,\ x_3+\varepsilon\cos\theta)\sin\theta\,d\theta\,d\varphi\\
&= -\frac{f(x)}{D}.
\end{aligned}$$

Hence, using (17), we obtain

$$D\Delta z - s^2 z = -f(x) + \lim_{\varepsilon\to0}\int_{\Omega_1\setminus B_\varepsilon}\left(D\Delta_x w_0 - s^2 w_0\right)(x-\xi,s)\,f(\xi)\,d\xi. \tag{18}$$

However, since (DΔ_x w₀ − s²w₀)(x − ξ, s) = 0 for x ≠ ξ, the second term on the right hand side of (18) equals zero. Hence, the function z(x,s) indeed satisfies Eq. (15), as long as the function f ∈ C¹(Ω̄₁).






On the other hand, since f(x) = 0 for x ∈ Ω₁ ∖ Ω, we can rewrite (14) as

$$z(x,s) = \int_{\Omega} w_0(x-\xi,s)\,f(\xi)\,d\xi, \quad \forall s>0. \tag{19}$$

This means that in this integral the function w₀(x − ξ, s) does not have a singularity for x ∈ ∂Ω₁. Then it follows from (19) that

$$z(x,s)\in C^3(\partial\Omega_1) \quad\text{and}\quad \|z(x,s)\|_{C^{2+\alpha}(\partial\Omega_1)}\le C|f|_\alpha, \quad \forall s>0. \tag{20}$$

Hence, by the Schauder theorem, z(x,s) ∈ C^{2+α}(Ω̄₁), ∀s > 0, and by (20) the following estimate holds for all f ∈ C¹(Ω̄₁) with f(x) = 0 for x ∈ Ω₁ ∖ Ω:

$$\|z(x,s)\|_{C^{2+\alpha}(\overline{\Omega}_1)} \le C\|z(x,s)\|_{C^{2+\alpha}(\partial\Omega_1)} \le C|f|_\alpha. \tag{21}$$

Now we consider the case f ∈ C^α(Ω̄₁) with f(x) = 0 for x ∈ Ω₁ ∖ Ω. There exists a sequence of functions {f_n}_{n=1}^∞ ⊂ C¹(Ω̄₁) such that f_n(x) = 0 for x ∈ Ω₁ ∖ Ω and lim_{n→∞} |f_n − f|_α = 0. Let {z_n(x,s)}_{n=1}^∞ be the corresponding sequence of functions defined as

$$z_n(x,s) = \int_{\Omega_1} w_0(x-\xi,s)\,f_n(\xi)\,d\xi. \tag{22}$$

It follows from the above that z_n(x,s) ∈ C^{2+α}(Ω̄₁), ∀s > 0, and by (21)

$$\|z_n(x,s)\|_{C^{2+\alpha}(\overline{\Omega}_1)} \le C|f_n|_\alpha. \tag{23}$$

Hence, {z_n(x,s)}_{n=1}^∞ is a Cauchy sequence in the space C^{2+α}(Ω̄₁), ∀s > 0, and thus it converges in C^{2+α}(Ω̄₁) to a function z̃ ∈ C^{2+α}(Ω̄₁). Substituting f_n instead of f in (19), we conclude that the sequence {z_n(x,s)}_{n=1}^∞ converges to the function z(x,s) in the norm of the space C¹(Ω̄₁). Due to the uniqueness of the limit, we conclude that z̃(x,s) ≡ z(x,s). Hence, z(x,s) ∈ C^{2+α}(Ω̄₁), ∀s > 0. Also, it follows from (23) that

$$\|z(x,s)\|_{C^{2+\alpha}(\overline{\Omega}_1)} \le C|f|_\alpha \quad \forall f\in C^\alpha(\overline{\Omega}_1) \text{ with } f(x)=0 \text{ for } x\in\Omega_1\setminus\Omega.$$

Since for ξ ∈ Ω₁ the function w₀(x − ξ, s) tends to zero exponentially as |x| → ∞, together with its derivatives of any order, (19) yields z(x,s) ∈ C^{2+α}(ℝ³), ∀s > 0, and ‖z(x,s)‖_{C^{2+α}(ℝ³)} ≤ C|f|_α. □

Lemma 2. Assume that the function a(x) satisfies conditions (2) and (3). Then there exists a number $\underline{s}$ = $\underline{s}$(|a|_α) > 0 such that for s ≥ $\underline{s}$ the function w(x,s), the Laplace transform (6) of the solution u(x,t) of the problem (1), satisfies condition (8). Furthermore, for s ≥ $\underline{s}$ there exists a solution of the problem (7), (8) which can be represented in the form w(x,s) = w₀(x,s) + w̄(x,s), where the function w̄(x,s) ∈ C^{2+α}(ℝ³) satisfies conditions (11), (12).

Proof. We have from (1) and (4)

$$u(x,t) = u_0(x,t) - \int_0^t\!\!\int_{\Omega} u_0(x+x_0-\xi,\,t-\tau)\,a(\xi)\,u(\xi,\tau)\,d\xi\,d\tau.$$

Using this equation, (10), the detailed estimates of the fundamental solution of the general parabolic equation in Chapter 4 of [11], as well as the convolution theorem for the Laplace transform, we conclude that there exists a number $\underline{s}$ = $\underline{s}$(|a|_α) > 0 such that the Laplace transform (6) can be applied to the function u(x,t) for all s ≥ $\underline{s}$. Hence,

$$w(x,s) = w_0(x,s) - \int_{\Omega} w_0(x+x_0-\xi,\,s)\,a(\xi)\,w(\xi,s)\,d\xi, \quad \forall s\ge\underline{s}.$$

Hence, using (10) again, we obtain (8). The rest of the proof follows from Lemma 1. □



Now we can prove Theorem 1.

Theorem 1. Assume that the function a(x) satisfies conditions (2) and (3). Let $\underline{s}$ = $\underline{s}$(|a|_α) > 0 be the number of Lemma 2. Then for all s ≥ $\underline{s}$ there exists a unique solution of problem (7), (8) of the form w(x,s) = w₀(x,s) + w̄(x,s), where the function w̄ ∈ C^{2+α}(ℝ³) is the solution of the problem (11), (12). Furthermore,

$$0 < w_b(x,s) < w(x,s) \le w_0(x,s). \tag{24}$$

Proof. Existence follows from Lemma 2. We now prove uniqueness; it is sufficient to prove uniqueness for the problem (11), (12). Suppose that there exist two functions w̄₁(x,s), w̄₂(x,s) satisfying (11), (12). Denote w̃(x,s) = w̄₁(x,s) − w̄₂(x,s). Then

$$D\Delta\widetilde{w} - (a(x)+s^2)\widetilde{w} = 0, \quad x\in\mathbb{R}^3,\ \forall s\ge\underline{s}, \tag{25}$$
$$\lim_{|x|\to\infty}\widetilde{w}(x,s) = 0, \quad \forall s\ge\underline{s}. \tag{26}$$


For an arbitrary R > 0 let B_R := {x ∈ ℝ³ : |x| ≤ R}. Then, by the maximum principle and (25), max_{B_R} |w̃(x,s)| ≤ max_{|x|=R} |w̃(x,s)|. However, by (26), lim_{R→∞} max_{|x|=R} |w̃(x,s)| = 0. Hence, w̃(x,s) ≡ 0, and uniqueness is proved.

We now prove the right inequality in (24). Consider again the problem (11), (12). The right hand side of Eq. (11) satisfies a(x)w₀ ≥ 0. Suppose that there exist a point y ∈ B_R and a number s₀ ≥ $\underline{s}$ such that w̄(y, s₀) = c > 0. Then the function w̄(x, s₀) must attain a positive maximum m*(R) ≥ c in B̄_R, and, by the maximum principle, m*(R) = max_{|x|=R} w̄(x, s₀). On the other hand, by (12), lim_{R→∞} m*(R) = 0. Hence, w̄(x,s) ≤ 0, ∀x ∈ ℝ³, ∀s ≥ $\underline{s}$. This proves the right inequality in (24): w(x,s) ≤ w₀(x,s).

It is obvious from (10) that w_b(x,s) > 0. We now prove the left inequality in (24): w_b(x,s) < w(x,s). Let p(x,s) = w(x,s) − w_b(x,s). Then

$$D\Delta p - (a(x)+s^2)p = [a(x)-b]\,w_b, \quad \forall s\ge\underline{s}, \qquad \lim_{|x|\to\infty} p(x,s) = 0. \tag{27}$$

Due to the inequality a(x) − b ≤ 0, (27) implies DΔp − (a(x)+s²)p ≤ 0. For a sufficiently small ε > 0, let B_R(ε) = B_R ∩ {|x − x₀| ≥ ε}. Consider a number s₀ ≥ $\underline{s}$. Since the function w̄(x, s₀) = w(x, s₀) − w₀(x, s₀) belongs to C^{2+α}(B̄_R(ε)), then by (10)

$$w(x,s_0) = \frac{\exp\left(-s_0|x-x_0|/\sqrt{D}\right)}{4\pi D|x-x_0|}\left(1+O(|x-x_0|)\right), \quad |x-x_0|\to0,$$
$$w_b(x,s_0) = \frac{1}{4\pi D|x-x_0|}\exp\left(-\frac{\sqrt{s_0^2+b}\,|x-x_0|}{\sqrt{D}}\right).$$

Hence, for a sufficiently small ε > 0 we have

$$w(x,s_0) - w_b(x,s_0) = p(x,s_0) > 0 \quad \text{for } |x-x_0| = \varepsilon. \tag{28}$$

Suppose now that there exist a point y ∈ B_R(ε) and a number s₀ ≥ $\underline{s}$ such that p(y, s₀) = −c < 0. Then the function p(x, s₀) must attain a negative minimum m_*(R) ≤ −c in B̄_R(ε). It follows from the maximum principle and (28) that this minimum is achieved on the surface {|x| = R}, i.e. m_*(R) = min_{|x|=R} p(x, s₀). On the other hand, by (12), lim_{R→∞} m_*(R) = 0. Hence, the function p(x,s) is non-negative. In addition, (28) implies the stronger inequality p(x,s) > 0, ∀x ∈ ℝ³, ∀s ≥ $\underline{s}$. This proves the left inequality in (24). □

Now, using the positivity of the solution w(x,s), we define the function v := ln w in Ω for s ≥ $\underline{s}$. Since x₀ ∉ Ω̄, it follows from (7) that

$$D\Delta v + D|\nabla v|^2 - s^2 = a(x) \quad \forall x\in\Omega, \qquad v(x,s) = \ln\psi(x,s) \quad \forall x\in\partial\Omega,\ \forall s\ge\underline{s}, \tag{29}$$

where

$$\psi(x,s) = \int_0^\infty g(x,t)\exp(-s^2 t)\,dt, \quad \forall s\ge\underline{s} = \text{const.} > 0, \tag{30}$$

where the function g was defined in (5). Also, the following asymptotic behavior of w(x,s) holds [18]:

$$D_x^\beta D_s^k w(x,s) = D_x^\beta D_s^k w_0(x,s)\left(1+O\left(\frac{1}{s}\right)\right), \quad \text{as } s\to\infty, \tag{31}$$

where |β| ≤ 2, k = 0, 1. Consider the function

$$\overline{v}(x,s) := \ln w(x,s) - \ln w_0(x,s) = v(x,s) - v_0(x,s), \tag{32}$$

where v₀ := ln w₀. The following asymptotic behavior is valid for x ∈ Ω̄, |β| ≤ 2, k = 0, 1 [18]:

$$D_x^\beta D_s^k\,\overline{v}(x,s) = O\left(\frac{1}{s^{k+1}}\right), \quad s\to\infty.$$

Denote q(x,s) := ∂_s v̄(x,s). Then

$$D_x^\beta q(x,s) = O\left(\frac{1}{s^2}\right), \quad s\to\infty, \qquad \overline{v}(x,s) = -\int_s^\infty q(x,\tau)\,d\tau. \tag{33}$$
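The logarithmic substitution v = ln w behind (29) can be sanity-checked on the constant-coefficient image (10), for which DΔv + D|∇v|² − s² must return the constant b away from x₀ (a sketch; D, b, s and the sample point are arbitrary choices):

```python
import math

D, b, s = 1.0, 0.5, 2.0      # arbitrary sample values; x0 = 0

def v(x):
    # v = ln(w_b) with w_b from (10)
    r = math.sqrt(sum(xi * xi for xi in x))
    return math.log(math.exp(-r * math.sqrt(s * s + b) / math.sqrt(D))
                    / (4 * math.pi * D * r))

x, d = (0.6, 0.3, 0.2), 1e-4
grad2 = lap = 0.0
for k in range(3):
    xp = tuple(xi + d * (i == k) for i, xi in enumerate(x))
    xm = tuple(xi - d * (i == k) for i, xi in enumerate(x))
    grad2 += ((v(xp) - v(xm)) / (2 * d)) ** 2     # |grad v|^2 by central differences
    lap += (v(xp) - 2 * v(x) + v(xm)) / d ** 2    # laplacian by second differences

res = D * lap + D * grad2 - s * s
print(res)  # recovers the absorption coefficient b = 0.5
```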


Hence, we may rewrite the function v̄(x,s) as

$$\overline{v}(x,s) = -\int_s^{\overline{s}} q(x,\tau)\,d\tau + V(x,\overline{s}), \tag{34}$$

where $\overline{s}$ > $\underline{s}$ is a large number. We call the function V(x,$\overline{s}$) the tail function:

$$V(x,\overline{s}) = \overline{v}(x,\overline{s}) = \ln w(x,\overline{s}) - \ln w_0(x,\overline{s}). \tag{35}$$

Now, differentiating the equation for v in (29) with respect to s and using (33)–(35), we obtain the following nonlinear integral differential equation for the function q:

$$\Delta q - 2(\nabla q + \nabla q_0)\int_s^{\overline{s}}\nabla q(x,\tau)\,d\tau + 2\left(\nabla q\,\nabla V + \nabla q\,\nabla v_0 + \nabla q_0\,\nabla V\right) = 0, \tag{36}$$

$$q|_{\partial\Omega} = \varphi(x,s) = \frac{\psi_s(x,s)}{\psi(x,s)} - \frac{\psi_{0,s}(x,s)}{\psi_0(x,s)} \quad \text{for } (x,s)\in\partial\Omega\times[\underline{s},\overline{s}]. \tag{37}$$

Here we denote by ψ₀ the Laplace transform of g₀(x,t) := u₀(x,t)|_{∂Ω×(0,∞)}, and q₀(x,s) := ∂_s v₀(x,s).

Suppose that we can approximate the functions q and V in Ω̄ together with their derivatives D_x^β q, D_x^β V, |β| ≤ 2. Then the equation in (29) allows us to obtain an approximation for the unknown coefficient a(x). We can approximate both functions q and V, because we find them differently: while we approximate the function q iteratively, by solving Dirichlet boundary value problems for elliptic equations inside the domain Ω, we approximate the function V iteratively by solving (7), (8) with the updated function a(x) at each iteration and then using (35).

4. The layer stripping with respect to s

In this section we consider the layer stripping procedure in order to approximately solve the integral–differential equation (36). Consider a partition of the interval [$\underline{s}$, $\overline{s}$] with a sufficiently small grid step size h:

$$\underline{s} = s_N < s_{N-1} < \cdots < s_1 < s_0 = \overline{s}, \qquad s_{i-1}-s_i = h. \tag{38}$$

We assume that q(x,s) is a piecewise constant function with respect to s ∈ [$\underline{s}$, $\overline{s}$], i.e., q(x,s) = q_n(x) for s ∈ [s_n, s_{n−1}). We set q₀ ≡ 0. Hence

$$\int_s^{\overline{s}}\nabla q(x,\tau)\,d\tau = (s_{n-1}-s)\nabla q_n(x) + h\sum_{j=1}^{n-1}\nabla q_j(x), \quad s\in[s_n,s_{n-1}). \tag{39}$$
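The identity (39) is elementary but easy to get wrong by one index; here is a brute-force check with a scalar stand-in for ∇q (the grid values and the sample point s are arbitrary choices):

```python
import math, random

random.seed(1)
sbar, h, N = 8.0, 0.5, 6
s_grid = [sbar - n * h for n in range(N + 1)]        # s_0 = sbar > s_1 > ... > s_N
q = [0.0] + [random.uniform(-1.0, 1.0) for _ in range(N)]   # q_0 = 0

def q_of(tau):
    # piecewise-constant q: value q_n on [s_n, s_{n-1})
    n = min(N, max(1, math.ceil((sbar - tau) / h)))
    return q[n]

n = 4
s = s_grid[n] + 0.2                                  # a point inside [s_4, s_3)
rhs = (s_grid[n - 1] - s) * q[n] + h * sum(q[j] for j in range(1, n))

m = 200_000                                          # midpoint quadrature of the step function
d = (sbar - s) / m
lhs = sum(q_of(s + (k + 0.5) * d) * d for k in range(m))
print(abs(lhs - rhs))  # small quadrature error only
```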

We approximate the boundary condition (37) as

$$q_n(x) = \varphi_n(x) := \frac{1}{h}\int_{s_n}^{s_{n-1}}\varphi(x,s)\,ds \quad \text{for } x\in\partial\Omega. \tag{40}$$

Assume for a moment that q_j(x), j = 0, 1, …, n − 1, and V are known for all previous subintervals. Then it follows from (36) that the function q_n(x) on [s_n, s_{n−1}), n ≥ 1, is the solution of the following Dirichlet boundary value problem:

$$L_n(q_n) := \Delta q_n - 2(\nabla q_n + \nabla q_0)\left(h\sum_{j=0}^{n-1}\nabla q_j(x) + (s_{n-1}-s)\nabla q_n(x)\right) + 2\left(\nabla q_n\nabla V + \nabla q_n\nabla v_0 + \nabla q_0\nabla V\right) = 0, \quad x\in\Omega,\ s\in[s_n,s_{n-1}),$$
$$q_n|_{\partial\Omega} = \varphi_n(x). \tag{41}$$

Eq. (41) is nonlinear and depends on the parameter s, whereas the function q_n(x) is independent of s. This inconsistency is a natural result of the approximation of the function q(x,s) by a piecewise constant function. On the other hand, this discrepancy helps us to mitigate the influence of the nonlinear term (∇q_n)² in (41) by using the following s-dependent Carleman weight function (CWF):

$$C_{n,\lambda}(s) = \exp\left[\lambda(s-s_{n-1})\right], \quad s\in[s_n,s_{n-1}).$$

Multiplying Eq. (41) by the CWF and integrating with respect to s over (s_n, s_{n−1}), we obtain

$$L_n(q_n) := \Delta q_n - \left(2h\sum_{j=0}^{n-1}\nabla q_j + A_n - 2\nabla V - E_n\right)\nabla q_n = \frac{J}{I}\,|\nabla q_n|^2 + B_n\left(h\sum_{j=0}^{n-1}\nabla q_j - \nabla V\right) \quad \text{for } x\in\Omega, \qquad q_n(x) = \varphi_n(x) \quad \text{for } x\in\partial\Omega. \tag{42}$$


Here the parameter λ ≫ 1 and

$$I = \int_{s_n}^{s_{n-1}} e^{\lambda(s-s_{n-1})}\,ds = \frac{1}{\lambda}\left(1-e^{-\lambda h}\right) \ge 0, \qquad
J = 2\int_{s_n}^{s_{n-1}}(s_{n-1}-s)\,e^{\lambda(s-s_{n-1})}\,ds = \frac{2}{\lambda^2}\left(1-e^{-\lambda h}(1+\lambda h)\right) \ge 0,$$

$$A_n = \frac{2}{I}\int_{s_n}^{s_{n-1}}(s_{n-1}-s)\,\nabla q_0(x,s)\,e^{\lambda(s-s_{n-1})}\,ds, \qquad
B_n = \frac{2}{I}\int_{s_n}^{s_{n-1}}\nabla q_0(x,s)\,e^{\lambda(s-s_{n-1})}\,ds, \qquad
E_n = \frac{2}{I}\int_{s_n}^{s_{n-1}}\nabla v_0(x,s)\,e^{\lambda(s-s_{n-1})}\,ds. \tag{43}$$

We observe that

$$0 \le \frac{J}{I} = \frac{2}{\lambda}\cdot\frac{1-(1+\lambda h)e^{-\lambda h}}{1-e^{-\lambda h}} < \frac{2}{\lambda} \quad \text{for } \lambda h\ge1. \tag{44}$$

Hence, for λ ≫ 1 the effect of the nonlinear term in (42) can be omitted; we then just solve the linear elliptic Dirichlet boundary value problem (42) without the term |∇q_n|², iteratively at each step n.

In accordance with the Tikhonov concept for ill-posed problems [19,20], we assume the existence of the "ideal" exact solution a*(x) of our inverse problem. The function a*(x) satisfies conditions (2) and (3) and corresponds to the noiseless data g*(x,t). All functions below which correspond to a*(x) carry the superscript "*". We have

$$\varphi^*(x,s) = \frac{\psi_s^*(x,s)}{\psi^*(x,s)} - \frac{\psi_{0,s}(x,s)}{\psi_0(x,s)} \quad \text{for } (x,s)\in\partial\Omega\times[\underline{s},\overline{s}], \tag{45}$$

where the function ψ*(x,s) is the Laplace transform of the function g*(x,t). Evidently q* ∈ C^{2+α}(Ω̄) × C¹[$\underline{s}$, $\overline{s}$]. We approximate the functions q*(x,s) and φ*(x,s) by piecewise constant functions with respect to s ∈ [$\underline{s}$, $\overline{s}$], namely by functions q*_n(x) and φ*_n(x). We set q*₀(x) ≡ 0. Then the estimate (2.111) given in [1] also holds for the function q*_n(x):

$$\max_{1\le n\le N}\left|q_n^*\right|_{2+\alpha} \le C^*, \tag{46}$$

where C* = C*(‖q*‖_{C^{2+α}(Ω̄)×C¹[$\underline{s}$,$\overline{s}$]}, $\overline{s}$) = const. > 0. We assume without loss of generality that C* ≥ 1. Clearly, an analogue of (41) for the exact solution q*_n(x) is:

$$\Delta q_n^* - \left(2h\sum_{j=0}^{n-1}\nabla q_j^* + A_n - 2\nabla V^* - E_n\right)\nabla q_n^* = \frac{J}{I}\,|\nabla q_n^*|^2 + B_n\left(h\sum_{j=0}^{n-1}\nabla q_j^* - \nabla V^*\right) + F_n(x,h,\lambda) \quad \text{for } x\in\Omega,\ s\in[s_n,s_{n-1}), \qquad q_n^*(x) = \varphi_n^*(x) \quad \text{for } x\in\partial\Omega, \tag{47}$$

where V* is the exact tail, the function F_n ∈ C^α(Ω̄) is the remainder term, and

$$\max_{\lambda h\ge1}\left|F_n(x,h,\lambda)\right|_\alpha \le C^* h. \tag{48}$$

5. Globally convergent algorithm

Based on the considerations of the previous section, we construct here an algorithm for finding the solution of our coefficient inverse problem. This algorithm reconstructs approximations a_{n,i}(x) of a(x) only inside the domain Ω. Next, we extend the function a_{n,i}(x) outside of Ω in an appropriate way. For this purpose we use a standard procedure, so that the resulting function ã_{n,i} is in C^α(ℝ³), ã_{n,i} ≥ 0 in Ω and ã_{n,i} = 0 outside of Ω. First, we choose a smaller sub-domain Ω′ ⊂ Ω and a function χ(x) ∈ C^α(ℝ³) such that

$$\chi(x) = \begin{cases} 1 & \text{in } \Omega', \\ \in[0,1] & \text{in } \Omega\setminus\Omega', \\ 0 & \text{outside of } \Omega. \end{cases}$$


We define the extension ã_{n,i} of a_{n,i} as

$$\widetilde{a}_{n,i}(x) = \chi(x)\,a_{n,i}(x), \quad \forall x\in\mathbb{R}^3. \tag{49}$$

Hence ã_{n,i} = 0 outside of the domain Ω and ã_{n,i} = a_{n,i} in Ω′. Using (29), (34) and (39), denote

$$v_{n,i}(x) = -h\,q_{n,i}(x) - h\sum_{j=0}^{n-1} q_j(x) + V_{n,i}(x), \quad x\in\Omega, \tag{50}$$

$$a_{n,i}(x) = D\Delta\left(v_{n,i}+v_0(x,s_n)\right) + D\left|\nabla\left(v_{n,i}+v_0(x,s_n)\right)\right|^2 - s_n^2, \quad x\in\Omega, \tag{51}$$

where the functions q_j, q_{n,i} and a certain approximation V_{n,i} of the tail function will be defined later. Here i = 1, …, m_n, where m_n is the number of inner iterations for each n (see below). Recall that we assume q₀ ≡ 0. Hence, we set

$$q_{1,1} := 0, \qquad q_{n,1} := q_{n-1}, \qquad V_{n,1}(x) := V_{n-1,m_{n-1}} \quad \text{for } n\ge2, \tag{52}$$

where V_{1,1}(x) is a starting value for the tail function. Formulas (50) and (51) can also be written for the exact coefficient a*(x):

$$v_n^*(x) = -h\,q_n^*(x) - h\sum_{j=0}^{n-1} q_j^*(x) + V^*(x), \quad x\in\Omega, \tag{53}$$

$$a^*(x) = D\Delta\left(v_n^*(x)+v_0(x,s_n)\right) + D\left|\nabla\left(v_n^*(x)+v_0(x,s_n)\right)\right|^2 - s_n^2 + \overline{F}_n(x), \quad x\in\Omega, \tag{54}$$

where

$$\left|\overline{F}_n\right|_\alpha \le C^* h. \tag{55}$$

5.1. The first tail function V_{1,1}(x)

Recall that the functions g(x,t) and g*(x,t) generate the boundary data φ(x,s) and φ*(x,s), respectively. Since g*(x,t) corresponds to the noiseless exact data, while g(x,t) corresponds to real data, and real data always contain noise, we assume that

$$\max_{s\in[\underline{s},\overline{s}]}\left\|\varphi(x,s)-\varphi^*(x,s)\right\|_{C^{2+\alpha}(\partial\Omega)} \le \sigma, \tag{56}$$

where σ > 0 is a small number characterizing the level of the error in the function φ(x,s) in (37). Hence, (40) and (56) imply that it is reasonable to assume that the error in the functions φ_n(x) in (40) satisfies

$$\left\|\varphi_n-\varphi_n^*\right\|_{C^{2+\alpha}(\partial\Omega)} \le C^*(h+\sigma), \tag{57}$$

where h > 0 is the grid step size in the partition (38) and C* > 0 is a constant.

Let V*(x,s) be the exact tail function, which corresponds to the exact coefficient a*(x) via (32) and (35). By (32), (33), (35) and Theorem 1, there exists a function p* ∈ C^{2+α}(Ω̄) such that

$$V^*(x,s) = \frac{p^*(x)}{s} + O\left(\frac{1}{s^2}\right), \quad s\to\infty.$$

Hence, taking into account (35), we assume that

$$V^*(x,\overline{s}) \approx \frac{p^*(x)}{\overline{s}} = \ln w^*(x,\overline{s}) - \ln w_0(x,\overline{s}). \tag{58}$$

Since q*(x,$\overline{s}$) = ∂_s V*(x,s)|_{s=$\overline{s}$}, by (58)

$$q^*(x,\overline{s}) = -\frac{p^*(x)}{\overline{s}^{\,2}}. \tag{59}$$

In Eq. (36) we have the terms ∇v₀, ∇q₀, where q₀ = ∂_s v₀. We now simplify them. By (10) with b = 0 we have v₀ = −s|x − x₀|/√D − ln(4πD|x − x₀|). Here x₀ ∉ Ω̄ and x ∈ Ω̄. Hence, taking s sufficiently large, we obtain

$$v_0(x,s) \approx -\frac{s|x-x_0|}{\sqrt{D}}, \qquad q_0(x,s) \approx -\frac{|x-x_0|}{\sqrt{D}}, \quad s\to\infty. \tag{60}$$

Substituting V*(x,s) = p*(x)/s and q*(x,s) = −p*(x)/s² in Eq. (36), setting there s := $\overline{s}$ and taking into account (60), we obtain

$$\Delta p^* = 0 \quad \text{in } \Omega, \qquad p^*(x) = -\overline{s}^{\,2}\varphi^*(x,\overline{s}) \quad \text{for } x\in\partial\Omega. \tag{61}$$


Hence, we consider the following approximation for the first tail function V_{1,1}(x):

$$V_{1,1}(x) = \frac{p(x)}{\overline{s}}, \tag{62}$$

where the function p(x) is the solution of the following Dirichlet boundary value problem:

$$\Delta p = 0 \quad \text{in } \Omega, \qquad p(x) = -\overline{s}^{\,2}\varphi(x,\overline{s}) \quad \text{for } x\in\partial\Omega. \tag{63}$$

By the Schauder theorem and estimate (56), we may estimate the difference of the solutions of problems (61) and (63) as

$$\left|p-p^*\right|_{2+\alpha} \le K\overline{s}^{\,2}\sigma. \tag{64}$$

Remark. Thus, the assumptions (58)–(60), (62) enable us to obtain a good approximation of the exact solution already on the first iteration of our method. The accuracy of this approximation depends only on the level of error σ in the boundary data. We point out that we use the assumptions (58)–(60), (62) only on the first iteration of our method, to get the first tail V_{1,1}(x); they are not used on follow-up iterations. The numerical experience of [1] and other publications on the Beilina–Klibanov method shows that we still need to iterate further to get a better approximation.

5.2. Description of the algorithm

1st step: We use the first tail function V_{1,1} defined in (62), (63). Recall that q₀ ≡ 0. We find the function q_{1,1} ∈ C^{2+α}(Ω̄) as the solution of the following elliptic Dirichlet boundary value problem:

 



$$\Delta q_{1,1} - \left(2h\nabla q_0 + A_1 - 2\nabla V_{1,1} - E_1\right)\nabla q_{1,1} = B_1\left(h\nabla q_0 - \nabla V_{1,1}\right) \quad \text{in } \Omega, \qquad q_{1,1}(x) = \varphi_1(x) \quad \text{for } x\in\partial\Omega. \tag{65}$$

Next, we construct $v_{1,1}$ and $a_{1,1}$ by formulas (50) and (51), respectively. Having the extension $\widehat a_{1,1}$ of $a_{1,1}$ given by (49), we solve the following Cauchy problem:

$$\partial_t u_{1,1}=D\Delta u_{1,1}-\widehat a_{1,1}(x)u_{1,1}\ \text{in}\ \mathbb R^3\times(0,\infty),\qquad u_{1,1}(x,0)=\delta(x-x_0)\ \text{for}\ x\in\mathbb R^3.$$

Next, using (6), we construct $w_{1,1}=Lu_{1,1}$ and then set $V_{1,2}=\ln w_{1,1}(x,\bar s)-\bar v_0(x,\bar s)$.

2nd step: Now, using $V_{1,2}$, we obtain $q_{1,2}$ by solving the following Dirichlet boundary value problem:

$$\Delta q_{1,2}-\big(2h\nabla q_0+A_1-2\nabla V_{1,2}-E_1\big)\cdot\nabla q_{1,2}=B_1\big(h\nabla q_0-\nabla V_{1,2}\big)\ \text{in}\ \Omega,\qquad q_{1,2}(x)=\varphi_1(x)\ \text{for}\ x\in\partial\Omega.$$

Then, as at the beginning of this step, we evaluate $v_{1,2}$, $a_{1,2}$, $\widehat a_{1,2}$, $w_{1,2}$ and $V_{1,3}$, respectively. This process continues until the stopping criterion of the 5th step is satisfied for $a_{1,m_1}$. Then we set $q_1:=q_{1,m_1}$, $a_1:=a_{1,m_1}$ and $V_{2,1}:=\ln w_{1,m_1}(x,\bar s)-\bar v_0(x,\bar s)$.

3rd step: Suppose that the functions $q_1,q_2,\dots,q_{n-1}\in C^{2+\alpha}(\overline\Omega)$, $a_{n-1}\in C^{\alpha}(\overline\Omega)$ and the tail $V_{n,1}(x,\bar s)\in C^{2+\alpha}(\overline\Omega)$ are known. We now construct $q_{n,1}\in C^{2+\alpha}(\overline\Omega)$ by solving the following Dirichlet boundary value problem:

$$\Delta q_{n,1}-\Big(2h\sum_{j=0}^{n-1}\nabla q_j+A_n-2\nabla V_{n,1}-E_n\Big)\cdot\nabla q_{n,1}=B_n\Big(h\sum_{j=0}^{n-1}\nabla q_j-\nabla V_{n,1}\Big)\ \text{in}\ \Omega,\qquad q_{n,1}(x)=\varphi_n(x)\ \text{for}\ x\in\partial\Omega.\qquad(66)$$

4th step: Having the function $q_{n,i}\in C^{2+\alpha}(\overline\Omega)$, we construct the next approximation $a_{n,i}\in C^{\alpha}(\overline\Omega)$ of the target coefficient using (50) and (51), and its extension $\widehat a_{n,i}$ by (49). Then we update the tail as $V_{n,i+1}(x,\bar s)=\ln w_{n,i}-\bar v_0(x,\bar s)$, where $w_{n,i}=Lu_{n,i}$ and $u_{n,i}$ satisfies

$$\partial_t u_{n,i}=D\Delta u_{n,i}-\widehat a_{n,i}(x)u_{n,i}\ \text{in}\ \mathbb R^3\times(0,\infty),\qquad u_{n,i}(x,0)=\delta(x-x_0)\ \text{for}\ x\in\mathbb R^3.$$
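In one spatial dimension, the forward solve of this step and the transform $w=Lu$ can be sketched as follows. This is a hypothetical Python discretization: the mollified point source, the interval $(0,1)$, the time truncation $T$, and the absorbing ends are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def forward_and_transform(a, s_bar, D=1.0, nx=101, T=0.5, nt=20000):
    """Solve u_t = D u_xx - a(x) u on (0, 1) with a mollified point source at
    x0 = 0.5, then accumulate the (truncated) transform
    w(x, s_bar) = int_0^T u(x, t) exp(-s_bar**2 t) dt."""
    x = np.linspace(0.0, 1.0, nx)
    dx = x[1] - x[0]
    dt = T / nt  # explicit scheme; stable here since D*dt/dx**2 = 0.25 <= 0.5
    u = np.exp(-((x - 0.5) / (5.0 * dx)) ** 2)
    u /= u.sum() * dx                          # approximate delta(x - x0)
    w = np.zeros_like(x)
    t = 0.0
    for _ in range(nt):
        w += u * np.exp(-s_bar ** 2 * t) * dt  # rectangle rule for the transform
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx ** 2
        u = u + dt * (D * lap - a(x) * u)
        u[0] = u[-1] = 0.0                     # crude stand-in for decay at infinity
        t += dt
    return x, w
```

Under these toy assumptions, the tail update would then read $V=\ln w-\bar v_0$ at $s=\bar s$.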


5th step: We construct $q_{n,i+1}\in C^{2+\alpha}(\overline\Omega)$ by solving the following Dirichlet boundary value problem:

$$\Delta q_{n,i+1}-\Big(2h\sum_{j=0}^{n-1}\nabla q_j+A_n-2\nabla V_{n,i+1}-E_n\Big)\cdot\nabla q_{n,i+1}=B_n\Big(h\sum_{j=0}^{n-1}\nabla q_j-\nabla V_{n,i+1}\Big)\ \text{in}\ \Omega,\qquad q_{n,i+1}(x)=\varphi_n(x)\ \text{for}\ x\in\partial\Omega.$$

Then we evaluate $a_{n,i+1}$ as in the 4th step. We iterate with respect to $i$ until convergence occurs at the step $i:=m_n$.

6th step: Set $a_n:=a_{n,m_n}\approx a|_{\Omega}$, $q_n:=q_{n,m_n}$ and $V_{n+1,1}(x,\bar s)=\ln w_{n,m_n}-\bar v_0(x,\bar s)$. Next, solve the following Dirichlet boundary value problem:

$$\Delta q_{n+1,1}-\Big(2h\sum_{j=0}^{n}\nabla q_j+A_{n+1}-2\nabla V_{n+1,1}-E_{n+1}\Big)\cdot\nabla q_{n+1,1}=B_{n+1}\Big(h\sum_{j=0}^{n}\nabla q_j-\nabla V_{n+1,1}\Big)\ \text{in}\ \Omega,\qquad q_{n+1,1}(x)=\varphi_{n+1}(x)\ \text{for}\ x\in\partial\Omega.$$

Similarly to the first two steps, we construct the next approximation $a_{n+1,m_{n+1}}$ of the target coefficient. Stopping criteria with respect to $i$ and $n$ are determined numerically. It was demonstrated in [1–6,8,9] that such numerical criteria can always be chosen.

6. Estimates for the tail function

Choose a number $\bar b>b$. We introduce the set of functions $P(b,\bar b)$,

$$P(b,\bar b)=\big\{a\in C^{\alpha}(\overline\Omega):\ |a|_{\alpha}\le\bar b,\ a(x)\in[0,b]\big\},$$

and assume that the exact solution $a^*$ of the considered CIP, which satisfies conditions (2) and (3), belongs to this set: $a^*\in P(b,\bar b)$. For a function $a\in P(b+1,\bar b+1)$ with $a(x)=0$ for $x\in\mathbb R^3\setminus\Omega$ and for $s>0$, we consider the solution $w_a(x,s)$ of the following problem:

$$D\Delta w_a-\big(a(x)+s^{2}\big)w_a=-\delta(x-x_0),\quad x\in\mathbb R^3,\qquad \lim_{|x|\to\infty}w_a(x,s)=0,\qquad(67)$$

such that $w_a(x,s)=w_0(x,s)+\overline w_a(x,s)$, where $\overline w_a\in C^{2+\alpha}(\mathbb R^3)$. In Theorem 2 we estimate tails in non-Hölder norms.

Theorem 2. Let $\Omega$ be a bounded domain, $x_0\notin\overline\Omega$ and $\bar s>1$. Let $V^*$ be the exact tail function corresponding to the exact coefficient $a^*\in P(b,\bar b)$, i.e. $V^*(x)=\ln w^*(x,\bar s)-\bar v_0(x,\bar s)$, where $w^*$ is the solution of (7) with the coefficient $a^*$. For a function $a\in P(b+1,\bar b+1)$, consider the solution $w_a$ of problem (67) and let $V_a$ be the corresponding tail function, $V_a(x)=\ln w_a(x,\bar s)-\bar v_0(x,\bar s)$. Then there exists a constant $B=B(\Omega,\bar s,b,\bar b,x_0)>2$, depending only on the listed parameters, such that for all functions $a\in P(b+1,\bar b+1)$ the following inequalities hold:

$$\|\nabla V_a\|_{C(\overline\Omega)},\ \|\nabla V^*\|_{C(\overline\Omega)}\le B,\qquad(68)$$

$$\|\nabla V_a-\nabla V^*\|_{L_2(\Omega)}+\|\Delta V_a-\Delta V^*\|_{L_2(\Omega)}\le B\|a-a^*\|_{L_2(\Omega)}.\qquad(69)$$

Proof. In the proofs of this and the following theorems it is convenient to denote $x=(x,y,z)$. Below in this paper $B>2$ denotes different constants depending on the parameters indicated in the formulation of this theorem. For brevity we estimate only $\|V_x\|_{C(\overline\Omega)}$; estimates of the two other derivatives are similar. We also use the notations $w$ and $V$ instead of $w_a$ and $V_a$, respectively. By the definitions of $V$ and $V^*$, for $x\in\overline\Omega$

$$|V_x(x)|=\left|\frac{w_x(x,\bar s)}{w(x,\bar s)}-\bar v_{0x}(x,\bar s)\right|;\qquad |V_x^*(x)|=\left|\frac{w_x^*(x,\bar s)}{w^*(x,\bar s)}-\bar v_{0x}(x,\bar s)\right|.\qquad(70)$$


Here

$$\bar v_0(x,s)=\ln\big(w_0(x,s)\big)=-\frac{s|x-x_0|}{\sqrt D}-\ln\big(4\pi D|x-x_0|\big).$$

Since $x_0\notin\overline\Omega$, $\bar v_0$ and $\bar v_{0x}$ have no singularity in $\overline\Omega$ and

$$|\bar v_{0x}(x,\bar s)|=\left|\frac{\bar s\,(x-x_0)}{\sqrt D\,|x-x_0|}+\frac{x-x_0}{|x-x_0|^{2}}\right|\le B.$$

In addition, since by (24) $w_0\ge w^*$ and $w\ge w_{b+1}>0$, we have for $x\in\overline\Omega$

$$|w^*(x,\bar s)|\le B,\quad |w^*(x,\bar s)|^{-1}\le B,\quad |w(x,\bar s)|\le B,\quad |w(x,\bar s)|^{-1}\le B.$$

Hence, we obtain the following estimates from (70):

$$|V_x(x)|\le B|w_x(x,\bar s)|+B\quad\text{and}\quad |V_x^*(x)|\le B|w_x^*(x,\bar s)|+B.\qquad(71)$$

On the other hand, since

$$w(x,\bar s)=w_0(x,\bar s)-\bar s^{2}\int_{\Omega}w_0(x-\xi,\bar s)\,a(\xi)\,w(\xi,\bar s)\,d\xi,\qquad(72)$$

then

$$w_x(x,\bar s)=w_{0x}(x,\bar s)-\bar s^{2}\int_{\Omega}w_{0x}(x-\xi,\bar s)\,a(\xi)\,w(\xi,\bar s)\,d\xi.\qquad(73)$$

By (10),

$$w_{0x}(x,\bar s)=-\frac{x-x_0}{4\pi D|x-x_0|^{2}}\left(\frac{\bar s}{\sqrt D}+\frac{1}{|x-x_0|}\right)\exp\left(-\frac{\bar s\,|x-x_0|}{\sqrt D}\right).$$

Since $x_0\notin\overline\Omega$, the functions $w_0(x,\bar s)$ and $w_{0x}(x,\bar s)$ have no singularity for $x\in\overline\Omega$. Hence, (73) implies that

$$|w_x(x,\bar s)|\le B+B\int_{\Omega}\exp\left(-\frac{|x-x_0-\xi|\,\bar s}{\sqrt D}\right)\left(\frac{1}{|x-x_0-\xi|}+\frac{1}{|x-x_0-\xi|^{2}}\right)d\xi\le B.\qquad(74)$$

Since the function $a^*\in P(b+1,\bar b+1)$, the same estimate is valid for $w_x^*$. Hence (68) follows from (71) and (74). In order to prove (69), we define $\widetilde w:=w-w^*$. Then for $x\in\overline\Omega$

$$(V-V^*)_x(x,\bar s)=\frac{w_x(x,\bar s)}{w(x,\bar s)}-\frac{w_x^*(x,\bar s)}{w^*(x,\bar s)}=\frac{\widetilde w_x(x,\bar s)}{w(x,\bar s)}-w_x^*(x,\bar s)\left(\frac{1}{w^*(x,\bar s)}-\frac{1}{w(x,\bar s)}\right)=\left(\frac{\widetilde w_x}{w}-\frac{w_x^*}{w\,w^*}\,\widetilde w\right)(x,\bar s).\qquad(75)$$

Using $w_0\ge w^*$, $w\ge w_{b+1}>0$ and (74) in (75), we obtain

$$\|\nabla V-\nabla V^*\|_{L_2(\Omega)}\le B\big(\|\nabla\widetilde w\|_{L_2(\Omega)}+\|\widetilde w\|_{L_2(\Omega)}\big).\qquad(76)$$

Let $\widetilde a(x)=a(x)-a^*(x)$. Then $a(x)w(x,\bar s)-a^*(x)w^*(x,\bar s)=a(x)\widetilde w(x,\bar s)+\widetilde a(x)w^*(x,\bar s)$ for $x\in\overline\Omega$. Hence,

$$D\Delta\widetilde w-\big(\bar s^{2}\widetilde w+a(x)\widetilde w\big)=\widetilde a(x)w^*\quad\text{for}\ x\in\mathbb R^3,\qquad \lim_{|x|\to\infty}\widetilde w(x,\bar s)=0.\qquad(77)$$

It is clear from (72) that both functions $\widetilde w$, $w^*$ decay exponentially, together with their derivatives, as $|x|\to\infty$. Multiplying both sides of (77) by $(-\widetilde w)$ and integrating over $B_R:=\{x\in\mathbb R^3:|x|<R\}$, we obtain

$$\int_{B_R}\widetilde a(x)\,w^*\,(-\widetilde w)\,dx=\int_{B_R}\big(D\Delta\widetilde w-(\bar s^{2}+a(x))\widetilde w\big)(-\widetilde w)\,dx=-D\int_{\partial B_R}\widetilde w\,\frac{d\widetilde w}{dn}(x,\bar s)\,dS+\int_{B_R}\Big(D|\nabla\widetilde w(x,\bar s)|^{2}+(\bar s^{2}+a(x))\,\widetilde w^{2}(x,\bar s)\Big)dx.$$


Letting $R\to\infty$ and using $\widetilde a=0$ in $\mathbb R^3\setminus\Omega$, we obtain

$$-\int_{\Omega}\widetilde a(x)\big(w^*\widetilde w\big)(x,\bar s)\,dx=\int_{\mathbb R^3}\Big(D|\nabla\widetilde w(x,\bar s)|^{2}+(\bar s^{2}+a(x))\,\widetilde w^{2}(x,\bar s)\Big)dx.\qquad(78)$$

Using the Cauchy–Schwarz inequality as well as $a(x)\in[0,b+1]$ and $w^*\le w_0$, we obtain from the last identity

$$\|\widetilde w\|_{L_2(\mathbb R^3)}+\|\nabla\widetilde w\|_{L_2(\mathbb R^3)}\le B\|\widetilde a\|_{L_2(\Omega)}.\qquad(79)$$

Hence, using (76), we obtain

$$\|\nabla V-\nabla V^*\|_{L_2(\Omega)}\le B\|\widetilde a\|_{L_2(\Omega)}.$$

Finally, we estimate $\|\Delta V-\Delta V^*\|_{L_2(\Omega)}$:

$$(V_{xx}-V_{xx}^*)(x,\bar s)=\left(\frac{w_x}{w}-\frac{w_x^*}{w^*}\right)_x(x,\bar s)=\left(\frac{\widetilde w_x}{w}-\frac{w_x^*}{w\,w^*}\,\widetilde w\right)_x(x,\bar s)=\frac{\widetilde w_{xx}\,w-\widetilde w_x\,w_x}{w^{2}}(x,\bar s)-\frac{\big(w_{xx}^*\widetilde w+w_x^*\widetilde w_x\big)\,w\,w^*-\big(w_x\,w^*+w\,w_x^*\big)\,\widetilde w\,w_x^*}{w^{2}\,{w^*}^{2}}(x,\bar s).$$

Since similar expressions hold for the $y$- and $z$-derivatives,

$$(\Delta V-\Delta V^*)(x,\bar s)=\left(\frac{\Delta\widetilde w}{w}-\frac{\nabla w\cdot\nabla\widetilde w}{w^{2}}-\frac{\nabla w^*\cdot\nabla\widetilde w}{w\,w^*}-\frac{\Delta w^*\,\widetilde w}{w\,w^*}+\frac{|\nabla w^*|^{2}\,\widetilde w}{w\,{w^*}^{2}}+\frac{\nabla w\cdot\nabla w^*\,\widetilde w}{w^{2}\,w^*}\right)(x,\bar s).\qquad(80)$$

Since

$$D\Delta w=(a(x)+\bar s^{2})w,\qquad D\Delta w^*=(a^*(x)+\bar s^{2})w^*,\qquad D\Delta\widetilde w=(\bar s^{2}+a(x))\widetilde w+\widetilde a(x)w^*\quad\text{in}\ \overline\Omega,$$

$$0<w_{b+1}\le w,\,w^*\qquad\text{and}\qquad |\nabla w(x,\bar s)|,\,|\nabla w^*(x,\bar s)|\le B\quad\text{in}\ \overline\Omega,$$

then (80) implies that

$$|(\Delta V-\Delta V^*)(x,\bar s)|\le B\big(|\nabla\widetilde w(x,\bar s)|+|\widetilde w(x,\bar s)|\big)\quad\text{in}\ \overline\Omega.$$

Hence, combining (79) with (80), we obtain the desired result (69). $\square$



In order to obtain estimates for the tail function in the Hölder norm, we need the following lemma.

Lemma 3. Let $\Omega,\Omega_1\subset\mathbb R^3$ be two domains satisfying the conditions of Lemma 1. For any $a\in P(b+1,\bar b+1)$, suppose that $w_a$ is the solution of problem (7), (8) with $a(x):=\widehat a(x)=\chi(x)a(x)$. Then $w_a(x,\bar s)\in C^{3}(\partial\Omega_1)$ and there exists a constant $B=B(\Omega,\Omega_1,\bar s,b,\bar b,\chi,x_0,D)\ge 2$ such that

$$\|w_a(x,\bar s)\|_{C^{3}(\partial\Omega_1)}\le B.\qquad(81)$$

Furthermore, if we denote $\widetilde w(x)=w_{a_1}(x,\bar s)-w_{a_2}(x,\bar s)$ for any two functions $a_1,a_2\in P(b+1,\bar b+1)$, then

$$\|\widetilde w(x,\bar s)\|_{C^{3}(\partial\Omega_1)}\le B\,|a_1-a_2|_{\alpha}.\qquad(82)$$

Proof. By Lemma 2,

$$w(x,\bar s)=w_0(x,\bar s)-\int_{\Omega}w_0(x+x_0-\xi,\bar s)\,a(\xi)\,w_a(\xi,\bar s)\,d\xi.\qquad(83)$$

Since the integrand above has no singularity for $x\in\overline\Omega_1\setminus\Omega$, we have $w_a(x,\bar s)\in C^{3}(\partial\Omega_1)$. Estimate (81) then follows from (83) and $0<w_{b+1}(x,\bar s)\le w_a(x,\bar s)\le w_0(x,\bar s)$. The definition of $\widehat a$ and the notation $\widetilde a(x)=a_1(x)-a_2(x)$ imply $\widehat a_1(x)-\widehat a_2(x)=\chi(x)\widetilde a(x)$. Now, using formula (83) for $w_{a_1}$ and $w_{a_2}$ and denoting $\widetilde w(x)=w_{a_1}(x,\bar s)-w_{a_2}(x,\bar s)$, we obtain

$$\widetilde w(x,\bar s)=-\int_{\Omega}w_0(x+x_0-\xi,\bar s)\,\chi(\xi)\widetilde a(\xi)\,w_{a_1}(\xi,\bar s)\,d\xi-\int_{\Omega}w_0(x+x_0-\xi,\bar s)\,\chi(\xi)a_2(\xi)\,\widetilde w(\xi,\bar s)\,d\xi.\qquad(84)$$

Let $I_1$ and $I_2$ denote the first and second integrals in (84), respectively. Using the fact $w_{a_1}\in C^{2+\alpha}(\overline\Omega)$ and the assertion of this lemma, we have

$$\|I_1\|_{C^{3}(\partial\Omega_1)}\le B\|\widetilde a\|_{L_2(\Omega)}\le B\,|\widetilde a|_{\alpha}.\qquad(85)$$


Also, using (79), we have

$$\|\widetilde w\|_{L_2(\mathbb R^3)}\le B\|\widetilde a\|_{L_2(\Omega)}\le B\,|\widetilde a|_{\alpha}.\qquad(86)$$

This estimate implies $\|I_2\|_{C^{3}(\partial\Omega_1)}\le B\,|\widetilde a|_{\alpha}$. Hence, combining (85) with (86), we obtain (82). $\square$



Now we derive estimates for tails in Hölder norms.

Theorem 3. Suppose that the conditions of Theorem 2 hold. Then there exists a constant $B=B(\Omega,\bar s,b,\bar b,x_0)>2$, depending only on the listed parameters, such that for all functions $a\in P(b+1,\bar b+1)$ the following inequalities hold:

$$|\nabla V_a|_{1+\alpha},\ |\nabla V^*|_{1+\alpha}\le B,\qquad(87)$$

$$|\nabla V_a-\nabla V^*|_{1+\alpha}+|\Delta V_a-\Delta V^*|_{\alpha}\le B\,|a-a^*|_{\alpha}.\qquad(88)$$

Proof. Again, for brevity we estimate only $|V_x|_{1+\alpha}$; estimates of the two other derivatives are similar. We also use the notations $w$ and $V$ instead of $w_a$ and $V_a$, respectively. It follows from (71), (75) and (80) that, in order to prove Theorem 3, we need to show that

$$|w^*|_{2+\alpha},\ |w|_{2+\alpha}\le B\qquad\text{and}\qquad |\widetilde w|_{2+\alpha}\le B\,|\widetilde a|_{\alpha},$$

where $\widetilde a(x)=a(x)-a^*(x)$ and $\widetilde w(x)=w(x)-w^*(x)$. Let $w(x,\bar s)|_{\partial\Omega_1}=f(x)$. Then

$$D\Delta w-(\bar s^{2}+a(x))w=0\quad\text{for}\ x\in\Omega_1,\qquad w|_{\partial\Omega_1}(x,\bar s)=f(x).$$

Since the integrand in formula (72) has no singularity for $x\in\overline\Omega_1\setminus\Omega$, then $w|_{\partial\Omega_1}(x,\bar s)\in C^{3}(\partial\Omega_1)$ and $\|w(\cdot,\bar s)\|_{C^{3}(\partial\Omega_1)}\le B$. By Theorem 1, $w(x,\bar s)\in C^{2+\alpha}(\overline\Omega_1)$. Thus, by the Schauder theorem,

$$|w|_{2+\alpha}\le\|w\|_{C^{2+\alpha}(\overline\Omega_1)}\le B\|f\|_{C^{2+\alpha}(\partial\Omega_1)}\le B\|w\|_{C^{3}(\partial\Omega_1)}\le B.$$

Similarly $|w^*|_{2+\alpha}\le B$. Hence, we have proved (87). Now we prove that $|\widetilde w|_{2+\alpha}\le B\,|\widetilde a|_{\alpha}$. By the definition of $\widehat a=\chi(x)a(x)$, we have

$$|\widehat a-\widehat a^*|_{\alpha}\le|\chi|_{\alpha}\,|\widetilde a|_{\alpha}.\qquad(89)$$

Let $w^*(x,\bar s)|_{\partial\Omega_1}=f^*(x)$. Then

$$D\Delta\widetilde w-(\bar s^{2}+\widehat a^*)\widetilde w=(\widehat a-\widehat a^*)\,w\quad\text{in}\ \Omega_1,\qquad \widetilde w|_{\partial\Omega_1}=(f-f^*)(x).\qquad(90)$$

It follows from Lemma 3 that

$$\|\widetilde w\|_{C^{3}(\partial\Omega_1)}=\|f-f^*\|_{C^{3}(\partial\Omega_1)}\le B\,|\widetilde a|_{\alpha}.\qquad(91)$$

Using $\widetilde a(x)=0$ for $x\notin\Omega$ and $|w|_{2+\alpha}\le B$ as well as (89), we obtain

$$\|(\widehat a-\widehat a^*)w\|_{C^{\alpha}(\overline\Omega_1)}=\|(\widehat a-\widehat a^*)w(x,\bar s)\|_{C^{\alpha}(\overline\Omega)}\le B\,|\widetilde a|_{\alpha}.\qquad(92)$$

Hence, (90)–(92) and the Schauder theorem imply that $|\widetilde w|_{2+\alpha}\le B\,|a-a^*|_{\alpha}$. $\square$



7. Global convergence theorem

The goal of this section is to prove that the algorithm of Section 5 has the global convergence property. First, we give an important estimate, which follows from the Schauder theorem, for the solution of the Dirichlet boundary value problem

$$\Delta z+\sum_{j=1}^{r}b_j\,z_{x_j}-b_0\,z=f\ \text{in}\ \Omega,\qquad z(x)=g(x)\ \text{for}\ x\in\partial\Omega.\qquad(93)$$

Assume that the following conditions are satisfied:

$$g\in C^{2+\alpha}(\partial\Omega);\quad b_j,f\in C^{\alpha}(\overline\Omega)\ \text{for}\ j=0,\dots,r;\quad b_0\ge 0\quad\text{and}\quad \max_{j=0,\dots,r}|b_j|_{\alpha}\le M,\qquad(94)$$

where the upper bound $M$ is a positive constant. By the Schauder theorem, assumptions (94) guarantee the existence and uniqueness of the solution $z\in C^{2+\alpha}(\overline\Omega)$ of the boundary value problem (93), as well as the following estimate:

$$|z|_{2+\alpha}\le K\big[\|g\|_{C^{2+\alpha}(\partial\Omega)}+|f|_{\alpha}\big],\qquad(95)$$


where the constant $K$ depends only on the domain $\Omega$ and the constant $M$ [21]. Since the constant $\bar K$ in (64) corresponds to a special case of (93), we may use $K$ in place of $\bar K$ and assume $K\ge 1$. Now we prove the global convergence property of the algorithm of Section 5.

Theorem 4. Let $\Omega\subset\Omega_1\subset\mathbb R^3$ be two convex bounded domains as in Lemma 1, and let $a^*(x)\in P(b,\bar b)$ be the exact solution of the CIP (1)–(5) satisfying conditions (2)–(3). Suppose that conditions (56)–(57) hold with the level of error $\sigma>0$ in the data, and $\varphi_n\in C^{2+\alpha}(\partial\Omega)$ in (42). Further, assume that estimates (46), (48) and (55) hold with the constant $C^*\ge 1$. Let $\lambda\ge 1/h$. Denote $\eta:=2(h+\sigma)$. Consider the algorithm of Section 5 with the first tail $V_{1,1}(x)$ calculated by (62)–(63). Then there exists a constant $\beta=\beta(\Omega,\Omega_1,\bar s,b,\bar b,C^*,\chi,x_0,D)\ge B\ge 2$ such that if

$$0<\eta<\eta_0:=\frac{1}{KN\beta^{3Nm}},\qquad(96)$$

then for all $(n,k)\in[1,N]\times[1,m]$

$$a_{n,k}\in P(b+1,\bar b+1).\qquad(97)$$

In addition, the following estimates hold for all $(n,k)\in[1,N]\times[1,m]$:

$$|\nabla V_{n,k}|_{1+\alpha}\le\beta,\qquad(98)$$

$$|\nabla V_{n,k}-\nabla V^*|_{1+\alpha}\le\beta^{3(k-1+m(n-1))+1}\eta,\qquad(99)$$

$$|q_{n,k}-q_n^*|_{2+\alpha}\le K\beta^{3(k+m(n-1))}\eta,\qquad(100)$$

$$|q_{n,k}|_{2+\alpha}\le 2C^*,\qquad(101)$$

$$|a_{n,k}-a^*|_{\alpha}\le\beta^{3(k+m(n-1))}\eta.\qquad(102)$$

In particular, consider the number $\omega$,

$$\omega=\frac{\ln(KN)}{3Nm\ln\beta+\ln(KN)}\in(0,1);\qquad(103)$$

then (102) becomes

$$|a_{n,k}-a^*|_{\alpha}\le\eta^{\omega}.\qquad(104)$$
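To see concretely how (96) and (103) interact, here is a hypothetical numeric check (the values of $K$, $N$, $m$, $\beta$ are invented for illustration) of the identity $\beta^{3Nm}=\eta_0^{\omega-1}$ that drives (104):

```python
import math

# Hypothetical values for the constants in Theorem 4 (illustration only).
K, N, m, beta = 2.0, 5, 3, 4.0

eta0 = 1.0 / (K * N * beta ** (3 * N * m))                                # (96)
omega = math.log(K * N) / (3 * N * m * math.log(beta) + math.log(K * N))  # (103)

# Key identity behind (104): beta^{3Nm} = eta0^{omega - 1}, so
# |a_{n,k} - a*|_alpha <= beta^{3Nm} * eta <= eta^{omega-1} * eta = eta^omega.
lhs = beta ** (3 * N * m)
rhs = eta0 ** (omega - 1.0)
assert abs(lhs - rhs) / lhs < 1e-6

# For an admissible eta < eta0, the bound eta^omega indeed dominates:
eta = eta0 / 10.0
assert beta ** (3 * N * m) * eta <= eta ** omega
```

This mirrors the first part of the proof below; the assertions hold for any choice of $K\ge 1$, $N,m\ge 1$, $\beta>1$.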

Proof. First, we prove that (97) and (104) follow from estimate (102). Let (102) hold. Rewriting (103) as $3\omega Nm\ln\beta+\omega\ln(KN)=\ln(KN)$ and taking the exponential of both sides, we obtain $\beta^{3Nm\omega}=(KN)^{1-\omega}$, whence

$$\beta^{3Nm}=\beta^{3Nm(1-\omega)}(KN)^{1-\omega}=\big(KN\beta^{3Nm}\big)^{1-\omega}=\eta_0^{\omega-1}\le\eta^{\omega-1}.$$

Using this in (102), we conclude that estimate (104) holds:

$$|a_{n,k}-a^*|_{\alpha}\le\beta^{3(k+m(n-1))}\eta\le\beta^{3Nm}\eta\le\eta^{\omega-1}\eta=\eta^{\omega}.$$

To obtain (97) from (102) we use the assumption $a^*\in P(b,\bar b)$:

$$|a_{n,k}|_{\alpha}\le|a_{n,k}-a^*|_{\alpha}+|a^*|_{\alpha}\le\beta^{3(k+m(n-1))}\eta+\bar b\le 1+\bar b.\qquad(105)$$

Similarly one shows $a_{n,k}\le b+1$. This, together with (105), implies (97). Furthermore, estimate (98) for the gradients of the tail functions $V_{n,k}$ follows directly from (97) and Theorem 3.

In the second part of the proof we establish estimates (99)–(102) by mathematical induction. Denote

$$\widetilde q_{n,k}=q_{n,k}-q_n^*,\quad \widetilde V_{n,k}=V_{n,k}-V^*,\quad \widetilde F_n=F_n+\frac{J}{I}\,|\nabla q_n^*|^{2},\quad \widetilde\varphi_n=\varphi_n-\varphi_n^*,\quad \widetilde a_{n,k}=a_{n,k}-a^*.$$

Taking $(n,k)=(1,1)$ in (44) and (65), we get

$$\Delta q_{1,1}-\big(2h\nabla q_0+A_1-2\nabla V_{1,1}-E_1\big)\cdot\nabla q_{1,1}=B_1\big(h\nabla q_0-\nabla V_{1,1}\big),$$

$$\Delta q_1^*-\big(2h\nabla q_0^*+A_1-2\nabla V^*-E_1\big)\cdot\nabla q_1^*=\frac{J}{I}\,|\nabla q_1^*|^{2}+B_1\big(h\nabla q_0^*-\nabla V^*\big)+F_1.$$

Subtracting the second equation from the first one and using $q_0=q_0^*=0$, we obtain the equation

$$\Delta\widetilde q_{1,1}-\big(A_1-2\nabla V_{1,1}-E_1\big)\cdot\nabla\widetilde q_{1,1}=-2\nabla q_1^*\cdot\nabla\widetilde V_{1,1}-B_1\nabla\widetilde V_{1,1}-\widetilde F_1\quad\text{in}\ \Omega,\qquad(106)$$


with the boundary condition

$$\widetilde q_{1,1}(x)=\widetilde\varphi_1(x)\quad\text{for}\ x\in\partial\Omega.\qquad(107)$$

Here $\widetilde q_{n,k}$, $\widetilde F_n$ and $\widetilde\varphi_n$ are as defined above. By estimate (44), $J/I\le 2/\lambda$. Then, using $\lambda h\ge 1$ together with $\eta\ge 2h$, we obtain

$$|\widetilde F_n|_{\alpha}\le C^*h+\frac{2}{\lambda}\,(C^*)^{2}\le C^*\eta.\qquad(108)$$

Let us analyze each term on the right-hand side of (106). Since

$$\max_{n\in[1,N]}|q_n^*|_{2+\alpha}\le C^*\qquad\text{with}\ C^*>1,$$

and $V_{1,1}$ is calculated by (62)–(63), the Schauder estimate, with the assumption $\beta\ge\bar K\bar s/2$, implies (98) for $(n,k)=(1,1)$. Also, from estimate (64) we obtain (99) for $(n,k)=(1,1)$:

$$|\widetilde V_{1,1}|_{2+\alpha}=|V_{1,1}-V^*|_{2+\alpha}\le\left|\frac{p}{\bar s}-\frac{p^*}{\bar s}\right|_{2+\alpha}\le\bar K\bar s\,\sigma\le 2\beta\sigma\le\beta\eta.\qquad(109)$$

We can assume that $\beta\ge 2C^*$. Hence, the first term on the right-hand side of (106) can be estimated as follows:

$$|-2\nabla q_1^*\cdot\nabla\widetilde V_{1,1}|_{\alpha}\le 2C^*\beta\eta\le\beta^{2}\eta.$$

To estimate $B_1$ we use the estimates $|\bar v_0(x,\bar s)|_{2+\alpha}\le\bar s\beta$, $|\bar q_0(x,\bar s)|_{2+\alpha}\le\beta$ and $I\in\big(2/(3\lambda),1/\lambda\big)$. We have

$$|B_n|_{\alpha}=\left|\frac{2}{I}\int_{s_n}^{s_{n-1}}\nabla\bar q_0(x,s)\,e^{\lambda(s-s_{n-1})}\,ds\right|_{\alpha}\le\frac{2}{I}\,\beta h\le 3\lambda h\beta,\qquad n=1,\dots,N.$$

Then, choosing $\bar s$ so that $\bar s\ge\lambda h\ge 1$, we have

$$|B_1\nabla\widetilde V_{1,1}|_{\alpha}\le 3\bar s\beta^{2}\eta.$$

Using the above estimates, we obtain the following bound for the right-hand side of (106):

$$|-2\nabla q_1^*\cdot\nabla\widetilde V_{1,1}-B_1\nabla\widetilde V_{1,1}-\widetilde F_1|_{\alpha}\le\eta\big(\beta^{2}+3\bar s\beta^{2}+C^*\big)\le 5\bar s\beta^{2}\eta.\qquad(110)$$

Now we estimate the coefficient on the left-hand side of Eq. (106):

$$|A_1-E_1-2\nabla V_{1,1}|_{\alpha}\le|A_1|_{\alpha}+|E_1|_{\alpha}+2\beta.$$

Similarly to the estimate of $B_n$, we obtain the following bounds for $A_n$ and $E_n$:

$$|A_n|_{\alpha}=\left|\frac{2}{I}\int_{s_n}^{s_{n-1}}(s_{n-1}-s)\,\nabla\bar q_0(x,s)\,e^{\lambda(s-s_{n-1})}\,ds\right|_{\alpha}\le 3\lambda h^{2}\beta\le 3\bar s^{2}\beta;$$

$$|E_n|_{\alpha}=\left|\frac{2}{I}\int_{s_n}^{s_{n-1}}\nabla\bar v_0(x,s)\,e^{\lambda(s-s_{n-1})}\,ds\right|_{\alpha}\le 3\lambda h\bar s\beta\le 3\bar s^{2}\beta.$$

Therefore,

$$|A_1-E_1-2\nabla V_{1,1}|_{\alpha}\le|A_1|_{\alpha}+|E_1|_{\alpha}+2\beta\le 8\bar s^{2}\beta.$$

By the Schauder theorem and estimate (57) (with $6\bar s\le\beta$),

$$|\widetilde q_{1,1}|_{2+\alpha}\le K\Big(5\bar s\beta^{2}\eta+\|\widetilde\varphi_1\|_{C^{2+\alpha}(\partial\Omega)}\Big)\le K\beta^{2}\left(5\bar s\eta+\frac{C^*\eta}{2\beta^{2}}\right)\le 6K\bar s\beta^{2}\eta\le K\beta^{3}\eta.\qquad(111)$$

This estimate, in particular, implies the required estimate (100) for $(n,k)=(1,1)$. Further, estimate (101) for $(n,k)=(1,1)$ follows from (96) and (111):

$$|q_{1,1}|_{2+\alpha}\le|\widetilde q_{1,1}|_{2+\alpha}+|q_1^*|_{2+\alpha}\le K\beta^{3}\eta+C^*\le 1+C^*\le 2C^*.\qquad(112)$$

Now we estimate $\widetilde a_{n,k}:=a_{n,k}-a^*$ for $(n,k)=(1,1)$. Consider the formulas given in the algorithm:

$$a_{n,k}(x)=D\Delta\big(v_{n,k}+\bar v_{0n}\big)+D\big|\nabla\big(v_{n,k}+\bar v_{0n}\big)\big|^{2}-s_n^{2},\qquad(113)$$

$$a^*(x)=D\Delta\big(v_n^*+\bar v_{0n}\big)+D\big|\nabla\big(v_n^*+\bar v_{0n}\big)\big|^{2}-s_n^{2}-F_n(x),\qquad(114)$$


where $\bar v_{0n}:=\bar v_0(x,s_n)$. Subtracting (114) from (113) and taking $(n,k)=(1,1)$, we obtain

$$\widetilde a_{1,1}(x)=D\Delta\widetilde v_{1,1}+D\big[\nabla v_{1,1}+\nabla v_1^*+2\nabla\bar v_{01}\big]\cdot\nabla\widetilde v_{1,1}+F_1(x),\qquad(115)$$

where

$$v_{n,k}(x)=-h\,q_{n,k}(x)-h\sum_{j=0}^{n-1}q_j(x)+V_{n,k}(x),\qquad(116)$$

$$v_n^*(x)=-h\,q_n^*(x)-h\sum_{j=0}^{n-1}q_j^*(x)+V^*(x),\qquad(117)$$

and $\widetilde v_{n,k}=v_{n,k}-v_n^*$. Since $\widetilde q_{1,1},\widetilde V_{1,1}\in C^{2+\alpha}(\overline\Omega)$, we have $\widetilde v_{1,1}\in C^{2+\alpha}(\overline\Omega)$. To estimate $|\widetilde a_{1,1}|_{\alpha}$ we use $|F_n|_{\alpha}\le C^*h\le C^*\eta/2$:

$$|\widetilde a_{1,1}|_{\alpha}\le D\max\{|\Delta\widetilde v_{1,1}|_{\alpha},|\nabla\widetilde v_{1,1}|_{\alpha}\}\big[1+|\nabla v_{1,1}|_{\alpha}+|\nabla v_1^*|_{\alpha}+2|\nabla\bar v_{01}|_{\alpha}\big]+\frac{C^*\eta}{2}.\qquad(118)$$

By definitions, $q_0^*(x)=q_0(x)=0$ and $\widetilde v_{1,1}=-h\widetilde q_{1,1}+\widetilde V_{1,1}$. Then, using the estimates (109), (111) proved above and the assumption (96), we conclude that

$$|\widetilde v_{1,1}|_{2+\alpha}\le h|\widetilde q_{1,1}|_{2+\alpha}+|\widetilde V_{1,1}|_{2+\alpha}\le\frac{K\beta^{3}\eta^{2}}{2}+\beta\eta\le 2\beta\eta,$$

which implies

$$\max\{|\Delta\widetilde v_{1,1}|_{\alpha},|\nabla\widetilde v_{1,1}|_{\alpha}\}\le 2\beta\eta.\qquad(119)$$

Let us consider the term $1+|\nabla v_{1,1}|_{\alpha}+|\nabla v_1^*|_{\alpha}+2|\nabla\bar v_{01}|_{\alpha}$ in (118). By the definitions of $v_{n,k}$ and $v_n^*$ in (116) and (117), we have

$$1+|\nabla v_{1,1}|_{\alpha}+|\nabla v_1^*|_{\alpha}+2|\nabla\bar v_{01}|_{\alpha}\le 1+h|\nabla q_{1,1}|_{\alpha}+|\nabla V_{1,1}|_{\alpha}+h|\nabla q_1^*|_{\alpha}+|\nabla V^*|_{\alpha}+2|\nabla\bar v_{01}|_{\alpha}\le 1+2C^*h+\beta+hC^*+\beta+2\beta\bar s\le\frac{\beta}{2}+3hC^*+4\beta\bar s\le 5\beta\bar s.$$

Using this estimate together with (119) in (118), we obtain the desired estimate (102) for $(n,k)=(1,1)$:

$$|\widetilde a_{1,1}|_{\alpha}\le D\cdot 2\beta\eta\cdot 5\beta\bar s+\frac{C^*\eta}{2}\le\beta^{3}\eta,\qquad(120)$$

where we assumed $\max\{10D\bar s,\,24\bar s\}<\beta$. By the algorithm given in Section 5,

$$q_n:=q_{n,m_n},\qquad a_n:=a_{n,m_n}\qquad\text{and}\qquad V_{n+1,1}(x,\bar s)=\ln w_{n,m_n}(x,\bar s)-\bar v_0(x,\bar s).$$

Having these functions, we calculate $q_{n+1,1}$. Also recall that $q_0=q_0^*=0$. For the convenience of the mathematical induction, we temporarily set $q_{n,0}:=q_{n-1}$ for $n\ge 1$, $a_0:=a^*$ and $V_{0,0}:=V_{1,1}$. Hence (97)–(102) are valid for $(n,k)=(0,0)$. Now assume that (97)–(102) are proved for $(n',k')\in[0,n]\times[0,k-1]$, where $k\ge 1$; we need to prove (97)–(102) for $(n',k')=(n,k)$. Using (66) and (47), we obtain

$$\Delta q_{n,k}-\Big(2h\sum_{j=0}^{n-1}\nabla q_j+A_n-2\nabla V_{n,k}-E_n\Big)\cdot\nabla q_{n,k}=B_n\Big(h\sum_{j=0}^{n-1}\nabla q_j-\nabla V_{n,k}\Big),\qquad(121)$$

$$\Delta q_n^*-\Big(2h\sum_{j=0}^{n-1}\nabla q_j^*+A_n-2\nabla V^*-E_n\Big)\cdot\nabla q_n^*=\frac{J}{I}\,|\nabla q_n^*|^{2}+B_n\Big(h\sum_{j=0}^{n-1}\nabla q_j^*-\nabla V^*\Big)+F_n.\qquad(122)$$

Subtracting (122) from (121), we obtain, for $x\in\Omega$,

$$\Delta\widetilde q_{n,k}-\Big(2h\sum_{j=0}^{n-1}\nabla q_j+A_n-2\nabla V_{n,k}-E_n\Big)\cdot\nabla\widetilde q_{n,k}=2\nabla q_n^*\cdot\Big(h\sum_{j=0}^{n-1}\nabla\widetilde q_j-\nabla\widetilde V_{n,k}\Big)+B_n\Big(h\sum_{j=0}^{n-1}\nabla\widetilde q_j-\nabla\widetilde V_{n,k}\Big)-\widetilde F_n\qquad(123)$$


with the boundary condition

$$\widetilde q_{n,k}(x)=\widetilde\varphi_n(x)\quad\text{for}\ x\in\partial\Omega.\qquad(124)$$

In order to estimate the right-hand side of (123), we consider $\widetilde V_{n,k}$ first. Since (97)–(102) are valid for $(n',k')\in[0,n]\times[0,k-1]$, Theorem 3 implies that

$$|\nabla V_{n,k}|_{1+\alpha}\le\beta,\qquad |\nabla\widetilde V_{n,k}|_{1+\alpha}=|\nabla V_{n,k}-\nabla V^*|_{1+\alpha}\le\beta\cdot\beta^{3(k-1+m(n-1))}\eta\le\beta^{3(k-1+m(n-1))+1}\eta.\qquad(125)$$

These estimates prove (98)–(99) for $(n',k')=(n,k)$. Next, estimate (100) together with (96) implies that

$$\Big|h\sum_{j=0}^{n-1}\nabla\widetilde q_j\Big|_{1+\alpha}\le hKN\beta^{3Nm}\eta\le h\le\frac{\eta}{2}.\qquad(126)$$

Also, by the estimates $|B_n|_{\alpha}\le 3\bar s\beta$ and $\max_{n\in[1,N]}|q_n^*|_{2+\alpha}\le C^*$, we obtain the required estimate for the right-hand side of (123):

$$\Big|2\nabla q_n^*\cdot\Big(h\sum_{j=0}^{n-1}\nabla\widetilde q_j-\nabla\widetilde V_{n,k}\Big)+B_n\Big(h\sum_{j=0}^{n-1}\nabla\widetilde q_j-\nabla\widetilde V_{n,k}\Big)-\widetilde F_n\Big|_{\alpha}\le\big(2C^*+3\bar s\beta\big)\Big(\frac{\eta}{2}+\beta^{3(k-1+m(n-1))+1}\eta\Big)+C^*\eta\le 10\bar s\beta\,\beta^{3(k-1+m(n-1))+1}\eta.\qquad(127)$$

Now consider the coefficient on the left-hand side of (123). Using (101) for the previously constructed functions $q_j$, we have

$$\Big|2h\sum_{j=0}^{n-1}\nabla q_j+A_n-2\nabla V_{n,k}-E_n\Big|_{\alpha}\le 4NhC^*+3\bar s^{2}\beta+2\beta+3\bar s^{2}\beta\le 2\bar s\beta+6\bar s^{2}\beta+2\beta\le 8\bar s^{2}\beta.\qquad(128)$$

Thus the $C^{\alpha}$-norm of the coefficient on the left-hand side of (123) is bounded by $8\bar s^{2}\beta$. Hence, applying the Schauder theorem to (123), (124), we obtain estimate (100):

$$|\widetilde q_{n,k}|_{2+\alpha}\le K\Big(10\bar s\beta\,\beta^{3(k-1+m(n-1))+1}\eta+\|\widetilde\varphi_n\|_{C^{2+\alpha}(\partial\Omega)}\Big)\le K\Big(10\bar s\beta\,\beta^{3(k-1+m(n-1))+1}\eta+\frac{C^*\eta}{2}\Big)\le K\beta^{3(k+m(n-1))}\eta.\qquad(129)$$

We now prove estimate (101). To do this, we use (96) and (129):

$$|q_{n,k}|_{2+\alpha}\le|\widetilde q_{n,k}|_{2+\alpha}+|q_n^*|_{2+\alpha}\le K\beta^{3(k+m(n-1))}\eta+C^*\le 1+C^*\le 2C^*.\qquad(130)$$

Finally, we prove (102). It follows from definitions (113) and (114) that

$$\widetilde a_{n,k}(x)=D\Delta\widetilde v_{n,k}+D\big[\nabla v_{n,k}+\nabla v_n^*+2\nabla\bar v_{0n}\big]\cdot\nabla\widetilde v_{n,k}+F_n(x),\qquad(131)$$

where $\widetilde v_{n,k}=v_{n,k}-v_n^*$ and $v_{n,k}$, $v_n^*$ are defined in (116), (117), respectively. Since $v_{n,k},v_n^*\in C^{2+\alpha}(\overline\Omega)$, we conclude that $\widetilde v_{n,k}\in C^{2+\alpha}(\overline\Omega)$. Next, using $|F_n|_{\alpha}\le C^*h\le C^*\eta/2$, we get

$$|\widetilde a_{n,k}|_{\alpha}\le D\max\{|\Delta\widetilde v_{n,k}|_{\alpha},|\nabla\widetilde v_{n,k}|_{\alpha}\}\big[1+|\nabla v_{n,k}|_{\alpha}+|\nabla v_n^*|_{\alpha}+2|\nabla\bar v_{0n}|_{\alpha}\big]+\frac{C^*\eta}{2}.\qquad(132)$$

First we estimate the term $|\Delta\widetilde v_{n,k}|_{\alpha}$ in (132). Since $\widetilde v_{n,k}=-h\widetilde q_{n,k}-h\sum_{j=0}^{n-1}\widetilde q_j+\widetilde V_{n,k}$, using (96) we obtain

$$|\widetilde v_{n,k}|_{2+\alpha}\le h|\widetilde q_{n,k}|_{2+\alpha}+h\sum_{j=0}^{n-1}|\widetilde q_j|_{2+\alpha}+|\widetilde V_{n,k}|_{2+\alpha}\le\frac{\eta}{2}\,K\beta^{3(k+m(n-1))}\eta+\frac{\eta}{2}\,KN\beta^{3Nm}\eta+\beta^{3(k-1+m(n-1))+1}\eta\le\frac{\eta}{2}+\frac{\eta}{2}+\beta^{3(k-1+m(n-1))+1}\eta\le\frac{3}{2}\,\beta^{3(k-1+m(n-1))+1}\eta.$$


Then the term in (132) is estimated as follows:

$$\max\{|\Delta\widetilde v_{n,k}|_{\alpha},|\nabla\widetilde v_{n,k}|_{\alpha}\}\le\frac{3}{2}\,\beta^{3(k-1+m(n-1))+1}\eta.\qquad(133)$$

Second, we consider the term $1+|\nabla v_{n,k}|_{\alpha}+|\nabla v_n^*|_{\alpha}+2|\nabla\bar v_{0n}|_{\alpha}$ in (132). By the definitions of $v_{n,k}$ and $v_n^*$ in (116) and (117), we obtain

$$|\nabla v_{n,k}|_{\alpha}\le h|\nabla q_{n,k}|_{\alpha}+h\sum_{j=0}^{n-1}|\nabla q_j|_{\alpha}+|\nabla V_{n,k}|_{\alpha}\le 2hC^*+2NhC^*+\beta\le 2\bar s\beta\le\frac{\beta^{2}}{12};$$

$$|\nabla v_n^*|_{\alpha}\le h|\nabla q_n^*|_{\alpha}+h\sum_{j=0}^{n-1}|\nabla q_j^*|_{\alpha}+|\nabla V^*|_{\alpha}\le hC^*+NhC^*+\beta\le 2\bar s\beta\le\frac{\beta^{2}}{12}.$$

By the estimate $2|\nabla\bar v_{0n}|_{\alpha}\le 2\bar s\beta\le\beta^{2}/12$ (recall that $24\bar s<\beta$), we get

$$1+|\nabla v_{n,k}|_{\alpha}+|\nabla v_n^*|_{\alpha}+2|\nabla\bar v_{0n}|_{\alpha}\le\frac{\beta^{2}}{4}+\frac{\beta^{2}}{12}+\frac{\beta^{2}}{12}+\frac{\beta^{2}}{12}\le\frac{\beta^{2}}{2}.\qquad(134)$$

Using estimates (133) and (134) in (132), we obtain the final assertion of the theorem:

$$|\widetilde a_{n,k}|_{\alpha}\le D\max\{|\Delta\widetilde v_{n,k}|_{\alpha},|\nabla\widetilde v_{n,k}|_{\alpha}\}\big[1+|\nabla v_{n,k}|_{\alpha}+|\nabla v_n^*|_{\alpha}+2|\nabla\bar v_{0n}|_{\alpha}\big]+\frac{C^*\eta}{2}\le D\cdot\frac{3}{2}\,\beta^{3(k-1+m(n-1))+1}\eta\cdot\frac{\beta^{2}}{2}+\frac{C^*\eta}{2}\le\frac{3D}{4}\,\beta^{3(k+m(n-1))}\eta+\frac{C^*\eta}{2}\le\beta^{3(k+m(n-1))}\eta,$$

where the factor $D$ is absorbed into the choice of $\beta$ (recall that $\beta$ depends on $D$). This completes the proof of the theorem. $\square$



8. Summary

We have presented a globally convergent numerical method for a CIP for a parabolic equation. As can be seen from the global convergence theorem (Theorem 4), the accuracy of the reconstruction of the tail function $V(x)$ is a crucial ingredient of the accuracy of the iterative algorithm. For the first tail $V_{1,1}$ we use the asymptotic behavior (33); however, we do not use this behavior at subsequent iterations. We construct $V_{1,1}$ by solving the Dirichlet boundary value problem (63) for the Laplace equation. Hence, estimate (64) implies that the accuracy of the reconstruction of the first tail function is determined only by the level of error $\sigma$ in the boundary data. Using this fact, we show in Theorem 4 that the accuracy of the reconstruction of the unknown coefficient is determined only by $\sigma$ and the step size $h$ of the discretization with respect to $s$, which is quite natural. As a result, the approach presented here is a continuation of the methodology of [1] for CIPs for a hyperbolic equation. A numerical validation of this approach can be found in [22].

Acknowledgments This research was supported by The Scientific and Technological Research Council of Turkey (TÜBİTAK-2219). I thank Alemdar Hasanov for useful comments and corrections on drafts of this manuscript.

References

[1] L. Beilina, M.V. Klibanov, Approximate Global Convergence and Adaptivity for Coefficient Inverse Problems, Springer, 2012.
[2] L. Beilina, M.V. Klibanov, A new approximate mathematical model for global convergence for a coefficient inverse problem with backscattering data, J. Inverse Ill-Posed Probl. 20 (2012) 513–565.


[3] L. Beilina, M.V. Klibanov, Reconstruction of dielectrics from experimental data via a hybrid globally convergent/adaptive inverse algorithm, Inverse Problems 26 (2010) 125009 (30pp).
[4] M.V. Klibanov, M.A. Fiddy, L. Beilina, N. Pantong, J. Schenk, Picosecond scale experimental verification of a globally convergent algorithm for a coefficient inverse problem, Inverse Problems 26 (2010) 045003.
[5] A. Kuzhuget, L. Beilina, M. Klibanov, A. Sullivan, L. Nguyen, M. Fiddy, Blind backscattering experimental data collected in the field and an approximately globally convergent inverse algorithm, Inverse Problems 28 (2012) 095007.
[6] N.T. Thanh, L. Beilina, M.V. Klibanov, M.A. Fiddy, Reconstruction of the refractive index from experimental backscattering data using a globally convergent inverse method, SIAM J. Sci. Comput. 36 (3) (2014) B273–B293.
[7] L. Beilina, N.T. Thanh, M.V. Klibanov, M.A. Fiddy, Reconstruction from blind experimental data for an inverse problem for a hyperbolic equation, Inverse Problems 30 (2014) 025002.
[8] L. Beilina, N.T. Thanh, M.V. Klibanov, J.B. Malmberg, Reconstruction of shapes and refractive indices from backscattering experimental data using the adaptivity, Inverse Problems 30 (2014) 105007.
[9] Y.T. Chow, J. Zou, A numerical method for reconstructing the coefficient in a wave equation, Numer. Methods Partial Differential Equations (2014), http://dx.doi.org/10.1002/num.21904, published online.
[10] J. Su, M.V. Klibanov, Y. Liu, Z. Lin, N. Pantong, H. Liu, Optical imaging of phantoms from real data by an approximately globally convergent inverse algorithm, Inverse Probl. Sci. Eng. 21 (2013) 1125–1150.
[11] O.A. Ladyzhenskaya, V.A. Solonnikov, N.N. Uralceva, Linear and Quasilinear Equations of Parabolic Type, AMS, Providence, RI, 1968.
[12] A. Friedman, Partial Differential Equations of Parabolic Type, Prentice Hall, Englewood Cliffs, NJ, 1964.
[13] S. Arridge, Optical tomography in medical imaging, Inverse Problems 15 (1999) 841–893.
[14] D. Grosenick, H. Wabnitz, H.R. Rinneberg, K.T. Moesta, P.M. Schlag, Development of a time-domain optical mammograph and first in vivo applications, Appl. Opt. 38 (1999) 2927–2943.
[15] A.L. Bukhgeim, M.V. Klibanov, Uniqueness in the large of a class of multidimensional inverse problems, Sov. Math. Dokl. 260 (1981) 244–247.
[16] H. Bateman, A. Erdelyi, Tables of Integral Transforms, Vol. 1, McGraw-Hill, New York, 1954.
[17] V.S. Vladimirov, Equations of Mathematical Physics, M. Dekker, New York, 1971.
[18] M.V. Klibanov, A. Timonov, Carleman Estimates for Coefficient Inverse Problems and Numerical Applications, VSP, Utrecht, 2004.
[19] A.N. Tikhonov, A.V. Goncharsky, V.V. Stepanov, A.G. Yagola, Numerical Methods for the Solution of Ill-Posed Problems, Kluwer, London, 1995.
[20] A.N. Tikhonov, V. Arsenin, Solution of Ill-Posed Problems, Wiley, New York, 1977.
[21] O.A. Ladyzhenskaya, N.N. Uralceva, Linear and Quasilinear Elliptic Equations, Academic Press, New York, 1969.
[22] A. Rhoden, N. Pantong, Y. Liu, J. Su, H. Liu, A globally convergent numerical method for coefficient inverse problems with time-dependent data, in: L. Beilina (Ed.), Applied Inverse Problems: Select Contributions from the First Annual Workshop on Inverse Problems, Springer Proceedings in Mathematics & Statistics, vol. 48, 2013, http://dx.doi.org/10.1007/978-1-4614-7816-47.