Available online at www.sciencedirect.com

ScienceDirect
Transactions of A. Razmadze Mathematical Institute 171 (2017) 411–423
www.elsevier.com/locate/trmi

Original article

Application of Sinc-Galerkin method for solving a nonlinear inverse parabolic problem

A. Zakeri∗, A.H. Salehi Shayegan, S. Sakaki
Faculty of Mathematics, K.N. Toosi University of Technology, P.O. Box 16315-1618, Tehran, Iran

Received 27 November 2016; received in revised form 27 April 2017; accepted 29 May 2017; available online 22 June 2017
Abstract

In this paper, a stable numerical solution of a nonlinear inverse parabolic problem is obtained using the Sinc-Galerkin and Levenberg–Marquardt methods. To this end, the problem is reduced to a parameter approximation problem. To approximate the unknown parameters, we consider an optimization problem whose objective function is minimized by the Levenberg–Marquardt method. This objective function is built from the Sinc-Galerkin approximation and the overposed measured data. Finally, some numerical examples are given to demonstrate the accuracy and reliability of the proposed method.

© 2017 Published by Elsevier B.V. on behalf of Ivane Javakhishvili Tbilisi State University. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

Keywords: Inverse problem; Sinc-Galerkin method; Nonlinear parabolic partial differential equation; Levenberg–Marquardt method
1. Introduction

Sinc methods have been increasingly used for finding numerical solutions of ordinary and partial differential equations [1–3]. The books [4,5] provide overviews of existing methods based on Sinc functions for solving ODEs, PDEs, and integral equations [1]. These methods have also been employed for some inverse problems [6–8]. Several features of the Sinc methods motivate their use. First, and most important, is the good accuracy they achieve in the neighborhood of singularities [5,9]. Second, they are typified by exponentially decaying errors and, in special cases, by an optimal convergence rate, even for problems over infinite and semi-infinite domains [5,9]. Finally, owing to their rapid convergence, these methods do not suffer from the instability problems that typically occur in other methods [5,9,10].

∗ Corresponding author.
E-mail addresses: [email protected] (A. Zakeri), [email protected] (A.H. Salehi Shayegan), [email protected] (S. Sakaki).
Peer review under responsibility of Journal Transactions of A. Razmadze Mathematical Institute.
http://dx.doi.org/10.1016/j.trmi.2017.05.003
2346-8092/© 2017 Published by Elsevier B.V. on behalf of Ivane Javakhishvili Tbilisi State University.
The main aim of this paper is to use the Sinc-Galerkin method for solving a nonlinear inverse parabolic problem of the form
\[
\begin{cases}
u_t - u_{xx} + H(u) = F(x,t), & 0 < x < 1,\ t > 0,\\
u(x,0) = \varphi(x), & 0 \leqslant x \leqslant 1,\\
u(0,t) = p(t), & t \geqslant 0,\\
u(1,t) = q(t), & t \geqslant 0,
\end{cases}
\tag{1.1}
\]
where $H(u)$ is one of $u^n$ $(n > 1)$, $\sin(u)$, $\cos(u)$, $\exp(\pm u)$, $\sinh(u)$, $\cosh(u)$, or any analytic function of $u$ that has a power series expansion. In the above problem, $F(x,t)$, $\varphi(x)$ and $q(t)$ are known analytic functions on the open domain $0 < x < 1$, $t > 0$, possibly singular at $x = 0$ or $x = 1$ or both, while the analytic functions $p(t)$ and $u(x,t)$ are unknown. If $p = p(t)$ is given, then problem (1.1) is called the direct problem (DP). The existence and uniqueness of solutions of DP (1.1) have been widely investigated in [11–14]. To find the pair $(u, p)$, we use the overposed measured data
\[
u(x^*, t) = E(t), \qquad 0 < x^* < 1. \tag{1.2}
\]
Let us denote by $u[x,t;p]$ the solution of DP (1.1). Then, from the additional condition (1.2), the nonlinear inverse parabolic problem (1.1) consists of solving the nonlinear functional equation
\[
u[x^*, t; p] = E(t), \qquad 0 < x^* < 1. \tag{1.3}
\]
In general, instead of solving the functional equation (1.3), an optimization problem is solved, whose objective function is minimized by an effective regularization method. This objective function is defined as
\[
S(p) = \sum_{i=1}^{I} \left( u[x^*, t_i; p] - E(t_i) \right)^2. \tag{1.4}
\]
In this paper, we attempt to obtain an approximate solution for the unknown function $p(t)$. For this purpose, first let
\[
\bar{p}(t) \simeq \sum_{i=1}^{n} p_i \,\mathrm{Sinc}\!\left(\frac{t - ih}{h}\right)
\]
be a linear combination of Sinc functions, where $h$ is the time step size and the $p_i$'s are unknown parameters that should be determined. Then, the Sinc-Galerkin method is used to obtain the approximate solution $u_{m_x,m_t}[x,t;\bar{p}]$ of problem (1.1) with $\bar{p}(t)$ in place of $p(t)$. In other words, problem (1.1) is reduced to a parameter approximation problem. These parameters are determined by minimizing the objective function (1.4) with $u[x^*, t_i; p]$ replaced by $u_{m_x,m_t}[x^*, t_i; \bar{p}]$. For this minimization, the Levenberg–Marquardt method is used. This method is a Newton-type method for nonlinear least-squares problems that is treated in many numerical optimization textbooks, e.g. [15]. The Levenberg–Marquardt method has also been successfully applied to the solution of linear problems that are too ill-conditioned to permit the application of linear algorithms [16,17].

The paper is organized as follows. Section 2 is devoted to the basic formulation of the Sinc function required for our subsequent development. In Section 3, the computational algorithm based on the Sinc-Galerkin and Levenberg–Marquardt methods is provided and the sensitivity matrix is obtained. Finally, in Section 4 some numerical examples are given to show the efficiency and accuracy of the proposed numerical scheme.

2. Sinc function properties

In this section, using the notations of [2,3,5,10], an overview of the basic formulation of the Sinc function is presented. The Sinc function is defined on the whole real line $-\infty < x < \infty$ by
\[
\mathrm{Sinc}(x) =
\begin{cases}
\dfrac{\sin(\pi x)}{\pi x}, & x \neq 0,\\
1, & x = 0.
\end{cases}
\]
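As a quick illustration (ours, not from the paper), the Sinc function just defined and the expansion $\bar{p}(t)$ from the introduction can be coded directly; the names `sinc` and `p_bar` are our own:

```python
import math

def sinc(x):
    """Sinc(x) = sin(pi x)/(pi x), with the removable singularity at 0 filled in."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def p_bar(t, coeffs, h):
    """p_bar(t) = sum_{i=1..n} p_i * Sinc((t - i h)/h), coeffs = [p_1, ..., p_n]."""
    return sum(p_i * sinc((t - i * h) / h)
               for i, p_i in enumerate(coeffs, start=1))
```

At the node $t = ih$ the $i$-th term equals 1 and all others vanish, so $\bar{p}(ih) = p_i$: the unknown parameters are simply the nodal values of the approximation.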
For $h_x > 0$ and $h_t > 0$, the translated Sinc functions with evenly spaced nodes for the space and time variables are given by
\[
S(k, h_x)(z) = \mathrm{Sinc}\!\left(\frac{z - k h_x}{h_x}\right), \qquad k = 0, \pm 1, \pm 2, \ldots,
\]
\[
S^*(k, h_t)(z) = \mathrm{Sinc}\!\left(\frac{z - k h_t}{h_t}\right), \qquad k = 0, \pm 1, \pm 2, \ldots.
\]
To construct approximations on the interval $(0,1)$, which is used in this paper, the eye-shaped domain in the $z$-plane,
\[
D_E = \left\{ z = x + iy : \left| \arg\!\left( \frac{z}{1-z} \right) \right| < d \leqslant \frac{\pi}{2} \right\},
\]
is mapped conformally onto the infinite strip $D_S = \left\{ w = t + is : |s| < d \leqslant \frac{\pi}{2} \right\}$ by
\[
w = \phi(z) = \ln\!\left(\frac{z}{1-z}\right).
\]
The composition
\[
S_j(x) = S(j, h_x) \circ \phi(x) = \mathrm{Sinc}\!\left(\frac{\phi(x) - j h_x}{h_x}\right)
\]
defines the basis elements on the interval $(0,1)$, where $h_x$ is the mesh size in $D_S$ for the uniform grid $k h_x$, $-\infty < k < \infty$. The inverse map of $w = \phi(z)$ is
\[
z = \phi^{-1}(w) = \psi(w) = \frac{\exp(w)}{\exp(w) + 1}.
\]
Thus, the inverse images of the equispaced grid points are $x_k = \psi(k h_x)$, $k = 0, \pm 1, \pm 2, \ldots$.

For the temporal variable [5], we define the function $\Upsilon(t) = \ln(t)$, which is a conformal mapping from the wedge-shaped temporal domain $D_W = \left\{ t = r + is : |\arg(t)| < d \leqslant \frac{\pi}{2} \right\}$ onto $D_S$. The basis elements on the interval $(0, \infty)$ are derived from the composite translated Sinc functions
\[
S^*_j(t) = S^*(j, h_t) \circ \Upsilon(t) = \mathrm{Sinc}\!\left(\frac{\Upsilon(t) - j h_t}{h_t}\right).
\]
Here $h_t$ is the mesh size in $D_S$ for the uniform grid $k h_t$, $-\infty < k < \infty$. The inverse map is $\Upsilon^{-1}(w) = \exp(w)$, so the inverse images of the equispaced grid points are $t_k = \exp(k h_t)$, $k = 0, \pm 1, \pm 2, \ldots$.

3. Sinc-Galerkin solution of the nonlinear inverse parabolic problem

3.1. The direct problem

An approximate solution of DP (1.1) is considered in the form
\[
\hat{u}_{m_x,m_t}(x,t) = \sum_{i=-M_x-1}^{N_x+1} \sum_{j=-M_t-1}^{N_t} u_{i,j}\, X_i(x)\, \Theta_j(t), \tag{3.1}
\]
where $m_x = M_x + N_x + 1$, $m_t = M_t + N_t + 1$,
\[
X_i(x) =
\begin{cases}
1 - x, & i = -M_x - 1,\\
S(i, h_x) \circ \phi(x), & -M_x \leqslant i \leqslant N_x,\\
x, & i = N_x + 1,
\end{cases}
\]
and
\[
\Theta_j(t) =
\begin{cases}
\dfrac{1}{(t+1)^2}, & j = -M_t - 1,\\[2mm]
S^*(j, h_t) \circ \Upsilon(t), & -M_t \leqslant j \leqslant N_t.
\end{cases}
\]
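A minimal Python sketch (our illustration, with assumed helper names) of the conformal maps, the composite Sinc basis, and the node sets defined above:

```python
import math

def phi(z):
    """Conformal map of (0,1) onto the real line (the strip's center line)."""
    return math.log(z / (1.0 - z))

def psi(w):
    """Inverse map phi^{-1}: psi(w) = exp(w)/(exp(w) + 1)."""
    return math.exp(w) / (math.exp(w) + 1.0)

def upsilon(t):
    """Temporal map Upsilon(t) = ln(t) of (0, inf) onto the real line."""
    return math.log(t)

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def S_j(x, j, hx):
    """Spatial basis element S_j = S(j, hx) o phi on (0,1)."""
    return sinc((phi(x) - j * hx) / hx)

def x_node(k, hx):
    return psi(k * hx)          # x_k = psi(k h_x) in (0,1)

def t_node(k, ht):
    return math.exp(k * ht)     # t_k = exp(k h_t) in (0,inf)
```

The cardinality of the basis at its own nodes, $S_j(x_k) = \delta_{jk}$, follows from $\phi(x_k) = k h_x$.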
Using the boundary and initial conditions in DP (1.1), we have
\[
\hat{u}_{m_x,m_t}(0,t) = \sum_{j=-M_t-1}^{N_t} u_{-M_x-1,j}\, \Theta_j(t) = p(t),
\]
\[
\hat{u}_{m_x,m_t}(1,t) = \sum_{j=-M_t-1}^{N_t} u_{N_x+1,j}\, \Theta_j(t) = q(t),
\]
\[
\hat{u}_{m_x,m_t}(x,0) = \sum_{i=-M_x-1}^{N_x+1} u_{i,-M_t-1}\, X_i(x) = \varphi(x).
\]
Thus, we can write the approximate solution (3.1) in terms of the Sinc basis functions as
\[
\hat{u}_{m_x,m_t}(x,t) = \sum_{i=-M_x}^{N_x} \sum_{j=-M_t}^{N_t} u_{i,j}\, S_i(x) S^*_j(t) + p^*(t) X_{-M_x-1}(x) + q^*(t) X_{N_x+1}(x) + \varphi(x) \Theta_{-M_t-1}(t),
\]
where $p^*(t) = p(t) - \varphi(0)\Theta_{-M_t-1}(t)$ and $q^*(t) = q(t) - \varphi(1)\Theta_{-M_t-1}(t)$. The unknown coefficients $u_{i,j}$, $i = -M_x, \ldots, N_x$, $j = -M_t, \ldots, N_t$, are determined by orthogonalizing the residual with respect to the functions $S_{k,l}$, i.e.,
\[
\left( L\hat{u}_{m_x,m_t} - F,\; S_{k,l} \right) = 0, \qquad -M_x \leqslant k \leqslant N_x,\ -M_t \leqslant l \leqslant N_t,
\]
where $Lu \equiv u_t - u_{xx} + H(u)$ and
\[
S_{k,l} = S_k(x) S^*_l(t) = \left( S(k,h_x) \circ \phi(x) \right)\left( S(l,h_t) \circ \Upsilon(t) \right).
\]
The weighted inner product here is defined by
\[
(f, g) = \int_0^\infty \!\!\int_0^1 f(x,t)\, g(x,t)\, w(x)\, \tau(t)\, dx\, dt,
\]
where $w(x)\tau(t)$ is a product weight function. This orthogonalization may be written
\[
\left( L u_{m_x,m_t} - F^*,\; S_{k,l} \right) = 0, \qquad -M_x \leqslant k \leqslant N_x,\ -M_t \leqslant l \leqslant N_t,
\]
in which the homogeneous part of the approximate solution is given by
\[
u_{m_x,m_t}(x,t) = \sum_{i=-M_x}^{N_x} \sum_{j=-M_t}^{N_t} u_{i,j}\, S_i(x) S^*_j(t), \tag{3.2}
\]
and
\[
\begin{aligned}
F^*(x,t) = F(x,t) &- \frac{\partial}{\partial t}\left( p^*(t) X_{-M_x-1}(x) + q^*(t) X_{N_x+1}(x) + \varphi(x)\Theta_{-M_t-1}(t) \right)\\
&+ \frac{\partial^2}{\partial x^2}\left( p^*(t) X_{-M_x-1}(x) + q^*(t) X_{N_x+1}(x) + \varphi(x)\Theta_{-M_t-1}(t) \right)\\
&- H\left( p^*(t) X_{-M_x-1}(x) + q^*(t) X_{N_x+1}(x) + \varphi(x)\Theta_{-M_t-1}(t) \right);
\end{aligned}
\]
for more details, one can refer to [4, p. 244]. An equivalent formulation is to analyze
\[
\left( (u_{m_x,m_t})_t,\; S_{k,l} \right) - \left( (u_{m_x,m_t})_{xx},\; S_{k,l} \right) + \left( H(u_{m_x,m_t}),\; S_{k,l} \right) = \left( F^*,\; S_{k,l} \right). \tag{3.3}
\]
The method of approximating the integrals in (3.3) begins by integrating by parts to transfer all derivatives from $u_{m_x,m_t}$ to $S_{k,l}$. Thus, we have
\[
\left( (u_{m_x,m_t})_t,\; S_{k,l} \right) = \int_0^\infty \!\!\int_0^1 \frac{\partial}{\partial t}\left( u_{m_x,m_t}(x,t) \right) S_k(x) w(x) S^*_l(t) \tau(t)\, dx\, dt
= BT_1 - \int_0^\infty \!\!\int_0^1 u_{m_x,m_t}(x,t)\, \frac{\partial}{\partial t}\left( S^*_l(t)\tau(t) \right) S_k(x) w(x)\, dx\, dt,
\]
and
\[
\left( (u_{m_x,m_t})_{xx},\; S_{k,l} \right) = \int_0^\infty \!\!\int_0^1 \frac{\partial^2}{\partial x^2}\left( u_{m_x,m_t}(x,t) \right) S_k(x) w(x) S^*_l(t) \tau(t)\, dx\, dt
= BT_2 + \int_0^\infty \!\!\int_0^1 u_{m_x,m_t}(x,t)\, \frac{\partial^2}{\partial x^2}\left( S_k(x) w(x) \right) S^*_l(t) \tau(t)\, dx\, dt,
\]
where
\[
BT_1 = \int_0^1 S_k(x) w(x) \left[ u_{m_x,m_t}(x,t)\, S^*_l(t)\tau(t) \right]_{t=0}^{\infty} dx,
\]
and
\[
BT_2 = \int_0^\infty S^*_l(t)\tau(t) \left[ \frac{\partial}{\partial x}\left( u_{m_x,m_t}(x,t) \right) S_k(x) w(x) \right]_{x=0}^{1} dt
- \int_0^\infty S^*_l(t)\tau(t) \left[ u_{m_x,m_t}(x,t)\, \frac{\partial}{\partial x}\left( S_k(x) w(x) \right) \right]_{x=0}^{1} dt.
\]
So, we can write
\[
\begin{aligned}
&\int_0^\infty \!\!\int_0^1 u_{m_x,m_t}(x,t) \left( -\frac{\partial}{\partial t}\left( S^*_l(t)\tau(t) \right) S_k(x)w(x) + \frac{\partial^2}{\partial x^2}\left( S_k(x)w(x) \right) S^*_l(t)\tau(t) \right) dx\, dt\\
&\quad + BT + \int_0^\infty \!\!\int_0^1 H\left( u_{m_x,m_t}(x,t) \right) S_k(x) S^*_l(t) w(x) \tau(t)\, dx\, dt
= \int_0^\infty \!\!\int_0^1 F^*(x,t)\, S_k(x) S^*_l(t) w(x) \tau(t)\, dx\, dt,
\end{aligned}
\]
where $BT = BT_1 + BT_2$. The weight functions $w(x)$ and $\tau(t)$ are chosen as
\[
w(x) = \frac{1}{\sqrt{\phi'(x)}}, \qquad \tau(t) = \sqrt{\Upsilon'(t)}.
\]
These weight functions cause $BT = 0$. To approximate the above double integrals, we use the Sinc quadrature rule given in the following theorem.

Theorem 3.1 ([18,19]). For each fixed $t$, let $G(z,t) \in B(D_E)$ and $h > 0$. Let $\phi$ and $\Upsilon$ be one-to-one conformal maps of the domains $D_E$ and $D_W$ onto $D_S$, respectively. Let $z_i = \phi^{-1}(i h_z)$, $t_j = \Upsilon^{-1}(j h_t)$, and $\Gamma_z = \phi^{-1}(\mathbb{R})$, $\Gamma_t = \Upsilon^{-1}(\mathbb{R})$. Assume there are positive constants $\alpha_z$, $\beta_z$, and $C_z(t)$ such that
\[
\left| \frac{G(z,t)}{\phi'(z)} \right| \leqslant C_z(t)
\begin{cases}
\exp(-\alpha_z |\phi(z)|), & z \in \Gamma_a^{(z)},\\
\exp(-\beta_z |\phi(z)|), & z \in \Gamma_b^{(z)},
\end{cases}
\]
where $\Gamma_a^{(z)} \equiv \{ z \in \Gamma_z : \phi(z) = u \in (-\infty, 0) \}$ and $\Gamma_b^{(z)} \equiv \{ z \in \Gamma_z : \phi(z) = u \in [0, \infty) \}$. Also, for each fixed $z$, let $G(z,t) \in B(D_W)$ and assume there are positive constants $\alpha_t$, $\beta_t$, and $C_t(z)$ such that
\[
\left| \frac{G(z,t)}{\Upsilon'(t)} \right| \leqslant C_t(z)
\begin{cases}
\exp(-\alpha_t |\Upsilon(t)|), & t \in \Gamma_a^{(t)},\\
\exp(-\beta_t |\Upsilon(t)|), & t \in \Gamma_b^{(t)},
\end{cases}
\]
where $\Gamma_a^{(t)} \equiv \{ t \in \Gamma_t : \Upsilon(t) = u \in (-\infty, 0) \}$ and $\Gamma_b^{(t)} \equiv \{ t \in \Gamma_t : \Upsilon(t) = u \in [0, \infty) \}$. Then the Sinc trapezoidal quadrature rule is
\[
\begin{aligned}
\int_{\Gamma_t} \int_{\Gamma_z} G(z,t)\, dz\, dt = {}& h_z h_t \sum_{i=-M_z}^{N_z} \sum_{j=-M_t}^{N_t} \frac{G(z_i, t_j)}{\phi'(z_i)\,\Upsilon'(t_j)}
+ O(\exp(-\alpha_z M_z h_z)) + O(\exp(-\beta_z N_z h_z)) + O(\exp(-2\pi d / h_z))\\
&+ O(\exp(-\alpha_t M_t h_t)) + O(\exp(-\beta_t N_t h_t)) + O(\exp(-2\pi d / h_t)).
\end{aligned}
\]
Hence, make the selections
\[
N_z = \left[\left| \frac{\alpha_z}{\beta_z} M_z + 1 \right|\right], \qquad
M_t = \left[\left| \frac{\alpha_z}{\alpha_t} M_z + 1 \right|\right], \qquad
N_t = \left[\left| \frac{\alpha_z}{\beta_t} M_z + 1 \right|\right],
\]
where $h \equiv h_z = h_t$ and
\[
h = \sqrt{2\pi d / (\alpha_z M_z)};
\]
then the exponential order of the Sinc trapezoidal quadrature rule is $O(\exp(-\sqrt{2\pi d\, \alpha_z M_z}))$.

Applying the Sinc quadrature rule of Theorem 3.1 yields
\[
\begin{aligned}
-\int_0^\infty \!\!\int_0^1 u_{m_x,m_t}(x,t)\, \frac{\partial}{\partial t}\left( S^*_l(t)\tau(t) \right) S_k(x) w(x)\, dx\, dt
&\simeq -h_x h_t \sum_{p=-M_x}^{N_x} \sum_{q=-M_t}^{N_t} \frac{u_{m_x,m_t}(x_p, t_q)\, S_k(x_p) w(x_p)\, \frac{\partial}{\partial t}\left( S^*_l(t)\tau(t) \right)\big|_{t=t_q}}{\phi'(x_p)\,\Upsilon'(t_q)}\\
&\simeq -h_x h_t \sum_{p=-M_x}^{N_x} \sum_{q=-M_t}^{N_t} \frac{u_{p,q}\, S_k(x_p) w(x_p)\, \frac{\partial}{\partial t}\left( S^*_l(t)\tau(t) \right)\big|_{t=t_q}}{\phi'(x_p)\,\Upsilon'(t_q)},
\end{aligned}
\]
where $x_p = \phi^{-1}(p h_x)$ and $t_q = \Upsilon^{-1}(q h_t)$. Also, we have
\[
\int_0^\infty \!\!\int_0^1 u_{m_x,m_t}(x,t)\, \frac{\partial^2}{\partial x^2}\left( S_k(x) w(x) \right) S^*_l(t)\tau(t)\, dx\, dt
\simeq h_x h_t \sum_{p=-M_x}^{N_x} \sum_{q=-M_t}^{N_t} \frac{u_{p,q}\, S^*_l(t_q)\tau(t_q)\, \frac{\partial^2}{\partial x^2}\left( S_k(x) w(x) \right)\big|_{x=x_p}}{\phi'(x_p)\,\Upsilon'(t_q)},
\]
\[
\int_0^\infty \!\!\int_0^1 H\left( u_{m_x,m_t}(x,t) \right) S_k(x) S^*_l(t) w(x)\tau(t)\, dx\, dt
\simeq h_x h_t \sum_{p=-M_x}^{N_x} \sum_{q=-M_t}^{N_t} \frac{H(u_{p,q})\, S_k(x_p) S^*_l(t_q)\, w(x_p)\tau(t_q)}{\phi'(x_p)\,\Upsilon'(t_q)},
\]
and
\[
\int_0^\infty \!\!\int_0^1 F^*(x,t)\, S_k(x) S^*_l(t) w(x)\tau(t)\, dx\, dt
\simeq h_x h_t \sum_{p=-M_x}^{N_x} \sum_{q=-M_t}^{N_t} \frac{F^*(x_p, t_q)\, S_k(x_p) S^*_l(t_q)\, w(x_p)\tau(t_q)}{\phi'(x_p)\,\Upsilon'(t_q)}.
\]
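For intuition, the one-dimensional analogue of the quadrature rule in Theorem 3.1 can be sketched in a few lines of Python (our illustration; the sample integrand $x(1-x)$ decays like $\exp(-2|\phi|)$ near both endpoints, so we take $\alpha_z = \beta_z = 2$, $d = \pi/2$, and $M_z = N_z$):

```python
import math

def psi(w):
    """Inverse conformal map of the strip's center line onto (0,1)."""
    return math.exp(w) / (math.exp(w) + 1.0)

def sinc_quad_01(G, M=32, d=math.pi / 2, alpha=2.0):
    """Sinc trapezoidal rule on (0,1):
    int_0^1 G(x) dx ~ h * sum_k G(x_k)/phi'(x_k),  x_k = psi(k h),
    with 1/phi'(x) = x (1 - x) and h = sqrt(2 pi d / (alpha M))."""
    h = math.sqrt(2.0 * math.pi * d / (alpha * M))
    return h * sum(G(psi(k * h)) * psi(k * h) * (1.0 - psi(k * h))
                   for k in range(-M, M + 1))

# Example: int_0^1 x(1-x) dx = 1/6, reproduced to near machine accuracy.
approx = sinc_quad_01(lambda x: x * (1.0 - x))
```

With $M = 32$ the predicted order $O(\exp(-\sqrt{2\pi d \alpha M}))$ already puts the truncation error far below double-precision round-off.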
The Sinc-Galerkin method requires the derivatives of the Sinc basis functions $S(i,h_x) \circ \phi(x)$ and $S^*(j,h_t) \circ \Upsilon(t)$ evaluated at the Sinc nodes $x = x_k$ and $t = t_k$, respectively. The $p$-th derivative of $S(i,h_x) \circ \phi(x)$ with respect to $\phi$, evaluated at the nodal point $x_k$, is denoted by
\[
\delta^{(p)}_{i,k} \equiv h^p \frac{d^p}{d\phi^p}\left[ S(i,h) \circ \phi(x) \right]\Big|_{x=x_k}. \tag{3.4}
\]
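The quantities $\delta^{(p)}_{i,k}$ have classical closed forms (stated in Theorem 3.2 below), and they fill the Toeplitz matrices $I^{(p)}_m$ used in the discrete system. As a sanity check (ours, not from the paper), the following Python snippet differentiates the composite Sinc numerically in the $\phi$ variable and builds the matrices from the closed forms:

```python
import math

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def delta(p, i, k, h=1.0, eps=1e-5):
    """delta^(p)_{i,k} from (3.4): h^p d^p/dphi^p [Sinc((phi - i h)/h)] at
    phi = k h, approximated here by central differences in phi."""
    f = lambda w: sinc((w - i * h) / h)
    w0 = k * h
    if p == 0:
        return f(w0)
    if p == 1:
        return h * (f(w0 + eps) - f(w0 - eps)) / (2.0 * eps)
    return h * h * (f(w0 + eps) - 2.0 * f(w0) + f(w0 - eps)) / (eps * eps)

def I1(m):
    """Toeplitz matrix I_m^(1): zero diagonal, (-1)^(k-i)/(k-i) off it (skew-symmetric)."""
    return [[0.0 if k == i else (-1.0) ** (k - i) / (k - i)
             for k in range(m)] for i in range(m)]

def I2(m):
    """Toeplitz matrix I_m^(2): -pi^2/3 on the diagonal, -2(-1)^(k-i)/(k-i)^2 off it (symmetric)."""
    return [[-math.pi ** 2 / 3.0 if k == i else
             -2.0 * (-1.0) ** (k - i) / (k - i) ** 2
             for k in range(m)] for i in range(m)]
```

The finite-difference values agree with the closed forms to within the discretization error of the central differences.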
Theorem 3.2 ([4,5]). Let $\phi$ be a conformal one-to-one map of the simply connected domain $D_E$ onto $D_S$. Then
\[
\delta^{(0)}_{i,k} = \left[ S(i,h) \circ \phi(x) \right]\Big|_{x=x_k} =
\begin{cases}
1, & k = i,\\
0, & k \neq i,
\end{cases}
\]
\[
\delta^{(1)}_{i,k} = h \frac{d}{d\phi}\left[ S(i,h) \circ \phi(x) \right]\Big|_{x=x_k} =
\begin{cases}
0, & k = i,\\[1mm]
\dfrac{(-1)^{k-i}}{k-i}, & k \neq i,
\end{cases}
\]
and
\[
\delta^{(2)}_{i,k} = h^2 \frac{d^2}{d\phi^2}\left[ S(i,h) \circ \phi(x) \right]\Big|_{x=x_k} =
\begin{cases}
\dfrac{-\pi^2}{3}, & k = i,\\[2mm]
\dfrac{-2(-1)^{k-i}}{(k-i)^2}, & k \neq i.
\end{cases}
\]

Proof. See [4,5]. □

We note that the analogous formula to (3.4) and the analogous theorem hold for $S^*(j,h_t) \circ \Upsilon(t)$.

Define the $m \times m$ matrices $I^{(p)}_m$ for $0 \leqslant p \leqslant 2$ by
\[
I^{(0)}_m = \left[ \delta^{(0)}_{i,k} \right] =
\begin{pmatrix}
1 & \cdots & 0\\
\vdots & \ddots & \vdots\\
0 & \cdots & 1
\end{pmatrix} = I,
\]
\[
I^{(1)}_m = \left[ \delta^{(1)}_{i,k} \right] =
\begin{pmatrix}
0 & -1 & \tfrac{1}{2} & \cdots & \tfrac{(-1)^{m-1}}{m-1}\\
1 & 0 & -1 & & \vdots\\
-\tfrac{1}{2} & 1 & 0 & \ddots & \tfrac{1}{2}\\
\vdots & & \ddots & \ddots & -1\\
-\tfrac{(-1)^{m-1}}{m-1} & \cdots & -\tfrac{1}{2} & 1 & 0
\end{pmatrix},
\]
and
\[
I^{(2)}_m = \left[ \delta^{(2)}_{i,k} \right] =
\begin{pmatrix}
\tfrac{-\pi^2}{3} & 2 & \tfrac{-2}{2^2} & \cdots & \tfrac{-2(-1)^{m-1}}{(m-1)^2}\\
2 & \tfrac{-\pi^2}{3} & 2 & & \vdots\\
\tfrac{-2}{2^2} & 2 & \tfrac{-\pi^2}{3} & \ddots & \tfrac{-2}{2^2}\\
\vdots & & \ddots & \ddots & 2\\
\tfrac{-2(-1)^{m-1}}{(m-1)^2} & \cdots & \tfrac{-2}{2^2} & 2 & \tfrac{-\pi^2}{3}
\end{pmatrix}.
\]
The above matrices are $m \times m$ Toeplitz matrices (see [3]). Then the discrete system can be represented in the matrix form
\[
A_x V + V B_t^T + H = G, \tag{3.5}
\]
where
\[
V = D(w)\, \bar{U}\, D^*\!\left( \frac{\tau}{\sqrt{\Upsilon'}} \right),
\]
\[
H = D(w)\, \bar{H}\, D^*\!\left( \frac{\tau}{\sqrt{\Upsilon'}} \right), \qquad
G = D(w)\, \bar{F}\, D^*\!\left( \frac{\tau}{\sqrt{\Upsilon'}} \right),
\]
\[
A_x = D\!\left( \sqrt{\phi'(x)} \right) \left[ -\frac{1}{h_x^2}\, I^{(2)}_{m_x} + D\!\left( \frac{\left( 1/\sqrt{\phi'} \right)''}{(\phi')^{3/2}} \right) \right] D\!\left( \sqrt{\phi'(x)} \right),
\]
\[
B_t = D^*\!\left( \sqrt{\Upsilon'(t)} \right) \left[ -\frac{1}{h_t}\, I^{(1)}_{m_t} - D^*\!\left( \frac{-\tau'}{\tau\, \Upsilon'(t)} \right) \right] D^*\!\left( \sqrt{\Upsilon'(t)} \right),
\]
and
\[
D(g) =
\begin{pmatrix}
g(x_{-M_x}) & & 0\\
& \ddots & \\
0 & & g(x_{N_x})
\end{pmatrix}_{m_x \times m_x}, \qquad
D^*(g) =
\begin{pmatrix}
g(t_{-M_t}) & & 0\\
& \ddots & \\
0 & & g(t_{N_t})
\end{pmatrix}_{m_t \times m_t},
\]
\[
\bar{U} = \left( u_{i,j} \right)_{m_x \times m_t},
\]
\[
\bar{H} = \left( H(u_{i,j}) \right)_{m_x \times m_t}, \qquad
\bar{F} = \left( F^*(x_i, t_j) \right)_{m_x \times m_t}.
\]
Now we have a nonlinear system of $m_x \times m_t$ equations in the $m_x \times m_t$ unknown coefficients $u_{i,j}$. These coefficients are obtained by using Newton's method or any of several other methods, such as the conjugate gradient method, genetic algorithms, Steffensen's methods and so on [4,20,21].

3.2. The nonlinear inverse parabolic problem

In this subsection, a computational algorithm is provided to find the unknown function $p(t)$ of problem (1.1).

Algorithm: Identification of the unknown function $p(t)$

Step 1. Let
\[
\bar{p}(t) \simeq \sum_{i=1}^{n} p_i \,\mathrm{Sinc}\!\left(\frac{t - ih}{h}\right)
\]
be an approximation of the unknown function $p(t)$, where $h$ is the time step size and the $p_i$'s are unknown parameters.

Step 2. Using the Sinc-Galerkin solution (3.2), obtain an approximate solution for $u[x,t;\bar{p}]$. In this case, when we solve the nonlinear system of Eqs. (3.5), the unknown coefficients $u_{i,j}$ are obtained in terms of the $p_i$'s. In other words, we have
\[
u_{m_x,m_t}[x,t;\bar{p}] = \sum_{i=-M_x}^{N_x} \sum_{j=-M_t}^{N_t} u_{i,j}(p_1, p_2, \ldots, p_n)\, S_i(x) S^*_j(t).
\]

Step 3. Obtain the $n$ unknown parameters $p_i$ by minimizing the least squares norm
\[
S(p) = \sum_{i=1}^{I} \left( u_{m_x,m_t}[x^*, t_i; \bar{p}] - E(t_i) \right)^2. \tag{3.6}
\]
Since the resulting system of algebraic equations is ill-conditioned, the Levenberg–Marquardt method, described in Step 4, is used.
Step 4. Levenberg–Marquardt regularization [17]. Suppose that
\[
U_{m_x,m_t}(p) = [U_1, U_2, \ldots, U_I]^T, \qquad
E = [E_1, E_2, \ldots, E_I]^T, \qquad
p = [p_1, p_2, \ldots, p_n]^T,
\]
where $E_i = E(t_i)$ and $U_i = u_{m_x,m_t}[x^*, t_i; \bar{p}]$, $i = 1, 2, \ldots, I$. Then the matrix form of the functional (3.6) is given by
\[
S(p) = \left[ E - U_{m_x,m_t}(p) \right]^T \left[ E - U_{m_x,m_t}(p) \right],
\]
in which
\[
\left[ E - U_{m_x,m_t}(p) \right]^T \equiv \left[ E_1 - U_1,\; E_2 - U_2,\; \ldots,\; E_I - U_I \right].
\]
The superscript $T$ denotes the transpose and $I$ is the total number of measurements. To minimize the least squares norm, the derivatives of $S(p)$ with respect to each of the unknown parameters $\{p_i\}_{i=1}^{n}$ are equated to zero, that is,
\[
\frac{\partial S(p)}{\partial p_1} = \frac{\partial S(p)}{\partial p_2} = \cdots = \frac{\partial S(p)}{\partial p_n} = 0,
\]
or, in matrix form,
\[
\nabla S(p) = 2 \left[ -\frac{\partial U^T_{m_x,m_t}(p)}{\partial p} \right] \left[ E - U_{m_x,m_t}(p) \right] = 0,
\]
where
\[
\frac{\partial U^T_{m_x,m_t}(p)}{\partial p} =
\begin{bmatrix}
\partial/\partial p_1\\
\partial/\partial p_2\\
\vdots\\
\partial/\partial p_n
\end{bmatrix}
\left[ U_1 \;\; U_2 \;\; \cdots \;\; U_I \right].
\]
Hence, the sensitivity matrix can be written in the form [17]
\[
J(p) = \left[ \frac{\partial U^T_{m_x,m_t}(p)}{\partial p} \right]^T =
\begin{bmatrix}
\dfrac{\partial U_1}{\partial p_1} & \dfrac{\partial U_1}{\partial p_2} & \cdots & \dfrac{\partial U_1}{\partial p_n}\\[2mm]
\dfrac{\partial U_2}{\partial p_1} & \dfrac{\partial U_2}{\partial p_2} & \cdots & \dfrac{\partial U_2}{\partial p_n}\\[1mm]
\vdots & \vdots & & \vdots\\[1mm]
\dfrac{\partial U_I}{\partial p_1} & \dfrac{\partial U_I}{\partial p_2} & \cdots & \dfrac{\partial U_I}{\partial p_n}
\end{bmatrix}. \tag{3.7}
\]
Remark 3.3. We note that [17], when the sensitivity coefficients $(J(p))_{i,j} = \partial U_i / \partial p_j$ are small, we have $\left| (J(p))^T J(p) \right| \approx 0$ and the inverse problem is ill-conditioned. It can also be shown that $\left| (J(p))^T J(p) \right|$ is null if any column of $J(p)$ can be expressed as a linear combination of the other columns [17].

Now, the computational algorithm for the Levenberg–Marquardt regularization is provided as follows [17]. Suppose an initial guess $p^{(0)}$ for the vector of unknown coefficients $p$ is available.

1. Set an arbitrary regularization parameter $\mu_0$ (for example $\mu_0 = 0.001$) and $k = 0$.
2. Compute $U_{m_x,m_t}(p^{(0)})$ and $S(p^{(0)})$.
3. Compute the sensitivity matrix $J^k$ defined by (3.7) and then $\Omega^k = \mathrm{diag}\left[ (J^k)^T J^k \right]$, using the current values $p^{(k)}$.
4. Solve the linear system of algebraic equations
\[
\left[ (J^k)^T J^k + \mu_k \Omega^k \right] \Delta p^k = (J^k)^T \left[ E - U_{m_x,m_t}(p^{(k)}) \right]
\]
in order to compute $\Delta p^k = p^{k+1} - p^k$.
Fig. 1. The approximate solution $\bar{p}(t)$ with $h_x = \frac{\pi}{4}$, $h_t = \frac{\pi}{2}$, $h = \frac{\pi}{4}$, $M_x = N_x = 16$, $M_t = N_t = 4$ and $H(u) = u^2$.
Table 1
The $L_1$-norm error of the introduced method for $M_x = N_x = M_t = N_t = 4$ and $H(u) = u^2$.

| $t$ | 1 | 4.1415 | 10.4248 | 19.8496 | 22.9911 | 27.7035 | 32.4159 | 37.1283 | 41.8407 | 48.1239 |
| $\|p(t) - \bar{p}(t)\|_1$ | 0.15239 | 0.20502 | 0.04755 | 0.01921 | 0.00060 | 0.01034 | 0.00293 | 0.00390 | 0.00456 | 0.00246 |

Table 2
The $L_1$-norm error of the introduced method for $M_x = N_x = 8$, $M_t = N_t = 4$ and $H(u) = u^2$.

| $t$ | 1 | 4.1415 | 10.4248 | 19.8496 | 22.9911 | 27.7035 | 32.4159 | 37.1283 | 41.8407 | 48.1239 |
| $\|p(t) - \bar{p}(t)\|_1$ | 0.17440 | 0.19443 | 0.03191 | 0.00316 | 0.00788 | 0.00826 | 0.01176 | 0.00322 | 0.00638 | 0.00119 |
5. Compute $p^{k+1} = \Delta p^k + p^k$.
6. If $S(p^{k+1}) \geqslant S(p^k)$, replace $\mu_k$ by $10\mu_k$ and go to 4.
7. If $S(p^{k+1}) < S(p^k)$, accept $p^{k+1}$ and set $\mu_{k+1} = 0.1\mu_k$.
8. Assume that a tolerance tol is given. If $\left\| p^{k+1} - p^k \right\| \leqslant$ tol, then an acceptable approximation is obtained. Otherwise, replace $k$ by $k+1$ and go to 3.

In the last section, the application of the proposed approach to solve problem (1.1) is illustrated by three examples.

4. Numerical results

In this section, three numerical examples are given to validate the introduced method. In these examples, the numerical results are listed for different values of $h_x$, $h_t$, $h$, $M_x$, $M_t$, $N_x$, $N_t$ and for $0 \leqslant t \leqslant 50$. Also, in order to solve the nonlinear system of equations (3.5), we apply Newton's method.

Example 4.1. Consider the nonlinear inverse parabolic problem of the form
\[
\begin{cases}
u_t - u_{xx} + u^2 = F(x,t), & 0 < x < 1,\ t > 0,\\
u(x,0) = 0, & 0 \leqslant x \leqslant 1,\\
u(0,t) = p(t), & t \geqslant 0,\\
u(1,t) = 0.9\, t e^{-t} \sin(1), & t \geqslant 0,
\end{cases}
\]
where
\[
F(x,t) = -2te^{-t}\cos(x) + e^{-t}(x - 0.1)\sin(x)\left( 1 + e^{-t} t^2 (x - 0.1)\sin(x) \right)
\]
and $p(t)$ is unknown. We note that the exact solutions are $u(x,t) = (x - 0.1)te^{-t}\sin(x)$ and $p(t) = 0$. In this example, the overposed measured data is taken as $u(0.1, t) = 0$, and $p(t)$ is approximated by
\[
p(t) \approx \bar{p}(t) = p_1 \,\mathrm{Sinc}\!\left(\frac{t - h}{h}\right) + p_2 \,\mathrm{Sinc}\!\left(\frac{t - 2h}{h}\right). \tag{4.1}
\]
The numerical results are listed in Table 1 for $h_x = h_t = h = \frac{\pi}{2}$, in Table 2 for $h_x = \frac{\pi}{2\sqrt{2}}$, $h_t = \frac{\pi}{2}$, $h = \frac{\pi}{2\sqrt{2}}$, and in Table 3 for $h_x = \frac{\pi}{4}$, $h_t = \frac{\pi}{2}$, $h = \frac{\pi}{4}$. As we observe, the results show the efficiency and accuracy of the method. Also, Fig. 1 shows the approximate solution $\bar{p}(t)$.
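The Levenberg–Marquardt loop of Section 3.2 (Steps 1–8) can be sketched on a small synthetic least-squares fit. The two-parameter exponential model below is our own hypothetical stand-in for the Sinc-Galerkin solution $U(p)$, and `lm_fit` is an assumed name; the sensitivity matrix (3.7) is approximated by forward differences:

```python
import math

def lm_fit(model, p0, ts, es, mu=1e-3, tol=1e-9, max_iter=200):
    """Levenberg-Marquardt for a 2-parameter model, following Steps 1-8:
    solve (J^T J + mu * diag(J^T J)) dp = J^T (E - U), inflating mu on
    rejected steps (Step 6) and deflating it on accepted ones (Step 7)."""
    p = list(p0)

    def S(p):
        return sum((e - model(p, t)) ** 2 for t, e in zip(ts, es))

    for _ in range(max_iter):
        r = [e - model(p, t) for t, e in zip(ts, es)]
        eps = 1e-7  # forward-difference step for the sensitivity matrix (3.7)
        J = [[(model([p[0] + (j == 0) * eps, p[1] + (j == 1) * eps], t)
               - model(p, t)) / eps for j in (0, 1)] for t in ts]
        A = [[sum(Ji[a] * Ji[b] for Ji in J) for b in (0, 1)] for a in (0, 1)]
        g = [sum(Ji[a] * ri for Ji, ri in zip(J, r)) for a in (0, 1)]
        accepted = False
        for _ in range(60):          # Steps 4-6: inflate mu until S decreases
            M00, M11 = A[0][0] * (1.0 + mu), A[1][1] * (1.0 + mu)
            det = M00 * M11 - A[0][1] * A[1][0]
            dp = [(g[0] * M11 - A[0][1] * g[1]) / det,
                  (g[1] * M00 - A[1][0] * g[0]) / det]
            cand = [p[0] + dp[0], p[1] + dp[1]]
            if S(cand) < S(p):
                p, mu, accepted = cand, 0.1 * mu, True   # Step 7
                break
            mu *= 10.0                                   # Step 6
        if not accepted or max(abs(d) for d in dp) <= tol:   # Step 8
            return p
    return p

# Usage on noiseless synthetic data generated with parameters (2.0, 0.7):
def model(p, t):
    return p[0] * math.exp(-p[1] * t)

ts = [0.25 * i for i in range(1, 13)]
es = [model([2.0, 0.7], t) for t in ts]
fitted = lm_fit(model, [1.5, 0.5], ts, es)
```

Note that $J^T J + \mu\,\mathrm{diag}(J^T J)$ is implemented by scaling the diagonal entries of $J^T J$ by $1 + \mu$, exactly the damping of Step 4.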
Table 3
The $L_1$-norm error of the introduced method for $M_x = N_x = 16$, $M_t = N_t = 4$ and $H(u) = u^2$.

| $t$ | 1 | 4.1415 | 10.4248 | 19.8496 | 22.9911 | 27.7035 | 32.4159 | 37.1283 | 41.8407 | 48.1239 |
| $\|p(t) - \bar{p}(t)\|_1$ | 0.08626 | 0.00097 | 0.00991 | 0.00424 | 0.00381 | 0.00334 | 0.00278 | 0.00218 | 0.00156 | 0.00109 |

Table 4
The $L_1$-norm error of the introduced method for $M_x = N_x = M_t = N_t = 4$ and $H(u) = \frac{1}{1+u^2}$.

| $t$ | 1 | 4.1415 | 10.4248 | 19.8496 | 22.9911 | 27.7035 | 32.4159 | 37.1283 | 41.8407 | 48.1239 |
| $\|p(t) - \bar{p}(t)\|_1$ | 0.42572 | 0.70087 | 0.06571 | 0.00311 | 0.07001 | 0.03154 | 0.04958 | 0.04253 | 0.03651 | 0.00209 |

Table 5
The $L_1$-norm error of the introduced method for $M_x = N_x = 8$, $M_t = N_t = 4$ and $H(u) = \frac{1}{1+u^2}$.

| $t$ | 1 | 4.1415 | 10.4248 | 19.8496 | 22.9911 | 27.7035 | 32.4159 | 37.1283 | 41.8407 | 48.1239 |
| $\|p(t) - \bar{p}(t)\|_1$ | 0.44657 | 0.77071 | 0.10003 | 0.06335 | 0.05201 | 0.05146 | 0.00761 | 0.03009 | 0.01810 | 0.00158 |

Table 6
The $L_1$-norm error of the introduced method for $M_x = N_x = 16$, $M_t = N_t = 4$ and $H(u) = \frac{1}{1+u^2}$.

| $t$ | 1 | 4.1415 | 10.4248 | 19.8496 | 22.9911 | 27.7035 | 32.4159 | 37.1283 | 41.8407 | 48.1239 |
| $\|p(t) - \bar{p}(t)\|_1$ | 0.13101 | 0.08657 | 0.00116 | 0.00254 | 0.00122 | 0.00045 | 0.00043 | 0.00111 | 0.00108 | 0.00025 |

Fig. 2. The approximate solution $\bar{p}(t)$ with $h_x = \frac{\pi}{4}$, $h_t = \frac{\pi}{2}$, $h = \frac{\pi}{4}$, $M_x = N_x = 16$, $M_t = N_t = 4$ and $H(u) = \frac{1}{1+u^2}$.
Example 4.2. For the second example, we consider the problem
\[
\begin{cases}
u_t - u_{xx} + \dfrac{1}{1+u^2} = F(x,t), & 0 < x < 1,\ t > 0,\\
u(x,0) = 0, & 0 \leqslant x \leqslant 1,\\
u(0,t) = p(t), & t \geqslant 0,\\
u(1,t) = 0.9\, t e^{-t} \sin(1), & t \geqslant 0,
\end{cases}
\]
where
\[
F(x,t) = -2te^{-t}\cos(x) + e^{-t}(x - 0.1)\sin(x) + \frac{1}{1 + e^{-2t} t^2 (x - 0.1)^2 \sin^2(x)}
\]
and $p(t)$ is unknown. The exact solutions are $u(x,t) = (x - 0.1)te^{-t}\sin(x)$ and $p(t) = 0$. Again, the overposed measured data is taken as $u(0.1, t) = 0$, and $p(t)$ is approximated by (4.1). The numerical results are listed in Table 4 for $h_x = h_t = h = \frac{\pi}{2}$, in Table 5 for $h_x = \frac{\pi}{2\sqrt{2}}$, $h_t = \frac{\pi}{2}$, $h = \frac{\pi}{2\sqrt{2}}$, and in Table 6 for $h_x = \frac{\pi}{4}$, $h_t = \frac{\pi}{2}$, $h = \frac{\pi}{4}$. As we observe, the results show the validity and accuracy of the method. Also, Fig. 2 shows the approximate solution $\bar{p}(t)$.
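As a quick consistency check (ours, not part of the paper), the stated exact solution of Example 4.2 can be verified against $F(x,t)$ by finite differences:

```python
import math

# Exact solution and source term of Example 4.2 (H(u) = 1/(1+u^2)).
def u(x, t):
    return (x - 0.1) * t * math.exp(-t) * math.sin(x)

def F(x, t):
    return (-2.0 * t * math.exp(-t) * math.cos(x)
            + math.exp(-t) * (x - 0.1) * math.sin(x)
            + 1.0 / (1.0 + u(x, t) ** 2))

def residual(x, t, e=1e-4):
    """u_t - u_xx + 1/(1+u^2) - F, with the derivatives taken by central
    differences; this should vanish up to the O(e^2) discretization error."""
    u_t = (u(x, t + e) - u(x, t - e)) / (2.0 * e)
    u_xx = (u(x + e, t) - 2.0 * u(x, t) + u(x - e, t)) / (e * e)
    return u_t - u_xx + 1.0 / (1.0 + u(x, t) ** 2) - F(x, t)
```

The same check applies, with the obvious change of $H$, to Examples 4.1 and 4.3, since all three share the exact solution $u(x,t) = (x - 0.1)te^{-t}\sin(x)$; note in particular that $u(0.1, t) = 0$, which is the overposed data.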
Fig. 3. The approximate solution $\bar{p}(t)$ with $h_x = \frac{\pi}{4}$, $h_t = \frac{\pi}{2}$, $h = \frac{\pi}{4}$, $M_x = N_x = 16$, $M_t = N_t = 4$ and $H(u) = \sin(u)$.
Table 7
The $L_1$-norm error of the introduced method for $M_x = N_x = M_t = N_t = 4$ and $H(u) = \sin(u)$.

| $t$ | 1 | 4.1415 | 10.4248 | 19.8496 | 22.9911 | 27.7035 | 32.4159 | 37.1283 | 41.8407 | 48.1239 |
| $\|p(t) - \bar{p}(t)\|_1$ | 0.14522 | 0.15112 | 0.03212 | 0.01584 | 0.00431 | 0.00220 | 0.00063 | 0.00047 | 0.00131 | 0.00317 |

Table 8
The $L_1$-norm error of the introduced method for $M_x = N_x = 8$, $M_t = N_t = 4$ and $H(u) = \sin(u)$.

| $t$ | 1 | 4.1415 | 10.4248 | 19.8496 | 22.9911 | 27.7035 | 32.4159 | 37.1283 | 41.8407 | 48.1239 |
| $\|p(t) - \bar{p}(t)\|_1$ | 0.16707 | 0.16232 | 0.03149 | 1.88 × 10⁻⁶ | 0.00451 | 0.00826 | 0.00952 | 0.00130 | 0.00622 | 0.00106 |

Table 9
The $L_1$-norm error of the introduced method for $M_x = N_x = 16$, $M_t = N_t = 4$ and $H(u) = \sin(u)$.

| $t$ | 1 | 4.1415 | 10.4248 | 19.8496 | 22.9911 | 27.7035 | 32.4159 | 37.1283 | 41.8407 | 48.1239 |
| $\|p(t) - \bar{p}(t)\|_1$ | 0.04484 | 0.01749 | 0.00242 | 0.00075 | 0.00169 | 0.00059 | 0.00011 | 0.00038 | 0.00065 | 0.00109 |
Example 4.3. Consider the problem
\[
\begin{cases}
u_t - u_{xx} + \sin(u) = F(x,t), & 0 < x < 1,\ t > 0,\\
u(x,0) = 0, & 0 \leqslant x \leqslant 1,\\
u(0,t) = p(t), & t \geqslant 0,\\
u(1,t) = 0.9\, t e^{-t} \sin(1), & t \geqslant 0,
\end{cases}
\]
where
\[
F(x,t) = -2te^{-t}\cos(x) + e^{-t}(x - 0.1)\sin(x) + \sin\!\left( te^{-t}(x - 0.1)\sin(x) \right)
\]
and $p(t)$ is unknown. The exact solutions are $u(x,t) = (x - 0.1)te^{-t}\sin(x)$ and $p(t) = 0$. The overposed measured data is taken as $u(0.1, t) = 0$, and $p(t)$ is approximated by (4.1). The numerical results are listed in Table 7 for $h_x = h_t = h = \frac{\pi}{2}$, in Table 8 for $h_x = \frac{\pi}{2\sqrt{2}}$, $h_t = \frac{\pi}{2}$, $h = \frac{\pi}{2\sqrt{2}}$, and in Table 9 for $h_x = \frac{\pi}{4}$, $h_t = \frac{\pi}{2}$, $h = \frac{\pi}{4}$. As we observe, the results show the accuracy of the method. Also, Fig. 3 shows the approximate solution $\bar{p}(t)$.

References

[1] J.L. Mueller, T.S. Shores, New Sinc-Galerkin method for convection–diffusion equations with mixed boundary conditions, Comput. Math. Appl. 47 (2004) 803–822.
[2] M. El-Gamel, A.I. Zayed, Sinc-Galerkin method for solving nonlinear boundary-value problems, Comput. Math. Appl. 48 (2004) 1285–1298.
[3] J. Rashidinia, M. Nabati, Sinc-Galerkin and Sinc-Collocation methods in the solution of nonlinear two-point boundary value problems, Comput. Appl. Math. 32 (2013) 315–330.
[4] J. Lund, K. Bowers, Sinc Methods for Quadrature and Differential Equations, SIAM, Philadelphia, 1992.
[5] F. Stenger, Numerical Methods Based on Sinc and Analytic Functions, Springer-Verlag, New York, 1993.
[6] J. Lund, C. Vogel, A fully-Galerkin method for the numerical solution of an inverse problem in a parabolic partial differential equation, Inverse Problems 6 (1990) 205–217.
[7] R. Smith, K. Bowers, Sinc-Galerkin estimation of diffusivity in parabolic problems, Inverse Problems 9 (1993) 113–135.
[8] J. Mueller, T. Shores, Uniqueness and numerical recovery of a potential on the real line, Inverse Problems 13 (1997) 289–303.
[9] M. Dehghan, F. Emami-Naeini, The Sinc-collocation and Sinc-Galerkin methods for solving the two-dimensional Schrödinger equation with nonhomogeneous boundary conditions, Appl. Math. Model. 37 (2013) 9379–9397.
[10] A. Saadatmandi, M. Dehghan, The use of Sinc-collocation method for solving multi-point boundary value problems, Commun. Nonlinear Sci. Numer. Simul. 17 (2012) 593–601.
[11] R. Agarwal, C. Akrivis, Boundary value problems occurring in plate deflection theory, J. Comput. Appl. Math. 8 (1982) 145–154.
[12] P. Kelevedjiev, Existence of positive solutions to a singular second order boundary value problem, Nonlinear Anal. 50 (2002) 1107–1118.
[13] Y. Liu, H. Yu, Existence and uniqueness of positive solution for singular boundary value problem, Comput. Math. Appl. 50 (2005) 133–143.
[14] M. Badiale, E. Serra, Semilinear Elliptic Equations for Beginners, Springer, 2011.
[15] M. Hanke, The regularizing Levenberg–Marquardt scheme is of optimal order, J. Integral Equations Appl. 22 (2010) 259–283.
[16] C.T. Kelley, Iterative Methods for Optimization, SIAM, Philadelphia, 1999.
[17] M.N. Ozisik, H.R.B. Orlande, Inverse Heat Transfer: Fundamentals and Applications, Taylor & Francis, New York, 2000.
[18] S. Koonprasert, K.L. Bowers, The fully Sinc-Galerkin method for time-dependent boundary conditions, Numer. Methods Partial Differential Equations 20 (2004) 494–526.
[19] A. Shidfar, A. Babaei, The Sinc-Galerkin method for solving an inverse parabolic problem with unknown source term, Numer. Methods Partial Differential Equations 29 (2013) 64–78.
[20] J.M. Ortega, W.C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.
[21] P.Y. Nie, A null space method for solving system of equations, Appl. Math. Comput. 149 (2004) 215–226.