High order numerical derivatives for one-dimensional scattered noisy data


Applied Mathematics and Computation 175 (2006) 1744–1759

www.elsevier.com/locate/amc

T. Wei*, M. Li

School of Mathematics and Statistics, Lanzhou University, Lanzhou 730000, People's Republic of China

* Corresponding author. E-mail addresses: [email protected] (T. Wei), [email protected] (M. Li). The authors are supported by the National Natural Science Foundation of China (No. 10571079). doi:10.1016/j.amc.2005.09.018

Abstract

Based on the smoothing spline approximation, we propose in this paper a regularization method for computing high order numerical derivatives from one-dimensional noisy data. Convergence rates under two different choices of the regularization parameter are obtained. Numerical examples show that the proposed method is effective and stable.

© 2005 Elsevier Inc. All rights reserved.

Keywords: Numerical derivatives; Radial basis function; Tikhonov regularization

1. Introduction

Numerical differentiation for scattered noisy data is an important problem in scientific research and practical applications; for example, problems in image processing [4] and in solving Volterra integral equations [5] require the computation of numerical derivatives. The problem of numerical differentiation is well known to be ill-posed in the sense that small errors in the measured values at specified points may induce large errors in the computed derivative, so the key task is to develop a stable algorithm. A number of computational methods have been investigated in past years [2,3,6,10,11]. They fall mainly into two categories: difference methods [2,6,10] and Tikhonov regularization methods [3,7,11,14]. The finite difference method is simple and effective for precise data; for noisy data, however, the stepsize acts as a regularization parameter depending on the noise level, so the admissible sets of points are restricted.

Tikhonov regularization methods play an important role in the effective solution of ill-posed problems [12]. Most regularization procedures for numerical differentiation minimize a smoothing functional to obtain an approximate function; once the regularization parameter is chosen, the approximations to the unknown function and its derivatives follow. Unfortunately, determining a minimizer of the functional and an optimal regularization parameter are generally non-trivial tasks. Recently, Wang et al. [14] gave an effective method for constructing numerical derivatives from one-dimensional noisy data; based on an a priori choice strategy for the regularization parameter, they derived a convergence estimate for the numerical approximation.

Unlike [14], where a semi-norm on a finite interval is used, we use a semi-norm on the whole space as the stabilizing functional in the regularization process. This modification allows us to represent the minimizer of the smoothing functional as a linear combination of high order splines and polynomials, a representation that is particularly helpful in treating high order numerical derivatives. This kind of regularization approach has been used in [8,13], but there the regularization parameters are chosen by the cross-validation or generalized cross-validation method. In this paper, we use an a priori choice rule and Morozov's discrepancy principle to choose the regularization parameter. Based on the work of Powell [9], we deduce convergence results for the numerical approximation of high order derivatives in one dimension. Numerical verification of the proposed method indicates that the a priori choice of the regularization parameter works well for data with a low noise level; when the noise level is high, Morozov's discrepancy principle gives much better numerical results.

2. Formulation and solution of the problem

In this paper, let $k \ge 2$ be a fixed integer. Let $y = y(x)$ be a function on $\mathbb{R}$ and let $\{a = x_1 < x_2 < \cdots < x_n = b\}$ be a set of distinct points on $[a, b]$ with $n \ge k$. Denote the stepsizes by $h_i = x_i - x_{i-1}$ for $i = 2, \ldots, n$, and let

$$h = \max_{2 \le i \le n} h_i \tag{2.1}$$

be the maximum stepsize.

Denote $h_{\min} = \min_{2 \le i \le n} h_i$ and suppose that the following condition is satisfied:

$$\frac{h}{h_{\min}} \le c, \tag{2.2}$$

where $c$ is a positive constant independent of the selection of points. In this paper we consider the following problem: given noisy measurements $\tilde{y}_i$ of the values $y(x_i)$ satisfying

$$\frac{1}{n}\sum_{i=1}^{n}\left(\tilde{y}_i - y(x_i)\right)^2 \le \delta^2, \tag{2.3}$$

where $\delta$ is called the noise level, find an approximate function $f^*(x)$ such that its $j$th order derivative $f^{*(j)}(x)$ approximates the exact derivative $y^{(j)}(x)$, where $1 \le j \le k-1$ is a positive integer.

Define $X^k = \{f \mid f \in C^{k-1}(\mathbb{R}),\ f^{(k)} \in L^2(\mathbb{R})\}$ and consider the smoothing functional on $X^k$

$$\Phi_k(f) = \frac{1}{n}\sum_{i=1}^{n}\left(f(x_i) - \tilde{y}_i\right)^2 + \alpha\|f^{(k)}\|^2_{L^2(\mathbb{R})}, \tag{2.4}$$

where $\alpha$ is a regularization parameter in the Tikhonov sense. We consider the following two problems. (1) How can one find $f_\alpha \in X^k$ such that $\Phi_k(f_\alpha) \le \Phi_k(f)$ for all $f \in X^k$? (2) Once such an $f_\alpha$ has been obtained, how should the regularization parameter $\alpha^*$ be chosen so that $f_{\alpha^*}^{(j)}(x)$ ($j = 1, \ldots, k-1$) converges to the exact derivative $y^{(j)}(x)$ with a high convergence rate?

For the first problem, a simple proof based on elementary analysis is given in Theorem 1 below; a different constructive approach based on reproducing kernel spaces was given in [8]. For the second problem, convergence estimates are given in Theorems 4 and 6 for the choices $\alpha = \delta^2$ and $\alpha$ determined by Morozov's discrepancy principle.

Theorem 1. Denote

$$f_\alpha(x) = \sum_{j=1}^{n} c_j|x - x_j|^{2k-1} + \sum_{j=1}^{k} d_j x^{j-1}, \tag{2.5}$$

where the coefficients $\{c_j\}_{j=1}^{n}$ and $\{d_j\}_{j=1}^{k}$ satisfy

$$f_\alpha(x_i) + 2(2k-1)!(-1)^k \alpha n c_i = \tilde{y}_i, \quad i = 1, 2, \ldots, n, \tag{2.6}$$

$$\sum_{j=1}^{n} c_j x_j^i = 0, \quad i = 0, \ldots, k-1. \tag{2.7}$$

Then $f_\alpha$ is the unique minimizer of the smoothing functional (2.4), i.e. $\Phi_k(f_\alpha) = \min_{f \in X^k}\Phi_k(f) \le \Phi_k(y)$.
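Remark (implementation sketch). The representation (2.5) is straightforward to evaluate once the coefficients are known. The following minimal sketch in Python/NumPy (our own code; the function names are hypothetical, not from the paper) evaluates $f_\alpha$ and its $j$th derivative, using the elementary identity $\frac{d^j}{dx^j}|x - x_l|^{2k-1} = \frac{(2k-1)!}{(2k-1-j)!}|x - x_l|^{2k-1-j}(\operatorname{sign}(x - x_l))^j$ for $1 \le j \le k$, which for $j = k$ reduces to (2.8) in the proof below.

```python
import numpy as np
from math import factorial

def eval_spline(x, nodes, c, d):
    """Evaluate f_alpha(x) from (2.5) at the points x.

    nodes : data points x_1 < ... < x_n;  c, d : coefficient vectors (lengths n, k).
    """
    k = len(d)
    diff = x[:, None] - nodes[None, :]                     # shape (m, n)
    spline_part = np.abs(diff) ** (2 * k - 1) @ c          # sum_j c_j |x - x_j|^(2k-1)
    poly_part = sum(d[j] * x ** j for j in range(k))       # polynomial tail of (2.5)
    return spline_part + poly_part

def eval_spline_deriv(x, nodes, c, d, j):
    """Evaluate the j-th derivative of f_alpha, for 1 <= j <= k."""
    k = len(d)
    diff = x[:, None] - nodes[None, :]
    coef = factorial(2 * k - 1) // factorial(2 * k - 1 - j)
    spline_part = coef * ((np.abs(diff) ** (2 * k - 1 - j)
                           * np.sign(diff) ** j) @ c)
    # j-th derivative of the polynomial tail: sum_{i >= j} d_i * i!/(i-j)! * x^(i-j)
    poly_part = sum(d[i] * (factorial(i) // factorial(i - j)) * x ** (i - j)
                    for i in range(j, k))
    return spline_part + poly_part
```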


Proof. It is clear that $f_\alpha(x) \in C^{2k-2}(\mathbb{R})$. Further, from (2.5), the $k$th order derivative of $f_\alpha(x)$ can be calculated as

$$f_\alpha^{(k)}(x) = C_{2k-1}^{k}\,k!\sum_{j=1}^{n} c_j|x - x_j|^{k-1}\left(\operatorname{sign}(x - x_j)\right)^k. \tag{2.8}$$

Due to condition (2.7), we know that $f_\alpha^{(k+j)}(x) = 0$ for $x \le x_1$ or $x \ge x_n$, where $j \ge 0$ is an integer. Hence $f_\alpha \in X^k$. Furthermore, the derivative of order $2k-1$ is

$$f_\alpha^{(2k-1)}(x) = \begin{cases} 0, & x < x_1 \ \text{or}\ x > x_n, \\[4pt] (2k-1)!\left(\displaystyle\sum_{j=1}^{i-1} c_j - \sum_{j=i}^{n} c_j\right), & x_{i-1} < x < x_i, \\[4pt] (2k-1)!\left(\displaystyle\sum_{j=1}^{i} c_j - \sum_{j=i+1}^{n} c_j\right), & x_i < x < x_{i+1}. \end{cases}$$

Therefore

$$f_\alpha^{(2k-1)}(x_i+) - f_\alpha^{(2k-1)}(x_i-) = 2(2k-1)!\,c_i, \quad i = 1, 2, \ldots, n,$$

where $f_\alpha^{(2k-1)}(x_i-)$ and $f_\alpha^{(2k-1)}(x_i+)$ are respectively the left and right limits of the function $f_\alpha^{(2k-1)}(x)$ at the point $x_i$. Considering (2.6), we obtain

$$f_\alpha^{(2k-1)}(x_i+) - f_\alpha^{(2k-1)}(x_i-) = \frac{1}{(-1)^k \alpha n}\left(\tilde{y}_i - f_\alpha(x_i)\right), \quad i = 1, 2, \ldots, n. \tag{2.9}$$

In the following, we show that $f_\alpha$ is a minimizer of the functional $\Phi_k$. Since

$$\Phi_k(f) - \Phi_k(f_\alpha) = \frac{1}{n}\sum_{i=1}^{n}\left[(f(x_i) - \tilde{y}_i)^2 - (f_\alpha(x_i) - \tilde{y}_i)^2\right] + \alpha\int_{\mathbb{R}}\left((f^{(k)})^2 - (f_\alpha^{(k)})^2\right)dx$$
$$= \frac{1}{n}\sum_{i=1}^{n}\left[f(x_i) - f_\alpha(x_i)\right]^2 + \alpha\|f^{(k)} - f_\alpha^{(k)}\|^2_{L^2(\mathbb{R})} + \frac{2}{n}\sum_{i=1}^{n}\left[f(x_i) - f_\alpha(x_i)\right]\left[f_\alpha(x_i) - \tilde{y}_i\right] + 2\alpha\int_{\mathbb{R}} f_\alpha^{(k)}\left(f^{(k)} - f_\alpha^{(k)}\right)dx \tag{2.10}$$

and

$$\int_{\mathbb{R}} f_\alpha^{(k)}\left(f^{(k)} - f_\alpha^{(k)}\right)dx = \int_{x_1}^{x_n} f_\alpha^{(k)}\left(f^{(k)} - f_\alpha^{(k)}\right)dx = (-1)^{k-1}\int_{x_1}^{x_n} f_\alpha^{(2k-1)}\left(f^{(1)} - f_\alpha^{(1)}\right)dx = (-1)^k\sum_{i=1}^{n}\left(f_\alpha^{(2k-1)}(x_i+) - f_\alpha^{(2k-1)}(x_i-)\right)\left(f(x_i) - f_\alpha(x_i)\right), \tag{2.11}$$

combining (2.9) and (2.11) into (2.10) gives

$$\Phi_k(f) - \Phi_k(f_\alpha) = \frac{1}{n}\sum_{i=1}^{n}\left(f(x_i) - f_\alpha(x_i)\right)^2 + \alpha\|f^{(k)} - f_\alpha^{(k)}\|^2_{L^2(\mathbb{R})} \ge 0. \tag{2.12}$$

This shows that $f_\alpha$ is a minimizer of the functional $\Phi_k$. Further, assume that there exists another function $f^* \in X^k$ such that $\Phi_k(f^*) = \Phi_k(f_\alpha)$. By (2.12), we know

$$\|f^{*(k)} - f_\alpha^{(k)}\|^2_{L^2(\mathbb{R})} = 0, \qquad f^*(x_i) = f_\alpha(x_i), \quad i = 1, 2, \ldots, n.$$

Hence $f^* - f_\alpha$ is a polynomial of order $k-1$ and $f^*$ is a function of the form (2.5). Since $n \ge k$, it follows that $f^* = f_\alpha$, i.e. the minimizer of the functional (2.4) is uniquely determined by (2.5)–(2.7). □

Theorem 2. The linear system of Eqs. (2.6) and (2.7) has a unique solution.

Proof. Denote the matrices $A = (|x_i - x_j|^{2k-1})_{n\times n}$ and $D = 2(2k-1)!\,\alpha n \cdot \operatorname{diag}(1, 1, \ldots, 1)$, and let $\tilde{y} = (\tilde{y}_1, \ldots, \tilde{y}_n)^T$, where $T$ denotes transposition. Denote the matrix $P$ by

$$P = \begin{pmatrix} 1 & x_1 & \cdots & x_1^{k-1} \\ 1 & x_2 & \cdots & x_2^{k-1} \\ \vdots & \vdots & & \vdots \\ 1 & x_n & \cdots & x_n^{k-1} \end{pmatrix}.$$

Then the linear system of Eqs. (2.6) and (2.7) can be rewritten as

$$\begin{pmatrix} A + (-1)^k D & P \\ P^T & 0 \end{pmatrix}\begin{pmatrix} c \\ d \end{pmatrix} = \begin{pmatrix} \tilde{y} \\ 0 \end{pmatrix}, \tag{2.13}$$

where $c = (c_1, c_2, \ldots, c_n)^T$ and $d = (d_1, d_2, \ldots, d_k)^T$ are the vectors of unknown coefficients in (2.5).


Let $(a^T, b^T) = (a_1, a_2, \ldots, a_n, b_1, \ldots, b_k)$ be any solution of the homogeneous system corresponding to (2.13), i.e. $(A + (-1)^k D)a + Pb = 0$ and $P^T a = 0$. Then it is easy to see that

$$a^T\left(A + (-1)^k D\right)a = 0. \tag{2.14}$$

Define $g(x) = \sum_{j=1}^{n} a_j|x - x_j|^{2k-1}$. Using the fact $P^T a = 0$ and the same procedure as in Theorem 1, we have

$$\|g^{(k)}\|^2_{L^2(\mathbb{R})} = (-1)^k\sum_{i=1}^{n}\left(g^{(2k-1)}(x_i+) - g^{(2k-1)}(x_i-)\right)g(x_i) = (-1)^k\sum_{i=1}^{n} 2(2k-1)!\,a_i g(x_i) = 2(2k-1)!(-1)^k a^T A a. \tag{2.15}$$

This shows that $(-1)^k a^T A a \ge 0$. Note that for any $\alpha > 0$, $a^T D a \ge 0$; then by (2.14) we deduce $a^T D a = 0$, which gives $a = 0$. Since $n \ge k$, the linear system $Pb = 0$ then implies $b = 0$. Thus the coefficient matrix of system (2.13) is non-singular and the solution is unique. □
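As a concrete illustration of Theorem 2, the saddle-point system (2.13) can be assembled and solved directly. The sketch below (again Python/NumPy with our own hypothetical naming; a dense solver, adequate for moderate $n$) returns the coefficient vectors $c$ and $d$ used by the evaluation helpers above.

```python
import numpy as np
from math import factorial

def smoothing_spline_coeffs(nodes, y_tilde, alpha, k):
    """Solve the (n+k) x (n+k) system (2.13) for c (length n) and d (length k)."""
    n = len(nodes)
    A = np.abs(nodes[:, None] - nodes[None, :]) ** (2 * k - 1)
    D = 2 * factorial(2 * k - 1) * alpha * n * np.eye(n)
    P = np.vander(nodes, k, increasing=True)       # columns 1, x, ..., x^(k-1)
    M = np.block([[A + (-1) ** k * D, P],
                  [P.T, np.zeros((k, k))]])        # non-singular by Theorem 2
    rhs = np.concatenate([y_tilde, np.zeros(k)])
    sol = np.linalg.solve(M, rhs)
    return sol[:n], sol[n:]
```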

3. Convergence rates

In this section, we consider an a priori choice strategy and an a posteriori choice rule for the regularization parameter. The a priori rule takes $\alpha = \delta^2$, and the a posteriori rule is Morozov's discrepancy principle, i.e. $\alpha$ is chosen from the equation

$$\frac{1}{n}\sum_{i=1}^{n}\left(f_\alpha(x_i) - \tilde{y}_i\right)^2 = \delta^2. \tag{3.1}$$
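Equation (3.1) must be solved for $\alpha$ numerically. Since the discrepancy on the left-hand side is, as is standard for Tikhonov-type methods, nondecreasing in $\alpha$, a simple bisection on $\log_{10}\alpha$ suffices. The paper does not prescribe a particular root-finder; the following is our own sketch, built on the hypothetical helpers of Section 2.

```python
import numpy as np

def morozov_alpha(nodes, y_tilde, delta, k, lo=1e-16, hi=1e4, tol=1e-3):
    """Choose alpha from Morozov's discrepancy equation (3.1) by bisection on log10(alpha)."""
    def discrepancy(alpha):
        c, d = smoothing_spline_coeffs(nodes, y_tilde, alpha, k)
        r = eval_spline(nodes, nodes, c, d) - y_tilde
        return np.mean(r ** 2) - delta ** 2

    loga, logb = np.log10(lo), np.log10(hi)   # discrepancy < 0 at loga, > 0 at logb
    while logb - loga > tol:
        mid = 0.5 * (loga + logb)
        if discrepancy(10.0 ** mid) < 0.0:
            loga = mid
        else:
            logb = mid
    return 10.0 ** (0.5 * (loga + logb))
```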

Under each choice of the regularization parameter, convergence estimates up to the $(k-1)$th order derivative can be obtained; they are given in Theorems 4 and 6 below. Numerical results comparing the two choice strategies are shown in the next section. The proof of Theorem 4 uses the following lemma, generalized from Powell's paper [9].

Lemma 3. Let $\{\hat{x}_1, \hat{x}_2, \ldots, \hat{x}_m\}$ be any finite set of distinct points in $\mathbb{R}$ with $m \ge k$, and let $\{\hat{c}_1, \hat{c}_2, \ldots, \hat{c}_m\}$ be any real multipliers that satisfy the conditions

$$\sum_{i=1}^{m}\hat{c}_i\hat{x}_i^j = 0, \quad j = 0, 1, \ldots, k-1. \tag{3.2}$$

Further, let $g \in X^k$. Then the linear functional

$$L(g) = \sum_{i=1}^{m}\hat{c}_i g(\hat{x}_i) \tag{3.3}$$

has the property

$$|L(g)| \le [2(2k-1)!]^{-1} C_k(\hat{c})^{1/2}\,\|g^{(k)}\|_{L^2(\mathbb{R})}, \tag{3.4}$$

where

$$C_k(\hat{c}) = (-1)^k 2(2k-1)!\sum_{i=1}^{m}\sum_{j=1}^{m}\hat{c}_i\hat{c}_j|\hat{x}_i - \hat{x}_j|^{2k-1} \ge 0. \tag{3.5}$$

Proof. Denote

$$t(x) = \sum_{i=1}^{m}\hat{c}_i|x - \hat{x}_i|^{2k-1}, \quad x \in \mathbb{R}.$$

Then for any $g \in X^k$ we have

$$L(g) = \sum_{i=1}^{m}\hat{c}_i g(\hat{x}_i) = [2(2k-1)!]^{-1}\sum_{i=1}^{m}\left(t^{(2k-1)}(\hat{x}_i+) - t^{(2k-1)}(\hat{x}_i-)\right)g(\hat{x}_i) = [2(2k-1)!]^{-1}(-1)^k\int_{\mathbb{R}} g^{(k)} t^{(k)}\,dx.$$

Thus,

$$|L(g)| \le [2(2k-1)!]^{-1}\,\|g^{(k)}\|_{L^2(\mathbb{R})}\,\|t^{(k)}\|_{L^2(\mathbb{R})}. \tag{3.6}$$

Denoting

$$C_k(\hat{c}) = \|t^{(k)}\|^2_{L^2(\mathbb{R})} = \int_{\mathbb{R}}\left(t^{(k)}(x)\right)^2 dx = (-1)^k 2(2k-1)!\sum_{i=1}^{m}\sum_{j=1}^{m}\hat{c}_i\hat{c}_j|\hat{x}_i - \hat{x}_j|^{2k-1} \ge 0, \tag{3.7}$$

combining (3.7) and (3.6) leads to (3.4). Hereafter, $\|\cdot\|$ denotes the $L^2(\mathbb{R})$ norm. □

Theorem 4. Let $y$ be an exact function in the space $X^k$ and let $f_\alpha$ be the approximate function of $y$ given in Theorem 1. Then, taking $\alpha = \alpha_1 = \delta^2$, we have the convergence estimate

$$\|f_{\alpha_1} - y\|_{L^2(a,b)} \le P_k h^{k-1/2} + Q_k\delta; \tag{3.8}$$

choosing $\alpha = \alpha_2$ by Morozov's discrepancy principle, the convergence estimate is

$$\|f_{\alpha_2} - y\|_{L^2(a,b)} \le \tilde{P}_k h^{k-1/2} + \tilde{Q}_k\delta, \tag{3.9}$$

where $P_k$, $Q_k$ and $\tilde{P}_k$, $\tilde{Q}_k$ are constants depending on $k$, $c$, $b-a$ and $\|y^{(k)}\|$.

Proof. For any $x \in [a, b]$ there exists $p \in \{0, 1, 2, \ldots, n-k\}$ such that $x \in [x_{p+1}, x_{p+k}]$. Let $\gamma_i$ be the Lagrange interpolation polynomials

$$\gamma_i = \prod_{j=1,\,j\ne i}^{k}\frac{x - x_{p+j}}{x_{p+i} - x_{p+j}}, \quad i = 1, 2, \ldots, k. \tag{3.10}$$

In Lemma 3, let $g$ be the error function $e = f_\alpha - y$, and take $\hat{c}_j = \gamma_j$, $\hat{x}_j = x_{p+j}$ for $j = 1, 2, \ldots, k$, together with $\hat{c}_{k+1} = -1$, $\hat{x}_{k+1} = x$. Then it can be proved that

$$\sum_{j=1}^{k+1}\hat{c}_j\hat{x}_j^i = 0, \quad i = 0, 1, \ldots, k-1,$$

since Lagrange interpolation on $x_{p+1}, \ldots, x_{p+k}$ reproduces polynomials of degree at most $k-1$. By Lemma 3, we have

$$|L(e)| \le [2(2k-1)!]^{-1} C_k(\hat{c})^{1/2}\|e^{(k)}\|, \tag{3.11}$$

where $L(e) = \gamma_1 e(x_{p+1}) + \cdots + \gamma_k e(x_{p+k}) - e(x)$. Further, we get

$$|e(x)| \le |L(e)| + |\gamma_1 e(x_{p+1}) + \cdots + \gamma_k e(x_{p+k})|. \tag{3.12}$$

Since $|x - x_{p+j}| \le (k-1)h$ and $|x_{p+i} - x_{p+j}| \ge h_{\min}$, we have

$$|\gamma_i| \le \frac{[(k-1)h]^{k-1}}{h_{\min}^{k-1}} \le [(k-1)c]^{k-1}, \quad i = 1, 2, \ldots, k.$$

Furthermore, by (3.7), we have

$$C_k(\hat{c}) = (-1)^k 2(2k-1)!\sum_{i=1}^{k+1}\sum_{j=1}^{k+1}\hat{c}_i\hat{c}_j|\hat{x}_i - \hat{x}_j|^{2k-1}$$
$$= (-1)^k 2(2k-1)!\left[\sum_{i=1}^{k}\sum_{j=1}^{k}\gamma_i\gamma_j|x_{p+i} - x_{p+j}|^{2k-1} - 2\sum_{i=1}^{k}\gamma_i|x - x_{p+i}|^{2k-1}\right]$$
$$\le 2(2k-1)!\,[(k-1)h]^{2k-1}\left[\Big(\sum_{i=1}^{k}|\gamma_i|\Big)^2 + 2\sum_{i=1}^{k}|\gamma_i|\right]$$
$$\le 2(2k-1)!\,(k-1)^{2k-1}\big[k((k-1)c)^{k-1} + 1\big]^2 h^{2k-1} = A_k h^{2k-1}, \tag{3.13}$$

where $A_k = 2(2k-1)!\,(k-1)^{2k-1}[k((k-1)c)^{k-1} + 1]^2$.

Note that

$$|\gamma_1 e(x_{p+1}) + \cdots + \gamma_k e(x_{p+k})|^2 \le \sum_{i=1}^{k}\gamma_i^2\,\sum_{i=1}^{k}e^2(x_{p+i}) \le k[(k-1)c]^{2(k-1)}\sum_{i=1}^{k}e^2(x_{p+i}) = B_k\sum_{i=1}^{k}e^2(x_{p+i}), \tag{3.14}$$

where $B_k = k[(k-1)c]^{2(k-1)}$. Integrating the square of both sides of (3.12) over the interval $[x_{p+1}, x_{p+k}]$ and using (3.14) gives

$$\int_{x_{p+1}}^{x_{p+k}} e^2(x)\,dx \le 2\left[L^2(e) + B_k\sum_{i=1}^{k}e^2(x_{p+i})\right](x_{p+k} - x_{p+1}). \tag{3.15}$$

Summing the above inequality over $[a, b]$, together with (3.11) and (3.13), finally yields

$$\int_a^b e^2(x)\,dx \le 4(b-a)[2(2k-1)!]^{-2}A_k h^{2k-1}\|e^{(k)}\|^2 + 4B_k(k-1)h\sum_{i=1}^{n}e^2(x_i). \tag{3.16}$$

Let $\alpha = \alpha_1 = \delta^2$. We have

$$h\sum_{i=1}^{n}\left(f_{\alpha_1}(x_i) - y(x_i)\right)^2 \le \left(\frac{(b-a)h}{h_{\min}} + h\right)\frac{1}{n}\sum_{i=1}^{n}\left(f_{\alpha_1}(x_i) - y(x_i)\right)^2$$
$$\le 2(b-a)(c+1)\,\frac{1}{n}\sum_{i=1}^{n}\left((f_{\alpha_1}(x_i) - \tilde{y}_i)^2 + (\tilde{y}_i - y(x_i))^2\right)$$
$$\le 2(b-a)(c+1)\left(\Phi_k(y) + \delta^2\right) \le 2(b-a)(c+1)\left(\|y^{(k)}\|^2 + 2\right)\delta^2. \tag{3.17}$$

In the following we show that $\|e^{(k)}\|$ is bounded. Since

$$\alpha_1\|f_{\alpha_1}^{(k)}\|^2 \le \Phi_k(f_{\alpha_1}) \le \Phi_k(y) \le \delta^2 + \alpha_1\|y^{(k)}\|^2, \tag{3.18}$$

then

$$\|f_{\alpha_1}^{(k)}\|^2 \le 1 + \|y^{(k)}\|^2. \tag{3.19}$$

Therefore

$$\|f_{\alpha_1}^{(k)} - y^{(k)}\|^2 \le 2\|f_{\alpha_1}^{(k)}\|^2 + 2\|y^{(k)}\|^2 \le 2 + 4\|y^{(k)}\|^2. \tag{3.20}$$

Combining (3.16), (3.17) and (3.20), we obtain

$$\int_a^b (f_{\alpha_1} - y)^2\,dx \le 4(b-a)[2(2k-1)!]^{-2}A_k h^{2k-1}\left(2 + 4\|y^{(k)}\|^2\right) + 4B_k(k-1)\cdot 2(b-a)(c+1)\left(\|y^{(k)}\|^2 + 2\right)\delta^2 \tag{3.21}$$

and

$$\|f_{\alpha_1} - y\|_{L^2(a,b)} \le P_k h^{k-1/2} + Q_k\delta, \tag{3.22}$$

where

$$P_k = 2(b-a)^{1/2}[2(2k-1)!]^{-1}A_k^{1/2}\left(\sqrt{2} + 2\|y^{(k)}\|\right)$$

and

$$Q_k = 2\sqrt{2}\,B_k^{1/2}(b-a)^{1/2}(c+1)^{1/2}(k-1)^{1/2}\left(\|y^{(k)}\| + \sqrt{2}\right).$$

When taking $\alpha = \alpha_2$ by Morozov's discrepancy principle (3.1), we have

$$h\sum_{i=1}^{n}\left(f_{\alpha_2}(x_i) - y(x_i)\right)^2 \le \left(\frac{(b-a)h}{h_{\min}} + h\right)\frac{2}{n}\sum_{i=1}^{n}\left((f_{\alpha_2}(x_i) - \tilde{y}_i)^2 + (\tilde{y}_i - y(x_i))^2\right) \le 4(b-a)(c+1)\delta^2. \tag{3.23}$$

Next we show that $\|e^{(k)}\|$ is bounded. Since

$$\alpha_2\|f_{\alpha_2}^{(k)}\|^2 = \Phi_k(f_{\alpha_2}) - \frac{1}{n}\sum_{i=1}^{n}\left(f_{\alpha_2}(x_i) - \tilde{y}_i\right)^2 \le \Phi_k(y) - \delta^2 \le \alpha_2\|y^{(k)}\|^2,$$

then $\|f_{\alpha_2}^{(k)}\| \le \|y^{(k)}\|$ and

$$\|f_{\alpha_2}^{(k)} - y^{(k)}\|^2 \le 2\left(\|f_{\alpha_2}^{(k)}\|^2 + \|y^{(k)}\|^2\right) \le 4\|y^{(k)}\|^2. \tag{3.24}$$

Combining (3.16), (3.23) and (3.24), we obtain

$$\int_a^b (f_{\alpha_2} - y)^2\,dx \le 4(b-a)[2(2k-1)!]^{-2}A_k h^{2k-1}\cdot 4\|y^{(k)}\|^2 + 4B_k(k-1)\cdot 4(b-a)(c+1)\delta^2 \tag{3.25}$$

and

$$\|f_{\alpha_2} - y\|_{L^2(a,b)} \le \tilde{P}_k h^{k-1/2} + \tilde{Q}_k\delta, \tag{3.26}$$

where

$$\tilde{P}_k = 4(b-a)^{1/2}[2(2k-1)!]^{-1}A_k^{1/2}\|y^{(k)}\|$$

and

$$\tilde{Q}_k = 4B_k^{1/2}(b-a)^{1/2}(c+1)^{1/2}(k-1)^{1/2}.$$

Before giving the error estimate for high order derivatives of a function, let us recall the following interpolation theorem [1]. □

Lemma 5. Let $-\infty < a < b < \infty$. Then for $0 < \varepsilon \le 1$ and every function $f$ that is $m$ times continuously differentiable on the open interval $(a, b)$, we have

$$\int_a^b |f^{(j)}(x)|^2\,dx \le K\varepsilon\int_a^b |f^{(m)}(t)|^2\,dt + K\varepsilon^{-j/(m-j)}\int_a^b |f(t)|^2\,dt. \tag{3.27}$$

Proof. See [1]. □

Theorem 6. For any $y \in X^k$, let $f_\alpha$ be the approximate solution given by Theorem 1, and let the regularization parameter be $\alpha = \alpha_1 = \delta^2$ or $\alpha = \alpha_2$ given by Morozov's discrepancy principle. Then for sufficiently small $\delta$ and $h$ we have, for $i = 1, 2$, the same convergence orders:

$$\|f_{\alpha_i}^{(j)} - y^{(j)}\|_{L^2(a,b)} \le C_i h^{\,k-j-\frac{1}{2}+\frac{j}{2k}} + D_i\,\delta^{\frac{k-j}{k}}, \quad j = 1, 2, \ldots, k-1, \tag{3.28}$$

where $C_i$, $D_i$, $i = 1, 2$, are constants depending on $k$, $b-a$ and $\|y^{(k)}\|$.

Proof. Denote $e(x) = f_\alpha(x) - y(x)$, $\alpha = \alpha_1$ or $\alpha_2$. For sufficiently small $\delta$ and $h$, from Theorem 4 we get $\|e\|_{L^2(a,b)} \le 1$. Taking $m = k$ and $\varepsilon = \|e\|_{L^2(a,b)}^{2(k-j)/k}$ in Lemma 5, we have

$$\|e^{(j)}\|^2_{L^2(a,b)} \le \left[K\|e^{(k)}\|^2_{L^2(a,b)} + K\right]\|e\|^{2(k-j)/k}_{L^2(a,b)}. \tag{3.29}$$

When $\alpha = \alpha_1$, by Theorem 4 and (3.20) we have

$$\|f_{\alpha_1}^{(j)} - y^{(j)}\|_{L^2(a,b)} \le \left[\sqrt{K}\,\|e^{(k)}\| + \sqrt{K}\right]\|e\|^{(k-j)/k}_{L^2(a,b)} \le \sqrt{K}\left[\sqrt{2} + 2\|y^{(k)}\| + 1\right]\left[P_k^{\frac{k-j}{k}}h^{\,k-j-\frac{1}{2}+\frac{j}{2k}} + Q_k^{\frac{k-j}{k}}\delta^{\frac{k-j}{k}}\right] = C_1 h^{\,k-j-\frac{1}{2}+\frac{j}{2k}} + D_1\delta^{\frac{k-j}{k}}, \tag{3.30}$$

where

$$C_1 = \sqrt{K}\left[\sqrt{2} + 2\|y^{(k)}\| + 1\right]P_k^{\frac{k-j}{k}}$$

and

$$D_1 = \sqrt{K}\left[\sqrt{2} + 2\|y^{(k)}\| + 1\right]Q_k^{\frac{k-j}{k}}.$$

When choosing $\alpha = \alpha_2$ by Morozov's discrepancy principle, we obtain the estimate (3.28) with

$$C_2 = \sqrt{K}\left[2\|y^{(k)}\| + 1\right]\tilde{P}_k^{\frac{k-j}{k}}$$

and

$$D_2 = \sqrt{K}\left[2\|y^{(k)}\| + 1\right]\tilde{Q}_k^{\frac{k-j}{k}}. \qquad \square$$

4. Numerical examples

To verify the effectiveness of the proposed algorithm, a smooth function and a non-smooth function are tested in various cases. In the numerical experiments we always fix $a = -2$, $b = 2$, $k = 3$. To estimate the computational error, we choose $N$ test points on the interval $[a, b]$ and compute the root mean square error

$$E(f_\alpha) = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(f_\alpha''(t_i) - y''(t_i)\right)^2}, \tag{4.1}$$

where $N$ is the total number of test points on $[a, b]$ and $\{t_i\}_{i=1}^{N}$ is the set of test points. In our computations we always take $N = 201$ uniform points over $[a, b]$. We choose 101 random points on $[a, b]$ as the set of scattered points shown in Fig. 1. The maximum stepsize is $h = 0.1152$, with $h_{\min} = 1.888\mathrm{e}{-4}$ and $c = 610.29$.
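This experimental setup can be reproduced along the following lines (a sketch with our own naming, assuming the hypothetical helpers smoothing_spline_coeffs and eval_spline_deriv from the earlier sketches; the random draws, and hence the exact values of $h$, $c$ and the errors, will differ from those reported here).

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, k, n, N = -2.0, 2.0, 3, 101, 201
sigma = 1e-4

def y(x):                                   # exact function of Example 1 below
    return np.cos(2 * np.pi * x) * np.exp(-x ** 2)

def d2y(x):                                 # its exact second derivative
    s, c2 = np.sin(2 * np.pi * x), np.cos(2 * np.pi * x)
    return np.exp(-x ** 2) * ((4 * x ** 2 - 2 - 4 * np.pi ** 2) * c2
                              + 8 * np.pi * x * s)

# 101 scattered nodes (endpoints included) and noisy data
nodes = np.sort(np.concatenate([[a, b], rng.uniform(a, b, n - 2)]))
y_tilde = y(nodes) + rng.normal(0.0, sigma, n)
delta = np.sqrt(np.mean((y_tilde - y(nodes)) ** 2))    # noise level as in (2.3)

# a priori choice alpha_1 = delta^2
c, d = smoothing_spline_coeffs(nodes, y_tilde, delta ** 2, k)

# root mean square error (4.1) on N uniform test points
t = np.linspace(a, b, N)
E = np.sqrt(np.mean((eval_spline_deriv(t, nodes, c, d, 2) - d2y(t)) ** 2))
print(f"delta = {delta:.3e}, E = {E:.3e}")
```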

Example 1. The exact function is chosen as $y = \cos(2\pi x)\,e^{-x^2}$. Noisy data are generated by adding random numbers from a normal distribution with mean $\mu = 0$ and standard deviation $\sigma = 0.0001$; in this case the noise level is $\delta = 9.4696\mathrm{e}{-5}$. The numerical results are shown in Fig. 2, in which the dotted lines represent $f_\alpha''$ and the solid lines represent $y''$.

Fig. 1. Scattered points (h = 0.11523).

Fig. 2. The plots of the function $y''(x)$ and its approximation $f_\alpha''(x)$: (a) $\alpha = \alpha_1$ and (b) $\alpha = \alpha_2$.

In Fig. 2(a) we use the a priori choice rule $\alpha_1 = \delta^2$, and in Fig. 2(b) the parameter $\alpha_2$ is given by Morozov's discrepancy principle. When the noise is generated by random numbers from a normal distribution with mean $\mu = 0$ and standard deviation $\sigma = 0.001$, the noise level is $\delta = 9.4696\mathrm{e}{-4}$; the numerical results are shown in Fig. 3. In Fig. 4 we show numerical results computed with the discrepancy principle choice of the regularization parameter: Fig. 4(a) is computed with $\delta = 0.0095$ and Fig. 4(b) with $\delta = 0.0947$. It is observed that $f_\alpha''$ matches $y''$ quite well for small $\delta$ with either choice of the regularization parameter. When the noise level is somewhat larger, Morozov's discrepancy principle gives much better numerical results.

Fig. 3. The plots of the function $y''(x)$ and its approximation $f_\alpha''(x)$: (a) $\alpha = \alpha_1$ and (b) $\alpha = \alpha_2$.

Fig. 4. The plots of the function $y''(x)$ and its approximation $f_\alpha''(x)$: (a) $\delta = 0.0095$ and (b) $\delta = 0.0947$.

Example 2. Consider the non-smooth function

$$y(x) = \begin{cases} -\frac{1}{6}x^3 + 2x - \frac{8}{3}, & x \le 0, \\[4pt] \phantom{-}\frac{1}{6}x^3 + 2x - \frac{8}{3}, & x \ge 0. \end{cases}$$

Its second derivative is $y''(x) = |x|$, and the third-order derivative does not exist at the point $x = 0$. Noisy data are generated by adding random numbers from a normal distribution with mean $\mu = 0$ and standard deviation $\sigma = 0.001$; in this case the noise level is $\delta = 9.4696\mathrm{e}{-4}$. The numerical results using Morozov's discrepancy principle are shown in Fig. 5: Fig. 5(a) plots the first derivatives $f_{\alpha_2}'$ and $y'$, and Fig. 5(b) displays the second derivatives $f_{\alpha_2}''$ and $y''$. The numerical results for Example 2 show that the proposed scheme with the a posteriori choice rule is effective and stable.

Finally, take $n = 201$ and $\tilde{y}_i = y_i + \varepsilon\sin(\pi x_i)$, where the points $\{x_1, x_2, \ldots, x_n\}$ are equidistantly distributed on $[-2, 2]$. For Example 1, we compute the root mean square errors of the second-order derivative; the numerical errors versus $\varepsilon$ are shown in Fig. 6.

Fig. 5. The plots for Example 2: (a) the first-order derivatives and (b) the second-order derivatives.

Fig. 6. The root mean square errors $E(f_\alpha'')$ with respect to the error level $\varepsilon$. The dotted line shows the results for $\alpha_1$ and the starred solid line those for $\alpha_2$.

It is observed that for a small noise level, both methods for the choice of the regularization parameter work well; for a somewhat larger noise level, the discrepancy principle gives much more accurate numerical results.
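A sketch of this last comparison, under the same assumptions and hypothetical helpers as before (y, d2y, smoothing_spline_coeffs, eval_spline_deriv, morozov_alpha), might read:

```python
import numpy as np

a, b, k, n = -2.0, 2.0, 3, 201
nodes = np.linspace(a, b, n)                # equidistant points on [-2, 2]
t = np.linspace(a, b, 201)                  # test points for (4.1)

for eps in [1e-4, 1e-3, 1e-2, 1e-1]:
    y_tilde = y(nodes) + eps * np.sin(np.pi * nodes)
    delta = np.sqrt(np.mean((y_tilde - y(nodes)) ** 2))    # roughly eps / sqrt(2)
    # a priori rule: alpha_1 = delta^2
    c1, d1 = smoothing_spline_coeffs(nodes, y_tilde, delta ** 2, k)
    E1 = np.sqrt(np.mean((eval_spline_deriv(t, nodes, c1, d1, 2) - d2y(t)) ** 2))
    # a posteriori rule: Morozov's discrepancy principle (3.1)
    alpha2 = morozov_alpha(nodes, y_tilde, delta, k)
    c2, d2 = smoothing_spline_coeffs(nodes, y_tilde, alpha2, k)
    E2 = np.sqrt(np.mean((eval_spline_deriv(t, nodes, c2, d2, 2) - d2y(t)) ** 2))
    print(f"eps = {eps:.0e}: E(alpha_1) = {E1:.3e}, E(alpha_2) = {E2:.3e}")
```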

5. Conclusion

In this paper, we have studied a classical ill-posed problem, numerical differentiation, by using smoothing splines and the Tikhonov regularization technique.


Convergence results for high order derivatives under two choice rules for the regularization parameter are given. Numerical results show that the proposed method is effective and stable.

References

[1] R.A. Adams, Sobolev Spaces, Pure and Applied Mathematics, vol. 65, Academic Press, New York, London, 1975.
[2] R.S. Anderssen, M. Hegland, For numerical differentiation, dimensionality can be a blessing!, Math. Comput. 68 (227) (1999) 1121–1141.
[3] J. Cullum, Numerical differentiation and regularization, SIAM J. Numer. Anal. 8 (1971) 254–265.
[4] S.R. Deans, The Radon Transform and Some of Its Applications, Wiley-Interscience, John Wiley & Sons Inc., New York, 1983.
[5] R. Gorenflo, S. Vessella, Abel Integral Equations, Lecture Notes in Mathematics, vol. 1461, Springer-Verlag, Berlin, 1991.
[6] C.W. Groetsch, Differentiation of approximately specified functions, Amer. Math. Monthly 98 (9) (1991) 847–850.
[7] M. Hanke, O. Scherzer, Error analysis of an equation error method for the identification of the diffusion coefficient in a quasi-linear parabolic differential equation, SIAM J. Appl. Math. 59 (3) (1999) 1012–1027.
[8] F.J. Hickernell, Y.C. Hon, Radial basis function approximations as smoothing splines, Appl. Math. Comput. 102 (1) (1999) 1–24.
[9] M.J.D. Powell, The uniform convergence of thin plate spline interpolation in two dimensions, Numer. Math. 68 (1) (1994) 107–128.
[10] A.G. Ramm, A.B. Smirnova, On stable numerical differentiation, Math. Comp. 70 (235) (2001) 1131–1153.
[11] T.J. Rivlin, Optimally stable Lagrangian numerical differentiation, SIAM J. Numer. Anal. 12 (5) (1975) 712–725.
[12] A.N. Tikhonov, V.Y. Arsenin, Solutions of Ill-Posed Problems, Scripta Series in Mathematics, V.H. Winston & Sons, Washington, DC; John Wiley & Sons, New York, 1977 (translated from the Russian).
[13] G. Wahba, J. Wendelberger, Some new mathematical methods for variational objective analysis using splines and cross validation, Monthly Weather Rev. 108 (1980) 1122–1143.
[14] Y.B. Wang, X.Z. Jia, J. Cheng, A numerical differentiation method and its application to reconstruction of discontinuity, Inverse Probl. 18 (6) (2002) 1461–1476.