A dynamic meshless method for the least squares problem with some noisy subdomains


M. Esmaeilbeigi (corresponding author), M.M. Hosseini
Faculty of Mathematics, Yazd University, P.O. Box 89195-74, Yazd, Iran

Article history: Received 19 February 2012; Received in revised form 7 July 2012; Accepted 23 July 2012; Available online 16 August 2012

Keywords: Gaussian radial basis function; Noisy least squares problems; Meshless method; Weighted least squares method

Abstract

The interpolation method by radial basis functions is widely used for scattered data approximation. However, it sometimes makes more sense to approximate the solution by a least squares fit. This is especially true when the data are contaminated with noise. A meshfree method, namely the meshless dynamic weighted least squares (MDWLS) method, is presented in this paper to solve least squares problems with noise. The MDWLS method with Gaussian radial basis functions is proposed to fit scattered data with some noisy areas in the problem's domain. Existence and uniqueness of a solution is proved. The method has one parameter which can adjust the accuracy according to the size of the noise. Another advantage of the developed method is that it can be applied to problems with nonregular geometrical domains. The new approach is applied to some problems in two dimensions, and the obtained results confirm the accuracy and efficiency of the proposed method. The numerical experiments illustrate that the MDWLS method performs better than the traditional least squares method in the case of noisy data.

© 2012 Elsevier Inc. All rights reserved.

1. Introduction

Radial basis function (RBF) methods have been praised for their simplicity and ease of implementation in multivariate scattered data approximation [1], and they are becoming a viable choice as a method for the numerical solution of partial differential equations [2,3]. RBF-based methods offer numerous advantages, such as no need for a mesh or triangulation, simple implementation, dimension independence, and no staircasing or polygonization of boundaries. Moreover, depending on how the RBFs are chosen, high-order or spectral convergence can be achieved [4,5].

The interpolation method by radial basis functions is widely used for scattered data approximation. However, it sometimes makes more sense to approximate the solution by a least squares fit. This is especially true when the data are contaminated with noise, or when there are so many data points that efficiency considerations force us to approximate from a space spanned by fewer basis functions than data points.

A problem encountered in many numerical applications is to find a smooth surface which approximately reproduces a given set of scattered data points. In some applications, the data are assumed to be contaminated by noise in some local regions of the problem's domain. It is not appropriate to interpolate the noisy data, and the traditional least squares method may fail to work. To overcome this difficulty, a meshfree method, namely the meshless dynamic weighted least squares (MDWLS) method, is presented in this paper for the solution of noisy least squares problems. The MDWLS method with Gaussian radial basis functions is proposed to fit scattered data with some noisy areas in the problem's domain.


A weighted least squares method based on a spline space was proposed by Zhou and Han [6] to fit two-dimensional scattered data with noise. That method works only on polygonal domains and needs a triangulation of the domain. So there is still a need for an efficient, simple, meshfree method that adapts to nonregular geometrical domains and is independent of the dimension of the problem. This is where our dynamic algorithm plays a role. Numerical results will show the capabilities and improved efficiency of the MDWLS method when compared with the traditional least squares method in the case of noisy data.

The layout of the paper is as follows. In Section 2 we show how we use Gaussian radial basis functions to approximate the solution. We then review the meshless dynamic weighted least squares method in Section 3. In Section 4, existence and uniqueness of a solution is proved. The results of numerical experiments are presented in Section 5, and Section 6 is dedicated to a brief conclusion. The numerical results are obtained using MATLAB.

2. Radial basis function approximation

In the interpolation of scattered data using radial basis functions, the approximation of a function u(x) at the centers X = {x_1, ..., x_N} may be written as a linear combination of N RBFs; it usually takes the following form:

s_{u,X}(x) = \sum_{j=1}^{N} a_j \phi(x - x_j) + \sum_{k=1}^{Q} b_k p_k(x).    (2.1)

Here, Q denotes the dimension of the polynomial space \pi_{m-1}(R^d), p_1, ..., p_Q denote a basis of \pi_{m-1}(R^d), x = (x_1, x_2, ..., x_d), d is the dimension of the problem, the a's and b's are coefficients to be determined, and \phi is the RBF. Some well-known RBFs are listed in Table 1. To cope with the additional degrees of freedom, the interpolation conditions

s_{u,X}(x_j) = u(x_j),    1 \le j \le N,    (2.2)

are completed by the additional conditions

\sum_{j=1}^{N} a_j p_k(x_j) = 0,    1 \le k \le Q.    (2.3)

Solvability of this system is therefore equivalent to solvability of the system

\begin{pmatrix} A_{\phi,X} & P \\ P^T & 0 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} u|_X \\ 0 \end{pmatrix},    (2.4)

where A_{\phi,X} = (\phi(x_j - x_k)) \in R^{N \times N} and P = (p_k(x_j)) \in R^{N \times Q}. This last system is obviously solvable if the coefficient matrix on the left-hand side is invertible [7]. Eq. (2.1) can be written without the additional polynomial \sum_{k=1}^{Q} b_k p_k(x). In that case, \phi must be unconditionally positive definite to guarantee the solvability of the resulting system (e.g. Gaussian or inverse multiquadrics). The term \sum_{k=1}^{Q} b_k p_k(x) is usually required when \phi is conditionally positive definite, i.e. when \phi has polynomial growth towards infinity; thin plate splines are an example. It must be noted that radial functions that are conditionally positive definite of order one (such as the multiquadric) can be used without appending the constant term to solve the scattered data interpolation problem [8]. Moreover, since positive definite or conditionally positive definite functions are usually globally supported, the interpolation matrix is full and may be very ill-conditioned for some RBFs. Compactly supported RBFs (CSRBFs) have been applied to improve the conditioning of the system of collocation equations; a CSRBF vanishes beyond a user-defined threshold distance r, so only the entries in the collocation matrix corresponding to collocation nodes lying closer than r to a given CSRBF center are nonzero, leading to a sparse matrix. In fact, interest in CSRBFs waned as it became evident that, in order to obtain good accuracy, the overlap distance r should cover most nodes in the point set, thus resulting in a populated matrix again [9]. We use Gaussian RBFs in our method, because the Gaussian radial basis function interpolant has been shown to exhibit super-spectral convergence.

Table 1
Some well-known functions that generate RBFs.

Name of function                  Definition
Multiquadrics (MQ)                \phi(x) = (||x||_2^2 + c^2)^{1/2}
Inverse multiquadrics (IMQ)       \phi(x) = (||x||_2^2 + c^2)^{-1/2}
Gaussian (GA)                     \phi(x) = exp(-c ||x||_2^2)
Thin plate splines (TPS)          \phi(x) = (-1)^{k+1} ||x||_2^{2k} log ||x||_2
Conical splines                   \phi(x) = ||x||_2^{2k+1}


The accuracy and the stability for infinitely smooth \phi(x) depend on the number of data points and the value of the shape parameter c [10]. For a fixed c, as the number of data points increases, the RBF interpolation converges to the underlying (sufficiently smooth) function being interpolated at a spectral rate, i.e. O(e^{-const/h}), where h is a measure of the "typical" distance between data points [11-13]. In certain special cases, such as the Gaussian RBF, the interpolant has been shown to exhibit "super-spectral" convergence, i.e. O(e^{-const/h^2}) [12-14]. In either case, the value of the constant in the estimates is affected by the value of c. For a fixed number of data points, Madych [15] has shown that the accuracy of the RBF interpolant can often be significantly improved by decreasing the value of c. However, decreasing c or increasing the number of data points has a severe effect on the stability of the linear system (2.4). For a fixed c, the condition number of the matrix in the linear system grows exponentially as the number of data points is increased [12,16]. For a fixed number of data points, the condition number of the linear system grows as the shape parameter becomes small [12,16]. Furthermore, the Gaussian RBF has the following good approximation property.

Theorem 1. For any a > 0 and for any compact set Q in R^d, the set of Gaussian radial functions

{ x \mapsto e^{-a ||x - y||^2} : y \in Q }

is fundamental in C(Q).

Proof. See [17]. □

Recall that a set V in a normed linear space E is said to be fundamental if the closure of the span of V is E. In other words, the set of all linear combinations of elements of V is dense in E.
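Since the Gaussian is unconditionally positive definite, the polynomial block in (2.4) can be dropped and the interpolation system reduces to A_{\phi,X} a = u|_X. The following MATLAB sketch illustrates this construction; the node count, the shape parameter and the test function (the one later used in Example 1) are illustrative placeholders, not values prescribed by the paper.

% Gaussian RBF interpolation on scattered 2-D nodes (illustrative sketch).
c = 0.4;                                % shape parameter (example value)
X = rand(50, 2);                        % N = 50 scattered nodes in [0,1]^2 (placeholder)
u = @(P) 16*P(:,1).*(1 - P(:,1)).*P(:,2).*(1 - P(:,2));   % smooth test function

% Interpolation matrix A(j,k) = exp(-c*||x_j - x_k||^2), built from squared distances.
D2 = sum(X.^2, 2) + sum(X.^2, 2)' - 2*(X*X');
A  = exp(-c*D2);
alpha = A \ u(X);                       % expansion coefficients a_j in (2.1), no polynomial part

% Evaluate the interpolant s(x) = sum_j alpha_j * exp(-c*||x - x_j||^2) at new points.
Xe = rand(200, 2);
E2 = sum(Xe.^2, 2) + sum(X.^2, 2)' - 2*(Xe*X');
s  = exp(-c*E2) * alpha;
max(abs(s - u(Xe)))                     % interpolation error at the evaluation points

The same squared-distance construction also yields the N × m least squares matrix A_{ij} = p_j(x_i) used in the next sections, where the centers form a subset of the data points.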

3. Meshless dynamic weighted least squares method

In this section, a meshfree method, namely the meshless dynamic weighted least squares method, is presented to solve least squares problems with noise. We consider (x_i, f(x_i)), i = 1, ..., N, where x_i \in \Omega \subset R^s and f(x_i) \in R. For simplicity s = 2 is considered, but the MDWLS method can also be defined for arbitrary s \ge 1. The data are assumed to be contaminated by noise on subdomains \Omega_1, ..., \Omega_d:

\tilde{f}(x_i) = f(x_i) + \epsilon_i,    \forall x_i \in \Omega_j \subset \Omega,    j = 1, ..., d,

where \epsilon_i is the noise term. It is not appropriate to interpolate the noisy data, and the traditional least squares method may fail to work. In the traditional method of least squares, in which a fixed weight function is used, noisy and non-noisy data are treated in the same manner. In the MDWLS method, by using a dynamic weight function, noisy and non-noisy data can be treated differently. Furthermore, the weight function can be adjusted to the amount of noise in different subdomains. We now propose to construct a surface u(x) in the approximation space of the form

U = span{p_1, ..., p_m},    m < N,

with Gaussian radial basis functions p_k(x) = e^{-c ||x - \xi_k||_2^2}, k = 1, ..., m, such that \xi_k \in {x_i}_{i=1}^{N}. We intend to find the best discrete weighted least squares approximation from U to a given function f, i.e. we need to determine the coefficients c_j in

u(x) = \sum_{j=1}^{m} c_j p_j(x),    x \in R^2,

such that

||f - u||_{2,w} \to \min.

Here the norm is defined via the discrete (pseudo) inner product

\langle f, g \rangle_w = \sum_{i=1}^{N} f(x_i) g(x_i) w(x_i),    (3.1)

where w(x) is a positive weight function. We define the following dynamic weight function:

w(x) = \begin{cases} v > 0, & x \in \bigcup_{i=1}^{d} \Omega_i, \\ CF(x), & x \in \Omega \setminus \bigcup_{i=1}^{d} \Omega_i, \end{cases}    (3.2)

where CF(x) is the cover function on the rectangle [a, b] × [c, d], and this rectangle is the smallest axis-aligned rectangle that covers \Omega. By (3.2), the weight function is equal to v on the noisy subdomains and is evaluated from CF(x) on the non-noisy regions.


Fig. 1. Schematic of the cover function on the rectangle [-2, 2] × [-2, 2].

The cover function on the rectangle [a, b] × [c, d] is defined as follows:

CF(x) = \left( 1 + \frac{e^{-\frac{x-a}{q}} + e^{-\frac{(b-a)-(x-a)}{q}}}{1 + e^{-\frac{1}{q}}} \right) \cdot \left( 1 + \frac{e^{-\frac{y-c}{q}} + e^{-\frac{(d-c)-(y-c)}{q}}}{1 + e^{-\frac{1}{q}}} \right).
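For concreteness, the following MATLAB sketch evaluates the cover function above and the dynamic weight (3.2); the rectangle, the noisy-region indicator inNoisy and the parameter values are placeholders chosen only for illustration.

% Cover function on [a,b] x [c,d]; q controls its smoothness near the boundary.
q = 0.1;  a = 0;  b = 1;  c = 0;  d = 1;   % rectangle bounds (here c is a bound, not the shape parameter)
bump = @(t, len) (exp(-t/q) + exp(-(len - t)/q)) / (1 + exp(-1/q));
CF   = @(x, y) (1 + bump(x - a, b - a)) .* (1 + bump(y - c, d - c));

% Dynamic weight (3.2): a small constant v on the noisy subdomains, CF(x) elsewhere.
v = 1e-4;                                                  % value used in Examples 1 and 2
inNoisy = @(x, y) x > 0.3 & x < 0.6 & y > 0.3 & y < 0.6;   % placeholder noisy region Omega_1
w = @(x, y) v*inNoisy(x, y) + CF(x, y).*(~inNoisy(x, y));

w(0.45, 0.45)    % inside the placeholder noisy region: returns v
w(0.01, 0.01)    % near a corner of the rectangle: boundary is emphasized (roughly 3.6)

With q = 0.1 the weight is close to 1 in the interior of the rectangle and grows toward the boundary, matching the qualitative behaviour described for Fig. 1.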

A sample of the cover function on [-2, 2] × [-2, 2] (with q = 0.1) is presented in Fig. 1. Increasing or decreasing q affects the smoothness of the cover function; in the numerical results presented in Section 5, q = 0.1 is used. As can be seen in Fig. 1, the suggested cover function increases near the boundary. The motivation for this choice is the well-known fact that, both for interpolation and for collocation with radial basis functions, the error is larger near the boundary [8]. By placing more emphasis on the boundary through the weight function, we expect to reduce possible difficulties near the boundary. In addition, we can adjust the parameter v in the weight function (3.2) in accordance with the amount of noise, and different values v_j can be chosen on different noisy subdomains \Omega_j. The induced norm from (3.1) is then of the form

||f||_{2,w}^2 = \sum_{i=1}^{N} [f(x_i)]^2 w(x_i).

It is well known that the best approximation u from U to f is characterized by

f - u \perp_w U \iff \langle f - u, p_k \rangle_w = 0,    k = 1, ..., m,
               \iff \langle f - \sum_{j=1}^{m} c_j p_j, p_k \rangle_w = 0,    k = 1, ..., m,
               \iff \sum_{j=1}^{m} c_j \langle p_j, p_k \rangle_w = \langle f, p_k \rangle_w,    k = 1, ..., m,
               \iff G c = f_p.    (3.3)

Here the Gram matrix G has entries G_{j,k} = \langle p_j, p_k \rangle_w and the right-hand side vector is f_p = [\langle f, p_1 \rangle_w, ..., \langle f, p_m \rangle_w]^T. We refer to (3.3) as the normal equations associated with this problem. Another way to think of this problem is as a pure linear algebra problem. To this end, define the N × m matrix A with entries A_{ij} = p_j(x_i), and the vectors c = [c_1, ..., c_m]^T and f = [f(x_1), ..., f(x_N)]^T. With this notation we seek a solution of the (overdetermined, since N > m) linear system Ac = f. The standard weighted least squares solution is given by the solution of the normal equations A^T w A c = A^T w f, where w is the diagonal weight matrix w = diag(w_1, ..., w_N). This, however, is exactly what is stated in (3.3), i.e. the matrix G is of the form G = A^T w A, and for the right-hand side vector we have f_p = A^T w f [8].

The normal equation method, though easy to understand and implement, may give rise to numerical difficulties in certain cases [18]. First, we might lose some significant digits during the explicit formation of A^T w A, and the computed matrix A^T w A may be far from positive definite; computationally, it may even be singular. Second, the normal equations approach may, in certain cases, introduce more errors than those which are inherent in the problem. Because of these numerical difficulties with the normal equations method, a more efficient method is used.

Theorem 2.
(a) Given A \in R^{m \times n} (m \ge n) and b \in R^m, a vector X \in R^n is a least squares solution to AX = b if and only if X satisfies the normal equations

A^T A X = A^T b.

(b) The least squares solution, when it exists, is unique if and only if A has full rank.

Proof. See [18]. □

Corollary. Given A \in R^{m \times n} (m \ge n), w \in R^{m \times m} with non-negative entries w_{ij}, w^H \in R^{m \times m} with w^H_{ij} = \sqrt{w_{ij}}, and b \in R^m, a vector X \in R^n is a least squares solution to w^H A X = w^H b if and only if X satisfies the normal equations

A^T w A X = A^T w b.

Proof. This corollary can be proved directly from Theorem 2(a). Due to the relationship between the matrices w and w^H we have

(w^H)^T w^H = w,

so

(w^H A)^T (w^H A) = A^T (w^H)^T w^H A = A^T w A

and

(w^H A)^T w^H b = A^T (w^H)^T w^H b = A^T w b.

Thus, the proof is complete. □

Now, instead of solving the normal equations A^T w A X = A^T w b, we can solve the least squares problem w^H A X = w^H b. The least squares solution can then be obtained using the pseudoinverse. Assuming AX = b is a least squares problem with A being m × n (m \ge n) and having full rank, then A^T A is invertible.

Definition. The matrix

A^\dagger = (A^T A)^{-1} A^T

is called the pseudoinverse or the Moore-Penrose generalized inverse of A. It therefore follows from Theorem 2 that the unique least squares solution is [18]

X = A^\dagger b.

The MATLAB command pinv is used for computing the pseudoinverse. Another advantage of using pinv(A) is that if A has more rows than columns and is computationally not of full rank, then the overdetermined least squares problem of minimizing norm(A*X - b) does not have a unique solution. In this situation, the solution computed by X = pinv(A)*b is the minimal norm solution, because it also minimizes norm(X).
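A minimal MATLAB sketch of this solution step, following the corollary above: scale the rows of A and f by the square roots of the weights and apply pinv. Small synthetic inputs stand in here for the design matrix, data and weights of Section 3.

% Weighted least squares solve without forming the normal equations explicitly:
% instead of A'*W*A*coef = A'*W*f, solve (W^(1/2)*A)*coef = W^(1/2)*f with pinv.
N = 40;  m = 6;
A    = rand(N, m);                  % placeholder for A(i,j) = p_j(x_i)
f    = rand(N, 1);                  % placeholder data values
wvec = 0.5 + rand(N, 1);            % placeholder positive weights w(x_i)

wh   = sqrt(wvec);                  % diagonal of W^(1/2)
Aw   = wh .* A;                     % row-scaled design matrix, same as diag(wh)*A
fw   = wh .* f;                     % row-scaled right-hand side
coef = pinv(Aw) * fw;               % minimum-norm least squares solution via the pseudoinverse

% Check against the normal equations of the corollary: A'*diag(wvec)*A*coef = A'*diag(wvec)*f.
norm(A'*(wvec.*(A*coef)) - A'*(wvec.*f))   % small residual (up to rounding)

Row-scaling with the square roots of the weights avoids the explicit formation of A^T w A that the text above warns about.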

4. Existence and uniqueness of a solution

Our aim in this section is to study the discrete least squares problem (see e.g. [19,20]), which involves finding a best approximation to f from U = span{p_1, ..., p_m} with respect to the seminorm derived from the (pseudo) inner product

\langle g_1, g_2 \rangle_w = \sum_{i=1}^{N} g_1(x_i) g_2(x_i) w(x_i).    (4.1)


If {p_1, ..., p_m} is a linearly independent set, then the coefficients (with respect to this basis) of the best approximation from U are the components of the solution vector C of the matrix equation AC = b, where A is the m × m matrix whose (i, j)th entry is \langle p_i, p_j \rangle_w, and the components of the m-vector b are \langle p_i, f \rangle_w, i = 1, ..., m. It is well known (cf. [19]) that U provides a unique best approximation to f precisely when A is invertible; equivalently, when the seminorm derived from the (pseudo) inner product (4.1) is a norm on U. So, to show that U provides a unique best approximation to f, we need to prove the linear independence of {p_k}_{k=1}^{m} and that the seminorm derived from the (pseudo) inner product (4.1) is a norm on U.

First, we show the linear independence of {p_k}_{k=1}^{m}. Let F denote a radial function on R^s and let X = {\xi_i}_{i=1}^{m} denote a set of distinct points (called centers) in R^s. Define p_i(x) = F(x - \xi_i) and U = span{p_i, 1 \le i \le m}. In recent years, considerable attention has been paid to the problem of interpolation from the space U, i.e. finding suitable classes of radial functions F so that, for given data d_1, ..., d_m \in R, there exists a unique function u \in U satisfying the interpolation conditions u(\xi_i) = d_i, i = 1, ..., m; or, equivalently, so that the interpolation matrix (p_i(\xi_j))_{i,j=1}^{m} is nonsingular. If F is the Gaussian radial basis function, then it is known that the associated interpolation matrix is invertible [8]. One consequence of the invertibility of this interpolation matrix is the linear independence of the functions {p_i}_{i=1}^{m} [21].

Next, we show that the seminorm derived from the (pseudo) inner product (4.1) is a norm on U. This is the case when

\forall f \in U \setminus \{0\}:  ||f||_{2,w} > 0.

To prove this by contradiction, we assume it is false; that is,

\exists f \in U \setminus \{0\}:  ||f||_{2,w} = 0.

Since f \in U, we have the representation

f(x) = \sum_{j=1}^{m} c_j p_j(x);

on the other hand, since ||f||_{2,w} = 0, we have

\langle f, f \rangle_w = \sum_{i=1}^{N} [f(x_i)]^2 w(x_i) = \sum_{i=1}^{N} \left( \sum_{j=1}^{m} c_j p_j(x_i) \right)^2 w(x_i) = 0.

Since the weight function w(x) is positive, the above equality implies the linear equations

\sum_{j=1}^{m} c_j p_j(x_i) = 0,    i = 1, ..., N.    (4.2)

The homogeneous system (4.2) is equivalent to the matrix equation AC = 0, where A is the N × m matrix whose (i, j)th entry is p_j(x_i) and C = [c_1, ..., c_m]^T. Because the centers {\xi_j}_{j=1}^{m} associated with {p_k(x) = e^{-c ||x - \xi_k||_2^2}}_{k=1}^{m} in the MDWLS method defined in Section 3 are chosen to form a subset of the data locations {x_j}_{j=1}^{N}, the matrix A has full rank. This is true since, in this case, A has an m × m square submatrix which is nonsingular (by virtue of being an interpolation matrix) [8]. Since A has full rank, AC = 0 has only the trivial solution C = 0 [22]. This contradicts the fact that f \ne 0 and thus completes the proof; i.e.

\forall f \in U \setminus \{0\}:  ||f||_{2,w} > 0.

Therefore the seminorm derived from the (pseudo) inner product (4.1) is a norm on U. Finally, in view of the linear independence of {p_k}_{k=1}^{m} and the fact that the seminorm derived from the (pseudo) inner product (4.1) is a norm on U, U provides a unique best approximation to f.

5. Numerical results

The aim of this section is to demonstrate the MDWLS method described earlier for the solution of noisy least squares problems. For this purpose, we take several examples on different regular and irregular domains in 2D space. In all examples, the capabilities of the MDWLS method are compared with the traditional least squares method (based on RBFs) in the case of noisy data. In the examples proposed in this paper, by selecting an appropriate shape parameter for the Gaussian radial basis functions in the MDWLS method, the instability of the resulting system of algebraic equations is controlled efficiently. Two error norms are used to measure the accuracy of the approximation, the L_\infty norm and the RMS error, defined as follows:


Fig. 2. Schematic of the noisy area \Omega_1 in Example 1.

L_\infty = \max_i |u(X_i) - \tilde{u}(X_i)|,

RMS = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} |u(X_i) - \tilde{u}(X_i)|^2 },

where u(x) is the exact solution, \tilde{u}(x) is the approximate solution, and N is the total number of evaluation points. A uniform distribution of points in the domain is used to evaluate the L_\infty and RMS error norms in every example.

Example 1. Consider 169 data points (x_i, y_i) over [0, 1] × [0, 1] as shown in Fig. 3(a). Let {(x_i, y_i, f(x_i, y_i)), i = 1, ..., 169} be a scattered data set, where f(x, y) is a test function. The data are assumed to be contaminated by noise on the subdomain \Omega_1 (the grey area shown in Fig. 2):

\tilde{f}(x_i, y_i) = f(x_i, y_i) + \epsilon_i,    \forall (x_i, y_i) \in \Omega_1,

where \epsilon_i is the noise term. We use the test function

f(x, y) = 16 x (1 - x) y (1 - y)

to evaluate the scattered data set. We set the \epsilon_i's to be random numbers produced by the command randn in MATLAB, which returns pseudorandom scalar values drawn from a normal distribution with mean 0 and standard deviation 1. The L_\infty and RMS errors are measured using 40 × 40 uniformly distributed points over [0, 1] × [0, 1]. In this example, 169 data points (N = 169) and 25 centers (m = 25) are used in the MDWLS method. The weights are set using the weight function (3.2) with v = 0.0001. Fig. 3(a) shows the nodes distribution (marked with circles) and the centers distribution (marked with crosses). The profile of the surface representing the exact solution is depicted in Fig. 3(b).
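The following MATLAB sketch shows how such a noisy data set can be generated. The exact extent of \Omega_1 is only indicated graphically in Fig. 2, so the rectangle used below is a placeholder, and the evaluation of the fitted surface is left symbolic.

% Example 1 data set (illustrative): 13 x 13 = 169 gridded data points on [0,1]^2,
% the test function of this example, and randn noise added only on Omega_1.
[xg, yg] = meshgrid(linspace(0, 1, 13));
x = xg(:);  y = yg(:);                                  % N = 169 data points
f = @(x, y) 16*x.*(1 - x).*y.*(1 - y);                  % test function
fTilde = f(x, y);
inOmega1 = x > 0.3 & x < 0.6 & y > 0.3 & y < 0.6;       % placeholder for the grey area in Fig. 2
fTilde(inOmega1) = fTilde(inOmega1) + randn(nnz(inOmega1), 1);   % mean 0, std 1 noise

% Error measurement on the 40 x 40 uniform evaluation grid, once an approximation
% uTilde (a column vector of fitted values) has been computed by the MDWLS solve:
[xe, ye] = meshgrid(linspace(0, 1, 40));
% Linf = max(abs(f(xe(:), ye(:)) - uTilde));
% RMS  = sqrt(mean((f(xe(:), ye(:)) - uTilde).^2));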

Fig. 3. The nodes distribution (a) and the exact solution (b) in Example 1.

Table 2
Comparison of accuracy for the MDWLS method and the traditional least squares method on [0, 1] × [0, 1] in Example 1.

Method                              N     m    RMS           L_\infty
MDWLS method                        169   25   3.5 × 10^-5   9.0 × 10^-5
Traditional least squares method    169   25   1.9 × 10^-1   6.2 × 10^-1

Table 3
Comparison of accuracy for the MDWLS method and the traditional least squares method with the non-uniform distribution of data points and centers in Example 1.

Method                              N     m    RMS           L_\infty
MDWLS method                        169   25   1.0 × 10^-4   2.5 × 10^-4
Traditional least squares method    169   25   2.6 × 10^-1   6.7 × 10^-1

Fig. 4. Chebyshev distribution of data points and centers in Example 1.

The shape parameter c = 0.4 is used in both the MDWLS method and the traditional least squares method to obtain the results in Tables 2 and 3. From Table 2, it can be inferred that the surface created by the traditional least squares method does not perform well on the test function, whereas our method produces much better approximate solutions. To show the efficiency of the MDWLS method in the case of non-uniform data, two-dimensional Chebyshev points are used as data points and centers instead of a uniform distribution. Fig. 4 shows the distribution of data points (marked with circles) and centers (marked with crosses) in this case. The numerical results are presented in Table 3.

Example 2. The node points and the centers are the same as in Example 1. Here, we consider a scattered data set {(x_i, y_i, f(x_i, y_i)), i = 1, ..., 169} generated by the test function

f(x, y) = \frac{2.5}{2.5 + (x - 0.8)^2 + (y - 0.2)^2}.

The data are assumed to be contaminated by noise on the subdomain \Omega_1 (the grey area shown in Fig. 5):

\tilde{f}(x_i, y_i) = f(x_i, y_i) + \epsilon_i,    \forall (x_i, y_i) \in \Omega_1,

where \epsilon_i is the noise term. As in Example 1, we set the \epsilon_i's to be random numbers produced by the command randn in MATLAB. The L_\infty and RMS errors are measured using 40 × 40 uniformly distributed points over [0, 1] × [0, 1]. Furthermore, as in Example 1, 169 nodes and 25 centers are used in the MDWLS method. The weights in the MDWLS method are set using the weight function (3.2) with v = 0.0001. Fig. 6(a) shows the nodes distribution (marked with circles) and the centers distribution (marked with plus signs). The profile of the surface representing the exact solution is depicted in Fig. 6(b). The shape parameter c = 0.8 is used in both the MDWLS method and the traditional least squares method to obtain the results in Table 4.


Fig. 5. Schematic of the noisy area \Omega_1 in Example 2.

Fig. 6. The nodes distribution (a) and the exact solution (b) in Example 2.

Fig. 7. Schematic of the noisy areas \Omega_1, \Omega_2 and \Omega_3 in Example 3.

Example 3. Consider 465 data points (x_i, y_i) over a circular domain \Omega as shown in Fig. 8(a). In this example, we consider a scattered data set {(x_i, y_i, f(x_i, y_i)), i = 1, ..., 465} generated by the test function

f(x, y) = sin(\pi x) cosh(y) - cos(\pi x) sinh(y).

The data are assumed to be contaminated by noise on the subdomains \Omega_1, \Omega_2 and \Omega_3 (the grey areas shown in Fig. 7):

Table 4
Comparison of accuracy for the MDWLS method and the traditional least squares method on [0, 1] × [0, 1] in Example 2.

Method                              N     m    RMS           L_\infty
MDWLS method                        169   25   5.4 × 10^-5   1.6 × 10^-4
Traditional least squares method    169   25   2.1 × 10^-1   5.7 × 10^-1

Fig. 8. The nodes distribution (a) and the exact solution in the extended domain (b) in Example 3.

Fig. 9. Schematic of the noisy area \Omega_1 in Example 4.

\tilde{f}(x_i, y_i) = f(x_i, y_i) + \epsilon_i,    \forall (x_i, y_i) \in \Omega_1 \cup \Omega_2 \cup \Omega_3,

where \epsilon_i is the noise term. As in Examples 1 and 2, we set the \epsilon_i's to be random numbers produced by the command randn in MATLAB. The L_\infty and RMS errors are measured using 1384 uniformly distributed points over the circular domain, of which 200 points lie on its boundary. In this example, the numbers of data points and centers used in the MDWLS method are 465 and 109, respectively. The weights in the MDWLS method are set using the weight function (3.2) with v = 0.000001. Fig. 8(a) shows the nodes distribution (marked with circles) and the centers distribution (marked with plus signs). The profile of the surface representing the exact solution in the extended domain is depicted in Fig. 8(b).


Table 5
Comparison of accuracy for the MDWLS method and the traditional least squares method on the circular domain in Example 3.

Method                              N     m     RMS           L_\infty
MDWLS method                        465   109   4.4 × 10^-7   5.0 × 10^-5
Traditional least squares method    465   109   1.6 × 10^-3   2.6 × 10^-1

Fig. 10. The nodes distribution (a) and the exact solution in the extended domain (b) in Example 4.

Table 6
Comparison of accuracy for the MDWLS method and the traditional least squares method on the computational domain in Example 4.

Method                              N     m    RMS           L_\infty
MDWLS method                        345   85   3.4 × 10^-6   3.8 × 10^-4
Traditional least squares method    345   85   6.9 × 10^-3   7.4 × 10^-1

The fixed shape parameter c = 0.7 is used in both the MDWLS method and the traditional least squares method to obtain the results in Table 5.

Example 4. Consider 345 data points (x_i, y_i) over the computational domain \Omega shown in Fig. 10(a), where \Omega is a domain whose boundary is the curve defined by the parametric equation

\partial\Omega = \{ (x, y) : x = \rho \cos(\theta), y = \rho \sin(\theta), -\pi \le \theta \le \pi \},

where

\rho = 1 + (\cos(2\theta))^2.

Here, we consider a scattered data set {(x_i, y_i, f(x_i, y_i)), i = 1, ..., 345} generated by the test function

f(x, y) = y \sin(\pi x) + x \cos(\pi y).

The data are assumed to be contaminated by noise on the subdomain \Omega_1 (the grey area shown in Fig. 9):

\tilde{f}(x_i, y_i) = f(x_i, y_i) + \epsilon_i,    \forall (x_i, y_i) \in \Omega_1,

where \epsilon_i is the noise term. We set the \epsilon_i's to be random numbers produced by the command randn in MATLAB. The L_\infty and RMS errors are measured using 908 uniformly distributed points over the computational domain, of which 200 points lie on its boundary. Furthermore, 345 nodes and 85 centers are used in the MDWLS method. The weights in the MDWLS method are set using the weight function (3.2) with v = 0.0001. Fig. 10(a) shows the nodes distribution (marked with circles) and the centers distribution (marked with plus signs). The profile of the surface representing the exact solution in the extended domain is depicted in Fig. 10(b).
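A small MATLAB sketch of the boundary curve of this example and a point-in-domain test derived from it; how the interior nodes of Fig. 10(a) were generated is not specified in the paper, so only the boundary and the indicator are shown.

% Boundary of the computational domain in Example 4:
% x = rho*cos(theta), y = rho*sin(theta), rho = 1 + cos(2*theta)^2, -pi <= theta <= pi.
theta = linspace(-pi, pi, 400);                 % boundary samples
rho   = 1 + cos(2*theta).^2;
xb = rho.*cos(theta);   yb = rho.*sin(theta);
plot(xb, yb);  axis equal                       % draws the four-lobed boundary of Fig. 10(a)

% A point (x, y) belongs to Omega when its polar radius does not exceed rho(theta).
inOmega = @(x, y) hypot(x, y) <= 1 + cos(2*atan2(y, x)).^2;
inOmega(0.5, 0.5)                               % returns true: the point lies inside the domain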


The fixed shape parameter c = 0.6 is used in both the MDWLS method and the traditional least squares method to obtain the results in Table 6.

6. Conclusion

A problem encountered in many numerical applications is to find a smooth surface which approximately reproduces a given set of scattered data points. In some applications, the data are assumed to be contaminated by noise in some local regions of the problem's domain. It is not appropriate to interpolate the noisy data, and the traditional least squares method may fail to work. To overcome this difficulty, a meshfree method, namely the meshless dynamic weighted least squares method, was presented in this paper for the solution of noisy least squares problems. The MDWLS method with Gaussian radial basis functions has been proposed to fit scattered data with some noisy areas in the problem's domain. The method is efficient, simple, meshfree, adaptable to nonregular geometrical domains, and independent of the dimension of the problem. Several examples have been solved on different regular and irregular domains in 2D space. Numerical results showed the capabilities and improved efficiency of the MDWLS method when compared with the traditional least squares method (based on RBFs) in the case of noisy data. In the MDWLS method, an appropriate weight function is chosen according to the position of the noisy data and the amount of noise. In general, the choice of the optimal weight function, especially when the position of the noisy data is not known, is an interesting topic for future research.

References

[1] M.D. Buhmann, Radial Basis Functions: Theory and Implementations, Cambridge Monographs on Applied and Computational Mathematics, vol. 12, Cambridge University Press, Cambridge, 2003.
[2] M. Esmaeilbeigi, M.M. Hosseini, S.T. Mohyud-Din, A new approach of the radial basis functions method for telegraph equations, Int. J. Phys. Sci. 6 (6) (2011) 1517-1527.
[3] M. Esmaeilbeigi, M.M. Hosseini, Dynamic node adaptive strategy for nearly singular problems on large domains, Eng. Anal. Bound. Elem. 36 (2012) 1311-1321.
[4] A.H.-D. Cheng, M.A. Golberg, E.J. Kansa, G. Zammito, Exponential convergence and h-c multiquadric collocation method for partial differential equations, Numer. Methods Partial Differ. Equat. 19 (5) (2003) 571-594.
[5] M. Buhmann, N. Dyn, Spectral convergence of multiquadric interpolation, Proc. Edinb. Math. Soc. 36 (2) (1993) 319-333.
[6] T. Zhou, D. Han, A weighted least squares method for scattered data fitting, J. Comput. Appl. Math. 217 (2008) 56-63.
[7] H. Wendland, Scattered Data Approximation, Cambridge Monographs on Applied and Computational Mathematics, vol. 17, Cambridge University Press, Cambridge, 2005.
[8] G.E. Fasshauer, Meshfree Approximation Methods with MATLAB, Interdisciplinary Mathematical Sciences, vol. 6, World Scientific Publishing Company, Singapore, 2007.
[9] F.M.B. Martínez, Meshless methods for elliptic and free-boundary problems, Ph.D. thesis, 2008.
[10] G.B. Wright, Radial basis function interpolation: numerical and analytical developments, Ph.D. thesis, 2003.
[11] J. Yoon, Spectral approximation orders of radial basis function interpolation on the Sobolev space, SIAM J. Math. Anal. 23 (2001) 946-958.
[12] R. Schaback, Error estimates and condition numbers for radial basis function interpolation, Adv. Comput. Math. 3 (1995) 251-264.
[13] W.R. Madych, S.A. Nelson, Bounds on multivariate polynomials and exponential error estimates for multiquadric interpolation, J. Approx. Theory 70 (1992) 94-114.
[14] B. Fornberg, N. Flyer, Accuracy of radial basis function interpolation and derivative approximations on 1-D infinite grids, Adv. Comput. Math. 23 (2005) 5-20.
[15] W.R. Madych, Miscellaneous error bounds for multiquadric and related interpolants, Comput. Math. Appl. 24 (1992) 121-138.
[16] F.J. Narcowich, J.D. Ward, Norm estimates for the inverses of a general class of scattered-data radial-function interpolation matrices, J. Approx. Theory 69 (1992) 84-109.
[17] E.W. Cheney, W.A. Light, A Course in Approximation Theory, Graduate Studies in Mathematics, vol. 101, Brooks/Cole Publishing Company, Pacific Grove, 2000.
[18] B.N. Datta, Numerical Linear Algebra and Applications, second ed., Society for Industrial and Applied Mathematics, Philadelphia, 2010.
[19] C. de Boor, A Practical Guide to Splines, Applied Mathematical Sciences, Springer, Berlin, Heidelberg, New York, 1978.
[20] E.W. Cheney, Introduction to Approximation Theory, McGraw-Hill, New York, 1966.
[21] N. Sivakumar, J.D. Ward, On the least squares fit by radial functions to multidimensional scattered data, Numer. Math. 65 (1993) 219-243.
[22] G. Engeln-Müllges, F. Uhlig, Numerical Algorithms with C, Springer, Berlin, New York, 1996.