
Quasi-interpolation Scheme for Arbitrary Dimensional Scattered Data Approximation Based on Natural Neighbors and RBF Interpolation

Renzhong Feng∗

Shun Peng

School of Mathematics and Systems Science & Key Laboratory of Mathematics, Informatics and Behavioral Semantics, Ministry of Education, Beijing University of Aeronautics and Astronautics, Beijing 100191, China

Abstract: A local quasi-interpolation scheme is presented in this paper, which can be applied to scattered data approximation in arbitrary dimensions with high accuracy, high efficiency and high stability. Under the new quasi-interpolation scheme, we treat the natural neighbor set of the point to be estimated in a given scattered node set as the local interpolation node set. Based on this local node set, a local radial basis function interpolation with algebraic precision of any degree is constructed through a modified Taylor expansion of the sampled function at each natural neighbor to compute approximate values. Since the point to be estimated is excluded from its natural neighbor set, i.e. the interpolation node set, the interpolation scheme we construct is a local quasi-interpolation scheme, and its approximation error is given. The numerical experimental results show that, when derivative information is not used, our method outperforms the well-known natural neighbor interpolation method in both approximation accuracy and efficiency. As the algebraic precision increases, the approximation accuracy can be further improved.

Keywords: Scattered data, Local interpolation, Natural neighbor, RBF interpolation, Quasi-interpolation, Error analysis.

∗ Corresponding author: [email protected] (R. Feng), [email protected] (S. Peng).


1 Introduction

Scattered data approximation is widely applied in various research fields such as PDEs, image reconstruction, neural networks, fuzzy systems, GIS, optics and interferometry. The construction of scattered data approximation methods with high precision, high efficiency and strong stability has always been a hotspot in approximation theory. For low dimensional approximation problems, triangular meshes are often used to obtain piecewise smooth triangular interpolants that approximate scattered data, see [1, 2, 3, 4, 5, 6]. These are the so-called local interpolation methods, in which only the information of the scattered data near the point to be estimated is taken into consideration. For these local methods, triangulation plays a critical role, as their approximation performance strongly relies on the triangulation algorithm, but their limitation in extending to higher dimensions is obvious. With the rapid advancement of science and technology, high dimensional scattered data fitting problems have begun to attract considerable attention in animation design, dynamic medical image processing and scientific computation. However, current multivariate scattered data approximation methods remain unsatisfactory, especially in more than three dimensions. Since 1988, more and more efforts have been made to develop radial basis methods, and breakthroughs have been achieved in the properties of the coefficient matrices, the establishment of simplified algorithms, the selection of the best interpolation points, and so on. Thus, it is widely believed that the radial basis function (RBF) method is an effective way to solve the multivariate scattered data approximation problem, see [7, 8, 9, 10]. The advantage of RBF interpolation is its insensitivity to dimensionality. However, applying RBF interpolation requires solving a system of algebraic equations to determine the coefficients of the interpolation function. These equations are usually ill-conditioned and cannot be solved effectively and stably, especially when the scale of the scattered data is large. One feasible solution for overcoming this challenge is to search for better basis functions [11].

As mentioned above, only selected local data information is utilized in a local approximation method. Hence, the coefficients of the interpolation basis functions can be obtained without solving the whole linear system. Therefore, local approximation is considered an alternative way to avoid the difficulties of global radial basis interpolation. The key point of a local approximation method is how to determine the local information near the point to be estimated. Currently, two types of methods are commonly used by researchers. The first is to apply triangulation when the scattered data is low dimensional, but its aforementioned shortcoming is the difficulty of extending it to high dimensional spaces. The other is to use a weight function with local support for each basis function, see [12, 13, 14, 15], which can easily be applied to scattered data approximation in any dimension. However, since the support of the weight function is determined by a prescribed influence radius, when the influence radius is small the amount of data used for the approximation can be particularly small, which may lead to poor approximation performance. So, how to choose the influence radius is a vital part of achieving stable and accurate local approximations.

When Sibson introduced the concept of natural neighbor coordinates in [16, 17], he also introduced another concept, the natural neighbors of a given point in a certain point set. Every point of a data set of any dimension has its own natural neighbors in the set (see the definition in Sec. 2). The advantage of the natural neighbor set is that this nearest node set of a point is defined purely geometrically, and its location and size depend only on the local distribution of the nodes. If the distances between some nodes are relatively large, or the distribution is highly anisotropic, this is reflected in the natural neighbor set, but it still produces the best node set around the point. For this reason, the natural neighbor set is a relatively satisfactory interpolation node set for a local interpolation scheme. In this paper, we first treat the natural neighbor set of the point to be estimated in a given scattered node set as the local interpolation node set. Subsequently, a local radial basis function interpolation with algebraic precision of any degree is constructed by using a modified Taylor expansion at each natural neighbor to compute approximate values. Since the point to be estimated is excluded from its natural neighbor set, i.e. the interpolation node set, the interpolation scheme we construct is a local quasi-interpolation scheme. This scheme has high approximation accuracy and computational efficiency because the number of natural neighbors of each point is relatively small. The numerical experimental results show that, when derivative information is not used, our method outperforms the original natural neighbor interpolation (NNI) method in both approximation efficiency and accuracy. As the algebraic precision increases, the approximation accuracy can be further improved.

The rest of this article is organized as follows: in Sec. 2, the concept of natural neighbor and its search algorithm is briefly introduced. In Sec. 3, the construction of the quasi-interpolation scheme with arbitrary algebraic precision based on natural neighbors and its error estimation are presented. In Sec. 4, we apply the new interpolation scheme to the approximation of scattered data in 3D space, followed by experimental results and discussion. In the end, the work is summarized and concluded in Sec. 5.

2 Natural neighbor and its search algorithm

Given a set of nodes {v_1, ..., v_N} in R^s, denote Ω as the convex hull of this node set and

    C_i = { v ∈ R^s : ||v − v_i|| ≤ ||v − v_j||, ∀ j ≠ i },

where ||·|| represents the Euclidean norm. Here we call C_i the Voronoi element of v_i. The natural neighbors of a node v_i in R^s are the nodes whose Voronoi elements share a common edge with the Voronoi element of v_i, in other words, the nodes connected to v_i by an edge of a Delaunay triangle with vertex v_i. Figure 1 shows a node A with 7 natural neighbors, and Figure 2 shows the corresponding Delaunay triangulation.

Figure 1: Node A and its natural neighbors

Figure 2: Delaunay triangulation


A more specific discussion about natural neighbors can be found in Watson [18, 19]. Although natural neighbors usually refer to the nodes, one can equally well define a set of natural neighbors for any point v ∈ R^s, as the set of nodes that would be connected to the point if it were added to the Delaunay triangulation. The importance of natural neighbors is that they represent a set of 'closest surrounding nodes' whose number and positions are determined merely by the local nodal distribution. One can think of the natural neighbors of any point as a unique set of nodes that defines the neighborhood of the point. If the distance between nodes is relatively large in some places, or the distribution is highly anisotropic, then the set of natural neighbors will reflect these features, but nevertheless still represent the best set of nearby surrounding nodes. They are therefore ideal candidates for the nodes of a local interpolation scheme

    I f(v) = Σ_{i=1}^{k} φ_i(v) f(v_i),                                                     (2.1)

where f(v) is the interpolated function, f(v_i), i = 1, ..., k, are the data values at the natural neighbor nodes of the point v, and φ_i(v), i = 1, ..., k, are weights associated with each node. Since the summation in (2.1) is only over the natural-neighbor nodes, regardless of how the φ_i are determined, the interpolation is guaranteed to be local.

So how do we find the natural neighbors of an arbitrary point v ∈ R^s in a given point set? We briefly introduce the search algorithms in R^2 and R^3; the algorithms in higher dimensional spaces are similar. In 2D space, the procedure begins by finding the Delaunay triangle which contains the point v(x, y), and then all circum-triangles of v are found. A circum-triangle of a point is a Delaunay triangle whose circumcircle contains the point (the circumcircle is the circle that passes through the three nodes of a triangle). The vertices of the circum-triangles of v are in fact the natural neighbors of v. As the number of nodes becomes large, it would cost a great deal of time to search through all triangles to find the ones whose circumcircles contain v, especially since the search would have to be repeated for every new v. Fortunately, the search algorithm of Lawson [20] can be used to find all circum-triangles. The steps of the algorithm are quite simple: the 'walk' starts with an initial-guess triangle t and tests each side i (i = 1, 2, 3) in turn (triangle sides are defined by the indices of the opposite nodes, which are in counter-clockwise order). If the point v lies to the 'right' of the current side, then the walk moves to the neighboring triangle that shares this edge. The three sides of this new triangle are then considered in the same way as before. The walk stops when v lies to the 'left' of all three sides, in which case it must be inside the current triangle. After the triangle which contains v is found, its neighboring triangles (triangles which share a common side) are searched to find the circum-triangles of v. Then the natural neighbors are found, see [20]. To test whether the point v(x, y) lies to the right of side i, we need only check whether the condition

    (y_j − y)(x_k − x) > (x_j − x)(y_k − y)

holds, where v_j = (x_j, y_j) and v_k = (x_k, y_k) are the nodes on the edge, and j and k are defined by cyclic rotation of the indices. This is a local search method, which remains efficient even for a large number of nodes.

In 3D space, the algorithm requires some modifications to work with tetrahedra, because there is no counter-clockwise ordering of the sides in the 3D case. Our extension is quite simple: we merely replace the test at each side of the triangle with a test at each facet of the tetrahedron of whether v and the opposite vertex v_i lie on opposite sides of face i, i.e., whether the condition

    [ (v_i − v_j) · ((v_k − v_j) × (v_t − v_j)) ] [ (v − v_j) · ((v_k − v_j) × (v_t − v_j)) ] < 0

holds, where face i contains the nodes v_j, v_k, v_t. If this condition is met for any face i, then we move into the tetrahedron on the other side of that face and start again. The algorithm can move from one tetrahedron to another regardless of size and orientation, similarly to the triangle search algorithm; the difference in the 3D case is that the walk may proceed in an arbitrary direction.
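For concreteness, the circum-triangle characterization above can be sketched in Python as follows. This is only an illustrative sketch, not the implementation used in the paper: it assumes SciPy's Delaunay triangulation and tests every triangle by brute force instead of performing Lawson's walk, and the function name is ours.

```python
import numpy as np
from scipy.spatial import Delaunay

def natural_neighbors_2d(nodes, v):
    """Return the indices of the natural neighbors of a query point v
    (assumed to lie inside the convex hull of `nodes`): the vertices of
    every Delaunay triangle whose circumcircle contains v."""
    tri = Delaunay(nodes)
    neighbors = set()
    for simplex in tri.simplices:
        a, b, c = nodes[simplex]
        # circumcenter of the triangle (a, b, c)
        d = 2.0 * (a[0]*(b[1]-c[1]) + b[0]*(c[1]-a[1]) + c[0]*(a[1]-b[1]))
        ux = ((a @ a)*(b[1]-c[1]) + (b @ b)*(c[1]-a[1]) + (c @ c)*(a[1]-b[1])) / d
        uy = ((a @ a)*(c[0]-b[0]) + (b @ b)*(a[0]-c[0]) + (c @ c)*(b[0]-a[0])) / d
        center = np.array([ux, uy])
        # the triangle is a circum-triangle of v if v lies inside its circumcircle
        if np.linalg.norm(v - center) <= np.linalg.norm(a - center):
            neighbors.update(int(i) for i in simplex)
    return sorted(neighbors)
```

Replacing the loop over all simplices by the walk described above only changes where the search starts; the circumcircle test itself is unchanged.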


3 Quasi-interpolation scheme with arbitrary algebraic precision based on the natural neighbor set

3.1 Standard RBF Interpolation

The construction of the approximation algorithm is introduced in this section using the example of 3D scattered data {(v_i, f(v_i))}_{i=1}^{N} ⊂ R^3; it can be directly extended to scattered data of any dimension. Denote χ = {v_i}_{i=1}^{N} and call it the node set, and let Ω be the convex hull of {v_i}_{i=1}^{N}. We assume that v(x, y) ∈ Ω ⊂ R^2 and that {v_{n,i}(x_{n,i}, y_{n,i})}_{i=1}^{k} is the natural neighbor set of the point v in χ. We construct a radial basis interpolation function with interpolation nodes {v_{n,i}}_{i=1}^{k}:

    q_f(x, y) = Σ_{i=1}^{k} λ_i φ(||v − v_{n,i}||) + Σ_{j=1}^{m} β_j p_j(x, y),             (3.1)

where φ(r) is a radial function, ||·|| is the standard Euclidean norm and {p_j(x, y)}_{j=1}^{m} is a basis for the space of all bivariate polynomials of degree ≤ Q. The degree of the augmented polynomial in (3.1) depends on φ(r), and its inclusion may be necessary to guarantee a well-posed interpolation problem [7]. Table 1 lists a few of the many available choices for φ(r). Although (3.1) is well-posed without any polynomial augmentation for the GA, MQ, IMQ and IQ RBFs in Table 1, we set Q = 1 (i.e., m = 3) in order to impose the condition that the RBF interpolant is exact for polynomials of degree one. Thus, the interpolation coefficients in (3.1) are determined by the following conditions:

    q_f(v_{n,i}) = f(v_{n,i}),               i = 1, ..., k,
    Σ_{i=1}^{k} λ_i p_j(v_{n,i}) = 0,        j = 1, 2, 3,                                   (3.2)

where p_1(x, y) = 1, p_2(x, y) = x, p_3(x, y) = y. Conditions (3.2) lead to the symmetric block linear system of equations


    ( Φ    P ) ( λ )     ( f )
    ( P^T  0 ) ( β )  =  ( 0 ),                                                             (3.3)

where Φ_{i,j} = φ(||v_{n,i} − v_{n,j}||), i, j = 1, ..., k,

    P = ( 1  x_{n,1}  y_{n,1} )
        ( ⋮     ⋮        ⋮    ),        f = ( f(v_{n,1}), ..., f(v_{n,k}) )^T,
        ( 1  x_{n,k}  y_{n,k} )

and we denote the block coefficient matrix ( Φ  P ; P^T  0 ) as A.

Table 1: Some commonly used radial basis functions

    Type of basis function              φ(r)
    Piecewise smooth RBFs
    · Generalized Duchon spline         r^{2k} log r, k ∈ N, or r^{2ν}, ν > 0 and ν ∉ N
    · Wendland                          (1 − r)_+^k p(r), p a polynomial, k ∈ N
    Infinitely smooth RBFs
    · Gaussian (GA)                     e^{−(εr)^2}
    · Generalized multiquadric          (1 + (εr)^2)^{ν/2}, ν ≠ 0 and ν ∉ 2N
    · Multiquadric (MQ)                 (1 + (εr)^2)^{1/2}
    · Inverse multiquadric (IMQ)        (1 + (εr)^2)^{−1/2}
    · Inverse quadric (IQ)              (1 + (εr)^2)^{−1}
    Note: in all cases, ε > 0.

The radial basis function interpolation (3.1) possesses algebraic precision of degree one, i.e., when f(x, y) = a_1 + a_2 x + a_3 y we have q_f(x, y) = f(x, y). How can the algebraic precision of (3.1) be improved? The most commonly used method is to extend (3.1) to

    q_f(x, y) = Σ_{i=1}^{k} λ_i φ(||v − v_{n,i}||) + Σ_{j=1}^{m} β_j p_j(x, y),   m > 3,    (3.4)

where {p_j}_{j=1}^{m} is a basis of the bivariate polynomial space Π_Q^2 of degree Q (Q > 1). In order to guarantee that (3.4) is well-posed, the node set {v_{n,i}}_{i=1}^{k} must be unisolvent for Π_Q^2, which, however, is generally not satisfied. In this paper, we instead use a modified Taylor expansion of the sampled function f(x, y) at each node to improve the algebraic precision of the RBF interpolation based on the interpolation conditions (3.2).
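As a small illustration of the construction in Section 3.1, the block system (3.3) with Q = 1 can be assembled and solved directly once the natural neighbors of v are known. The sketch below is ours, not the authors' code; it assumes the multiquadric φ(r) = √(r² + c) with the shape parameter c = 0.05 used in the numerical experiments of Section 4, and the helper names are invented for illustration.

```python
import numpy as np

def mq(r, c=0.05):
    # multiquadric radial function phi(r) = sqrt(r^2 + c), c > 0
    return np.sqrt(r**2 + c)

def rbf_coeffs(nbrs, fvals, phi=mq):
    """Solve the block system (3.3) for the natural-neighbor set `nbrs`
    (shape (k, 2)) and sampled values `fvals`; returns (lambda, beta)."""
    k = len(nbrs)
    r = np.linalg.norm(nbrs[:, None, :] - nbrs[None, :, :], axis=-1)
    Phi = phi(r)                                    # k x k radial block
    P = np.hstack([np.ones((k, 1)), nbrs])          # columns 1, x, y
    A = np.block([[Phi, P], [P.T, np.zeros((3, 3))]])
    rhs = np.concatenate([fvals, np.zeros(3)])
    coef = np.linalg.solve(A, rhs)
    return coef[:k], coef[k:]

def rbf_eval(v, nbrs, lam, beta, phi=mq):
    """Evaluate q_f(v) of (3.1) at the point v."""
    return phi(np.linalg.norm(v - nbrs, axis=1)) @ lam + np.array([1.0, v[0], v[1]]) @ beta
```

Because k is the (small) number of natural neighbors, the system is only of size (k + 3) × (k + 3), which is what keeps the local scheme cheap.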

3.2 Radial basis function interpolation with algebraic precision of degree n + 1 (n ≥ 1)

Assume that the derivative values of f(x, y) at the nodes v_i, i = 1, ..., N, are given. In order to improve the algebraic precision, the standard radial basis function interpolation (3.1) is first converted into the Lagrange form

    q_f(x, y) = Σ_{i=1}^{k} ψ_i(x, y) f(v_{n,i}),                                           (3.5)

where ψ_i(x, y) satisfies the cardinal conditions

    ψ_i(v_{n,j}) = 1  if i = j,    ψ_i(v_{n,j}) = 0  if i ≠ j,    i, j = 1, ..., k.         (3.6)

There is a nice closed-form expression for ψ_i(x, y) that first appeared in [21]. Denote by A_i(x, y), i = 1, ..., k, the matrix A in (3.3) with the i-th row replaced by the vector

    B(v) = [ φ(||v − v_{n,1}||)  φ(||v − v_{n,2}||)  ···  φ(||v − v_{n,k}||)  1  x  y ].

Note that with this notation A = Ai (vn,i ).

Theorem 3.1. (cf. [21]) The cardinal RBF interpolant of the form (3.1) that satisfies (3.6) is given by

    ψ_i(v) = det(A_i(v)) / det(A).                                                          (3.7)
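In practice one does not need to form the determinant ratio (3.7) explicitly: since A is symmetric, Cramer's rule shows that the first k entries of the solution of A x = B(v)^T are exactly the cardinal values ψ_1(v), ..., ψ_k(v). A minimal sketch of this equivalent computation (our own, again using the multiquadric):

```python
import numpy as np

def cardinal_values(v, nbrs, phi=lambda r, c=0.05: np.sqrt(r**2 + c)):
    """psi_i(v), i = 1..k, of Theorem 3.1, obtained by solving A x = B(v)^T;
    by Cramer's rule the first k entries equal det(A_i(v)) / det(A)."""
    k = len(nbrs)
    r = np.linalg.norm(nbrs[:, None, :] - nbrs[None, :, :], axis=-1)
    Phi = phi(r)
    P = np.hstack([np.ones((k, 1)), nbrs])
    A = np.block([[Phi, P], [P.T, np.zeros((3, 3))]])
    B = np.concatenate([phi(np.linalg.norm(v - nbrs, axis=1)), [1.0, v[0], v[1]]])
    return np.linalg.solve(A, B)[:k]
```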

Assume that f : R^2 → R is C^{n+1} in a neighborhood of Ω. For each a ∈ Ω, let T_a^n f(v) be the degree-n Taylor polynomial of f expanded about a, and let R_a^n f(v) = f(v) − T_a^n f(v) be the remainder. Taylor's theorem in dimension m ≥ 2 is presented in a variety of different forms by different authors. For our purposes we select the vector form

    T_a^n f(v) = f(a) + [(v − a)·∇] f(u)|_{u=a} + ··· + (1/n!) [(v − a)·∇]^n f(u)|_{u=a}.

In an effort to simplify our formulas, we will write [(v − a)·∇]^n f(a) in place of [(v − a)·∇]^n f(u)|_{u=a}. So the degree-n Taylor polynomial expanded about a becomes

    T_a^n f(v) = Σ_{j=0}^{n} (1/j!) [(v − a)·∇]^j f(a).

The remainder is given by

    R_a^n f(v) = (1/(n+1)!) [(v − a)·∇]^{n+1} f(z)

for some z = a + θ(v − a) = (a_1 + θ(x − a_1), a_2 + θ(y − a_2)), 0 ≤ θ ≤ 1, on the segment connecting a and v. To prove the theorems of this section we will need to apply the operator [(v − a)·∇] in a variety of situations.

Lemma 3.1. If z = a + θ(v − a), 0 ≤ θ ≤ 1, then

    [(v − a)·∇] [(v − a)·∇]^n f(z) = n [(v − a)·∇]^n f(z) + θ [(v − a)·∇]^{n+1} f(z).

Proof. Since (v − a)·∇ = (x − a_1) ∂/∂x + (y − a_2) ∂/∂y and

    [(v − a)·∇]^n f(z) = Σ_{k=0}^{n} C(n, k) (x − a_1)^{n−k} (y − a_2)^{k} ∂^{n}f/∂x^{n−k}∂y^{k}(z),

we have

    [(v − a)·∇] [(v − a)·∇]^n f(z)
      = [ (x − a_1) ∂/∂x + (y − a_2) ∂/∂y ] Σ_{k=0}^{n} C(n, k) (x − a_1)^{n−k} (y − a_2)^{k} ∂^{n}f/∂x^{n−k}∂y^{k}(z)
      = Σ_{k=0}^{n} C(n, k) [ (n − k)(x − a_1)^{n−k}(y − a_2)^{k} ∂^{n}f/∂x^{n−k}∂y^{k}(z)
            + θ(x − a_1)^{n+1−k}(y − a_2)^{k} ∂^{n+1}f/∂x^{n+1−k}∂y^{k}(z) ]
        + Σ_{k=0}^{n} C(n, k) [ k(x − a_1)^{n−k}(y − a_2)^{k} ∂^{n}f/∂x^{n−k}∂y^{k}(z)
            + θ(x − a_1)^{n−k}(y − a_2)^{k+1} ∂^{n+1}f/∂x^{n−k}∂y^{k+1}(z) ]
      = n [(v − a)·∇]^n f(z) + θ Σ_{k=0}^{n+1} C(n+1, k) (x − a_1)^{n+1−k}(y − a_2)^{k} ∂^{n+1}f/∂x^{n+1−k}∂y^{k}(z)
      = n [(v − a)·∇]^n f(z) + θ [(v − a)·∇]^{n+1} f(z).   □

According to Lemma 3.1, with θ = 0 we have

    [(v − a)·∇] T_a^{n+1} f(v) = Σ_{j=0}^{n} (1/j!) [(v − a)·∇]^{j+1} f(a).                 (3.8)

As a consequence of Lemma 3.1, we also have

    [(v − a)·∇] f(v) − [(v − a)·∇] T_a^{n+1} f(v)
      = [(v − a)·∇] R_a^{n+1} f(v)
      = [(v − a)·∇] (1/(n+2)!) [(v − a)·∇]^{n+2} f(z)
      = (n + 2) R_a^{n+1} f(v) + θ (1/(n+2)!) [(v − a)·∇]^{n+3} f(z)
      = (n + 2) R_a^{n+1} f(v) + θ (n + 3) R_a^{n+2} f(v).                                  (3.9)

Definition 3.1. For a ∈ R^2, let L_a^n be the linear mapping given by the formula

    L_a^n f(v) = Σ_{j=0}^{n} ((n + 1 − j)/(n + 1)) (1/j!) [(v − a)·∇]^j f(a).

Remark 3.1. L_a^n f(v) can be seen as a modified degree-n Taylor polynomial of f expanded about a.
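For the case n = 1 used later in the experiments, the modified Taylor polynomial reduces to L_a^1 f(v) = f(a) + (1/2) (v − a)·∇f(a). A one-line sketch (ours, assuming the value and gradient of f at a are available):

```python
import numpy as np

def modified_taylor_1(v, a, f_a, grad_a):
    """L_a^1 f(v) of Definition 3.1 with n = 1:
    (2/2) f(a) + (1/2) [(v - a).grad] f(a)."""
    return f_a + 0.5 * np.dot(np.asarray(v) - np.asarray(a), np.asarray(grad_a))
```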

Lemma 3.2. T_a^{n+1} f(v) − L_a^n f(v) = (1/(n+1)) [(v − a)·∇] T_a^{n+1} f(v).

Proof.

    T_a^{n+1} f(v) − L_a^n f(v)
      = Σ_{j=0}^{n+1} (1/j!) [(v − a)·∇]^j f(a) − Σ_{j=0}^{n+1} ((n + 1 − j)/(n + 1)) (1/j!) [(v − a)·∇]^j f(a)
      = (1/(n+1)) Σ_{j=1}^{n+1} (1/(j−1)!) [(v − a)·∇]^j f(a)
      = (1/(n+1)) Σ_{j=0}^{n} (1/j!) [(v − a)·∇]^{j+1} f(a)
      = (1/(n+1)) [(v − a)·∇] T_a^{n+1} f(v).        (by Eq. (3.8))   □

Definition 3.2. q_f^n(x, y) is the interpolant defined as follows:

    q_f^n(v) = Σ_{i=1}^{k} ψ_i(v) L_{v_{n,i}}^n f(v),                                       (3.10)

where the ψ_i(x, y) are the Lagrange interpolation basis functions determined by (3.7) and {v_{n,i}}_{i=1}^{k} are the natural neighbors of v in χ.

Theorem 3.2. If f is a polynomial of degree no more than n + 1, then on Ω we have q_f^n(v) ≡ f(v).

Proof. According to the linear reproduction property of the ψ_i(x, y), we have

    Σ_{i=1}^{k} (v − v_{n,i}) ψ_i(v) = 0.

Then we have

    Σ_{i=1}^{k} [(v − v_{n,i})·∇] f(v) ψ_i(v) = 0.                                          (3.11)

From (3.9), (3.11) and Lemma 3.2, we observe that

    f(v) − q_f^n(v)
      = Σ_{i=1}^{k} ( f(v) − L_{v_{n,i}}^n f(v) ) ψ_i(v)
      = Σ_{i=1}^{k} ( f(v) − T_{v_{n,i}}^{n+1} f(v) + T_{v_{n,i}}^{n+1} f(v) − L_{v_{n,i}}^n f(v) ) ψ_i(v)
      = Σ_{i=1}^{k} ( R_{v_{n,i}}^{n+1} f(v) + (T_{v_{n,i}}^{n+1} f(v) − L_{v_{n,i}}^n f(v)) − (1/(n+1)) [(v − v_{n,i})·∇] f(v) ) ψ_i(v)
      = Σ_{i=1}^{k} ( R_{v_{n,i}}^{n+1} f(v) + (1/(n+1)) [(v − v_{n,i})·∇] T_{v_{n,i}}^{n+1} f(v) − (1/(n+1)) [(v − v_{n,i})·∇] f(v) ) ψ_i(v)
      = − (1/(n+1)) Σ_{i=1}^{k} R_{v_{n,i}}^{n+1} f(v) ψ_i(v) − θ ((n+3)/(n+1)) Σ_{i=1}^{k} R_{v_{n,i}}^{n+2} f(v) ψ_i(v).

When f(v) is a polynomial of degree ≤ n + 1, we know R_{v_{n,i}}^{n+1} f(v) = R_{v_{n,i}}^{n+2} f(v) ≡ 0. Then we have f(v) − q_f^n(v) ≡ 0.   □

Remark 3.2. Theorem 3.2 implies that the interpolation function q_f^n(v) reproduces polynomials of degree n + 1, i.e., it has algebraic precision of degree n + 1.

3.3 Approximation error estimation

Set h = h_{χ,Ω} = sup_{v∈Ω} min_{v_j∈χ} ||v − v_j|| as the fill distance.

Theorem 3.3. Provided that f is C^{n+3} on a neighborhood of Ω, M = max{M_1, M_2} with

    M_1 = max_{α_1+α_2=n+2, α_1,α_2∈Z_0^+} max_{v∈Ω} | ∂^{n+2} f / ∂x^{α_1}∂y^{α_2} (v) |,
    M_2 = max_{α_1+α_2=n+3, α_1,α_2∈Z_0^+} max_{v∈Ω} | ∂^{n+3} f / ∂x^{α_1}∂y^{α_2} (v) |,

and 0 ≤ θ ≤ 1, then

    | f(v) − q_f^n(v) | ≤ (2^{n+2} / ((n+1)(n+2)!)) (1 + 2θh) M h^{n+2} Σ_{i=1}^{k} |ψ_i(v)|.

Proof.

    f(v) − q_f^n(v)
      = Σ_{i=1}^{k} ( f(v) − L_{v_{n,i}}^n f(v) ) ψ_i(v)
      = Σ_{i=1}^{k} ( f(v) − T_{v_{n,i}}^{n+1} f(v) + T_{v_{n,i}}^{n+1} f(v) − L_{v_{n,i}}^n f(v) ) ψ_i(v)
      = Σ_{i=1}^{k} ( R_{v_{n,i}}^{n+1} f(v) + T_{v_{n,i}}^{n+1} f(v) − L_{v_{n,i}}^n f(v) ) ψ_i(v)
      = Σ_{i=1}^{k} ( R_{v_{n,i}}^{n+1} f(v) + (1/(n+1)) [(v − v_{n,i})·∇] T_{v_{n,i}}^{n+1} f(v) ) ψ_i(v)        (by Lemma 3.2)
      = Σ_{i=1}^{k} ( R_{v_{n,i}}^{n+1} f(v) + (1/(n+1)) [(v − v_{n,i})·∇] T_{v_{n,i}}^{n+1} f(v) ) ψ_i(v)
            − (1/(n+1)) Σ_{i=1}^{k} [(v − v_{n,i})·∇] f(v) ψ_i(v)                                                 (by Eq. (3.11))
      = Σ_{i=1}^{k} ( R_{v_{n,i}}^{n+1} f(v) + (1/(n+1)) [(v − v_{n,i})·∇] T_{v_{n,i}}^{n+1} f(v)
            − (1/(n+1)) [(v − v_{n,i})·∇] f(v) ) ψ_i(v)
      = − (1/(n+1)) Σ_{i=1}^{k} R_{v_{n,i}}^{n+1} f(v) ψ_i(v) − θ ((n+3)/(n+1)) Σ_{i=1}^{k} R_{v_{n,i}}^{n+2} f(v) ψ_i(v).   (by Eq. (3.9))   (3.12)

Consider the terms R_{v_{n,i}}^{n+1} f(v) and R_{v_{n,i}}^{n+2} f(v) on the right-hand side of Eq. (3.12). Notice that

    | R_{v_{n,i}}^{n+1} f(v) | ≤ (2^{n+2} M_1 / (n+2)!) ||v − v_{n,i}||^{n+2},
    | R_{v_{n,i}}^{n+2} f(v) | ≤ (2^{n+3} M_2 / (n+3)!) ||v − v_{n,i}||^{n+3}.

Since v_{n,i}, i = 1, ..., k, are the natural neighbors of v, we have ||v − v_{n,i}|| ≤ h. Then

    | f(v) − q_f^n(v) | ≤ (2^{n+2} / ((n+1)(n+2)!)) (1 + 2θh) M h^{n+2} Σ_{i=1}^{k} |ψ_i(v)|.   □

Remark 3.3. When we use (3.10) to calculate the approximate value of f(v) at v, since the interpolation points {v_{n,i}}_{i=1}^{k} are the natural neighbors of v, we have v ≠ v_{n,i}, i = 1, ..., k, and so in general q_f^n(v) ≠ f(v). Thus the local interpolation function q_f^n(v) we construct is a quasi-interpolation function.
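Putting the pieces together, evaluating the quasi-interpolant (3.10) with n = 1 at a point v only requires the natural neighbors of v, the cardinal values ψ_i(v), and the function values and gradients at those neighbors. The following sketch (with helper names of our own) assumes these have already been computed, e.g. as in the earlier sketches:

```python
import numpy as np

def quasi_interpolant_1(v, nbrs, f_nbrs, grad_nbrs, psi):
    """q_f^1(v) = sum_i psi_i(v) * L^1_{v_{n,i}} f(v)   (Definition 3.2, n = 1),
    where L^1_a f(v) = f(a) + 0.5 (v - a).grad f(a).
    nbrs: (k, 2) natural neighbors, f_nbrs: (k,) values at the neighbors,
    grad_nbrs: (k, 2) gradients at the neighbors, psi: (k,) cardinal values at v."""
    L1 = f_nbrs + 0.5 * np.sum((np.asarray(v) - nbrs) * grad_nbrs, axis=1)
    return psi @ L1
```

With n = 0 the gradient term is simply dropped, which recovers q_f of (3.5).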


4 Numerical experiments

In this section, we take the bivariate quadratic polynomial function

    f_1(x, y) = 3x^2 + 4xy + 5y^2 + 6x + 7y + 8,    0 ≤ x, y ≤ 1,

and the Franke function f_2(x, y) (see Figure 3),

    f_2(x, y) = 0.75 exp[−((9x − 2)^2 + (9y − 2)^2)/4] + 0.75 exp[−(9x + 1)^2/49 − (9y + 1)/10]
                + 0.5 exp[−((9x − 7)^2 + (9y − 3)^2)/4] − 0.2 exp[−(9x − 4)^2 − (9y − 7)^2],    0 ≤ x, y ≤ 1,

as the functions to be approximated.

Figure 3: Franke function

Figure 4: 300 scattered nodes

Scattered data were sampled in [0, 1] × [0, 1] by the function rand() in MATLAB (Figure 4), and then, with the sampled data {(v_j, f(v_j))}_{j=1}^{N}, the quasi-interpolation function q_f^n(v) of (3.10) was constructed to approximate f_1 and f_2 using the construction of Section 3. The mean absolute error (MAE) and the L∞ error (maximum absolute error) of the quasi-interpolation function at 50 × 50 testing points in [0.1, 0.9] × [0.1, 0.9] were calculated for different scattered data sets. First of all, the quadratic reproduction property of q_f^1 is tested. Then the approximation ability (how the error behaves as h → 0) of q_f (n = 0) and q_f^1 is tested and compared. We also compare the approximation precision and efficiency of q_f and NNI (natural neighbor interpolation). At last, the approximation precision of q_f^1 using exact derivatives and approximate derivatives is compared. For all of the numerical experiments in this section, we use the multiquadric (MQ) radial function φ(r) = √(r^2 + c), c > 0, to construct the radial basis function interpolation.
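The test setup can be reproduced roughly along the following lines (a sketch; the random seed and helper names are ours, while the Franke function, the sampling domain, the 50 × 50 test grid and the two error measures are as described above):

```python
import numpy as np

def franke(x, y):
    """Franke test function f2."""
    return (0.75 * np.exp(-((9*x - 2)**2 + (9*y - 2)**2) / 4)
            + 0.75 * np.exp(-(9*x + 1)**2 / 49 - (9*y + 1) / 10)
            + 0.5 * np.exp(-((9*x - 7)**2 + (9*y - 3)**2) / 4)
            - 0.2 * np.exp(-(9*x - 4)**2 - (9*y - 7)**2))

rng = np.random.default_rng(0)
nodes = rng.random((300, 2))                       # e.g. N = 300 random nodes in [0, 1]^2
samples = franke(nodes[:, 0], nodes[:, 1])

xt, yt = np.meshgrid(np.linspace(0.1, 0.9, 50), np.linspace(0.1, 0.9, 50))
exact = franke(xt, yt)                             # reference values on the test grid

def errors(approx, exact):
    """L_inf (maximum absolute) error and MAE (mean absolute error)."""
    e = np.abs(approx - exact)
    return e.max(), e.mean()
```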

Table 2: The approximation error of q_f^1 for f_1 in [0.1, 0.9] × [0.1, 0.9]

    Scattered node number (N)    L∞ error       MAE
    300                          8.2836e-010    1.7930e-012
    500                          1.7703e-011    1.0095e-013
    1000                         5.5793e-010    6.2866e-013
    2000                         2.0811e-010    6.5076e-013
    3000                         5.5838e-010    1.0158e-012

According to Theorem 3.2, we know that q_f^1 possesses quadratic algebraic precision. The error data in Table 2 confirm that q_f^1 accurately reproduces f_1.

Table 3: Approximation error of q_f and q_f^1 for the Franke function f_2 in [0.1, 0.9] × [0.1, 0.9]

    Scattered node    Approximation error of q_f      Approximation error of q_f^1
    number (N)        L∞ error      MAE               L∞ error      MAE
    300               0.0418        0.0035            0.0049        5.5562e-04
    500               0.0523        0.0026            0.0034        3.0205e-04
    1000              0.0150        8.8034e-004       0.0017        1.2152e-04
    2000              0.0074        4.0895e-004       7.9539e-04    5.0463e-05
    3000              0.0076        2.7105e-004       4.6706e-04    2.9107e-05

In Table 3, the shape parameter of the MQ function is c = 0.05, and accurate derivative values were used in q_f^1. According to the results, we observe that the approximation errors of both q_f and q_f^1 decrease as the number of scattered nodes increases, i.e., when h → 0 we have |f − q_f^n| → 0, n = 0, 1, which corresponds to the conclusion of Theorem 3.3. We can also observe that the approximation error of q_f^1(x, y) is almost one order of magnitude smaller than that of q_f; the reason is that first derivatives are used in q_f^1(x, y), which gives it algebraic precision of degree two. This demonstrates that the approximation precision improves as the algebraic precision increases.

All of the numerical results above come from experiments on an Intel Core i5-4570 PC. The well-known NNI method is a local interpolation scheme constructed by Sibson [19] in the 1980s. Table 4 shows that, with the same scattered data, the approximation precision of q_f is higher than that of NNI, while there is little difference between their running times.

Table 4: Comparison between q_f and the NNI method in approximation precision and running time

    Scattered node    q_f (c = 0.05)                                  NNI
    number (N)        L∞ error    MAE            Running time (s)    L∞ error    MAE       Running time (s)
    300               0.0418      0.0035         1.34                0.0523      0.0066    1.37
    500               0.0523      0.0026         1.53                0.0599      0.0044    1.46
    1000              0.0150      8.8034e-004    1.76                0.0273      0.0022    1.90
    2000              0.0074      4.0895e-004    2.60                0.0142      0.0011    2.63

Table 5: Approximation error to the Franke function of q_f^1 in [0.1, 0.9] × [0.1, 0.9] using accurate and approximate derivatives

    Scattered node    Accurate derivative (c = 0.05)     Approximate derivative (c = 0.05)
    number (N)        L∞ error      MAE                  L∞ error    MAE
    300               0.0049        5.5562e-04           0.0342      0.0027
    500               0.0034        3.0205e-04           0.0253      0.0022
    1000              0.0017        1.2152e-04           0.0173      0.0013
    2000              7.9539e-04    5.0463e-05           0.0378      7.3078e-004

In Table 5, the first-order approximate derivatives are calculated by the Goodman method [22]. The results show that when approximate derivatives are used, the approximation precision is clearly lower than when accurate derivatives are used; this is due to the extra error introduced by the approximate derivative values.
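The Goodman estimator itself is not reproduced here; purely as an illustration of recovering derivative information from function values (a stand-in for, not an implementation of, the method of [22]), one could fit a plane over the natural neighbors of each node by least squares:

```python
import numpy as np

def ls_gradient(vi, nbrs, f_vi, f_nbrs):
    """Rough gradient estimate at node vi: least-squares fit of
    f(vi) + g . (v - vi) to the values at vi's natural neighbors.
    Not the Goodman method [22]; only an illustrative stand-in."""
    D = np.asarray(nbrs) - np.asarray(vi)        # k x 2 offsets
    rhs = np.asarray(f_nbrs) - f_vi
    g, *_ = np.linalg.lstsq(D, rhs, rcond=None)
    return g                                      # approximate (f_x, f_y) at vi
```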

5 Conclusion

There are two critical problems in constructing a local approximation function for high dimensional scattered data: the first is how to choose ideal approximation nodes near the point to be estimated; the other is how to choose the basis functions of the approximation function so that it is insensitive to the number of dimensions. In our work, the natural neighbor set of the point to be estimated in the given node set was used as the local interpolation node set, and a radial basis interpolation function with algebraic precision of arbitrary degree was used as the approximation function. This construction solves the two problems above quite well, and the numerical experimental results demonstrate its feasibility and advantages. In order to realize the advantages of our method, derivative information is necessary, which, however, is generally not available for scattered data. So, one direction of our future work will be how to obtain relatively precise derivative values from the function values.

Acknowledgements: This work is supported by the National Natural Science Foundation (no. 11271041, no. 91630203) and the Special Project for Civil Aircraft (MJ-F-2012-04).

References

[1] R. J. Renka and K. Cline, A Triangle-based C^1 Interpolation Method, The Rocky Mountain Journal of Math. 14(1984), 223-237.
[2] T. Whelan, A Representation of a C^2 Interpolant over Triangles, Computer-Aided Geometric Design 3(1986), 53-66.
[3] G. Farin, Triangular Bernstein-Bézier Patches, Computer-Aided Geometric Design 3(1986), 83-127.
[4] T. N. T. Goodman and H. B. Said, A C^1 Triangular Interpolant Suitable for Scattered Data Interpolation, Communications in Applied Numerical Methods 7(1991), 479-485.
[5] R. Z. Feng and R. H. Wang, Closed Smooth Surface Defined from Cubic Triangular Splines, Journal of Computational Mathematics 23(2005), 67-74.
[6] Malik Zawwar Hussain, Maria Hussain, C^1 Positive Scattered Data Interpolation, Computers and Mathematics with Applications 59(2010), 457-467.
[7] C. Micchelli, Interpolation of Scattered Data: Distance Matrices and Conditionally Positive Definite Functions, Constructive Approximation 2(1986), 11-22.
[8] Z. M. Wu, Hermite-Birkhoff Interpolation of Scattered Data by Radial Basis Functions, Approx. Theory & its Appl. 8(1992), 1-10.
[9] N. Dyn, D. Levin, S. Rippa, Numerical Procedures for Surface Fitting of Scattered Data by Radial Functions, SIAM J. Sci. Stat. Comput. 7(1986), 639-659.
[10] Vaclav Skala, Scattered Data Interpolation in N-Dimensional Space, 11th Conference SIP'2012 & Plenary talk, pp. 137-142, ISBN: 978-1-61804-081-7, St. Malo, WSEAS, France, 2012.
[11] R. K. Beatson, J. Levesley, C. T. Mouat, Better Bases for Radial Basis Function Interpolation Problems, Journal of Computational and Applied Mathematics 236(2011), 434-446.
[12] R. Franke, G. Nielson, Smooth Interpolation of Large Sets of Scattered Data, Int. J. Numer. Meth. Eng. 15(1980), 1691-1704.
[13] R. J. Renka, Multivariate Interpolation of Large Sets of Scattered Data, ACM Trans. Math. Software 14(2)(1988), 139-148.
[14] Damiana Lazzaro, Laura B. Montefusco, Radial Basis Functions for the Multivariate Interpolation of Large Scattered Data Sets, Journal of Computational and Applied Mathematics 140(2002), 521-536.
[15] R. Z. Feng and Y. N. Zhang, Piecewise Bivariate Hermite Interpolations for Large Sets of Scattered Data, Journal of Applied Mathematics, Volume 2013, Article ID 239703, 10 pages.
[16] R. Sibson, A Vector Identity for the Dirichlet Tesselation, Math. Proc. Cambridge Philos. Soc. 87(1980), 151-155.
[17] R. Sibson, A Brief Description of Natural Neighbor Interpolation, in Interpreting Multivariate Data, 1981, 21-36.
[18] D. F. Watson, Natural Neighbor Sorting, Australian Comput. J. 17(1985), 189-193.
[19] D. F. Watson, Contouring: A Guide to the Analysis and Display of Spatial Data, Pergamon, Oxford, 1992.
[20] C. L. Lawson, Software for C^1 Surface Interpolation, in Mathematical Software III, 1977.
[21] B. Fornberg, G. Wright, E. Larsson, Some Observations Regarding Interpolants in the Limit of Flat Radial Basis Functions, Comput. Math. Appl. 47(2004), 37-55.
[22] T. N. T. Goodman, H. B. Said and L. H. T. Chang, Local Derivative Estimation for Scattered Data Interpolation, Applied Mathematics and Computation 68(1995), 41-50.
