Local multilevel scattered data interpolation


Zhiyong Liu, School of Mathematics and Statistics, Ningxia University, Yinchuan 750021, PR China

MSC: 46E35; 65D05; 65J05

Keywords: Scattered data; Radial basis functions; Multilevel interpolation; Sobolev spaces; Native spaces

Abstract. Radial basis functions play an increasingly prominent role in modern approximation. They are widely used in scattered data fitting, the numerical solution of partial differential equations, machine learning, and other fields. Although radial basis functions have excellent approximation properties, they often produce highly ill-conditioned discrete algebraic systems and lead to a high computational cost. This paper introduces a local multilevel scattered data interpolation method, which employs nested scattered data sets and scaled compactly supported radial basis functions with varying support radii. We provide a convergence theory for Sobolev target functions, and several numerical experiments confirm the efficiency of the new method.

1. Introduction

In this work, we introduce and analyze the local multilevel scattered data interpolation method. As an important tool in modern approximation, radial basis functions allow the easy construction of approximation spaces in arbitrary dimensions and with arbitrary smoothness. Error estimates for scattered data interpolation via radial basis functions have been proved not only in native spaces but also in Sobolev spaces; see the early papers [11,18] and the recent papers [12,13]. In particular, in [14] Schaback compared all linear PDE solvers and found that error-optimal methods are radial basis function methods.

Although globally supported radial basis functions have excellent approximation properties, they often produce a dense discrete system that tends to be poorly conditioned. Compactly supported radial basis functions lead to a very well-conditioned sparse system, but at the cost of poor approximation accuracy. This is a "trade-off" principle: a small support leads to a well-conditioned system but poor accuracy, while a large support yields excellent accuracy at the price of an ill-conditioned system. The goal of this paper is to design a local multilevel scattered data interpolation method that maintains approximation accuracy at a low computational cost.

To avoid writing constants repeatedly, we use the notations ≲, ≳ and ≅, as in the well-known paper [19]. For some constants c and C, the short notation x ≲ y means x ≤ Cy, and x ≅ y means cx ≤ y ≤ Cx.

2. Local multilevel interpolation method

As we saw in the introduction, there is a "trade-off" principle for interpolation with compactly supported radial basis functions.


In order to combine the advantages of globally supported and compactly supported radial basis functions, a multilevel stationary interpolation algorithm was first implemented in [8] and then studied by a number of other researchers [3–5,7,9,10,16]. In the multilevel algorithm, the target function is first interpolated on the coarsest level by a compactly supported radial basis function with a large support. The residual is then formed and interpolated on the next finer level by the same compactly supported radial basis function but with a smaller support. This process is repeated and stopped on the finest level, and the final approximation is the sum of all interpolants. The weakness of the multilevel interpolation algorithm is that a global interpolation problem must be solved on each level, which wastes a lot of computational time. To avoid solving a series of global problems, we use the local multilevel interpolation method.

Let Ω ⊆ ℝ^d be a bounded domain and let X = {x_1, …, x_N} be a finite point set in Ω. We associate this point set with the fill distance

$h_{X,\Omega} := \sup_{x \in \Omega} \min_{x_j \in X} \|x - x_j\|_2$,   (2.1)

and the separation distance

$q_X := \min_{j \neq k} \|x_k - x_j\|_2$.   (2.2)

We assume that the point set is quasi-uniform, which means h_{X,Ω} ≅ q_X. To construct a local method, we need a sequence of successively refined point sets X_1, X_2, …, with fill distances h_j = h_{X_j,Ω}. Of course X_1 ⊂ X_2 ⊂ ⋯, and the h_j are monotonically decreasing. Let X*_j ⊆ X_j be the set of points newly added in X_j, with fill distance h*_j ≥ h_j. Then X_1 = X*_1 and X_j = X_{j−1} ∪ X*_j for j = 2, ….
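As a quick illustration (not part of the paper), the two quantities (2.1) and (2.2) can be approximated numerically. The following minimal Python/SciPy sketch estimates the fill distance by sampling Ω on a dense grid and computes the separation distance exactly; the function names are invented for this sketch.

```python
import numpy as np
from scipy.spatial import cKDTree

def fill_distance(X, omega_grid):
    """Approximate h_{X,Omega}: for every grid point of Omega take the distance
    to the nearest data site, then maximize over the grid."""
    d, _ = cKDTree(X).query(omega_grid, k=1)
    return d.max()

def separation_distance(X):
    """q_X: smallest pairwise distance between distinct data sites."""
    d, _ = cKDTree(X).query(X, k=2)   # column 1 is the nearest *other* site
    return d[:, 1].min()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.random((200, 2))                          # scattered sites in [0,1]^2
    g = np.linspace(0.0, 1.0, 101)
    grid = np.array(np.meshgrid(g, g)).reshape(2, -1).T
    print("h_X ~", fill_distance(X, grid), " q_X =", separation_distance(X))
```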


Table 4. Numerical results for b = 0.1.

  j   RMS-error     Max-error     %nonzero   Time (s)
  1   9.7623E−02    3.1969E−01    100.00     0.03
  2   4.6522E−02    2.0613E−01    100.00     0.05
  3   7.8327E−03    3.9937E−02     99.94     0.07
  4   1.4573E−03    2.3140E−02     65.17     0.20
  5   3.7999E−04    7.2034E−03     23.09     1.00
  6   1.0553E−04    1.5967E−03      6.67     9.93

[Fig. 1. Nested scattered data sets and corresponding supports: a larger "coarse support" on the coarse level and a smaller "fine support" on the fine level.]

Table 1. Number of points on each level and the corresponding h*_j.

  Level j    1     2     3     4      5      6
  #(X_j)     9    34   115   404   1493   5718
  #(X*_j)    9    25    81   289   1089   4225
  h*_j ≈    1/2   1/4   1/8   1/16  1/32   1/64

[Fig. 2. RMS-error for C2 local interpolation with varying level and b: levels 1–6 on the horizontal axis, RMS-error on a logarithmic vertical axis, one curve for each of b = 1, 0.5, 0.1, 0.08, 0.05, 0.01.]

Table 2. Numerical results for b = 1.

  j   RMS-error     Max-error     %nonzero   Time (s)
  1   2.4843E−01    8.4283E−01    55.56      0.03
  2   1.5539E−01    7.9190E−01    15.20      0.05
  3   1.0283E−01    7.8592E−01     4.89      0.06
  4   8.2009E−02    7.8592E−01     1.10      0.15
  5   6.6870E−02    7.6557E−01     0.32      0.84
  6   4.5191E−02    7.1481E−01     0.08      8.98

Table 3. Numerical results for b = 0.5.

  j   RMS-error     Max-error     %nonzero   Time (s)
  1   1.3259E−01    4.6615E−01    97.53      0.02
  2   4.8543E−02    3.8572E−01    52.32      0.04
  3   2.8578E−02    3.5321E−01    15.71      0.08
  4   2.2981E−02    3.4422E−01     4.45      0.18
  5   1.5945E−02    3.1050E−01     1.19      0.91
  6   9.8463E−03    1.9932E−01     0.31      9.10

We pick a kernel Φ_j : Ω × Ω → ℝ for each X*_j. In many applications the kernel is a scaled version of a translation-invariant function. Let Φ : ℝ^d → ℝ be one of the compactly supported radial basis functions. We define the kernel as

$\Phi_j(\cdot, y) = \varepsilon_j^{d}\, \Phi(\varepsilon_j(\cdot - y))$,  y ∈ X*_j.   (2.3)

Clearly, Φ_j is a scaled radial basis function whose support is a ball with radius 1/ε_j and center y (see Fig. 1).

Then we can build the local approximation spaces

$W_j = \mathrm{span}\{\Phi_j(\cdot, y) : y \in X_j^*\}$.   (2.4)

Let V_j be the approximation spaces on the data sets X_j. Then we have V_1 = W_1, V_j = V_{j−1} ⊕ W_j for j = 2, …, and V_1 ⊂ V_2 ⊂ ⋯.

We need to associate W_j with a suitable norm. The associated reproducing kernel Hilbert space (or native space) 𝒩_K(ℝ^d) of a kernel K consists of all functions g ∈ L_2(ℝ^d) satisfying

$\|g\|_K^2 = \int_{\mathbb{R}^d} \frac{|\hat g(\omega)|^2}{\hat K(\omega)}\, d\omega < \infty$   (2.5)

with

$\hat g(\omega) = (2\pi)^{-d/2} \int_{\mathbb{R}^d} g(x)\, e^{-i x^{T}\omega}\, dx$.   (2.6)

Suppose further that the kernel K satisfies

$\hat K(\omega) \cong (1 + \|\omega\|_2^2)^{-\tau}$,  τ > d/2 and ω ∈ ℝ^d;   (2.7)

then 𝒩_K(ℝ^d) is the Sobolev space H^τ(ℝ^d), with the equivalent Sobolev-type norm

$\|g\|_{H^\tau(\mathbb{R}^d)}^2 = \int_{\mathbb{R}^d} |\hat g(\omega)|^2 (1 + \|\omega\|_2^2)^{\tau}\, d\omega$.   (2.8)

Let K = Φ_j; then the norm of the space W_j will be denoted by

$\|g\|_{\Phi_j}^2 = \int_{\mathbb{R}^d} \frac{|\hat g(\omega)|^2}{\hat\Phi_j(\omega)}\, d\omega$.   (2.9)

It is well known that the Matérn functions and Wendland's compactly supported functions satisfy (2.7) (their Fourier transforms decay only algebraically); we refer to the monographs [2,6,15] for details. Using Fourier transform techniques and the norm definitions (2.8) and (2.9), we obtain the following norm equivalence theorem.

Theorem 2.1. Let Φ be a kernel satisfying (2.7) and let Φ_j be defined by (2.3). If g ∈ H^τ(ℝ^d) and every scaling parameter ε_j ≥ 1, then 𝒩_{Φ_j}(ℝ^d) = H^τ(ℝ^d) and

$\|g\|_{\Phi_j} \lesssim \|g\|_{H^\tau(\mathbb{R}^d)} \lesssim \varepsilon_j^{\tau}\, \|g\|_{\Phi_j}$,  ∀ g ∈ W_j.

Proof. See [16], where a similar norm equivalence theorem is proved with the inverse of ε_j as scaling parameter. □
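The scaled kernel (2.3) and the level spaces (2.4) are straightforward to realize with sparse matrices. The following Python/SciPy sketch (an illustration of the idea, not the paper's implementation) assembles the matrix A[i, k] = Φ_j(x_i, y_k) for one level, using the C2 Wendland function φ(r) = (1 − r)⁴₊(4r + 1) that appears later in Section 4.1; the helper names `wendland_c2` and `level_matrix` are invented for this sketch.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix

def wendland_c2(r):
    """phi(r) = (1-r)_+^4 (4r+1), compactly supported on [0, 1]."""
    return np.where(r < 1.0, (1.0 - r)**4 * (4.0*r + 1.0), 0.0)

def level_matrix(X, Y, eps):
    """Sparse matrix A[i,k] = eps^d * phi(eps * ||X[i]-Y[k]||), support radius 1/eps."""
    d = X.shape[1]
    tree = cKDTree(Y)
    rows, cols, vals = [], [], []
    # only pairs within the support radius 1/eps contribute nonzero entries
    for i, neigh in enumerate(tree.query_ball_point(X, 1.0 / eps)):
        for k in neigh:
            r = eps * np.linalg.norm(X[i] - Y[k])
            rows.append(i); cols.append(k)
            vals.append(eps**d * wendland_c2(r))
    return csr_matrix((vals, (rows, cols)), shape=(len(X), len(Y)))
```

The %nonzero columns of Tables 2–7 correspond to the fill-in of exactly this kind of matrix: a large ε_j (small support) gives a sparse matrix, a small ε_j a dense one.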


[Fig. 3. Final error graphs for C2 local interpolation with varying b: six surface plots of the final error over [0, 1]², one panel for each of b = 1, 0.5, 0.1, 0.08, 0.05, 0.01.]


Table 5. Numerical results for b = 0.08.

  j   RMS-error     Max-error     %nonzero   Time (s)
  1   9.8752E−02    3.2054E−01    100.00     0.03
  2   4.7054E−02    2.0593E−01    100.00     0.04
  3   7.9347E−03    4.6238E−02    100.00     0.06
  4   1.7216E−03    2.8836E−02     83.14     0.20
  5   4.0492E−04    7.8414E−03     33.19     1.12
  6   9.9640E−05    1.5739E−03     10.08    10.46

Table 6. Numerical results for b = 0.05.

  j   RMS-error     Max-error     %nonzero   Time (s)
  1   1.0055E−01    3.2181E−01    100.00     0.03
  2   4.7532E−02    2.0082E−01    100.00     0.04
  3   8.3151E−03    6.4290E−02    100.00     0.07
  4   2.0516E−03    3.5560E−02     99.96     0.19
  5   3.8722E−04    7.6669E−03     65.24     1.21
  6   7.7540E−05    1.3064E−03     23.03    11.55

Table 7. Numerical results for b = 0.01.

  j   RMS-error     Max-error     %nonzero   Time (s)
  1   1.0305E−01    3.2344E−01    100.00     0.03
  2   4.7599E−02    1.8564E−01    100.00     0.04
  3   9.0839E−03    8.5176E−02    100.00     0.06
  4   2.2645E−03    3.9796E−02    100.00     0.17
  5   3.0478E−04    6.0646E−03    100.00     1.18
  6   4.4900E−05    7.8790E−04    100.00    14.76

[Fig. 4. RMS-error for C4 local interpolation with varying level and b: levels 1–6 on the horizontal axis, RMS-error on a logarithmic vertical axis, one curve for each of b = 1, 0.5, 0.1, 0.08, 0.05, 0.01.]

Then we can define the local multilevel interpolation method as follows.

Algorithm: Local multilevel interpolation
Input: Target function f; number of levels n.
Output: Numerical solution f_n ∈ V_n.
Initialize f_0 = 0, e_0 = f.
For j = 1, …, n:
  (1) Solve s_j = e_{j−1} on X*_j, by constructing s_j of the form
      $s_j = \sum_{k=1}^{\dim(W_j)} c_k\, \Phi_j(\cdot, y_k)$,  y_k ∈ X*_j.
  (2) Update e_j = e_{j−1} − s_j and f_j = f_{j−1} + s_j.

We observe that s_j = e_{j−1} is solved only on the newly added point set X*_j, not on the global data set X_j. We can define the interpolation operator I_{X*_j}, which maps a continuous function e_{j−1} to its interpolant; namely,

$s_j(x) = I_{X_j^*} e_{j-1}(x) = \sum_{k=1}^{\dim(W_j)} c_k\, \Phi_j(x, y_k)$,  x ∈ ℝ^d.   (2.10)
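To make the loop above concrete, here is a minimal Python sketch of the local multilevel driver. It is an illustration only, not the author's MATLAB code: `level_matrix` is the hypothetical sparse-assembly helper from the sketch after Theorem 2.1, `f` is assumed to take an (N, d) array of points and return N values, and each level is described by its newly added sites X*_j and their fill distance h*_j.

```python
import numpy as np
from scipy.sparse.linalg import spsolve

def local_multilevel_interpolate(f, levels, b, eval_pts):
    """Sketch of the local multilevel interpolation algorithm.

    f        : callable, f(points) -> values (the target function)
    levels   : list of (X_star_j, h_star_j) pairs
    b        : scaling constant, eps_j = b / h_star_j as in Theorem 3.1
    eval_pts : (M, d) array where the approximant f_n is evaluated
    Returns (f_n on eval_pts, residual e_n on eval_pts).
    """
    approx = np.zeros(len(eval_pts))
    residual = f(eval_pts)                       # e_0 = f on the evaluation points
    stored = []                                  # (sites, eps, coefficients) per level
    for X_star, h_star in levels:
        eps = b / h_star
        # residual e_{j-1} on the new sites: f minus all interpolants so far
        r = f(X_star) - sum(level_matrix(X_star, Y, e) @ c for Y, e, c in stored)
        A = level_matrix(X_star, X_star, eps)    # small sparse system on X*_j only
        c = spsolve(A.tocsc(), r)                # solve s_j = e_{j-1} on X*_j
        stored.append((X_star, eps, c))
        s = level_matrix(eval_pts, X_star, eps) @ c
        approx += s                              # f_j = f_{j-1} + s_j
        residual -= s                            # e_j = e_{j-1} - s_j
    return approx, residual
```

Note that each linear system in the loop involves only the #(X*_j) newly added sites, which is the "local" feature that distinguishes the method from the stationary multilevel algorithm of [8].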

3. Error analysis

In order to prove the convergence of the local multilevel interpolation method, we need the following three lemmas.

Lemma 3.1 (Extension operator in Sobolev spaces, see [1]). Suppose Ω ⊆ ℝ^d is open and has a Lipschitz boundary, and τ ≥ 0. Then, for all f ∈ H^τ(Ω), there exists a linear operator E : H^τ(Ω) → H^τ(ℝ^d) such that (1) Ef|_Ω = f|_Ω and (2) ‖Ef‖_{H^τ(ℝ^d)} ≲ ‖f‖_{H^τ(Ω)}.

Lemma 3.2 (Best approximation in native spaces, see [15]). Suppose Ω ⊆ ℝ^d is open and has a Lipschitz boundary. Let Φ_j be defined by (2.3). Assume that E is an extension operator as in Lemma 3.1 and X*_j ⊆ Ω. Then we have

$I_{X_j^*} e_{j-1} = I_{X_j^*} E e_{j-1}$  and  $\|E e_{j-1} - I_{X_j^*} E e_{j-1}\|_{\Phi_j} \le \|E e_{j-1}\|_{\Phi_j}$.

Lemma 3.3 (Sampling inequality, see [17]). Suppose Ω ⊆ ℝ^d is open and has a Lipschitz boundary, and τ > d/2. Let X ⊆ Ω be a finite point set with sufficiently small mesh norm h_{X,Ω}. Then, for all f ∈ H^τ(Ω) vanishing on X, we have

$\|f\|_{H^\mu(\Omega)} \lesssim h_{X,\Omega}^{\tau-\mu}\, \|f\|_{H^\tau(\Omega)}$,  for 0 ≤ μ ≤ τ.

Then the convergence result can be given by the following theorem.

Theorem 3.1. Suppose Ω ⊆ ℝ^d is open and has a Lipschitz boundary. Let X*_j ⊆ Ω be quasi-uniform for j = 1, 2, …, n. Let h*_{j+1} ≅ a h*_j with fixed a ∈ (0, 1) and h*_1 sufficiently small. Let Φ be a kernel satisfying (2.7) with τ > d/2 and let Φ_j be defined by (2.3) with ε_j = b(h*_j)^{−1} for fixed b ∈ (0, 1). If the target function f ∈ H^τ(Ω), then there exists α = max{a^τ, b^τ} < 1 such that

$\|f_n - f\|_{L_2(\Omega)} \lesssim \alpha^{n}\, \|f\|_{H^\tau(\Omega)}$,  for n = 1, 2, ….

Proof. By Lemmas 3.1, 3.2 and Theorem 2.1, we have

$\|e_j\|_{H^\tau(\Omega)} = \|e_{j-1} - s_j\|_{H^\tau(\Omega)} = \|e_{j-1} - I_{X_j^*} e_{j-1}\|_{H^\tau(\Omega)} = \|E e_{j-1} - I_{X_j^*} E e_{j-1}\|_{H^\tau(\Omega)} \le \|E e_{j-1} - I_{X_j^*} E e_{j-1}\|_{H^\tau(\mathbb{R}^d)} \lesssim \varepsilon_j^{\tau} \|E e_{j-1} - I_{X_j^*} E e_{j-1}\|_{\Phi_j} \lesssim \varepsilon_j^{\tau} \|E e_{j-1}\|_{\Phi_j}$.

Using the norm definition of the native space, we have

$\|E e_j\|_{\Phi_{j+1}}^2 = \int_{\mathbb{R}^d} \frac{|\widehat{E e_j}(\omega)|^2}{\hat\Phi_{j+1}(\omega)}\, d\omega \lesssim \int_{\mathbb{R}^d} |\widehat{E e_j}(\omega)|^2 \Big(1 + \frac{\|\omega\|_2^2}{\varepsilon_{j+1}^2}\Big)^{\tau} d\omega \lesssim I_1 + I_2$,

with

$I_1 = \int_{\|\omega\|_2 \le \varepsilon_{j+1}} |\widehat{E e_j}(\omega)|^2 \Big(1 + \frac{\|\omega\|_2^2}{\varepsilon_{j+1}^2}\Big)^{\tau} d\omega, \qquad I_2 = \int_{\|\omega\|_2 > \varepsilon_{j+1}} |\widehat{E e_j}(\omega)|^2 \Big(1 + \frac{\|\omega\|_2^2}{\varepsilon_{j+1}^2}\Big)^{\tau} d\omega$.


[Fig. 5. Final error graphs for C4 local interpolation with varying b: six surface plots of the final error over [0, 1]², one panel for each of b = 1, 0.5, 0.1, 0.08, 0.05, 0.01.]


The first term can be estimated by Lemmas 3.1 and 3.3:

$I_1 \lesssim \int_{\|\omega\|_2 \le \varepsilon_{j+1}} |\widehat{E e_j}(\omega)|^2\, d\omega \lesssim \int_{\mathbb{R}^d} |\widehat{E e_j}(\omega)|^2\, d\omega \lesssim \|E e_j\|_{L_2(\mathbb{R}^d)}^2 \lesssim \|e_j\|_{L_2(\Omega)}^2 \lesssim (h_j^*)^{2\tau} \|e_j\|_{H^\tau(\Omega)}^2 \lesssim (\varepsilon_j h_j^*)^{2\tau} \|E e_{j-1}\|_{\Phi_j}^2 \lesssim b^{2\tau} \|E e_{j-1}\|_{\Phi_j}^2$.

If we estimate the second term by Lemma 3.1, we get

$I_2 \le \frac{2^{\tau}}{\varepsilon_{j+1}^{2\tau}} \int_{\|\omega\|_2 > \varepsilon_{j+1}} |\widehat{E e_j}(\omega)|^2 (\|\omega\|_2^2)^{\tau}\, d\omega \lesssim \frac{1}{\varepsilon_{j+1}^{2\tau}} \int_{\mathbb{R}^d} |\widehat{E e_j}(\omega)|^2 (1 + \|\omega\|_2^2)^{\tau}\, d\omega \lesssim \frac{1}{\varepsilon_{j+1}^{2\tau}} \|E e_j\|_{H^\tau(\mathbb{R}^d)}^2 \lesssim \frac{1}{\varepsilon_{j+1}^{2\tau}} \|e_j\|_{H^\tau(\Omega)}^2 \lesssim \Big(\frac{\varepsilon_j}{\varepsilon_{j+1}}\Big)^{2\tau} \|E e_{j-1}\|_{\Phi_j}^2 \lesssim a^{2\tau} \|E e_{j-1}\|_{\Phi_j}^2$.

Letting α = max{a^τ, b^τ}, we obtain ‖E e_j‖_{Φ_{j+1}} ≲ α ‖E e_{j−1}‖_{Φ_j}. Note that, by the local multilevel interpolation algorithm, f_n = s_1 + s_2 + ⋯ + s_n and e_n = f − (s_1 + s_2 + ⋯ + s_n). So

$\|f_n - f\|_{L_2(\Omega)} = \|e_n\|_{L_2(\Omega)} \lesssim (h_n^*)^{\tau} \|e_n\|_{H^\tau(\Omega)} \lesssim (h_n^*)^{\tau} \|E e_n\|_{H^\tau(\mathbb{R}^d)} \lesssim (\varepsilon_{n+1} h_n^*)^{\tau} \|E e_n\|_{\Phi_{n+1}} \lesssim \Big(\frac{b}{a}\Big)^{\tau} \|E e_n\|_{\Phi_{n+1}}$.

Since a and b are fixed, the above inequality can be estimated further:

$\|f_n - f\|_{L_2(\Omega)} \lesssim \|E e_n\|_{\Phi_{n+1}} \lesssim \alpha^{n} \|E e_0\|_{\Phi_1} \lesssim \alpha^{n} \|E f\|_{H^\tau(\mathbb{R}^d)} \lesssim \alpha^{n} \|f\|_{H^\tau(\Omega)}$. □

4. Numerical experiments

In this section, to demonstrate the effectiveness of the local multilevel interpolation method, we present some numerical experiments. First, we generate nested Halton point sets in the unit square Ω = [0, 1]². Table 1 lists the number of points of X_j and X*_j and the approximate values of h*_j. According to the condition of Theorem 3.1, we have a ≈ 0.5. Computations were performed on a laptop with a 2.4 GHz Intel Core i7 processor, using MATLAB under Windows 7.
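The paper's computations were done in MATLAB; purely as an illustration, the sketch below shows one way nested Halton point sets with the counts of Table 1 could be generated in Python, assuming SciPy's quasi-Monte Carlo module. Taking the first #(X_j) points of a single Halton sequence as level j automatically makes the sets nested; this prefix construction is an assumption, since the paper does not spell out how its nested sets were built.

```python
import numpy as np
from scipy.stats import qmc

def nested_halton_levels(counts, dim=2, seed=None):
    """Return nested sets X_1 ⊂ X_2 ⊂ ... in [0,1]^dim and the newly added sets X*_j,
    taken as prefixes of one Halton sequence."""
    sampler = qmc.Halton(d=dim, scramble=False, seed=seed)
    pts = sampler.random(max(counts))              # enough points for the finest level
    X = [pts[:n] for n in counts]                  # X_j = first #(X_j) Halton points
    X_star = [X[0]] + [pts[counts[j - 1]:counts[j]] for j in range(1, len(counts))]
    return X, X_star

# Counts taken from Table 1: #(X_j) = 9, 34, 115, 404, 1493, 5718
X, X_star = nested_halton_levels([9, 34, 115, 404, 1493, 5718])
```

The fill distances h*_j of the generated sets can then be checked with the fill-distance sketch given after (2.2).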

4.1. Local multilevel interpolation by C2 kernel

In the first experiment, we let the kernel Φ be the standard C2 Wendland function φ(r) = (1 − r)⁴₊(4r + 1). Our target function is the standard Franke function

$f(x, y) = 0.75\exp(-((9x-2)^2 + (9y-2)^2)/4) + 0.75\exp(-((9x+1)^2/49 + (9y+1)^2/10)) + 0.5\exp(-((9x-7)^2 + (9y-3)^2)/4) - 0.2\exp(-((9x-4)^2 + (9y-7)^2))$.

On each level, the corresponding discrete algebraic system is solved by Gaussian elimination. We run six tests with varying b (which determines the size of the support of Φ_j; see Theorem 3.1). The results are shown in Tables 2–7, where the %nonzero column indicates the sparsity of the interpolation matrices and CPU time is measured in seconds. The root-mean-square (RMS) error and the maximum error are computed on an evaluation grid of 40 × 40 equally spaced points in [0, 1]². According to Theorem 3.1, the size of the support of Φ_j is controlled by ε_j = b(h*_j)^{−1}. From Tables 2–7 we observe that decreasing b makes the matrices increasingly dense and the computation take more time: a small b leads to a small ε_j and hence a large support, and the large support yields better accuracy at the price of a higher computational cost. The results in Table 4 are the most satisfactory, because the RMS error and maximum error are acceptable while the amount of computation remains moderate. From Tables 5–7 we observe that the errors are not substantially improved, while more time is consumed. This coincides with the theoretical result that a smaller b is no longer necessary. Fig. 2 shows the relationship between the parameter b and the RMS error on each level. We conclude that b = 0.1 is acceptable when considering the "trade-off" between approximation accuracy and computational cost. The final error graphs for these six tests are shown in Fig. 3. The larger errors occur near the corners of the domain [0, 1]², because only a few Halton points lie near the corners; an adaptive algorithm may resolve this.
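For completeness, the two error measures reported in Tables 2–7 can be computed as in the following minimal Python sketch. It assumes `f_true` and `f_approx` are callables returning values on an (N, 2) array of points (invented names), and reads "40 × 40 equally spaced points in [0, 1]²" as a grid including the boundary.

```python
import numpy as np

def errors_on_grid(f_true, f_approx, m=40):
    """RMS and maximum error on an m x m grid of equally spaced points in [0,1]^2."""
    g = np.linspace(0.0, 1.0, m)
    xx, yy = np.meshgrid(g, g)
    pts = np.column_stack([xx.ravel(), yy.ravel()])
    diff = f_approx(pts) - f_true(pts)
    rms = np.sqrt(np.mean(diff**2))
    return rms, np.abs(diff).max()
```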

4.2. Local multilevel interpolation by C4 kernel

In this experiment, we use the compactly supported radial basis function φ(r) = (1 − r)⁶₊(35r² + 18r + 3), which is C4 and strictly positive definite on ℝ^d for d ≤ 3. To avoid repetition with the previous experiment, we employ the target function

$f_d(x) = 4^{d} \prod_{k=1}^{d} x_k (1 - x_k)$,  x = (x_1, …, x_d) ∈ [0, 1]^d.

In the experiment we choose d = 2. The test results for different b are shown in Figs. 4 and 5. Comparing Fig. 4 with Fig. 2, we observe that b = 0.1 is no longer the best choice for C4 local interpolation; that is, the errors can be reduced further by using a smaller b. We can explain this phenomenon through Theorem 3.1 and its proof: the factor (b/a)^τ strongly influences the convergence error, because the C4 kernel has a larger smoothness parameter τ. We may therefore sharpen the result of Theorem 3.1 in future work.
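A small Python sketch of the two ingredients used above, written to mirror the formulas in the text (again an illustration, with invented function names):

```python
import numpy as np

def wendland_c4(r):
    """phi(r) = (1-r)_+^6 (35 r^2 + 18 r + 3), C^4 and positive definite for d <= 3."""
    return np.where(r < 1.0, (1.0 - r)**6 * (35.0*r**2 + 18.0*r + 3.0), 0.0)

def f_d(points):
    """f_d(x) = 4^d * prod_k x_k (1 - x_k) on [0,1]^d; points is an (N, d) array."""
    d = points.shape[1]
    return 4.0**d * np.prod(points * (1.0 - points), axis=1)
```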

5. Conclusions

We introduced and analyzed the local multilevel interpolation method, which employs nested scattered data sets and scaled compactly supported radial basis functions. We provided an L² interpolation error estimate in Sobolev spaces. Compared with radial basis function approximation, compactly supported radial basis function approximation, and stationary multilevel approximation, the present method not only produces a highly sparse discrete algebraic system, but also solves a relatively small-scale interpolation problem on each level. This method may also be used in the numerical solution of partial differential equations and in other fields.

Acknowledgment

The author would like to thank the editor and two anonymous reviewers who made valuable comments on an earlier version of this paper. This work was partially supported by the Natural Science Foundation of Ningxia Province (No. NZ15005), the Natural Science Foundation of China (No. 11501313), and the Science Research Project of Ningxia Higher Education (No. NGY2016059).

References

[1] Adams RA, Fournier JJF. Sobolev spaces (Pure and Applied Mathematics), vol. 65. Academic Press; 2003.
[2] Buhmann MD. Radial basis functions: theory and implementations. Cambridge: Cambridge University Press; 2003.
[3] Chen CS, Ganesh M, Golberg MA, Cheng AHD. Multilevel compact radial functions based computational schemes for some elliptic problems. Comput Math Appl 2002;43:359–78.
[4] Chernih A, Le Gia QT. Multiscale methods with compactly supported radial basis functions for the Stokes problem on bounded domains. Adv Comput Math 2016;42:1187–208.
[5] Farrell P, Wendland H. RBF multiscale collocation for second order elliptic boundary value problems. SIAM J Numer Anal 2013;51:2403–25.
[6] Fasshauer GE. Meshfree approximation methods with MATLAB. Singapore: World Scientific Publishers; 2007.
[7] Fasshauer GE, Jerome JW. Multistep approximation algorithms: improved convergence rates through postconditioning with smoothing kernels. Adv Comput Math 1999;10:1–27.
[8] Floater MS, Iske A. Multistep scattered data interpolation using compactly supported radial basis functions. J Comput Appl Math 1996;73:65–78.
[9] Le Gia QT, Sloan IH, Wendland H. Multiscale analysis in Sobolev spaces on the sphere. SIAM J Numer Anal 2010;48:2065–90.
[10] Le Gia QT, Sloan IH, Wendland H. Multiscale RBF collocation for solving PDEs on spheres. Numer Math 2012;121:99–125.
[11] Madych WR, Nelson SA. Multivariate interpolation and conditionally positive definite functions. II. Math Comp 1990;54:211–30.
[12] Narcowich FJ. Recent developments in error estimates for scattered-data interpolation via radial basis functions. Numer Algorithms 2005;39:307–15.
[13] Narcowich FJ, Ward JD, Wendland H. Sobolev error estimates and a Bernstein inequality for scattered data interpolation via radial basis functions. Constr Approx 2006;24:175–86.
[14] Schaback R. A computational tool for comparing all linear PDE solvers: error-optimal methods are meshless. Adv Comput Math 2015;41:333–55.
[15] Wendland H. Scattered data approximation. Cambridge: Cambridge University Press; 2005.
[16] Wendland H. Multiscale analysis in Sobolev spaces on bounded domains. Numer Math 2010;116:493–517.
[17] Wendland H, Rieger C. Approximate interpolation with applications to selecting smoothing parameters. Numer Math 2005;101:643–62.
[18] Wu Z, Schaback R. Local error estimates for radial basis function interpolation of scattered data. IMA J Numer Anal 1993;13:13–27.
[19] Xu J. Iterative methods by space decomposition and subspace correction. SIAM Rev 1992;34:581–613.