Applied Mathematics and Computation 216 (2010) 317–331
Polyharmonic splines: An approximation method for noisy scattered data of extra-large size

Mira Bozzini a,*, Licia Lenarduzzi b, Milvia Rossini a
a Dip. Mat. Appl., Università di Milano Bicocca, via Cozzi 53, 20125 Milano, Italy
b IMATI CNR, via Bassini 15, 20133 Milano, Italy
Keywords: Polyharmonic B-splines; Interpolation; Noisy data; Outliers; Unevenly scattered data
Abstract. The aim of this paper is to provide a fast method, with a good quality of reproduction, to recover functions from very large and irregularly scattered samples of noisy data, which may present outliers. To the given sample of size N, we associate a uniform grid and, around each grid point, we condense the local information given by the noisy data by a suitable estimator. The recovering is then performed by a stable interpolation based on isotropic polyharmonic B-splines. Due to the good approximation rate, we need only M ≪ N degrees of freedom to recover the phenomenon faithfully. © 2010 Elsevier Inc. All rights reserved.
1. Introduction

This paper deals with the problem of recovering a function from a set (X, Y) of scattered and noisy data having very large size N (N ≥ 10^6) and irregularly distributed locations X = {x_i}_{i=1}^N on a given domain Ω ⊂ R². When the sample size N is very large, it is necessary to develop methods with high computational efficiency and with strong stability properties. To this purpose, we quote what is written in [22]: "…it often does not make sense to use all 100,000 degrees of freedom to solve such a system exactly, coming up with a solution with 100,000 coefficients, whose sheer size limits its usefulness. It seems to be much more reasonable to get away with 1000 nonzero parameters that fit the data to an accuracy of 0.1%…".

Here, we assume that the noisy values Y = {ũ_i}_{i=1}^N come from experimental measurements and, in this case, the large sample size implies that outliers may occur. In addition, the irregular distribution of the locations makes it more difficult to provide solutions with a good graphical behaviour. The current problem has a considerable impact in many practical situations, which are often characterized by different requirements. For instance, the solution of many geophysical problems is represented by a non-smooth function. On the contrary, real objects are generally represented by very smooth surfaces. The interest in this kind of problem is also evident from the literature, where several approximation schemes have been proposed. Among the most recent ones, we find [6–10,12,21–23]. Most of them do not consider the presence of noise. The aim of [6] is to provide graphical representations of 3D objects by a local strategy, when very large and highly unevenly distributed sets of points are given. In [7–9,12,23], the presence of outliers is considered. Only in [8,9] and [23] do we find examples with N ≥ 10^6.
In [21], Schaback analyzes possible techniques to provide explicit representations of surfaces and defines precisely the properties that a method should ideally have. The most important ones are computational efficiency and good graphical
* Corresponding author.
E-mail addresses: [email protected] (M. Bozzini), [email protected] (L. Lenarduzzi), [email protected] (M. Rossini).
© 2010 Elsevier Inc. All rights reserved. doi:10.1016/j.amc.2010.01.065
quality of reproduction. In [21] and [22], adaptive methods enjoying these requirements are presented, while in [10] several multilevel schemes are discussed. The adaptive techniques are effective in image processing since, with a suitable thresholding, they detect the essential information on edges, but they do not work properly in recovering problems (see [3,22]), because they exhibit undue oscillations. It is then necessary to incorporate a smoothing technique, which increases the computational cost.

We present a method that, due to its good approximation rate, needs only finitely many and fixed degrees of freedom M ≪ N to recover the phenomenon faithfully. It can be seen as one of the approaches on structured grids. In fact, we consider a uniform grid hZ² and, around each grid point z_j ∈ Ω ∩ hZ², we condense the local information given by the noisy sample (X, Y) by a suitable estimator. Then, we recover the unknown phenomenon by a stable interpolation based on isotropic polyharmonic B-splines, whose definition and properties are recalled in Section 2.2. In Section 3 we present the mathematical model, and in Section 4 we discuss its approximation properties, which are given in terms of expected values. In Section 5 we give a sketch of the algorithm, and in Section 6 we present some academic and real-world examples which confirm the efficiency of the proposed method.

2. Preliminaries

2.1. Notations and main definitions

Throughout this paper, d is the dimension of the space; vectors of R^d are denoted by lowercase letters a, b, c, x, …, and bold lowercase a, b, x, … are random variables. We indicate with u(x) functions of a random variable and with f(x) a generic stochastic process. ‖·‖_p denotes the usual norm on ℓ^p(Z^d) or on L^p(R^d) (1 ≤ p ≤ ∞), and ‖·‖_{p,Ω} the norm restricted to a domain Ω. The symbol * denotes the usual convolution product between two functions in L₁(R^d) or two vectors in ℓ₁(Z^d).
We use the notational convention for convolution between a function u(x) and a vector c:

c * u(x) := Σ_{j∈Z^d} c_j u(x − j).

The inner product on L₂(R^d) is (f, g) := ∫_{R^d} f(x) g(x) dx. Given an estimator f̃(x) of a function f(x): Ω ⊂ R^d → R, the bias of f̃(x) is bias(f̃(x)) := |f(x) − E(f̃(x))|, and its variance is Var(f̃(x)) := E{(f̃(x) − E(f̃(x)))²}. The integrated mean square error on Ω ⊆ R^d is Imse := ∫_Ω E{[f̃(x) − f(x)]²} dx, which is also equal to
Imse = ∫_Ω bias²(f̃(x)) dx + ∫_Ω Var(f̃(x)) dx.   (1)
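The bias–variance decomposition (1) is easy to verify numerically. The sketch below (a hypothetical setup, not from the paper) estimates the pointwise mean square error of a sample-mean estimator by Monte Carlo and checks that it splits into squared bias plus variance:

```python
import numpy as np

rng = np.random.default_rng(0)
f_true = 2.0          # value to estimate
sigma, s = 0.5, 25    # noise level and local sample size
runs = 20000

# sample-mean estimator over s noisy observations, repeated many times
samples = f_true + sigma * rng.standard_normal((runs, s))
f_hat = samples.mean(axis=1)

mse = np.mean((f_hat - f_true) ** 2)
bias2 = (f_hat.mean() - f_true) ** 2
var = f_hat.var()

# MSE = bias^2 + variance (here the bias is ~0 and var ~ sigma^2 / s)
assert abs(mse - (bias2 + var)) < 1e-6
assert abs(var - sigma**2 / s) < 5e-4
```

The second assertion illustrates the σ²/s variance reduction that the local estimators of Section 3 rely on.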
Given a set of evaluation points {t_j}_{j=1}^S, t_j ∈ Ω, the root mean square error is defined by

e₂ = sqrt( Σ_{j=1}^S (f̃(t_j) − f(t_j))² / S ),
and it measures the smoothing of the data. The discrete maximum error is

e_∞ = max_j |f̃(t_j) − f(t_j)|,

and it provides a measure of the pointwise error on the data.

In the following, we indicate by (X, Y) a given sample of size N, X = {x_i}_{i=1}^N ⊂ Ω, Y = {ũ_i}_{i=1}^N. The root mean square error ẽ₂ and the maximum error ẽ_∞ with respect to the given sample are defined by

ẽ₂ = sqrt( Σ_{i=1}^N (f̃(x_i) − ũ_i)² / N ),

ẽ_∞ = max_i |f̃(x_i) − ũ_i|.
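As a concrete illustration (with made-up values, not from the paper), the two discrete errors are computed as follows:

```python
import numpy as np

u_tilde = np.array([1.02, 0.97, 1.50, 2.01])   # noisy sample values
f_fit   = np.array([1.00, 1.00, 1.48, 2.00])   # fitted values at the same locations

e2   = np.sqrt(np.mean((f_fit - u_tilde) ** 2))  # root mean square error
einf = np.max(np.abs(f_fit - u_tilde))           # discrete maximum error

# the RMS error never exceeds the maximum error
assert einf >= e2
```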
2.2. Isotropic polyharmonic B-splines

Polyharmonic splines are well studied by several authors because of their good properties (see, for instance, [10] for a wide description and references). Here we consider the class SH_m(R^d) of m-harmonic cardinal splines, m > d/2, which, according to [15], are defined as the subspace of S′(R^d) (the usual space of d-dimensional tempered distributions) whose elements f are in C^{2m−d−1}(R^d) and enjoy the property Δ^m f = 0 on R^d \ Z^d, where Δ := Σ_{j=1}^d ∂²/∂x_j² is the Laplacian operator and Δ^m is defined iteratively by Δ^m f = Δ(Δ^{m−1} f). General properties of the space SH_m(R^d) can be found, for example, in [14–16,19]. In [18] and [19] it is proven that the space of polyharmonic splines can be generated by the translates of a particular polyharmonic spline, named the elementary B_m spline, defined as Φ_m(x) := Δ_1^m v_m(x), where v_m is the fundamental solution of Δ^m given by
v_m(x) = c(d, m) ‖x‖^{2m−d}            if d is odd,
v_m(x) = c(d, m) ‖x‖^{2m−d} log ‖x‖    if d is even.
The constant c(d, m) depends only on d and m and is chosen so that Δ^m v_m = δ (the unit Dirac distribution at the origin). Δ_1 is the usual discrete version of Δ: Δ_1 f = Σ_{j=1}^d (f(· − e_j) − 2f(·) + f(· + e_j)). We can express Φ_m, see [2], as a convolution between a vector c on Z^d and v_m, that is:
c_j := 2d if j = 0;   c_j := −1 if |j| = 1;   c_j := 0 otherwise;

c^m := c * c^{m−1},   Φ_m(x) = c^m * v_m(x).
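The iterated mask c^m can be built directly by discrete convolution; a small sketch for d = 2 (illustrative only, using scipy's 2D convolution):

```python
import numpy as np
from scipy.signal import convolve2d

# elementary discretization mask for d = 2: c_0 = 2d = 4, c_j = -1 for |j| = 1
c = np.array([[ 0, -1,  0],
              [-1,  4, -1],
              [ 0, -1,  0]], dtype=float)

def mask_power(c, m):
    """c^m := c * c^(m-1), iterated full 2D discrete convolution."""
    out = c
    for _ in range(m - 1):
        out = convolve2d(out, c)
    return out

c2 = mask_power(c, 2)     # mask defining the biharmonic B-spline (m = 2)
assert c2.sum() == 0      # every c^m annihilates constants
assert c2[2, 2] == 20     # central coefficient of c*c: 4*4 + 4*(-1)*(-1)
```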
Here we consider a new polyharmonic B-spline, the so-called isotropic B-spline (see [4,20]), which has features and properties useful in applications dealing with stochastic processes. This family of polyharmonic B-splines is defined with respect to a new vector c_r, which is a linear combination of three different discretizations of Δ, namely c₁, c₂ and c₃: c_r = b₁c₁ + b₂c₂ + b₃c₃. The coefficients are chosen so that the Fourier transform of c_r has the following expansion in a neighborhood of the origin:

ĉ_r(ω) = ‖ω‖² + g‖ω‖⁴ + O(‖ω‖⁶),   as ω → 0,

instead of ‖ω‖² + O(‖ω‖⁴), as happens for the elementary B-spline. The so-called isotropic B-spline φ_m(x) is again defined as

φ_m(x) = c_r^m * v_m(x),   c_r^m := c_r * c_r^{m−1}.

For d = 2 we have
c_r = (1/6) [ −1  −4  −1
              −4   20  −4
              −1  −4  −1 ].
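The isotropy of the fourth-order term can be checked numerically. In this reconstruction (with the signs of the mask restored so that the symbol is positive near the origin, consistently with the expansion above), the fourth-order coefficient comes out as g = −1/12, and the symbol agrees with ‖ω‖² + g‖ω‖⁴ in every direction:

```python
import numpy as np

# isotropic discretization mask c_r for d = 2 (signs restored; center positive)
cr = np.array([[-1, -4, -1],
               [-4, 20, -4],
               [-1, -4, -1]], dtype=float) / 6.0

def symbol(w1, w2):
    """Fourier symbol  sum_j c_j exp(-i <w, j>)  of the 3x3 mask (real by symmetry)."""
    j = np.array([-1, 0, 1])
    phase = np.exp(-1j * (w1 * j[:, None] + w2 * j[None, :]))
    return (cr * phase).sum().real

r = 0.1
for theta in np.linspace(0.0, np.pi / 2, 7):
    w1, w2 = r * np.cos(theta), r * np.sin(theta)
    # expansion ||w||^2 + g ||w||^4 with g = -1/12; remainder is O(||w||^6)
    approx = r**2 - r**4 / 12
    assert abs(symbol(w1, w2) - approx) < 1e-6
```

The assertion holds for all directions theta, which is exactly the rotation invariance that distinguishes c_r from the elementary mask.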
The graph of φ₂(x) is shown in Fig. 1. The isotropic B-splines enjoy the same properties as the elementary B-splines. Among them, we recall that φ_m(x) generates a Riesz basis and admits a multiresolution [4]. In addition (see [20]), it decays as 1/‖x‖^{d+4}, while the elementary one decays as 1/‖x‖^{d+2}, and it approaches a Gaussian as the order m increases, i.e. φ_m(x) ≈ exp(−(3/m)‖x‖²). Another important fact is that we can efficiently interpolate a function f(x) with polynomial growth on Z^d or on hZ^d. Let f = {f(j)}_{j∈Z^d} and f_h = {f(hj)}_{j∈Z^d}. We can express the interpolant in Lagrange form:
If(x) = Σ_{j∈Z^d} f(j) L_m(x − j),   I_h f(x) = Σ_{j∈Z^d} f(hj) L_m(x/h − j).
The Lagrangian m-harmonic spline L_m(x) satisfies L_m(k) = δ_{0,k} (the usual Kronecker symbol) and enjoys the remarkable property of exponential decay. It may be expanded as L_m(x) = a_m * φ_m(x), where a_m ∈ ℓ¹(Z^d) decays very fast and can be computed explicitly according to [1]. Then the interpolants If(x) and I_h f(x) can also be easily expressed as
If(x) = Σ_{j∈Z^d} λ_j φ_m(x − j),   I_h f(x) = Σ_{j∈Z^d} λ_{j,h} φ_m(x/h − j),
Fig. 1. Left: the elementary B-spline. Right: the isotropic B-spline.
where the coefficients λ, λ_h are given by the discrete convolutions

λ = a_m * f,   λ_h = a_m * f_h.
If f(x) ∈ C^{2m}(R^d), the approximation order of the scaled interpolant (see [5]) is

‖f(x) − I_h f(x)‖_∞ = O(h^{2m}),   h → 0.   (2)
Moreover, we can give a bound (see the following proposition) for the Lebesgue constant

K_m := ‖ Σ_{j∈Z^d} |L_m(x − j)| ‖_∞.   (3)
Proposition 1. K_m is such that

K_m ≤ K := C(1 + 2^{2d−1} ζ(5)) ≈ C(1 + 2^{2d−1}),   (4)

where ζ(n) = Σ_{l=1}^∞ l^{−n} is the usual special function and C is a constant depending only on m and d.
Proof. Since the Lagrangian polyharmonic spline L_m(x) decays exponentially (see [15]), we can write

|L_m(x)| ≤ C / (1 + ‖x‖)^{d+4},   x ∈ R^d,
for some constant C depending only on m and d. As a consequence, we get

Σ_{j∈Z^d} |L_m(x − j)| ≤ Σ_{j∈Z^d} C / (1 + ‖x − j‖)^{d+4}.
Any x ∈ R^d can be written as k + θ, where k ∈ Z^d and θ ∈ [0, 1)^d. Then

Σ_{j∈Z^d} C / (1 + ‖x − j‖)^{d+4} ≤ Σ_{j∈Z^d} C / (1 + ‖k + θ − j‖)^{d+4} ≤ C ( 1 + Σ_{i∈Z^d, i≠0} 1 / (1 + ‖i + θ‖)^{d+4} ).
For ‖i‖_∞ ≠ 0,

‖i + θ‖ ≥ ‖i + θ‖_∞ ≥ ‖i‖_∞ − ‖θ‖_∞ > ‖i‖_∞ − 1,

and

Σ_{i∈Z^d, i≠0} 1 / (1 + ‖i + θ‖)^{d+4} ≤ Σ_{i∈Z^d, i≠0} 1 / ‖i‖_∞^{d+4}.
Since d + 4 > d, the last series is absolutely convergent and we can change the order of its terms. Let l = ‖i‖_∞ and, for any l ≥ 1, let D_l = {i ∈ Z^d : ‖i‖_∞ = l} and let |D_l| be the cardinality of D_l. Being |D_l| ≤ 2^{2d−1} l^{d−1}, we get

Σ_{j∈Z^d} |L_m(x − j)| < C ( 1 + Σ_{l=1}^∞ |D_l| / l^{d+4} ) ≤ C ( 1 + 2^{2d−1} Σ_{l=1}^∞ l^{−5} ) = C ( 1 + 2^{2d−1} ζ(5) ),

where ζ(n) = Σ_{l=1}^∞ l^{−n} is the usual special function. Since the approximate value of ζ(5) is about 1.0369, we get the proof. □
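Both ingredients of the proof are easy to check numerically for d = 2: the sup-norm shell D_l contains exactly 8l lattice points (so the bound 2^{2d−1} l^{d−1} = 8l is in fact attained), and ζ(5) ≈ 1.0369. A quick check:

```python
# count lattice points on the sup-norm shell ||i||_inf = l  (d = 2)
def shell_size(l):
    return sum(1 for i in range(-l, l + 1) for j in range(-l, l + 1)
               if max(abs(i), abs(j)) == l)

for l in range(1, 6):
    assert shell_size(l) == 8 * l       # equals the bound 2^(2d-1) l^(d-1) for d = 2

# zeta(5) = sum_{l>=1} l^-5, truncated; the tail beyond 10^4 terms is negligible here
zeta5 = sum(l ** -5 for l in range(1, 10001))
assert abs(zeta5 - 1.0369) < 1e-3
```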
3. The model

According to the application we deal with, we fix d = 2, even though the model we present holds for any dimension d. We suppose that the phenomenon to be recovered is represented by an explicit function u(x): Ω ⊂ R² → R. We assume that u(x) belongs to the Hölder space of order α > 1, u(x) ∈ H^α(Ω), and that Ω = [0, 1]². We consider an extension u₀(x): R² → R of u(x), which is compactly supported on a set Ω_s = [−s, 1 + s]² ⊃ Ω (s > 0) and such that u₀(x) ≡ u(x) on Ω. This substitution has little effect, since the properties of the estimator of u(x) will be evaluated on Ω, and the sample relevant to u₀(x) coincides with the given data (X, Y), X ⊂ Ω.

The assigned functional values ũ_i ∈ Y, i = 1, …, N, are a realization of a stochastic process

ũ(x) = u₀(x) + n(x),

where the noise n(x) is symmetric, ergodic and stationary, with E(n(x)) = 0 and covariance matrix C = σ²I, I being the identity matrix of order N.
Now, we consider a uniform grid hZ² and we fix a suitable integer s. To each grid point z_j ∈ hZ² ∩ Ω, we associate a set P_j = {x_k}_{k=1}^s ⊂ X; P_j consists of the s points of X nearest to z_j. We indicate with U_j the neighborhood which contains z_j and P_j, and we call μ_j its diameter. The cloud P_j is a sampling of a random variable ζ_j having expected value E(ζ_j) = z_j and density function f_{ζ_j}, which is compactly supported on U_j. ζ_j depends on the random variables ν_l, l = 1, …, L, that explain the irregular distribution of the sample locations X in Ω. The variables ν_l have compactly supported density functions on Ω_l ⊂ Ω, and each ζ_j is expressed as

ζ_j = Σ_{l=1}^L w_{jl} ν_l,   where w_{jl} ≥ 0 and Σ_{l=1}^L w_{jl} = 1.   (5)
It follows that the values ũ_k = u₀(x_k) + n(x_k), x_k ∈ P_j, k = 1, …, s, are a realization of an experiment E with respect to the local random variable ũ(ζ_j, n), where ζ_j and n are independent. For each grid point z_j ∈ Ω, we condense the local information on ũ(ζ_j, n) by a measure I_j of central tendency. Suitable measures are

1. the expected value;
2. the median, when outliers occur;
3. the expected value of a multiple regression, when the unknown function u(x) cannot be considered locally constant.

By a local sample of size s in U_j, we associate to z_j an estimator Î_j of I_j. Namely:

1. the sample mean: Î_j = ū_j = ū_j(ζ_j, n);
2. the sample median: Î_j = ũ_e; when s is odd, the sample median ũ_e is equal to a sample value;
3. we fix a set of well spaced centers y = {y_t}_{t=1}^r, y_t ∈ U_j, and we consider the local least squares polyharmonic approximation ŝ_j(x, y); we set Î_j = ŝ_j(z_j, y).

We indicate by L̂ = {L̂_j}_{j∈Z²} the values associated to the whole grid hZ². They are defined in the following way. If j ∈ J := Z² ∩ h⁻¹Ω, we set L̂_j = Î_j; if j ∈ Z² ∩ h⁻¹{Ω_s \ Ω}, L̂_j is a linear combination of some L̂_l (see Section 5.2); when j ∈ Z² \ h⁻¹Ω_s, we put L̂_j = 0. We consider the linear space spanned by the isotropic polyharmonic B-splines {φ_m(x/h − j)}. We recover u₀(x) by the interpolant of (hZ², L̂):
g̃_h(x) = Σ_{j∈Z²} G_{j,h} φ_m(x/h − j),   (6)

where the G_{j,h} are calculated (see Section 2) by discrete convolution:

G_{j,h} = (a_m * L̂)_j.   (7)
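The coefficient computation (7) is a single discrete convolution with a fixed mask a_m, the inverse of the symbol of the basis. The same mechanism can be sketched in a much simpler setting: below, a periodic 1D cardinal interpolation with the cubic B-spline (used here only as a stand-in for φ_m; the mask values and FFT inversion are illustrative, not the paper's a_m) computes the coefficients by inverting the basis symbol:

```python
import numpy as np

n = 64
rng = np.random.default_rng(1)
f = rng.standard_normal(n)                 # gridded data to interpolate

# integer samples of the cubic B-spline: B(-1), B(0), B(1) = 1/6, 4/6, 1/6
b = np.zeros(n)
b[0], b[1], b[-1] = 4 / 6, 1 / 6, 1 / 6

# "a" mask = inverse of the basis symbol; coefficients by discrete convolution
b_hat = np.fft.fft(b)                      # symbol is (4 + 2 cos w)/6, never zero
lam = np.real(np.fft.ifft(np.fft.fft(f) / b_hat))   # lam = a * f, analogous to (7)

# the interpolant evaluated back at the grid points reproduces the data exactly
values = np.real(np.fft.ifft(np.fft.fft(lam) * b_hat))
assert np.allclose(values, f)
```

The same FFT inversion works for any basis whose symbol is bounded away from zero, which is exactly the Riesz-basis property recalled in Section 2.2.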
We remark that the number of non-zero coefficients in (6), say M, is finite and depends on the step-size h. More precisely, M is approximately equal to the number K of grid points belonging to Ω_s. The sparsity of the method can be addressed by the assumption on u₀(x). For instance (see (2)), if u₀(x) ∈ C^{2m}(R²) we have that

‖u₀(x) − I_h u₀(x)‖_∞ = O(h^{2m}),   h → 0.   (8)

Being u₀(x) ≡ u(x) on Ω, we also get

‖u(x) − I_h u₀(x)‖_{∞,Ω} = O(h^{2m}),   h → 0.   (9)
As a consequence, we can obtain a satisfactory error by choosing h so that M ≪ N.

3.1. Stability of the method

We conclude the description of the method by discussing its stability: it is stable as long as the interpolant is. As usual, we consider the interpolant I_h f(x) of a gridded data set (hZ², f) := {hj, f_j}_{j∈Z²}, the interpolant I_{h,ε} f(x) of the perturbed data (hZ², f + ε) := {hj, f_j + ε_j}_{j∈Z²}, and the error

‖I_h f(x) − I_{h,ε} f(x)‖_∞ / ‖I_h f(x)‖_∞.   (10)

We can write the two interpolants in terms of the shifts of the Lagrangian function L_m, that is, I_h f(x) = Σ_{j∈Z²} f_j L_m(x/h − j) and I_{h,ε} f(x) = Σ_{j∈Z²} (f_j + ε_j) L_m(x/h − j). Since ‖I_h f(x)‖_∞ ≥ max_{j∈Z²} |f(jh)|, using (4) we get, for any h,

‖I_h f(x) − I_{h,ε} f(x)‖_∞ / ‖I_h f(x)‖_∞ ≤ K_m ‖ε‖_∞ / ‖f‖_∞ ≤ K ‖ε‖_∞ / ‖f‖_∞.   (11)

Note that the computed value of K₂ is between one and two.
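The stability estimate (11) says that a perturbation of the grid values moves the interpolant by at most the Lebesgue constant times the perturbation size. The effect is easy to see with piecewise-linear interpolation, whose Lebesgue constant is 1 (used here only as a simple stand-in for the polyharmonic interpolant):

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 21)              # grid with step h = 0.05
f = np.sin(2 * np.pi * x)                  # exact grid values
eps = 0.01 * rng.uniform(-1, 1, x.size)    # perturbation with ||eps||_inf <= 0.01

t = np.linspace(0.0, 1.0, 1001)            # dense evaluation points
diff = np.interp(t, x, f + eps) - np.interp(t, x, f)

# for linear interpolation K_m = 1, so the sup error cannot exceed ||eps||_inf
assert np.max(np.abs(diff)) <= np.max(np.abs(eps)) + 1e-12
```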
4. Properties of g̃_h

This section is devoted to studying the statistical properties of g̃_h(x). We show that, as N goes to infinity, (6) converges to the interpolant I_h u₀(x) of the function u₀(x) at hZ². Such an interpolant represents the expected value of the model. Moreover, we prove that g̃_h(x) is an unbiased estimator of u(x), x ∈ Ω, as N goes to infinity and h goes to zero. We also give upper bounds (in probability) for the approximation error with respect to the Chebyshev and L₂ norms. In the following, we assume that u₀(x) has the smoothness properties needed to assure that the convergence rate of the interpolation sequence is

‖u₀(x) − I_h u₀(x)‖_∞ = O(h^α),   α > 1, as h → 0,   (12)

and consequently

‖u(x) − I_h u₀(x)‖_{∞,Ω} = O(h^α) as h → 0.   (13)
We can now state the statistical properties of g̃_h(x) (see Appendix A for the proof).

Theorem 2. If u₀(x) ∈ H^α(R²) and the noise n(x) is symmetric, ergodic, stationary with E(n(x)) = 0 and covariance matrix C = σ²I, we have that, for all x ∈ Ω,

E(g̃_h(x)) = I_h u₀(x) + O(μ),   (14)

and

Var(g̃_h(x)) ≤ K σ²/s + O(μ),   (15)

where K is a finite constant and μ = max_j μ_j.
Since μ goes to zero as N → ∞, Theorem 2 tells us that the expected value of g̃_h(x) converges to the interpolant I_h u₀(x) for any choice of h (see (14)). On the other hand, when N → ∞, we can take h → 0 and consider the sequence I_h u₀(x), which converges to the unknown function u(x) for any x ∈ Ω. Hence, when N → ∞ and h → 0,

bias²(g̃_h(x)) → 0,   ∀x ∈ Ω,   (16)
and g̃_h(x) is an asymptotically unbiased estimator of u(x). We can conclude that in asymptotic conditions (N large and h fixed and not too large) the estimator g̃_h(x) provides a good approximation of u(x), x ∈ Ω. We now give an upper bound for the integrated mean square error. From Theorem 2, (13), and the bias–variance decomposition (1), we immediately get

Imse := ∫_Ω E{[g̃_h(x) − u(x)]²} dx   (17)
      = ∫_Ω bias²(g̃_h(x)) dx + ∫_Ω Var(g̃_h(x)) dx   (18)
      ≤ ‖u(x) − I_h u₀(x)‖²_{2,Ω} + K σ²/s + O(μ)   (19)
      ≤ O(h^{2α}) + K σ²/s + O(μ).   (20)
Indeed, our estimator attains the optimal accuracy when the two components of the Imse, ∫_Ω bias²(g̃_h(x)) dx and ∫_Ω Var(g̃_h(x)) dx, are of the same order of magnitude. In this case we reach a trade-off between fidelity to the data and smoothness of the resulting solution, i.e. the noise is smoothed and, at the same time, the approximation is not too biased. In fact, our aim is to filter the noise within the data without causing a loss of important information on the underlying function. In addition, we can prove that, in probability,

‖g̃_h(x) − u(x)‖_{2,Ω} ≤ O(h^α) + C σ/√s + O(μ).   (21)

In fact, from (14) and (13) we get

‖g̃_h(x) − u(x)‖_{2,Ω} ≤ ‖u(x) − E(g̃_h(x))‖_{2,Ω} + ‖g̃_h(x) − E(g̃_h(x))‖_{2,Ω}
                       ≤ ‖u(x) − I_h u₀(x)‖_{2,Ω} + ‖g̃_h(x) − E(g̃_h(x))‖_{2,Ω} + O(μ)
                       ≤ O(h^α) + O(μ) + ‖g̃_h(x) − E(g̃_h(x))‖_{2,Ω}.

The bound (21) follows from the Chebyshev inequality

Prob{ |g̃_h(x) − E(g̃_h(x))| ≤ κ √Var(g̃_h(x)) } ≥ 1 − 1/κ².   (22)
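The Chebyshev bound (22) used above can be sanity-checked by simulation (a generic check, not specific to g̃_h):

```python
import numpy as np

rng = np.random.default_rng(3)
z = rng.exponential(scale=1.0, size=200000)   # a deliberately skewed variable
mu, sd = z.mean(), z.std()

for kappa in (1.5, 2.0, 3.0):
    frac_within = np.mean(np.abs(z - mu) <= kappa * sd)
    # Chebyshev: P{ |z - E z| <= kappa * sd } >= 1 - 1/kappa^2
    assert frac_within >= 1 - 1 / kappa**2
```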
We now provide an upper bound also for the maximum norm of the error between g̃_h(x) and u(x). Remembering that I_h u₀(x) and g̃_h(x) can be expressed in terms of the Lagrangian basis L_m(x), we have

‖g̃_h(x) − u(x)‖_{∞,Ω} ≤ ‖u(x) − I_h u₀(x)‖_{∞,Ω} + ‖g̃_h(x) − I_h u₀(x)‖_{∞,Ω}
                       = ‖u(x) − I_h u₀(x)‖_{∞,Ω} + ‖ Σ_{j∈Z²} (L̂_j − u₀(jh)) L_m(x/h − j) ‖_{∞,Ω}.

Using the left side of (11), we get

‖g̃_h(x) − u(x)‖_{∞,Ω} ≤ ‖u(x) − I_h u₀(x)‖_{∞,Ω} + K_m max_{j∈Z²} {|L̂_j − u₀(jh)|}.   (23)

By considering the absolute mean deviations d := {d_j := ∫_{U_j} |L̂_j − E(L̂_j)| dF} and the Markov inequality, we have

Prob{ |L̂_j − u₀(jh)| ≤ κ d_j } ≥ 1 − 1/κ.   (24)

Then we can conclude that, in probability,

‖g̃_h(x) − u(x)‖_{∞,Ω} ≤ ‖u(x) − I_h u₀(x)‖_{∞,Ω} + K_m ‖d‖_∞ = O(h^α) + K_m ‖d‖_∞.   (25)
In this case, the optimal accuracy is achieved when the maximum norm of the interpolation error is of the same order as the maximum mean deviation. Remembering that the absolute mean deviation is less than or equal to the standard deviation, we also get

‖g̃_h(x) − u(x)‖_{∞,Ω} ≤ O(h^α) + K_m σ/√s,   (26)

and hence we reach the optimal accuracy choosing h so that h^α ≈ σ/√s.

5. The algorithm and its computational cost

In this section, we give a sketch of the procedure implemented for the bidimensional case, together with its computational cost.

5.1. Preliminaries

As already stressed in the Introduction, our aim is to provide a "good" solution with low computational cost. That is to say, we need to find a trade-off between cost and accuracy, mainly expressed in terms of good graphical representation. Remembering that the data are corrupted by noise, it makes sense to ask for an l₂ relative error not greater than 5%.

For a given sample, the accuracy depends on h and s (see (21), (25)) and on the estimators Î_j, j ∈ J. The step-size h has to be chosen in order to guarantee both a good graphical behaviour of the solution and low cost. Remember that h is relevant to the interpolant accuracy when considering exact data, and the theory gives hints about its choice (see for instance (9)). But, in practice, the error for a given h depends on the smoothness of the function we want to interpolate. For instance, if we interpolate the well-known Franke function at the points of a uniform grid of step-size h = 0.1 in Ω = [0, 1]², we obtain a maximum error e_∞ = 2.0e−2, while, with a finer grid, h = 0.025, the error is e_∞ = 8.8e−5. When we consider the valley function by Nielson, h = 0.1 gives e_∞ = 1.6e−1, while h = 0.025 provides e_∞ = 1.3e−3, which is very different from the value obtained for the Franke function. We can then conclude that choosing h of the order of 10⁻² is likely appropriate.

The value of s must be chosen at least of the order of ten, both for statistical and local approximation reasons. When the noise-to-signal ratio σ/‖u‖₂ is below ten percent, it is reasonable to choose s ≈ 10. Otherwise, s must be larger. Below, we shall give indications about the choice of these quantities in order to reach the goal.
The procedure presented here is validated by the examples shown in Section 6. About the estimators, we observe that each statistic Î_j is an asymptotically unbiased estimator of u(z_j), with variance of the order of σ²/s (see the proof of Theorem 2 in Appendix A). The goodness of Î_j depends on the behaviour of the function u(x) within the neighborhood U_j, on the value of σ², and on the presence of outliers in U_j. We fix s ≈ 10 and consider the estimators σ̂²_j of σ². They provide useful information for assessing the goodness of Î_j, needed to keep the l₂ relative error below 5%. An operative strategy about this point is provided in the next section.

5.2. The algorithm

Having assigned the sample and fixed a step-size h, we consider the corresponding grid. We choose s = 11 for any j ∈ J, and we prescribe two tolerances: tol1 for the local correctness of the estimator Î_j and tol2 for the presence of outliers.
The main steps of the algorithm are as follows.

1. Geometric algorithm. We identify the sets P_j, j ∈ J, of the s data points closest to z_j by a classical cell technique (see Chapter 2 in [10]).

2. Computation of the values Î_j of the local estimators. We initially assume that the central tendency measure I_j is constant on each U_j, j ∈ J. Consequently, for any j ∈ J, we compute Î_j as the arithmetic sample mean, or as the median if we know that there are outliers. In this case the variances Var(Î_j) should be of the same order of magnitude (see (38) and (44) in Appendix A). Then, for each j ∈ J, we compute the values of the estimators σ̂²_j:

σ̂²_j = Σ_{k=1}^s (ũ_k − Î_j)² / s,   ũ_k : x_k ∈ P_j.
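The local condensation just described (nearest-s search around each grid node, a robust local estimate, and the variance check) can be sketched as follows; the data and parameter values are made up, and scipy's cKDTree replaces the cell technique of [10]:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(4)
N, s, h = 20000, 11, 0.05
X = rng.uniform(0.0, 1.0, (N, 2))                 # scattered locations in [0,1]^2
u = np.sin(2 * np.pi * X[:, 0]) * X[:, 1]         # underlying function (hypothetical)
Y = u + 0.01 * rng.standard_normal(N)             # noisy sample values

tree = cKDTree(X)                                 # step 1: geometric search structure
grid = np.array([(i * h, j * h) for i in range(21) for j in range(21)])

_, idx = tree.query(grid, k=s)        # P_j: indices of the s data points nearest to z_j
I_hat = np.median(Y[idx], axis=1)     # step 2: robust local estimator (median)
sigma2_hat = np.mean((Y[idx] - I_hat[:, None]) ** 2, axis=1)   # local variance check

assert I_hat.shape == (441,)
assert np.all(sigma2_hat >= 0)
```

In the actual method the values sigma2_hat would then be compared against tol1 and tol2 to decide whether to switch estimator or enlarge s, as described next.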
If for almost all indices j we have σ̂²_j > tol1, we need to increase s, since the noise-to-signal ratio is very high. If, for some index j*, tol1 < σ̂²_{j*} < tol2, the hypothesis that the local tendency measure I_{j*} is constant on U_{j*} must be rejected, and the least squares estimator ŝ_{j*}(z_{j*}, y) has to be calculated. The knots y = {y_t}_{t=1}^r are placed on the circle with center z_j and radius ρ = max_{x_k∈P_j} {‖z_j − x_k‖}, at the positions of the r roots of unity (usually we choose r = 2). If, for some j*, σ̂²_{j*} ≥ tol2, we need to increase s and take s = 10s, since in U_j there are outliers that distort the estimate of σ².

3. Construction of the approximation. First we extend the data out of the domain Ω = [0, 1]² and obtain the set L̂. The extension strategy is chosen in order to minimize the Gibbs phenomenon at the boundary. A good way is to construct a set of pseudodata, calculated by odd symmetry across the edges of Ω from the values of the estimators Î_j relevant to the nodes close to the boundary of Ω; by multiplication with a weight function, we make them vanish gently. After the side regions, we take care of the corner regions: in order to join the data smoothly, we compute a weighted mean of the two closest values at the sides. By discrete convolution, we then calculate the coefficients of the approximation (6), that is, G_{j,h} = (a_m * L̂)_j, j = 1, …, M.

4. Evaluation. The evaluation of g̃_h is done on a finer grid of Ω with step-size h/2^{liv}, liv > 1, by 2^{liv} discrete convolutions.

5.3. Computational cost

The main cost of the algorithm is due to steps 1, 3, and 4. According to [10], the cost of the geometric algorithm to construct the tree structure of cells is O(N log N) at most. For a regularly scattered set X, the cost is O(N). Moreover, the exploration of the tree can easily be made parallel (see [13]), and so can the computation of the estimators Î_j. The cost of the construction is given by the computation of the coefficients of the interpolant, which are obtained by discrete convolutions. Since the mask a_m has few non-zero elements, the cost is O(M), where M ≪ N. The evaluation is done by 2^{liv} discrete convolutions, and then the cost is O(1) for each point of the tabulation chosen according to step 4 of the algorithm.

6. Numerical examples

In the following examples, we have considered φ_m with m = 2. We have run our Matlab algorithm on an AMD 64 computer, which works as a mono-processor when using Matlab. For a given h, we indicate by J the number of grid nodes on Ω and consider s = 12h for the extended domain Ω_s. The two tolerances tol1 and tol2 are 0.05ω(Y) and 0.5ω(Y), where ω(Y) is the range of the data Y. The final recovering is evaluated on a grid of step-size h/2^{liv} with liv = 2. We first present the results related to synthetic data taken from two challenging test functions: the Gaussian peaks and the valley function of Nielson. The data locations present two zones with very different density distributions (see Fig. 2). The data do not exhibit outliers and are corrupted by white noise with standard deviation σ. According to the discussion of Section 5.1, we set s = 11 and h = 0.025, which gives J = 41². Then, the number of grid points in Ω_s is K = 65².

Gaussian peaks. We have considered N = 24260. The point locations are twenty times denser in [0, 0.5] × [0, 1] than in [0.5, 1] × [0, 1]. The standard deviation is σ = 0.001, and the noise-to-signal ratio is σ/‖u‖₂ ≈ 1%. The estimators Î_j are initially computed by the sample mean and, according to the second step of the algorithm, at 481 grid points we need to compute Î_j by a least squares approximation with two knots. In Fig. 3, we show a visualization of the data by a piecewise linear interpolation (left) and the approximation obtained by g̃_h (right), which provides a full shape recovering of this function with e_∞ = 0.10 and e₂ = 0.014. If we consider the data without noise, we obtain the same errors.
Fig. 2. Locations with two zones of different distribution.
Fig. 3. Left: the data. Right: the approximation.
The approximation error is shown in Fig. 4; the error is larger where the curvature of the function is highest. The example shows that, even though the distribution of the data is tricky for this function, the final recovering is completely faithful and does not exhibit undue oscillations, either at the boundary or in the flat zones, as can happen when using penalization techniques.

The valley function. In this case, we have N = 9700 points, five times denser in [0, 0.5] × [0, 1] than in [0.5, 1] × [0, 1]. The standard deviation is σ = 0.001, and the noise-to-signal ratio is σ/‖u‖₂ ≈ 1%. The estimators Î_j are initially computed by the arithmetic mean and, according to the second step of the algorithm, at 481 grid points we need to compute Î_j by a least squares approximation with two knots. In Fig. 5 we show a visualization of the data by a piecewise linear interpolation (left) and the approximation g̃_h (right), which provides a full shape recovering of the function with e_∞ = 0.037 and e₂ = 0.009. This test function has been considered in [9], where the authors provide a recovering by a local method which uses samples of exact scattered data with uniformly distributed locations in the domain. For their sample of size N = 10200, they obtain e_∞ = 0.093 and e₂ = 0.014.

The second group of our experiments considers two real data sets with outliers. The first is the mannequin (ma), which has to be recovered by a very smooth function. The second example is related to the recovering of a part of the Rotterdam harbor floor (RH), which cannot be represented by a smooth function. The results are reported in Table 1, where the maximum error ẽ_∞, the root mean square error ẽ₂, and the relative root mean square error rẽ₂ with respect to the data are shown. Such errors have been computed discarding the least probable values. Moreover, we provide the computation times tc for the construction of g̃_h(x) (steps 1, 2, 3 of the algorithm).
Fig. 4. The graph of the error.
Fig. 5. Left: the data. Right: the approximation.
Table 1
Maximum errors, root mean square errors, and relative root mean square errors with respect to the data.

Example   ẽ_∞       ẽ₂        rẽ₂     tc
ma        2.6 cm    1.84 cm   2.5%    0.133 s
RH        0.84 m    0.28 m    1.4%    0.175 s
Fig. 6. Left: the mannequin data. Right: approximation of the front.
The mannequin: N = 126533. The measurements of the mannequin surface are point clouds with presence of outliers and noise. The data were kindly sent to us by the Laboratory of Optoelectronics of the University of Brescia, see [17]. They are collected by optical digitizers for the efficient reverse engineering of complex shapes and for rapid prototyping. In this example the object surface is very smooth; therefore, it is possible to maintain all the details with a sub-sample. We have set J = 41².
Fig. 7. Left: Rotterdam harbor locations. Right: approximation of the Rotterdam data.
Fig. 8. Top view of the Rotterdam data approximation: J ¼ 3212 .
In Fig. 6 (left) the optical digitalization is shown. As one can see, the noise is very significant. The algorithm calculates the estimators Î_j by the median with s = 11. We have run the algorithm on the N data relevant to the front part of the mannequin, and the nice-looking functional reconstruction is shown in Fig. 6 (right).

Rotterdam harbor floor: N = 634604. Unlike the previous example, the surface to be recovered presents a lot of details. The data are produced by an echosounder for dredge monitoring. They have very irregularly distributed locations on tracks (see Fig. 7 on the left) and present outliers; besides, in some regions of the domain, we have clusters of outliers. These data were considered in [9,8] and [23]. According to the requirements expressed in Section 5.1, the choice J = 41² can be sufficient. The algorithm estimates Î_j by the median, using s = 101 local data. The final reconstruction is shown in Fig. 7 (right) and the errors in Table 1. Although a small value of J was used, the graphical recovering is good. This is also confirmed by the error ẽ_∞. If instead we desire to provide a surface that collects all the details, it is appropriate to decrease the grid step-size. With the choice J = 321² we can use all the information given by the sample and get a more accurate recovering. In this case, all the data are processed, and particular attention should be paid to the region where there is a thickening of outlier clusters. In order to highlight the details, we show, in Fig. 8, a top view of the harbor floor recovering.
Table 2
Errors and computational times for different values of J.

J        ẽ_∞       ẽ_2        t_c
81²      0.67 m    0.27 m     0.6 s
321²     0.2 m     0.067 m    7.88 s
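The efficiency measure discussed in the remarks (the inverse of the product between the maximum-norm error and the computation time) can be computed directly from Table 2; a quick arithmetic check, with the table's values hard-coded for illustration:

```python
# Efficiency = 1 / (e_inf * t_c), computed from the two rows of Table 2.
table = {"81^2": (0.67, 0.6), "321^2": (0.2, 7.88)}
eff = {J: 1.0 / (e_inf * t_c) for J, (e_inf, t_c) in table.items()}
print(eff)  # the finer grid buys accuracy at the price of raw efficiency
```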
Thanks to this example we can make some remarks. Firstly, the stability of the method enables us to increase J (i.e. to decrease the step-size h) in order to improve the accuracy. Table 2 shows the error decrease (as J increases) and the computation times t_c. In addition, the method we propose is very efficient: if we define the efficiency as the inverse of the product between the maximum norm of the approximation error ẽ_∞ and the computation time t_c, the method turns out to be quite effective when compared with [8].

Appendix A

Before giving the proofs, we recall some assumptions on the model (see Section 3) and fix some notation. Remember that the sample values ũ_k are a realization of random variables depending on the variables f_j and n, which are independent. We denote by {χ_k}_{k=1}^s the random variables relevant to the sampling of f_j in U_j. The estimator g̃_h(x) is the interpolant (6) of the gridded data {hj, L̂_j}, and it can also be expressed in terms of the m-harmonic Lagrangian spline L_m. In the following we write g̃_h as
  g̃_h(x) = Σ_{j=1}^{M} L̂_j L_{m,j}(x),                                  (27)

where we have set L_{m,j}(x) = L_m(x/h − j).

Proof of Theorem 2

According to (27), we have
  E(g̃_h(x)) = Σ_{j=1}^{M} E(L̂_j) L_{m,j}(x),                            (28)
and

  Var(g̃_h(x)) = E[(g̃_h(x) − E(g̃_h(x)))²]
              = Σ_{j=1}^{M} Var(L̂_j) L_{m,j}²(x)
                + Σ_{j≠l} E[(L̂_j − E(L̂_j))(L̂_l − E(L̂_l))] L_{m,j}(x) L_{m,l}(x).   (29)
By the Cauchy–Schwarz inequality, we obtain

  Var(g̃_h(x)) ≤ Σ_{j=1}^{M} Var(L̂_j) L_{m,j}²(x)
                + Σ_{j≠l} √(Var(L̂_j) Var(L̂_l)) |L_{m,j}(x)| |L_{m,l}(x)|.          (30)
Taking into account how the quantities L̂_j are computed (see Section 5.2), we have

  E(L̂_j) = E(Î_j),                   j ∈ J,
  E(L̂_j) = Σ_{l=1}^{λ} b_l E(L̂_l),  j ∈ Z² ∩ h⁻¹(Ω_s \ Ω).                        (31)
For the variance, it is not difficult to see that

  Var(L̂_j) = Var(Î_j),                                    j ∈ J,
  Var(L̂_j) ≤ (Σ_{l=1}^{λ} b_l²)(Σ_{l=1}^{λ} Var(L̂_l)),   j ∈ Z² ∩ h⁻¹(Ω_s \ Ω).   (32)

Then we have to compute E(L̂_j) and Var(L̂_j) for j ∈ J, that is, E(Î_j) and Var(Î_j).
Case (1): Î_j = ū_j

Since f_j and n are independent, we have

  E(Î_j) = E_{f_j}{E_n(ū_j)}.

Applying the mean value theorem, we get

  E_{f_j}(u(χ_k)) = u(x*),   x* ∈ U_j.                                              (33)
For any x* ∈ U_j we can write

  u(x*) = u(z_j) + O(l_j).                                                          (34)

Then

  E(Î_j) = u(z_j) + O(l_j),                                                         (35)

and

  E(L̂_j) = u_0(z_j) + O(l),   j ∈ Z² ∩ h⁻¹(Ω_s \ Ω).                               (36)

Substituting (35) and (36) in (28), and remembering (4), we get (14):

  E(g̃_h(x)) = I_h u_0(x) + O(l),   x ∈ Ω.
Let us now consider Var(ū_j). It is known that

  Var(ū_j) = E(ū_j²) − E²(ū_j).

Now

  E(ū_j²) = E_{f_j}{E_n(ū_j²)}
          = E_{f_j}( (1/s²)( Σ_{k=1}^{s} u²(χ_k) + Σ_{k≠l} u(χ_k) u(χ_l) ) + σ²/s ). (37)
With the same arguments used before, we get

  E(ū_j²) = σ²/s + u²(z_j) + O(l_j),

and using (35) we obtain

  Var(Î_j) = Var(ū_j) = σ²/s + O(l_j).                                              (38)
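The rate Var(ū_j) ≈ σ²/s for the sample mean, and the analogous rate πσ²/(2s) used for the median estimator of case (2) below, are easy to verify numerically. The following is a small Monte Carlo sketch on Gaussian noise, not part of the paper; s = 101 matches the Rotterdam example.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, s, trials = 1.0, 101, 20000

# s Gaussian noise values per trial, mimicking the local samples in U_j.
noise = rng.normal(0.0, sigma, size=(trials, s))

var_mean = noise.mean(axis=1).var()           # empirical Var of the mean
var_median = np.median(noise, axis=1).var()   # empirical Var of the median

print(var_mean, sigma**2 / s)                 # ~ sigma^2 / s        (case 1)
print(var_median, np.pi * sigma**2 / (2 * s)) # ~ pi sigma^2 / (2 s) (case 2)
```

The median's variance is about π/2 ≈ 1.57 times larger, the price paid for its robustness to outliers.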
Then, using (4) and remembering (32), (30) becomes

  Var(g̃_h(x)) ≤ (σ²/s) K + O(l),                                                   (39)

and we get the proof of (15).

Case (2): Î_j = ũ_j^e

As already said in Section 3, we use the classical median estimator Î_j = ũ_j^e when outliers occur and when we can assume that in U_j the unknown function u is approximately constant, i.e. u(x) ≃ c. In this case the estimator ũ_j^e is strictly connected to the estimator ñ_j^e of the median of the noise:

  ũ_j^e ≃ c + ñ_j^e.                                                                (40)

Since the noise n(x) is symmetric, we have E(ñ_j^e) = 0. In addition, when s is large, the density function of the sample median is asymptotically Gaussian with zero mean and variance (π/(2s))σ², so that

  Var(Î_j) = Var(ñ_j^e) = (π/(2s)) σ²                                               (41)

(see [11]). We consider s odd; therefore, with the same arguments used in the previous case, we get
  E(ũ_j^e) = E_{f_j}{E_n(ũ_j^e)} = u(z_j) + O(l_j),                                 (42)

and again

  E(L̂_j) = u_0(z_j) + O(l),   j ∈ Z² ∩ h⁻¹(Ω_s \ Ω).                               (43)

Substituting in (28), we get (14).
Let us consider Var(ũ_j^e). From (40), (42) and (41), we have

  Var(ũ_j^e) = Var(ñ_j^e) = (π/(2s)) σ² + O(l_j).                                   (44)

We proceed as in the previous case and, substituting (44) in (30), we get (15).

Case (3): Î_j = ŝ_j(z_j, y)

For the sake of simplicity, we consider the orthonormal basis {φ_t(x)}_{t=1}^{r} = {φ(x − y_t)}_{t=1}^{r} of the Hilbert space of the polyharmonic splines with finite dimension r. We assume that u(x) in U_j is well approximated by the least squares element

  Σ_{t=1}^{r} c_t φ_t(x) + p_1(x),                                                  (45)
b j is where p1 ðxÞ is a linear polynomial and ct ¼ ðuðxÞ; ut ðxÞÞ. The estimator I
^sj ðzj ; yÞ ¼
r X
^ 1 ðzj Þ; ^ct ut ðzj Þ þ p
t¼1
P ~ k uðxk yt Þ; p ^ 1 ðzj Þ ¼ u j þ OðlÞ. being ^ ct ¼ 1=s sk¼1 u ^ 1 ðzj Þ, the expected value of ^sj ðzj ; yÞ is c0 ¼ p Putting u0 ðxÞ ¼ 1 and ^ r X
Eð^sj ðzj ; yÞÞ ¼ Efj En
!!
^ct ut ðzj Þ
t¼0
! r s X 1X j Þ: ¼ Efj ðuðxk Þut ðxk ÞÞ ut ðzj Þ þ Efj ðu s k¼1 t¼1 j Þ ¼ uðzj Þ þ Oðlj Þ, and we can write (see case (1) We have already proved that Efj ðu H Efj ðuðxk Þut ðxk ÞÞ ¼ uðxH j Þut ðxj Þ ¼ uðzj Þut ðzj Þ þ Oðlj Þ;
then we get

  E(ŝ_j(z_j, y)) = u(z_j) + u(z_j) Σ_{t=1}^{r} φ_t²(z_j) + O(l_j).                  (46)
It is easy to see that, since each φ_t(x) can be expressed in terms of the function v_2(x) = ‖x‖² log ‖x‖, i.e. φ_t(x) = Σ_{i=1}^{r} c_{ti} v_2(x − y_i), we have

  φ_t(x) = O(l_j),   t = 1, …, r,   x ∈ U_j.                                        (47)

Then

  E(Î_j) = E(ŝ_j(z_j, y)) = u(z_j) + O(l_j),                                        (48)
and

  E(L̂_j) = u_0(z_j) + O(l),   j ∈ Z² ∩ h⁻¹(Ω_s \ Ω).                               (49)
Substituting (48) in (28), we get (14). Now we prove (15). Using the Cauchy–Schwarz inequality, we obtain

  Var(ŝ_j(z_j, y)) = E[ ( Σ_{t=0}^{r} (ĉ_t − E(ĉ_t)) φ_t(z_j) )² ]
                   ≤ E[ ( Σ_{t=0}^{r} (ĉ_t − E(ĉ_t))² ) ( 1 + Σ_{t=1}^{r} φ_t²(z_j) ) ]
                   = Σ_{t=0}^{r} Var(ĉ_t) + Σ_{t=1}^{r} φ_t²(z_j) Σ_{t=0}^{r} Var(ĉ_t). (50)
We have to compute Var(ĉ_t). When t = 0, Var(ĉ_0) = Var(ū_j), and by (38) we get

  Var(ĉ_0) = σ²/s + O_1(l_j).                                                       (51)
While, for t = 1, …, r, tedious computations give

  Var(ĉ_t) ≤ (σ²/s) φ_t²(z_j).                                                      (52)
Then, remembering (47) and substituting (51) and (52) in (50), we obtain

  Var(Î_j) = Var(ŝ_j(z_j, y)) ≤ σ²/s + O_2(l_j).                                    (53)
Substituting (53) in (30) and using the same arguments as in case (1), we complete the proof of (15).

References

[1] B. Bacchelli, M. Bozzini, C. Rabut, A fast wavelet algorithm for multidimensional signals using polyharmonic splines, in: A. Cohen, J.L. Merrien, L. Schumaker (Eds.), Curve and Surface Fitting: Saint-Malo 2003, Nashboro Press, 2003, pp. 21–30.
[2] B. Bacchelli, M. Bozzini, C. Rabut, M.L. Varas, Decomposition and reconstruction of multidimensional signals using polyharmonic pre-wavelets, Appl. Comput. Harmon. Anal. 18 (2005) 282–299.
[3] M. Bozzini, M. Rossini, Approximating surfaces with discontinuities, Math. Comput. Modelling 31 (6/7) (2000) 193–216.
[4] M. Bozzini, C. Rabut, M. Rossini, A multiresolution analysis with a new family of polyharmonic B-splines, in: A. Cohen, J.L. Merrien, L. Schumaker (Eds.), Curve and Surface Fitting: Avignon 2006, Nashboro Press, 2007, pp. 51–60.
[5] M.D. Buhmann, Radial basis functions, Acta Numer. (2000) 1–38.
[6] G. Casciola, D. Lazzaro, L. Montefusco, S. Morigi, Shape preserving surface reconstruction using locally anisotropic radial basis function interpolants, Comput. Math. Appl. 51 (2006) 1185–1198.
[7] D. Castaño, A. Kunoth, Multilevel regularization of wavelet based fitting of scattered data - some experiments, Numer. Algorithms 39 (2005) 81–96.
[8] O. Davydov, A. Sestini, R. Morandi, Local RBF approximation for scattered data fitting with bivariate splines, in: M.G. de Bruin, D.H. Mache, J. Szabados (Eds.), Trends and Applications in Constructive Approximation, ISNM vol. 151, Birkhäuser, 2005, pp. 91–102.
[9] O. Davydov, F. Zeilfelder, Scattered data fitting by direct extension of local polynomials to bivariate splines, Adv. Comp. Math. 21 (2004) 223–271.
[10] A. Iske, Multiresolution Methods in Scattered Data Modelling, Springer-Verlag, 2004.
[11] M.G. Kendall, A. Stuart, The Advanced Theory of Statistics, vol. 1, Griffin, London, 1969.
[12] A. Kunoth, Adaptive wavelets for sparse representations of scattered data, in: Jetter et al. (Eds.), Topics in Multivariate Approximation and Interpolation, Elsevier, 2005, pp. 1–24.
[13] D. Lazzaro, L. Montefusco, Radial basis functions for the multivariate interpolation of large scattered data sets, J. Comput. Appl. Math. 140 (1-2) (2002) 521–536.
[14] W.R. Madych, Spline type summability for multivariate sampling, in: Analysis of Divergence (Orono, ME, 1997), Appl. Numer. Harmon. Anal., Birkhäuser, Boston, MA, 1999, pp. 475–512.
[15] W.R. Madych, S.A. Nelson, Polyharmonic cardinal splines, J. Approx. Theory 60 (1990) 141–156.
[16] C. Micchelli, C. Rabut, F. Utreras, Using the refinement equation for the construction of pre-wavelets, III: Elliptic splines, Numer. Algorithms 1 (1991) 331–352.
[17] http://optoweb.ing.unibs.it.
[18] C. Rabut, B-splines polyharmoniques cardinales: interpolation, quasi-interpolation, filtrage, Thèse d'Etat, Université de Toulouse, 1990.
[19] C. Rabut, Elementary polyharmonic cardinal B-splines, Numer. Algorithms 2 (1992) 39–46.
[20] M. Rossini, On the construction of polyharmonic B-splines, J. Comput. Appl. Math. 221 (2008) 437–446.
[21] R. Schaback, Remarks on meshless local construction of surfaces, in: The Mathematics of Surfaces IX, Proceedings of the Ninth IMA Conference on the Mathematics of Surfaces, Springer, 2000.
[22] R. Schaback, H. Wendland, Adaptive greedy techniques for approximate solution of large RBF systems, Numer. Algorithms 24 (2000) 239–254.
[23] F. Zeilfelder, Scattered data fitting with bivariate splines, in: M. Floater et al. (Eds.), Principles of Multiresolution in Geometric Modelling, Springer, 2002, pp. 243–286.