Applied Mathematics and Computation 190 (2007) 1050–1062
The regularizing properties of anisotropic radial basis functions

G. Casciola, L.B. Montefusco, S. Morigi *

Department of Mathematics, University of Bologna, P.zza di Porta San Donato 5, 40127 Bologna, Italy

* Corresponding author. E-mail addresses: [email protected] (G. Casciola), [email protected] (L.B. Montefusco), [email protected] (S. Morigi).
Abstract

In the present work we consider the problem of interpolating scattered data using radial basis functions (RBF). In general, it is well known that this leads to a discrete linear inverse problem that needs to be regularized in order to provide a meaningful solution. The work focuses on a metric-regularization approach, based on a new class of RBF, called anisotropic RBF. The work provides theoretical justifications for the regularization approach and it considers a suitable proposal for the metric, supporting it by numerical examples. © 2006 Elsevier Inc. All rights reserved.

Keywords: Radial basis functions; Interpolation; Regularization; Ill-conditioning
1. Introduction

Let us consider the problem of recovering a function from the set of distinct data $(X, f)$, where $X = \{x_i\}_{i=1}^{n} \subset \Omega \subseteq [0,1]^d$ and $f = \{f_i\}_{i=1}^{n} \subset \mathbb{R}$ is the set of associated measured values. Such problems arise in several different fields such as scattered data interpolation, surface reconstruction, neural networks and machine learning, or in the solution of partial differential equations by collocation methods. A popular and successful choice for solving this problem is to consider radial basis function interpolants of the form

$$F(x) = \sum_{j=1}^{n} c_j\, \varphi(\|x - x_j\|_2), \qquad (1)$$
where $\varphi : \mathbb{R}_{\ge 0} \to \mathbb{R}$ is a radial function and the coefficients $c_1, \ldots, c_n$ are determined by the interpolation conditions $F(x_i) = f_i$, $1 \le i \le n$. This can be conveniently written in matrix–vector form as

$$A c = f, \qquad A \in \mathbb{R}^{n \times n},\ c \in \mathbb{R}^{n},\ f \in \mathbb{R}^{n}, \qquad (2)$$

where the interpolation matrix $A$ is given by

$$A = \big(\varphi(\|x_i - x_j\|_2)\big)_{i,j=1,\ldots,n}. \qquad (3)$$
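To make the notation concrete, the following sketch (in Python/NumPy; not part of the original paper, with illustrative data and function names) assembles the matrix (3) and solves system (2) for a Gaussian basis:

```python
import numpy as np

def rbf_interpolate(X, f, phi):
    """Assemble the interpolation matrix A of (3) and solve A c = f, cf. (2).

    X   : (n, d) array of pairwise-distinct centers x_i
    f   : (n,) array of measured values f_i
    phi : radial function acting on Euclidean distances
    """
    # Pairwise Euclidean distances ||x_i - x_j||_2
    diff = X[:, None, :] - X[None, :, :]
    r = np.linalg.norm(diff, axis=2)
    A = phi(r)                      # interpolation matrix (3)
    c = np.linalg.solve(A, f)       # coefficients of the interpolant (1)
    return c, A

# Example with a Gaussian basis phi(r) = exp(-a r^2), a > 0
a = 0.5
gaussian = lambda r: np.exp(-a * r**2)
X = np.random.rand(40, 2)           # scattered data in [0,1]^2
f = np.sin(X[:, 0]) + X[:, 1]       # placeholder measured values
c, A = rbf_interpolate(X, f, gaussian)
```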
In what follows we will consider special classes of order zero completely monotone basis functions $\varphi$ which admit interpolants of the form (1), i.e. we restrict ourselves to positive definite functions, such as inverse
multiquadrics, Gaussians, and compactly supported radial basis functions [12,16]. In these cases, $A$ is a symmetric positive definite matrix, and the interpolation problem (2) is theoretically uniquely solvable. It is well known that the condition number of the interpolation matrix $A$, and thus the accuracy of the computation of (2), strictly depends on the cardinality and distribution characteristics of the data set $X$. Following the classical literature, we will characterize the data set $X$ in terms of the separation distance $q_X$ and the fill distance $h_X$ defined, respectively, by

$$q_X = \frac{1}{2} \min_{1 \le i < j \le n} \|x_i - x_j\|_2, \qquad h_X = \sup_{x \in \Omega}\, \min_{1 \le i \le n} \|x - x_i\|_2. \qquad (4)$$
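Both quantities in (4) are easy to estimate numerically. A sketch (illustrative helper names; the supremum in $h_X$ is approximated on a dense grid over $\Omega = [0,1]^2$, so $d = 2$ is assumed):

```python
import numpy as np

def separation_distance(X):
    """q_X = (1/2) * min_{i != j} ||x_i - x_j||_2, cf. (4)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)          # ignore the zero diagonal
    return 0.5 * D.min()

def fill_distance(X, m=200):
    """Approximate h_X = sup_{x in Omega} min_i ||x - x_i||_2
    on an m x m grid over Omega = [0,1]^2."""
    g = np.linspace(0.0, 1.0, m)
    G = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
    return np.linalg.norm(G[:, None, :] - X[None, :, :], axis=2).min(axis=1).max()

X = np.random.rand(40, 2)
q_X, h_X = separation_distance(X), fill_distance(X)
print(q_X, h_X, q_X / h_X)               # q_X / h_X is the uniformity measure used in Section 3
```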
Relatively small distances between points, that is, a small $q_X$, can cause some of the eigenvalues of $A$ to cluster at the origin. Thus $A$ can be severely ill-conditioned and may be numerically singular, even for relatively low dimension $n$. Linear systems of equations with a matrix of ill-determined rank are commonly referred to as linear discrete ill-posed problems. It is well known that these problems are very sensitive to perturbations of the right-hand side. In the considered context, the right-hand side vector $f$, representing the available data, is typically contaminated by an error $e \in \mathbb{R}^n$, which may stem from measurement errors (as, for example, in 3D scanner acquisition systems). Let $\hat f$ denote the unknown error-free vector associated with $f$, i.e., $f = \hat f + e$, $\hat f_i = \hat f(x_i)$, and let $\hat c$ be the solution of the linear discrete ill-posed problem with error-free right-hand side

$$A \hat c = \hat f. \qquad (5)$$
Since $\hat f$ is not available, our problem is to compute an approximation of the exact solution $\hat c$ by computing a suitable solution of (2). In fact, due to the error $e$ in $f$ and the ill-conditioning of $A$, the exact solution of (2) can be of very large norm, and therefore does not represent a meaningful approximation of $\hat c$. The computation of a meaningful approximate solution of the linear system (2) requires, in general, that the system be replaced by a nearby, but better conditioned, linear system that is less sensitive to perturbations. This replacement is referred to as regularization, and the computed solution of the latter system, called the regularized solution, is considered an approximation of the noise-free solution $\hat c$.

Some well-known regularization methods could be used to determine an approximate solution of (2), such as Lavrentiev regularization or the truncated singular value decomposition (TSVD) [14]. Another approach to address this problem is to use a preconditioning technique coupled with domain decomposition methods, allowing large-scale problems to be solved [6,1]. A different approach to circumvent the ill-conditioning problem is to get rid of undesirable data in $X$. In particular, in [5,2,3] an efficient strategy to remove data is proposed in the context of a local RBF interpolation method where the interpolant values are given by a convex combination of local reconstructions.

The purpose of the present work is to propose regularization techniques in the framework of the interpolation of irregularly distributed data sets. First we will revisit, from a regularization point of view, the strategy proposed in [2,3], considering it as a projective approach to regularization. The second part of the paper concerns an alternative and more appealing approach to regularization, which leaves the data set unaltered. It consists in replacing the coefficient matrix $A$ with a better conditioned matrix $A_T$, obtained by exchanging in (3) the Euclidean metric for a suitable metric $T$. The use of the new metric leads to the definition of a new class of radial basis functions, called anisotropic RBFs. These RBFs have already been used successfully in [3], where their shape-preserving reconstruction characteristics were emphasized. In this work, on the other hand, the regularizing characteristics of these new anisotropic basis functions will be emphasized, and it will be proved that, for a suitable choice of the metric $T$, the condition number of the new interpolation matrix $A_T$ decreases, thus motivating us to call this approach metric-regularization. Moreover, we will provide a proposal for a metric $T$ which satisfies the theoretical requirements to be a "regularizing metric", hence reducing the ill-conditioning of the problem.

This paper is organized as follows. In Section 2 we consider a projective approach to regularization and we provide a characterization in the case of interpolation with RBFs. The new concept of metric-regularization will be presented in Section 3, together with a brief introduction to the class of anisotropic RBFs. In Section 4 we will provide a suitable proposal for the metric $T$, supporting it by numerical examples and results.
2. A projective approach to regularization

The objective in this section is to provide a theoretical framework which states that removing some of the data in a suitable way represents a sort of regularization of problem (2). Let $S_X$ be the set of the indexes of the $\ell < n$ points discarded from the set $X$, and let $X_1 = X \setminus \{x_j \mid j \in S_X\}$ be the set of the remaining points. Using this set $S_X$ we construct the orthonormal columns $e_j$, for each $j \in S_X$, of a matrix $W \in \mathbb{R}^{n \times \ell}$, as the canonical unit vectors with one in the $j$th entry and zeros elsewhere. Hence the span of the orthonormal columns of $W$ represents the user-chosen linear space $\mathcal{W}$, corresponding to the discarded elements of $X$. Let us introduce the orthogonal projectors $P_W = W W^T$ and $P_W^{\perp} = I - P_W$, where $I$ denotes the identity matrix. We use these projectors to split the solution $c$ of the linear system (2) according to

$$c = c' + c'', \qquad c' = P_W\, c, \qquad c'' = P_W^{\perp}\, c, \qquad (6)$$
where $c' \in \mathcal{W}$ represents the critical contribution to the solution that we want to discard, while $c'' \in \mathbb{R}^n \setminus \mathcal{W}$ is the meaningful part to be considered. Our claim now is that discarding data in $X$ according to a suitable strategy, and thus solving a linear system in a subspace of $\mathbb{R}^n$ of dimension $n - \ell$, represents a sort of regularization of problem (2). Using the projectors $P_W$ and $P_W^{\perp}$ and the decomposition (6), the linear system (2) can be rewritten as

$$\big(P_W^{\perp} A P_W + P_W^{\perp} A P_W^{\perp}\big)\, c = P_W^{\perp} f. \qquad (7)$$
Now, in order to consider only solutions $c''$ of (6), we reduce the linear system (7), taking into account only the second term of the coefficient matrix, thus obtaining

$$\big(P_W^{\perp} A P_W^{\perp}\big)\, c = P_W^{\perp} f, \qquad (8)$$
which is equivalent to solving $P_W^{\perp} A c'' = P_W^{\perp} f$. By using a suitable permutation matrix $\hat P$, the linear system (8) can be written as follows:

$$\hat P \big(P_W^{\perp} A P_W^{\perp}\big) \hat P^{T}\, \hat P c = \hat P P_W^{\perp} f, \qquad (9)$$

or, in block form,

$$\begin{pmatrix} A_{X_1} & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} c_1 \\ 0 \end{pmatrix} = \begin{pmatrix} f_1 \\ 0 \end{pmatrix}, \qquad (10)$$

where $A_{X_1} \in \mathbb{R}^{(n-\ell)\times(n-\ell)}$ and the last $\ell$ rows and $\ell$ columns are zero vectors, $c_1 \in \mathbb{R}^{n-\ell}$ collects all the non-vanishing components of $c''$, and $f_1 = \hat P P_W^{\perp} f \in \mathbb{R}^{n-\ell}$. The following result claims that solving the reduced linear system

$$A_{X_1} c_1 = f_1 \qquad (11)$$
improves the conditioning of problem (2).

Theorem 1. Let $A$ be the coefficient matrix of the linear system (2) and let $A_{X_1}$ be the reduced matrix in (11), associated to $S_X$; then $K(A_{X_1}) \le K(A)$, where $K(A) = \|A\|_2 \|A^{-1}\|_2$.

Proof. Since $A$ is assumed to be of full rank, $A_{X_1}$ is also of full rank, and so the condition numbers are well defined. Let $\lambda_1, \lambda_n$ and $\sigma_1, \sigma_{n-\ell}$ denote the maximum and minimum eigenvalues of $A$ and $A_{X_1}$, respectively. The eigenvalues of $A$ interlace the eigenvalues of the leading principal minor of $A$ of order $n-1$; see [15]. Repeating this result for the leading principal minors up to order $n-\ell$, we have

$$\lambda_1 = \max_{\|x\|=1} \langle Ax, x\rangle \;\ge\; \max_{\substack{\|y\|=1 \\ y = P_W^{\perp} y}} \langle A_{X_1} y, y\rangle = \sigma_1$$

and

$$\lambda_n = \min_{\|x\|=1} \langle Ax, x\rangle \;\le\; \min_{\substack{\|y\|=1 \\ y = P_W^{\perp} y}} \langle A_{X_1} y, y\rangle = \sigma_{n-\ell}.$$
Therefore, since $A$ and $A_{X_1}$ are positive definite,

$$K(A_{X_1}) = \frac{\sigma_1}{\sigma_{n-\ell}} \le \frac{\lambda_1}{\lambda_n} = K(A),$$

which proves the theorem. □
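The statement of Theorem 1 can be checked numerically: forming $A_{X_1}$ amounts to deleting the rows and columns of $A$ indexed by $S_X$, and the 2-norm condition number of the resulting principal submatrix never exceeds that of $A$. A minimal sketch, with an arbitrary illustrative choice of $S_X$:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((40, 2))
r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
A = np.exp(-0.5 * r**2)                     # Gaussian interpolation matrix (3)

S_X = [3, 7, 11]                            # indexes of the discarded points (arbitrary here)
keep = [i for i in range(A.shape[0]) if i not in S_X]
A_X1 = A[np.ix_(keep, keep)]                # reduced matrix of system (11)

# Condition numbers K(.) = ||.||_2 ||.^{-1}||_2; eigenvalue interlacing
# guarantees K(A_X1) <= K(A).
print(np.linalg.cond(A_X1) <= np.linalg.cond(A))   # expected: True
```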
The following result characterizes, among all possible choices for the set $S_X$, the one that maximizes the reduction of the upper bound of the condition number.

Theorem 2. Let $S_X$ be chosen so that the set $X_1$ of the remaining points of $X$ has separation distance $q_{X_1} > q_{\bar X}$ for every $\bar X \subset X$ with $|\bar X| = n - \ell$. If $K(A_X)$ and $K(A_{X_1})$ are bounded from above by $U_X$ and $U_{X_1}$, respectively, then $U_{X_1} < U_X$.

Proof. The result simply follows by combining the classical bounds for $\|A\|_2$ with those derived for $\|A^{-1}\|_2$; see, e.g., [7,8,4]. Since the greatest eigenvalue of a positive definite matrix $A$ equals $\|A\|_2$, while the reciprocal of the smallest eigenvalue equals $\|A^{-1}\|_2$, in Table 1 we collect the results from [12,13,10,9] on the lower bounds for the smallest and the upper bounds for the greatest eigenvalues, given in terms of the separation distance $q$ and denoted by $G_\ell(q)$ and $G_u(q)$, respectively. Since the estimates of $G_\ell(q)$ increase exponentially or polynomially as $q$ increases, and we assumed that $q_{X_1} > q_X$, we have $G_\ell(q_{X_1}) > G_\ell(q_X)$. Concerning the bound for $\|A\|_2$, according to the upper bounds $G_u(q)$ reported in Table 1, we obtain the result of the theorem. □

The reduced interpolation approach leads to a loss of information, since some points are discarded and their contributions are lost. This can be unsatisfactory when considering a global unique interpolant of the data. On the other hand, in the context of local RBF interpolation, as considered in [5,2,3], where each data point is considered as a center of a local interpolation, this loss of information is compensated by the fact that a discarded point will in any case contribute to the final reconstruction. Despite this, in the following section we will propose a new method which manages to improve the conditioning of the problem while not discarding any information.
3. Anisotropic RBF regularization

In this section, after a brief introduction to anisotropic radial basis functions, we define a new proposal for the regularization of problem (2), which we call metric-regularization, and we provide some theoretical results characterizing the regularizing metric.
Table 1
Lower bounds $G_\ell(q)$ for the smallest eigenvalue of $A$ and upper bounds $G_u(q)$ for the greatest eigenvalue of $A$, in terms of the separation distance $q$.

$\varphi(r)$, $r = \|\cdot\|_2$ | $G_\ell(q)$ | $G_u(q)$
Inverse multiquadrics $(r^2 + c^2)^{-1/2}$ | $q^{1/2}\, e^{-12.76\, cd/q}$ | $n\,\varphi(0)$
Gaussians $e^{-a r^2}$, $a > 0$ | $q^{-d}\, e^{-40.71\, d^2/(a q^2)}$ | $n\,\varphi(0)$
Compactly supported $\varphi \in C^{2k} \cap \mathrm{PD}_d$, e.g. $(1-r)_+^4 (4r+1)$ | $q^{2k+1}$ | $\mathrm{const}\cdot q^{-d}\,\varphi(0)$
3.1. Anisotropic radial basis functions

Given a radial basis function $\Phi_j(\cdot) = \varphi(\|\cdot - x_j\|_2)$ with center $x_j$, and a $d \times d$ symmetric positive definite matrix $T$, we define the corresponding anisotropic radial basis function as

$$\Phi_{T,j}(\cdot) := \varphi(\|\cdot - x_j\|_T), \qquad \text{where } \|x\|_T = \sqrt{x^T T x}.$$

These new basis functions have level sets that are hyper-ellipsoids centered at $x_j$ and associated with the quadratic form $(x - x_j)^T T (x - x_j)$. Therefore they are no longer radial with respect to the Euclidean norm, but can be considered radial with respect to the new metric. Fig. 1 shows a compactly supported RBF in its isotropic and anisotropic forms. The anisotropic RBFs inherit all the well-known good properties of their isotropic counterparts, thus representing a more flexible generalization of them. By using these new basis functions, the anisotropic RBF interpolant $F_T(x)$ can be evaluated by replacing in (2) the matrix $A$ with

$$A_T = \big(\varphi(\|x_i - x_j\|_T)\big)_{i,j=1,\ldots,n}, \qquad (12)$$

and solving the linear system

$$A_T c = f, \qquad (13)$$

which represents the metric analog of the linear system (2). The following proposition characterizes a property of the spectra of the two matrices $A$ and $A_T$, showing that their eigenvalues have the same sign and the same total sum, but are spread out differently.

Proposition 1. Let $A$ be the symmetric positive definite matrix defined in (3), and let $A_T$ be defined as in (12), where $T$ is a $d \times d$ symmetric positive definite matrix. Then $A_T$ is symmetric positive definite and has the same trace as $A$, namely $\mathrm{Tr}(A) = \mathrm{Tr}(A_T)$.

Proof. Let $T = M^T M$, where $M$ is a non-singular matrix; then

$$\|x\|_T = \sqrt{x^T T x} = \sqrt{x^T M^T M x} = \|Mx\|_2.$$

By defining $Y = MX$ we have $A_T = \big(\varphi(\|y_i - y_j\|_2)\big)_{i,j=1,\ldots,n}$, where the $y_i$ are the elements of the transformed data set $Y$; then $A_T$ is positive definite due to the positive definiteness of the considered radial basis functions. Let us decompose $A$ and $A_T$ as the sum of a diagonal matrix and an off-diagonal matrix, respectively, that is,

$$A = \varphi(0) I + B, \qquad A_T = \varphi_T(0) I + B_T.$$

The second result now follows immediately, since $\varphi(0) = \varphi_T(0)$. □
From the previous proposition we have that the linear system (13) is uniquely solvable, and determining its solution on the original data $X$ in $\Omega$ is equivalent to finding the solution of (2) on the transformed data set $Y = MX$ in $\tilde\Omega \subseteq [0,1]^d$, where $M$ is such that $T = M^T M$.
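Proposition 1 and the equivalence just described are easy to verify numerically: $A_T$ can be assembled from the transformed set $Y = MX$ for any non-singular factor $M$ of $T$. A sketch (the matrix $M$ below is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((40, 2))
phi = lambda r: np.exp(-0.5 * r**2)          # Gaussian basis, a = 0.5

M = np.array([[2.0, 0.5],
              [0.0, 1.5]])                   # any non-singular M; T = M^T M
Y = X @ M.T                                  # transformed data set Y = M X (row-wise)

def interp_matrix(P):
    r = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)
    return phi(r)

A   = interp_matrix(X)                       # matrix (3), Euclidean metric
A_T = interp_matrix(Y)                       # matrix (12), since ||x||_T = ||Mx||_2

print(np.allclose(np.trace(A), np.trace(A_T)))   # Tr(A) = Tr(A_T): True
print(np.all(np.linalg.eigvalsh(A_T) > 0))       # A_T is positive definite
```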
Fig. 1. Isotropic compactly supported RBF (left); anisotropic compactly supported RBF (right), obtained using the metric T of Example 1 in Section 4.
The two solutions of (2) obtained considering the sets $X$ and $Y$ will be different, unless the matrix $T$ is the identity matrix. But, if the transformed set $Y$ turns out to be more uniform, the corresponding linear system can be better conditioned. By defining the separation distance $q_Y$ and the fill distance $h_Y$ of the set $Y$ according to (4), we can measure the uniformity of the set $Y$ by the positive scalar value $q = q(Y) = q_Y / h_Y$. Optimal values of $q$ depend on the type of data distribution; e.g., if the data are nodes of an equilateral triangular grid, then $q_{opt} = \sqrt{3}/2$, while for a regular square grid we have $q_{opt} = \sqrt{2}/2$. The uniformity of the new set $Y$ depends on the transformation matrix $M$, as shown in the following theorem, proved in [3], which gives the relation between the separation and fill distances of the original set $X$ and of the new set $Y$.

Theorem 3. Let $M$ be a $d \times d$ non-singular matrix that maps the sets $X$ and $\Omega$ into the sets $Y$ and $\tilde\Omega$, respectively, and let $T = M^T M$ be the associated symmetric positive definite matrix, whose eigenvalues are $\lambda_i$, $i = 1, \ldots, d$. Then the separation and fill distances of the sets $X$ and $Y$ satisfy

$$\sqrt{\lambda_{\min}}\; q_X \le q_Y \le \sqrt{\lambda_{\max}}\; q_X, \qquad (14)$$
$$\sqrt{\lambda_{\min}}\; h_X \le h_Y \le \sqrt{\lambda_{\max}}\; h_X. \qquad (15)$$

In Section 4 we will consider some possible choices for $T$, in order to increase the uniformity of the data set, thus improving the conditioning of the matrix $A_T$.

3.2. Metric regularization

In this section we introduce the concept of metric regularization for problem (2), highlighting the differences with respect to a classical regularization approach. Metric regularization derives from the need to improve the conditioning of problems defined by distance matrices, that is, matrices whose elements are functions of distances. It is well known that the condition number of such matrices strongly depends on the data distribution. In fact, the condition number is worse for highly unevenly distributed data, while it remains acceptable when the data are uniformly distributed. Thus metric regularization is intended to act exclusively on the data distribution. Roughly speaking, metric regularization simply consists in replacing the solution of the linear system (2) with the solution of the slightly different but better conditioned linear system (13). The following definition provides a simple criterion to characterize a metric-regularized solution of the interpolation problem.

Definition 1 (Metric regularization). Let $T$ be a symmetric positive definite matrix that defines a metric $\|\cdot\|_T$; then $c_T$ is a metric-regularized solution of $Ac = f$ if

(1) $c_T$ solves $A_T c = f$;
(2) $K(A_T) \le K(A)$;
(3) if $q \approx q_{opt}$ then $c_T \approx c$.

Condition (3) requires that when the data are uniformly distributed, $c_T \approx c$, that is, the metric-regularized solution coincides with the solution of (2). This implies that, in the choice of the metric $T$, we have to guarantee that the regularizing effects disappear when the data are uniformly distributed, namely, when $q \approx q_{opt}$. This is equivalent to requiring that, in this case, the metric becomes the Euclidean one, thus $T = I$.

The following result provides restrictions on the metric in order to satisfy condition (2) in Definition 1, that is, to obtain a better conditioned linear system (13). If this is the case, the metric will be called a regularizing metric.

Theorem 4. Let $T$ be a symmetric positive definite matrix that defines a metric $\|\cdot\|_T$, and let $A$ and $A_T$ be given as in (3) and (12), respectively. If the metric induced by $T$ is non-contractive, that is, $\|\cdot\|_T \ge \|\cdot\|_2$, and $\|A\|_\infty < 2\varphi(0)$, then $K(A_T) \le K(A)$.
Proof. Let $B = A - \varphi(0) I$ and $B_T = A_T - \varphi(0) I$. We first claim that

$$\|A\|_\infty = \varphi(0) + \|B\|_\infty \qquad (16)$$

and similarly for $\|A_T\|_\infty$. This follows immediately from the norm definition. In fact,

$$\|A\|_\infty = \max_i \Big( \sum_j |a_{i,j}| \Big) = \varphi(0) + \max_i \Big( \sum_{j \ne i} |a_{i,j}| \Big) = \varphi(0) + \|B\|_\infty.$$

For the considered classes of RBFs, the fact that the metric is non-contractive implies that $\|B_T\|_\infty \le \|B\|_\infty$, so that from (16) we have

$$\|A_T\|_\infty \le \|A\|_\infty. \qquad (17)$$

Using the fact that $\|A\|_\infty < 2\varphi(0)$ and (16), we get $\|B\|_\infty < \varphi(0)$, and obviously $\|B/\varphi(0)\|_\infty < 1$. This implies that the series representation

$$\varphi(0) A^{-1} = \Big( I + \frac{B}{\varphi(0)} \Big)^{-1} = \sum_{k=0}^{\infty} \frac{(-1)^k}{\varphi(0)^k}\, B^k$$

holds. Using this series, the triangle inequality, and $\|B^k\|_\infty \le \|B\|_\infty^k$, we obtain the inequalities

$$\|A^{-1}\|_\infty \le \frac{1}{\varphi(0)} \sum_{k=0}^{\infty} \Big( \frac{\|B\|_\infty}{\varphi(0)} \Big)^{k} = \frac{1}{\varphi(0) - \|B\|_\infty}. \qquad (18)$$

Since $\|B_T\|_\infty \le \|B\|_\infty$, from (18) we have

$$\|A_T^{-1}\|_\infty \le \|A^{-1}\|_\infty. \qquad (19)$$

Recalling that the condition number of $A$ is defined as the product $\|A\|_\infty \|A^{-1}\|_\infty$, (17) and (19) establish the thesis of the theorem. □

The condition $\|A\|_\infty < 2\varphi(0)$ in Theorem 4 implies the strict diagonal dominance of $A$, which can be satisfied when $\varphi$ belongs to the class of Gaussian radial functions or compactly supported radial functions, but the inverse multiquadric class fails to satisfy this condition in any dimension [8]. Moreover, even for the classes of functions $\varphi$ that can be considered, this condition seems too severe to be useful in practice. For instance, for Gaussian functions it would require very small values of the constant $a$ (see Table 1), which is not, in general, the case. The above motivations led us to propose the next result on the upper bound of the condition number, which holds under less restrictive conditions for all three classes of radial functions considered.

Theorem 5. Let $A$ and $A_T$ be defined by (3) and (12), respectively, and let $Y = MX$ with $T = M^T M$. If the separation distances of the two sets $X$ and $Y$ satisfy $q_Y \ge q_X$, and $K(A)$ and $K(A_T)$ are bounded from above by $U$ and $U_T$, respectively, then $U_T \le U$.

The proof follows the same arguments as the proof of Theorem 2.

Until now, we have only considered improving the conditioning of the coefficient matrix associated with the interpolation problem, but we must remember that a well-conditioned coefficient matrix is a necessary but not a sufficient condition for obtaining a good reconstruction. A measure of the reconstruction error for a given exact function $\hat f(x)$ and a theoretical RBF reconstruction $\hat F(x)$ is usually given in terms of the fill distance $h$:
$$\|\hat f(x) - \hat F(x)\| \le \|\hat f\|\, H(h(x)),$$

where $H(h(x))$ is an increasing function of $h$. The theoretical results establish that the reconstruction error and the conditioning of the interpolation matrix are strictly related, as expressed by the well-known uncertainty principle. It states that, for increasing values of $q$, the numerical error $E(q)$ decreases, whereas, for increasing values of $h$, the reconstruction error increases [11]. Therefore, from a practical point of view, we have to take into account that the total reconstruction error of a numerically evaluated interpolant $F(x)$ can be modelled as the sum of the theoretical reconstruction error and a numerical error $E(q)$, thus

$$\|\hat f(x) - F(x)\| \le \|\hat f\|\, H(h(x)) + E(q). \qquad (20)$$
Since increasing $q$ generally corresponds to increasing the value of $h$, when $h_Y \ge h_X$ the upper bound of the reconstruction error can increase. Therefore we need to find a good balance between $q$ and $h$. A regularized solution should provide a convenient compromise between minimizing the reconstruction error and improving the numerical conditioning. The next section will introduce our proposal for the metric $T$ which, in general, satisfies these expectations.

4. On the choice of the matrix T

A metric is defined on the domain $\Omega$ by a symmetric positive definite $d \times d$ matrix $T$. This matrix can be factorized as $T = M^T M$, where $M = SR$, that is, the product of a rotation matrix $R$ and a diagonal scaling matrix $S = \mathrm{diag}(s_1, \ldots, s_d)$:

$$T = R^T S^2 R.$$

Here $s_1, \ldots, s_d$ represent the deformation components along the axes of the local coordinate system defined by $R$. Of course, considering a metric $T$ with $R = I$ and $s_i = s$, with a constant $s$, for $i = 1, \ldots, d$, we get $M = \mathrm{diag}(s, s, \ldots, s)$, which acts on the set $X$ by performing an isotropic scaling. This trivially increases the separation distance $q$ and, proportionally, the fill distance $h$, leading to unacceptable reconstructions.

A possible way to build a new metric $T$ can be based on a transformation $M$ designed, according to Theorem 5, to increase $q_Y$, that is, to keep the two closest points far apart. In this case, the matrix $R$ can be chosen so that one of the new Cartesian axes is aligned with the line through these points, and the matrix $S$ should provide the maximum scale factor for which the data set remains in the domain $\Omega$. This particular metric has been tested on several random data sets and we can conclude that, as expected, only about 40% of these tests lead to a decrease of the condition number. In fact, this strategy can fail trivially, for example when two pairs of points at the same minimum distance are orthogonal to each other. Other similar heuristic strategies based on the data set can fail in the same way. Hence, in the rest of this section, we propose a more general strategy, aimed at increasing the separation distance and, at the same time, improving the uniformity of the data distribution.

The new proposal for a metric is based on the commonly used statistical tool of principal component analysis (PCA). This choice is particularly suited when the data present a well-defined trend, captured by the PCA. Performing PCA is equivalent to performing a singular value decomposition (SVD) of the matrix $Q$ which defines the data covariance matrix $C$. In fact, if

$$Q = (x_1 - \bar x, \ldots, x_n - \bar x) \in \mathbb{R}^{d \times n},$$

where $\bar x = \frac{1}{n} \sum_i x_i$, then $C = Q Q^T$, and using the decomposition $Q = V S W^T$, with $S = \mathrm{diag}(\sqrt{\sigma_1}, \ldots, \sqrt{\sigma_d})$, we obtain $C = V S^2 V^T$. The latter factorization of the covariance matrix identifies a new orthogonal coordinate system, with axes given by the eigenvectors of $C$ and origin at the mean value of the data points, where the axis $v_1$ is selected as the direction of maximum variance, namely, corresponding to the maximum eigenvalue $\sigma_1$. In order to obtain an improvement in the distribution of the data, we choose the matrix $T$, defining the new metric, as proportional to the inverse of the covariance matrix, i.e. $T = M^T M = m^2 C^{-1}$ with $m > 0$. As a consequence, the transformation matrix $M$ is given by

$$M = m\, S^{-1} V^T.$$

The constant $m$ represents a further degree of freedom which allows us to satisfy the hypothesis of Theorem 5. In fact, according to (14), if the eigenvalues of $T$, $\lambda_i = \frac{m^2}{\sigma_i}$, $i = 1, \ldots, d$, are scaled such that the minimum eigenvalue $\lambda_{\min} = \lambda_1 = 1$, then $q_Y \ge q_X$. This particular choice for $T$ implies that, for a uniform data distribution ($q \approx q_{opt}$), the eigenvalues of $T$ are equal to one, thus giving $\varphi(\cdot) = \varphi_T(\cdot)$. In this case, where the data set does not present any well-defined trend, the metric regularization, according to condition (3) in Definition 1, does not affect the solution. It is worth noting that, even if from (15) we can expect an increase in $h_Y$, in practice, due to the choice of the principal components as the new coordinate system, the increase in $h_Y$ is generally smaller than the increase in $q_Y$, thus leading to an improvement in the uniformity of the data distribution (the value of $q$).
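A minimal sketch of this PCA-based construction follows (the normalization $m = \max_i \sqrt{\sigma_i}$ enforces $\lambda_{\min} = 1$ as described above; the function name is illustrative):

```python
import numpy as np

def pca_metric(X):
    """Build T = m^2 C^{-1} and M = m S^{-1} V^T from the data X (n x d)."""
    xbar = X.mean(axis=0)
    Q = (X - xbar).T                         # Q = (x_1 - xbar, ..., x_n - xbar), d x n
    V, s, _ = np.linalg.svd(Q, full_matrices=False)
    # C = Q Q^T = V S^2 V^T with S = diag(s); the eigenvalues of T are m^2 / s_i^2.
    # Normalize so that lambda_min = m^2 / max(s)^2 = 1, i.e. m = max(s);
    # by (14) this guarantees q_Y >= q_X.
    m = s.max()
    M = m * np.diag(1.0 / s) @ V.T           # M = m S^{-1} V^T
    T = M.T @ M                              # T = m^2 C^{-1}
    return T, M

X = np.random.rand(40, 2)
T, M = pca_metric(X)
Y = X @ M.T                                  # transformed (more uniform) data set
```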
4.1. Numerical results

The metric regularization based on anisotropic RBFs has been tested on several data sets and on different classes of RBFs. The best results are obviously obtained when the data distribution in $\Omega$ follows the features of the function to be reconstructed; this characterizes the application field of the proposed metric regularization. In the following, two of the most representative examples of surface reconstruction, in dimension $d = 2$, are presented. The test functions in the examples have been chosen to be the classical analytic functions illustrated in Fig. 2, sampled at 40 points scattered along the feature direction. The data set fits exactly the domain $[0,1]^2$. The corresponding function values $\hat f$ have been perturbed by adding an error vector $e$, with normally distributed random entries with zero mean, to yield the right-hand side $f$ of (2). The vector $e$ is scaled to correspond to a specified noise level $\|e\|_2 / \|f\|_2 = 1 \times 10^{-3}$.
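This perturbation can be reproduced as in the following sketch (the exact values are a stand-in; the noise is rescaled to the prescribed relative level, measured here against $\hat f$, which at this small level is indistinguishable from measuring against $f$):

```python
import numpy as np

rng = np.random.default_rng(2)
f_hat = np.sin(np.linspace(0, 1, 40))        # stand-in for the exact values f_hat(x_i)
e = rng.standard_normal(f_hat.size)          # zero-mean normally distributed entries
e *= 1e-3 * np.linalg.norm(f_hat) / np.linalg.norm(e)
f = f_hat + e                                # ||e||_2 / ||f_hat||_2 = 1e-3
```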
According to the notation in Table 1, for the compactly supported RBFs we set $r = 4$, for the Gaussian RBFs we set $a = 0.5$, while for the inverse multiquadrics we set $c = 1.5$. All computations were carried out in Matlab with about 16 significant decimal digits.

Example 1. We have considered the interpolation of the scalar function

$$\hat f(x, y) = \frac{\tanh(9y - 9x) + 1}{9},$$
shown in Fig. 2 (left), on the domain $[0,1]^2$. The scattered interpolation data set $X$ is shown in Fig. 3 on the left, while the transformed data set $Y = MX$ is shown on the right, where $M = \{2.71,\ 1.86,\ 1.86,\ 3.08\}$. The data sets are characterized by $q_X = 0.0078$, $h_X = 0.1455$, $q(X) = 0.0536$ and $q_Y = 0.0113$, $h_Y = 0.1539$, $q(Y) = 0.0737$, respectively.
Fig. 2. Analytic functions in Example 1 (left) and Example 2 (right).
Fig. 3. Example 1: Original data set X (left); transformed data set Y (right).
Table 2
Example 1

             | $K(A)$              | $K(A_T)$            | EMAX   | EMAX$_T$ | RMSE   | RMSE$_T$
Comp. supp.  | $9.4 \times 10^6$   | $4.7 \times 10^6$   | 0.171  | 0.016    | 0.005  | 0.0004
Inv. mult.   | $3.85 \times 10^{15}$ | $2.08 \times 10^{14}$ | 0.5696 | 0.0527   | 0.0121 | 0.0013
Gauss.       | $2.20 \times 10^{14}$ | $1.25 \times 10^{13}$ | 0.5567 | 0.1827   | 0.0107 | 0.0031
In order to see how sensitive the results of the metric regularization are to the different choices of the basis functions, we summarize in Table 2 the condition numbers and the corresponding reconstruction errors using isotropic and anisotropic RBFs for the computation of the coefficients of the linear system (2). The reconstruction errors are evaluated by $EMAX = \|E\|_\infty / (\max(E) - \min(E))$ and $RMSE = \|E\|_2 / n$, where $E = f - F$ is an error vector computed on the points of a $50 \times 50$ square grid that belong to the convex hull of the original data. We denote by a subscript $T$ the errors associated with anisotropic reconstructions. Figs. 4–6 show the function reconstructions and allow for a qualitative evaluation of the improvement due to the metric regularization.
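The two error measures can be computed as in the following sketch (hypothetical inputs: f_vals and F_vals denote the exact and interpolated values on the admissible grid points):

```python
import numpy as np

def error_measures(f_vals, F_vals):
    """EMAX = ||E||_inf / (max(E) - min(E)) and RMSE = ||E||_2 / n,
    with E = f - F evaluated on the n grid points inside the convex hull."""
    E = f_vals - F_vals
    emax = np.abs(E).max() / (E.max() - E.min())
    rmse = np.linalg.norm(E) / E.size
    return emax, rmse
```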
Example 2. In the second example, the analytic test function considered was

$$\hat f(x, y) = \frac{1}{2}\, y \cos^4\big(4(x^2 + y - 1)\big),$$

shown in Fig. 2 (right), on the domain $[0,1]^2$.
Fig. 4. Example 1: Reconstructions using isotropic compact support RBF (left); and anisotropic compact support RBF (right).
Fig. 5. Example 1: Reconstructions using isotropic inverse multiquadric RBF (left); and anisotropic inverse multiquadric RBF (right).
Fig. 6. Example 1: Reconstructions using isotropic Gaussian RBF (left); and anisotropic Gaussian RBF (right).
In Fig. 7 the scattered data set $X$ is represented on the left, while the transformed data set $Y = MX$ is shown on the right, where $M = \{1.73,\ 0.74,\ 0.74,\ 1.76\}$. The data sets are characterized by $q_X = 0.0012$, $h_X = 0.1322$, $q(X) = 0.0091$ and $q_Y = 0.0023$, $h_Y = 0.1702$, $q(Y) = 0.0138$, respectively.
Fig. 7. Example 2: Original data set X (left); transformed data set Y (right).
Table 3
Example 2

             | $K(A)$              | $K(A_T)$            | EMAX   | EMAX$_T$ | RMSE   | RMSE$_T$
Comp. supp.  | $1.92 \times 10^8$  | $3.69 \times 10^7$  | 0.1328 | 0.1739   | 0.0089 | 0.0065
Inv. mult.   | $1.05 \times 10^{15}$ | $3.08 \times 10^{13}$ | 0.7457 | 0.0803   | 0.0091 | 0.0037
Gauss.       | $8.52 \times 10^{13}$ | $2.40 \times 10^{12}$ | 0.9931 | 0.1161   | 0.0281 | 0.0048
Fig. 8. Example 2: Reconstructions using isotropic compact support RBF (left); and anisotropic compact support RBF (right).
Analogously to Example 1, in Table 3 we summarize the condition numbers and the reconstruction errors of the isotropic and anisotropic RBF interpolants, and Figs. 8–10 illustrate the different reconstructions obtained using, respectively, compactly supported RBFs, inverse multiquadric RBFs and Gaussian RBFs, in the isotropic (left) and anisotropic (right) settings.

From the numerical tests carried out we have noticed that, in general, the condition number decreases by about two orders of magnitude, which is not too impressive. Nevertheless, the metric regularization leads to a very effective improvement in the reconstruction. As we have already stressed, the good results obtained using the proposed choice of $T$ in the metric regularization are partially due to the fact that the data set follows the features of the function. This can be either a natural consequence of the acquisition tools (e.g. 3D scanners, topographic systems, etc.), or a user-chosen strategy in the selection of the local data set for local RBF interpolation. In fact, in the latter case, the data used for each local reconstruction are selected so that they follow the features of the surface to be reconstructed [3].
Fig. 9. Example 2: Reconstructions using isotropic inverse multiquadric RBF (left); and anisotropic inverse multiquadric RBF (right).
Fig. 10. Example 2: Reconstructions using isotropic Gaussian RBF (left); and anisotropic Gaussian RBF (right).
A complete study of all the possible metrics that could be suitable to regularize problem (2) is beyond the scope of the present work. In fact, this work is a starting point for the study of metric-regularization, which turns out to be essential not only for the considered problem of surface reconstruction, but also in several other mathematical application fields involving RBFs.

Acknowledgement

This work has been supported by MIUR-Cofin 2004, ex 60% project, and by the University of Bologna "Funds for selected research topics".

References

[1] R.K. Beatson, W.A. Light, S. Billings, Fast solution of the radial basis function interpolation equations: domain decomposition methods, SIAM J. Sci. Comput. 22 (5) (2000) 1717–1740.
[2] G. Casciola, D. Lazzaro, L.B. Montefusco, S. Morigi, Fast surface reconstruction and hole filling using radial basis functions, Numer. Algorithms 39 (2005) 289–305.
[3] G. Casciola, D. Lazzaro, L.B. Montefusco, S. Morigi, Shape preserving surface reconstruction using locally anisotropic RBF interpolants, Comput. Math. Appl. 51 (2006) 1185–1198.
[4] E.J. Kansa, R.E. Carlson, Improved accuracy of multiquadric interpolation using variable shape parameters, Comput. Math. Appl. 24 (12) (1992) 99–120.
[5] D. Lazzaro, L.B. Montefusco, Radial basis functions for the multivariate interpolation of large scattered data sets, J. Comput. Appl. Math. 140 (2002) 521–536.
[6] L. Ling, E.J. Kansa, Preconditioning for radial basis functions with domain decomposition methods, Math. Comput. Model. 40 (13) (2003) 1413–1427.
[7] F.J. Narcowich, J.D. Ward, Norm estimates for the inverses of a general class of scattered-data radial-function interpolation matrices, J. Approx. Theory 69 (1992) 84–109.
[8] F.J. Narcowich, N. Sivakumar, J.D. Ward, On condition numbers associated with radial-function interpolation, J. Math. Anal. Appl. 186 (1994) 457–485.
[9] R. Schaback, Lower bounds for norms of inverses of interpolation matrices for radial basis functions, J. Approx. Theory 79 (1994) 287–306.
[10] R. Schaback, Error estimates and condition numbers for radial basis function interpolation, Adv. Comput. Math. 3 (1995) 251–264.
[11] R. Schaback, Stability of radial basis function interpolants, in: C.K. Chui et al. (Eds.), Approximation Theory X: Wavelets, Splines, and Applications, Vanderbilt Univ. Press, Nashville, 2002, pp. 433–440.
[12] H. Wendland, Piecewise polynomial, positive definite and compactly supported radial functions of minimal degree, Adv. Comput. Math. 4 (1995) 389–396.
[13] H. Wendland, Error estimates for interpolation by compactly supported radial basis functions of minimal degree, J. Approx. Theory 93 (1998) 258–272.
[14] H. Wendland, Computational aspects of radial basis function approximation, in: K. Jetter et al. (Eds.), Topics in Multivariate Approximation and Interpolation, 2005.
[15] J.H. Wilkinson, The Algebraic Eigenvalue Problem, Clarendon Press, Oxford, 1965.
[16] Z. Wu, Multivariate compactly supported positive definite radial functions, Adv. Comput. Math. 4 (1995) 283–292.