Future Generation Computer Systems 20 (2004) 1337–1353
Functional networks for B-spline surface reconstruction

A. Iglesias, G. Echevarría, A. Gálvez

Department of Applied Mathematics and Computational Sciences, University of Cantabria, Avda. de los Castros s/n, E-39005 Santander, Spain

Available online 4 July 2004
Abstract

Recently, a new extension of the standard neural networks, the so-called functional networks, has been described [E. Castillo, Functional networks, Neural Process. Lett. 7 (1998) 151–159]. This approach has been successfully applied to the reconstruction of a surface from a given set of 3D data points assumed to lie on unknown Bézier [A. Iglesias, A. Gálvez, Applying functional networks to CAGD: the tensor-product surface problem, in: D. Plemenos (Ed.), Proceedings of the International Conference on Computer Graphics and Artificial Intelligence, 3IA'2000, 2000, pp. 105–115; A. Iglesias, A. Gálvez, A new artificial intelligence paradigm for computer-aided geometric design, in: J.A. Campbell, E. Roanes-Lozano (Eds.), Artificial Intelligence and Symbolic Computation, Lecture Notes in Artificial Intelligence, vol. 1930, Springer-Verlag, Berlin, Heidelberg, 2001, pp. 200–213] and B-spline tensor-product surfaces [A. Iglesias, A. Gálvez, Applying functional networks to fit data points from B-spline surfaces, in: H.H.S. Ip, N. Magnenat-Thalmann, R.W.H. Lau, T.S. Chua (Eds.), Proceedings of the Computer Graphics International, CGI'2001, IEEE Computer Society Press, Los Alamitos, CA, 2001, pp. 329–332]. In both cases the sets of data were fitted using Bézier surfaces. However, in general, the Bézier scheme is no longer used for practical applications. In this paper, the use of B-spline surfaces (by far the most common family of surfaces in surface modeling and industry) for the surface reconstruction problem is proposed instead. The performance of this method is discussed by means of several illustrative examples. A careful analysis of the errors makes it possible to determine the number of B-spline surface fitting control points that best fit the data points. This analysis also includes the use of two sets of data (the training and the testing data) to check for overfitting, which does not occur here.

© 2004 Elsevier B.V. All rights reserved.

Keywords: Neural networks; Functional networks; CAGD; Surface reconstruction; B-spline surfaces; Bézier surfaces; Artificial intelligence; Functional equations
1. Introduction

The problem of recovering the 3D shape of a surface, also known as surface reconstruction, has received
much attention in the last few years. For instance, in [13,33,34,36,37] the authors address the problem of obtaining a surface model from a set of given cross-sections. This is a typical problem in many research and application areas such as medical science, biomedical engineering and CAD/CAM, in which an object is often defined by a sequence of 2D cross-sections (acquired
from computer tomography, magnetic resonance imaging, ultrasound imaging, 3D laser scanning, etc.). Another approach consists of reconstructing surfaces from a given set of data points (see, e.g., [19,23,24,31,35,41]). Depending on the nature of these data points, two different approaches are employed: interpolation and approximation.¹ Approximation techniques are especially recommended when data are not exact but are subject to measurement errors. Another important reason to choose approximation is the great computational effort required to obtain surfaces by interpolating an infinite number of data (e.g., entire curves). An example is given in surface skinning problems, where we look for a smooth surface passing through a set of cross-sectional curves. In [39] it is mentioned that, using the NURBS representation, interpolation of only a few cross-sections of various types may require as many as hundreds of thousands of surface control points. Furthermore, because industrial parts can easily contain hundreds of surfaces, interpolation for these parts becomes unrealizable in practice. Finally, in many applications the data consist of a very large number of measurements, causing the number of basis functions to be very large as well. In addition, new measurement points may change the structure of the solution. The obvious solution to these problems is to consider an approximation scheme that allows a significant saving of time and memory. In this approach, the goal of the surface reconstruction methods can be stated as follows: given a set of sample points X assumed to lie on an unknown surface U, construct a surface model S that approximates U. Of course, this implies giving a tolerance error compatible with the manufacturing process without sacrificing the "quality of the shape". Following [30], this property can be described in terms of three different criteria: [C1] the "smoothness" of the surface, [C2] the "polygon regularity", and [C3] the distance from the obtained surface to the given set of points.

¹ The interested reader is referred to [11] and [32] for an introduction to the field. For the analysis of these methods within the framework of geometric modeling, we refer to [25,38]. In particular, [25] has many references (including several surveys) on this topic and even devotes a chapter to scattered data interpolation. Also, [38] includes a chapter on curve and surface fitting.
This problem has been analyzed from several points of view, such as parametric methods [4,5,20,41], function reconstruction [10,42], implicit surfaces [31,40], B-spline patches [35], etc. One of the most striking and promising approaches to this problem is that based on neural networks. A neural network consists basically of one or several layers of computing units, called neurons, connected by links. Each artificial neuron receives an input value from the input layer or from the neurons in the previous layer. Then it computes a scalar output y = f(Σ_k w_{ik} x_k) from a linear combination of the received inputs x_1, x_2, ..., x_n using a set of weights w_{ik} associated with each of the links and a given scalar function f (the activation function), which is assumed to be the same for all neurons. The interested reader is referred to [12] and [21] for a nice introduction to the field. Artificial neural networks have been recognized as a powerful tool for learning and simulating systems in a great variety of fields. Since the behavior of the brain is the inspiration behind neural networks, they are able to reproduce some of its most typical features, such as the ability to learn from data. This feature makes them especially valuable for solving problems in which one is interested in fitting a given set of data. For instance, the authors in [19] propose to fit surfaces through a standard neural network. Their approach is based on training the neural network to learn the relationship between the parametric variables and the data points. A more recent approach can be found in [22], in which a Kohonen neural network [26] has been applied to obtain free-form surfaces from scattered data. However, in this approach the network is used exclusively to order the data and create a grid of control vertices with quadrilateral topology. After this preprocessing step, any standard surface reconstruction method (such as those referenced above) has to be applied. Finally, a very recent work using a combination of neural networks and partial differential equation (PDE) techniques for the parameterization and reconstruction of surfaces from 3D scattered points can be found in [3]. It should be remarked, however, that the neural network scheme is not the "panacea" for the surface reconstruction problem. On the contrary, as shown in [27], some situations might require more sophisticated techniques. Among them, an extension of the
"neural" approach based on the so-called functional networks has been recently proposed [7,27]. These functional networks are a generalization of the standard neural networks in the sense that the weights are now replaced by neural functions, which can exhibit, in general, a multivariate character. In addition, when working with functional networks we are able to connect different neuron outputs at convenience. Furthermore, different neurons can be associated with neural functions from different families of functions. As a consequence, the functional networks exhibit more flexibility than the standard neural networks [7]. The performance of this new approach has been illustrated by its application to fit given sets of data from Bézier [27,28] and B-spline tensor-product surfaces [29]. In spite of these good results, the previous scheme is very limited in practice because the sets of data were fitted by means of Bézier surfaces in all cases. This is a drastic limitation because, in general, the Bézier scheme is no longer used for practical applications. The (more flexible) piecewise polynomial scheme (based on B-spline and NURBS surfaces) is usually applied in surface modeling and industry instead. The present paper applies this recently introduced functional network methodology to fit sets of given 3D data points through B-spline surfaces.

The structure of this paper is as follows: in Section 2 we briefly describe the B-spline surfaces. Then, in Section 3 the problem to be solved is introduced. A description of the main components of a functional network is given in Section 4. Differences between neural and functional networks will also be discussed in this section. Application of the general methodology to work with these networks and the required steps of the method are described in Section 5, while Section 6 reports the results obtained from the learning process for different examples of surfaces as well as a careful analysis of the errors. It includes the use of two sets of data (the training and the testing data) to check for overfitting. As we will show, this analysis makes it possible to determine the number of B-spline surface fitting control points that best fit the data points. Section 7 shows that the problem solved by our method is actually a generalization of the classical approach to Gordon–Coons surfaces. Finally, Section 8 closes with the main conclusions and further remarks on this work.
2. Some basic definitions

In this section, we give some basic definitions required throughout the paper. A more detailed discussion about B-spline surfaces can be found in [38].

Let S = {s_0, s_1, s_2, ..., s_{r-1}, s_r} be a nondecreasing sequence of real numbers called knots. S is called the knot vector. The i-th B-spline basis function N_{i,k}(s) of order k (or degree k − 1) is defined by the recurrence relations

N_{i,1}(s) = \begin{cases} 1, & \text{if } s_i \le s < s_{i+1} \\ 0, & \text{otherwise} \end{cases}   (1)

and

N_{i,k}(s) = \frac{s - s_i}{s_{i+k-1} - s_i}\, N_{i,k-1}(s) + \frac{s_{i+k} - s}{s_{i+k} - s_{i+1}}\, N_{i+1,k-1}(s)   (2)

for k > 1. With the same notation, given a set of three-dimensional control points {P_{ij}; i = 0, ..., m; j = 0, ..., n} in a bidirectional net and two knot vectors S = {s_0, s_1, ..., s_r} and T = {t_0, t_1, ..., t_h} with r = m + k and h = n + l, a B-spline surface S(s, t) of order (k, l) is defined by

S(s, t) = \sum_{i=0}^{m} \sum_{j=0}^{n} P_{ij}\, N_{i,k}(s)\, N_{j,l}(t)   (3)

where {N_{i,k}(s)}_i and {N_{j,l}(t)}_j are the B-spline basis functions of order k and l, respectively, defined following (1) and (2).
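For readers who prefer code to recursion formulas, the following is a minimal sketch (not part of the original paper) of how Eqs. (1)–(3) can be evaluated. It assumes Python with NumPy, uses the usual convention that terms with a zero denominator are dropped, and all function names are illustrative:

import numpy as np

def bspline_basis(i, k, s, knots):
    # Cox-de Boor recursion of Eqs. (1)-(2); terms with zero denominator are dropped (0/0 := 0).
    if k == 1:
        return 1.0 if knots[i] <= s < knots[i + 1] else 0.0
    value = 0.0
    d1 = knots[i + k - 1] - knots[i]
    if d1 > 0.0:
        value += (s - knots[i]) / d1 * bspline_basis(i, k - 1, s, knots)
    d2 = knots[i + k] - knots[i + 1]
    if d2 > 0.0:
        value += (knots[i + k] - s) / d2 * bspline_basis(i + 1, k - 1, s, knots)
    return value

def bspline_surface_point(s, t, P, S, T, k, l):
    # Tensor-product B-spline surface of Eq. (3); P has shape (m+1, n+1, 3),
    # S and T are knot vectors of lengths r + 1 = m + k + 1 and h + 1 = n + l + 1.
    m, n = P.shape[0] - 1, P.shape[1] - 1
    point = np.zeros(3)
    for i in range(m + 1):
        Ni = bspline_basis(i, k, s, S)
        if Ni == 0.0:
            continue
        for j in range(n + 1):
            point += P[i, j] * Ni * bspline_basis(j, l, t, T)
    return point

For instance, for Surface I below (m = n = 5, k = l = 3), a nonperiodic knot vector in the sense of [2] would be S = T = {0, 0, 0, 1, 2, 3, 4, 4, 4}; note that, because of the half-open interval in Eq. (1), the right end of the parameter domain has to be evaluated slightly below the last knot.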
3. Description of the problem

In this section we describe the problem we want to solve. It can be stated as follows: we look for the most general family of parametric surfaces P(s, t) such that their isoparametric curves (see [9] for a description) s = s̃_0 and t = t̃_0 are linear combinations of the sets of functions f(s) = {f_0(s), f_1(s), ..., f_m(s)} and f^*(t) = {f_0^*(t), f_1^*(t), ..., f_n^*(t)}, respectively. In other words, we look for surfaces P(s, t) such that they sat-
isfy the system of functional equations

P(s, t) \equiv \sum_{j=0}^{n} \alpha_j(s)\, f_j^*(t) = \sum_{i=0}^{m} \beta_i(t)\, f_i(s)   (4)

where the sets of coefficients {α_j(s); j = 0, 1, ..., n} and {β_i(t); i = 0, 1, ..., m} can be assumed, without loss of generality, to be sets of linearly independent functions. This problem cannot be solved with simple standard neural networks: to represent it in terms of a neural network, we would have to allow some neural functions to be different, while the neural functions in neural networks are always identical. Moreover, the neuron outputs of neural networks are different; however, in our scheme, some neuron outputs in the example are coincident. This implies that the neural networks paradigm should be generalized to include all these new features, which are incorporated into the functional networks (see [7]). To be more precise, our problem is described by the functional network in Fig. 1, which can be simplified (see Section 5, Step 3 for details) to the expression:

P(s, t) = \sum_{i=0}^{m} \sum_{j=0}^{n} P_{ij}\, f_i(s)\, f_j^*(t)   (5)

where the P_{ij} are elements of an arbitrary matrix P; therefore, P(s, t) is a tensor-product surface. Eq. (5) shows that the functional network in Fig. 1 can be simplified to the equivalent functional network in Fig. 2.

Fig. 1. Graphical representation of a functional network for the parametric surface of Eq. (4).
Fig. 2. Functional network associated with Eq. (5). It is equivalent to the functional network in Fig. 1.

This functional network is then applied to solve the surface reconstruction problem described above. In order to check the flexibility of our proposal, we have considered sets of 256 three-dimensional data points {T_uv; u, v = 1, ..., 16} (from here on, the training points) in a regular 16 × 16 grid from four different surfaces. This choice guarantees that criterion [C2] is satisfied. The first one (Surface I) is a B-spline surface given by (3) with the control points listed in Table 1, m = n = 5, k = l = 3 and nonperiodic knot vectors (according to the classification used in [2]) for both directions s and t. The other three surfaces (labelled as Surfaces II, III and IV) are explicit surfaces defined by
the following equations

z = y^3 - x^3 - y^2 + x^2 + xy   (Surface II)   (6)

z = 2(x^4 - y^4)   (Surface III)   (7)

z = \frac{0.8\,y^2 - 0.5\,x^2}{x^2 + y^2 + 0.1}   (Surface IV)   (8)
respectively. Note that in practical settings, although we are provided with the data points, no knowledge about the surface they come from is generally given (in fact, the surface to be reconstructed is initially assumed to be unknown). As will be shown later, such knowledge is not actually required for applying the proposed method. Thus, in real cases our input consists exclusively of a given set of data points. The Surfaces I–IV are used here for illustrative purposes only in order to allow the readers to compare the fitting surfaces with the original ones.
In order to check the robustness of the proposed method, the third coordinate of the 256 three-dimensional points (x_p, y_p, z_p) was slightly modified by adding a real uniform random variable ε_p of mean 0 and variance 0.05. Therefore, in the following, we consider points given by (x_p, y_p, z_p^*), where

z_p^* = z_p + ε_p,   ε_p ∈ (−0.05, 0.05)   (9)
Table 1
Control points used to define Surface I (each entry is a control point (x, y, z))

(0, 0, 1)  (1, 0, 2)  (2, 0, 1)  (3, 0, 3)  (4, 0, 2)  (5, 0, 1)
(0, 1, 2)  (1, 1, 3)  (2, 1, 4)  (3, 1, 4)  (4, 1, 2)  (5, 1, 2)
(0, 2, 3)  (1, 2, 4)  (2, 2, 5)  (3, 2, 5)  (4, 2, 4)  (5, 2, 3)
(0, 3, 3)  (1, 3, 2)  (2, 3, 5)  (3, 3, 1)  (4, 3, 4)  (5, 3, 3)
(0, 4, 2)  (1, 4, 3)  (2, 4, 4)  (3, 4, 2)  (4, 4, 3)  (5, 4, 2)
(0, 5, 1)  (1, 5, 2)  (2, 5, 3)  (3, 5, 3)  (4, 5, 2)  (5, 5, 1)
Such a random variable plays the role of a measurement error to be used in the estimation step to learn the functional form of P(s, t). We would like to remark that a drastic increase of the variance of ε_p in (9) could lead to a not sufficiently smooth surface, thus violating criterion [C1]. In those cases, learning might be improved by using the penalized least squares method proposed in [14]. Basically, the method consists of introducing a new term (the penalty term) which measures the smoothness of the fitting function. As a consequence, the cross-validation performed in Step 7 of our methodology (see Sections 5 and 6 for details) would be replaced by a generalized cross-validation.
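To make the data-generation setup of this section concrete, the following sketch (ours, not part of the original paper) samples Surface II of Eq. (6) on a regular 16 × 16 grid and perturbs its z-coordinate as in Eq. (9). It assumes NumPy; the sampled domain [−1, 1] × [−1, 1] is our own choice, since the paper does not state which region of the (x, y) plane was used:

import numpy as np

rng = np.random.default_rng(0)

def surface_II(x, y):
    # Explicit Surface II of Eq. (6): z = y^3 - x^3 - y^2 + x^2 + x*y.
    return y**3 - x**3 - y**2 + x**2 + x * y

# Regular 16 x 16 grid of sample points; the domain is an assumption of this sketch.
x = np.linspace(-1.0, 1.0, 16)
y = np.linspace(-1.0, 1.0, 16)
X, Y = np.meshgrid(x, y, indexing="ij")
Z = surface_II(X, Y)

# Eq. (9): add a uniform random perturbation in (-0.05, 0.05) to the third coordinate.
Z_noisy = Z + rng.uniform(-0.05, 0.05, size=Z.shape)

# The 256 training points (x_p, y_p, z_p*) used in the learning step.
training_points = np.column_stack([X.ravel(), Y.ravel(), Z_noisy.ravel()])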
4. Functional networks

In this section, we describe the main components of a functional network. Differences between neural and functional networks are also discussed in this section.

4.1. Components of a functional network

From Fig. 1 the main components of a functional network become clear:

(1) Several layers of storing units.
(a) A layer of input units. This first layer contains the input information. In this figure, this input layer consists of the units s and t.
(b) A set of intermediate layers of storing units. They are not neurons but units storing intermediate information. This set is optional and allows more than one neuron output to be connected to the same unit. In Fig. 1 there are two intermediate layers of storing units, which are represented by small circles in black.
(c) A layer of output units. This last layer contains the output information. In Fig. 1 this output layer is reduced to the unit P(s, t).
(2) One or more layers of neurons or computing units. A neuron is a computing unit which evaluates a set of input values, coming from the previous layer of input or intermediate units, and gives a set of output values to the next layer of intermediate or output units. Neurons are represented by circles with the name of the corresponding neural function inside. For example, in Fig. 1, we have three layers of neurons. The first one gives outputs of functions with one variable. The second layer exhibits the same function for all its neurons, the product operator. Similarly, the last layer exhibits the sum operator for its two neurons.
(3) A set of directed links. They connect the input or intermediate layers to their adjacent layer of neurons, and neurons of one layer to their adjacent intermediate layers or to the output layer. Connections are represented by arrows, indicating the information flow direction. We remark here that information flows in only one direction, from the input layer to the output layer.

All these elements together form the network architecture or topology of the functional network, which defines the functional capabilities of the network. For example, since units are organized in series of layers, the functional network in Fig. 1 is a multilayer network.
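As a programming analogy (ours, not the authors'), the simplified functional network of Fig. 2 can be read as a fixed three-stage pipeline: one-variable neuron functions applied to the input units s and t, product neurons weighted by the stored coefficients P_ij, and a final sum neuron. A minimal sketch with illustrative names:

import numpy as np

def functional_network_output(s, t, f, f_star, P):
    # f and f_star are lists of one-variable neuron functions (f_0, ..., f_m) and (f_0^*, ..., f_n^*);
    # P is the (m+1) x (n+1) array of coefficients stored in the product neurons.
    fs = np.array([fi(s) for fi in f])        # first layer of neurons on the s branch
    ft = np.array([fj(t) for fj in f_star])   # first layer of neurons on the t branch
    return float(fs @ P @ ft)                 # product neurons followed by the sum neuron: Eq. (5)

With f and f_star chosen as B-spline basis functions and P holding one coordinate of the control points, this is exactly one coordinate of the tensor-product surface of Eq. (3).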
4.2. Differences between functional and neural networks

Some of the differences between functional and neural networks were already discussed in Section 3. In this subsection, we discuss these differences and the advantages of using functional networks instead of standard neural networks.

(1) In neural networks each neuron returns an output y = f(Σ_k w_{ik} x_k) that depends only on the value Σ_k w_{ik} x_k, where x_1, x_2, ..., x_n are the received inputs. Therefore, their neural functions have only one argument. In contrast, neural functions in functional networks can have several arguments.
(2) In neural networks, the neural functions are univariate: neurons can show different outputs, but all of them represent the same values. In functional networks, the neural functions can be multivariate.
(3) In a given functional network the neural functions can be different, while in neural networks they are identical.
(4) In neural networks there are weights, which must be learned. These weights do not appear in functional networks, where neural functions are learned instead.
(5) In neural networks the neuron outputs are different, while in functional networks neuron outputs can be coincident. As we shall see, this fact leads to a set of functional equations, which have to be solved. These functional equations impose strong constraints leading to a considerable reduction in the degrees of freedom of the neural functions. In most cases, this implies that neural functions can be reduced in dimension or expressed as functions of smaller dimensions.

All these features show that the functional networks exhibit more interesting possibilities than standard neural networks. This implies that some problems (such as that introduced in Section 3) require functional networks instead of neural networks in order to be solved.

5. Working with functional networks

In this section, we describe how functional networks must be used. The functional networks methodology can be more easily understood by organizing it into eight different steps, which are shown below. For the sake of clarity, these steps are described by their application to the problem previously introduced in Section 3.

Step 1 (Statement of the problem). Understanding the problem to be solved. This is a crucial step, which has been done in Section 3.

Step 2 (Initial topology). Based on the knowledge of the problem, the topology of the initial functional network is selected. Thus, the system of functional equations (4) leads to the functional network in Fig. 1. Note that the above equations can be obtained from the network by considering the equality between the two values associated with the links connected to the output unit. We also remark that each of these values can be obtained in terms of the outputs of the preceding units by writing the outputs of the neurons as functions of their inputs, and so on.

Step 3 (Simplification). In this step, the initial functional network is simplified using functional equations. Given a functional network, an interesting problem consists of determining whether or not there exists another functional network giving the same output for any given input. This leads to the concept of equivalent functional networks. Two functional networks are said to be equivalent if they have the same input and output units and they give the same output for any given input. The practical importance of this concept is that we can define equivalence classes of functional networks, that is, sets of equivalent functional networks, and then choose the simplest in each class to be used in applications. As we show in the next paragraphs, functional equations constitute the main tool for simplifying functional networks. We refer to [1] for a survey on functional equations.

Coming back to our problem, it seems that the functions {α_j(s); j = 0, 1, ..., n} and {β_i(t); i = 0, 1, ..., m} have to be learned. However, the functional Eq. (4) puts strong constraints on them. In fact, the general solution of this functional equation is given by the following theorem:
Theorem 1. The most general family of parametric surfaces P(s, t) such that all their isoparametric curves s = s_0 and t = t_0 are linear combinations of the sets of linearly independent functions f(s) = {f_0(s), f_1(s), ..., f_m(s)} and f^*(t) = {f_0^*(t), f_1^*(t), ..., f_n^*(t)}, respectively, is of the form

P(s, t) = \sum_{i=0}^{m} \sum_{j=0}^{n} P_{ij}\, f_i(s)\, f_j^*(t) = f(s)\, P\, (f^*(t))^T   (10)

where (\cdot)^T indicates the transpose of a matrix and the P_{ij} are elements of an arbitrary (vector) matrix P; that is, they are tensor-product surfaces.

Two important conclusions can be derived from this theorem:

(1) No other functional forms for P(s, t) satisfy Eq. (4). So, no other neurons can replace the neurons β_i, α_j, f_i and f_j^*.²
(2) The functional structure of the solution is (10).

² From this point of view, Eq. (10) provides a characterization of the tensor-product surfaces, a remarkable question in CAGD.

Step 4 (Uniqueness of representation). In this step, conditions for the neural functions of the simplified functional network must be obtained. For the case of Eq. (10), two cases must be considered:
(1) The f_i(s) and f_j^*(t) functions are given: Assume that there are two matrices P = {P_{ij}} and P^* = {P^*_{ij}} such that

P(s, t) \equiv \sum_{i=0}^{m} \sum_{j=0}^{n} P_{ij}\, f_i(s)\, f_j^*(t) = \sum_{i=0}^{m} \sum_{j=0}^{n} P^*_{ij}\, f_i(s)\, f_j^*(t)   (11)

Solving the uniqueness of representation problem consists in solving Eq. (11). To this aim, we write (11) in the form

\sum_{i=0}^{m} \sum_{j=0}^{n} \left( P_{ij} - P^*_{ij} \right) f_i(s)\, f_j^*(t) = 0   (12)

Since the functions in the set {f_i(s) f_j^*(t) | i = 0, 1, ..., m, j = 0, 1, ..., n} are linearly independent, because the sets {f_i(s) | i = 0, 1, ..., m} and {f_j^*(t) | j = 0, 1, ..., n} are linearly independent, from (12) we have

P_{ij} = P^*_{ij},   i = 0, 1, ..., m,  j = 0, 1, ..., n

that is, the coefficients P_{ij} in (10) are unique.

(2) The f_i(s) and f_j^*(t) functions are to be learned: In this case, assume that there are two sets of functions {f_i(s), f_j^*(t)} and {\tilde{f}_i(s), \tilde{f}_j^*(t)}, and two matrices P and \tilde{P} such that

P(s, t) \equiv \sum_{i=0}^{m} \sum_{j=0}^{n} P_{ij}\, f_i(s)\, f_j^*(t) = \sum_{i=0}^{m} \sum_{j=0}^{n} \tilde{P}_{ij}\, \tilde{f}_i(s)\, \tilde{f}_j^*(t)

Then we have

\sum_{i=0}^{m} \sum_{j=0}^{n} P_{ij}\, f_i(s)\, f_j^*(t) - \sum_{i=0}^{m} \sum_{j=0}^{n} \tilde{P}_{ij}\, \tilde{f}_i(s)\, \tilde{f}_j^*(t) = 0   (13)

To solve this equation, we need to introduce the following theorem:

Theorem 2 (see Aczél [1], p. 160; see also [6]). All solutions of the equation

f(x) \cdot g(y) = \sum_{k=1}^{n} f_k(x)\, g_k(y) = 0

where f(x) = (f_1(x), ..., f_n(x)), g(y) = (g_1(y), ..., g_n(y)) and (\cdot) is used to denote the dot product of two vectors, can be written in the form

f(x) = \varphi(x)\, A,   g(y) = \psi(y)\, B

where \varphi(x) = (\varphi_1(x), ..., \varphi_r(x)), \psi(y) = (\psi_{r+1}(y), ..., \psi_n(y)), r is an integer between 0 and n, {\varphi_1(x), ..., \varphi_r(x)} and {\psi_{r+1}(y), ..., \psi_n(y)} are two arbitrary systems of linearly independent functions, and A and B are constant matrices which satisfy A B^T = 0.

Therefore, according to Theorem 2, the solution of (13) satisfies

\begin{pmatrix} \sum_{i=0}^{m} P_{i0} f_i(s) \\ \vdots \\ \sum_{i=0}^{m} P_{in} f_i(s) \\ \sum_{i=0}^{m} \tilde{P}_{i0} \tilde{f}_i(s) \\ \vdots \\ \sum_{i=0}^{m} \tilde{P}_{in} \tilde{f}_i(s) \end{pmatrix} = \begin{pmatrix} P^T \\ B \end{pmatrix} f(s)^T   (14)

and

\begin{pmatrix} f_0^*(t) \\ \vdots \\ f_n^*(t) \\ -\tilde{f}_0^*(t) \\ \vdots \\ -\tilde{f}_n^*(t) \end{pmatrix} = \begin{pmatrix} I \\ C \end{pmatrix} (f^*(t))^T   (15)

with

\left( P \mid B^T \right) \begin{pmatrix} I \\ C \end{pmatrix} = 0 \iff P = -B^T C

From (14) and (15) we get

\tilde{P}^T \tilde{f}(s)^T = B\, f(s)^T,   (\tilde{f}^*(t))^T = -C\, (f^*(t))^T

Expressions (14) and (15) give the relations between both equivalent solutions and the degrees of freedom we have. However, if we have to learn f(s) and f^*(t) we can approximate them as

f(s) = \phi(s)\, B,   f^*(t) = \psi(t)\, C

and we get

P(s, t) = f(s)\, P\, (f^*(t))^T = \phi(s)\, B\, P\, C^T \psi(t)^T = \phi(s)\, \tilde{P}\, \psi(t)^T
which is equivalent to (10) but with functions {φ(s), ψ(t)} instead of {f(s), f^*(t)}. Thus, this case is reduced to the first one.

Step 5 (Data collection). For the learning to be possible we need some data. We will employ those described in Section 3.
Step 6 (Learning). At this point, the neural functions of the network must be estimated (learned) by using some minimization method. In functional networks, this learning process consists in obtaining the neural functions based on a set of data D = {(I_i, O_i) | i = 1, ..., n}, where I_i and O_i are the i-th input and output, respectively, and n is the sample size. Usually, the learning process is based on minimizing the sum of squared errors of the actual and the observed outputs for the given inputs

Q = \sum_{i=1}^{n} \left( O_i - F(I_i) \right)^2

where F is the compound function giving the outputs, as a function of the inputs, for the given network topology. Note that this formulation corresponds exactly to the previously mentioned criterion [C3] for the quality of the shape. To this end, each neural function f_i is approximated by a linear combination of functions in a given family {φ_{i1}, ..., φ_{im_i}}. Thus, the approximated neural function f̂_i(x) becomes

\hat{f}_i(x) = \sum_{j=1}^{m_i} a_{ij}\, \phi_{ij}(x)

where x are the inputs associated with the i-th neuron. In the case of our example, the problem of learning the above functional network merely requires the neuron functions x(s, t), y(s, t) and z(s, t) to be estimated from a given sequence of triplets {(x_p, y_p, z_p), p = 1, ..., 256} which depend on s and t, so that x(s_p, t_p) = x_p and so on. For this purpose we build the sum of squared errors function:

Q_\alpha = \sum_{p=1}^{256} \left( \alpha_p - \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} a_{ij}\, \phi_i(s_p)\, \psi_j(t_p) \right)^2   (16)
where, in the present example, we must consider an error function for each variable x, y and z. This is indicated by α in the previous expression, so (16) must be interpreted as three different equations, for α = x, y and z, respectively. Applying the Lagrange multipliers to (16), the optimum value is obtained for

\frac{\partial Q_\alpha}{2\,\partial a_{\gamma\mu}} = \sum_{p=1}^{256} \left( \alpha_p - \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} a_{ij}\, \phi_i(s_p)\, \psi_j(t_p) \right) \phi_\gamma(s_p)\, \psi_\mu(t_p) = 0, \quad \gamma = 0, 1, \ldots, M-1, \ \mu = 0, 1, \ldots, N-1   (17)

On the other hand, a B-spline function is basically a piecewise polynomial function whose number of spans r is given by r = m + k − 1, where m and k are the number of the control points and the order, respectively. Hence, we need to make a decision between the following two possibilities:

• to fix the number of control points and to change the order of the B-spline, or
• to fix the order and then change the number of the control points.

In this paper we have considered the second option: to fit the 256 data points of our examples, we have used nonperiodic third-order B-spline basis functions {N_{i,3}(s)}_i and {N_{j,3}(t)}_j, that is, we have chosen {φ_i(s) = N_{i,3}(s) | i = 0, 1, ..., M − 1} and {ψ_j(t) = N_{j,3}(t) | j = 0, 1, ..., N − 1} in (16). Note that this choice guarantees that criterion [C1] is satisfied, because data are fitted with a B-spline surface. Furthermore, this choice is also very natural: the B-spline functions are frequently used in the framework of both surface reconstruction and approximation theory. In particular, the third-order B-spline functions are the most common curves and surfaces in research and industry. Finally, nonperiodic knot vectors mean that we force the B-spline surfaces to pass through the corner points of the control net, a very reasonable constraint in surface reconstruction. Therefore, we allow the parameters M and N in (17) to change. Of course, every different choice for M and N yields the corresponding system (17), which must be solved. Note that, since third-order B-spline functions are used, the minimum value for M and N is 3. However, this value implies that the B-spline surface is actually a Bézier surface [2], so we have taken values for M and N from 4 to 8. Solving the system (17) for all these cases, we obtain the control points associated with the B-spline surfaces fitting the data. The results will be discussed in the next section.

Step 7 (Model validation). At this step, a test for quality and/or the cross-validation of the model is performed. Checking the obtained error is important to see whether or not the selected family of approximating functions is adequate. A cross-validation of the model is also convenient. This task will be performed in the next section.

Step 8 (Use of the model). Once the model has been satisfactorily validated, it is ready to be used in predicting new points on the surface.

6. Results

To test the quality of the model we have calculated the mean and the root mean squared (RMS) errors for M and N from 4 to 8 and for the 256 training data points from the four surfaces described in Section 3. Table 2 refers to Surface I. As the reader can appreciate, the errors (which, of course, depend on the values of M and N) are very small, indicating that the approach is reasonable. The best choice corresponds to M = N = 6, as expected, because the data points come from a B-spline surface defined through a net of 6 × 6 control points. In this case, the mean and the RMS errors are 0.0085 and 0.00071, respectively. Table 3 shows the control points for the reconstructed Surface I corresponding to the best case M = N = 6. They were obtained by solving the system (17) with a floating-point precision and removing the zeroes when redundant. A simple comparison with Table 1 shows that the corresponding x and y coordinates are exactly the same, which is as expected because they were not affected by the noise. On the contrary, since noise was applied to the z coordinate, the corresponding values are obviously not the same but very similar, indicating that we have obtained a very good approximation. The approximating B-spline surface is shown in Fig. 3 (top-left) and it is virtually indistinguishable from the original surface. To cross-validate the model, we have also used the fitted model to predict a new set of 1024 testing data points, and calculated the mean and the RMS errors, obtaining the results shown in Table 4. The new results confirm our previous choice for M and N.
Table 2
Mean and root mean squared errors of the z-coordinate of the 256 training points from Surface I for different values of M and N (mean error first, RMS error in parentheses)

M \ N        4                   5                   6                   7                   8
4       0.1975 (0.00919)    0.1000 (0.00798)    0.0941 (0.00762)    0.0945 (0.00764)    0.0943 (0.00763)
5       0.1229 (0.00873)    0.0939 (0.00743)    0.0885 (0.00700)    0.0888 (0.00703)    0.0886 (0.00702)
6       0.0676 (0.00528)    0.0354 (0.00265)    0.0085 (0.00071)    0.0115 (0.00090)    0.0093 (0.00082)
7       0.0691 (0.00547)    0.0387 (0.00301)    0.0208 (0.00163)    0.0221 (0.00172)    0.0217 (0.00168)
8       0.0678 (0.00531)    0.0356 (0.00270)    0.0117 (0.00093)    0.0139 (0.00109)    0.0131 (0.00103)
Table 3
Control points of the reconstructed Surface I (each entry is a control point (x, y, z))

(0, 0, 1.0382)  (1, 0, 2.0048)  (2, 0, 1.007)   (3, 0, 2.9866)  (4, 0, 2.0302)  (5, 0, 0.9757)
(0, 1, 1.9897)  (1, 1, 2.9945)  (2, 1, 3.9777)  (3, 1, 4.004)   (4, 1, 1.9729)  (5, 1, 2.0232)
(0, 2, 3.047)   (1, 2, 4.0228)  (2, 2, 4.9951)  (3, 2, 5.0283)  (4, 2, 4.0344)  (5, 2, 3.0047)
(0, 3, 2.9435)  (1, 3, 1.9602)  (2, 3, 5.0357)  (3, 3, 0.9122)  (4, 3, 4.004)   (5, 3, 3.009)
(0, 4, 2.0411)  (1, 4, 2.9981)  (2, 4, 3.9554)  (3, 4, 2.0926)  (4, 4, 2.9637)  (5, 4, 1.9567)
(0, 5, 0.9777)  (1, 5, 2.0028)  (2, 5, 3.0221)  (3, 5, 2.968)   (4, 5, 2.0087)  (5, 5, 1.0423)
Table 4
Mean and root mean squared errors of the z-coordinate of the 1024 testing points from Surface I for different values of M and N

M \ N        4                   5                   6                   7                   8
4       0.1118 (0.00441)    0.0943 (0.00384)    0.0887 (0.00366)    0.0889 (0.00367)    0.0889 (0.00366)
5       0.10599 (0.00422)   0.0888 (0.00363)    0.0830 (0.00342)    0.0833 (0.00343)    0.0832 (0.00342)
6       0.0649 (0.00252)    0.0341 (0.00130)    0.0078 (0.00032)    0.0109 (0.00042)    0.0093 (0.00038)
7       0.0668 (0.00263)    0.0381 (0.00149)    0.0203 (0.00081)    0.0216 (0.00085)    0.0213 (0.00084)
8       0.0651 (0.00253)    0.0345 (0.00133)    0.0111 (0.00043)    0.0133 (0.00051)    0.0125 (0.00049)
Fig. 3. B-spline approximating surfaces of those labelled Surfaces I, II, III and IV, respectively (top–bottom, left–right). Their corresponding equations are described in Section 3
Table 5
Mean and root mean squared errors of the z-coordinate of the 256 training points from Surface II for different values of M and N

M \ N        4                   5                   6                   7                   8
4       0.0052 (0.00040)    0.0052 (0.00041)    0.0060 (0.00047)    0.0069 (0.00051)    0.0069 (0.00054)
5       0.0053 (0.00045)    0.0056 (0.00049)    0.0065 (0.00054)    0.0074 (0.00060)    0.0075 (0.00065)
6       0.0060 (0.00048)    0.0063 (0.00052)    0.0073 (0.00059)    0.0078 (0.00064)    0.0084 (0.00070)
7       0.0062 (0.00050)    0.0065 (0.00053)    0.0074 (0.00060)    0.0080 (0.00065)    0.0085 (0.00071)
8       0.0068 (0.00055)    0.0071 (0.00059)    0.0072 (0.00065)    0.0089 (0.00073)    0.0092 (0.00078)
Table 6
Mean and root mean squared errors of the z-coordinate of the 1024 testing points from Surface II for different values of M and N

M \ N        4                   5                   6                   7                   8
4       0.0049 (0.00018)    0.0050 (0.00019)    0.0057 (0.00022)    0.0064 (0.00024)    0.0067 (0.00026)
5       0.0049 (0.00020)    0.0052 (0.00021)    0.0061 (0.00024)    0.0069 (0.00027)    0.0072 (0.00030)
6       0.0057 (0.00022)    0.0059 (0.00023)    0.0069 (0.00027)    0.0074 (0.00030)    0.0079 (0.00032)
7       0.0059 (0.00023)    0.0062 (0.00024)    0.0071 (0.00028)    0.0075 (0.00030)    0.0080 (0.00033)
8       0.0065 (0.00025)    0.0067 (0.00027)    0.0073 (0.00030)    0.0085 (0.00035)    0.0088 (0.00036)
A comparison between the mean and RMS error values for the training and testing data shows that, for our choice, they are comparable. Thus, we can conclude that no overfitting occurs. Note that a variance for the training data significantly smaller than the variance for the testing data is a clear indication of overfitting. This does not occur here. A similar analysis was carried out for the other surfaces described in Section 3. For example, Tables 5 and 6 show the results for the training and the testing points used for Surface II, respectively. In this case, the best choice for M and N corresponds to M = N = 4. This result is not surprising, as the polynomial equation of the surface has the same degrees for both variables x and y (see Eq. (6)). Note that the polynomial degree of Surface II is 3 for both variables and the order of the approximating B-spline surface is 4 (i.e., degree 3). The mean and RMS errors for this best choice are 0.0052 and 0.00040, respectively, for the training points and 0.0049 and 0.00018 for the testing points. The approximating B-spline surface is shown in Fig. 3 (top-right).

Tables 7 and 8 show the results corresponding to Surface III (given by Eq. (7)). It has a more complex shape, so larger values of M and N, namely either M = 6 and N = 7 or M = 7 and N = 7, are required for the best fitting. This stems from the fact that both choices minimize the mean and RMS errors. In the first case, (M, N) = (6, 7), the mean and RMS errors are 0.0095 and 0.00073, respectively, for the training points and 0.0091 and 0.00035 for the testing points, while they are 0.0096 and 0.00076 for the training points and 0.0090 and 0.00035 for the testing points in the second case, (M, N) = (7, 7). The approximating B-spline surface is shown in Fig. 3 (bottom-left).

Finally, Surface IV, which is a rational polynomial function (see Eq. (8)), is best fitted for M = N = 7. The results for this example are shown in Tables 9 and 10 for the training and testing points, respectively. For the best choice, the mean and RMS errors are 0.0143 and 0.00114, respectively, for the training points and 0.0139 and 0.00055 for the testing points. The approximating B-spline surface is displayed in Fig. 3 (bottom-right).
Table 7
Mean and root mean squared errors of the z-coordinate of the 256 training points from Surface III for different values of M and N

M \ N        4                   5                   6                   7                   8
4       0.2111 (0.01642)    0.1668 (0.01170)    0.1667 (0.01163)    0.1667 (0.01162)    0.1667 (0.01162)
5       0.1661 (0.01170)    0.0269 (0.00201)    0.0213 (0.00155)    0.0209 (0.00150)    0.0211 (0.00152)
6       0.1661 (0.01162)    0.0212 (0.00153)    0.0100 (0.00083)    0.0095 (0.00073)    0.0109 (0.00078)
7       0.1661 (0.01162)    0.0211 (0.00153)    0.0107 (0.00085)    0.0096 (0.00076)    0.0102 (0.00080)
8       0.1661 (0.01162)    0.0212 (0.00155)    0.0112 (0.00089)    0.0101 (0.00082)    0.0108 (0.00086)
Table 8
Mean and root mean squared errors of the z-coordinate of the 1024 testing points from Surface III for different values of M and N

M \ N        4                   5                   6                   7                   8
4       0.1984 (0.00766)    0.1561 (0.00548)    0.1555 (0.00545)    0.1555 (0.00545)    0.1555 (0.00545)
5       0.1556 (0.00547)    0.0254 (0.00097)    0.0209 (0.00075)    0.0205 (0.00073)    0.0207 (0.00074)
6       0.1549 (0.00543)    0.0203 (0.00073)    0.0103 (0.00040)    0.0091 (0.00035)    0.0098 (0.00037)
7       0.1547 (0.00543)    0.0202 (0.00073)    0.0103 (0.00040)    0.0090 (0.00035)    0.0098 (0.00037)
8       0.1547 (0.00543)    0.0204 (0.00074)    0.0107 (0.00042)    0.0095 (0.00038)    0.0102 (0.00040)
Table 9
Mean and root mean squared errors of the z-coordinate of the 256 training points from Surface IV for different values of M and N

M \ N        4                   5                   6                   7                   8
4       0.1022 (0.00803)    0.0722 (0.00552)    0.0756 (0.00586)    0.0697 (0.00520)    0.0701 (0.00531)
5       0.0817 (0.00640)    0.0328 (0.00263)    0.0415 (0.00329)    0.0222 (0.00180)    0.0255 (0.00212)
6       0.0838 (0.00661)    0.0378 (0.00309)    0.0449 (0.00367)    0.0305 (0.00243)    0.0327 (0.00268)
7       0.0811 (0.00625)    0.0268 (0.00222)    0.0363 (0.00298)    0.0143 (0.00114)    0.0199 (0.00161)
8       0.0814 (0.00630)    0.0291 (0.00237)    0.0381 (0.00310)    0.0176 (0.00143)    0.0224 (0.00183)
Table 10
Mean and root mean squared errors of the z-coordinate of the 1024 testing points from Surface IV for different values of M and N

M \ N        4                   5                   6                   7                   8
4       0.1002 (0.00399)    0.0722 (0.00274)    0.0751 (0.00292)    0.0695 (0.00258)    0.0706 (0.00264)
5       0.0826 (0.00319)    0.0319 (0.00131)    0.0409 (0.00165)    0.0220 (0.00089)    0.0258 (0.00107)
6       0.0846 (0.00329)    0.0374 (0.00154)    0.0444 (0.00185)    0.0304 (0.00122)    0.0327 (0.00135)
7       0.0810 (0.00311)    0.0264 (0.00109)    0.0363 (0.00150)    0.0139 (0.00055)    0.0199 (0.00081)
8       0.0815 (0.00314)    0.0287 (0.00118)    0.0380 (0.00156)    0.0178 (0.00070)    0.0226 (0.00092)
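To make Steps 5–7 of the methodology concrete, the following sketch (ours, not from the paper) assembles and solves the linear least-squares problem behind Eqs. (16)–(17) for one choice of M and N, and reports mean and RMS errors on training and testing grids. It assumes Python with NumPy; the parameterization (the sampling grid itself), the knot construction, the domain and all names are illustrative assumptions, and the numbers it prints are not meant to reproduce the tables above exactly:

import numpy as np

def bspline_basis(i, k, s, knots):
    # Cox-de Boor recursion of Eqs. (1)-(2), with the 0/0 := 0 convention.
    if k == 1:
        return 1.0 if knots[i] <= s < knots[i + 1] else 0.0
    val = 0.0
    d1 = knots[i + k - 1] - knots[i]
    if d1 > 0.0:
        val += (s - knots[i]) / d1 * bspline_basis(i, k - 1, s, knots)
    d2 = knots[i + k] - knots[i + 1]
    if d2 > 0.0:
        val += (knots[i + k] - s) / d2 * bspline_basis(i + 1, k - 1, s, knots)
    return val

def clamped_knots(n_ctrl, k):
    # Nonperiodic (clamped) knot vector rescaled to [0, 1]: k repeated knots at each end.
    interior = np.arange(1, n_ctrl - k + 1, dtype=float)
    knots = np.concatenate([np.zeros(k), interior, np.full(k, n_ctrl - k + 1.0)])
    return knots / (n_ctrl - k + 1.0)

def basis_matrix(params, n_ctrl, k, knots):
    # Rows: parameter values; columns: basis functions phi_i (or psi_j) evaluated there.
    A = np.zeros((len(params), n_ctrl))
    for p, s in enumerate(params):
        s = min(s, knots[-1] - 1e-12)   # keep the last sample inside the half-open support
        for i in range(n_ctrl):
            A[p, i] = bspline_basis(i, k, s, knots)
    return A

# Training data: Surface II of Eq. (6) on a 16 x 16 grid with the noise of Eq. (9)
# (the unit-square domain and the grid parameterization are assumptions of this sketch).
M, N, order = 6, 6, 3
s_train = np.linspace(0.0, 1.0, 16)
t_train = np.linspace(0.0, 1.0, 16)
X, Y = np.meshgrid(s_train, t_train, indexing="ij")
Z = Y**3 - X**3 - Y**2 + X**2 + X * Y
Z += np.random.default_rng(1).uniform(-0.05, 0.05, Z.shape)

# Step 6: the coefficients a_ij enter (16) linearly, so the normal equations (17)
# amount to an ordinary linear least-squares problem for the M*N unknowns.
S_knots, T_knots = clamped_knots(M, order), clamped_knots(N, order)
Bs = basis_matrix(s_train, M, order, S_knots)   # phi_i(s_p)
Bt = basis_matrix(t_train, N, order, T_knots)   # psi_j(t_q)
A = np.einsum("pi,qj->pqij", Bs, Bt).reshape(-1, M * N)
coeffs, *_ = np.linalg.lstsq(A, Z.reshape(-1), rcond=None)
control_z = coeffs.reshape(M, N)                # z-coordinates of the fitted control net

# Step 7: training errors (mean absolute and RMS, in the spirit of Tables 2-10).
res = Z.reshape(-1) - A @ coeffs
print("training:", np.mean(np.abs(res)), np.sqrt(np.mean(res**2)))

# Cross-validation on a denser grid of testing points (cf. the 1024 testing points above).
s_test = np.linspace(0.0, 1.0, 32)
t_test = np.linspace(0.0, 1.0, 32)
At = np.einsum("pi,qj->pqij", basis_matrix(s_test, M, order, S_knots),
               basis_matrix(t_test, N, order, T_knots)).reshape(-1, M * N)
Xt, Yt = np.meshgrid(s_test, t_test, indexing="ij")
Zt = Yt**3 - Xt**3 - Yt**2 + Xt**2 + Xt * Yt
res_t = Zt.reshape(-1) - At @ coeffs
print("testing: ", np.mean(np.abs(res_t)), np.sqrt(np.mean(res_t**2)))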
7. Extending the Gordon–Coons surfaces scheme

The method introduced in this paper takes advantage of the two classical schemes for fitting surfaces from data points: interpolation and approximation. The first one is applied to obtain the surface interpolating some prescribed isoparametric curves and consists of a transfinite interpolation, which is used to obtain the basis surface defining the topology of the functional network. Such a basis surface is obtained from Eq. (4) by solving the resulting functional equation, leading to a simpler functional network (Step 3 of our development). The second family of methods is given by the approximation scheme, which means that the resulting surface does not pass through the points exactly but only lies close to them, where this last condition is established on the basis of a given norm. Among them, the least-squares approximants (L2-norm) appear as the most usual tool in this framework. Of course, other norms are also possible but computationally much less efficient for large problems, for which the matrix structure cannot so readily be exploited [8]. Since this scheme is especially valuable when data points are affected by errors (given by Eq. (9) in our case), it has been applied in Step 6 in order to fit a set of data points required for the learning process. As a consequence of this minimization process, different B-spline surfaces have been obtained. These solutions are, therefore, the surfaces interpolating the isoparametric curves defined by the conditions in Eq. (4) and minimizing the approximation error to the data points. As a conclusion, we have solved a mixed problem given by:
• a transfinite interpolation and
• a discrete approximation of the mesh data.

In this sense, this work is strongly related to a classical work from Gordon [15–18]. In that work, we are given two families of parametric curves

{g_i(t) | i = 1, 2, ..., M}  and  {f_j(s) | j = 1, 2, ..., N}   (18)

These curves intersect in a set of space points whose coordinates are obtained for some values of the parameters t_1 < t_2 < ... < t_N and s_1 < s_2 < ... < s_M. For these two families to define a surface they must satisfy the following compatibility conditions

g_i(t_j) = f_j(s_i),   ∀ i, j   (19)

Then, it is possible to build a surface v(s, t) interpolating the M + N given curves, that is, satisfying the system of vector equations

v(s_i, t) = g_i(t),  i = 1, 2, ..., M
v(s, t_j) = f_j(s),  j = 1, 2, ..., N

To solve this problem, Gordon proposed the solution given by the following theorem:

Theorem 3. If {φ_i(s) | i = 1, 2, ..., M} and {ψ_j(t) | j = 1, 2, ..., N} are any two sets of functions such that they satisfy the conditions

φ_i(s_k) = δ_ik,   ψ_j(t_k) = δ_jk

where δ_ij is Kronecker's delta function, and if {g_i(t) | i = 1, 2, ..., M} and {f_j(s) | j = 1, 2, ..., N} are any two sets of functions such that the following compatibility conditions are satisfied:

u_ij = g_i(t_j) = f_j(s_i)

then the bivariate function

v(s, t) = \sum_{i=1}^{M} g_i(t)\, \phi_i(s) + \sum_{j=1}^{N} f_j(s)\, \psi_j(t) - \sum_{i=1}^{M} \sum_{j=1}^{N} u_{ij}\, \phi_i(s)\, \psi_j(t)

is one solution of the interpolation problem

v(s_k, t) = g_k(t),  k = 1, 2, ..., M
v(s, t_p) = f_p(s),  p = 1, 2, ..., N

It is worthwhile mentioning that the previous theorem also gives a solution to the problem of interpolating isoparametric curves subjected to the constraints (19). Therefore, the following problem is solved:

• a transfinite interpolation of the families of curves (18) and
• a discrete interpolation of the mesh data given in (19).

From this point of view, the present paper is a generalization of Gordon's work in the sense that now the mesh data do not necessarily belong to the surface; in other words, the discrete interpolation is now replaced by a discrete approximation. Obviously, the resulting surface is no longer an interpolating surface, but only an approximation. In addition, Gordon's approach only applies to points on a grid, which is used for establishing the compatibility conditions assuring the existence of a solution. In contrast, our model is also able to deal with scattered data, because the data points are used for solving system (17) only, and no other restrictions are imposed. In all cases, the present method returns the best surface approximating the data points in the sense of least squares. At the limit, as the data points go to the surface, this approximating surface goes to the interpolating one from Theorem 3, called the Gordon–Coons surface.
Furthermore, it has been proved that under certain conditions (given in [6]) this interpolating surface is unique and so is the approximating surface of this paper. This uniqueness is a direct consequence of being the solution of the least-squares problem in (17).
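As a small companion to Theorem 3, the following sketch (ours, not from the paper) evaluates a Gordon surface from two compatible families of curves, using Lagrange polynomials as one possible choice of blending functions satisfying φ_i(s_k) = δ_ik and ψ_j(t_k) = δ_jk; all names and the example curves are illustrative:

import numpy as np

def lagrange_basis(nodes, k, x):
    # Lagrange polynomial L_k over the given nodes: L_k(nodes[m]) = delta_km.
    val = 1.0
    for m, xm in enumerate(nodes):
        if m != k:
            val *= (x - xm) / (nodes[k] - xm)
    return val

def gordon_surface(s, t, s_nodes, t_nodes, g, f):
    # Theorem 3: v(s,t) = sum_i g_i(t) phi_i(s) + sum_j f_j(s) psi_j(t) - sum_ij u_ij phi_i(s) psi_j(t),
    # where u_ij = g_i(t_j) = f_j(s_i) are the (compatible) mesh data of Eq. (19).
    M, N = len(s_nodes), len(t_nodes)
    u = np.array([[g[i](t_nodes[j]) for j in range(N)] for i in range(M)])
    phi = np.array([lagrange_basis(s_nodes, i, s) for i in range(M)])
    psi = np.array([lagrange_basis(t_nodes, j, t) for j in range(N)])
    return sum(phi[i] * g[i](t) for i in range(M)) \
         + sum(psi[j] * f[j](s) for j in range(N)) \
         - phi @ u @ psi

# Example (scalar z-component only): the bilinearly blended Coons patch is the case M = N = 2.
s_nodes, t_nodes = [0.0, 1.0], [0.0, 1.0]
g = [lambda t: 0.0, lambda t: np.sin(np.pi * t)]   # z along the isoparametric curves s = 0 and s = 1
f = [lambda s: 0.0, lambda s: 0.0]                 # z along t = 0 and t = 1 (corners are compatible)
print(gordon_surface(0.5, 0.5, s_nodes, t_nodes, g, f))

The approximation scheme of this paper replaces the exact mesh interpolation u_ij = g_i(t_j) by the least-squares fit of system (17), which is what allows noisy and, potentially, scattered data.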
8. Conclusions and further remarks

In this paper a powerful extension of neural networks, the so-called functional networks, has been applied to the surface reconstruction problem. Given a set of 3D data points, the functional network returns the control points and the degree of the B-spline surface that best fits these data points. A careful analysis of the error as a function of the number of the control points has also been carried out. The obtained results show that all these new functional network features allow the surface reconstruction problem to be solved in several cases. As a conclusion, the methodology to deal with functional networks can be established as follows:

(1) Obtain the functional network representing the problem. In our example, this is the functional network in Fig. 1, which represents all the tensor-product surfaces.
(2) Simplify the network, if possible. For example, Fig. 2 represents the simplest functional network describing the tensor-product surfaces.
(3) Use the data points for the learning process, which leads to a system of equations whose solutions are the coefficients (control points) of the approximating surface.
(4) Calculate the error for different degrees and/or numbers of control points.
(5) Cross-validate the fitted model by predicting a new set of testing data points. This step is required to check for overfitting.

The main advantages of functional networks in our framework are:

• the possibility of using different neural functions,
• the multivariate character of neural functions,
• the possibility of connecting neuron outputs, and
• the fact that neural functions instead of weights are learned.
We would like to remark that our approach is very general: the data points may come from any kind of surface (in fact, in this paper we have used data points from parametric and explicit surfaces) and the approximating surface can be written in terms of any arbitrary family of basis functions. Due to this generality, we think that this approach opens a promising new line of research, as functional networks might be applied to many other challenging problems in surface modeling. On the other hand, additional work is needed to clearly establish the limitations of our approach. For example, in this paper uniform meshes of data points have been considered. Our interest now lies in exploring the constraints (if any) to be put on unorganized data points in order to apply our functional network methodology. Preliminary results have shown that a pre-processing step might be necessary, but this assessment is currently unclear and consequently further research is still required. Our future results will be reported elsewhere.
References

[1] J. Aczél, Lectures on Functional Equations and their Applications, Academic Press, San Diego, 1966.
[2] V. Anand, Computer Graphics and Geometric Modeling for Engineers, John Wiley & Sons, New York, 1993.
[3] J. Barhak, A. Fischer, Parameterization and reconstruction from 3D scattered points based on neural network and PDE techniques, IEEE Trans. Visualization Comput. Graph. 7 (1) (2001) 1–16.
[4] R.M. Bolle, B.C. Vemuri, On three-dimensional surface reconstruction methods, IEEE Trans. Pattern Anal. Machine Intell. 13 (1) (1991) 1–13.
[5] J.F. Brinkley, Knowledge-driven ultrasonic three-dimensional organ modeling, IEEE Trans. Pattern Anal. Machine Intell. 7 (4) (1985) 431–441.
[6] E. Castillo, A. Iglesias, Some characterizations of families of surfaces using functional equations, ACM Trans. Graph. 16 (3) (1997) 296–318.
[7] E. Castillo, Functional networks, Neural Process. Lett. 7 (1998) 151–159.
[8] M.G. Cox, Algorithms for spline curves and surfaces, in: L. Piegl (Ed.), Fundamental Developments of Computer-Aided Geometric Design, Academic Press, London, San Diego, 1993, pp. 51–76.
[9] G.E. Farin, Curves and Surfaces for Computer-Aided Geometric Design, 5th ed., Morgan Kaufmann, San Francisco, 2001.
[10] T.A. Foley, Interpolation to scattered data on a spherical domain, in: J.C. Mason, M.G. Cox (Eds.), Algorithms for Approximation, vol. II, Chapman & Hall, London, New York, 1990, pp. 303–310.
[11] R.H. Franke, L.L. Schumaker, A bibliography of multivariate approximation, in: C.K. Chui, L.L. Schumaker, F.I. Utreras (Eds.), Topics in Multivariate Approximation, Academic Press, New York, 1986.
[12] J.A. Freeman, Simulating Neural Networks with Mathematica, Addison Wesley, Reading, MA, 1994.
[13] H. Fuchs, Z.M. Kedem, S.P. Uselton, Optimal surface reconstruction from planar contours, Commun. ACM 20 (10) (1977) 693–702.
[14] M.V. Golitschek, L.L. Schumaker, Data fitting by penalized least squares, in: J.C. Mason, M.G. Cox (Eds.), Algorithms for Approximation, vol. II, Chapman & Hall, London, New York, 1990, pp. 210–227.
[15] W.J. Gordon, Distributive lattices and the approximation of multivariate functions, in: Proceedings of the Symposium on Approximation with Special Emphasis on Spline Functions, University of Wisconsin, 1969, pp. 223–277.
[16] W.J. Gordon, Spline-blended surface interpolation through curve networks, J. Math. Mech. 18 (10) (1969) 931–952.
[17] W.J. Gordon, Blending-function methods of bivariate and multivariate interpolation and approximation, SIAM J. Numer. Anal. 8 (1971) 158–177.
[18] W.J. Gordon, Sculptured surface definition via blending-function methods, in: L. Piegl (Ed.), Fundamental Developments of Computer-Aided Geometric Design, Academic Press, London, San Diego, 1993, pp. 117–134.
[19] P. Gu, X. Yan, Neural network approach to the reconstruction of free-form surfaces for reverse engineering, Comput. Aided Des. 27 (1) (1995) 59–64.
[20] T. Hastie, W. Stuetzle, Principal curves, J. Am. Stat. Assoc. 84 (1989) 502–516.
[21] J. Hertz, A. Krogh, R.G. Palmer, Introduction to the Theory of Neural Computation, Addison Wesley, Reading, MA, 1991.
[22] M. Hoffmann, L. Varady, Free-form surfaces for scattered data by neural networks, J. Geometry Graph. 2 (1998) 1–6.
[23] H. Hoppe, T. DeRose, T. Duchamp, J. McDonald, W. Stuetzle, Surface reconstruction from unorganized points, Proc. SIGGRAPH'92, Comput. Graph. 26 (2) (1992) 71–78.
[24] H. Hoppe, Surface Reconstruction from Unorganized Points, Ph.D. Thesis, Department of Computer Science and Engineering, University of Washington, 1994.
[25] J. Hoschek, D. Lasser, Fundamentals of Computer Aided Geometric Design, A.K. Peters, Wellesley, MA, 1993.
[26] T. Kohonen, Self-Organization and Associative Memory, 3rd ed., Springer-Verlag, Berlin, 1989.
[27] A. Iglesias, A. Gálvez, Applying functional networks to CAGD: the tensor-product surface problem, in: D. Plemenos (Ed.), Proceedings of the International Conference on Computer Graphics and Artificial Intelligence, 3IA'2000, 2000, pp. 105–115.
[28] A. Iglesias, A. Gálvez, A new artificial intelligence paradigm for computer-aided geometric design, in: J.A. Campbell, E. Roanes-Lozano (Eds.), Artificial Intelligence and Symbolic Computation, Lecture Notes in Artificial Intelligence, vol. 1930, Springer-Verlag, Berlin, Heidelberg, 2001, pp. 200–213.
[29] A. Iglesias, A. Gálvez, Applying functional networks to fit data points from B-spline surfaces, in: H.H.S. Ip, N. Magnenat-Thalmann, R.W.H. Lau, T.S. Chua (Eds.), Proceedings of the Computer Graphics International, CGI'2001, IEEE Computer Society Press, Los Alamitos, CA, 2001, pp. 329–332.
[30] P. Laurent, M. Mekhilef, Optimization of a NURBS representation, Comput. Aided Des. 25 (11) (1993) 699–710.
[31] C. Lim, G. Turkiyyah, M. Ganter, D. Storti, Implicit reconstruction of solids from cloud point sets, in: Proceedings of the 1995 ACM Symposium on Solid Modeling, Salt Lake City, Utah, 1995, pp. 393–402.
[32] J.C. Mason, M.G. Cox, Algorithms for Approximation, vol. II, Chapman & Hall, London, New York, 1990 (we especially recommend the good collection of references at the end of the book: Part Three, Catalogue of Algorithms).
[33] D. Meyers, S. Skinner, K. Sloan, Surfaces from contours, ACM Trans. Graph. 11 (3) (1992) 228–258.
[34] D. Meyers, Reconstruction of Surfaces from Planar Sections, Ph.D. Thesis, Department of Computer Science and Engineering, University of Washington, 1994.
[35] M. Milroy, C. Bradley, G. Vickers, D. Weir, G1 continuity of B-spline surface patches in reverse engineering, Comput. Aided Des. 27 (6) (1995) 471–478.
[36] H. Park, K. Kim, 3-D shape reconstruction from 2-D cross-sections, J. Des. Mng. 5 (1997) 171–185.
[37] H. Park, K. Kim, Smooth surface approximation to serial cross-sections, Comput. Aided Des. 28 (12) (1997) 995–1005.
[38] L. Piegl, W. Tiller, The NURBS Book, 2nd ed., Springer-Verlag, Berlin, Heidelberg, 1997.
[39] L. Piegl, W. Tiller, Algorithm for approximate NURBS skinning, Comput. Aided Des. 28 (9) (1997) 699–706.
[40] V. Pratt, Direct least-squares fitting of algebraic surfaces, Proc. SIGGRAPH'87, Comput. Graph. 21 (4) (1987) 145–152.
[41] F. Schmitt, B.A. Barsky, W. Du, An adaptive subdivision method for surface fitting from sampled data, Proc. SIGGRAPH'86, Comput. Graph. 20 (4) (1986) 179–188.
[42] S. Sclaroff, A. Pentland, Generalized implicit functions for computer graphics, Proc. SIGGRAPH'91, Comput. Graph. 25 (4) (1991) 247–250.
Andrés Iglesias is currently Associate Professor at the Department of Applied Mathematics and Computational Sciences of the University of Cantabria (Spain). He holds a B.Sc. degree in Mathematics (1992) and a Ph.D. in Applied Mathematics (1995). He has been the chairman and organizer of some international conferences in the fields of computer graphics, geometric modeling and symbolic computation, such as the CGGM (2002–2004), TSCG (2003–2004) and CASA (2003–2004) conference series. In addition, he has served as a program committee member and steering committee member in conferences such as ICCSA, GMAG, CGIV, 3IA, CyberWorlds, WSCG and ICICS. He is currently guest editor of four special issues of the journals Future Generation Computer Systems (FGCS) and International Journal of Image and Graphics (IJIG) on the topics of computer graphics, geometric modeling and symbolic computation. He is an ACM Siggraph and Eurographics member.
Gonzalo Echevarría is currently a Ph.D. student at the Department of Applied Mathematics and Computational Sciences of the University of Cantabria (Spain). He holds a B.Sc. degree in Industrial Engineering from the University of Cantabria (Spain), where he works as a Linux/Unix operating system expert. His Ph.D. research is focused on surface reconstruction. His fields of interest also include virtual reality, computer graphics and its applications for industrial uses.
Akemi Gálvez is currently a Ph.D. student at the Department of Applied Mathematics and Computational Sciences of the University of Cantabria (Spain). She holds a B.Sc. degree in Chemical Engineering from the National University of Trujillo (Peru) and an M.Sc. degree in Computational Sciences from the University of Cantabria (Spain). She has published several papers on geometric processing, surface reconstruction and symbolic computation and participated in national and international projects on geometric processing and its applications to the automotive industry. Her fields of interest also include chemical engineering, numerical/symbolic computation and industrial applications.