Approximation by shape preserving interpolation splines

Applied Numerical Mathematics 37 (2001) 271–288

A. Kouibia ∗, M. Pasadas

Department of Applied Mathematics, University of Granada, Severo Ochoa s/n, 18071 Granada, Spain

Abstract

In this paper we present a shape preserving method of interpolation for scattered data, subject to constraints such as convexity, monotonicity and positivity. We define a k-convex interpolation spline function in a Sobolev space, by minimizing a semi-norm of order k + 1, and we discretize it in the space of piecewise polynomial spline functions. The shape preserving condition that we consider here is the positivity of the derivative function of order k. We present an algorithm to compute the resulting function and we show its convergence. Some convergence theorems are established. The error is of order o((1/N)^{k+1}), where N is the number of Lagrangian data. Finally, we analyze some numerical and graphical examples. © 2001 IMACS. Published by Elsevier Science B.V. All rights reserved.

Keywords: Shape preserving interpolation; Spline

1. Introduction

In recent years there has been increasing interest in interpolation problems using convex splines. In many interpolation problems it is important that the solution preserve some shape properties such as positivity, monotonicity and convexity. Several approaches to this problem are being actively investigated. Classical methods, with polynomial spline functions being the most widely used, usually ignore these kinds of conditions and thus yield solutions exhibiting undesirable inflections or oscillations. This is the reason why many investigations in recent years have been directed towards interpolation by means of shape preserving polynomial spline functions. In early papers, Passow and Roulier [17] have investigated the possibility of obtaining spline interpolants which are shape preserving, Hornung [11] has shown that the smoothest convex interpolating function to convex data is a cubic spline, and Costantini [6] has dealt with the problem of monotone and/or convex splines, having degree n and order of continuity k, which interpolate a set of data at the knots. Likewise, Schumaker [20] has discussed the design of algorithms for interpolating discrete data using C¹ quadratic splines in such a way that the monotonicity and/or convexity of the data is preserved.

* Corresponding author.

E-mail addresses: [email protected] (A. Kouibia), [email protected] (M. Pasadas).

0168-9274/01/$ – see front matter © 2001 IMACS. Published by Elsevier Science B.V. All rights reserved.
PII: S0168-9274(00)00029-5

Utreras [22] has considered the problem of interpolating positive data by positive thin plate splines at scattered data points of the plane R². More recently, Dauner and Reinsch [7] have given algorithms for computing monotone splines based on the minimization principle. Since C² cubic polynomial splines interpolating convex and monotone data while preserving shape do not generally exist, Dupin and Fréville [9] have given simple sufficient conditions for the existence of such splines in the case of a geometric mesh. Likewise, Fiorot and Tabka [10] have used intermediate knots and the derivatives at the first and last knots, as free parameters, to show that the existence of shape preserving C² cubic splines solving the interpolation problem of [9] is equivalent to the existence of solutions of a linear tridiagonal system. Lahtinen [14] has considered a flexible construction of a shape preserving quadratic spline based on the work of Schumaker [20]. Schmidt and Heß [19] have considered convex interpolation with C² cubic splines on grids built by adding two knots in each subinterval of neighboring data sites. Andersson and Elfving [2] have applied a technique to the problem of interpolating data in a plane by a cubic spline function subject to obstacles, and have obtained a characterization result which has been used to develop a Newton type algorithm for the numerical solution. More recently, Manni and Sablonnière [16] have proposed a local solution for the problem of interpolating monotone data by monotone C² cubic splines with two additional knots per interval, on an arbitrary partition, and with an approximation order of 3, which extends the work of Archer and LeGruyer [4] and Pruess [18].

In this paper we are interested in methods which preserve the shape of the data. By this we mean that in each real interval I where the data are positive or monotone, the solution should have the same property.
Similarly, in each interval where the data are convex, the same should be true for the solution. Our problem can be formulated as follows: First, we define a k-convex interpolation spline function in the Sobolev space H^{k+1}(I), whose derivative function of order k is positive. Then, we study the existence, uniqueness and characterization results for this function. This function is obtained by minimizing the semi-norm of order k + 1 over an interpolation convex subset of H^{k+1}(I); several works use this minimization principle, see for example [1,7]. The additional shape preserving condition that we consider here is the positivity of the derivative of order k, which yields positivity when k = 0, monotonicity when k = 1 and convexity when k = 2. Generally, we do not have sufficient information about the k-convex interpolation spline function. This is why we proceed with the discretization of the problem in the space of piecewise polynomial spline functions of class k and degree ≤ 2k + 1, in order to define a k-convex discrete interpolation spline, by means of an algorithm which constructs a sequence of discrete interpolation spline functions, preserving certain conditions and converging to the k-convex discrete spline function in H^{k+1}(I). This algorithm is inspired by Laurent's algorithm [15], which several works have used (see, for example, [22,23]). For the discrete D^m-spline, modified D^m-spline and smoothing variational spline functions defined on I (see [3], [12] and [13], respectively), some convergence results have been obtained when the set of Lagrangian data becomes dense in I. However, this good behaviour does not guarantee anything about the shape of the approximating function, and in this case the interpolation spline function may well fail to preserve any shape even if the data do.
We have seen that it is suitable to consider some additional shape preserving conditions in order that the interpolation spline function preserve some shape. Moreover, we prove that the k-convex discrete interpolation spline function converges towards the k-convex interpolation spline function when the number n of knots tends to infinity, and that the

k-convex interpolation spline and the discrete interpolation spline converge towards the function f to be interpolated, whenever the number N of Lagrangian data tends to infinity and N, n tend to infinity, respectively. Convergence results are established in H^{k+1}(I). Finally, some errors are of order o((1/N)^{k+1}).

This paper is organized as follows. In Section 2, we briefly recall some preliminary notations and results, and we give the definition of the discrete interpolation spline in a space of piecewise polynomial spline functions. In Section 3, we define and characterize the notion of the k-convex interpolation spline in H^{k+1}(I). Section 4 is devoted to formulating the discretization of the problem of Section 3, defining the k-convex discrete interpolation spline, and describing an algorithm that allows us to compute this function. In Section 5, some convergence theorems are established and other error estimation results are given. In Section 6, we analyze some numerical and graphical examples following the approach of Utreras [22].

2. Notations and results

Let (a,b) be a real interval and k ∈ N. We denote by H^{k+1}(a,b) the usual Sobolev space of (classes of) functions u belonging to L²(a,b), together with all their derivatives u^{(β)}, in the distribution sense, of order β ≤ k + 1. This space is equipped with the usual norm ‖·‖_{k+1,(a,b)}, the semi-norm of order k + 1, |·|_{k+1,(a,b)}, and the corresponding inner semi-product of order k + 1, (·,·)_{k+1,(a,b)}. For all N_1 ∈ N we shall use the inner product ⟨·,·⟩ of R^{N_1}. Likewise, let X_m = {x_1,…,x_m} be a subset of distinct points of [a,b], with a ≤ x_1 < ··· < x_m ≤ b. We denote by





S^k_{2k+1}(X_m) = { v ∈ C^k[a,b] | v|_{[x_{i−1},x_i]} ∈ P_{2k+1}[x_{i−1},x_i], i = 2,…,m },

where P_{2k+1}[x_{i−1},x_i] is the restriction to [x_{i−1},x_i] of the space of real polynomials of degree ≤ 2k + 1. S^k_{2k+1}(X_m) is a Hilbert subspace of H^{k+1}(a,b), equipped with the same norm, semi-norm and inner semi-product as H^{k+1}(a,b).

Suppose we have an ordered finite set Σ of N_1 linear forms, defined on H^{k+1}(a,b), of type Φ : v ↦ v^{(ℓ)}(x), with ℓ ≤ k and x ∈ X_m, and the continuous linear operator ρ defined from H^{k+1}(a,b) into R^{N_1} by ρv = (Φ(v))_{Φ∈Σ}. We assume that the operator ρ satisfies

ker ρ ∩ P_k(a,b) = {0},   (1)

where P_k(a,b) is the space of all polynomials, defined on (a,b), with values in R, of degree ≤ k. Let y = (y_i)_{1≤i≤N_1} ∈ R^{N_1} and define the interpolation set

K = { v ∈ S^k_{2k+1}(X_m) | ρv = y }.

Now we consider the minimization problem: Find σ ∈ K such that

|σ|_{k+1,(a,b)} ≤ |v|_{k+1,(a,b)},  ∀v ∈ K.   (2)

Proposition 2.1. K is a non-empty closed convex subset of S^k_{2k+1}(X_m). Also, it is a linear affine variety associated with the vector subspace K^0 = { v ∈ S^k_{2k+1}(X_m) | ρv = 0 }.

Proof. Let Σ = {Φ_1,…,Φ_{N_1}} and let v_i ∈ S^k_{2k+1}(X_m) be the basis function associated with the degree of freedom Φ_i, for i = 1,…,N_1. Then u = Σ_{i=1}^{N_1} y_i v_i ∈ K, so K is not empty. Likewise, K is closed and convex since K = ρ^{−1}{y} and the operator ρ is linear and continuous. Finally, for each u, v ∈ K, u − v ∈ K^0. ✷

The proofs of the following results can be found, for an analogous case, in [3].

Lemma 2.2. Suppose that hypothesis (1) holds. Then the bilinear form defined on H^{k+1}(a,b) × H^{k+1}(a,b) by (((u,v))) = ⟨ρu, ρv⟩ + (u,v)_{k+1,(a,b)} is an inner product, whose associated norm [[v]] = (((v,v)))^{1/2} is equivalent to the norm ‖·‖_{k+1,(a,b)}.

Theorem 2.3. Problem (2) has a unique solution, called the discrete interpolation spline in the space S^k_{2k+1}(X_m) relative to ρ and y, which is also the unique solution of the following variational problem: Find σ ∈ K such that

(σ, v)_{k+1,(a,b)} = 0,  ∀v ∈ K^0.   (3)

Now, for all m ∈ N, let h_m = max_{i=2,…,m}(x_i − x_{i−1}) and suppose that

h_m = O(1/m),  m → +∞,   (4)

sup_{x∈(a,b)} min_{y∈X^m_L} |x − y| = O(1/N),  N → +∞,   (5)

where X^m_L is the subset of points of X_m associated with the Lagrangian data of Σ, i.e., for all a ∈ X^m_L there exists Φ ∈ Σ such that Φ(v) = v(a), for all v ∈ S^k_{2k+1}(X_m), and N is the cardinal of X^m_L.

Let k′ > k + 1, f ∈ H^{k′}(a,b), and let σ be the interpolation spline in S^k_{2k+1}(X_m) relative to ρ and ρf, which depends on m and N.

Theorem 2.4. Suppose that hypotheses (1), (4) and (5) hold. Then

lim_{m,N→+∞} ‖σ − f‖_{k+1,(a,b)} = 0.

3. Definition and characterization

For some applications it is desirable that the interpolation spline function satisfy, besides the interpolation conditions, some shape preserving constraints. To do this, let

K = { v ∈ H^{k+1}(a,b) | ρv = y, v^{(k)}(x) ≥ 0, ∀x ∈ (a,b) },

and consider the following problem: Find σ ∈ K such that

|σ|_{k+1,(a,b)} ≤ |v|_{k+1,(a,b)},  ∀v ∈ K.   (6)

Definition 3.1. The solution of problem (6), if it exists, will be called the k-convex interpolation spline relative to ρ and y.

Remark 3.1. We observe that if u is a solution of (6), then k = 0 implies that u is positive in (a,b), k = 1 implies that u is increasing in (a,b), and k = 2 implies that u is convex in (a,b).

Proposition 3.1. Suppose that K is not empty, N_1 = m ≥ k and Φ_i(v) = v(x_i), for all v ∈ H^{k+1}(a,b) and all i = 1,…,m. Then we have

Δ^k_i ≥ 0,  ∀i = 1,…,m − k,   (7)

where

Δ^0_i = y_i,  i = 1,…,m,
Δ^ℓ_i = (Δ^{ℓ−1}_{i+1} − Δ^{ℓ−1}_i)/(x_{i+ℓ} − x_i),  ℓ = 1,…,k, i = 1,…,m − ℓ.

Proof. Recall that the divided difference of u ∈ K at the points x_i,…,x_{i+ℓ} is defined by

u[x_i,…,x_{i+ℓ}] = (u[x_{i+1},…,x_{i+ℓ}] − u[x_i,…,x_{i+ℓ−1}])/(x_{i+ℓ} − x_i),

with u[x_i] = u(x_i) = y_i, for i = 1,…,m. Then we immediately deduce that u[x_i,…,x_{i+ℓ}] = Δ^ℓ_i. It is known that u[x_i,…,x_{i+ℓ}] = u^{(ℓ)}(ξ)/ℓ!, for a certain ξ ∈ (a,b), i = 1,…,m − ℓ and ℓ = 0,…,k, which implies for ℓ = k that Δ^k_i ≥ 0, for any i = 1,…,m − k. ✷

Remark 3.2. Proposition 3.1 indicates that (7) is a necessary condition for K ≠ ∅ when the data of Σ are Lagrangian. In this case (y_i)_{1≤i≤N_1} is called a k-convex vector.

Proposition 3.2. K is a closed convex subset of H^{k+1}(a,b).

Proof. Let (v_s)_{s∈N} be a sequence of functions of K converging towards v in H^{k+1}(a,b), i.e., lim_{s→+∞} ‖v_s − v‖_{k+1,(a,b)} = 0. Then we have lim_{s→+∞} Φ_i(v_s) = Φ_i(v), for all i = 1,…,N_1, and hence Φ_i(v) = y_i. Moreover, if there existed x ∈ (a,b) such that v^{(k)}(x) < 0, then, since convergence in H^{k+1}(a,b) implies convergence of the kth derivatives in C^0([a,b]), there would exist a certain s_0 ∈ N such that v_{s_0}^{(k)}(x) < 0, contradicting v_{s_0} ∈ K. Consequently, v^{(k)}(x) ≥ 0, for all x ∈ (a,b), which implies that v ∈ K and K is closed. The convexity of K follows from the linearity of the elements of Σ and of the kth derivative operator. ✷

Theorem 3.3. If K is non-empty, then problem (6) has a unique solution σ, which is also the unique solution of the problem: Find σ ∈ K such that

(σ, v − σ)_{k+1,(a,b)} ≥ 0,  ∀v ∈ K.   (8)

Proof. Let α be the continuous bilinear form defined from H^{k+1}(a,b) × H^{k+1}(a,b) into R by

α(u,v) = Σ_{i=1}^{N_1} Φ_i(u)Φ_i(v) + (u,v)_{k+1,(a,b)}.

Using Lemma 2.2, we deduce that α is the inner product (((·,·))) on H^{k+1}(a,b). Hence there exists C > 0 such that

α(u,u) ≥ C‖u‖²_{k+1,(a,b)},  ∀u ∈ H^{k+1}(a,b),

i.e., α is H^{k+1}(a,b)-elliptic. Likewise, let θ be the linear form defined on H^{k+1}(a,b) by θ(v) = 0, which is obviously continuous. Applying the Stampacchia theorem on the convex set K (see [5]), we deduce that there exists a unique σ ∈ K such that

α(σ, v − σ) ≥ θ(v − σ) = 0,  ∀v ∈ K. ✷
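The necessary condition (7) of Proposition 3.1 is easy to check numerically from the Lagrangian data alone. The sketch below is our illustration (not code from the paper): it computes the divided differences Δ^ℓ_i by the recurrence of Proposition 3.1 and tests their non-negativity.

```python
def divided_differences(x, y, k):
    """kth divided differences Delta^k_i of the data (x_i, y_i), via the
    recurrence Delta^l_i = (Delta^{l-1}_{i+1} - Delta^{l-1}_i)/(x_{i+l} - x_i)."""
    d = list(y)  # Delta^0_i = y_i
    for ell in range(1, k + 1):
        d = [(d[i + 1] - d[i]) / (x[i + ell] - x[i]) for i in range(len(d) - 1)]
    return d

def is_k_convex(x, y, k):
    """Necessary condition (7) for K to be non-empty: Delta^k_i >= 0 for all i."""
    return all(v >= 0.0 for v in divided_differences(x, y, k))

x = [-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0]
y = [t * t for t in x]   # samples of the convex function f(t) = t**2
# these data are 2-convex (convex) and 0-convex (positive),
# but not 1-convex (f is not monotone on (-3, 3))
```

For k = 0, 1, 2 this reproduces the positivity, monotonicity and convexity tests of Remark 3.1 at the data sites.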

Remark 3.3. The hypothesis K ≠ ∅ is obvious if the data (y_i)_{1≤i≤N_1} come from a k-convex function, i.e., y_i = Φ_i(f), for all i = 1,…,N_1, with f ∈ H^{k+1}(a,b) verifying f^{(k)}(x) ≥ 0, for all x ∈ (a,b).

Generally, we do not have sufficient information about the unique solution of (6), in case it exists; this is why we shall proceed to a discretization of the problem, by means of an algorithm which in practice computes the solution.

4. Algorithm

In order to discretize the problem, let n ∈ N with n > m, X_n ⊂ (a,b), X_m ⊂ X_n, and consider

K_n = { v ∈ S^k_{2k+1}(X_n) | ρv = y, v^{(k)}(x) ≥ 0, ∀x ∈ X_n },

and the following minimization problem: Find σ_n ∈ K_n such that

|σ_n|_{k+1,(a,b)} ≤ |v|_{k+1,(a,b)},  ∀v ∈ K_n.   (9)

Obviously, K_n is non-empty. Then, by reasoning as in Theorem 3.3, we observe that problem (9) has a unique solution, called the k-convex discrete interpolation spline, which is also characterized as the unique solution of the following problem: Find σ_n ∈ K_n such that

(σ_n, v − σ_n)_{k+1,(a,b)} ≥ 0,  ∀v ∈ K_n.   (10)

Now we shall describe an algorithm for the construction of a sequence of discrete interpolation spline functions converging towards σ_n.

Proposition 4.1. The semi-norm J(v) = |v|²_{k+1,(a,b)} is a strongly convex functional on K*, where K* = { v ∈ S^k_{2k+1}(X_n) | ρv = y }.

Proof. A differentiable map h is strongly convex on K* if and only if there exists a positive constant w such that

h(λu + (1 − λ)v) ≤ λh(u) + (1 − λ)h(v) − wλ(1 − λ)‖u − v‖²_{k+1,(a,b)},

for all u, v ∈ K* and all λ ∈ (0,1), or equivalently, if

h(u + x) ≥ h(u) + h′(u)x + w‖x‖²_{k+1,(a,b)},

for all u ∈ K* and all x ∈ K*_0, where K*_0 = { v ∈ S^k_{2k+1}(X_n) | ρv = 0 }.

Since J is differentiable, we denote by J′ its derivative in the Fréchet sense. Let v ∈ K* and x ∈ K*_0; we have

J(v + x) = |v + x|²_{k+1,(a,b)} = |v|²_{k+1,(a,b)} + 2(v, x)_{k+1,(a,b)} + |x|²_{k+1,(a,b)} = J(v) + J′(v)x + |x|²_{k+1,(a,b)}.

Next, since ρx = 0, we have

|x|²_{k+1,(a,b)} = ⟨ρx, ρx⟩ + |x|²_{k+1,(a,b)} = α(x, x),

where α is defined in the proof of Theorem 3.3. Then, since α is H^{k+1}(a,b)-elliptic, we deduce that there exists a constant w > 0 such that α(x, x) ≥ w‖x‖²_{k+1,(a,b)}. It follows that

J(v + x) ≥ J(v) + J′(v)x + w‖x‖²_{k+1,(a,b)},

which implies that J is strongly convex on K*. ✷



Now let us give the necessary assumptions. We denote by ϕ_x, for x ∈ X_n fixed, the linear form of (S^k_{2k+1}(X_n))* defined by ϕ_x(v) = v^{(k)}(x), for all v ∈ S^k_{2k+1}(X_n), where (S^k_{2k+1}(X_n))* denotes the dual space of S^k_{2k+1}(X_n). We define the set

K_x = { v ∈ S^k_{2k+1}(X_n) | v^{(k)}(x) ≥ 0 },

and the function g(v) = min_{x∈X_n} v^{(k)}(x). Then we have

K_n = K* ∩ ∩_{x∈X_n} K_x = { v ∈ S^k_{2k+1}(X_n) | ρv = y, g(v) ≥ 0 }.

The set {ϕ_x, x ∈ X_n} ⊂ (S^k_{2k+1}(X_n))* is bounded: |ϕ_x(v)| = |v^{(k)}(x)| < +∞, for all v ∈ S^k_{2k+1}(X_n), which implies that there exists a constant C > 0 such that ‖ϕ_x‖ ≤ C, for all x ∈ X_n, where ‖ϕ_x‖ denotes the norm of ϕ_x in (S^k_{2k+1}(X_n))*.

Let u ∈ K_x and v ∉ K_x. We have

−v^{(k)}(x) ≤ −ϕ_x(v) + ϕ_x(u) = ϕ_x(u − v).

Then we obtain

−v^{(k)}(x) ≤ ‖ϕ_x‖ inf_{u∈K_x} ‖u − v‖_{k+1,(a,b)} = ‖ϕ_x‖ d(v, K_x),

and it follows that

−v^{(k)}(x) ≤ C d(v, K_x),   (11)

which implies that the function g(v) is finite for every v ∈ K*, and hence continuous.

After introducing the necessary assumptions, let us describe the algorithm.

Iteration 0. Let σ_0 = σ, where σ is the discrete interpolation spline in S^k_{2k+1}(X_n) relative to ρ and y. If σ_0 belongs to K_n, then it is the solution of the problem.

Iteration 1. If σ_0 does not belong to K_n, then there exists x^0 ∈ X_n such that

σ_0 ∉ K_{x^0},  ϕ_{x^0}(σ_0) = g(σ_0).

Let σ_1 be the minimum of J(v), for v ∈ D^1 = { v ∈ S^k_{2k+1}(X_n) | ρv = y, v^{(k)}(x^0) ≥ 0 }. It is clear that σ_1^{(k)}(x^0) = 0.

We observe that the problem is reduced to one with equality constraints. By this we mean that J(σ_1) is the minimum of J(v), for v ∈ K* ∩ B(D^1) = { v ∈ S^k_{2k+1}(X_n) | ρv = y, v^{(k)}(x^0) = 0 }.

If σ_1 belongs to K_n, then it is the solution of the problem.

Iteration 2. If σ_1 does not belong to K_n, then there exists x^1 ∈ X_n such that

σ_1 ∉ K_{x^1},  ϕ_{x^1}(σ_1) = g(σ_1).

Let σ_2 be the minimum of J(v), for v ∈ K* ∩ K_{x^1} ∩ K_{x^0}. This problem can be reduced to two steps: first minimize J(v), for v ∈ K* ∩ B(K_{x^1}); if the solution belongs to K_{x^0}, it is σ_2. Otherwise, σ_2 is obtained by minimizing J(v), for v ∈ K* ∩ B(K_{x^1}) ∩ B(K_{x^0}). Using the Kuhn–Tucker theorem for the characterization of σ_2, there exist coefficients µ²_i, i = 1,…,N_1, and non-negative coefficients ν²_0 and ν²_1 such that

J′(σ_2)v = Σ_{i=1}^{N_1} µ²_i Φ_i(v) + ν²_0 v^{(k)}(x^0) + ν²_1 v^{(k)}(x^1),  ∀v ∈ S^k_{2k+1}(X_n),
ν²_0 σ_2^{(k)}(x^0) = 0,
ν²_1 σ_2^{(k)}(x^1) = 0,

where Σ = {Φ_1,…,Φ_{N_1}}. Then we deduce that

ν²_i = 2(σ_2, w_i)_{k+1,(a,b)},  i = 0, 1,

where w_i is the basis function of S^k_{2k+1}(X_n) associated with the degree of freedom v ↦ v^{(k)}(x^i), i = 0, 1. We consider the semispace















D² = { v ∈ S^k_{2k+1}(X_n) | ν²_0 v^{(k)}(x^0) + ν²_1 v^{(k)}(x^1) ≥ 0 }.

Obviously, D² ∩ K* contains K_n and σ_2 ∈ K* ∩ B(D²). Using again the Kuhn–Tucker theorem, J attains a minimum in K* ∩ D² at σ_2 ∈ K* ∩ B(D²) = { v ∈ S^k_{2k+1}(X_n) | ρv = y, ν²_0 v^{(k)}(x^0) + ν²_1 v^{(k)}(x^1) = 0 }.

Iteration r + 1. We suppose that, at the rth iteration, we have r points x^i and non-negative coefficients ν^r_i, i = 0,…,r − 1, and the semispace

D^r = { v ∈ S^k_{2k+1}(X_n) | Σ_{i=0}^{r−1} ν^r_i v^{(k)}(x^i) ≥ 0 },

such that the minimum of J(v), for v ∈ K* ∩ D^r, is σ_r ∈ K* ∩ B(D^r). If σ_r belongs to K_n, it is the solution of the problem.

If σ_r does not belong to K_n, then there exists x^r ∈ X_n such that

σ_r ∉ K_{x^r},  ϕ_{x^r}(σ_r) = g(σ_r).   (12)

Let σ_{r+1} be the minimum of J(v), for v ∈ K* ∩ K_{x^r} ∩ D^r. By reasoning as in Iteration 2, we observe that σ_{r+1} is the minimum of J(v), for v ∈ K* ∩ B(K_{x^r}) ∩ B(D^r). Using again the Kuhn–Tucker theorem for the characterization of σ_{r+1}, we see that there exist coefficients µ^{r+1}_i, i = 1,…,N_1, and non-negative coefficients η^{r+1} and θ^{r+1} such that

J′(σ_{r+1})v = Σ_{i=1}^{N_1} µ^{r+1}_i Φ_i(v) + η^{r+1} v^{(k)}(x^r) + θ^{r+1} Σ_{i=0}^{r−1} ν^r_i v^{(k)}(x^i),  ∀v ∈ S^k_{2k+1}(X_n),
η^{r+1} σ_{r+1}^{(k)}(x^r) = 0,
θ^{r+1} Σ_{i=0}^{r−1} ν^r_i σ_{r+1}^{(k)}(x^i) = 0.

It follows that

η^{r+1} = 2(σ_{r+1}, w_r)_{k+1,(a,b)},
θ^{r+1} = (1/ν^r_j) 2(σ_{r+1}, w_j)_{k+1,(a,b)},

where j ∈ {0,…,r − 1} is such that ν^r_j ≠ 0 and, for i = 0,…,r, w_i is the basis function of S^k_{2k+1}(X_n) associated with the degree of freedom v ↦ v^{(k)}(x^i). Now we put

ν^{r+1}_j = θ^{r+1} ν^r_j,  j = 0,…,r − 1,
ν^{r+1}_r = η^{r+1},

and we consider the semispace

D^{r+1} = { v ∈ S^k_{2k+1}(X_n) | Σ_{i=0}^{r} ν^{r+1}_i v^{(k)}(x^i) ≥ 0 }.

The previous conditions imply that

J′(σ_{r+1})v = Σ_{i=1}^{N_1} µ^{r+1}_i Φ_i(v) + Σ_{i=0}^{r} ν^{r+1}_i v^{(k)}(x^i),  ∀v ∈ S^k_{2k+1}(X_n),
Σ_{i=0}^{r} ν^{r+1}_i σ_{r+1}^{(k)}(x^i) = 0.

Thus, using the Kuhn–Tucker theorem, we deduce that J attains a minimum in K* ∩ D^{r+1} at

σ_{r+1} ∈ K* ∩ B(D^{r+1}) = { v ∈ S^k_{2k+1}(X_n) | ρv = y, Σ_{i=0}^{r} ν^{r+1}_i v^{(k)}(x^i) = 0 }.
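The iterations above are an active-set strategy: minimize subject to the equality constraints accumulated so far, locate the point x^r where the kth-derivative positivity is most violated, activate it, and re-solve. The following pure-Python sketch is our illustration only, not the paper's spline setting: it runs the loop on a finite-dimensional analogue with k = 0, replacing the semi-norm by a sum of squared second differences of a vector z ∈ R^n, keeping every activated constraint as an equality instead of aggregating them into the semispace D^r, and solving each equality-constrained problem through its KKT system by Gaussian elimination. The names `solve`, `min_energy` and `shape_preserving_interpolant` are ours.

```python
def solve(A, b):
    """Solve the linear system A x = b by Gaussian elimination with
    partial pivoting (enough for the small KKT systems below)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def min_energy(n, constraints):
    """Minimize J(z) = sum_i (z[i+1] - 2 z[i] + z[i-1])**2 over z in R^n
    subject to z[i] = v for each (i, v) in constraints, via the KKT system."""
    m = len(constraints)
    A = [[0.0] * (n + m) for _ in range(n + m)]
    b = [0.0] * (n + m)
    for i in range(1, n - 1):                  # Hessian 2 B^T B of J
        stencil = ((i - 1, 1.0), (i, -2.0), (i + 1, 1.0))
        for p, cp in stencil:
            for q, cq in stencil:
                A[p][q] += 2.0 * cp * cq
    for j, (i, v) in enumerate(constraints):   # constraint rows and columns
        A[n + j][i] = A[i][n + j] = 1.0
        b[n + j] = v
    return solve(A, b)[:n]

def shape_preserving_interpolant(n, data, tol=1e-10):
    """Active-set loop in the spirit of Section 4 (k = 0): starting from the
    unconstrained interpolant, repeatedly pin the most violated positivity
    constraint to zero and re-solve, until z >= 0 on the whole grid."""
    active = []
    while True:
        z = min_energy(n, data + active)
        worst = min(range(n), key=lambda i: z[i])
        if z[worst] >= -tol:
            return z
        active.append((worst, 0.0))

# Data 3, 0.1, 0.1, 3 at indices 0, 1, 3, 4: the unconstrained minimizer
# dips below zero at index 2; one added constraint restores positivity.
z = shape_preserving_interpolant(5, [(0, 3.0), (1, 0.1), (3, 0.1), (4, 3.0)])
```

In this toy setting a single activation suffices; in general the loop mirrors Iterations 0, 1, 2, …, r + 1 above.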

All we need now is for the algorithm to converge.

Theorem 4.2. The sequence (σ_r)_{r∈N} satisfies

lim_{r→+∞} J(σ_r) = J(σ_n),  lim inf_{r→+∞} g(σ_r) ≥ 0,

and σ_r converges strongly towards σ_n in H^{k+1}(a,b).

Proof. Using Proposition 4.1, (11) and (12), the result can be deduced without difficulty by reasoning as in the proof of the Theorem of Section IV of [15]. ✷

5. Convergence and error estimate

Let f ∈ H^{k+1}(a,b) be such that f^{(k)}(x) ≥ 0, for all x ∈ (a,b). Under adequate hypotheses, we shall show in H^{k+1}(a,b) that σ_n converges to σ, and that σ_n and σ converge to f, respectively. Now, for all n ∈ N, let h_n = max_{i=2,…,n}(x_i − x_{i−1}) and suppose that

h_n = O(1/n),  n → +∞.   (13)

Likewise, we are going to use hypothesis (5) with X_n instead of X_m.

Theorem 5.1. Suppose that hypotheses (1) and (13) hold. Then we have

lim_{n→+∞} ‖σ_n − σ‖_{k+1,(a,b)} = 0.

Proof. Applying Corollary 5.1 of [3], we deduce that there exists a family (v_n)_{n∈N} such that, for all n ∈ N, v_n ∈ S^k_{2k+1}(X_n), ρv_n = ρσ, v_n^{(k)}(x) = σ^{(k)}(x) ≥ 0, for all x ∈ X_n (v_n can be taken as the Hermite interpolant of σ in S^k_{2k+1}(X_n), for all n ∈ N), and

lim_{n→+∞} ‖v_n − σ‖_{k+1,(a,b)} = 0.

By using Lemma 2.2, it is verified that

lim_{n→+∞} [[v_n − σ]] = 0,

hence

[[v_n]] ≤ [[σ]] + o(1),  n → +∞.   (14)

Now let n ∈ N. We have v_n ∈ K_n, so J(σ_n) ≤ J(v_n); taking into account the definition of J and of the norm [[·]], we obtain

[[σ_n]] ≤ [[v_n]],  ∀n ∈ N.   (15)

From (14), (15) and Lemma 2.2, we deduce that the family (σ_n)_{n∈N} is bounded in H^{k+1}(a,b). Then there exist a subsequence (σ_{n_ℓ})_{ℓ∈N}, extracted from this family, with lim_{ℓ→+∞} n_ℓ = +∞, and an element σ̃ ∈ H^{k+1}(a,b) such that

σ_{n_ℓ} converges weakly towards σ̃ in H^{k+1}(a,b), when ℓ → +∞.   (16)

Then, from (16) we have

lim_{ℓ→+∞} (((σ_{n_ℓ}, σ̃))) = [[σ̃]]².   (17)

Likewise, as H^{k+1}(a,b) is compactly injected in C^k([a,b]), we have

∀Φ ∈ Σ,  lim_{ℓ→+∞} |Φ(σ_{n_ℓ}) − Φ(σ̃)| = 0,

and, taking into account that ρσ_{n_ℓ} = ρσ, for all ℓ ∈ N, we have that ρσ̃ = ρσ.

Let x ∈ (a,b). From (13), for each ℓ ∈ N there exists a_x^ℓ ∈ X_{n_ℓ} such that |a_x^ℓ − x| ≤ h_{n_ℓ}, and by the injection of H^{k+1}(a,b) into C^k([a,b]) there exists C > 0, independent of ℓ and x, such that

|σ̃^{(k)}(a_x^ℓ) − σ̃^{(k)}(x)| ≤ C h_{n_ℓ},

hence

σ̃^{(k)}(x) = σ_{n_ℓ}^{(k)}(a_x^ℓ) − (σ_{n_ℓ}^{(k)}(a_x^ℓ) − σ̃^{(k)}(a_x^ℓ)) − (σ̃^{(k)}(a_x^ℓ) − σ̃^{(k)}(x)) ≥ σ_{n_ℓ}^{(k)}(a_x^ℓ) − |σ_{n_ℓ}^{(k)}(a_x^ℓ) − σ̃^{(k)}(a_x^ℓ)| − C h_{n_ℓ}.

As σ_{n_ℓ}^{(k)}(a_x^ℓ) ≥ 0, letting ℓ → +∞ we deduce that

σ̃^{(k)}(x) ≥ 0,  ∀x ∈ (a,b).

So σ̃ ∈ K and hence J(σ) ≤ J(σ̃). Thus,

[[σ]] ≤ [[σ̃]].   (18)

From (15) we have that

[[σ_{n_ℓ} − σ̃]]² = [[σ_{n_ℓ}]]² + [[σ̃]]² − 2(((σ_{n_ℓ}, σ̃))) ≤ [[v_{n_ℓ}]]² + [[σ̃]]² − 2(((σ_{n_ℓ}, σ̃))).

Applying (14), (17) and (18), we deduce that

lim_{ℓ→+∞} [[σ_{n_ℓ} − σ̃]] = 0,

i.e., σ_{n_ℓ} converges to σ̃ in H^{k+1}(a,b). From (15) we deduce that J(σ_{n_ℓ}) ≤ J(v_{n_ℓ}), for all ℓ ∈ N, and hence J(σ̃) ≤ J(σ). Together with (18), it follows that σ̃ = σ, hence

lim_{ℓ→+∞} ‖σ_{n_ℓ} − σ‖_{k+1,(a,b)} = 0.

Finally, if ‖σ_n − σ‖_{k+1,(a,b)} did not tend to 0 as n → +∞, there would exist η > 0 and a subsequence (σ_{n′_ℓ})_{ℓ∈N} such that

‖σ_{n′_ℓ} − σ‖_{k+1,(a,b)} > η,  ∀ℓ ∈ N.   (19)

But by reasoning analogously as we did for the family (σ_n)_{n∈N}, from this subsequence one could extract a further subsequence converging towards σ, which contradicts (19). ✷

Theorem 5.2. Suppose that hypotheses (1) and (5) hold. If f ∈ H^{k′}(a,b), with k′ > k + 1, then

lim_{N→+∞} ‖σ − f‖_{k+1,(a,b)} = 0.

Proof. First, we have f − σ ∈ K_0 = { v ∈ H^{k+1}(a,b) | ρv = 0 }, which implies that (σ_N, f − σ)_{k+1,(a,b)} = 0, where σ_N is the interpolation D^m-spline function relative to ρ and ρf (see [3]). Next, from (8) we have (σ, f − σ)_{k+1,(a,b)} ≥ 0. It follows from both previous relations that (σ − σ_N, f − σ)_{k+1,(a,b)} ≥ 0. Hence,

|σ − f|²_{k+1,(a,b)} ≤ (σ_N − f, σ − f)_{k+1,(a,b)} ≤ |σ_N − f|_{k+1,(a,b)} |σ − f|_{k+1,(a,b)},

which implies that |σ − f|_{k+1,(a,b)} ≤ |σ_N − f|_{k+1,(a,b)}. By using Lemma 2.2 and the fact that ρf = ρσ = ρσ_N, we deduce that there exists C > 0 such that

‖σ − f‖_{k+1,(a,b)} ≤ C‖σ_N − f‖_{k+1,(a,b)}.

We obtain the result since σ_N converges to f when N → +∞ (see again [3]). ✷





Theorem 5.3. Suppose that hypotheses (1), (5) and (13) hold. If f ∈ H^{k′}(a,b), with k′ > k + 1, then

lim_{n,N→+∞} ‖σ_n − f‖_{k+1,(a,b)} = 0.

Proof. Applying Corollary 5.1 of [3], we deduce that there exists a family (f_n)_{n∈N} such that, for all n ∈ N, f_n ∈ S^k_{2k+1}(X_n), ρf_n = ρf, f_n^{(k)}(x) = f^{(k)}(x) ≥ 0, for all x ∈ X_n, and

lim_{n→+∞} ‖f_n − f‖_{k+1,(a,b)} = 0.   (20)

First, we have f_n − σ_n ∈ K^0. Then, from (3) we obtain (σ, f_n − σ_n)_{k+1,(a,b)} = 0. Next, from (10) we have (σ_n, f_n − σ_n)_{k+1,(a,b)} ≥ 0. It follows from both previous relations that (σ_n − σ, f_n − σ_n)_{k+1,(a,b)} ≥ 0. Hence, taking into account that ρf = ρf_n = ρσ = ρσ_n = y and Lemma 2.2, we deduce that there exists C > 0 such that

‖σ_n − f_n‖_{k+1,(a,b)} ≤ C‖f_n − σ‖_{k+1,(a,b)},

which implies that there exists C > 0 such that

‖σ_n − f‖_{k+1,(a,b)} ≤ C(‖f_n − f‖_{k+1,(a,b)} + ‖f − σ‖_{k+1,(a,b)}).

Finally, using Theorem 2.4 and (20), we deduce that σ_n converges towards f when both n and N tend to infinity. ✷

Now we shall give an estimate of the error of approximation of f by σ_n or σ, when both n and N tend to infinity. To do this we give the necessary result.

Theorem 5.4. Suppose that hypotheses (1) and (5) hold. Then there exist positive constants R, λ_0 and C such that, for each N ∈ N with 1/N ≤ λ_0/R, and each n ∈ N, we have

∀ℓ = 0,…,k+1,  |σ_n − f|_{ℓ,(a,b)} ≤ C (1/N)^{k+1−ℓ} |σ_n − f|_{k+1,(a,b)}.

Proof. From Proposition 1 of [8], there exist M > 1 and λ_0 > 0 such that, for all λ ∈ (0, λ_0], there exists T_λ ⊂ (a,b) verifying

∀t ∈ T_λ,  [t − λ, t + λ] ⊂ (a,b)  and  (a,b) ⊂ ∪_{t∈T_λ} [t − Mλ, t + Mλ].

Let s = σ_n − f and let R be the constant of Proposition 2 of [8]. If λ = R/N (then 1/N ≤ λ_0/R) we have

∀ℓ = 0,…,k+1,  |s|²_{ℓ,(a,b)} ≤ |s̃|²_{ℓ,∪_{t∈T_{R/N}}(t−MR/N, t+MR/N)},

where s̃ = C_{k+1}s and C_{k+1} is the extension operator explicitly given in the Lemma of Section 4 of [21]. Hence,

∀ℓ = 0,…,k+1,  |s|²_{ℓ,(a,b)} ≤ Σ_{t∈T_{R/N}} |s̃|²_{ℓ,(t−MR/N, t+MR/N)}.   (21)

As s̃ ∈ H^{k+1}(t − MR/N, t + MR/N) and, from (5), it vanishes in at least one point of an interval of length 2/N contained in [t − R/N, t + R/N], it follows from Proposition 2 of [8] that there exists a constant C > 0 depending only on M, k + 1 and ℓ such that

∀ℓ = 0,…,k+1,  |s̃|_{ℓ,(t−MR/N, t+MR/N)} ≤ C (1/N)^{k+1−ℓ} |s̃|_{k+1,(t−MR/N, t+MR/N)},

and from (21) we have that

∀ℓ = 0,…,k+1,  |s|²_{ℓ,(a,b)} ≤ C² (1/N)^{2(k+1−ℓ)} Σ_{t∈T_{R/N}} |s̃|²_{k+1,(t−MR/N, t+MR/N)}.   (22)

By using item 3 of Proposition 1 of [8] we deduce that there exists M_1 > 1 such that

Σ_{t∈T_{R/N}} |s̃|²_{k+1,(t−MR/N, t+MR/N)} ≤ M_1 |s̃|²_{k+1,R},

and it follows from (22) that there exists C > 0 such that

∀ℓ = 0,…,k+1,  |s|²_{ℓ,(a,b)} ≤ C² M_1 (1/N)^{2(k+1−ℓ)} |s̃|²_{k+1,R}.   (23)

But as s̃ = C_{k+1}s, by using again the Lemma of Section 4 of [21] and (23), we deduce that there exists C > 0 such that

∀ℓ = 0,…,k+1,  |s|_{ℓ,(a,b)} ≤ C (1/N)^{k+1−ℓ} |s|_{k+1,(a,b)}. ✷

Remark 5.4. In the proof of the previous theorem, if we put s = σ − f, we obtain

∀ℓ = 0,…,k+1,  |σ − f|_{ℓ,(a,b)} ≤ C (1/N)^{k+1−ℓ} |σ − f|_{k+1,(a,b)}.

Corollary 5.5. Under the hypotheses of Theorem 5.2, we have

|σ − f|_{ℓ,(a,b)} = o((1/N)^{k+1−ℓ}),  when N → +∞.

Proof. It is obvious by using Theorems 5.2 and 5.4. ✷

Corollary 5.6. Under the hypotheses of Theorem 5.3, we have

|σ_n − f|_{ℓ,(a,b)} = o((1/N)^{k+1−ℓ}),  when N and n → +∞.

Proof. It is obvious by using Theorems 5.3 and 5.4. ✷
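The o((1/N)^{k+1−ℓ}) rates above are shape-constrained analogues of classical interpolation error rates. As a quick numerical sanity check of the exponent only (an illustration of ours, using plain piecewise linear interpolation with k + 1 = 2, not the splines of this paper), halving the data spacing should divide the maximum error by about four:

```python
import math

def pl_interp_error(f, a, b, N):
    """Maximum error of piecewise linear interpolation of f at N + 1
    equispaced points, estimated on a fine sampling grid."""
    xs = [a + (b - a) * i / N for i in range(N + 1)]
    err = 0.0
    fine = 20 * N
    for j in range(fine + 1):
        x = a + (b - a) * j / fine
        i = min(int((x - a) / (b - a) * N), N - 1)
        t = (x - xs[i]) / (xs[i + 1] - xs[i])
        p = (1 - t) * f(xs[i]) + t * f(xs[i + 1])
        err = max(err, abs(p - f(x)))
    return err

e1 = pl_interp_error(math.sin, -3.0, 3.0, 16)
e2 = pl_interp_error(math.sin, -3.0, 3.0, 32)
# halving the spacing divides the error by about 2**2 = 4
```

The observed ratio e1/e2 ≈ 4 matches the exponent k + 1 − ℓ = 2 for ℓ = 0 in this smoothness class.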



6. Numerical and graphical examples

In order to test the validity and effectiveness of the method, we have chosen a set of test functions defined in (a,b) = (−3,3), and we have generated random points x_1,…,x_m. In each case, we have tested the algorithm for constructing positive, monotone and convex spline interpolants, respectively, using as stopping criterion σ_r^{(k)}(x^r) ≥ −γ, where k = 0, 1, 2 and γ is a small positive number. While testing the algorithm we have used very small values of γ, but for practical purposes a good choice is γ^0, given by

γ^0 = −(1/10^s) σ_0^{(k)}(x^0),  with s ∈ N,

where x^0 is such that σ_0^{(k)}(x^0) is minimum, as in the algorithm. Indeed, the use of γ less than γ^0 does not produce, in general, any visual effect. But of course, if better precision is needed, then another choice of γ should be made (for more details on stopping criteria see [22]). Moreover, for each example we give a table with the number of iterations required to attain several precisions depending on the points. Graphically, in each case we plot a particular region in order to show the improvement of the interpolating spline function preserving some shape with respect to the interpolating spline function defined in Section 2, i.e., without preserving any shape.

Example 1. In the first example, we consider the set of positive data values given by (x_i, y_i)_{1≤i≤5}, where y_i = f(x_i) and f(x) = x⁴ − 3x² + 9/4, which is positive in (a,b). We impose positivity and require that the interpolating spline σ_5 satisfy the same property. The graph of this interpolant and a piece of it on a particular zone are reported in Fig. 2. In this case, x^0 = 0.322499 and, for 5 points, 5 iterations are required with γ = γ^0/10, where γ^0 is given for s = k = 0.
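In code, the tolerance rule above is straightforward. The sketch below is our illustration only: `sigma0_min` stands for σ_0^{(k)}(x^0), the most negative value of the kth derivative of the unconstrained interpolant on X_n, as produced by the algorithm of Section 4, and the numeric value used here is hypothetical.

```python
def gamma0(sigma0_min, s=0):
    """Practical tolerance of Section 6: gamma^0 = -(1/10**s) * sigma_0^{(k)}(x^0).
    sigma0_min is negative whenever the unconstrained spline sigma_0 lies
    outside K_n, so gamma^0 is a small positive number."""
    return -sigma0_min / 10 ** s

def stopped(sigma_r_min, gamma):
    """Stopping criterion: the worst violation of kth-derivative positivity
    on X_n is within the tolerance gamma."""
    return sigma_r_min >= -gamma

# Example 1 uses gamma = gamma^0 / 10 with s = k = 0:
g = gamma0(-0.5, s=0) / 10   # hypothetical sigma0_min = -0.5 gives g = 0.05
```

Decreasing γ below γ^0 tightens the loop (more iterations, as in Tables 1 and 2) without visible effect on the plots.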

A. Kouibia, M. Pasadas / Applied Numerical Mathematics 37 (2001) 271–288

285

Table 1
Example 1. Number of iterations as a function of the points and γ0

Points    γ0    γ0/10    γ0/100    γ0/1000
5          3      5         7         11
6          3      3         3          5
7          1      1         1          1

Fig. 1. Example 1. The graph of σ and a piece of it.

Fig. 2. Example 1. The graph of σ5 and a piece of it.

The plots in Fig. 1 represent the interpolation spline σ of class 0 in the space S1^0(X5), interpolating the set of data in (−3, 3) given above, and the graph of this function on the same selected zone as in Fig. 2; they show that this approximating function does not preserve positivity. We have chosen the same particular zone in Figs. 1 and 2 in order to see clearly the difference between the interpolating spline and the interpolating spline preserving positivity. Consequently, from the plots of the same particular zone in Figs. 1 and 2, we observe the improvement achieved by the method with respect to the problem of preserving positivity.

Example 2. In the second example, we consider the set of increasing data values (xi, yi), 1 ≤ i ≤ 6, where yi = f(xi) and f(x) = (1 + 2 exp{3(x + 6.7)})^(1/2), which is increasing in (a, b). We impose monotonicity and require that the interpolating spline σ4 ∈ S3^1(X6) satisfy the same property. The graph of the first derivative of this interpolant in (a, b) and a piece of it on a particular zone are reported in Fig. 5. The plots in Fig. 3 represent the interpolation spline σ of class 1 in S3^1(X6) and the interpolating function σ4 preserving monotonicity. In this case, x^0 = 0.618608 and, for 6 points, 4 iterations are required with γ = γ0, where γ0 is given by s = 0 and k = 1.
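For comparison with Example 2, a classical monotonicity-preserving interpolant (Fritsch–Carlson PCHIP, a different method from the paper's variational spline) keeps the first derivative nonnegative on increasing data. A minimal numerical check:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Example 2's shape constraint illustrated with an off-the-shelf monotone
# interpolant (NOT the paper's method): interpolate increasing data and
# verify that the first derivative stays nonnegative on (-3, 3).
f = lambda x: np.sqrt(1.0 + 2.0 * np.exp(3.0 * (x + 6.7)))
xi = np.linspace(-3.0, 3.0, 6)          # equally spaced stand-in for random points
t = np.linspace(-3.0, 3.0, 2001)

pchip = PchipInterpolator(xi, f(xi))    # Fritsch-Carlson: monotonicity-preserving
min_dp = np.min(pchip.derivative()(t))
print(f"min PCHIP derivative: {min_dp:.4g}")
```

By construction the PCHIP derivative is nonnegative between increasing data values, which is exactly the property σ4 is required to satisfy in Fig. 5.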


Table 2
Example 2. Number of iterations as a function of the points and γ0

Points    γ0    γ0/10    γ0/100    γ0/1000
5          4      7         7          7
6          4     11        32         96
8          1      1         1          1

Fig. 3. Example 2. The graphs of σ and σ4 .

Fig. 4. Example 2. The graph of the first derivative of σ and a piece of it.

Fig. 5. Example 2. The graph of the first derivative of σ4 and a piece of it.

The plots in Fig. 4 represent the first derivative of the interpolation spline σ of class 1 in the space S3^1(X6), interpolating the set of data in (−3, 3) given above, and the graph of this function on the same selected zone as in Fig. 5; they show that this approximating function does not preserve monotonicity. Consequently, from the plots of the same particular zone in Figs. 4 and 5, we observe the improvement achieved by the method with respect to the problem of preserving monotonicity.

Example 3. We consider 5 random convex data values (xi, yi), 1 ≤ i ≤ 5, where yi = f(xi) and f(x) = (1 + 2 exp{2(x + 6.7)})^(1/2), which is convex in (a, b). As in both previous cases, the same analysis can be done. The graph of the second derivative of the interpolating spline σ6 in (−3, 3) and a piece of it


Table 3
Example 3. Number of iterations as a function of the points and γ0

Points    γ0    γ0/10    γ0/100    γ0/1000
5          2      6        14         34
6          1      1         1          1

Fig. 6. Example 3. The graphs of σ and σ6 .

Fig. 7. Example 3. The graph of the second derivative of σ and a piece of it.

Fig. 8. Example 3. The graph of the second derivative of σ6 and a piece of it.

on a particular zone are reported in Fig. 8. In this case, x^0 = 0.382423 and, for 5 points, 6 iterations are required with γ = γ0/10, where γ0 is given by s = 0 and k = 2. The plots in Fig. 6 represent the interpolation spline σ of class 2 in the space S5^2(X5), interpolating the set of data in (−3, 3) given above, and the interpolating function σ6 preserving convexity. The plots in Fig. 7 represent the second derivative of the interpolation spline σ, and the graph of this function on the same selected zone as in Fig. 8; they show that this approximating function does not preserve convexity. Consequently, from the plots of the same particular zone in Figs. 7 and 8, we observe the improvement achieved by the method with respect to the problem of preserving convexity.
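The convexity check in Example 3 amounts to locating the minimum of the second derivative (the x^0 of the algorithm). A generic sketch, using an ordinary cubic spline rather than the paper's σ and a hypothetical helper `min_second_derivative`:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Example 3's shape test: the second derivative of an unconstrained
# interpolant of convex data may change sign (cf. Fig. 7).  A convexity
# check samples s'' on a fine grid and locates its minimum.
def min_second_derivative(spline, a, b, m=2001):
    t = np.linspace(a, b, m)
    d2 = spline(t, 2)            # evaluate the second derivative (nu=2)
    i = np.argmin(d2)
    return t[i], d2[i]           # x^0 and s''(x^0)

f = lambda x: np.sqrt(1.0 + 2.0 * np.exp(2.0 * (x + 6.7)))
xi = np.linspace(-3.0, 3.0, 5)   # equally spaced stand-in for random points
s = CubicSpline(xi, f(xi))
x0, d2min = min_second_derivative(s, -3.0, 3.0)
print(f"min s'' = {d2min:.4g} at x^0 = {x0:.4g}")

# Sanity check: a parabola is reproduced exactly, so s'' is identically 2.
q = CubicSpline(xi, xi**2)
_, d2q = min_second_derivative(q, -3.0, 3.0)
print(f"parabola check: min s'' = {d2q:.4g}")
```

If `d2min` is negative, the interpolant is not convex on (−3, 3), and an algorithm such as the paper's would iterate until min σ'' ≥ −γ.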


Acknowledgements

We thank Prof. P.J. Laurent for his helpful corrections. The work of the second author has been supported by the Junta de Andalucía (Research group FQM/191).

References

[1] L.-E. Andersson, T. Elfving, Interpolation and approximation by monotone cubic splines, J. Approx. Theory 66 (1991) 302–333.
[2] L.-E. Andersson, T. Elfving, Best constrained approximation in Hilbert space and interpolation by cubic splines subject to obstacles, SIAM J. Sci. Comput. 16 (5) (1995) 1209–1232.
[3] R. Arcangéli, D^m-splines sur un domaine borné de R^n, Publication UA 1204 CNRS No. 86/2 (1986).
[4] J.C. Archer, E. LeGruyer, Two shape preserving Lagrange C^2-interpolants, Numer. Math. 64 (1993) 1–11.
[5] H. Brézis, Analyse Fonctionnelle. Théorie et Applications, Masson, Paris, 1983.
[6] P. Costantini, On monotone and convex spline interpolation, Math. Comp. 46 (1986) 203–214.
[7] H. Dauner, C.H. Reinsch, An analysis of two algorithms for shape-preserving cubic spline interpolation, IMA J. Numer. Anal. 9 (1989) 299–314.
[8] J. Duchon, Sur l'erreur d'interpolation des fonctions de plusieurs variables par les D^m-splines, RAIRO 12 (4) (1978) 325–334.
[9] J.C. Dupin, A. Fréville, Shape preserving interpolating cubic splines with geometric mesh, Appl. Numer. Math. 9 (1992) 447–459.
[10] J.C. Fiorot, J. Tabka, Shape-preserving C^2 cubic polynomial interpolating splines, Math. Comp. 57 (195) (1991) 291–298.
[11] U. Hornung, Interpolation by smooth functions under restrictions on the derivatives, J. Approx. Theory 28 (1980) 227–237.
[12] A. Kouibia, M. Pasadas, Smoothing variational spline, Appl. Math. Lett. 9 (2) (2000) 71–75.
[13] A. Kouibia, M. Pasadas, J.J. Torrens, Fairness approximation by modified discrete smoothing D^m-splines, in: M. Daehlen, T. Lyche, L.L. Schumaker (Eds.), Mathematical Methods for Curves and Surfaces II, 1998, pp. 295–302.
[14] A. Lahtinen, Shape preserving interpolation by quadratic splines, J. Comput. Appl. Math. 29 (1990) 15–24.
[15] P.J. Laurent, An algorithm for the computation of spline functions with inequality constraints, Seminar No. 335, University of Grenoble, February 1980.
[16] C. Manni, P. Sablonnière, Monotone interpolation of order 3 by C^2 cubic splines, IMA J. Numer. Anal. 17 (1997) 305–320.
[17] E. Passow, J.A. Roulier, Monotone and convex spline interpolation, SIAM J. Numer. Anal. 14 (1977) 904–909.
[18] S. Pruess, Shape preserving C^2 cubic spline interpolation, RAIRO: Numer. Anal. 13 (1993) 493–507.
[19] J.W. Schmidt, W. Heß, An always successful method in univariate convex C^2 interpolation, Numer. Math. 71 (1995) 237–252.
[20] L.L. Schumaker, On shape preserving quadratic spline interpolation, SIAM J. Numer. Anal. 20 (4) (1983).
[21] G. Strang, Approximation in the finite element method, Numer. Math. 19 (1972) 81–98.
[22] F.I. Utreras, Positive thin plate splines, Approx. Theory Appl. 1 (3) (1985).
[23] M.L. Varas, On the computation of the monotone cubic spline function, Approx. Theory Appl. 3 (1987) 91–105.