
Polynomial interpolation on interlacing rectangular grids

Michael S. Floater*

June 1, 2017

*Department of Mathematics, University of Oslo, Moltke Moes vei 35, 0851 Oslo, Norway, email: [email protected]

Abstract

In this paper we establish the unisolvence of any interlacing pair of rectangular grids of points with respect to a large class of associated polynomial spaces. This includes interpolation on Padua points and the schemes of Morrow and Patterson, Xu, and Erb et al. We use Newton polynomials and divided differences and an apparently new formula for determinants of certain divided difference matrices.

Math Subject Classification: Primary: 65D05, 65F40. Secondary: 65D32.

Keywords: Multivariate interpolation, Padua points, divided differences, determinants.

1 Introduction

In [12, 14, 3, 1, 9, 8], point sets consisting of various interlacing pairs of rectangular grids in the domain [−1, 1]^2 have been shown to have some remarkable properties when used for approximation and cubature. Using properties of Chebyshev polynomials, these point sets have been shown to be unisolvent for interpolation in appropriate spaces of polynomials, with slowly growing Lebesgue constants, and associated cubature formulas with high degrees of precision with respect to a Chebyshev weighting. A notable example is the Padua points [3], which are unisolvent for polynomial interpolation of full degree N. The Lebesgue constant grows with minimal order O(log^2 N) and the associated cubature rule has degree of precision 2N − 1 with respect to the Chebyshev weighting [1]. These points can be defined as the intersections of the Lissajous curve γ(t) = (cos Nt, cos((N + 1)t)), t ∈ [0, π], with itself and with the boundary of [−1, 1]^2. Figure 1 shows the example N = 4. The nine points shown as circles form one grid, and the six points shown as squares form another, interlacing grid. Padua points of even degree are a modification of a similar point set proposed by Morrow and Patterson [12]. Point sets for interpolation in other polynomial spaces were studied in [12], and by Xu [14].

Figure 1: Padua points, degree 4.

Recently, Erb et al. [9] and Erb [8] have studied large classes of point sets generated by Lissajous curves with respect to both interpolation and cubature. A natural question arising from these works is whether these point sets continue to be unisolvent when their coordinates are arbitrarily spaced. For the Padua points, these more general ‘Padua-like’ points were studied in [2], [6], and [13]. It was shown that the associated Vandermonde determinant has a factorization into smaller determinants, each of which depends only on either the x coordinates of the points or the y coordinates. From this it was argued that the Vandermonde determinant is non-zero.

In this paper we show unisolvence for any interlacing pair of grids with respect to a large class of associated polynomial spaces. This implies, in particular, the unisolvence of all the interpolation schemes in [12, 14, 3, 9, 8]. The basic idea is to use tensor-product interpolation, Newton polynomials, and divided differences to reduce the problem to solving a sequence of smaller linear systems. The elements of the matrices in these linear systems are divided differences. We show that these matrices are non-singular by deriving an apparently new formula for their determinants.
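Before turning to the construction, we record a small computational illustration of the generating-curve description above. The following Python sketch samples γ(t) = (cos Nt, cos((N + 1)t)) at t_k = kπ/(N(N + 1)), k = 0, ..., N(N + 1), and collects the distinct points; this sampling rule is a standard parametrization of the Padua points and is an assumption of the sketch, not part of the discussion above. For N = 4 it recovers the 15 points (9 + 6) of Figure 1.

```python
import math

def padua_points(N):
    """Sample the Lissajous curve gamma(t) = (cos(N t), cos((N+1) t)) at
    t_k = k*pi/(N*(N+1)); the distinct samples are the Padua points of degree N.
    (Assumed parametrization; self-intersection points are visited twice.)"""
    pts = set()
    for k in range(N * (N + 1) + 1):
        t = k * math.pi / (N * (N + 1))
        # round to suppress duplicates caused by floating-point noise
        pts.add((round(math.cos(N * t), 12), round(math.cos((N + 1) * t), 12)))
    return sorted(pts)

print(len(padua_points(4)))  # 15 points (9 + 6), as in Figure 1
```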

2 The point set

Defining

B_{k,l} = \{(i, j) : 0 ≤ i ≤ k, \ 0 ≤ j ≤ l\}, \qquad k, l ≥ 0,

the point set is the union U ∪ X of two rectangular grids of points in R^2,

U = \{(u_i, v_j) : (i, j) ∈ B_{µ,ν}\}, \qquad X = \{(x_i, y_j) : (i, j) ∈ B_{m,n}\},

that are interlaced:

u_0 < x_0 < u_1 < x_1 < ··· \quad or \quad x_0 < u_0 < x_1 < u_1 < ···,    (1)

and

v_0 < y_0 < v_1 < y_1 < ··· \quad or \quad y_0 < v_0 < y_1 < v_1 < ···.    (2)

It follows that |m − µ| ≤ 1 and |n − ν| ≤ 1. Therefore, m ≤ µ + 1, and, by reversing the roles of U and X if necessary, we may assume without loss of generality that n ≤ ν. Three examples of such grids are shown in Figure 2, with black points for U and white points for X. For simplicity the figure shows uniform spacing but we allow arbitrary spacing in the two coordinate directions. For example, the dimensions and interlacing of U and X in Figure 2 (left) are the same as for the Padua points of Figure 1.

Figure 2: Examples of grids U (black points) and X (white points).
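Conditions (1) and (2) are straightforward to test numerically. The following Python sketch (the function name is ours) checks whether two strictly increasing coordinate sequences interlace in the sense of (1); applying it to the u- and x-coordinates and to the v- and y-coordinates tests whether a pair of grids U and X is admissible.

```python
def interlaced(u, x):
    """True if the strictly increasing sequences u and x satisfy condition (1):
    u0 < x0 < u1 < x1 < ...  or  x0 < u0 < x1 < u1 < ... ."""
    if abs(len(u) - len(x)) > 1:
        return False
    # tag each coordinate with its grid, merge, and require that the merged
    # sequence is strictly increasing and alternates between the two grids
    tagged = sorted([(t, 'u') for t in u] + [(t, 'x') for t in x])
    values = [t for t, _ in tagged]
    labels = [g for _, g in tagged]
    return (all(a < b for a, b in zip(values, values[1:]))
            and all(a != b for a, b in zip(labels, labels[1:])))

# arbitrary (non-uniform) spacing is allowed, as for the grids of Figure 2
print(interlaced([-1.0, -0.2, 0.9], [-0.5, 0.3, 1.0]))  # True
```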

3 Interpolation

We next associate with the point set an appropriate space of polynomials. Let N_0 = \{0, 1, 2, ...\}, and for any two index-pairs (i, j) and (k, l) in N_0^2 we write (i, j) ≤ (k, l) if both i ≤ k and j ≤ l. We call any set L ⊂ N_0^2 a lower set [7] if it has the property that if (k, l) ∈ L and (i, j) ∈ N_0^2 and (i, j) ≤ (k, l) then (i, j) ∈ L. We can associate with any lower set L the linear space of polynomials

Π(L) := span\{x^i y^j : (i, j) ∈ L\}.

In the special case that L = T_s := \{(i, j) : i + j ≤ s\}, Π(L) = Π_s, the space of bivariate polynomials of degree ≤ s.

Next, let K_1 ⊂ B_{m,n} be a lower set, with the condition that

(0, n) ∈ K_1 \quad if \quad m = µ + 1.    (3)

Then let K_2 = \{(i, j) : (m − i, n − j) ∈ B_{m,n} \setminus K_1\}, which is clearly also a lower set: we can think of it as the result of rotating the ‘upper set’ B_{m,n} \setminus K_1 through 180 degrees. In the case that m = µ + 1 the condition (3) is equivalent to (m, 0) ∉ K_2. We now define L to be the disjoint union

L = B_{µ,ν} ∪ \{(i + µ + 1, j) : (i, j) ∈ K_1\} ∪ \{(i, j + ν + 1) : (i, j) ∈ K_2\},

which has the same cardinality as U ∪ X, specifically, (µ + 1)(ν + 1) + (m + 1)(n + 1).


L is a lower set, since, by construction, the largest j coordinate of K_1 is at most ν, and the largest i coordinate of K_2 is at most µ. We will show

Theorem 1. Given the values of a real function f on U ∪ X, there is a unique polynomial p ∈ Π(L) such that p = f on U ∪ X.

Let us consider some examples of the theorem. Figure 3 illustrates possible lower sets L corresponding to the three grid pairs of Figure 2. Black points are used for B_{µ,ν}, white points for K_1 shifted to the right of B_{µ,ν}, and white points for K_2 shifted up above B_{µ,ν}.


Figure 3: Possible index sets L for the examples in Figure 2.

1. Padua and Morrow-Patterson points. For Padua points [3] of even degree and the Morrow-Patterson points of Sec. 2.2 of [12], (µ, ν) = (s, s) and (m, n) = (s, s − 1), for s ≥ 1. Setting K_1 = T_{s−1} gives K_2 = T_{s−1} and Π(L) = Π_{2s}. Figures 2 and 3 (left) show the case s = 2. For Padua points of odd degree, (µ, ν) = (s + 1, s) and (m, n) = (s, s), s ≥ 1. Setting K_1 = T_{s−1} gives K_2 = T_s and Π(L) = Π_{2s+1}.

2. Xu and Morrow-Patterson points. In [14] and in Sec. 2.3 of [12], points were studied in which (µ, ν) = (s − 1, s) and (m, n) = (s, s − 1), s ≥ 1. With the choice K_1 = T_{s−1} (which satisfies Condition (3) because (0, s − 1) ∈ K_1), we obtain K_2 = T_{s−1} and Π_{2s−1} ⊂ Π(L) ⊂ Π_{2s}. Figures 2 and 3 (middle) show the case s = 2. In Sec. 2.3 of [12], Morrow and Patterson also considered grids with (µ, ν) = (s, s), (m, n) = (s − 1, s − 1), s ≥ 1. Choosing K_1 = T_{s−1} gives K_2 = T_{s−2} and Π_{2s−1} ⊂ Π(L) ⊂ Π_{2s}. In Sec. 2.3 of [14], Xu also considered grids with (µ, ν) = (m, n) = (s, s), s ≥ 1. With the choice K_1 = T_s, we obtain K_2 = T_{s−1} and Π_{2s} ⊂ Π(L) ⊂ Π_{2s+1}.

3. Lissajous points of Erb et al. Large classes of point sets satisfying the conditions of Theorem 1 arise from Lissajous curves, as studied by Erb et al. [9] and Erb [8]. Figure 2 (right) shows the type of grid pair generated by the Lissajous curve γ(t) = (sin 2t, sin 3t), t ∈ [0, 2π], the first example of Sec. 2 of [9]. Figure 3 (right) shows the index set L used in Sec. 4 of [9]. Here, Π_4 ⊂ Π(L) ⊂ Π_5.
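To make the construction of L concrete, the following Python sketch (helper names are ours) builds K_2 and L from B_{µ,ν}, B_{m,n} and a chosen lower set K_1, and checks the even-degree Padua case of example 1: for (µ, ν) = (s, s), (m, n) = (s, s − 1) and K_1 = T_{s−1}, the resulting index set is the full triangle T_{2s}, so Π(L) = Π_{2s}.

```python
def box(k, l):
    """B_{k,l} = {(i, j) : 0 <= i <= k, 0 <= j <= l}."""
    return {(i, j) for i in range(k + 1) for j in range(l + 1)}

def triangle(s):
    """T_s = {(i, j) : i + j <= s}."""
    return {(i, j) for i in range(s + 1) for j in range(s + 1 - i)}

def index_set_L(mu, nu, m, n, K1):
    """L = B_{mu,nu} u {(i+mu+1, j) : (i,j) in K1} u {(i, j+nu+1) : (i,j) in K2},
    where K2 is the complement of K1 in B_{m,n} rotated through 180 degrees."""
    K2 = {(m - i, n - j) for (i, j) in box(m, n) - K1}
    return (box(mu, nu)
            | {(i + mu + 1, j) for (i, j) in K1}
            | {(i, j + nu + 1) for (i, j) in K2})

# Example 1 (Padua / Morrow-Patterson, even degree) with s = 2:
s = 2
L = index_set_L(s, s, s, s - 1, triangle(s - 1))
assert L == triangle(2 * s)                    # Pi(L) = Pi_{2s}
assert len(L) == (s + 1) ** 2 + (s + 1) * s    # |U| + |X| = 9 + 6
```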

4 Tensor-product interpolation

Let us start by using tensor-product interpolation on U to reduce the complexity of the problem. Letting

a(x) = (x − u_0)(x − u_1) ··· (x − u_µ), \qquad b(y) = (y − v_0)(y − v_1) ··· (y − v_ν),

we can express any p ∈ Π(L) uniquely as

p(x, y) = p_0(x, y) + a(x) p_1(x, y) + b(y) p_2(x, y),

where p_0 ∈ Π(B_{µ,ν}), p_1 ∈ Π(K_1), and p_2 ∈ Π(K_2). We note that for Padua interpolation, the polynomials a(x) and b(y) are the same as those used to make a basis for Π(L) in [2, Sec. 2]. So let p_0 be the tensor-product interpolant to f on U,

p_0(x, y) = \sum_{k=0}^{µ} \sum_{l=0}^{ν} f(u_k, v_l) α_k(x) β_l(y),

where

α_k(x) = \prod_{r=0, r≠k}^{µ} \frac{x − u_r}{u_k − u_r}, \qquad β_l(y) = \prod_{s=0, s≠l}^{ν} \frac{y − v_s}{v_l − v_s},

and we will have p = f on U ∪ X if we can find p_1 ∈ Π(K_1) and p_2 ∈ Π(K_2) such that

a(x_i) p_1(x_i, y_j) + b(y_j) p_2(x_i, y_j) = g_{ij}, \qquad (i, j) ∈ B_{m,n},    (4)

where g_{ij} := f(x_i, y_j) − p_0(x_i, y_j).
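This step amounts to ordinary tensor-product Lagrange interpolation. A minimal Python sketch (names are ours) evaluating p_0 and the residuals g_{ij} of (4) might look as follows.

```python
def lagrange_basis(nodes, k, t):
    """alpha_k(t) = prod over r != k of (t - nodes[r]) / (nodes[k] - nodes[r])."""
    val = 1.0
    for r, nr in enumerate(nodes):
        if r != k:
            val *= (t - nr) / (nodes[k] - nr)
    return val

def tensor_interpolant(u, v, fvals):
    """p0(x, y) interpolating fvals[k][l] = f(u[k], v[l]) on the grid U."""
    def p0(x, y):
        return sum(fvals[k][l] * lagrange_basis(u, k, x) * lagrange_basis(v, l, y)
                   for k in range(len(u)) for l in range(len(v)))
    return p0

def residuals(f, u, v, xs, ys):
    """g[i][j] = f(x_i, y_j) - p0(x_i, y_j) for (i, j) in B_{m,n}, as in (4)."""
    p0 = tensor_interpolant(u, v, [[f(uk, vl) for vl in v] for uk in u])
    return [[f(xi, yj) - p0(xi, yj) for yj in ys] for xi in xs]
```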

5 Newton form

To solve the equations in (4) we represent p_1 and p_2 in Newton form [10] with respect to the grid X,

p_1(x, y) = \sum_{(k,l) ∈ K_1} c_{kl} φ_k(x) ψ_l(y), \qquad p_2(x, y) = \sum_{(k,l) ∈ K_2} d_{kl} φ_k(x) ψ_l(y),

for coefficients c_{kl}, d_{kl} ∈ R, where φ_0(x) = 1, ψ_0(y) = 1, and

φ_k(x) = (x − x_0)(x − x_1) ··· (x − x_{k−1}), \qquad k ≥ 1,    (5)

ψ_l(y) = (y − y_0)(y − y_1) ··· (y − y_{l−1}), \qquad l ≥ 1.    (6)

Substituting these expressions into (4) gives

\sum_{(k,l) ∈ K_1} c_{kl} (aφ_k)(x_i) ψ_l(y_j) + \sum_{(k,l) ∈ K_2} d_{kl} φ_k(x_i) (bψ_l)(y_j) = g_{ij}, \qquad (i, j) ∈ B_{m,n}.

Next, by taking divided differences of these (m + 1)(n + 1) equations with respect to the grid X, we transform them into an equivalent set of equations which are easier to solve. Applying the divided difference operator [x_0, ..., x_i; y_0, ..., y_j] to each side of the equation gives

\sum_{(k,l) ∈ K_1} c_{kl} [x_0, ..., x_i](aφ_k) [y_0, ..., y_j]ψ_l + \sum_{(k,l) ∈ K_2} d_{kl} [x_0, ..., x_i]φ_k [y_0, ..., y_j](bψ_l) = h_{ij}, \qquad (i, j) ∈ B_{m,n},

where h_{ij} = [x_0, ..., x_i; y_0, ..., y_j](f − p_0). These equations now simplify because

[x_0, ..., x_i]φ_k = δ_{ik} \quad and \quad [y_0, ..., y_j]ψ_l = δ_{jl},

and, by the Leibniz rule for divided differences [5],

a_{ki} := [x_0, ..., x_i](aφ_k) = \sum_{ρ=0}^{i} [x_ρ, ..., x_i]a \, [x_0, ..., x_ρ]φ_k = \begin{cases} [x_k, ..., x_i]a, & k ≤ i; \\ 0, & k > i, \end{cases}

and similarly,

b_{lj} := [y_0, ..., y_j](bψ_l) = \begin{cases} [y_l, ..., y_j]b, & l ≤ j; \\ 0, & l > j. \end{cases}

We have thus obtained the equivalent equations

\sum_{k : (k,j) ∈ K_1} a_{ki} c_{kj} + \sum_{l : (i,l) ∈ K_2} b_{lj} d_{il} = h_{ij}, \qquad (i, j) ∈ B_{m,n}.    (7)
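All the quantities entering (7) are classical divided differences and can be computed with the usual recursive tables. The following Python sketch (names are ours, and it is only an illustration of the definitions, not the paper's algorithm) computes the univariate table, the bivariate values h_{ij}, and the coefficients a_{ki} = [x_k, ..., x_i]a; the b_{lj} are computed in the same way from b and the y-grid.

```python
def dd_table(xs, vals):
    """Divided difference table: d[k][i] = [x_k, ..., x_i] of the data vals,
    where vals[i] is the value at xs[i] (entries with k > i are unused)."""
    n = len(xs)
    d = [[0.0] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = vals[i]
    for w in range(1, n):
        for k in range(n - w):
            d[k][k + w] = (d[k + 1][k + w] - d[k][k + w - 1]) / (xs[k + w] - xs[k])
    return d

def bivariate_dd(xs, ys, g):
    """h[i][j] = [x_0,...,x_i; y_0,...,y_j] applied to the data g[i][j],
    taking divided differences first in x and then in y."""
    m1, n1 = len(xs), len(ys)
    dx = [[0.0] * n1 for _ in range(m1)]       # dx[i][j] = [x_0,...,x_i] g(., y_j)
    for j in range(n1):
        col = dd_table(xs, [g[i][j] for i in range(m1)])
        for i in range(m1):
            dx[i][j] = col[0][i]
    h = [[0.0] * n1 for _ in range(m1)]
    for i in range(m1):
        row = dd_table(ys, dx[i])
        for j in range(n1):
            h[i][j] = row[0][j]
    return h

def a_coefficients(xs, a):
    """a[k][i] = [x_k, ..., x_i] a for k <= i (zero for k > i, cf. the text)."""
    return dd_table(xs, [a(x) for x in xs])
```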

6 Solution

The equations (7) can be solved by block back substitution, as follows. An index pair (i, j) of the lower set K_1 is a maximal point of K_1 if there is no pair (k, l) ∈ K_1, distinct from (i, j), such that (i, j) ≤ (k, l). In general the maximal points of K_1 are (i_r, j_r), r = 1, 2, ..., s, for some s ≥ 1, with the ordering

0 ≤ i_1 < ··· < i_s ≤ m, \qquad n ≥ j_1 > ··· > j_s ≥ 0.

Let us assume initially that (0, n) ∈ K_1 and (m, 0) ∉ K_1. Then j_1 = n and i_s < m, and K_2 also has s maximal points, (m − i_r − 1, n − j_{r+1} − 1), r = s, s − 1, ..., 1, where j_{s+1} := −1. This is in fact the case in all three examples of Figure 3, where s = 2. Figure 4 illustrates, on the left, an example of K_1 ⊂ B_{m,n}, where (m, n) = (20, 12). Here, s = 3 and (i_1, j_1) = (5, 12), (i_2, j_2) = (8, 9), (i_3, j_3) = (16, 4). The corresponding set K_2 has maximal points (3, 12), (11, 7), (14, 2), and is illustrated on the right. We now group the equations (7) into 2s blocks. For each r = 1, 2, ..., s we apply the following two steps (we also define i_0 = −1 and i_{s+1} = m):



Figure 4: Example of K_1 and K_2, both with three maximal points.

(i) For j = j_{r+1} + 1, ..., j_r, we write equations (7) with i = m − i_r, ..., m as the linear system

\sum_{k=0}^{i_r} a_{ki} c_{kj} = h_{ij} − \sum_{l=0}^{n−j_α−1} b_{lj} d_{il}, \qquad α = 1, ..., r, \quad i = m − i_α, ..., m − i_{α−1} − 1.    (8)

Since the coefficients d_{il}, i ≥ m − i_r, are known, the right-hand side is known and we can solve for c_{0,j}, ..., c_{i_r,j}.

(ii) For i = m − i_{r+1}, ..., m − i_r − 1, we write equations (7) with j = j_{r+1} + 1, ..., n, as the linear system

\sum_{l=0}^{n−j_{r+1}−1} b_{lj} d_{il} = h_{ij} − \sum_{k=0}^{i_α} a_{ki} c_{kj}, \qquad α = 1, ..., r, \quad j = j_{α+1} + 1, ..., j_α.    (9)

Since the coefficients c_{kj}, j ≥ j_{r+1} + 1, are known, the right-hand side is known and we can solve for d_{i,0}, ..., d_{i,n−j_{r+1}−1}.
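The bookkeeping behind the 2s blocks can be summarized in a short Python sketch (names are ours). It finds the maximal points of K_1 and lists, for each r, which rows of c-coefficients and which columns of d-coefficients are determined by steps (i) and (ii); it assumes the case (0, n) ∈ K_1 and (m, 0) ∉ K_1 treated above, and leaves the actual dense solves of (8) and (9) to a generic linear solver.

```python
def maximal_points(K):
    """Maximal points of a lower set K: pairs (i, j) in K with no distinct
    (k, l) in K such that i <= k and j <= l."""
    return sorted((i, j) for (i, j) in K
                  if not any((k, l) != (i, j) and i <= k and j <= l for (k, l) in K))

def block_order(m, K1):
    """Order of the 2s blocks, assuming (0, n) in K1 and (m, 0) not in K1.
    Each entry records whether the block determines rows of c-coefficients
    (step (i)) or columns of d-coefficients (step (ii)), and which ones."""
    maxpts = maximal_points(K1)             # (i_1, j_1), ..., (i_s, j_s)
    i_idx = [p[0] for p in maxpts]          # i_1 < ... < i_s
    j_idx = [p[1] for p in maxpts] + [-1]   # j_1 > ... > j_s, with j_{s+1} = -1
    s = len(maxpts)
    blocks = []
    for r in range(1, s + 1):
        # step (i): rows j = j_{r+1}+1, ..., j_r of c-coefficients
        blocks.append(('c-rows', list(range(j_idx[r] + 1, j_idx[r - 1] + 1))))
        # step (ii): columns i = m - i_{r+1}, ..., m - i_r - 1 of d-coefficients
        i_next = m if r == s else i_idx[r]
        blocks.append(('d-cols', list(range(m - i_next, m - i_idx[r - 1]))))
    return blocks

# Example of Figure 4: (m, n) = (20, 12), K1 generated by (5,12), (8,9), (16,4)
K1 = {(i, j) for (p, q) in [(5, 12), (8, 9), (16, 4)]
      for i in range(p + 1) for j in range(q + 1)}
print(maximal_points(K1))        # [(5, 12), (8, 9), (16, 4)]
print(block_order(20, K1))       # six blocks, in the order they are solved
```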


Figure 5: The equations are grouped into six blocks.

Figure 5 illustrates how the equations in the example of Figure 4 are grouped together into blocks. There are six blocks, numbered according to the order in which they are solved. Odd-numbered blocks find rows of c-coefficients, one row of coefficients for each horizontal line of equations in the block. Even-numbered blocks find columns of d-coefficients, one column of coefficients for each vertical line of equations in the block.


The algorithm is similar for the remaining cases of j_1 and i_s. If j_1 = n and i_s = m, K_2 has s − 1 maximal points, and the algorithm stops after 2s − 1 steps. If j_1 < n, the first block of equations finds d-coefficients, the second finds c-coefficients, and so on.

7 Unisolvence

We have now shown that the original problem has a solution if the linear systems in (8) and (9) are solvable. To discuss their solvability, consider the two divided difference matrices

A = \begin{pmatrix}
[x_0, \dots, x_m]a & \cdots & [x_{m-1}, x_m]a & [x_m]a \\
[x_0, \dots, x_{m-1}]a & \cdots & [x_{m-1}]a & 0 \\
\vdots & ⋰ & ⋰ & \vdots \\
[x_0]a & 0 & \cdots & 0
\end{pmatrix}    (10)

and

B = \begin{pmatrix}
[y_0, \dots, y_n]b & \cdots & [y_{n-1}, y_n]b & [y_n]b \\
[y_0, \dots, y_{n-1}]b & \cdots & [y_{n-1}]b & 0 \\
\vdots & ⋰ & ⋰ & \vdots \\
[y_0]b & 0 & \cdots & 0
\end{pmatrix}.    (11)
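For computations, A (and analogously B) can be assembled directly from a divided difference table of a on x_0, ..., x_m. A small Python sketch (names are ours; dd_table repeats the helper already sketched in Section 5, so that this block is self-contained):

```python
def dd_table(xs, vals):
    """d[k][i] = [x_k, ..., x_i] of the data vals (same helper as in Sec. 5)."""
    n = len(xs)
    d = [[0.0] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = vals[i]
    for w in range(1, n):
        for k in range(n - w):
            d[k][k + w] = (d[k + 1][k + w] - d[k][k + w - 1]) / (xs[k + w] - xs[k])
    return d

def matrix_A(xs, a):
    """A as in (10): A[i][j] = [x_j, ..., x_{m-i}] a for i + j <= m, else 0."""
    m = len(xs) - 1
    d = dd_table(xs, [a(x) for x in xs])
    return [[d[j][m - i] if i + j <= m else 0.0 for j in range(m + 1)]
            for i in range(m + 1)]

# Example: a(x) = (x - 0.5)(x - 1.5) with the interlaced grid x = 0, 1, 2
A = matrix_A([0.0, 1.0, 2.0], lambda t: (t - 0.5) * (t - 1.5))
```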

For each r = 0, 1, ..., m, let A_r be the submatrix of A consisting of its first r + 1 rows and columns (the upper left corner of A). Similarly, for each r = 0, 1, ..., n, let B_r be the submatrix of B consisting of its first r + 1 rows and columns. By ordering the equations in (8) by decreasing i and the unknowns c_{k,j} by increasing k, we see that the linear system (8) has a unique solution if the matrix A_{i_r} is non-singular. Similarly, by ordering the equations in (9) by decreasing j and the unknowns d_{i,l} by increasing l, the linear system (9) has a unique solution if the matrix B_{n−j_{r+1}−1} is non-singular. Thus the proof of Theorem 1 will be complete if we can show that all the submatrices A_r, r = 0, 1, ..., m, and B_r, r = 0, 1, ..., n, are non-singular.

It is sufficient to study A. Consider first A_0, which consists of the single element

[x_0, ..., x_m]a = \sum_{k=0}^{m} q_k,

where

q_k = a(x_k) \prod_{j=0, j≠k}^{m} (x_k − x_j)^{−1}.    (12)

Since x_0, x_1, ..., x_m are increasing, the sign of the product in (12) is (−1)^{m−k}. By the interlacing of X with U, the values a(x_0), ..., a(x_m) alternate in sign,

sgn a(x_k) = (−1)^{m−k} sgn a(x_m),

and it follows that sgn q_k = sgn a(x_m), and so

sgn([x_0, ..., x_m]a) = sgn a(x_m) ≠ 0,

and A_0 is non-singular. In order to extend this argument to general r, we derive a formula for the determinant of A_r.

Lemma 1. For r = 0, 1, ..., m, and with q_k as in (12),

det A_r = \sum_{0 ≤ k_0 < k_1 < ··· < k_r ≤ m} \prod_{0 ≤ i < j ≤ r} (x_{k_j} − x_{k_i})^2 \, q_{k_0} q_{k_1} ··· q_{k_r}.    (13)
Here, the product over an empty set is defined as 1 (corresponding to the case r = 0). From this lemma, the proof of Theorem 1 is complete because the sign of every term in the sum in (13) is (sgn a(x_m))^{r+1}, and so A_r is non-singular. The proof that the B_r are non-singular is similar.

As an example, for m = 2, Lemma 1 shows that the determinant of A_1 is

\begin{vmatrix} [x_0, x_1, x_2]a & [x_1, x_2]a \\ [x_0, x_1]a & [x_1]a \end{vmatrix} = (x_1 − x_0)^2 q_0 q_1 + (x_2 − x_0)^2 q_0 q_2 + (x_2 − x_1)^2 q_1 q_2,

where

q_0 = \frac{a(x_0)}{(x_0 − x_1)(x_0 − x_2)}, \qquad q_1 = \frac{a(x_1)}{(x_1 − x_0)(x_1 − x_2)}, \qquad q_2 = \frac{a(x_2)}{(x_2 − x_0)(x_2 − x_1)}.

This can be verified by direct calculation using the observations that

[x_0, x_1, x_2]a = q_0 + q_1 + q_2, \qquad [x_0, x_1]a = (x_0 − x_2)q_0 + (x_1 − x_2)q_1,

[x_1, x_2]a = (x_1 − x_0)q_1 + (x_2 − x_0)q_2, \qquad [x_1]a = (x_1 − x_2)(x_1 − x_0)q_1.

Since x_0 < x_1 < x_2, and assuming that a(x_0) > 0, a(x_1) < 0, a(x_2) > 0, we find that q_0, q_1, q_2 > 0 and so det A_1 > 0.

To prove Lemma 1, observe that the elements of A in (10) are A_{ij} = a_{j,m−i} for i, j = 0, ..., m. We start by expressing these elements as linear combinations of q_0, ..., q_m.

Lemma 2. For i, j = 0, ..., m,

A_{ij} = \sum_{k=0}^{m} \hat{φ}_i(x_k) φ_j(x_k) q_k,    (14)

where q_k is given by (12), φ_j is given by (5), and

\hat{φ}_0(x) = 1, \qquad \hat{φ}_i(x) = (x − x_m)(x − x_{m−1}) ··· (x − x_{m−i+1}), \quad i ≥ 1.

Proof. Consider first the case that i + j ≤ m. Then

A_{ij} = [x_j, ..., x_{m−i}]a = \sum_{k=j}^{m−i} a(x_k) \prod_{s=j, s≠k}^{m−i} (x_k − x_s)^{−1} = \sum_{k=j}^{m−i} \hat{φ}_i(x_k) φ_j(x_k) q_k.

Then (14) follows from the fact that φ_j(x_k) = 0 whenever k < j and \hat{φ}_i(x_k) = 0 whenever k > m − i. Otherwise, i + j > m. Then (14) follows from the fact that \hat{φ}_i(x_k) φ_j(x_k) = 0 for all k = 0, ..., m. ✷

Proof of Lemma 1. By Lemma 2, we can write A_r as the matrix product A_r = C_r D_r, where

C_r = [\hat{φ}_i(x_k)]_{i=0,...,r; \, k=0,...,m}, \qquad D_r = [φ_j(x_k) q_k]_{k=0,...,m; \, j=0,...,r}.

By the Cauchy–Binet theorem [11, p. 17] applied to this product,

|A_r| = \sum_{0 ≤ k_0 < ··· < k_r ≤ m} |\hat{φ}_i(x_{k_j})|_{i,j=0,...,r} \, |φ_j(x_{k_i}) q_{k_i}|_{i,j=0,...,r}.

Since |φ_j(x_{k_i}) q_{k_i}|_{i,j=0,...,r} = q_{k_0} q_{k_1} ··· q_{k_r} |φ_j(x_{k_i})|_{i,j=0,...,r}, the proof is completed by observing that

|\hat{φ}_i(x_{k_j})|_{i,j=0,...,r} = |φ_j(x_{k_i})|_{i,j=0,...,r} = |x_{k_j}^i|_{i,j=0,...,r} = \prod_{0 ≤ i < j ≤ r} (x_{k_j} − x_{k_i}). ✷
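Formula (13) is also easy to check numerically. The following Python sketch (names are ours; dd_table repeats the earlier helper, and the example grids are an arbitrary interlacing choice, not taken from the text) compares det A_r, computed directly from the divided difference matrix (10), with the right-hand side of (13) for r = 0, ..., m.

```python
import itertools, math

def dd_table(xs, vals):
    """d[k][i] = [x_k, ..., x_i] of the data vals."""
    n = len(xs)
    d = [[0.0] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = vals[i]
    for w in range(1, n):
        for k in range(n - w):
            d[k][k + w] = (d[k + 1][k + w] - d[k][k + w - 1]) / (xs[k + w] - xs[k])
    return d

def det(M):
    """Determinant by permutation expansion (fine for small matrices)."""
    n, total = len(M), 0.0
    for perm in itertools.permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        total += sign * math.prod(M[i][perm[i]] for i in range(n))
    return total

def lemma1_rhs(xs, a, r):
    """Right-hand side of (13) for the grid xs and the polynomial a."""
    m = len(xs) - 1
    q = [a(xs[k]) / math.prod(xs[k] - xs[j] for j in range(m + 1) if j != k)
         for k in range(m + 1)]
    total = 0.0
    for ks in itertools.combinations(range(m + 1), r + 1):
        vdm = math.prod((xs[kj] - xs[ki]) ** 2
                        for ki, kj in itertools.combinations(ks, 2))
        total += vdm * math.prod(q[k] for k in ks)
    return total

# interlaced grids: x_0 < u_0 < x_1 < ... < u_3 < x_4, with a(x) = (x-u_0)...(x-u_3)
u, xs = [0.5, 1.5, 2.5, 3.5], [0.0, 1.0, 2.0, 3.0, 4.0]
a = lambda t: math.prod(t - ui for ui in u)
d, m = dd_table(xs, [a(x) for x in xs]), len(xs) - 1
for r in range(m + 1):
    Ar = [[d[j][m - i] if i + j <= m else 0.0 for j in range(r + 1)]
          for i in range(r + 1)]
    assert math.isclose(det(Ar), lemma1_rhs(xs, a, r), rel_tol=1e-9)
print("formula (13) verified for r = 0, ...,", m)
```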



8 Final remark

While the purpose of this work was to prove unisolvence, the method of proof provides an algorithm for finding the polynomial interpolant p in Theorem 1. However, this algorithm will be subject to the numerical sensitivity of divided differences, and the conditioning of the matrices A_r and B_r, and it might be better, numerically, to represent p in some other way, such as, for example, using a Chebyshev basis, as proposed for Padua points in [4]. This is a topic for future research.

Acknowledgement. I wish to thank the referees for their careful reading of the manuscript and their valuable comments which have helped to improve the paper.

References

[1] L. Bos, M. Caliari, S. De Marchi, M. Vianello, and Y. Xu, Bivariate Lagrange interpolation at the Padua points: the generating curve approach, J. Approx. Theory 143 (2006), 15–25.


[2] L. Bos, S. De Marchi, and S. Waldron, On the Vandermonde determinant of Padua-like points, Dolomites Research Notes on Approximation 2 (2009), 1–15.

[3] M. Caliari, S. De Marchi, and M. Vianello, Bivariate Lagrange interpolation on the square at new nodal sets, Appl. Math. Comp. 165 (2005), 261–274.

[4] M. Caliari, S. De Marchi, and M. Vianello, Bivariate Lagrange interpolation at the Padua points: computational aspects, J. Comp. Appl. Math. 221 (2008), 284–292.

[5] C. de Boor, Divided differences, Surveys in Approximation Theory 1 (2005), 46–69.

[6] S. De Marchi and K. Usevich, On certain multivariate Vandermonde determinants whose variables separate, Lin. Alg. Appl. 449 (2014), 17–27.

[7] N. Dyn and M. S. Floater, Multivariate polynomial interpolation on lower sets, J. Approx. Theory 177 (2014), 34–42.

[8] W. Erb, Bivariate Lagrange interpolation at the node points of Lissajous curves - the degenerate case, Appl. Math. Comp. 289 (2016), 409–425.

[9] W. Erb, C. Kaethner, M. Ahlborg, and T. M. Buzug, Bivariate Lagrange interpolation at the node points of non-degenerate Lissajous curves, Numer. Math. 133 (2016), 685–705.

[10] E. Isaacson and H. B. Keller, Analysis of numerical methods, Dover, 1994.

[11] S. Karlin, Total positivity, Vol. I, Stanford University Press, Stanford, California, 1968.

[12] C. R. Morrow and T. N. L. Patterson, Construction of algebraic cubature rules using polynomial ideal theory, SIAM J. Numer. Anal. 15 (1978), 953–976.

[13] A. Pierro de Camargo and S. De Marchi, A few remarks on "On certain Vandermonde determinants whose variables separate", Dolomites Research Notes on Approximation 8 (2015), 1–11.

[14] Y. Xu, Lagrange interpolation on Chebyshev points of two variables, J. Approx. Theory 87 (1996), 220–238.
