Dual basis functions in subspaces of inner product spaces


Scott N. Kersey, Dept. of Math. Sci., Georgia Southern Univ., Statesboro, GA 30460-8093, United States


Abstract. Dual basis functions are well studied in the literature for certain inner product spaces. In this paper, we introduce dual basis functions in subspaces of inner product spaces. The goal is to construct a basis for a subspace that is dual to a basis in a different subspace of the same dimension. This problem reduces to the standard dual basis problem when the two subspaces and bases are the same. The paper begins with a characterization and properties of dual bases in subspaces, including requirements for existence. The construction is then carried out for subspaces of the space of polynomials in the Bernstein basis. Two configurations are of particular interest: a symmetric case in which the dual basis is affine and converges to Lagrange polynomial interpolation, and an end-point case that converges to Hermite interpolation.

Keywords: dual basis functions; polynomials

1. Introduction

Let $(X, U, D)$ be a real or complex finite-dimensional inner product space with basis $U = [U_1, \ldots, U_n]$ and dual basis $D = [D_1, \ldots, D_n]$. The dual basis $D$ is the basis of $X$ that is dual to $U$ in the sense that $\langle U_i, D_j \rangle = \delta_{ij}$. With $\mathbb{F}$ a scalar field ($\mathbb{R}$ or $\mathbb{C}$), we can view the basis $U$ as a map on (column) vectors in $\mathbb{F}^n$ as follows:
$$U : \mathbb{F}^n \to X : a \mapsto Ua = \sum_{i=1}^{n} a_i U_i.$$

Likewise, $D : \mathbb{F}^n \to X$ with $Da = \sum_{i=1}^{n} D_i a_i$. On identifying the dual space $X^*$ with $X$, we define the dual map
$$D^* : X \to \mathbb{F}^n : f \mapsto D^* f = [\langle f, D_1\rangle, \ldots, \langle f, D_n\rangle].$$
Hence, $D$ is dual to $U$ exactly when the matrix $D^* U := [\langle U_i, D_j\rangle : 1 \le j, i \le n]$ is the $n \times n$ identity matrix. The basis $D$ can be expressed in terms of the basis $U$ as $D = UC$ for some invertible matrix $C$. On applying $U^*$ to both sides and solving, we get $C = (U^* U)^{-1} U^* D$. Since $D$ is dual to $U$, this reduces to $C = (U^* U)^{-1}$.

Dual bases in inner product spaces have in recent years been well studied in the literature, in particular when $X$ is a space of polynomials of a certain degree in the Bernstein basis, and to a lesser extent when $X$ is a space of spline functions (see for example [1,5,6]). In certain cases the transformation matrix $C$ is known explicitly (e.g., an explicit formula for the dual Bernstein basis transformation is derived in [1]).

In the first part of this paper, our goal is to generalize the theory of dual bases in inner product spaces to dual bases in "subspaces" of inner product spaces. In this case, we consider bases for spaces that are dual to bases from "different" spaces (of the same dimension), both subspaces of a common inner product space. To set up this problem, let $(X_1, U_1, D_1)$ and $(X_2, U_2, D_2)$ be two subspaces of $(X, U, D)$ of the same dimension, with bases $U_k = U E_k$ for some injective maps $E_k$, and dual


bases $D_k$, $k = 1, 2$. Hence, $D_k^* U_k = I$ for $k = 1, 2$. Our main interest is the basis $D_{21}$ of $X_2$ that is dual to $U_1$. Under certain conditions, we show that $D_{21} = U_2 C_{21}$ with

$$C_{21} = (E_1^* C^{-1} E_2)^{-1} = (E_1^* U^* U E_2)^{-1}.$$
Here, $U^*$ is the adjoint operator of $U$, as described above, and $E_1^*$ is either the conjugate transpose or the transpose of $E_1$, depending on whether $\mathbb{F} = \mathbb{C}$ or $\mathbb{R}$.

In the latter part of this paper the construction is carried out for subspaces of the space of polynomials of degree $n$ in the Bernstein basis. We present two special configurations: a "symmetric configuration" that produces a dual basis that is affine and converges to Lagrange interpolation, and an "end-point configuration" that produces a basis that converges to Hermite interpolation. We derive other properties of these bases; in particular, that they are affine (but not convex). The problem of constructing dual basis functions in inner product spaces has been well studied; however, to our knowledge, the construction of dual bases in subspaces has not been carried out systematically. Potential applications include best approximation (least squares), degree reduction, and curve fitting in the dual bases. It is the aim of this paper to develop this subject.

This paper is organized as follows:
- In Section 2, we provide a construction of dual basis functions in inner product spaces and its connection to orthogonal projection.
- In Section 3, dual basis functions in subspaces of inner product spaces are defined. This construction determines the basis in a subspace that is dual to a basis in a second (typically different) subspace of the same dimension.
- In Section 4, we show how this construction is related to the problem of least squares, and in particular, how to recover the approximand from the best approximation.
- In Section 5, we specialize the construction to polynomial spaces in the Bernstein basis. In particular, three cases (with two sub-cases) are presented, and we establish invertibility of the Gram matrix for these cases.
- In Section 6, we show that for two of our cases the dual bases are affine.
- In Section 7, we show that a symmetric class of dual Bernstein bases converges to Lagrange interpolation. Several bases are plotted.
- In Section 8, we show that an end-point class of dual Bernstein bases converges to Hermite interpolation. Several bases are plotted.
- In Section 9, we compute the dual basis for the symmetric and end-point cases, and compare these to the Lagrange and Hermite bases, respectively. Following this, we apply these two bases to curve fitting.

2. Dual basis functions in inner product spaces

Let $X = (X, \langle\cdot,\cdot\rangle)$ be a (real or complex) inner product space of dimension $n$ with basis $U = [U_1, \ldots, U_n]$. Any $x \in X$ can be represented as a linear combination

$$x = Ua = [U_1, \ldots, U_n][a_1, \ldots, a_n]^T = \sum_{i=1}^{n} a_i U_i$$

for some coefficient vector $a \in \mathbb{F}^n$. The dual space $X^*$ consists of all linear functionals on $X$, and among these one has the dual functionals $\lambda_j$ such that $\lambda_j U_i = \delta_{ij}$. With the (data map) $\Lambda := [\lambda_1, \ldots, \lambda_n]$ (a row vector), $\Lambda^T$ is a column vector, and the duality statement is simply

$$\Lambda^T U = I,$$
with $I$ the $n \times n$ identity matrix. Since $U$ is a basis (hence linearly independent), $\Lambda$ is also linearly independent, and therefore a basis for the $n$-dimensional space $X^*$. By the Riesz representation theorem, every linear functional on $X$ has a representer in $X$. That is, for each $\lambda_j$, there exists $D_j \in X$ such that

$$\lambda_j = \langle \cdot, D_j\rangle.$$
Therefore, we associate $\Lambda = [\lambda_1, \ldots, \lambda_n]$ with the representers $D := [D_1, \ldots, D_n]$. Since $\Lambda$ is linearly independent in $X^*$ and dual to $U$, $D$ is linearly independent in $X$ and also dual to $U$. That is, $\langle D_j, U_i\rangle = \delta_{ij}$. Hence, $D$ is a so-called dual basis. In computing with dual bases, it is of interest to find a representation in terms of the original basis $U$. This is a transformation between the two bases $U$ and $D$, which can be written as $D = UC$ for some matrix $C \in \mathbb{F}^{n\times n}$. Using the adjoint notation, we have

$$I = \Lambda^T U = D^* U = [\langle U_i, D_j\rangle].$$
The transformation matrix $C$ is computed by

$$I = D^* U = (UC)^* U = C^* U^* U,$$
with $C^*$ denoting the conjugate transpose of $C$ (when $\mathbb{F} = \mathbb{C}$) or the transpose (when $\mathbb{F} = \mathbb{R}$). This gives


$$C^* = (U^* U)^{-1} = ((U^* U)^*)^{-1} = ((U^* U)^{-1})^* = (C^*)^* = C.$$
That is, the matrix $C$ is Hermitian (symmetric when $\mathbb{F} = \mathbb{R}$). But this fact was already evident: since $U$ is a basis and inner products are Hermitian (resp. symmetric) and positive definite, the Gram matrix $U^* U$ is Hermitian (symmetric) positive definite. Its inverse is therefore Hermitian (symmetric), and positive definite since its eigenvalues are the reciprocals of those of $U^* U$ (which are necessarily positive and real). All said, we have:

Theorem 2.1. Let $(X, U, D)$ be an inner product space of dimension $n$ over a field $\mathbb{F}$ with basis $U$ and dual basis $D$. Then $D = UC$ with

$$C = (U^* U)^{-1},$$
a positive-definite $n \times n$ matrix, which is Hermitian when $\mathbb{F} = \mathbb{C}$ and symmetric when $\mathbb{F} = \mathbb{R}$.

One application of dual bases is in solving least squares problems. This involves orthogonal projection. Suppose that $(X_1, U_1, D_1)$ is a subspace of $X$ with basis $U_1$ and dual basis $D_1 = U_1 C_1$. It is well known that the orthogonal projector onto $X_1$ can be constructed as $P_1 = U_1 (U_1^* U_1)^{-1} U_1^*$. Then, with $C_1 = (U_1^* U_1)^{-1}$ by the previous theorem and $D_1 = U_1 C_1$, we get the following representation for this orthogonal projector.

Theorem 2.2. Let $(X, U, D)$ be an inner product space with basis $U$ and dual basis $D$, and let $(X_1, U_1, D_1)$ be a subspace with basis $U_1$ and dual basis $D_1 = U_1 C_1$ with $C_1 = (U_1^* U_1)^{-1}$. Then the orthogonal projector of $X$ onto $X_1$ is $P_1 = D_1 U_1^*$.

In this setup, we say that $p = P_1 q = D_1 U_1^* q$ is the best approximation in $X_1$ to $q \in X$. There are several papers that work off this model of dual bases, and in particular the matrix $C$ is known explicitly in certain inner product spaces (cf. [1]). The goal in our work is to extend the idea of dual bases to dual bases on subspaces. Our motivation is to understand the action of a subset of the functionals $\lambda_i$ on subspaces of $X$. For example, suppose that $X_1$ is a subspace of $X$ of dimension $m$, and we choose a selection (subsequence) $\Lambda_s = [\lambda_{s(i)} : i = 0:m]$ of the functionals; then when is $\Lambda_s$ 1–1 on $X_1$, and what meaning does it carry? More specifically, we are interested in the action of a subset of the dual functionals for $B^n$ on $B^m$, the Bernstein bases for polynomial spaces. In this paper, we are concerned with those functionals of the form $\lambda = \langle \cdot, D\rangle$ with representer $D$ in an inner product space.

3. Dual basis functions in subspaces of inner product spaces

In this section we generalize Theorem 2.1 to subspaces of $X$. As above, $X$ is an $n$-dimensional inner product space with basis $U$ and dual basis $D$. For $k = 1, 2$ and $m \le n$, let $E_k$ be an $n \times m$ matrix of rank $m$ and $X_k$ the subspace of $X$ of dimension $m$ with basis $U_k = U E_k$. These subspaces necessarily have their own dual bases $D_k$, which can be represented as $D_k = U_k C_k$ for some $C_k \in \mathbb{R}^{m\times m}$. In particular, $D_k^* U_k = I$, with $I$ the $m \times m$ identity matrix. The setup is illustrated by the diagram in Fig. 3.1. In the figure and in the theorem that follows, $D_{12}$ is a basis for $X_1$ that is dual to $U_2$, the basis for $X_2$; this connects the two subspaces. This is stated in the next theorem.

Theorem 3.1. Suppose $(X, U, D)$ is an inner product space over a field $\mathbb{F}$ ($\mathbb{R}$ or $\mathbb{C}$) of dimension $n$ with basis $U$ and dual basis $D = UC$ for some $n \times n$ matrix $C$. For $k = 1, 2$, let $(X_k, U_k, D_k)$ be two $m$-dimensional subspaces of $X$ with bases $U_k = U E_k$ for some $n \times m$ matrices $E_k$, and with dual bases $D_k = U_k C_k$ for some $m \times m$ matrices $C_k$. Assume that $E_1^* C^{-1} E_2$ is invertible, and let $D_{21} := U_2 C_{21}$ with

$$C_{21} = (E_1^* C^{-1} E_2)^{-1} = (E_1^* U^* U E_2)^{-1} = (U_1^* U_2)^{-1}.$$
Then $D_{21}$ is a basis for $X_2$ that is dual to the basis $U_1$ of $X_1$. In particular, $D_{21} \subset X_1$ iff $\mathrm{Ran}(E_2) = \mathrm{Ran}(E_1)$ (i.e., when $X_2 = X_1$), and $D_{21} = D_1$ iff $E_1 = E_2$. The dual basis $D_1$ can be represented as $D_1 = P_1 D_{21}$ with $P_1 := D_1 U_1^*$ the orthogonal projector from $X$ onto $X_1$.

Proof. Since $D_{21} = U_2 C_{21}$ is dual to $U_1$,

$$I = D_{21}^* U_1 = (U_2 C_{21})^* U_1 = (U E_2 C_{21})^* U E_1 = (D C^{-1} E_2 C_{21})^* U E_1 = C_{21}^* E_2^* C^{-*} D^* U E_1 = C_{21}^* E_2^* C^{-*} E_1,$$

Fig. 3.1. Dual bases on subspaces given as (space, basis, dual basis).


and so

$$C_{21}^* = (E_2^* C^{-*} E_1)^{-1}, \qquad\text{i.e.,}\qquad C_{21} = (E_1^* C^{-1} E_2)^{-1}.$$
Equivalently,

$$C_{21} = (E_1^* U^* U E_2)^{-1} = (U_1^* U_2)^{-1}.$$
By the hypothesis, $C_{21}$ is an invertible matrix, and so $D_{21}$ is indeed a basis for $X_2$; hence it is not contained in $X_1$ unless $X_1 = X_2$. This establishes the first part. Now, the orthogonal projector onto $X_1$ can be written

$$P_1 = U_1 (U_1^* U_1)^{-1} U_1^* = U_1 C_1 U_1^* = D_1 U_1^*.$$
Then we have

$$P_1 D_{21} = D_1 U_1^* U_2 C_{21} = D_1 E_1^* U^* U E_2 (E_1^* U^* U E_2)^{-1} = D_1.$$
That is, the basis $D_{21}$ projects onto $D_1$. □

For the special case when $X_1 = X_2$ with basis $U_1 = U_2$ and dual basis $D_1 = U_1 C_1$, we have by Theorem 2.1 that $C_1 = (U_1^* U_1)^{-1}$. With $U_1 = U E_1$, this is equivalent to

$$C_1 = ((U E_1)^* (U E_1))^{-1} = (E_1^* U^* U E_1)^{-1} = (E_1^* C^{-1} E_1)^{-1}.$$
This establishes the following corollary:

Corollary 3.2. Suppose $(X, U, D)$ is an inner product space of dimension $n$ with basis $U$ and dual basis $D = UC$ for some $n \times n$ matrix $C$. Let $(X_1, U_1, D_1)$ be an $m$-dimensional subspace of $X$ with basis $U_1 = U E_1$ for some $n \times m$ matrix $E_1$, and with dual basis $D_1 = U_1 C_1$ for some $m \times m$ matrix $C_1$. Then $E_1^* C^{-1} E_1$ is invertible, and

$$C_1 = (E_1^* C^{-1} E_1)^{-1} = (E_1^* U^* U E_1)^{-1}.$$
The previous result shows that the Gram matrix is invertible when $E_1 = E_2$. However, when $E_1 \ne E_2$ this may not be the case. For example, consider $X = \mathcal{P}_1(\mathbb{R})$ with inner product $\langle p, q\rangle = \int_{-1}^{1} p(t) q(t)\,dt$, $X_1 = \mathrm{span}\{t\}$ and $X_2 = \mathcal{P}_0(\mathbb{R})$. Then, for $p = c_1 t$ and $q = c_2$, we have



$$0 = c_1 c_2 \int_{-1}^{1} t\,dt = \langle p, q\rangle = \langle c_1 t,\ c_2 t^0\rangle = (U E_1 c_1)^* (U E_2 c_2) = c_2^T E_2^T U^* U E_1 c_1$$

for all $c_1$. Therefore, $E_1^T U^* U E_2 c_2 = 0$ for $c_2 \ne 0$, and so $E_1^T U^* U E_2$ is not invertible. On the other hand, we show below that in the Bernstein basis, if $X_2 = \mathcal{P}_m(\mathbb{R})$ and $X_1$ is spanned by any $(m+1)$-selection of the Bernstein basis of degree $n$ with $n > m$, then the matrix is invertible.

4. Least squares problems and optimal recovery

Dual basis functions in inner product spaces are useful for computing solutions to least squares problems, as is demonstrated in the next result. It turns out that dual bases on subspaces are perhaps most useful for recovering the approximand, i.e., for finding the function $q$ that projects to $p$. This is demonstrated in the second theorem of this section. Recall that the orthogonal projector of $(X, U, D)$ onto $(X_1, U_1, D_1)$ is $P_1 = U_1 (U_1^* U_1)^{-1} U_1^*$. From above, this reduces to

$$P_1 = U_1 C_1 U_1^* = D_1 U_1^*.$$
Recalling that the matrix $C_1$ is Hermitian (i.e., $C_1 = C_1^*$), we have

$$P_1 = U_1 C_1 U_1^* = U_1 (U_1 C_1^*)^* = U_1 (U_1 C_1)^* = U_1 D_1^*.$$
This gives:

Corollary 4.1. Suppose that $(X_1, U_1, D_1)$ is a subspace of a (real or complex) inner product space $(X, U, D)$. Then the orthogonal projector from $X$ onto $X_1$ is $P_1 = D_1 U_1^* = U_1 D_1^*$.

In the next result, we describe orthogonal projection between subspaces.

Theorem 4.2. Let $(X_k, U_k, D_k)$ for $k = 1, 2$ be two $m$-dimensional subspaces of an inner product space $(X, U, D)$ of dimension $n$. Let $P_1 = D_1 U_1^*$, the orthogonal projector of $X$ onto $X_1$. Assume that $U_1^* U_2$ is invertible, and let $P_{21} = D_{21} U_1^*$. Then $P_{21}$ is:
- a projector from $X$ onto $X_2$;
- orthogonal iff $X_1 = X_2$;


- the inverse of $P_1$ between $X_2$ and $X_1$, i.e., $P_{21} P_1 = I_{X_2}$ and $P_1 P_{21} = I_{X_1}$;
- optimal recovery from $X_2$ onto $X_1$, i.e., if $p = P_1 q$ is the best approximation to $q \in X_2$, then $q = P_{21} p$.

Proof. Recall that $D_{21} = U_2 (U_1^* U_2)^{-1}$. Then $P_{21} = D_{21} U_1^* = U_2 (U_1^* U_2)^{-1} U_1^*$ is a projector since

$$(P_{21})^2 = (U_2 (U_1^* U_2)^{-1} U_1^*)^2 = U_2 (U_1^* U_2)^{-1} U_1^* U_2 (U_1^* U_2)^{-1} U_1^* = U_2 (U_1^* U_2)^{-1} U_1^* = D_{21} U_1^* = P_{21}.$$
Since

$$P_{21}^* = (U_2 (U_1^* U_2)^{-1} U_1^*)^* = U_1 (U_2^* U_1)^{-1} U_2^*$$
and

$$P_{21} = U_2 (U_1^* U_2)^{-1} U_1^*,$$
we see that $P_{21} = P_{21}^*$ iff $\mathrm{Ran}\,U_2 = \mathrm{Ran}\,U_1$ and $\ker U_1^* = \ker U_2^*$. That is, $P_{21}$ is orthogonal iff $X_1 = X_2$. It remains to establish the last two statements of the theorem. Let $q := U_2 b \in X_2$ for some $b \in \mathbb{R}^m$, and $p := P_1 q$ with $p = U_1 a$ for some $a \in \mathbb{R}^m$. Then

$$p = P_1 q = D_1 U_1^* q = U_1 C_1 U_1^* U_2 b = U_1 (U_1^* U_1)^{-1} U_1^* U_2 b,$$
and so

$$a = (U_1^* U_1)^{-1} U_1^* U_2 b.$$
Therefore,

$$b = (U_1^* U_2)^{-1} U_1^* U_1 a,$$
and so

$$q = U_2 b = U_2 (U_1^* U_2)^{-1} U_1^* U_1 a = U_2 (U_1^* U_2)^{-1} U_1^* p = D_{21} U_1^* p.$$
Therefore, $P_{21} := D_{21} U_1^*$ is the inverse of $P_1$ on restriction to the subspaces $X_2$ and $X_1$. These projections are illustrated in Fig. 4.1. □
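To make the construction above concrete, here is a minimal numerical sketch (ours, not from the paper) in Python/NumPy. It uses the monomial basis of $\mathcal{P}_5$ on $[0,1]$, whose Gram matrix has entries $1/(i+j+1)$, together with two hypothetical subspace selections $E_1$, $E_2$; it checks the duality relation $D_{21}^* U_1 = I$ of Theorem 3.1 and the recovery property of Theorem 4.2.

```python
import numpy as np

# Monomial basis 1, t, ..., t^n on [0, 1]; Gram entries <t^i, t^j> = 1/(i+j+1).
n = 5
G = np.array([[1.0 / (i + j + 1) for j in range(n + 1)] for i in range(n + 1)])

def selection(cols, size):
    # E = I(:, cols): picks out the listed basis elements as a subspace basis.
    E = np.zeros((size, len(cols)))
    for j, c in enumerate(cols):
        E[c, j] = 1.0
    return E

E1 = selection([0, 2, 4], n + 1)   # X1 = span{1, t^2, t^4}  (hypothetical choice)
E2 = selection([0, 1, 2], n + 1)   # X2 = span{1, t, t^2}    (hypothetical choice)

# Theorem 3.1: C21 = (E1^T U^*U E2)^{-1} = (U1^* U2)^{-1}, and D21 = U2 C21 is dual to U1.
C21 = np.linalg.inv(E1.T @ G @ E2)
print(np.allclose(C21.T @ (E2.T @ G @ E1), np.eye(3)))   # D21^* U1 = I

# Theorem 4.2: if p = P1 q is the best approximation to q in X2, then q = P21 p.
b = np.array([1.0, -2.0, 0.5])                           # coefficients of q in the basis U2
a = np.linalg.solve(E1.T @ G @ E1, E1.T @ G @ E2 @ b)    # coefficients of p = P1 q in U1
b_recovered = C21 @ (E1.T @ G @ E1) @ a                  # q = D21 U1^* p, in coordinates
print(np.allclose(b, b_recovered))                       # True
```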

5. Dual Bernstein basis functions in subspaces

For the remainder of this paper we apply our construction to subspaces of the space $X = \mathcal{P}_n := \mathcal{P}_n(\mathbb{R})$ of polynomials of degree at most $n$ (dimension $n+1$) over the reals in the Bernstein basis $B^n := [B^n_0, \ldots, B^n_n]$ with
$$B^n_i(t) = \binom{n}{i} (1-t)^{n-i} t^i.$$
Viewed as a map, we write
$$U := B^n : \mathbb{R}^{n+1} \to \mathcal{P}_n : a \mapsto B^n a := \sum_{i=0}^{n} a_i B^n_i.$$

Let
$$\langle f, g\rangle := (n+1)\int_0^1 f(t) g(t)\,dt$$

Fig. 4.1. Orthogonal projection onto $X_1$: $p = P_1 q = D_1 U_1^* q$; oblique projection onto $X_2$: $q = P_{21} p = D_{21} U_1^* p$.


be our inner product. Our task is now to make meaningful choices for $X_1$ and $X_2$ and their bases $U_1 = B^n E_1$ and $U_2 = B^n E_2$, and then to construct dual bases
$$D_{21} = B^n E_2 (E_1^T B^{n*} B^n E_2)^{-1},$$
when possible. The following cases are considered in the remainder of this paper:

(I) Same subspace: Let $E_1 = E_2$ be any linear embedding. In this case $X_1 = X_2$ with bases $U_1 = U_2 = U E_1$.

(Ia) Degree elevation: Here, $X_1 = X_2 = \mathcal{P}_m$ in the Bernstein basis. Then $E_1 = E_2$ is the degree elevation matrix (embedding) of $\mathcal{P}_m$ into $\mathcal{P}_n$. This special case has been studied in [1], but without the factor $n+1$ in the inner product. We show below that with this factor, the dual basis functions form an affine basis.

(II) Selections: Let $X_2 = \mathcal{P}_m$ with Bernstein basis $U_2 = B^n E_2 = B^m$ (i.e., $E_2$ is degree elevation). Let $X_1$ be the subspace of $X = \mathcal{P}_n$ spanned by any $(m+1)$-element subset of $B^n$. In this case, $E_1 = I(:, s)$ with $s : [0:m] \to [0:n]$ any 1–1 selection map.

(IIa) Symmetric class: Let $s = [0, k, 2k, \ldots, n]$ for $m$ and $n$ such that $n = km$. In particular, $n$ is not prime. For example, if $n = 6$ then $s = [0, 2, 4, 6]$ when $k = 2$, and $s = [0, 3, 6]$ when $k = 3$. This configuration is useful for a certain application in the area of discrete blending. In this case the dual basis $D_{21}$ for $X_2$ converges to Lagrange interpolation as $n \to \infty$, as we establish in a later section.

(III) End-point class: Again, $X_2 = \mathcal{P}_m$ in the Bernstein basis. Choose $-1 \le \ell_0$ and $-1 \le \ell_1$ such that $\ell_0 + \ell_1 = m - 1$, and set
$$E_1(i, j) = (-1)^{i+j}\binom{n}{j}\binom{j}{i}\, j! \quad\text{for } j = 0:\ell_0,\ i = 0:j,$$
and
$$E_1(n+1-i,\ m+1-j) = (-1)^{i}\binom{n}{j}\binom{j}{i}\, j! \quad\text{for } j = 0:\ell_1,\ i = 0:j,$$

with all other entries zero. In this case, the dual basis $D_{21}$ for $X_2$ converges to Hermite interpolation as $n \to \infty$, as we establish in a later section.

Our primary goal is to construct dual bases

$$D_{21} = B^n E_2 (E_1^T B^{n*} B^n E_2)^{-1},$$
when possible. To make this determination we need to establish invertibility of the Gram matrix $E_1^T B^{n*} B^n E_2$. In general, this is not a simple task. We do so here for the cases listed above. Case (I) is simply a corollary to Corollary 3.2.

Corollary 5.1. For (I), let $E_1 = E_2$ be any linear embedding of a subspace $X_1 = X_2$ into $X = \mathcal{P}_n$. Then $E_1^T B^{n*} B^n E_1$ is invertible, and

$$D_{21} = B^n E_1 (E_1^T B^{n*} B^n E_1)^{-1}.$$
In case (Ia), when $X_1 = \mathcal{P}_m$, the matrix $E_1$ is degree elevation, and $D_{21} = B^m C_m$ with $C_m = (B^{m*} B^m)^{-1}$.

We note that the last case in the corollary is exactly the problem studied in [1]. In that paper, an explicit expression is given for the matrix $C_m$. And so we have established invertibility of the Gram matrix for case (I). To determine invertibility for the remaining cases, we first establish some results.

Lemma 5.2. Let $E_k : \mathbb{R}^{m+1} \to \mathbb{R}^{n+1}$ for $k = 1, 2$ with $m \le n$. Assume that $E_2$ is the degree elevation matrix (necessarily 1–1). Then $E_1^T B^{n*} B^n E_2$ is invertible iff $E_1^T E_2$ is invertible, and in this case $E_1$ is also 1–1.

Proof. In [4] it was proved that the orthogonal complement of $\mathcal{P}_m$ in $\mathcal{P}_n$ with respect to the $L_2$ inner product coincides with the orthogonal complement with respect to the $\ell_2$ inner product of degree-raised BB coefficients. That is, for $p = B^m a = B^n E_2 a \in \mathcal{P}_m$ and $q = B^n b \in \mathcal{P}_n$,

$$\langle p, q\rangle_{L_2} = \langle B^n E_2 a,\ B^n b\rangle_{L_2} = a^T E_2^T B^{n*} B^n b = 0$$
iff

$$\langle p, q\rangle_{\ell_2} = (E_2 a) \cdot b = a^T E_2^T b = 0.$$
Since this is true for all coefficient vectors $a \in \mathbb{R}^{m+1}$, we have $E_2^T B^{n*} B^n b = 0$ iff $E_2^T b = 0$. That is, $\ker(E_2^T B^{n*} B^n) = \ker(E_2^T)$. Therefore, $\ker(E_2^T B^{n*} B^n E_1) = \ker(E_2^T E_1)$. Since these latter matrices are square, $E_2^T B^{n*} B^n E_1$ is invertible iff $E_2^T E_1$ is invertible. On taking adjoints, $E_1^T B^{n*} B^n E_2$ is invertible iff $E_1^T E_2$ is invertible. This establishes the main statement. For the last statement, we note that when $E_1^T E_2$ is invertible, $E_1^T$ is necessarily onto; hence $E_1$ is 1–1. □

For showing invertibility of the Gram matrix in (II), we will also use the following result from [2]. For this, recall Pascal's matrix in upper triangular form, and sub-matrices of this matrix determined by selecting distinct rows $r = [r_0, \ldots, r_d]$ and distinct columns $c = [c_0, \ldots, c_d]$, with $0 \le r_i < r_{i+1}$ and $0 \le c_i < c_{i+1}$:


$$T = \left[\binom{j}{i}\right] = \begin{bmatrix} 1 & 1 & 1 & 1 & \cdots \\ 0 & 1 & 2 & 3 & \cdots \\ 0 & 0 & 1 & 3 & \cdots \\ \vdots & & & \ddots & \end{bmatrix}
\qquad\text{and}\qquad
T(r, c) = \left[\binom{c_j}{r_i}\right] = \begin{bmatrix} \binom{c_0}{r_0} & \binom{c_1}{r_0} & \cdots & \binom{c_d}{r_0} \\ \binom{c_0}{r_1} & \binom{c_1}{r_1} & \cdots & \binom{c_d}{r_1} \\ \vdots & \vdots & & \vdots \\ \binom{c_0}{r_d} & \binom{c_1}{r_d} & \cdots & \binom{c_d}{r_d} \end{bmatrix}.$$
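The submatrix $T(r, c)$ is easy to form directly, and the invertibility criterion of the next lemma (Lemma 5.3) can then be checked numerically. A small sketch (ours, assuming NumPy):

```python
import numpy as np
from math import comb

def pascal_submatrix(r, c):
    # T(r, c)[i, j] = C(c_j, r_i): rows r and columns c of the upper-triangular Pascal matrix.
    return np.array([[comb(cj, ri) for cj in c] for ri in r], dtype=float)

# Lemma 5.3 ([2]): T(r, c) is invertible iff r_i <= c_i for every i.
print(np.linalg.det(pascal_submatrix([0, 2, 3], [1, 3, 4])))   # nonzero: 0<=1, 2<=3, 3<=4
print(np.linalg.det(pascal_submatrix([0, 2, 3], [0, 1, 4])))   # zero: 2 > 1 violates the criterion
```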

Then we have the following result from [2].

Lemma 5.3 (see [2]). The matrix $T(r, c)$ is invertible iff $r \le c$ (i.e., $r_i \le c_i$ for $i = 0:d$).

Now we can establish invertibility of the Gram matrix in case (II).

Theorem 5.4. For (II), let $X = \mathcal{P}_n$ in the Bernstein basis $B^n$, let $X_2 = \mathcal{P}_m$ with Bernstein basis $B^m$, for $m \le n$ (hence $E_2$ is degree elevation), and let $X_1$ be the $(m+1)$-dimensional subspace with basis $U_1 = B^n E_1 = B^n(s)$ for some $(m+1)$-selection $s$. Then the matrix $E_1^T B^{n*} B^n E_2$ is invertible.

Proof. By the previous lemma, $E_1^T B^{n*} B^n E_2$ is invertible iff $E_1^T E_2$ is invertible. Let $M_m$ be the matrix converting power coefficients to Bernstein coefficients of degree $m$. Its inverse is given explicitly by
$$M_m^{-1} = \left[(-1)^{i-j}\binom{m}{i}\binom{i}{j}\right].$$
Since $M_m$ is invertible, $E_1^T E_2$ is invertible iff $E_1^T E_2 M_m$ is invertible. But note that $E_2 M_m = M_n(:, 0:m)$ (since powers $x^k$ for $k > m$ are not in $\mathcal{P}_m$). Therefore, $E_1^T E_2$ is invertible iff $E_1^T M_n(:, 0:m)$ is invertible; that is, iff $M_n(s, 0:m)$ is invertible, where $s$ is the $(m+1)$-selection of the rows of $M_n(:, 0:m)$ determined by $E_1$. Now,
$$M_n = T(0:n, 0:n)^T\, \mathrm{Diag}\!\left(\binom{n}{0}^{-1}, \binom{n}{1}^{-1}, \ldots, \binom{n}{n}^{-1}\right),$$
where $T$ is Pascal's matrix. It follows that $M_n(s, 0:m)$ is invertible iff $T^T(s, 0:m) = T(0:m, s)^T$ is invertible, which is true iff $T(0:m, s)$ is invertible. Let $r = 0:m$. Then $r \le s$ (i.e., $r_i \le s_i$ for $i = 0:m$, since the entries of $s$ are strictly increasing). Therefore, by the lemma above (from [2]), $T(0:m, s)$ is invertible, and therefore so is $T(0:m, s)^T$. Therefore, $M_n(s, 0:m)$ is invertible, and so is $E_1^T E_2$ for any selection $E_1 := I(:, s)$. By the previous lemma, the matrix $E_1^T B^{n*} B^n E_2$ is invertible. □

The next result establishes invertibility of the Gram matrix in case (III).

Theorem 5.5. For (III), let $X = \mathcal{P}_n$ in the Bernstein basis $B^n$, let $X_2 = \mathcal{P}_m$ with Bernstein basis $B^m$, for $m \le n$ (hence $E_2$ is degree elevation), and let $X_1$ be the $(m+1)$-dimensional subspace with basis $U_1 = B^n E_1$ with $E_1$ as given in (III) above. Then the matrix $E_1^T B^{n*} B^n E_2$ is invertible.

Proof. Consider the adjoint

$$(E_1^T B^{n*} B^n E_2)^* = E_2^T B^{n*} B^n E_1 = B^{m*} B^n E_1.$$
This can be partitioned as

$$B^{m*} B^n E_1 = \frac{n+1}{m+n+1}\left[\frac{\binom{m}{i}\binom{n}{j}}{\binom{m+n}{i+j}}\right] E_1 = \begin{bmatrix} G_1 & G_2 & G_3 \end{bmatrix}\begin{bmatrix} E_{11} & 0 \\ 0 & 0 \\ 0 & E_{32} \end{bmatrix} = \begin{bmatrix} G_1 & G_3 \end{bmatrix}\begin{bmatrix} E_{11} & 0 \\ 0 & E_{32} \end{bmatrix}.$$

The matrices $E_{11}$ and $E_{32}$ are triangular, each with nonzero diagonal entries, and so the second matrix in the last term is invertible. The first matrix in the last term is exactly $B^{m*} B^n(s)$, where $s$ is the selection

$$s = [0, \ldots, \ell_0,\ n - \ell_1, \ldots, n].$$
But it was shown in Theorem 5.4 that for an arbitrary selection this matrix is invertible; that is, this matrix is exactly what is considered in case (II). Therefore, both matrices in the last display are invertible, hence so is $B^{m*} B^n E_1$. Therefore, $E_1^T B^{n*} B^n E_2$ is invertible, and so the dual basis $D_{21}$ exists for case (III). □
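The Gram matrices of this section can be formed explicitly from the formula for $\langle B^m_i, B^n_j\rangle$ recalled in Section 6. The following sketch (ours, assuming NumPy) builds $M = E_1^T B^{n*} B^m$ for the symmetric selection of case (IIa), confirms numerically that it is invertible, and forms the dual basis coefficients $D_{21} = B^m M^{-1}$.

```python
import numpy as np
from math import comb

def gram(m, n):
    # <B^n_i, B^m_j> = (n+1) C(m,j) C(n,i) / ((m+n+1) C(m+n, i+j)),
    # for the inner product <f, g> = (n+1) * integral_0^1 f(t) g(t) dt.
    return np.array([[(n + 1) * comb(m, j) * comb(n, i) / ((m + n + 1) * comb(m + n, i + j))
                      for j in range(m + 1)] for i in range(n + 1)])

m, k = 3, 2
n = k * m
s = [k * i for i in range(m + 1)]       # symmetric selection [0, k, 2k, ..., n], case (IIa)
M = gram(m, n)[s, :]                     # M = E_1^T B^{n*} B^m, an (m+1) x (m+1) matrix
print(np.linalg.cond(M))                 # finite, so M is invertible (Theorem 5.4)

C21 = np.linalg.inv(M)                   # D21 = B^m C21: columns hold the B^m-coefficients of D21_i
print(np.round(C21, 3))
```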


6. Properties of dual Bernstein bases on subspaces

In this section and the following sections we determine properties of the dual basis functions for the configurations (I)–(III) given above. In this section, we show that for cases (Ia) and (II) the dual bases are affine. Recall first the inner product:
$$\langle f, g\rangle := (n+1)\int_0^1 f(t) g(t)\,dt.$$

For this inner product, it is well-known that

$$B^{n*} B^m = \frac{n+1}{m+n+1}\left[\frac{\binom{m}{i}\binom{n}{j}}{\binom{m+n}{i+j}}\right].$$
To establish the main result, we derive a useful identity based on the Vandermonde convolution

$$\sum_{k}\binom{r}{m+k}\binom{s}{n-k} = \binom{r+s}{m+n}.$$

Lemma 6.1. For $j = 0:m$,
$$\sum_{i=0}^{m} \frac{\binom{m}{i}\binom{n}{j}}{\binom{m+n}{i+j}} = \frac{m+n+1}{n+1}.$$

Proof. In the proof we use the extension of binomial coefficients to negative upper index:
$$\binom{-n}{m} := \frac{(-n)(-n-1)\cdots(-n-m+1)}{m!} = (-1)^m \binom{n+m-1}{m},$$
where $m$ and $n$ are both positive integers. Then,

$$\begin{aligned}
\sum_{i=0}^{m} \frac{\binom{m}{i}\binom{n}{j}}{\binom{m+n}{i+j}}
&= \frac{m!\,n!}{(m+n)!}\sum_{i=0}^{m}\binom{m+n-j-i}{m-i}\binom{j+i}{i} &&\text{(simplification)}\\
&= \frac{m!\,n!}{(m+n)!}\sum_{i=0}^{m}(-1)^{m-i}\binom{j-n-1}{m-i}\,(-1)^{i}\binom{-j-1}{i} &&\text{(removing the upper index)}\\
&= \frac{m!\,n!}{(m+n)!}\,(-1)^{m}\binom{-n-2}{m} &&\text{(by the Vandermonde convolution)}\\
&= \frac{m!\,n!}{(m+n)!}\,(-1)^{m}(-1)^{m}\binom{m+n+1}{m} &&\text{(upper negation)} \;=\; \frac{m+n+1}{n+1}. \qquad\square
\end{aligned}$$
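The identity of Lemma 6.1 is also easy to confirm in exact rational arithmetic; a small check (ours) over a range of $m$, $n$, $j$:

```python
from math import comb
from fractions import Fraction

def lemma_6_1_holds(m, n, j):
    total = sum(Fraction(comb(m, i) * comb(n, j), comb(m + n, i + j)) for i in range(m + 1))
    return total == Fraction(m + n + 1, n + 1)

print(all(lemma_6_1_holds(m, n, j)
          for m in range(1, 6) for n in range(m, 9) for j in range(m + 1)))   # True
```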

The next lemma is used to determine the row sums of the inverse of our Gram matrix.

Lemma 6.2. Suppose that $A$ is an invertible matrix with row sum 1 for all rows. Then the row sum of $A^{-1}$ is also 1, for all rows. (Indeed, if $A\mathbf{1} = \mathbf{1}$, where $\mathbf{1}$ is the vector of ones, then $\mathbf{1} = A^{-1}\mathbf{1}$.)

We are now ready to establish the main result of this section.

Proposition 6.3. For cases (Ia) and (II) the basis $D_{21}$ is affine. That is, $\sum_{i=0}^{m} D^{21}_i = 1$.

Proof. Recall that $D_{21} = B^n E_2 M^{-1}$ with $M := E_1^T B^{n*} B^n E_2$. Since $E_2$ is degree elevation, $D_{21} = B^m M^{-1}$ with $M = E_1^T B^{n*} B^m$. For (Ia), $E_1 = E_2$, and therefore $M = B^{m*} B^m$. Recalling that

$$B^{n*} B^m = \frac{n+1}{m+n+1}\left[\frac{\binom{m}{i}\binom{n}{j}}{\binom{m+n}{i+j}}\right],$$


it follows by the previous lemma that

$$\sum_{j=0}^{m} M(i, j) = \sum_{j=0}^{m} (B^{m*} B^m)(i, j) = \frac{m+1}{2m+1}\sum_{j=0}^{m}\frac{\binom{m}{i}\binom{m}{j}}{\binom{2m}{i+j}} = \frac{m+1}{2m+1}\cdot\frac{2m+1}{m+1} = 1$$

for any row index $i$. In case (II), $M := E_1^T B^{n*} B^m = (B^n(s))^* B^m$ for some $(m+1)$-selection $s$ of the basis elements of $B^n$. And so, for all $i$ it follows that

$$\sum_{j=0}^{m} M(i, j) = \sum_{j=0}^{m} \big((B^n(s))^* B^m\big)(i, j) = \frac{n+1}{m+n+1}\sum_{j=0}^{m}\frac{\binom{m}{j}\binom{n}{s(i)}}{\binom{m+n}{s(i)+j}} = \frac{n+1}{m+n+1}\cdot\frac{m+n+1}{n+1} = 1.$$

In both case (Ia) and case (II) the row sum of $M$ is 1, for all rows. Therefore, in both cases the row sum of $A := M^{-1}$ is 1, for all rows. And so we get
$$\sum_{i=0}^{m} D^{21}_i = \sum_{i=0}^{m} (B^m A)(i) = \sum_{i=0}^{m}\sum_{j=0}^{m} B^m_j\, A(j, i) = \sum_{j=0}^{m} B^m_j \sum_{i=0}^{m} A(j, i) = \sum_{j=0}^{m} B^m_j = 1.$$
Therefore, the basis $D_{21}$ is affine for case (Ia) and case (II). □
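Proposition 6.3 can also be observed numerically: the row sums of $M$ and of $A = M^{-1}$ are 1, and since $\sum_j B^m_j = 1$, the functions $D^{21}_i$ sum to the constant 1. A short sketch (ours, assuming NumPy):

```python
import numpy as np
from math import comb

def gram(m, n):
    # <B^n_i, B^m_j> for the inner product <f, g> = (n+1) * integral_0^1 f(t) g(t) dt.
    return np.array([[(n + 1) * comb(m, j) * comb(n, i) / ((m + n + 1) * comb(m + n, i + j))
                      for j in range(m + 1)] for i in range(n + 1)])

m, n = 3, 9
s = [0, 3, 6, 9]                          # an (m+1)-selection of B^n, case (II)
M = gram(m, n)[s, :]                       # row sums are 1 by Lemma 6.1
A = np.linalg.inv(M)                       # row sums of the inverse are 1 by Lemma 6.2
print(np.allclose(M.sum(axis=1), 1.0), np.allclose(A.sum(axis=1), 1.0))   # True True
# Since sum_j B^m_j = 1, it follows that sum_i D21_i = sum_j B^m_j * (row sum of A) = 1.
```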

7. Symmetric class: convergence to Lagrange interpolation

Recall case (IIa). In this case we take a selection of the basis elements of $B^n$ that are spread out symmetrically. As above, this is constructed as follows:

(IIa) Symmetric class: Let $s = [0, k, 2k, \ldots, n]$ for $m$ and $n$ such that $n = km$. In particular, $n$ is not prime. For example, if $n = 6$ then $s = [0, 2, 4, 6]$ when $k = 2$, and $s = [0, 3, 6]$ when $k = 3$. This configuration is useful for a certain application in the area of discrete blending.

In the next result, we establish the connection between this dual basis and Lagrange interpolation. To prove the result, we will need the following lemma from [3].

Lemma 7.1. For $x \in \mathbb{R}$ and $i \ge 0$,
$$\lim_{k\to\infty}\frac{\binom{kx}{i}}{\binom{k}{i}} = x^i,$$
with $(\cdot)! := \Gamma(\cdot + 1)$ used to define the binomial coefficient for non-integer arguments.

Then we have:

Theorem 7.2. Let $X = \mathcal{P}_n$ and $X_2 = \mathcal{P}_m$ with $n = mk$. Let $D^{21}_k$ be the basis for $X_2$ that is dual to the basis $B^n E_1$ of $X_1$, with respect to $k$ in the symmetric case described above. Then
$$\lim_{k\to\infty} D^{21}_k = L^m,$$
with $L^m$ the Lagrange basis for $\mathcal{P}_m$ corresponding to the nodes $\tfrac{i}{m}$ for $i = 0:m$.
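Before turning to the proof, a numerical check (ours, assuming NumPy) may help: evaluating the dual basis $D^{21}_k = B^m M^{-1}$ at the nodes $i/m$ shows the Lagrange property $D^{21}_l(i/m) \to \delta_{il}$ emerging as $k$ grows.

```python
import numpy as np
from math import comb

def gram(m, n):
    # <B^n_i, B^m_j> for <f, g> = (n+1) * integral_0^1 f(t) g(t) dt.
    return np.array([[(n + 1) * comb(m, j) * comb(n, i) / ((m + n + 1) * comb(m + n, i + j))
                      for j in range(m + 1)] for i in range(n + 1)])

def bernstein(m, j, t):
    return comb(m, j) * (1 - t) ** (m - j) * t ** j

m = 3
V = np.array([[bernstein(m, j, i / m) for j in range(m + 1)] for i in range(m + 1)])  # B^m_j(i/m)

for k in (1, 4, 16, 64):
    n = k * m
    s = [k * i for i in range(m + 1)]
    C21 = np.linalg.inv(gram(m, n)[s, :])        # D21_k = B^m C21
    E = V @ C21                                   # E[i, l] = D21_l(i / m)
    print(k, float(np.max(np.abs(E - np.eye(m + 1)))))   # deviation from the identity shrinks
```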

Proof. For each $k$, the dual basis expressed in B-form is $D^{21}_k = B^m C^k_{21}$ with $C^k_{21} = (E_{1k}^T B^{n*} B^n E_2)^{-1}$. We write this as $D^{21}_k A_k = B^m$ with $A_k = E_{1k}^T B^{n*} B^n E_2$. Let
$$\Lambda := \left[\delta_0,\ \delta_{\frac{1}{m}},\ \ldots,\ \delta_{\frac{m-1}{m}},\ \delta_1\right]$$
be point evaluation at the nodes $\tfrac{i}{m}$, $i = 0:m$. If, in the limiting case, $D^{21}$ is the Lagrange basis, then we have $\Lambda^T D^{21} = I$, and so
$$A = \Lambda^T B^m = \left[B^m_j\!\left(\tfrac{i}{m}\right)\right],$$
with row index $i$ and column index $j$. Our goal is to determine this matrix in the limiting case.


Fig. 9.1. Dual bases $D^{21}$ in the symmetric case, with $[m, k] = [m, 1]$ and $[m, 3]$ for $m = 1{:}4$ (first two rows), and the Lagrange basis (bottom row).

For each $k$, we have
$$D^{21}_k = B^n E_2 C^k_{21} = B^n E_2 (E_{1k}^T B^{n*} B^n E_2)^{-1} = B^m \big((B^{m*} B^n E_{1k})^T\big)^{-1} = B^m \big((B^{m*} B^{km}(s))^T\big)^{-1} = B^m (A_k^T)^{-1},$$
with
$$A_k := \frac{mk+1}{m+mk+1}\left[\frac{\binom{m}{i}\binom{km}{kj}}{\binom{m+km}{i+kj}}\right].$$
From this we get $B^m = D^{21}_k A_k^T$. On expanding $A_k$, we get

$$A_k(i, j) = \frac{mk+1}{m+mk+1}\binom{m}{i}\frac{(km)!\,(i+kj)!\,(m+km-i-kj)!}{(m+km)!\,(kj)!\,(km-kj)!} = \frac{mk+1}{m+mk+1}\cdot\frac{\binom{(j+\frac{i}{k})k}{i}\binom{(m-j+\frac{m-i}{k})k}{m-i}}{\binom{(m+\frac{m}{k})k}{m}}.$$

Let $A := \lim_{k\to\infty} A_k$. This gives
$$A(i, j) = \lim_{k\to\infty} A_k(i, j) = \lim_{k\to\infty}\frac{\binom{jk}{i}\binom{(m-j)k}{m-i}}{\binom{mk}{m}}.$$

On further simplification,
$$\frac{\binom{jk}{i}\binom{(m-j)k}{m-i}}{\binom{mk}{m}} = \frac{\binom{jk}{i}}{\binom{k}{i}}\cdot\frac{\binom{(m-j)k}{m-i}}{\binom{k}{m-i}}\cdot\frac{\binom{k}{m}}{\binom{mk}{m}}\cdot\frac{\binom{k}{i}\binom{k}{m-i}}{\binom{k}{m}} = \frac{\binom{jk}{i}}{\binom{k}{i}}\cdot\frac{\binom{(m-j)k}{m-i}}{\binom{k}{m-i}}\cdot\frac{\binom{k}{m}}{\binom{mk}{m}}\cdot\binom{m}{i}\,\frac{k!\,(k-m)!}{(k-i)!\,(k-m+i)!}.$$
Now,
$$\lim_{k\to\infty}\frac{k!\,(k-m)!}{(k-i)!\,(k-m+i)!} = 1,$$
since it is a rational function of $k$ with the same leading term on the top and on the bottom. We now apply the above lemma to the first three fractions, to get

$$A(i, j) = \lim_{k\to\infty}\frac{\binom{jk}{i}}{\binom{k}{i}}\cdot\frac{\binom{(m-j)k}{m-i}}{\binom{k}{m-i}}\cdot\frac{\binom{k}{m}}{\binom{mk}{m}}\cdot\binom{m}{i} = \binom{m}{i}\, j^i\,(m-j)^{m-i}\,\frac{1}{m^m} = \binom{m}{i}\left(\frac{j}{m}\right)^i\left(1 - \frac{j}{m}\right)^{m-i} = B^m_i\!\left(\frac{j}{m}\right).$$

Therefore,

$$A^T = \left[B^m_i\!\left(\tfrac{j}{m}\right)\right]^T = \left[B^m_j\!\left(\tfrac{i}{m}\right)\right],$$
which is the transformation matrix from Bernstein to Lagrange basis derived at the start of this proof. □

The first example in the last section of this paper will demonstrate the convergence to Lagrange interpolation (see Fig. 9.1).

8. End-point class: convergence to Hermite interpolation

Recall case (III). In this case we are looking at basis functions of $X_1 = B^n E_1$ associated with the left and right end points. As above, this is constructed as follows:

(III) End-point class: Choose $-1 \le \ell_0$ and $-1 \le \ell_1$ such that $\ell_0 + \ell_1 = m - 1$, and set

$$E_1(i, j) = (-1)^{i+j}\binom{n}{j}\binom{j}{i}\, j! \quad\text{for } j = 0:\ell_0,\ i = 0:j,$$
and
$$E_1(n+1-i,\ m+1-j) = (-1)^{i}\binom{n}{j}\binom{j}{i}\, j! \quad\text{for } j = 0:\ell_1,\ i = 0:j,$$

with all other entries zero. Note that when $\ell_0$ or $\ell_1$ is negative, no boundary condition is prescribed at that end point. Our goal is to show that this configuration converges to Hermite polynomial interpolation as $n \to \infty$. Note the following lemma:

Lemma 8.1.
$$D^i B^m_j(0) = (-1)^{i+j}\, i!\,\binom{m}{i}\binom{i}{j}$$
and
$$D^i B^m_j(1) = (-1)^{m-j}\, i!\,\binom{m}{i}\binom{i}{m-j},$$
with $\binom{i}{j} := 0$ if $j < 0$ or $j > i$.

Now we establish our main result.

Theorem 8.2. Let $X = \mathcal{P}_n$ and $X_2 = \mathcal{P}_m$ with $m \le n$. Let $D^{21}_n$ be the dual basis with respect to $n$ in the end-point case described above. Then
$$\lim_{n\to\infty} D^{21}_n = H^m,$$
with $H^m$ the Hermite basis for $\mathcal{P}_m$ on $[0, 1]$.
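The derivative formulas of Lemma 8.1 above can be checked directly; a small sketch (ours, assuming NumPy) differentiates the Bernstein polynomials in the power basis and compares with the stated values at both end points.

```python
import numpy as np
from math import comb, factorial
from numpy.polynomial import polynomial as P

def bernstein_power_coeffs(m, j):
    # Power-basis coefficients of B^m_j(t) = C(m, j) t^j (1 - t)^(m - j).
    tail = [comb(m, j) * comb(m - j, r) * (-1) ** r for r in range(m - j + 1)]
    return np.concatenate([np.zeros(j), np.array(tail, dtype=float)])

m = 4
for j in range(m + 1):
    c = bernstein_power_coeffs(m, j)
    for i in range(m + 1):
        d = P.polyder(c, i)
        at0 = (-1) ** (i + j) * factorial(i) * comb(m, i) * comb(i, j)
        at1 = (-1) ** (m - j) * factorial(i) * comb(m, i) * comb(i, m - j)
        assert np.isclose(P.polyval(0.0, d), at0) and np.isclose(P.polyval(1.0, d), at1)
print("Lemma 8.1 formulas confirmed for m =", m)
```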



Fig. 9.2. Dual bases $D^{21}$ in the end-point case, with $[\ell_0, \ell_1] = [0,0], [1,0], [1,1], [2,1]$, with $n = m$ in the first row and $n = 4m$ in the second row. The third row is the Hermite basis.

Proof. As in the proof of the symmetric case, the dual basis expressed in B-form is $D^{21}_n = B^m C^n_{21}$ with $C^n_{21} = (E_{1n}^T B^{n*} B^m)^{-1}$. We write this as $D^{21}_n A_n = B^m$ with $A_n = E_{1n}^T B^{n*} B^m$. Let

$$\Lambda := \left[\delta_0,\ \delta_0 D,\ \ldots,\ \delta_0 D^{\ell_0},\ \delta_1 D^{\ell_1},\ \ldots,\ \delta_1 D,\ \delta_1\right]$$
be derivative evaluation at the two end points. If, in the limiting case, $D^{21}$ is the Hermite basis, then we have $\Lambda^T D^{21} = I$, and so
$$A = \Lambda^T B^m = \begin{bmatrix} D^i B^m_j(0) \\ D^i B^m_j(1) \end{bmatrix}.$$

Now, $B^{m*} B^n E_{1n}$ can be partitioned as

$$B^{m*} B^n E_{1n} = \frac{n+1}{m+n+1}\left[\frac{\binom{m}{i}\binom{n}{j}}{\binom{m+n}{i+j}}\right] E_{1n} = \begin{bmatrix} G_1 & G_2 & G_3 \end{bmatrix}\begin{bmatrix} E_{11} & 0 \\ 0 & 0 \\ 0 & E_{32} \end{bmatrix} = \begin{bmatrix} G_1 E_{11} & G_3 E_{32} \end{bmatrix}.$$

The matrix E11 is upper triangular, and E32 is lower triangular. In particular,

$$G_1 E_{11} = \frac{n+1}{m+n+1}\left[\frac{\binom{m}{i}\binom{n}{j}}{\binom{m+n}{i+j}}\right] E_{11} = \frac{n+1}{m+n+1}\left[\sum_{k=0}^{\ell_0}\frac{\binom{m}{i}\binom{n}{k}}{\binom{m+n}{i+k}}\,(-1)^{k+j}\binom{n}{j}\binom{j}{k}\, j!\right].$$
Now, considering those terms involving $n$,
$$\frac{n+1}{m+n+1}\cdot\frac{\binom{n}{j}\binom{n}{k}}{\binom{m+n}{i+k}} \approx \frac{n^{k+j}}{n^{k+i}} = n^{j-i}$$



Fig. 9.3. Dual basis curves of the symmetric class ($m = 4$ and $k = 2, 12$, with $b = [1, 2, 2, 1, 1]$) and the end-point class ($\ell_0 = \ell_1 = 1$ and $n = 5, 25$, with $b = [0, 1, 1, 0]$).

for large $n$. Since $E_{11}$ is upper triangular, we have $j \le i$ (other terms are zero). The terms converge to zero when $j < i$, and to 1 when $j = i$. This leaves
$$\lim_{n\to\infty} G_1 E_{11} = \left[\sum_{k=0}^{\ell_0}(-1)^{k+j}\binom{j}{i}\binom{j}{k}\, j!\right],$$
which is exactly the condition $D^i B^m_j(0)$ of the previous lemma for Hermite interpolation. Symmetrically, one gets the corresponding conditions at the right end point as well. Therefore, this converges to Hermite interpolation. □
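The limiting object in Theorem 8.2 can be checked independently in the cubic case $m = 3$, $\ell_0 = \ell_1 = 1$: the standard cubic Hermite basis on $[0,1]$ is dual to the functionals $[\delta_0, \delta_0 D, \delta_1 D, \delta_1]$. A small sketch (ours, assuming NumPy):

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Cubic Hermite basis on [0, 1] in power-basis coefficients, ordered to match the
# functionals [delta_0, delta_0 D, delta_1 D, delta_1] (values and slopes at the ends).
H = [
    np.array([1.0, 0.0, -3.0, 2.0]),   # value 1 at t = 0
    np.array([0.0, 1.0, -2.0, 1.0]),   # slope 1 at t = 0
    np.array([0.0, 0.0, -1.0, 1.0]),   # slope 1 at t = 1
    np.array([0.0, 0.0, 3.0, -2.0]),   # value 1 at t = 1
]

def lam(c):
    d = P.polyder(c)
    return np.array([P.polyval(0.0, c), P.polyval(0.0, d), P.polyval(1.0, d), P.polyval(1.0, c)])

A = np.array([lam(c) for c in H]).T      # A[i, j] = lambda_i(H_j)
print(np.allclose(A, np.eye(4)))         # True: the Hermite basis is dual to Lambda
```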

9. Computation

In this section we plot the dual basis functions $D_{21}$ for the symmetric and end-point polynomial cases, and plot curves in these bases. In Fig. 9.1, the dual basis functions for the symmetric case are plotted in the first two rows. These are compared to the Lagrange basis functions, to which they converge. In Fig. 9.2, the dual basis functions for the end-point case are plotted in the first two rows, and the Hermite basis, to which they converge, is plotted in the third row. The curves in Fig. 9.3 are represented in the dual basis
$$p = D_{21} b = \sum_{i=0}^{m} b_i D^{21}_i(\cdot).$$
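The symmetric-class curves of Fig. 9.3 can be reproduced along these lines; the sketch below (ours, assuming NumPy, and using arbitrary sample coefficients rather than the paper's data) evaluates $p = D_{21}b$ at the nodes $i/m$ and shows the near-interpolation of the coefficients improving as $k$ grows.

```python
import numpy as np
from math import comb

def gram(m, n):
    return np.array([[(n + 1) * comb(m, j) * comb(n, i) / ((m + n + 1) * comb(m + n, i + j))
                      for j in range(m + 1)] for i in range(n + 1)])

def bernstein_row(m, t):
    return np.array([comb(m, j) * (1 - t) ** (m - j) * t ** j for j in range(m + 1)])

m = 4
b = np.array([1.0, 2.0, -2.0, -1.0, 1.0])      # arbitrary sample coefficients (not the paper's data)
for k in (2, 12):
    n = k * m
    s = [k * i for i in range(m + 1)]
    C21 = np.linalg.inv(gram(m, n)[s, :])      # D21 = B^m C21
    coeff = C21 @ b                             # p = D21 b = B^m (C21 b)
    values = [bernstein_row(m, i / m) @ coeff for i in range(m + 1)]
    print(k, np.round(values, 3))               # p(i/m) approaches b_i as k grows
```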

The curves on the left side of Fig. 9.3 are computed using the symmetric class construction given above. The degree of both curves is $m = 4$. As $k$ jumps from 2 to 12 (hence $n = 8$ and $n = 48$), the curve comes much closer to interpolating the coefficients. That is, the second curve comes closer to Lagrange interpolation, as we expect. Moreover, the curves roughly follow the control polygon because these bases are affine, but not completely, because the bases are not convex. These two aspects (interpolation and affineness) give some shape control to curves in this basis. The curves on the right side are computed using the end-point construction. The curves are cubic ($m = 3$), and the data $b$ represents the end points and end slopes; that is, $p(0) = 0$, $p'(0) = 1$, $p'(1) = 1$, and $p(1) = 0$. As $n$ jumps from 5 to 25, the curve comes much closer to Hermite interpolation of the data, as we expect.

References

[1] B. Jüttler, The dual basis functions of the Bernstein polynomials, Adv. Comput. Math. 8 (1998) 345–352.
[2] S. Kersey, Invertibility of submatrices of Pascal's matrix, manuscript, 2013. arXiv:1303.6159.
[3] S. Kersey, Dual basis functions on subspaces of polynomial spaces, manuscript, 2013.
[4] D. Lutterkort, J. Peters, U. Reif, Polynomial degree reduction in the L2-norm equals best Euclidean approximation in Bézier coefficients, Comput. Aided Geom. Des. 16 (7) (1999) 607–612.
[5] P. Woźny, S. Lewanowicz, Multi-degree reduction of Bézier curves with constraints using dual Bernstein basis polynomials, Comput. Aided Geom. Des. 26 (2009) 566–579.
[6] A. Rababah, M. Al-Natour, The weighted dual functionals for the univariate Bernstein basis, Appl. Math. Comput. 199 (2008) 1581–1590.