Journal of Computational and Applied Mathematics 78 (1997) 355-370

Matrices for the direct determination of the barycentric weights of rational interpolation

Jean-Paul Berrut^a, Hans D. Mittelmann^b

a Université de Fribourg, Mathématiques, CH-1700 Fribourg/Pérolles, Switzerland
b Department of Mathematics, Arizona State University, Tempe, Arizona 85287-1804, USA

Received 22 March 1996; revised 30 October 1996
Abstract

Let x_0, ..., x_N be N+1 interpolation points (nodes) and f_0, ..., f_N be N+1 interpolation data. Then every rational function r with numerator and denominator degrees ≤ N interpolating these data can be written in its barycentric form

$$ r(x) = \sum_{k=0}^{N} \frac{u_k}{x-x_k} f_k \Big/ \sum_{k=0}^{N} \frac{u_k}{x-x_k}, $$

which is completely determined by a vector u of N+1 barycentric weights u_k. Finding u is therefore an alternative to the determination of the coefficients in the canonical form of r; it is advantageous inasmuch as u contains information about unattainable points and poles. In classical rational interpolation the numerator and the denominator of r are made unique (up to a constant factor) by restricting their respective degrees. We determine here the corresponding vectors u by applying a stabilized elimination algorithm to a matrix whose kernel is the space spanned by the u's. The method is of complexity O(n^3) in terms of the denominator degree n; it seems on the other hand to be among the most stable ones.
Keywords: Interpolation; Rational interpolation; Barycentric representation; Barycentric weights

AMS classification: primary 65D05; 41A05; secondary 41A20
1. The problem

Let x_0, x_1, ..., x_N be N+1 distinct points (nodes) in ℝ, and f_0, f_1, ..., f_N corresponding values in ℝ (ℂ). For any two given integers m, n ≥ 0 we will denote by R_{m,n} the set of all rational functions with numerator degree ≤ m and denominator degree ≤ n.
The (classical) rational interpolation problem is the following: given m and n, find

$$ r = \frac{p}{q} \in R_{m,n} \tag{1} $$

such that

$$ r(x_k) = \frac{p(x_k)}{q(x_k)} = f_k, \qquad k = 0(1)N. \tag{2} $$
It is well known that one may assume without loss of generality that none of the f_k vanish and that n ≤ m: in the first case, r*·∏_{k=0}^{p-1}(x - x_k) solves the problem, in the second, (1/r*)·∏_{k=0}^{p-1}(x - x_k) does, where r* solves a suitably modified problem. If r exists, its canonical representation reads

$$ r(x) = \sum_{k=0}^{m} a_k x^k \Big/ \sum_{k=0}^{n} b_k x^k $$
and the interpolation conditions (2) imply

$$ p(x_k) - f_k q(x_k) = 0, \qquad k = 0(1)N \tag{3} $$

or

$$ a_0 + a_1 x_k + \cdots + a_m x_k^m - f_k (b_0 + b_1 x_k + \cdots + b_n x_k^n) = 0. $$

The set of solution vectors [a_0 a_1 ... a_m b_0 b_1 ... b_n]^T of (3) is the kernel of the (N+1) × (m+n+2)-matrix
$$ \begin{bmatrix}
1 & x_0 & x_0^2 & \cdots & x_0^m & -f_0 & -f_0 x_0 & -f_0 x_0^2 & \cdots & -f_0 x_0^n \\
1 & x_1 & x_1^2 & \cdots & x_1^m & -f_1 & -f_1 x_1 & -f_1 x_1^2 & \cdots & -f_1 x_1^n \\
\vdots & & & & \vdots & \vdots & & & & \vdots \\
1 & x_N & x_N^2 & \cdots & x_N^m & -f_N & -f_N x_N & -f_N x_N^2 & \cdots & -f_N x_N^n
\end{bmatrix} \tag{4} $$
In order to have a nontrivial solution of (4) for every set of data, one should have fewer rows than columns. One therefore takes

$$ N = m + n. \tag{5} $$

The kernel of (4) then always contains vectors for which q ≢ 0. An introduction to classical rational interpolation can be found in [5, 15, 17] (the latter shortened in [13]). For further leading literature, the reader may consult [1, 7, 10, 11, 19] and the literature cited there.
The main difficulty with classical rational interpolation is the zeros of q, of which we can distinguish two kinds:
(a) for the zeros z_ν common to p one can (in theory) cancel the corresponding factors x - z_ν. The kernel of (4) corresponds to a unique interpolant, if the latter exists. However, if a zero of q with multiplicity ν is a node x_k, then in view of (2) it is also a zero of p; after cancellation of (x - x_k)^ν, (1) takes a value which may be different from f_k: then the problem has no solution, and the point (x_k, f_k) is called unattainable. In view of the unicity theorem [19, Theorem 3], cancelling factors x - z_ν cannot introduce unattainable points. These are a property of the problem. Detecting them from the classical representation requires computing the values of q at all nodes;
(b) the zeros not common to p correspond to poles of r and can be classified into two kinds: whereas those lying outside the interval [min x_k, max x_k] do not cause trouble, those inside it are the main drawback of rational interpolation. And the coefficients in the canonical representation of r do not give any hint of the presence of such poles.
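To make the linearized system concrete, here is a minimal NumPy sketch (our own illustration, not the authors' code) that assembles the matrix (4) under condition (5) and extracts a kernel vector with the SVD; the sample function and node set are arbitrary choices for testing:

```python
import numpy as np

def classical_kernel(x, f, m, n):
    """Kernel vector [a_0..a_m, b_0..b_n] of the linearized system (4),
    taken as the right singular vector of the smallest singular value."""
    assert len(x) - 1 == m + n                      # condition (5)
    top = np.vander(x, m + 1, increasing=True)      # columns 1, x_k, ..., x_k^m
    bot = -f[:, None] * np.vander(x, n + 1, increasing=True)
    A = np.hstack([top, bot])                       # (N+1) x (m+n+2), as in (4)
    return np.linalg.svd(A)[2][-1]

# a function that actually lies in R_{2,2}, sampled at N+1 = 5 nodes
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
f = (x**2 + 1) / (x**2 + x + 1)
c = classical_kernel(x, f, 2, 2)
a, b = c[:3], c[3:]                                 # numerator / denominator coefficients
r = np.polyval(a[::-1], x) / np.polyval(b[::-1], x)
assert np.allclose(r, f)                            # interpolation conditions (2) hold
```

Since the kernel here is one-dimensional, any nonzero kernel vector yields the same rational function up to a common factor.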
2. Barycentric representation of the interpolant

Every rational interpolant r ∈ R_{N,N} can be written in its barycentric form [4]

$$ r(x) = \sum_{k=0}^{N} \frac{u_k}{x-x_k} f_k \Big/ \sum_{k=0}^{N} \frac{u_k}{x-x_k}. \tag{6} $$
Indeed, let q_k := q(x_k) be the values of the denominator at the nodes; then

$$ q(x) = \prod_{i=0}^{N} (x-x_i) \sum_{k=0}^{N} \frac{w_k q_k}{x-x_k} \tag{7} $$

with

$$ w_k = 1 \Big/ \prod_{i=0,\, i \neq k}^{N} (x_k - x_i) \tag{8} $$

is the Lagrangian representation of the denominator, and r can be written as in (6) with

$$ u_k := w_k q_k. \tag{9} $$
(This proof is an illustration of the fact, used by most constructive methods, that a rational interpolant is fully determined by its denominator.) To every node x_k corresponds a so-called weight u_k, and a barycentric formula for r thus encompasses N+1 unknowns, as opposed to N+2 in a canonical representation. Since w_k ≠ 0 for all k, it follows from (9) that q has a zero at a node iff the corresponding weight is itself zero. The barycentric representation presents several advantages in comparison with the canonical one:
(a) unattainable points: (6) implies that the interpolation condition at x_ℓ is satisfied for all u_ℓ ≠ 0:

$$ \lim_{x \to x_\ell} \sum_{k=0}^{N} \frac{u_k}{x-x_k} f_k \Big/ \sum_{k=0}^{N} \frac{u_k}{x-x_k} = f_\ell. \tag{10} $$
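The barycentric form (6) is straightforward to evaluate away from the nodes; a small NumPy sketch (ours, with arbitrary test data) illustrates this, using the polynomial weights (8), for which (6) reproduces the interpolating polynomial:

```python
import numpy as np

def bary_eval(t, x, f, u):
    """Evaluate the barycentric form (6) at points t (assumed off the nodes)."""
    d = t[:, None] - x[None, :]          # differences t_i - x_k
    c = u / d                            # u_k / (t_i - x_k)
    return (c * f).sum(axis=1) / c.sum(axis=1)

# with u_k = w_k from (8), formula (6) reproduces the interpolating polynomial
x = np.array([0.0, 1.0, 2.0])
f = x**2                                 # data sampled from p(x) = x^2
w = np.array([1 / np.prod([x[k] - x[i] for i in range(3) if i != k])
              for k in range(3)])
t = np.array([0.5, 1.7])
assert np.allclose(bary_eval(t, x, f, w), t**2)
```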
u_ℓ = 0 therefore is a necessary condition for an unattainable point at x_ℓ: the barycentric weights give immediate information about possible unattainable points (see also [3]); u_ℓ = 0 in (6) simply means that the information at x_ℓ is discarded when determining the interpolant;
(b) stability: from (10) it follows that as long as u_k ≠ 0 for all k the interpolation conditions are satisfied even if the weights are not those of the proper interpolant (i.e., if r in (6) does not have the right numerator and/or denominator degree(s)): the barycentric representation is therefore perfectly stable as far as the interpolation conditions are concerned;
(c) poles in [min x_k, max x_k]: one has the following theorem [14]:
Theorem 2.1. Suppose the nodes are ordered as x_0 < x_1 < ... < x_N and the common factors in r have been simplified to yield the reduced function r̃, so that (6) corresponds to an interpolant with minimal denominator degree, and suppose u_k ≠ 0 for all k. Then sign u_{k+1} = sign u_k implies that r̃ has an odd number of poles in [x_k, x_{k+1}].

Nonalternating signs of the weights of a reduced interpolant therefore are a sufficient condition for the presence of poles. Unfortunately, this condition is not necessary [3]; finding necessary conditions deserves further research efforts.
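The sign test of Theorem 2.1 is trivial to apply to a computed weight vector; a short sketch (our own helper, hypothetical name) in Python:

```python
import numpy as np

def pole_certificate(u):
    """True if some pair of consecutive weights shares its sign, which by
    Theorem 2.1 forces an odd number of poles in the corresponding
    [x_k, x_{k+1}] (assuming ordered nodes, a reduced interpolant and
    nonzero weights)."""
    s = np.sign(u)
    return bool(np.any(s[1:] == s[:-1]))

assert pole_certificate(np.array([1.0, -1.0, -1.0, 1.0]))      # repeated sign: pole certified
assert not pole_certificate(np.array([1.0, -2.0, 3.0, -4.0]))  # alternating: no conclusion
```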
3. A matrix for the determination of the weights

The only published algorithm for computing the weights u_k seems to be the one advocated by Schneider and Werner [14], who suggest finding first the Newton form of the denominator q; more precisely, these authors use the vanishing of finite differences of fq:

$$ (fq)[x_0, x_1, \ldots, x_m, x_i] = 0, \qquad i = m+1(1)N. $$

Writing q in its Newton form q(x) = \sum_{i=0}^{n} v_i \prod_{j=0}^{i-1} (x - x_j), this yields the homogeneous system with matrix

$$ \begin{bmatrix}
f[x_0, x_1, \ldots, x_m, x_{m+1}] & f[x_1, x_2, \ldots, x_m, x_{m+1}] & \cdots & f[x_n, \ldots, x_m, x_{m+1}] \\
\vdots & \vdots & & \vdots \\
f[x_0, x_1, \ldots, x_m, x_{m+n}] & f[x_1, x_2, \ldots, x_m, x_{m+n}] & \cdots & f[x_n, \ldots, x_m, x_{m+n}]
\end{bmatrix} \in \mathbb{R}^{n \times (n+1)} \tag{11} $$
for the vector v := [v_0, v_1, ..., v_n] ∈ ℝ^{n+1}. This vector of the Newton coefficients of q is then transformed into a vector u of the Lagrangian form of q by an algorithm of Werner [18].
We will present here a direct method for determining the vector u := [u_0, u_1, ..., u_N]^T. In view of the fact that, by (10), the barycentric form automatically guarantees interpolation, all we need to do is achieve that the denominator and numerator degrees do not exceed n and m, respectively. For that purpose, let

$$ \ell(x) := (x - x_0)(x - x_1) \cdots (x - x_N) = \prod_{j=0}^{N} (x - x_j) $$
denote the product in the Lagrangian representation of polynomial interpolation, as in (7), and start with the canonical representation of q:

$$ q(x) = b_0 + b_1 x + \cdots + b_{N-1} x^{N-1} + b_N x^N. \tag{12} $$

Then x^N q(1/x) = b_0 x^N + b_1 x^{N-1} + \cdots + b_{N-1} x + b_N, and q(x) is of degree ≤ N-1 iff
$$ b_N = \lim_{x \to 0} x^N q\!\left(\frac{1}{x}\right) = 0. $$

Using (7) in order to translate this into a condition for the u_k's, we get

$$ x^N q\!\left(\frac{1}{x}\right) = x^N \prod_{i=0}^{N} \left(\frac{1}{x} - x_i\right) \sum_{k=0}^{N} \frac{u_k}{1/x - x_k} = \prod_{j=0}^{N} (1 - x_j x) \sum_{k=0}^{N} \frac{u_k}{1 - x_k x}, $$

so that

$$ b_N = \lim_{x \to 0} x^N q\!\left(\frac{1}{x}\right) = \sum_{k=0}^{N} u_k $$

and thus

$$ \deg q \le N - 1 \iff b_N = 0 \iff \sum_{k=0}^{N} u_k = 0. \tag{13} $$
The replacement of u_k by u_k f_k is the only difference between q and p, and so

$$ \deg p \le N - 1 \iff \sum_{k=0}^{N} u_k f_k = 0. $$
Once b_N = 0 is satisfied, (12) shows as above that b_{N-1} = 0 iff

$$ \lim_{x \to 0} x^{N-1} q\!\left(\frac{1}{x}\right) = 0. \tag{14} $$

Written in terms of the u_k's, the quantity whose limit is sought reads

$$ x^{N-1} q\!\left(\frac{1}{x}\right) = \frac{1}{x} \prod_{j=0}^{N} (1 - x_j x) \sum_{k=0}^{N} \frac{u_k}{1 - x_k x}. $$

As we want to let x → 0, each term of the last sum can be expanded into its geometric series

$$ \frac{u_k}{1 - x_k x} = u_k \left( 1 + x_k x + (x_k x)^2 + \cdots \right) $$

and the sum becomes

$$ \sum_{k=0}^{N} \frac{u_k}{1 - x_k x} = \sum_{k=0}^{N} u_k + x \sum_{k=0}^{N} u_k x_k + x^2 \left( \sum_{k=0}^{N} u_k x_k^2 \right) + \cdots. $$
Using (13) and letting x → 0, we see that (14) and the corresponding condition for p mean

$$ \deg q \le N - 2 \iff b_{N-1} = 0 \iff \sum_{k=0}^{N} u_k x_k = 0, \tag{15} $$

$$ \deg p \le N - 2 \iff \sum_{k=0}^{N} u_k f_k x_k = 0. $$
One can keep on decreasing the degrees of the denominator and the numerator in the same manner and prove the following by induction.
Theorem 3.1. Let r = p/q be a rational function written in its barycentric form (6) with

$$ p(x) = \ell(x) \sum_{k=0}^{N} \frac{u_k}{x - x_k} f_k, \qquad q(x) = \ell(x) \sum_{k=0}^{N} \frac{u_k}{x - x_k}. $$

Then

$$ \deg q \le n \iff \sum_{k=0}^{N} x_k^i u_k = 0, \qquad i = 0(1)N - (n+1), \tag{16a} $$

$$ \deg p \le m \iff \sum_{k=0}^{N} f_k x_k^i u_k = 0, \qquad i = 0(1)N - (m+1). \tag{16b} $$
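As a sanity check (our own illustration, not part of the paper), the conditions (16a)/(16b) can be verified numerically for a concrete r = p/q, using the weights u_k = w_k q(x_k) from (9); the nodes and degrees below are arbitrary choices:

```python
import numpy as np

# check (16a)/(16b) for r = p/q with p(x) = x^2 + 1 (m = 2), q(x) = x + 3 (n = 1)
x = np.arange(5.0)                        # nodes 0, 1, 2, 3, 4
N = len(x) - 1
m, n = 2, 1
p = x**2 + 1                              # numerator values at the nodes
q = x + 3                                 # denominator values at the nodes
w = np.array([1 / np.prod(np.delete(x[k] - x, k)) for k in range(N + 1)])
u = w * q                                 # weights (9)
f = p / q
for i in range(N - (n + 1) + 1):          # (16a): i = 0(1)N-(n+1)
    assert abs(np.sum(x**i * u)) < 1e-10
for i in range(N - (m + 1) + 1):          # (16b): i = 0(1)N-(m+1)
    assert abs(np.sum(f * x**i * u)) < 1e-10
```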
By choosing m and n satisfying (5) one sees that the set of u's that correspond to r solving the rational interpolation problem is the kernel of the N × (N+1)-matrix

$$ A := \begin{bmatrix}
1 & 1 & 1 & \cdots & 1 \\
x_0 & x_1 & x_2 & \cdots & x_N \\
x_0^2 & x_1^2 & x_2^2 & \cdots & x_N^2 \\
\vdots & & & & \vdots \\
x_0^{m-1} & x_1^{m-1} & x_2^{m-1} & \cdots & x_N^{m-1} \\
f_0 & f_1 & f_2 & \cdots & f_N \\
f_0 x_0 & f_1 x_1 & f_2 x_2 & \cdots & f_N x_N \\
f_0 x_0^2 & f_1 x_1^2 & f_2 x_2^2 & \cdots & f_N x_N^2 \\
\vdots & & & & \vdots \\
f_0 x_0^{n-1} & f_1 x_1^{n-1} & f_2 x_2^{n-1} & \cdots & f_N x_N^{n-1}
\end{bmatrix} \tag{17} $$
Remarks. (α) The number of rows corresponding to the degree conditions for the denominator is equal to the degree of the numerator, and conversely; (β) the matrix (17) is not quite the transpose of (4): not only are the negative signs (which would not affect the kernel) missing, but the dimensions are also different, as (4) has one more row and one more column; (γ) in [8] appeared a proof, attributed to Meinguet, of a system equivalent to (16): replacing there in (9) 1/ℓ′(x_k) by w_k [9, p. 243], using our (9), transposing and reordering the equations yields (16).
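A direct way to obtain the weights from (17) is to compute the kernel of A numerically; the following NumPy sketch (ours — the paper instead triangulates A, as described next) does this with the SVD and checks the resulting interpolant, for an arbitrary test function in R_{2,2}:

```python
import numpy as np

def bary_weights(x, f, m, n):
    """A weight vector u spanning the kernel of the N x (N+1) matrix A in (17)."""
    N = len(x) - 1
    assert m + n == N                                # condition (5)
    top = np.vander(x, m, increasing=True).T         # rows x^0, ..., x^(m-1)
    bot = f * np.vander(x, n, increasing=True).T     # rows f x^0, ..., f x^(n-1)
    A = np.vstack([top, bot])
    return np.linalg.svd(A)[2][-1]

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
f = (x**2 + 1) / (x**2 + x + 1)                      # a true member of R_{2,2}
u = bary_weights(x, f, 2, 2)
t = 1.5                                              # evaluation point off the nodes
r = np.sum(u * f / (t - x)) / np.sum(u / (t - x))    # barycentric form (6)
assert np.isclose(r, (t**2 + 1) / (t**2 + t + 1))
```

The SVD route costs O(N^3) and does not expose the kernel dimension as explicitly as the elimination of Section 4, but it is a convenient cross-check.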
4. Determination of the barycentric weights through triangulation of A

In order to determine from (16) the weights of rational interpolation, i.e., the kernel of A in (17), we will now triangulate A. For that purpose, we will call the first m rows (i.e., those without f_k's) of A its "top matrix" and the last n rows its "bottom matrix". The triangulation will be performed in several steps:
(1) Separate triangulation of top matrix and bottom matrix:
(a) subtract x_N times each row from the next. With the space-saving abbreviation

$$ x_{jk} := x_j - x_k $$
this yields

$$ \begin{bmatrix}
1 & 1 & \cdots & 1 & 1 \\
x_{0N} & x_{1N} & \cdots & x_{N-1,N} & 0 \\
x_0 x_{0N} & x_1 x_{1N} & \cdots & x_{N-1} x_{N-1,N} & 0 \\
\vdots & & & \vdots & \vdots \\
x_0^{m-2} x_{0N} & x_1^{m-2} x_{1N} & \cdots & x_{N-1}^{m-2} x_{N-1,N} & 0 \\
f_0 & f_1 & \cdots & f_{N-1} & f_N \\
f_0 x_{0N} & f_1 x_{1N} & \cdots & f_{N-1} x_{N-1,N} & 0 \\
f_0 x_0 x_{0N} & f_1 x_1 x_{1N} & \cdots & f_{N-1} x_{N-1} x_{N-1,N} & 0 \\
\vdots & & & \vdots & \vdots \\
f_0 x_0^{n-2} x_{0N} & f_1 x_1^{n-2} x_{1N} & \cdots & f_{N-1} x_{N-1}^{n-2} x_{N-1,N} & 0
\end{bmatrix} $$
(b) to obtain the zeros in the second column, subtract x_{N-1} times each of the rows number 2 to m, respectively n, from the next, to get

$$ \begin{bmatrix}
1 & 1 & \cdots & 1 & 1 & 1 \\
x_{0N} & x_{1N} & \cdots & x_{N-2,N} & x_{N-1,N} & 0 \\
x_{0,N-1} x_{0N} & x_{1,N-1} x_{1N} & \cdots & x_{N-2,N-1} x_{N-2,N} & 0 & 0 \\
x_0 x_{0,N-1} x_{0N} & x_1 x_{1,N-1} x_{1N} & \cdots & x_{N-2} x_{N-2,N-1} x_{N-2,N} & 0 & 0 \\
\vdots & & & \vdots & \vdots & \vdots \\
x_0^{m-3} x_{0,N-1} x_{0N} & x_1^{m-3} x_{1,N-1} x_{1N} & \cdots & x_{N-2}^{m-3} x_{N-2,N-1} x_{N-2,N} & 0 & 0 \\
f_0 & f_1 & \cdots & f_{N-2} & f_{N-1} & f_N \\
f_0 x_{0N} & f_1 x_{1N} & \cdots & f_{N-2} x_{N-2,N} & f_{N-1} x_{N-1,N} & 0 \\
f_0 x_{0,N-1} x_{0N} & f_1 x_{1,N-1} x_{1N} & \cdots & f_{N-2} x_{N-2,N-1} x_{N-2,N} & 0 & 0 \\
f_0 x_0 x_{0,N-1} x_{0N} & f_1 x_1 x_{1,N-1} x_{1N} & \cdots & f_{N-2} x_{N-2} x_{N-2,N-1} x_{N-2,N} & 0 & 0 \\
\vdots & & & \vdots & \vdots & \vdots \\
f_0 x_0^{n-3} x_{0,N-1} x_{0N} & f_1 x_1^{n-3} x_{1,N-1} x_{1N} & \cdots & f_{N-2} x_{N-2}^{n-3} x_{N-2,N-1} x_{N-2,N} & 0 & 0
\end{bmatrix} $$
(c) pursue in the same way, until both matrices are (separately) triangulated:

$$ \begin{bmatrix}
1 & \cdots & 1 & 1 & \cdots & 1 \\
x_{0N} & \cdots & x_{N-1,N} & 0 & & \\
x_{0,N-1} x_{0N} & \cdots & x_{N-2,N-1} x_{N-2,N} & 0 & 0 & \\
\vdots & & & & \ddots & \\
x_{0,n+2} \cdots x_{0N} & \cdots & x_{n+1,n+2} \cdots x_{n+1,N} & 0 & \cdots & 0 \\
f_0 & \cdots & f_{N-1} & f_N & & \\
f_0 x_{0N} & \cdots & f_{N-1} x_{N-1,N} & 0 & & \\
\vdots & & & & \ddots & \\
f_0 x_{0,m+2} \cdots x_{0N} & \cdots & f_{m+1} x_{m+1,m+2} \cdots x_{m+1,N} & 0 & \cdots & 0
\end{bmatrix} \tag{18} $$

The elimination in the top matrix is now complete.

Remarks. (δ) Since the order of the nodes is arbitrary, the last row of the top matrix implies that, given any subset S of n+2 nodes x_{i_0}, x_{i_1}, ..., x_{i_{n+1}} with corresponding weights u_{i_0}, u_{i_1}, ..., u_{i_{n+1}}, one has

$$ \sum_{k=0}^{n+1} d_{i_k} u_{i_k} = 0, \tag{19a} $$

where

$$ d_{i_k} := (x_{i_k} - x_{i_{n+2}})(x_{i_k} - x_{i_{n+3}}) \cdots (x_{i_k} - x_{i_N}) \tag{19b} $$

denotes the product of the differences between x_{i_k} and all x_i ∉ S. Moreover, if one applies Gauss-Jordan elimination to the top matrix in (18) in the same manner as above, but here by subtracting
a multiple of the next row to annulate an element, and multiplies by the arising denominators, a "lower anti-triangular" matrix with n+2 anti-diagonals results whose rows contain the coefficients in (19) for the S with n+2 consecutive indices; (ε) since all d_{i_k} are different from 0, the above remark implies that the top matrix has full rank; (ζ) in the special case of polynomial interpolation (n = 0), a full-rank bidiagonal matrix results whose last row implies

$$ u_1 = - \frac{x_{02} x_{03} \cdots x_{0N}}{x_{12} x_{13} \cdots x_{1N}} \, u_0 = - \frac{\prod_{i=2}^{N} (x_0 - x_i)}{\prod_{i=2}^{N} (x_1 - x_i)} \, u_0. $$

Changing the order of the variables to bring x_k into the second column (or solving the bidiagonal system by back substitution) similarly yields

$$ u_k = - \frac{\prod_{i=1,\, i \neq k}^{N} (x_0 - x_i)}{\prod_{i=1,\, i \neq k}^{N} (x_k - x_i)} \, u_0. $$

Choosing u_0 = w_0 = 1/∏_{i=1}^{N}(x_0 - x_i) for the arbitrary u_0, one obtains u_k = w_k from (8), the barycentric weights of polynomial interpolation. As a corollary we get the so far possibly overlooked fact that the kernel of a transposed Vandermonde matrix without its last row is the space of the barycentric weights of polynomial interpolation between the points making up the Vandermonde matrix.
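The corollary in remark (ζ) is easy to confirm numerically; a short NumPy sketch (our own check, arbitrary nodes) compares the kernel of the truncated transposed Vandermonde matrix with the weights (8):

```python
import numpy as np

# n = 0: the kernel of the transposed Vandermonde matrix without its last
# row is spanned by the barycentric weights (8) of polynomial interpolation.
x = np.array([-2.0, -0.5, 1.0, 3.0])
N = len(x) - 1
V = np.vander(x, N, increasing=True).T        # rows x^0, ..., x^(N-1): N x (N+1)
u = np.linalg.svd(V)[2][-1]                   # spans the one-dimensional kernel
w = np.array([1 / np.prod(np.delete(x[k] - x, k)) for k in range(N + 1)])
assert np.allclose(u / u[0], w / w[0])        # u and w are parallel
```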
We now continue our triangulation of A:
(2) Elimination of the diagonals of the bottom matrix present in the top matrix:
(a) subtract successively f_{N-k+1} times the kth row of the top from the kth of the bottom, k = 1(1)n = N - m, and express the differences of function values with finite differences, i.e., f_j - f_k = (x_j - x_k) f[x_j, x_k]. This yields

$$ \begin{bmatrix}
x_{0N} f[x_0, x_N] & \cdots & x_{N-1,N} f[x_{N-1}, x_N] & 0 \\
x_{0,N-1} x_{0N} f[x_0, x_{N-1}] & \cdots & x_{N-2,N-1} x_{N-2,N} f[x_{N-2}, x_{N-1}] & 0 \;\; 0 \\
\vdots & & \vdots & \\
x_{0,m+1} \cdots x_{0N} f[x_0, x_{m+1}] & \cdots & x_{m,m+1} \cdots x_{mN} f[x_m, x_{m+1}] & 0 \cdots 0
\end{bmatrix} $$
(b) in order to eliminate the next diagonal, subtract successively f[x_{N-k+1}, x_{N-k+2}] times the kth row of the top from the kth of the bottom, k = 2(1)n+1 = N - m + 1, to get

$$ \begin{bmatrix}
x_{0,N-1} x_{0N} f[x_0, x_{N-1}, x_N] & \cdots & x_{N-2,N-1} x_{N-2,N} f[x_{N-2}, x_{N-1}, x_N] & 0 \;\; 0 \\
x_{0,N-2} x_{0,N-1} x_{0N} f[x_0, x_{N-2}, x_{N-1}] & \cdots & x_{N-3,N-2} x_{N-3,N-1} x_{N-3,N} f[x_{N-3}, x_{N-2}, x_{N-1}] & 0 \;\; 0 \;\; 0 \\
\vdots & & \vdots & \\
x_{0,m} \cdots x_{0N} f[x_0, x_m, x_{m+1}] & \cdots & x_{m-1,m} \cdots x_{m-1,N} f[x_{m-1}, x_m, x_{m+1}] & 0 \cdots 0
\end{bmatrix} $$
(c) ... and so on. For each eliminated diagonal, the number of factors of differences of x-values as well as the number of arguments in the divided differences increases by one. After eliminating m - n + 1 diagonals, one has (without the columns of zeros)

$$ \begin{bmatrix}
x_{0,2n} \cdots x_{0N} f[x_0, x_{2n}, \ldots, x_N] & \cdots & x_{2n-1,2n} \cdots x_{2n-1,N} f[x_{2n-1}, x_{2n}, \ldots, x_N] \\
\vdots & & \vdots \\
x_{0,n+1} \cdots x_{0N} f[x_0, x_{n+1}, \ldots, x_{m+1}] & \cdots & x_{n,n+1} \cdots x_{nN} f[x_n, x_{n+1}, \ldots, x_{m+1}]
\end{bmatrix} $$

The number of factors of differences of x-values increases by one from row to row; the number of arguments in the divided differences is m - n + 2 everywhere.
(3) Elimination of the "triangle" on the right of the bottom matrix: We now zero the diagonals, shortened at each step by one more element at their lower extremity. At the end the products of the differences of x_k's all have the same number of factors and are identical in every column. To save space again, they will be denoted in the following by

$$ X_k := x_{k,n+1} x_{k,n+2} \cdots x_{k,N} = (x_k - x_{n+1})(x_k - x_{n+2}) \cdots (x_k - x_N), \qquad k = 0(1)n. $$

The finite differences have a number of arguments decreasing from the last to the first row. We reverse the order and, ignoring again the zero columns, we get the n × (n+1)-matrix
$$ \begin{bmatrix}
X_0 f[x_0, x_{n+1}, \ldots, x_{m+1}] & X_1 f[x_1, x_{n+1}, \ldots, x_{m+1}] & \cdots & X_n f[x_n, x_{n+1}, \ldots, x_{m+1}] \\
X_0 f[x_0, x_{n+1}, \ldots, x_{m+2}] & X_1 f[x_1, x_{n+1}, \ldots, x_{m+2}] & \cdots & X_n f[x_n, x_{n+1}, \ldots, x_{m+2}] \\
\vdots & \vdots & & \vdots \\
X_0 f[x_0, x_{n+1}, \ldots, x_{N-1}] & X_1 f[x_1, x_{n+1}, \ldots, x_{N-1}] & \cdots & X_n f[x_n, x_{n+1}, \ldots, x_{N-1}] \\
X_0 f[x_0, x_{n+1}, \ldots, x_N] & X_1 f[x_1, x_{n+1}, \ldots, x_N] & \cdots & X_n f[x_n, x_{n+1}, \ldots, x_N]
\end{bmatrix} \tag{20} $$
whose kernel is the space of the first n+1 barycentric weights (the m last ones are then computed from the top matrix).

Remarks. (η) Notice the difference with the matrix (11): whereas there the number of arguments of the divided differences changes from column to column, in (20) it changes from row to row; (θ) special case: denominator of degree 1 (n = 1). The only row of (20) then reads

$$ (x_0 - x_2)(x_0 - x_3) \cdots (x_0 - x_N) f[x_0, x_2, \ldots, x_N] \, u_0 + (x_1 - x_2)(x_1 - x_3) \cdots (x_1 - x_N) f[x_1, x_2, \ldots, x_N] \, u_1 = 0 $$

and since, as noticed in remark (δ), the order of the nodes is arbitrary, we have

$$ u_k = - \frac{f[x_0, \{x_\ell,\ \ell \neq 0,k\}] \prod_{\ell \neq 0,k} (x_0 - x_\ell)}{f[x_k, \{x_\ell,\ \ell \neq 0,k\}] \prod_{\ell \neq 0,k} (x_k - x_\ell)} \, u_0. $$

Moreover, if the two elements of (20) are identical, then r is the interpolating polynomial: by the last formula u_1 is related to u_0 as w_1 to w_0, and by (19) the coefficients are the w_k in (8);
Moreover, if the two elements of (20) are identical, then r is the interpolating polynomial: by the last formula ul is related to u0 as wl to w0, and by (19) the coefficients are the wk in (8); (t) from one column to the next, the arguments of the divided differences differ only by a single argument• The common differences f[Xn+l,... ,Xm+l], f [ x , + l , . . •,Xm+2],...,f[x,+l,...,xN] can be computed first in about ~ml2 f l o p s in a single difference tableau. Then this same tableau is updated for each of the nodes x0.... ,x,, which gives the differences in the columns of (20) one after another. 1 2 + m(n + 1) flops; The cost of computing all divided differences thus is about ~rn
(κ) the divided differences need not be multiplied by X_0, ..., X_n: these factors can be taken care of at the final back substitution (see below); (λ) up till here, the elimination is linear in the values f; the nonlinearity comes into play in the next, last step.
(4) Triangulation of (20) by Gaussian "anti"-elimination: The node x_ℓ corresponding to the variable u_ℓ eliminated from a row comes into the other elements of the row, so that in the final bottom matrix the number of arguments in the elements increases by two from one row to the next. We will denote by F[ ] the elements computed in the process (without the X_k's), starting with F[ ] := f[ ]. When x_ℓ is eliminated, the kth element of the jth row changes according to the following program:

for ℓ := n(-1)2 do
  for j := N - ℓ + 2(1)N do
    for k := ℓ - 1(-1)0 do

$$ F[x_k, x_\ell, \ldots, x_j] = F[x_k, x_{\ell+1}, \ldots, x_j] - \frac{F[x_\ell, \ldots, x_j]}{F[x_\ell, \ldots, x_{N-\ell+1}]} \, F[x_k, x_{\ell+1}, \ldots, x_{N-\ell+1}] \tag{21} $$
end k; end j; end ℓ.

The bottom matrix finally reads

$$ \begin{bmatrix}
X_0 f[x_0, x_{n+1}, \ldots, x_{m+1}] & X_1 f[x_1, x_{n+1}, \ldots, x_{m+1}] & \cdots & X_{n-1} f[x_{n-1}, x_{n+1}, \ldots, x_{m+1}] & X_n f[x_n, x_{n+1}, \ldots, x_{m+1}] \\
X_0 F[x_0, x_n, \ldots, x_{m+2}] & X_1 F[x_1, x_n, \ldots, x_{m+2}] & \cdots & X_{n-1} F[x_{n-1}, x_n, \ldots, x_{m+2}] & 0 \\
\vdots & & & \vdots & \\
X_0 F[x_0, x_3, \ldots, x_{N-1}] & X_1 F[x_1, x_3, \ldots, x_{N-1}] & X_2 F[x_2, x_3, \ldots, x_{N-1}] & 0 \cdots & 0 \\
X_0 F[x_0, x_2, \ldots, x_N] & X_1 F[x_1, x_2, \ldots, x_N] & 0 & \cdots & 0
\end{bmatrix} \tag{22} $$
(21) requires that all pivots F[x_ℓ, ..., x_{N-ℓ+1}] are ≠ 0. The case in which the latter is true (also for ℓ = 1) is the generic case. This last step unfortunately requires O(n³) flops. The challenge would be to compute the F in O(n²) operations.
Now that the matrix is triangulated, one can find u by back-substitution: starting from any nonvanishing value for u_0 (u_0 = 0 yields the uninteresting trivial solution u = 0), one successively gets u_1, u_2, ..., u_n by (22), then u_{n+1}, ..., u_N by the "(n+2)-lower antidiagonal" (19). The latter is preferable to the triangular top matrix of (18), since the number of additions/subtractions is smaller. (In particular, the first row of (18) is to be avoided: it would give the last coefficient as u_N = -Σ_{k=0}^{N-1} u_k, a long sum with mainly alternating signs, thus subject to catastrophic cancellation and smearing [9].) Taking into account the value in remark (ι), we see that the total cost of the method is O(m² + mn + n³) flops.
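The extension of the first n+1 weights to the full vector via the consecutive-index relations (19) can be sketched as follows (our own illustration; the test weights u_k = w_k q(x_k) for a degree-2 denominator are an arbitrary verified example):

```python
import numpy as np

def extend_weights(x, u_head, n):
    """Given u_0..u_n, compute u_{n+1}..u_N from (19a)/(19b) with subsets S
    of n+2 consecutive indices."""
    x = np.asarray(x, dtype=float)
    N = len(x) - 1
    u = np.zeros(N + 1)
    u[:n + 1] = u_head
    for j in range(n + 1, N + 1):
        S = np.arange(j - n - 1, j + 1)                    # n+2 consecutive indices
        outside = np.setdiff1d(np.arange(N + 1), S)
        d = np.array([np.prod(x[k] - x[outside]) for k in S])  # products (19b)
        u[j] = -np.dot(d[:-1], u[S[:-1]]) / d[-1]          # solve (19a) for u_j
    return u

x = np.arange(5.0)                                         # nodes 0..4, N = 4
q = x**2 + x + 1                                           # denominator values (n = 2)
w = np.array([1 / np.prod(np.delete(x[k] - x, k)) for k in range(5)])
u_true = w * q                                             # weights (9)
u = extend_weights(x, u_true[:3], 2)                       # extend from u_0..u_2
assert np.allclose(u, u_true)
```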
5. Implementation issues and non-generic cases
We add here practical remarks on the use of the above method:
(1) Computing divided differences such as those needed in (20) for nodes ordered from one side of an interval to the other is a notoriously unstable process: the points should be re-ordered, e.g. with the van der Corput sequence [6, 16]; the order of the weights must be restored accordingly once back-substitution is complete.
(2) Even in the generic case, if one wants to avoid exaggerated growth of the F[ ] in (21), pivoting should be implemented. We used column pivoting in order to keep the natural order of the degree conditions. For the weights, this means a change of their order, which must be stored.
(3) If a row is totally zero (no pivot), the kernel is at least two-dimensional, the solution is no longer unique (up to a constant) and the problem therefore ill-posed. There are at least two ways of coping with this nongeneric case:
- if one is interested in the general solution, one should keep the corresponding weight indeterminate and compute the kernel as a function of all the indeterminate weights, a usual way of finding kernels;
- instead, we have modified the problem to make it well-posed: with the desire to hold the number of poles as low as possible, we have decreased n and increased m accordingly until the solution was unique. Therefore the problem we have solved should be rephrased as follows: find the unique r ∈ R_{m*,n*} with m* + n* = N, n* ≤ m*, that satisfies the interpolation conditions (2) with n* ≤ n as large as possible. Since the solution space is then one-dimensional, the rational function is reduced. In that sense, we have determined (if it exists) the interpolant with minimal denominator degree as in [14, 19]. Row pivoting would theoretically allow the immediate determination of n*. In the top matrix, decreasing n and increasing m means erasing a factor from every d_{i_k} in (19b) and computing one more row.
In the bottom matrix (before triangulation), this means cancelling the first row and column and updating the remaining divided differences with x_n. Similar remarks hold for the updating problem of increasing N, to which we did not give much thought, however. The contraposition of Theorem 2.1 can be useful in determining whether a row is zero or not (i.e., whether or not the kernel has dimension one):

Corollary 5.1. Suppose the nodes are ordered as x_0 < x_1 < ... < x_N and u_k ≠ 0 for all k. Then if sign u_k = sign u_{k+1} for some k and if r is bounded on [x_0, x_N], then r is not reduced.
(4) When one u_ℓ is zero, then, since there is one term less, (13) already implies that deg q ≤ N - 2, (15) that deg q ≤ N - 3, etc. Numerator and denominator then automatically have degree one less than required. In fact, u_ℓ = 0 in (16) implies that the factor x - x_ℓ has been cancelled in the numerator and the denominator, and p(x_ℓ) = q(x_ℓ) = 0;
(5) numerical experience leads us to conjecture that even with column pivoting the rows of zeros are at the end of the triangulated matrix;
(6) with the reduced r, one can use the criteria of Section 2 for detecting unattainable points and poles:
- if u_ℓ = 0, evaluate r at x_ℓ: if it is different from f_ℓ, (x_ℓ, f_ℓ) is unattainable; we did not encounter any example where our method gave u_ℓ = 0 and x_ℓ was not unattainable, and therefore conjecture
that the method yields u_ℓ = 0 iff x_ℓ is unattainable (see also [14, Corollary 7], which seems to lack some hypotheses to be true as generally as stated);
- same signs of weights corresponding to consecutive nodes guarantee that r has an odd number of poles between these.
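The node re-ordering of remark (1) can be sketched as follows (our own Python illustration; for a power-of-two number of equidistant nodes the van der Corput permutation is exactly bit reversal):

```python
import numpy as np

def van_der_corput(n_points, base=2):
    """First n_points terms of the van der Corput sequence (radical inverse in `base`)."""
    seq = []
    for i in range(n_points):
        v, denom, k = 0.0, 1.0, i
        while k:
            k, r = divmod(k, base)    # peel off the least significant digit
            denom *= base
            v += r / denom            # mirror it behind the radix point
        seq.append(v)
    return seq

# for 8 equidistant nodes on [0, 1], the sequence induces a bit-reversal ordering
nodes = np.linspace(0.0, 1.0, 8)
perm = [int(round(v * 8)) for v in van_der_corput(8)]
reordered = nodes[perm]
assert perm == [0, 4, 2, 6, 1, 5, 3, 7]
```

After back-substitution, the inverse permutation restores the natural order of the weights.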
6. Numerical examples

We have tested the method described above on dozens of examples with MATLAB for the Macintosh. First we have recomputed the low-dimensional examples in [14, pp. 293-294] and obtained the same coefficients in all examples; and in example (c) they came out directly, without the detour by a random Newton denominator.

Example 1. In order to demonstrate what happens with unattainable points, let us try the example x := [-2, -1, 0, 1, 2]^T, y := [1, 2, -1, 0, 1]^T, m = n = 2. Here the matrix A in (17) has (full) rank 4; its one-dimensional kernel is spanned by the weights u = [1, -1, -1, 1, 0]^T corresponding (without reduction) to the interpolant r(x) = (x - 1)/(2x + 1) with r(2) = 1/5: the point (2, 1) is unattainable. The weights of identical signs in -1 and 0 reflect the pole at -1/2. We mention that with the first 4 points and m = 2, n = 1 we get the same u without the zero value. With ŷ := [1, 2, -1, 0, 1/5]^T instead of y as input one gets indeed again r(x), but A has rank 3 (the last row is zero after triangulation). Within the two-dimensional kernel the method chooses ũ = [1, 1/3, 3, -11, 20/3]^T. Without reduction this corresponds to r̃(x) = (5x² + x - 6)/(10x² + 17x + 6). The sign pattern reflects the zeros of the denominator at x = -6/5 and x = -1/2, and not the poles of r̃, which reduces to r. We have not been able to construct an example where the method still chooses, after correction of y to ŷ, a ũ with the same zero component and no unattainable point at the corresponding abscissa. According to remark (3), the zero row leads us to set m = 3, n = 1. Then the method yields the unique solution ũ = [1, -4/3, -2, 4, -5/3]^T, which corresponds to r (without reduction) and again reflects the pole at x = -1/2.
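The weight vector of Example 1 can be reproduced with a few lines of NumPy (our own re-computation, using the SVD instead of the paper's triangulation):

```python
import numpy as np

# Example 1: nodes [-2,-1,0,1,2], data [1,2,-1,0,1], m = n = 2
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = np.array([1.0, 2.0, -1.0, 0.0, 1.0])
top = np.vander(x, 2, increasing=True).T        # rows 1, x   (m = 2)
bot = y * np.vander(x, 2, increasing=True).T    # rows y, y*x (n = 2)
A = np.vstack([top, bot])                       # the 4 x 5 matrix (17)
assert np.linalg.matrix_rank(A) == 4            # full rank, kernel is 1-D
u = np.linalg.svd(A)[2][-1]
u = u / u[0]                                    # normalize u_0 = 1 as in the text
assert np.allclose(u, [1.0, -1.0, -1.0, 1.0, 0.0])
```

The vanishing last weight flags the unattainable point (2, 1), and the repeated sign of u_1, u_2 flags the pole at -1/2, exactly as discussed above.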
Example 2. Next we have tried a modified Bulirsch-Rutishauser example [5, p. 288]: interpolate f(x) = cot x between equidistant points on [0.5°, 5°], and compare at x = 1.5° the value of r with that of f. In view of the simple pole of f at x = 0, for any given N the interpolant is about as good with n = 1 as with any larger n. N = 7 is sufficient for about machine precision, and the latter is conserved up to about N = 40. Then the precision gradually decreases because of smearing [9]: for N = 100 the error is 7.5 × 10⁻⁹. With the interpolating polynomial in barycentric form (see (7) and (8) or [9, p. 238]), N ≈ 40 is necessary for machine precision. And at x = 0.75°, thus closer to the extremity of the interval and the pole, r(0.75°) with N = 7, n = 1, still has machine precision, whereas the error with the polynomial decreases to 5.2 × 10⁻⁹ for N = 40 before increasing again because of smearing.
If one chooses n = 2, the two nonzero entries of the second line of (22), which have very similar sizes, decrease with N: for N = 5, they are -3.6 × 10⁻⁷ and -4.6 × 10⁻⁷, for N = 8, -6.4 × 10⁻¹³ and -7.0 × 10⁻¹³, and the u_k's have alternating signs. The maximal error ‖r - f‖, as measured by
Table 1
Errors at x = -0.95 with rational interpolation of f(x) = e^{1/(x+1.2)}/(1 + 25x²) between equidistant points on [-1, 1]

N    n    Error
3    1    3.19e-3
7    3    5.35e-1
15   7    3.46e-6
31   15   1.1e-10
63   31   5.77e-8
the maximal value of |r(x) - f(x)| at 109 equidistant x between 0.5° and 5°, is 3.7 × 10⁻¹³. With N = 9, the entries of the second line of (22) become -4.8 × 10⁻¹⁵ and 1.5 × 10⁻¹⁵. Is the kernel two-dimensional? Corollary 5.1 gives the answer: u_5 and u_6 have the same sign and ‖r - f‖ = 7.1 × 10⁻¹⁴, thus r has no pole on [0.5°, 5°]: the kernel is two-dimensional. However, this is of no use for N = 10: the entries of (22) are 2.7 × 10⁻¹⁵ and 1.6 × 10⁻¹⁵, the signs of the u_k's alternate and ‖r - f‖ = 2.4 × 10⁻¹¹. But it works again for N = 11.
Example 3. Finally we have experimented with f(x) = e^{1/(x+1.2)}/(1 + 25x²), first between equidistant points on [-1, 1]. f displays an essential singularity at -1.2, and Runge's phenomenon makes it completely unsuitable for polynomial interpolation close to the extremities of the interval of interpolation. For that reason we have evaluated r at x = -0.95, where the rounded exact value is 2.31716286612814. Because of the essential singularity, the best approximation results are obtained for most N with n as large as possible, i.e., n = N/2 for N even and n = (N-1)/2 for N odd. The results are displayed in Table 1. They show that the method works very well, the only difficulty being again smearing when N becomes larger than about 30.
To allow comparison with the polynomial, we have also interpolated between Čebyšev points of the second kind on [-1, 1], x_k = cos(kπ/N). The barycentric weights were given in [12]; see [2] for an alternative treatment. The third column of Table 2 contains the results (the value of n is relevant only for the rational interpolant: the polynomial obviously corresponds to n = 0). Comparison with Table 1 shows that for this function rational interpolation between equidistant points is even better than polynomial interpolation between Čebyšev points! The table also contains the error at a point close to the center of the interval, where the Čebyšev points are not as dense as in the vicinity of the extremities.
We remark that in this example our method was about two digits more precise than the alternative consisting in letting b_0 := 1 and solving the system (3) (the usual way, i.e., with partial pivoting) for the a_k's and the remaining b_k's. Notice finally that the last examples seem to demonstrate that the unexpected poles that for small N very often spoil the rational interpolants of a smooth function disappear as soon as the number of nodes is big enough for the interpolant to "understand" the function.
Table 2
Comparison of rational and polynomial errors when interpolating f(x) = e^{1/(x+1.2)}/(1 + 25x²) between Čebyšev points on [-1, 1]

              x = -0.95                  x = -0.05
N    n    Rational    Polynomial    Rational    Polynomial
3    1    2.64        2.62          1.8         2.67
7    3    1.74e-1     6.47e-1       4.25e-1     1.18
15   7    8.19e-8     4.84e-2       8.4e-13     1.70e-1
31   15   5.2e-14     1.05e-4       0.0         1.82e-4
63   31   0.0         3.20e-7       3.1e-15     1.56e-5
7. Conclusions

The method given above for a direct computation of the barycentric representation of rational interpolants as the kernel of the matrix (17) seems very effective, at least as long as N is not too large. For very large N and difficult points like equidistant ones, higher precision should be used to cope with smearing. The method gives quite precise information about the size of the solution space.
The barycentric representation should often be favored in view of the valuable information it gives about unattainable points and poles of the interpolant in the interval of interpolation. One could surely think of determining the interpolant in more traditional ways and computing its barycentric weights in a second stage. It is known, however, that the process of computing the barycentric weights of a rational interpolant from its canonical representation can be ill-conditioned [9, p. 236].
Acknowledgements The authors wish to thank P. Noth for writing the first MATLAB program of our method in his diploma thesis at the University of Fribourg, and the two referees for their comments which have improved the appearance of the present work.
References

[1] A.C. Antoulas and B.D.O. Anderson, On the scalar rational interpolation problem, IMA J. Math. Control Inform. 3 (1986) 61-88.
[2] J.-P. Berrut, Baryzentrische Formeln zur trigonometrischen Interpolation (I), Z. Angew. Math. Phys. (ZAMP) 35 (1984) 91-105.
[3] J.-P. Berrut, Linear rational interpolation of continuous functions over an interval, in: W. Gautschi, ed., Mathematics of Computation 1943-1993: a Half-Century of Computational Mathematics, Proc. Symp. in Applied Mathematics (American Mathematical Society, Providence, 1994) 261-264.
[4] J.-P. Berrut and H. Mittelmann, Lebesgue constant minimizing linear rational interpolation of continuous functions over the interval, Comput. Math. Appl., to appear.
[5] R. Bulirsch and H. Rutishauser, Interpolation und genäherte Quadratur, in: R. Sauer and I. Szabó, eds., Mathematische Hilfsmittel des Ingenieurs (Grundlehren der math. Wissenschaften Bd. 141, Springer, Berlin, 1968) 232-319.
[6] B. Fischer and L. Reichel, Newton interpolation in Fejér and Chebyshev points, Math. Comp. 53 (1989) 265-278.
[7] P.R. Graves-Morris, Efficient reliable rational interpolation, in: M.G. de Bruin and H. van Rossum, eds., Padé Approximation and its Applications, Amsterdam 1980, Lecture Notes in Mathematics, Vol. 888 (Springer, Berlin, 1981) 28-63.
[8] P.R. Graves-Morris, Symmetrical formulas for rational interpolants, J. Comput. Appl. Math. 10 (1984) 107-111.
[9] P. Henrici, Essentials of Numerical Analysis (Wiley, New York, 1982).
[10] M.H. Gutknecht, Block structure and recursiveness in rational interpolation, in: E.W. Cheney, C.K. Chui and L.L. Schumaker, eds., Approximation Theory VII (Academic Press, Boston, 1992) 93-130.
[11] J. Meinguet, On the solubility of the Cauchy interpolation problem, in: A. Talbot, ed., Approximation Theory (Academic Press, London, 1970) 137-163.
[12] H.E. Salzer, Lagrangian interpolation at the Chebyshev points x_{n,ν} = cos(νπ/n), ν = 0(1)n; some unnoted advantages, Comput. J. 15 (1972) 156-159.
[13] R. Schaback and H. Werner, Numerische Mathematik (Springer, Berlin, 1992).
[14] C. Schneider and W. Werner, Some new aspects of rational interpolation, Math. Comp. 47 (1986) 285-299.
[15] J. Stoer, Einführung in die Numerische Mathematik I, 4. Aufl. (Springer, Berlin, 1983).
[16] H. Tal-Ezer, High degree polynomial interpolation in Newton form, SIAM J. Sci. Statist. Comput. 12 (1991) 648-667.
[17] H. Werner and R. Schaback, Praktische Mathematik II (Springer, Berlin, 1972).
[18] W. Werner, Polynomial interpolation: Lagrange versus Newton, Math. Comp. 43 (1984) 205-217.
[19] L. Wuytack, On some aspects of the rational interpolation problem, SIAM J. Numer. Anal. 11 (1974) 52-60.