INFORMATION PROCESSING LETTERS 1 (1972) 157-163. NORTH-HOLLAND PUBLISHING COMPANY
A FAST METHOD FOR INTERPOLATION USING PRECONDITIONING *

Ellis HOROWITZ

Received 11 January 1972
Revised version received 11 April 1972

analysis of algorithms, interpolation, numerical mathematics

* Technical report No. 71-118.
Given n points (x_i, y_i), 1 ≤ i ≤ n, we wish to determine the coefficients a_{n−1}, ..., a_0 of the unique polynomial G(x) = a_{n−1}x^{n−1} + ⋯ + a_1x + a_0 such that G(x_i) = y_i for 1 ≤ i ≤ n. This is the general problem of interpolation, and the best existing algorithms take O(n²) arithmetic operations. A restricted form of the interpolation problem can be stated as follows: assuming that n values x_i, 1 ≤ i ≤ n, have been previously given, find the unique interpolating polynomial G(x) of degree ≤ n−1 such that G(x_i) = y_i, 1 ≤ i ≤ n. Of course this problem does not include interpolating arbitrary sets of data. However, this restricted form often does occur when first evaluation and later interpolation are used jointly. Moreover, there is an immediate analogy between being able to precondition a polynomial's coefficients before evaluation and preconditioning the x_i before interpolation. Whereas preconditioning for evaluation improves the method by only a constant factor, it will be shown that preconditioning for interpolation will produce an order of magnitude decrease in the computing time.

To give a specific example of restricted interpolation, suppose we wish to solve a linear system with polynomial entries. The numerical analysis approach would evaluate the entries at many points, producing systems with rational number elements. Each of these systems is solved and then all of these answers are interpolated to produce the final result. Alternatively one may use symbolic manipulation to work with the polynomials themselves, but recent work on symbolic algorithms of this type shows that the most efficient methods will first evaluate and then later interpolate. Since the points of evaluation, and hence for subsequent interpolation, are arbitrary, these points can be chosen in advance, thus producing the restricted interpolation problem. For example, Collins in [3] and Brown in [2] choose x_i = i.

In this paper an algorithm for the restricted interpolation problem will be given which takes O(n(log n)²) steps. Also it will be shown how to precondition the values x_i in order to use this algorithm. The time necessary to compute the needed functions of the x_i will be O(n²). Finally, it will be shown how to reduce the general interpolation problem to a "simpler" problem such that if the simpler problem can be solved in less than O(n²) operations so may the general interpolation problem.

Let us begin with Lagrange's formula for determining G(x) given (x_i, y_i), 1 ≤ i ≤ n. Let
L_i(x) = (x − x_1) ⋯ (x − x_{i−1})(x − x_{i+1}) ⋯ (x − x_n),   1 ≤ i ≤ n.

Then

G(x) = Σ_{1≤i≤n} (L_i(x)/L_i(x_i)) y_i .    (1)
It is easy to verify that G(x_i) = y_i, since L_j(x_i) = 0 for j ≠ i and (L_i(x_i)/L_i(x_i)) y_i = y_i. Also, degree(L_i) = n−1 and degree(G) ≤ n−1. Now, if we wish to compute G(x) using Lagrange's formula we have the following:
1)  L_i(x_i), 1 ≤ i ≤ n                            O(n²)
2)  L_i(x), 1 ≤ i ≤ n                              O(n²)
3)  G(x) = Σ_{1≤i≤n} (L_i(x)/L_i(x_i)) y_i         O(n²)
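For concreteness, a direct rendering of eq. (1) in Python is sketched below. It is only illustrative (the helper names are not from the paper): D(x) is formed once and each L_i(x) is recovered from it by synthetic division, so the whole computation stays within O(n²) operations.

```python
# Illustrative sketch: evaluating Lagrange's formula (1) in O(n^2) operations.
# Polynomials are coefficient lists, lowest degree first.

def poly_eval(p, x):
    v = 0.0
    for c in reversed(p):
        v = v * x + c
    return v

def synthetic_div(p, r):
    # Quotient of p(x) by (x - r), assuming (x - r) divides p(x); O(deg p).
    m = len(p) - 1
    q = [0.0] * m
    b = p[m]
    for k in range(m - 1, -1, -1):
        q[k] = b
        b = p[k] + r * b
    return q

def lagrange(xs, ys):
    n = len(xs)
    d = [1.0]                              # D(x) = (x - x_1)...(x - x_n)
    for xj in xs:
        nd = [0.0] * (len(d) + 1)
        for k, c in enumerate(d):
            nd[k + 1] += c                 # x * (c x^k)
            nd[k] -= xj * c                # -x_j * (c x^k)
        d = nd
    g = [0.0] * n
    for i in range(n):
        li = synthetic_div(d, xs[i])       # L_i(x) = D(x)/(x - x_i)
        w = ys[i] / poly_eval(li, xs[i])   # y_i / L_i(x_i)
        for k in range(n):
            g[k] += w * li[k]
    return g                               # coefficients of G(x), degree <= n-1

# Example: lagrange([1.0, 2.0, 3.0], [1.0, 4.0, 9.0]) ~ [0, 0, 1], i.e. G(x) = x^2.
```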
A second method commonly used for interpolation, see [3], is an iterative scheme. Letting D_i(x) = (x − x_1) ⋯ (x − x_i) and assuming we have already computed G_{i−1}(x) such that G_{i−1}(x_j) = y_j, 1 ≤ j ≤ i−1, we set

G_i(x) = G_{i−1}(x) + [(y_i − G_{i−1}(x_i)) / D_{i−1}(x_i)] D_{i−1}(x) .

Note that G_i(x_i) = y_i − G_{i−1}(x_i) + G_{i−1}(x_i) = y_i and G_i(x_j) = G_{i−1}(x_j) = y_j for 1 ≤ j ≤ i−1, since D_{i−1}(x_j) = 0. The computing time is O(i) for the ith iteration, since the evaluation of G_{i−1} and D_{i−1} at x_i both take O(i) steps, as does the multiplication of a constant times D_{i−1}(x). Thus the total time for this algorithm is also O(n²).
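A corresponding illustrative sketch of the iterative scheme, with polynomials again kept as coefficient lists (the names are not the paper's):

```python
# Illustrative sketch of the iterative scheme: add one point at a time.
# Polynomials are coefficient lists, lowest degree first.

def poly_eval(p, x):
    v = 0.0
    for c in reversed(p):
        v = v * x + c
    return v

def iterative_interp(xs, ys):
    g = [ys[0]]                      # G_1(x) = y_1
    d = [-xs[0], 1.0]                # D_1(x) = (x - x_1)
    for i in range(1, len(xs)):
        # G_i = G_{i-1} + (y_i - G_{i-1}(x_i)) / D_{i-1}(x_i) * D_{i-1}(x)
        c = (ys[i] - poly_eval(g, xs[i])) / poly_eval(d, xs[i])
        g += [0.0] * (len(d) - len(g))
        for k, dk in enumerate(d):
            g[k] += c * dk
        nd = [0.0] * (len(d) + 1)    # D_i(x) = D_{i-1}(x) * (x - x_i)
        for k, dk in enumerate(d):
            nd[k + 1] += dk
            nd[k] -= xs[i] * dk
        d = nd
    return g

# Example: iterative_interp([1.0, 2.0, 3.0], [1.0, 4.0, 9.0]) ~ [0, 0, 1].
```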
Finally, consider Newton's interpolation formula

G(x) = a_0 + a_1(x − x_1) + a_2(x − x_1)(x − x_2) + ⋯ + a_{n−1}(x − x_1) ⋯ (x − x_{n−1}),
where the a_i are determined by computing the divided differences of the (x_i, y_i), 1 ≤ i ≤ n, see [4], p. 429. For n = 4 we would have

y_1' = (y_2 − y_1)/(x_2 − x_1),    y_1'' = (y_2' − y_1')/(x_3 − x_1),    y_1''' = (y_2'' − y_1'')/(x_4 − x_1),
y_2' = (y_3 − y_2)/(x_3 − x_2),    y_2'' = (y_3' − y_2')/(x_4 − x_2),
y_3' = (y_4 − y_3)/(x_4 − x_3),

and a_0 = y_1, a_1 = y_1', a_2 = y_1'', a_3 = y_1'''. The total number of divided differences to be computed is

(n−1) + (n−2) + ⋯ + 2 + 1 = ½ n(n−1) = O(n²),

implying that O(n²) arithmetic operations are needed to compute the a_i, 0 ≤ i ≤ n−1.
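The divided-difference computation itself is a short double loop; an illustrative in-place sketch (names hypothetical):

```python
# Illustrative sketch: Newton coefficients a_0, ..., a_{n-1} by divided
# differences, O(n^2) operations.

def newton_coeffs(xs, ys):
    a = list(ys)
    n = len(xs)
    for level in range(1, n):                    # level = order of the difference
        for i in range(n - 1, level - 1, -1):    # update in place, highest index first
            a[i] = (a[i] - a[i - 1]) / (xs[i] - xs[i - level])
    return a                                     # a[0] = y_1, a[1] = y_1', ...

# Example: newton_coeffs([1.0, 2.0, 3.0, 4.0], [1.0, 8.0, 27.0, 64.0]) -> [1, 7, 6, 1],
# i.e. x^3 = 1 + 7(x-1) + 6(x-1)(x-2) + (x-1)(x-2)(x-3).
```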
Now we ask: can the computing times for these methods be reduced if we assume that we know the x_i in advance and that we can precompute any values based on these x_i's? Determining the coefficients in Newton's formula clearly relies on the y_i and requires O(n²) steps even if the differences x_i − x_j are precomputed for all i, j ≤ n. In the iterative method we could precompute D_i(x) for 1 ≤ i ≤ n as well as D_{i−1}(x_i) for 2 ≤ i ≤ n. However, the time needed to compute G_{i−1}(x_i) and the multiplication of the constant times D_{i−1}(x) still take O(i) steps, and hence no improvement can be effected in the overall computing time for this method. Finally, returning to Lagrange's formula, we may well precompute L_i(x)/L_i(x_i) for all i, 1 ≤ i ≤ n.
Nevertheless, the last stage requires n multiplications of y_i by L_i(x)/L_i(x_i) for 1 ≤ i ≤ n. Since degree(L_i(x)) = n−1, it follows that the time for Lagrange's method is also not reduced.

An algorithm, INTERP, is now presented which computes the interpolating polynomial in less than O(N²) steps, where we assume N values x_i, 1 ≤ i ≤ N, are given a priori. For simplicity we will often assume that N is of the form N = 2^n − 1, but the algorithm works perfectly well for any general N. By way of explanation of this algorithm, consider the 2^n − 1 points (x_i, y_i) displayed in a binary tree, e.g. if n = 4:
[Figure: the 15 points x_1, ..., x_15 displayed in a complete binary tree, with x_1 at the root, x_2, x_3 on the second level, x_4, ..., x_7 on the third and x_8, ..., x_15 on the fourth; the levels are labelled G_1(x), G_2(x), G_3(x), G_4(x).]
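In this arrangement, level i of the tree holds the points x_{2^{i−1}}, ..., x_{2^i−1}. A one-line helper (illustrative only) makes the indexing concrete:

```python
# Indices of the points sitting on level i of the binary-tree arrangement
# (1-based, as in the paper): 2**(i-1), ..., 2**i - 1.
def level_indices(i):
    return list(range(2 ** (i - 1), 2 ** i))

# level_indices(1) -> [1];  level_indices(2) -> [2, 3];  level_indices(3) -> [4, 5, 6, 7]
```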
Then, for each of the n levels in this tree, INTERP computes G_i(x) which "closely" interpolates the points on the ith level; that is, in our example G_2(x) would be computed as

G_2(x) = [(x − x_3) / ((x_2 − x_1)(x_2 − x_3)(x_2 − x_4) ⋯ (x_2 − x_15))] y_2
       + [(x − x_2) / ((x_3 − x_1)(x_3 − x_2)(x_3 − x_4) ⋯ (x_3 − x_15))] y_3 .

Then, to find G(x), INTERP uses 4 precomputed polynomials A_i[x_1, ..., x_15] such that

G(x) = A_1G_1 + A_2G_2 + A_3G_3 + A_4G_4 ,
and then G(x_j) = y_j for 1 ≤ j ≤ 15. Hence, the criteria for the A_i are precisely

A_i[x_{k_1}, ..., x_{k_2}] = D(x)/D_i(x) ,    (2)

where D(x) = (x − x_{k_1}) ⋯ (x − x_{k_2}) and D_i(x) is the product of the factors (x − x_j) taken over the points x_j on the ith level of the range; that is, A_i vanishes at every point of the range except those on level i. These polynomials are independent of the y_i. Thus they may be precomputed, and this is precisely what is done by algorithm PRE-COMPUTE, which will be presented later. Now we give the interpolation algorithm, which would initially be invoked as INTERP((x_i, y_i), k_1 = 1, k_2 = N):

Algorithm INTERP
Input: N = k_2 − k_1 + 1 points (x_i, y_i), k_1 ≤ i ≤ k_2, together with the precomputed constants a_i (the products ∏_{j≠i}(x_i − x_j) taken over all of the original points) and the polynomials A_i.
Output: G(x) such that G(x_i) = y_i, k_1 ≤ i ≤ k_2.
1) [Initialize] n ← 1 + ⌊log_2(k_2 − k_1 + 1)⌋; G ← 0;
2) [One point] If k_1 = k_2 then return y_{k_1}/a_{k_1};
3) [Interpolate each level and combine] For i ← 1, ..., n do
   3.1) e ← k_1 − 1 + 2^{i−1}; f ← min{k_1 − 1 + 2^i − 1, k_2};
   3.2) G_i ← INTERP((x_j, y_j), e ≤ j ≤ f);
   3.3) G ← G + G_i · A_i[x_{k_1}, ..., x_{k_2}];
4) [End] Return G; End.
Suppose we trace the algorithm for 7 points (x_i, y_i), 1 ≤ i ≤ 7:

INTERP(k_1 = 1, k_2 = 7);
  n ← 3; G^1 ← 0;
  i ← 1; INTERP(k_1 = k_2 = 1) → y_1/a_1;   G^1 ← (y_1/a_1) A_1[x_1, ..., x_7];
  i ← 2; INTERP(k_1 = 2, k_2 = 3);
    n ← 2; G^2 ← 0;
    i ← 1; INTERP(k_1 = k_2 = 2) → y_2/a_2;   G^2 ← (y_2/a_2) A_1[x_2, x_3];
    i ← 2; INTERP(k_1 = k_2 = 3) → y_3/a_3;   G^2 ← (y_2/a_2) A_1[x_2, x_3] + (y_3/a_3) A_2[x_2, x_3];
  G^1 ← (y_1/a_1) A_1[x_1, ..., x_7] + G^2 A_2[x_1, ..., x_7];
  i ← 3; INTERP(k_1 = 4, k_2 = 7);
    n ← 3; G^3 ← 0;
    i ← 1; INTERP(k_1 = k_2 = 4) → y_4/a_4;   G^3 ← (y_4/a_4) A_1[x_4, ..., x_7];
    i ← 2; INTERP(k_1 = 5, k_2 = 6);
      n ← 2; G^4 ← 0;
      i ← 1; INTERP(k_1 = k_2 = 5) → y_5/a_5;   G^4 ← (y_5/a_5) A_1[x_5, x_6];
      i ← 2; INTERP(k_1 = k_2 = 6) → y_6/a_6;   G^4 ← (y_5/a_5) A_1[x_5, x_6] + (y_6/a_6) A_2[x_5, x_6];
    G^3 ← (y_4/a_4) A_1[x_4, ..., x_7] + G^4 A_2[x_4, ..., x_7];
    i ← 3; INTERP(k_1 = k_2 = 7) → y_7/a_7;   G^3 ← G^3 + (y_7/a_7) A_3[x_4, ..., x_7];
  G^1 ← (y_1/a_1) A_1[x_1, ..., x_7] + G^2 A_2[x_1, ..., x_7] + G^3 A_3[x_1, ..., x_7];
  Return G^1.
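The recursion is compact enough to sketch in Python. The rendering below is illustrative only: `naive_tables` is a straightforward stand-in for what PRE-COMPUTE and the D'(x_i) shortcut described later would supply, and all names are hypothetical.

```python
# Illustrative sketch of the INTERP recursion.  Polynomials are coefficient
# lists, lowest degree first; point indices are 1-based as in the paper.
from math import floor, log2

def pmul(p, q):
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def padd(p, q):
    r = [0.0] * max(len(p), len(q))
    for i, c in enumerate(p):
        r[i] += c
    for i, c in enumerate(q):
        r[i] += c
    return r

def interp(x, y, k1, k2, a, A):
    # a[j] = product over all points k != j of (x_j - x_k);
    # A[(k1, k2, i)] = product of (x - x_j) over points of [k1, k2] NOT on level i.
    if k1 == k2:                                   # step (2)
        return [y[k1] / a[k1]]
    n = 1 + floor(log2(k2 - k1 + 1))               # step (1)
    G = [0.0]
    for i in range(1, n + 1):                      # step (3)
        e = k1 - 1 + 2 ** (i - 1)
        f = min(k1 - 1 + 2 ** i - 1, k2)
        Gi = interp(x, y, e, f, a, A)              # step (3.2)
        G = padd(G, pmul(Gi, A[(k1, k2, i)]))      # step (3.3)
    return G

def naive_tables(x, N):
    # Straightforward stand-ins for the preconditioned quantities.
    a = {}
    for j in range(1, N + 1):
        a[j] = 1.0
        for k in range(1, N + 1):
            if k != j:
                a[j] *= x[j] - x[k]
    A = {}
    def fill(k1, k2):
        n = 1 + floor(log2(k2 - k1 + 1))
        for i in range(1, n + 1):
            e = k1 - 1 + 2 ** (i - 1)
            f = min(k1 - 1 + 2 ** i - 1, k2)
            p = [1.0]
            for j in range(k1, k2 + 1):
                if not (e <= j <= f):
                    p = pmul(p, [-x[j], 1.0])
            A[(k1, k2, i)] = p
            if e != f:
                fill(e, f)
    fill(1, N)
    return a, A

# Example: the 7 points of y = x^2 at x = 1, ..., 7, as in the trace above.
x = {i: float(i) for i in range(1, 8)}
y = {i: x[i] ** 2 for i in range(1, 8)}
a, A = naive_tables(x, 7)
print(interp(x, y, 1, 7, a, A))   # ~ [0, 0, 1, 0, 0, 0, 0], i.e. G(x) = x^2
```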
Using the definition of A_i[x_{k_1}, ..., x_{k_2}] given in eq. (2) and substituting into the equation for G^1, we see that we have the Lagrange formula for the interpolation of 7 points (x_i, y_i), 1 ≤ i ≤ 7.

Let us now derive the computing time for algorithm INTERP. In order to develop an asymptotic bound, let I(2^n) be the time needed to interpolate 2^n − 1 points when initially we have k_1 = 1, k_2 = 2^n − 1. We are concerned with step (3), since this step clearly bounds the time for all the others. The time for one execution of step (3.3) is proportional to

deg(G_i A_i) + deg(G) ,

the first term being the time for the multiplication and the second term the time for the addition operation. Now deg(G_i) < 2^{i−1}, since it is a function of the 2^{i−1} points on the ith level, and by the definition of A_i, deg(A_i) = 2^n − 1 − 2^{i−1}; it follows that

2^n − 1 > deg(G_i A_i) ≥ deg(G)   for 1 ≤ i ≤ n.
Therefore, we have that the total time for algorithm INTERP is the sum of the times for steps (3.2) and (3.3), namely

I(2^n) = I(2^{n−1}) + ⋯ + I(1) + c Σ_{1≤i≤n} 2^n ,
I(2^{n−1}) = I(2^{n−2}) + ⋯ + I(1) + c Σ_{1≤i≤n−1} 2^{n−1} ,

so that

I(2^n) = 2[I(2^{n−2}) + ⋯ + I(1)] + cn2^n + c(n−1)2^{n−1}
       = 2²[I(2^{n−3}) + ⋯ + I(1)] + cn2^n + c(n−1)2^{n−1} + c(n−2)2^{n−1}
       = ⋯
       = 2^{n−1} I(1) + c(n2^n + 2^{n−1} n(n−1)/2)
       = O(n²2^n).
Thus, if N = 2^n we have that the time to interpolate N points using INTERP is O(N(log N)²).

To see that algorithm INTERP does produce the correct polynomial for 2^n − 1 points, let

D(x) = (x − x_1) ⋯ (x − x_{2^n−1}),
D_i(x) = (x − x_{2^{i−1}}) ⋯ (x − x_{2^i−1}),   i = 1, 2, ..., n,
a_j = (x_j − x_1) ⋯ (x_j − x_{j−1})(x_j − x_{j+1}) ⋯ (x_j − x_{2^n−1}),   1 ≤ j ≤ 2^n − 1,

and let us assume that

G_i(x) = Σ_{2^{i−1} ≤ j ≤ 2^i−1} (D_i(x)/(x − x_j)) (y_j/a_j) ,    A_i[x_1, ..., x_{2^n−1}] = D(x)/D_i(x)

by definition. Then, from the algorithm above,

G(x) = Σ_{1≤i≤n} G_i(x) A_i[x_1, ..., x_{2^n−1}]
     = Σ_{1≤i≤n} Σ_{2^{i−1} ≤ j ≤ 2^i−1} (D_i(x)/(x − x_j)) (y_j/a_j) (D(x)/D_i(x))
     = Σ_{1 ≤ j ≤ 2^n−1} (D(x)/(x − x_j)) (y_j/a_j) ,
which is precisely the Lagrange form of the interpolating polynomial as given in eq. (1).

Now let us examine an algorithm which determines all the necessary precomputed polynomials.

Algorithm PRE-COMPUTE
Input: N = k_2 − k_1 + 1 values x_i, k_1 ≤ i ≤ k_2.
Output: A_i[x_{k_1}, ..., x_{k_2}].
1) [Initialize] D(x) ← 1; n ← 1 + ⌊log_2(k_2 − k_1 + 1)⌋;
2) [Form products] For i ← 1, ..., n do
      e ← k_1 − 1 + 2^{i−1}; f ← min{k_1 − 1 + 2^i − 1, k_2};
      D_i(x) ← (x − x_e) ⋯ (x − x_f);
      D(x) ← D(x) · D_i(x);
3) [Create A_i and recurse] For i ← 1, ..., n do
      e ← k_1 − 1 + 2^{i−1}; f ← min{k_1 − 1 + 2^i − 1, k_2};
      If e ≠ f then call PRE-COMPUTE(x_j, e ≤ j ≤ f);
      A_i[x_{k_1}, ..., x_{k_2}] ← D(x)/D_i(x);
4) [End] Return; End.
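An illustrative transcription of PRE-COMPUTE in Python; classical polynomial multiplication and exact division stand in here for the FFT-based routines assumed in the analysis below, and the table layout matches the INTERP sketch given earlier.

```python
# Illustrative transcription of PRE-COMPUTE.  Polynomials are coefficient
# lists, lowest degree first.
from math import floor, log2

def pmul(p, q):
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def pdiv_exact(p, d):
    # Quotient of p by d, assuming d divides p exactly.
    q = [0.0] * (len(p) - len(d) + 1)
    rem = list(p)
    for k in range(len(q) - 1, -1, -1):
        q[k] = rem[k + len(d) - 1] / d[-1]
        for j, c in enumerate(d):
            rem[k + j] -= q[k] * c
    return q

def pre_compute(x, k1, k2, A):
    """Fill A[(k1, k2, i)] = D(x)/D_i(x) for every level i of the range [k1, k2]."""
    n = 1 + floor(log2(k2 - k1 + 1))
    D, Di = [1.0], {}
    for i in range(1, n + 1):                      # step 2: form products
        e, f = k1 - 1 + 2 ** (i - 1), min(k1 - 1 + 2 ** i - 1, k2)
        Di[i] = [1.0]
        for j in range(e, f + 1):
            Di[i] = pmul(Di[i], [-x[j], 1.0])
        D = pmul(D, Di[i])
    for i in range(1, n + 1):                      # step 3: create A_i and recurse
        e, f = k1 - 1 + 2 ** (i - 1), min(k1 - 1 + 2 ** i - 1, k2)
        if e != f:
            pre_compute(x, e, f, A)
        A[(k1, k2, i)] = pdiv_exact(D, Di[i])
    return A

# Example (compatible with the INTERP sketch above):
# A = pre_compute({i: float(i) for i in range(1, 8)}, 1, 7, {})
```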
Note that the A_i are computed in exactly the order for which they are needed by INTERP. What is the computing time for this algorithm? For N = 2^n − 1 points, any straightforward analysis would produce at least an O(N²) bound, since if we look just at computing the D_i(x) in step (2), its time is

Σ_{1≤i≤n} Σ_{1≤j≤2^{i−1}} 2(2j − 1) = O(2^{2n}) = O(N²).
Hence we must compute the D_i(x), D(x) and D(x)/D_i(x) a bit more cleverly. For this we will first need the fact that two N-degree polynomials can be multiplied in time O(N log N) using the fast Fourier transform and its convolution property, see [4], p. 427. Then we will use a binary tree structure to guide the multiplications, as in the following way for N = 2³ − 1 = 7:
[Figure: binary multiplication tree for N = 7; the linear factors (x − x_1), ..., (x − x_7) are combined pairwise, level by level, to form the D_i(x) and finally D(x).]

Thus, in general D_1(x), ..., D_n(x) and D(x) can be produced in

Σ_{1≤i≤n} c 2^{i−1}(i − 1) ≤ c 2^{n−1} Σ_{1≤i≤n} (i − 1) = O(n²2^{n−1}) = O(N(log N)²)

operations.
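A sketch of the product-tree idea; `pmul` below is classical O(m²) multiplication, whereas the bound above presumes an O(m log m) FFT-based product, so this is illustrative only.

```python
# Illustrative product tree: multiply the linear factors (x - x_1), ..., (x - x_m)
# pairwise, level by level.  With an FFT-based pmul this costs O(m (log m)^2).

def pmul(p, q):
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def product_tree(points):
    level = [[-xj, 1.0] for xj in points]      # leaves: the factors (x - x_j)
    while len(level) > 1:
        nxt = []
        for k in range(0, len(level) - 1, 2):  # combine neighbours pairwise
            nxt.append(pmul(level[k], level[k + 1]))
        if len(level) % 2:                     # odd one out is carried upward
            nxt.append(level[-1])
        level = nxt
    return level[0]                            # D(x) = (x - x_1)...(x - x_m)

# product_tree([1.0, 2.0]) -> [2.0, -3.0, 1.0], i.e. (x - 1)(x - 2) = x^2 - 3x + 2.
```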
Recalling that D(x) = (x − x_1) ⋯ (x − x_N), the above gives a new fast method for finding the values of the elementary symmetric functions of x_1, ..., x_N, i.e. the functions Σ x_{i_1}, Σ x_{i_1}x_{i_2}, ..., x_{i_1}x_{i_2} ⋯ x_{i_N}.
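For example, the coefficients of D(x) are, up to sign, exactly these functions; an illustrative check:

```python
# Illustration: the coefficients of D(x) = (x - x_1)...(x - x_N) are, up to sign,
# the elementary symmetric functions of x_1, ..., x_N.
# For x = 1, 2, 3:  (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6, so
#   e_1 = 1+2+3 = 6,  e_2 = 1*2 + 1*3 + 2*3 = 11,  e_3 = 1*2*3 = 6.

def elementary_symmetric(xs):
    d = [1.0]                                  # build D(x) one factor at a time
    for xj in xs:
        nd = [0.0] * (len(d) + 1)
        for k, c in enumerate(d):
            nd[k + 1] += c
            nd[k] -= xj * c
        d = nd
    n = len(xs)
    return [(-1) ** k * d[n - k] for k in range(1, n + 1)]   # e_1, ..., e_n

# elementary_symmetric([1.0, 2.0, 3.0]) -> [6.0, 11.0, 6.0]
```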
Then in PRE-COMPUTE the time to form the A_i in step (3) is

Σ_{1≤i≤n} c n 2^n = O(n²2^n) ,

using the fast Fourier transform to do exact polynomial division (each of the n divisions of D(x) by a D_i(x) takes O(n2^n) steps), and for the recursion in step (3) the time is

P(2^{n−1}) + P(2^{n−2}) + ⋯ + P(2) ,

where P(2^n) is the time to execute PRE-COMPUTE on 2^n points. Therefore, the total time is
P(2^n) = P(2^{n−1}) + ⋯ + P(2) + n²2^n
       = 2[P(2^{n−2}) + ⋯ + P(2)] + n²2^n + (n−1)²2^{n−1}
       = 2²[P(2^{n−3}) + ⋯ + P(2)] + n²2^n + 2^{n−1}[(n−1)² + (n−2)²]
       = ⋯ = O(2^n n³).

Thus for N = 2^n − 1 points the total time is O(N(log N)³).
Finally we note that to compute the N constants a_i = D(x)/(x − x_i) |_{x=x_i}, 1 ≤ i ≤ N, requires O(N²) arithmetic operations. However, we need not evaluate N separate polynomials but simply evaluate the derivative of D(x) at x_i, 1 ≤ i ≤ N. This can be seen by using the Taylor series expansion of D(x) about the point x_i,

D(x) = D(x_i) + Σ_{j≥1} D^{(j)}(x_i) (x − x_i)^j / j! .

Now D(x_i) = 0, and dividing by x − x_i we get

D(x)/(x − x_i) = D'(x_i) + (x − x_i) P(x) ,

and evaluating at x_i we have

D(x)/(x − x_i) |_{x=x_i} = D'(x_i) .
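An illustrative sketch of this shortcut: differentiate D(x) formally once and evaluate D'(x) at each point (the names are not the paper's).

```python
# Illustrative: the constants a_i = D'(x_i), obtained by differentiating D(x)
# once and evaluating the derivative at each point.

def poly_eval(p, x):
    v = 0.0
    for c in reversed(p):
        v = v * x + c
    return v

def derivative_constants(xs):
    d = [1.0]                                   # D(x) = prod (x - x_j)
    for xj in xs:
        nd = [0.0] * (len(d) + 1)
        for k, c in enumerate(d):
            nd[k + 1] += c
            nd[k] -= xj * c
        d = nd
    dprime = [k * d[k] for k in range(1, len(d))]   # formal derivative D'(x)
    return [poly_eval(dprime, xi) for xi in xs]     # a_i = D'(x_i)

# For xs = [1.0, 2.0, 3.0]: D(x) = (x-1)(x-2)(x-3) and
# derivative_constants(xs) -> [2.0, -1.0, 2.0]  (e.g. a_1 = (1-2)(1-3) = 2).
```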
In [1], p. 67, Borodin and Munro show how to evaluate an N-degree polynomial at N points in O(N^{1.91}) operations. Thus we really have reduced the entire interpolation problem to better than O(N²), but not really much better. Finally, then, let us summarize the results.
     Algorithm                    Computing time
1)   INTERP                       O(N(log N)²)
2)   PRE-COMPUTE                  O(N(log N)³)
3)   D'(x_i), 1 ≤ i ≤ N           O(N^{1.91})
In terms of practical computation, algorithms (2) and (3) are not at present feasible, since (2) depends on the fast Fourier transform for polynomial multiplication, which suffers greatly from truncation errors, and (3) depends upon iterating Strassen's method for fast matrix multiplication, for which we know of no existing implementation. Therefore, the more practical use of these methods will still be to compute a priori the functions produced in (2) and (3) using more conventional means. Finally, let me note that if there exists a relationship between the x_i, such as x_i = i, then D'(x_i) for 1 ≤ i ≤ N reduces to (−1)^{N−i}(i−1)!(N−i)!. For each value we can obtain this constant using one more multiplication if we have a table of factorials. Thus the total time for interpolation in this case would be better than O(N²), namely O(N(log N)³).
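As an illustrative check of this identity against the definition of the a_i, using a precomputed factorial table:

```python
# Illustrative: when x_i = i, the constant a_i = D'(i) equals
# (-1)**(N - i) * (i - 1)! * (N - i)!, available from a factorial table.
from math import prod

def derivative_constants_integer_nodes(N):
    fact = [1] * (N + 1)
    for k in range(1, N + 1):                 # factorial table 0!, ..., N!
        fact[k] = k * fact[k - 1]
    return [(-1) ** (N - i) * fact[i - 1] * fact[N - i] for i in range(1, N + 1)]

# Check against the definition a_i = prod_{j != i} (i - j) for N = 5:
N = 5
direct = [prod(i - j for j in range(1, N + 1) if j != i) for i in range(1, N + 1)]
assert derivative_constants_integer_nodes(N) == direct
```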
References

[1] A. Borodin and I. Munro, Evaluating Polynomials at Many Points, Information Processing Letters 1 (1971) 66.
[2] W.S. Brown, On Euclid's Algorithm and the Computation of Polynomial Greatest Common Divisors, J. ACM 18, 4 (1971) 478.
[3] G.E. Collins, The Calculation of Multivariate Polynomial Resultants, J. ACM 18, 4 (1971) 515.
[4] D.E. Knuth, The Art of Computer Programming, Vol. 2: Seminumerical Algorithms (Addison-Wesley, 1969).