An explicit solution for the cubic spline interpolation for functions of a single variable


Applied Mathematics and Computation 117 (2001) 251-255

B.S. Moon
Korea Atomic Energy Research Institute, P.O. Box 105, Yusong, Taejon 305-600, South Korea
E-mail address: [email protected]

Abstract

An algorithm for computing the cubic spline interpolation coefficients without solving the matrix equation involved is presented in this paper. It requires only O(n) multiplication or division operations to compute the inverse of the matrix, compared with O(n^2) or more operations for the Gauss elimination method. © 2001 Elsevier Science Inc. All rights reserved.

Keywords: Explicit solution; Cubic spline interpolation; B-splines; Inverse matrix

1. Introduction

Consider the problem of finding one-dimensional approximations on equally spaced points t_i of the interval [a, b] with functions

    S(t) = c_1 B_1(t) + c_2 B_2(t) + ... + c_n B_n(t),    a ≤ t ≤ b,                      (1)

where the B_j(t) are the cubic B-spline functions [1] defined on [t_{i-2}, t_{i+2}] by

    B_i(t) = \frac{1}{h^3}
    \begin{cases}
      (t - t_{i-2})^3 & \text{if } t \in [t_{i-2}, t_{i-1}], \\
      h^3 + 3h^2 (t - t_{i-1}) + 3h (t - t_{i-1})^2 - 3 (t - t_{i-1})^3 & \text{if } t \in [t_{i-1}, t_i], \\
      h^3 + 3h^2 (t_{i+1} - t) + 3h (t_{i+1} - t)^2 - 3 (t_{i+1} - t)^3 & \text{if } t \in [t_i, t_{i+1}], \\
      (t_{i+2} - t)^3 & \text{if } t \in [t_{i+1}, t_{i+2}], \\
      0 & \text{otherwise}
    \end{cases}                                                                            (2)

for i = 1, 2, ..., n with h = (b - a)/(n - 3) and t_i = a + (i - 1)h.


Given a set of (n - 2) points {(t_i, y_i) | i = 1, 2, ..., n - 2} and the derivatives y'(a) and y'(b) at the two end points, it is desired to find the c_i for i = 1, 2, ..., n such that S'(a) = y'(a), S'(b) = y'(b), and S(t_i) = y_i for i = 1, 2, ..., n - 2. This comprises a set of n linear equations Ac = b, where c = (c_1, c_2, ..., c_n)^T, b = (y'(a), y_1, y_2, ..., y_{n-2}, y'(b))^T, and the coefficient matrix A of size n × n is given by

    A = \begin{pmatrix}
      -3/h & 0 & 3/h & 0 & \cdots & & & 0 \\
      1 & 4 & 1 & 0 & \cdots & & & 0 \\
      0 & 1 & 4 & 1 & \cdots & & & 0 \\
      \vdots & & \ddots & \ddots & \ddots & & & \vdots \\
      0 & & \cdots & & 0 & 1 & 4 & 1 \\
      0 & & \cdots & & 0 & -3/h & 0 & 3/h
    \end{pmatrix}.                                                                         (3)

When the derivative constraints are deleted, i.e., the first and the last rows of A are left out, there is an easy and fast inversion algorithm [2]. In the following, we describe an algorithm to compute the inverse of A with the two rows included.
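For concreteness, the system Ac = b of (3) can be assembled as in the following sketch. This is our own illustration rather than code from the paper; the helper name spline_system and the use of NumPy are assumptions made here.

```python
import numpy as np

def spline_system(y, dya, dyb, h):
    """Assemble the n x n matrix A of Eq. (3) and the right-hand side
    b = (y'(a), y_1, ..., y_{n-2}, y'(b)) for uniform spacing h."""
    n = len(y) + 2                                    # number of coefficients c_1, ..., c_n
    A = np.zeros((n, n))
    A[0, 0], A[0, 2] = -3.0 / h, 3.0 / h              # derivative condition S'(a) = y'(a)
    for i in range(1, n - 1):                         # interpolation rows (1, 4, 1)
        A[i, i - 1:i + 2] = (1.0, 4.0, 1.0)
    A[n - 1, n - 3], A[n - 1, n - 1] = -3.0 / h, 3.0 / h   # derivative condition at b
    b = np.concatenate(([dya], y, [dyb]))
    return A, b
```

Solving this system directly (for example by Gauss elimination) is precisely what the algorithm of Section 2 avoids.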

2. An algorithm for computing the inverse

In this section, we describe an algorithm to compute the inverse of the n × n matrix A in (3). Note that if we multiply the first and the last rows by -h/3, compute the inverse of the resulting matrix, and multiply the first and the last columns of the computed inverse by -h/3, then we obtain the desired inverse. Thus, we assume that the first row of A is replaced by (1, 0, -1, 0, ..., 0) and the last row by (0, 0, ..., 1, 0, -1). To compute the inverse of the revised matrix A, first we define a sequence of numbers b_j by

    b_1 = 2,   b_2 = 4,   b_{j+2} = 4 b_{j+1} - b_j                                        (4)

for j = 1, 2, ..., n - 2. Next, we compute a matrix B = (b_{ij}) by

    b_{11} = b_{n-1},   b_{12} = -b_{n-3},
    b_{1j} = (-1)^{j+1} (b_{n-j+1} + b_{|n-j-2|+1}),   j = 3, 4, ..., n - 2,               (5)
    b_{1,n-1} = b_{1,n} = (-1)^n b_2,

and for i = 2, 3, ..., n - 1,

    b_{i1} = (-1)^{i+1} b_{n-i},   b_{i2} = (-1)^{i+2} b_{n-i},
    b_{ij} = (-1)^{i+j} (b_{n-|i-j|-2} + b_{|n-i-j+1|+1}),   j = 3, 4, ..., n - 2,         (6)
    b_{i,n-1} = b_{i,n} = (-1)^{n+i-1} b_{i-1},

and for i = n,

    b_{n,1} = (-1)^{n+1} b_2,   b_{n,2} = (-1)^{n+2} b_2,
    b_{n,j} = (-1)^{n+j} (b_j + b_{j-2}),   3 ≤ j ≤ n - 2,                                 (7)
    b_{n,n-1} = (-1)^{2n-1} b_{n-3},   b_{n,n} = (-1)^{2n-1} b_{n-1}.
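Read as a procedure, (4)-(7) translate directly into code. The sketch below is our own transcription, not code from the paper; the function names and the 1-based index padding are choices made here. It returns B together with d = 2(b_{n-1} - 2 b_{n-2}).

```python
def b_sequence(n):
    """b_1 = 2, b_2 = 4, b_{j+2} = 4*b_{j+1} - b_j; index 0 is padding so b[j] = b_j."""
    b = [0, 2, 4] + [0] * (n - 2)
    for j in range(1, n - 1):
        b[j + 2] = 4 * b[j + 1] - b[j]
    return b

def inverse_times_d(n):
    """Return (B, d) with B as in (5)-(7); (1/d)*B inverts the modified matrix A."""
    b = b_sequence(n)
    d = 2 * (b[n - 1] - 2 * b[n - 2])
    B = [[0] * (n + 1) for _ in range(n + 1)]        # row/column 0 unused
    B[1][1], B[1][2] = b[n - 1], -b[n - 3]           # first row, Eq. (5)
    for j in range(3, n - 1):
        B[1][j] = (-1) ** (j + 1) * (b[n - j + 1] + b[abs(n - j - 2) + 1])
    B[1][n - 1] = B[1][n] = (-1) ** n * b[2]
    for i in range(2, n):                            # rows 2, ..., n-1, Eq. (6)
        B[i][1] = (-1) ** (i + 1) * b[n - i]
        B[i][2] = (-1) ** (i + 2) * b[n - i]
        for j in range(3, n - 1):
            B[i][j] = (-1) ** (i + j) * (b[n - abs(i - j) - 2] + b[abs(n - i - j + 1) + 1])
        B[i][n - 1] = B[i][n] = (-1) ** (n + i - 1) * b[i - 1]
    B[n][1], B[n][2] = (-1) ** (n + 1) * b[2], (-1) ** (n + 2) * b[2]   # last row, Eq. (7)
    for j in range(3, n - 1):
        B[n][j] = (-1) ** (n + j) * (b[j] + b[j - 2])
    B[n][n - 1], B[n][n] = -b[n - 3], -b[n - 1]
    return B, d
```

Only the recurrence (4) and the later division by d involve arithmetic on the b_j; every entry of B is a signed copy or a sum of two of the at most n precomputed values, which is the source of the O(n) operation count claimed in the abstract.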

Note that the signs alternate in rows and in columns, except for the last column, where the elements have the same sign as the corresponding adjacent elements in the (n-1)th column. In the following, we will show that A^{-1} = (1/d) B, where d = 2(b_{n-1} - 2 b_{n-2}). To compute (1/d) B, one can simply replace b_j by b'_j = b_j / d in (5)-(7) to obtain a matrix B' which is the same as (1/d) B.

Lemma 1. The solution of the difference equation b_j = 4 b_{j-1} - b_{j-2} for j ≥ 3 with b_1 = 2, b_2 = 4 is given by b_j = (2 + √3)^{j-1} + (2 - √3)^{j-1} for j = 1, 2, ..., n.

Proof. The characteristic equation of the above difference equation is z^2 - 4z + 1 = 0, whose solutions are z = 2 + √3 and z = 2 - √3. Therefore, the solution of the difference equation is

    b_j = c_1 (2 + √3)^{j-1} + c_2 (2 - √3)^{j-1}.

By substituting b_1 = 2 and b_2 = 4 into the above relation, we obtain c_1 = c_2 = 1.

Lemma 2. If b_j = (2 + √3)^{j-1} + (2 - √3)^{j-1} for j = 1, 2, ..., n, then the b_j satisfy b_k b_l = b_{k+l-1} + b_{|k-l|+1}.

The routine proof, which expands the product and uses (2 + √3)(2 - √3) = 1, is omitted. Note that the relation in the middle of (6) can be written as

    b_{i,j} = (-1)^{i+j} b_{n-j} b_{i-1}   for i ≤ j < n - 2,
    b_{i,j} = (-1)^{i+j} b_{n-i} b_{j-1}   for j ≤ i < n - 1.                              (6a)

Lemma 3. Let d and the b_j be as defined above. Then the b_j, 3 ≤ j ≤ n - 3, satisfy

    b_{n-j} b_{j-2} - 4 b_{n-j} b_{j-1} + b_{n-j-1} b_{j-1} = -d.


Proof. The left-hand side is equal to b_{n-j} (b_{j-2} - 4 b_{j-1}) + b_{n-j-1} b_{j-1}, which equals -b_{n-j} b_j + b_{n-j-1} b_{j-1}. Therefore, it is equivalent to prove -b_{n-j} b_j + b_{n-j-1} b_{j-1} = -d for 2 ≤ j ≤ n - 3. For j = 2, the left-hand side is equal to

    -4 b_{n-2} + 2 b_{n-3} = -4 b_{n-2} + 2 (4 b_{n-2} - b_{n-1}) = -2 (b_{n-1} - 2 b_{n-2}) = -d,

which is the right-hand side. Assume that the relation is true for all j with j ≤ r < n - 3; then for j = r + 1, the left-hand side is

    -b_{r+1} b_{n-(r+1)} + b_r b_{n-r-2} = -(4 b_r - b_{r-1}) b_{n-r-1} + b_r b_{n-r-2}
                                         = -b_r (4 b_{n-r-1} - b_{n-r-2}) + b_{r-1} b_{n-r-1}
                                         = -b_r b_{n-r} + b_{r-1} b_{n-r-1},

which is equal to -d by the induction hypothesis.
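The three lemmas are easy to spot-check numerically. The following self-contained snippet is our own illustration and is not part of the paper; it verifies the identities for a small n.

```python
from math import isclose, sqrt

n = 12
b = [0, 2, 4] + [0] * (n - 2)            # index 0 is padding so that b[j] = b_j
for j in range(1, n - 1):
    b[j + 2] = 4 * b[j + 1] - b[j]
d = 2 * (b[n - 1] - 2 * b[n - 2])

# Lemma 1: closed form of the recurrence
assert all(isclose(b[j], (2 + sqrt(3)) ** (j - 1) + (2 - sqrt(3)) ** (j - 1))
           for j in range(1, n + 1))
# Lemma 2: b_k * b_l = b_{k+l-1} + b_{|k-l|+1}
assert all(b[k] * b[l] == b[k + l - 1] + b[abs(k - l) + 1]
           for k in range(1, n + 1) for l in range(1, n + 1) if k + l - 1 <= n)
# Lemma 3: the stated combination equals -d for 3 <= j <= n - 3
assert all(b[n - j] * b[j - 2] - 4 * b[n - j] * b[j - 1] + b[n - j - 1] * b[j - 1] == -d
           for j in range(3, n - 2))
```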



Theorem 1. Let B = (b_{ij}), where the b_{ij} are as defined above. Then (1/d) B is the inverse of A, i.e., BA = dI, where I is the identity matrix.

Proof. Let C = (c_{ij}) be the product matrix BA. We are to show C = dI. For elements in the first row of C with 3 < j ≤ n - 2, we have

    c_{1,j} = b_{1,j-1} + 4 b_{1,j} + b_{1,j+1}
            = (-1)^j (b_{n-j+2} + b_{n-j} - 4 b_{n-j+1} - 4 b_{n-j-1} + b_{n-j} + b_{n-j-2})
            = (-1)^j (b_{n-j+2} - 4 b_{n-j+1} + b_{n-j}) + (-1)^j (b_{n-j} - 4 b_{n-j-1} + b_{n-j-2}) = 0.

For those elements satisfying 3 < i < j < n - 2, we use (6a) to write

    c_{i,j} = b_{i,j-1} + 4 b_{i,j} + b_{i,j+1} = (-1)^{i+j-1} b_{i-1} (b_{n-j+1} - 4 b_{n-j} + b_{n-j-1}) = 0.

Similarly, for elements satisfying 3 < j < i < n - 1, we have

    c_{i,j} = b_{i,j-1} + 4 b_{i,j} + b_{i,j+1} = (-1)^{i+j-1} b_{n-i} (b_{j-2} - 4 b_{j-1} + b_j) = 0,

and for the diagonal elements with 3 < j < n - 2,

    c_{j,j} = b_{j,j-1} + 4 b_{j,j} + b_{j,j+1} = (-1)^{2j-1} (b_{n-j} b_{j-2} - 4 b_{n-j} b_{j-1} + b_{n-j-1} b_{j-1}) = -(-d) = d

by Lemma 3. A routine check for the rest of the elements is omitted.



3. Conclusion

In the above, we showed that to compute the inverse of the matrix we need to compute the b_j by using (4), which requires at most n multiplications, and another n multiplications to compute b'_j = b_j / d. The matrix B is formed using the b'_j without any multiplication or division operation. To adjust the first and the last columns by the factor -h/3, we need another 2n multiplications. Thus, we need altogether at most 4n multiplications to compute the inverse. The algorithm was tested with an example of n = 67; we found that ‖A A^{-1} - I‖_∞ = 0.22 × 10^{-15} for the inverse A^{-1} computed both by the Gauss elimination method and by the proposed method.
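A residual check of the kind quoted above can be reproduced along the following lines. This sketch is our own; it assumes the spline_system and inverse_times_d helpers outlined in the earlier sketches, and it uses NumPy only for the final accuracy check.

```python
import numpy as np

n = 67
h = 1.0 / (n - 3)                                     # e.g. [a, b] = [0, 1]
A, _ = spline_system(np.zeros(n - 2), 0.0, 0.0, h)    # only the matrix A of (3) is needed

B, d = inverse_times_d(n)
A_inv = np.array([[B[i][j] / d for j in range(1, n + 1)] for i in range(1, n + 1)])
A_inv[:, 0] *= -h / 3.0                               # undo the scaling of the first row of A
A_inv[:, -1] *= -h / 3.0                              # undo the scaling of the last row of A

print(np.max(np.abs(A @ A_inv - np.eye(n))))          # expected to be of the order of 1e-15
```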

References

[1] P.M. Prenter, Splines and Variational Methods, Wiley, New York, 1975, pp. 78-81.
[2] F. Yamaguchi, A new curve fitting method using a CRT computer display, Computer Graphics and Image Processing 7 (1978) 425-437.