An improvement to Ostrowski root-finding method




Applied Mathematics and Computation 173 (2006) 450–456 www.elsevier.com/locate/amc

Miquel Grau *, José Luis Díaz-Barrero
Technical University of Catalonia, Department of Applied Mathematics II and III, Jordi Girona 1-3, Omega, 08034 Barcelona, Spain

Abstract

An improvement of the iterative method based on Ostrowski's method for computing solutions of nonlinear equations, which increases the local order of convergence, is suggested. The strategy adopted here gives a new iteration function at the cost of one additional evaluation of the function per step. The new method also has a smaller computational cost when adaptive multi-precision arithmetic is used. Numerical results computed with a floating point system carrying up to 200 decimal digits support this claim.
© 2005 Elsevier Inc. All rights reserved.

Keywords: Nonlinear equations; Iterative methods; Order of convergence; Effective efficiency

1. Introduction

Generalizing the Newton–Secant method for solving nonlinear equations, we obtain a family of methods of which the two with the best order of convergence are precisely the Newton–Secant and Ostrowski methods (see [1]). We also

* Corresponding author. E-mail addresses: [email protected] (M. Grau), [email protected] (J.L. Díaz-Barrero).




give a new iterative method which generalizes the computation of simple zeros of nonlinear equations by means of Ostrowski's method. The variant that we present here consists in adding the evaluation of the function at one more point in the iterative procedure of Ostrowski's method. As a consequence, the order of convergence of the method increases. This technique was applied successfully to the Cauchy method in [2] and to the Hermite method in [3]. Moreover, nowadays we need to compute more digits, more quickly and with more precision. This fact has led us to compute with an interactive symbolic mathematics program such as Maple [4,5], using adaptive multi-precision arithmetic with a floating point representation of up to 200 decimal digits of mantissa. The numerical results on a set of test functions [6,7] seem to show that, at least on this set of problems, the new method works better not only in order but also in efficiency.

2. Notation and basic results

Let y = f(x) be a function with a real root a and let (x_k)_{k \in N} be a sequence of real numbers that converges towards a. We say that the order of convergence is q if e_k = x_k - a and there exists q \in R^+ such that, as k tends to infinity, e_{k+1}/e_k^q \to C with C \neq 0, \infty. Recall that a is a simple root when f'(a) \neq 0 in a neighborhood I_a of a. We also define A_j = f^{(j)}(a)/(j! f'(a)), j \geq 2.

In what follows, we describe a constructive procedure that leads to improved iteration methods obtained from the known composition of the Newton and generalized Secant methods. As a first step, we use the classical Newton method, namely z_k = g_2(x_k) = x_k - f(x_k)/f'(x_k). The generalized Secant method consists in considering the point (z_k, f(z_k)) and a generic point selected from the segment joining the points (z_k, f(z_k)) and (x_k, f(x_k)), that is, the point (d x_k + (1 - d) z_k, c f(x_k)). The family of iteration functions obtained from this Newton–Secant-like composition is

    \phi(x) = z - f(z) \frac{d(x - z)}{c f(x) - f(z)},                    (1)

where z = g_2(x) and x_{k+1} = \phi(x_k). From the Taylor expansions of f(x_k), f(x_k)/f'(x_k) and f(z_k) for this family [2], we obtain the following difference equation:

    e_{k+1} = A_2 \left(1 - \frac{d}{c}\right) e_k^2 + O(e_k^3).          (2)



An improvement of the order in (2) is obtained if we set d = c. Then

    e_{k+1} = A_2^2 \, \frac{2c - 1}{c} \, e_k^3 + O(e_k^4).              (3)

In the case d = c = 1 we have the Newton–Secant method, with local order of convergence equal to 3; namely, e_{k+1} = A_2^2 e_k^3 + O(e_k^4). For d = c = 1/2 we obtain the Ostrowski method, with local order of convergence equal to 4; that is, e_{k+1} = A_2 (A_2^2 - A_3) e_k^4 + O(e_k^5). Note that the Ostrowski method is the only member of the family defined by (1) that attains the maximum order, namely q = 4.

For the family of iterative methods given in (1) we define the computational efficiency of an iterative method by E = q^{1/h} [9], where q is the order of the method and h is the cost of evaluating the function and its derivatives. If at each step only m functions are evaluated and we assume that all the evaluations have the same cost, namely one, we have E = q^{1/m}. For the cases d = c = 1 and d = c = 1/2 we have two methods that need three evaluations (m = 3): two evaluations of the function and one of its derivative. Hence, the computational efficiency of these two iterative methods is 3^{1/3} = 1.4422 and 4^{1/3} = 1.5874, respectively.
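For illustration, the following sketch (in Python with the mpmath library, rather than the Maple system used for the experiments in Section 4) implements one step of the family (1) and prints the successive errors for the two distinguished members, d = c = 1 (Newton–Secant, order 3) and d = c = 1/2 (Ostrowski, order 4). The test function f(x) = x^3 - 2 and the starting point are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch of the Newton-Secant family (1):
#   phi(x) = z - f(z) * d*(x - z) / (c*f(x) - f(z)),  with  z = x - f(x)/f'(x).
from mpmath import mp, mpf, nstr

mp.dps = 120  # enough digits to observe third- and fourth-order decay

def family_step(f, df, x, d, c):
    z = x - f(x) / df(x)                        # Newton predictor z = g2(x)
    return z - f(z) * d * (x - z) / (c * f(x) - f(z))

# Sample problem (an assumption, not from the paper): f(x) = x^3 - 2.
f, df = (lambda x: x**3 - 2), (lambda x: 3 * x**2)
root = mpf(2) ** (mpf(1) / 3)

for d, c, name in [(1, 1, "Newton-Secant (d=c=1)  "),
                   (mpf(1) / 2, mpf(1) / 2, "Ostrowski     (d=c=1/2)")]:
    x, errors = mpf('1.2'), []
    for _ in range(3):
        x = family_step(f, df, x, d, c)
        errors.append(abs(x - root))
    # the error exponent roughly triples (d=c=1) or quadruples (d=c=1/2) per step
    print(name, [nstr(e, 3) for e in errors])
```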

3. Main result

A well-known method [8,9] that improves the quadratic order of convergence of the Newton method is the following:

    z_k = g_2(x_k),
    x_{k+1} = x_k - b\,(f(x_k) + f(z_k)),                                 (4)

where b = f'(x_k)^{-1}; it converges locally at least cubically. We introduce a new factor, called \mu, that plays the same role as b in (4). It arises from Ostrowski's method and is defined by

    \mu = \frac{x_k - z_k}{2 f(z_k) - f(x_k)}.                            (5)

It is obtained from the last term of Eq. (1) when d = c = 1/2. Now the following question arises: is it possible to find a method similar to Ostrowski's with an improved order of convergence? If we consider

    z_k = g_2(x_k),
    x_{k+1} = z_k + \mu f(z_k),                                           (6)
    \tilde{x}_{k+1} = x_{k+1} + \mu f(x_{k+1}),

the answer is yes and the results are presented in the following theorem.



Theorem 1. Let f : I \to R denote a real valued function defined on I, a neighborhood of a simple root a of f(x) (f(a) = 0, f'(a) \neq 0). Assume that f has derivatives up to the fourth order in I. Then the iteration function defined by (6) has a local order of convergence equal to 6 and the error satisfies the following difference equation:

    \tilde{e}_{k+1} = A_2 (A_3^2 - 3 A_2^2 A_3 + 2 A_2^4) e_k^6 + O(e_k^7),

where \tilde{e}_{k+1} = \tilde{x}_{k+1} - a and A_j = f^{(j)}(a)/(j! f'(a)), j = 2, 3, 4.

Proof. In a neighborhood of a, substituting the Taylor developments of f(x_k), f(x_k)/f'(x_k), f(z_k) and \mu into (6), we obtain

    e_{k+1} = A_2 (A_2^2 - A_3) e_k^4 + (2 A_2 (2 A_2^3 - 4 A_2 A_3 + A_4) + 2 A_3^2) e_k^5 + O(e_k^6).

The value of \mu is given by

    \mu = \frac{1}{f'(a)} (1 + A_2 (A_2^2 - A_3) e_k^2) + O(e_k^3),

and yields the following difference equation:

    \tilde{e}_{k+1} = e_{k+1} + \mu (f(z_k) + f(x_{k+1})).

The Taylor series of f(z_k) and f(x_{k+1}) give the difference equation of the error claimed in the statement, and the proof is complete. □

Notice that the local order of convergence improves from q = 4 for the older method to q = 6 for the one presented here. The computational efficiency, after adding one unit to the cost for the additional evaluation of the function, goes from 4^{1/3} = 1.5874 to 6^{1/4} = 1.5651. However, these values of E are only theoretical and must be revised depending on the relative cost of evaluating the function, the cost of its derivative and the arithmetic used in the computation. With the possibility of varying the number of digits of the computation, using a mathematical computation system such as Maple or a multiprecision adaptation of the Fortran language [10], it is possible to obtain a better practical efficiency for the new method than for the old one, which the theoretical value of E at first does not suggest.
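To make the comparison concrete, the following Python/mpmath sketch (an illustrative stand-in for the authors' Maple computations) implements Ostrowski's method, denoted G4 as in Section 4, and the scheme (6), denoted G6, estimates their observed local orders, and checks the measured ratio \tilde{e}_{k+1}/e_k^6 against the error constant of Theorem 1. The sample function (resembling f11 of Table 1), the starting point and the working precision are assumptions made for the example.

```python
# Illustrative sketch (Python/mpmath, not the authors' Maple code):
# Ostrowski's method (G4) and the improved scheme (6) (G6), with an empirical
# check of the local orders and of the error constant in Theorem 1.
from mpmath import mp, mpf, log, exp, diff, findroot

mp.dps = 400                                   # enough digits to see sixth-order decay

def g4_step(f, df, x):
    """One Ostrowski step: Newton predictor z, then x_new = z + mu*f(z), see (5)."""
    z  = x - f(x) / df(x)
    mu = (x - z) / (2 * f(z) - f(x))
    return z + mu * f(z)

def g6_step(f, df, x):
    """One step of scheme (6): the Ostrowski step plus one extra correction that
    reuses mu, at the cost of a single additional evaluation of f."""
    z  = x - f(x) / df(x)
    mu = (x - z) / (2 * f(z) - f(x))
    y  = z + mu * f(z)                         # Ostrowski point x_{k+1}
    return y + mu * f(y)                       # corrected point x~_{k+1}

def observed_orders(step, f, df, x0, a, n=3):
    """Successive estimates log e_{k+1} / log e_k of the local order."""
    x, e = mpf(x0), []
    for _ in range(n):
        x = step(f, df, x)
        e.append(abs(x - a))
    return [float(log(e[k + 1]) / log(e[k])) for k in range(n - 1)]

# Sample problem (an assumption): f(x) = exp(x) - 4x^2, root near 0.7148.
f  = lambda x: exp(x) - 4 * x**2
df = lambda x: exp(x) - 8 * x
a  = findroot(f, 0.71)                         # reference root to ~400 digits

print("G4 orders:", observed_orders(g4_step, f, df, 0.75, a))   # tends to 4
print("G6 orders:", observed_orders(g6_step, f, df, 0.75, a))   # tends to 6

# Error constant of Theorem 1: A_2 (A_3^2 - 3 A_2^2 A_3 + 2 A_2^4).
A2 = diff(f, a, 2) / (2 * df(a))
A3 = diff(f, a, 3) / (6 * df(a))
C6 = A2 * (A3**2 - 3 * A2**2 * A3 + 2 * A2**4)
e0 = mpf('1e-8')                               # small initial error
print("measured  e~_{k+1}/e_k^6:", float((g6_step(f, df, a + e0) - a) / e0**6))
print("predicted constant      :", float(C6))
```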

4. Numerical experiments and comparison

We have tested the two preceding methods on thirteen functions using the Maple computer algebra system. We have computed the root of each function



for the initial approximation x0, and at each step of the iterative method we have defined the length of the multi-precision floating point arithmetic by

    Digits := q [-log_{10} e_k + 1],

where q is the order of the method, which extends the length of the mantissa of the arithmetic, and [x] is the largest integer \leq x. The iterative method is stopped when |x_k - a| < \eta, where the tolerance \eta has been chosen equal to \eta = 10^{-200}, and a is the exact root computed with 210 significant digits. If in the last step of an iterative method it is necessary to increase the number of digits beyond 200, then this is done. The test functions are the same as those presented by Alefeld and Potra in [6] and by Costabile, Gualtieri and Luceri in [7]. These methods require a single initial approximation x0. Table 1 shows the expression of the functions, the initial approximation (the same for both methods), and the root with five significant digits. Table 2 shows the number of iterations needed by each method (Ostrowski's method G4 and the improved one G6) for each function to compute the root with the described precision. The last column of Table 2 gives the total number of evaluations of the functions and their derivatives (TNFE). The cost, or computational time, of these methods is given in parentheses in Table 2. The root is computed four thousand times; the unit of time is one second and the results presented here are averages over these computations, with the figures rounded. Although the TNFE is greater for the G6 method, the total cost in time is smaller.
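As a rough illustration of this adaptive strategy, the sketch below reproduces it in Python/mpmath instead of the authors' Maple session, taking the precision rule Digits := q[-log_{10} e_k + 1] as reconstructed above and measuring the error against a reference root computed with 210 digits. The test function is f13 of Table 1 as reconstructed there; the details of the loop are assumptions made for the example.

```python
# Sketch of the adaptive multi-precision strategy (Python/mpmath stand-in for the
# authors' Maple session): the working precision is reset at every step from the
# current error, and the iteration stops once |x_k - a| < 10^-200.
from mpmath import mp, mpf, log, floor, exp, cos, sin, findroot, nstr

Q = 6                                          # order of the G6 method

def g6_step(f, df, x):
    z  = x - f(x) / df(x)
    mu = (x - z) / (2 * f(z) - f(x))
    y  = z + mu * f(z)
    return y + mu * f(y)

# f13 of Table 1 (as reconstructed): f(x) = exp(-x) + cos(x), x0 = 1.5.
f  = lambda x: exp(-x) + cos(x)
df = lambda x: -exp(-x) - sin(x)

mp.dps = 210
ETA = mpf(10) ** (-200)                        # stopping tolerance eta = 10^-200
a = findroot(f, mpf('1.75'))                   # reference root with 210 significant digits

x, k = mpf('1.5'), 0
err = abs(x - a)
while err >= ETA:
    # Digits := q * [ -log10(e_k) + 1 ], capped by the accuracy of the reference root
    mp.dps = max(30, min(210, Q * (int(floor(-log(err, 10))) + 1)))
    x = g6_step(f, df, x)
    err = abs(x - a)
    k += 1
    print("step", k, " dps =", mp.dps, " |x_k - a| ~", nstr(err, 3))
```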

Table 1
Test functions, their initial approximation x0 and their roots

Test functions                                        x0        Root
f1(x)  = x^3 + 1                                      -0.9      -1.0
f2(x)  = 2x e^{-c} + 1 - 2e^{-cx}   [c = 5]            0.20      0.13826
f3(x)  = 2x e^{-c} + 1 - 2e^{-cx}   [c = 10]           0.10      0.69314 · 10^{-1}
f4(x)  = 2x e^{-c} + 1 - 2e^{-cx}   [c = 20]           0.05      0.34657 · 10^{-1}
f5(x)  = [1 + (1 - c)^4] x - (1 - cx)^4   [c = 5]      0.05      0.36171 · 10^{-2}
f6(x)  = [1 + (1 - c)^4] x - (1 - cx)^4   [c = 10]     0.005     0.15147 · 10^{-3}
f7(x)  = [1 + (1 - c)^4] x - (1 - cx)^4   [c = 20]     0.01      0.76686 · 10^{-5}
f8(x)  = x^2 + sin(x/5) - 1/4                          0.50      0.40999
f9(x)  = (5x - 1)/(4x)                                 0.001     0.2
f10(x) = x - 3 ln(x)                                   2.00      1.8572
f11(x) = exp(x) - 4x^2                                 0.75      0.71481
f12(x) = exp(x) - 4x^2                                 4.25      4.3066
f13(x) = exp(-x) + cos(x)                              1.50      1.7461

Table 2
Iteration number for \eta = 10^{-200}, computational time (in parentheses) and total number of function evaluations (TNFE)

      f1      f2        f3      f4        f5        f6      f7        f8        f9       f10      f11       f12      f13     TNFE
G4    4 (7)   4 (12.5)  5 (16)  5 (16.5)  4 (10.5)  4 (13)  4 (13.5)  4 (12.5)  2 (1.5)  4 (8.5)  5 (12.5)  4 (9.5)  4 (10)  159
G6    3 (6)   4 (16)    4 (15)  4 (15.5)  3 (8.5)   3 (12)  3 (12)    3 (11)    2 (1.5)  3 (7)    4 (11)    3 (7.5)  3 (8)   168



5. Concluding remarks

A variant of the Ostrowski iterative method based on an additional evaluation of the function has been given. We can conclude that the G6 method works better than the G4 method, both according to the theoretical analysis of the order and with respect to practical efficiency. Note also the importance of using arithmetics that allow us to define dynamically the number of digits needed in the computations. Many numerical applications use high precision in their computations, and in these types of applications high-order methods are important. The results of these numerical experiments show that the method introduced here, combined with adaptive multi-precision floating point arithmetic, attains low computing times. Finally, we conclude that the method presented in this paper is competitive with other recognized efficient equation solvers such as the Ostrowski method [1].

References

[1] A.M. Ostrowski, Solution of Equations and Systems of Equations, Academic Press, New York, 1960.
[2] M. Grau, M. Noguera, A variant of Cauchy's method with accelerated fifth-order convergence, Applied Mathematics Letters 17 (2004) 509–517.
[3] M. Grau, An improvement to the computing of nonlinear equation solutions, Numerical Algorithms 34 (2003) 1–12.
[4] D. Betounes, M. Redfern, Mathematical Computing, Springer-Verlag, Berlin, 2002.
[5] F. Garvan, The Maple Book, Chapman & Hall, London, 2001.
[6] G.E. Alefeld, F.A. Potra, Some efficient methods for enclosing simple zeros of nonlinear equations, BIT 32 (1992) 334–344.
[7] F. Costabile, M.I. Gualtieri, R. Luceri, A new iterative method for the computation of the solutions of nonlinear equations, Numerical Algorithms 28 (2001) 87–100.
[8] J. Stoer, R. Bulirsch, Introduction to Numerical Analysis, Springer-Verlag, Berlin, 1983.
[9] J.F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, New Jersey, 1964.
[10] D.H. Bailey, Multiprecision translation and execution of Fortran programs, ACM Transactions on Mathematical Software 19 (3) (1993) 288–319.