Journal of Computational and Applied Mathematics 201 (2007) 208–216
An iterative method for solving nonlinear equations

Nenad Ujević

Department of Mathematics, University of Split, Teslina 12/III, 21000 Split, Croatia

Received 15 March 2005
Abstract

An iterative method for finding a solution of the equation f(x) = 0 is presented. The method is based on some specially derived quadrature rules. It is shown that the method can give better results than the Newton method.
© 2006 Elsevier B.V. All rights reserved.

MSC: 65H05

Keywords: Nonlinear equations; Method; Quadrature rules; Convergence
1. Introduction

One of the most basic problems in numerical analysis (and one of the oldest numerical approximation problems) is that of finding values of the variable x which satisfy f(x) = 0 for a given function f. The Newton method is the most popular method for solving such equations; some historical remarks about this method can be found in [15]. In recent years a number of authors have considered methods for solving nonlinear equations; for example, see [1–4,6,8–10]. It is well known that some of these methods can be obtained using Taylor or interpolating polynomials. These polynomials give approximations of functions; if we integrate these approximations we get corresponding quadrature formulas, for example, the Newton–Cotes formulas. Certain recently obtained results in numerical integration show that error bounds for quadrature formulas are, generally speaking, better than error bounds for the corresponding approximating polynomials. A natural question is whether we can use these quadrature formulas to obtain methods for solving nonlinear equations. It is already known that quadrature formulas and nonlinear equations are connected; for example, see [9]. In this paper we give a new approach to this subject: we derive a method for solving nonlinear equations using some specially derived quadrature formulas. This approach is different from the above-mentioned connection between quadrature formulas and nonlinear equations.

In Section 2 we derive two quadrature formulas and use them to obtain the method for solving nonlinear equations. In that section we also give an algorithm for the obtained method and an algorithm for the Newton method. In Section 3 we consider some convergence results. In Section 4 we give a few numerical examples and compare this method with the Newton method. We show that the new method can give better results than the Newton method.
Moreover, we show that the new method can be applied in some cases where the Newton method fails to give the desired results.
2. The method

We define the mapping
$$K_1(x,t)=\begin{cases} t-\dfrac{5a+b}{6}, & t\in[a,x],\\[4pt] t-\dfrac{a+5b}{6}, & t\in(x,b], \end{cases}\qquad(1)$$
where $x\in[a,b]$. We also suppose that $f\in C^1(a,b)$. Integrating by parts, we obtain
$$\int_a^b K_1(x,t)f'(t)\,dt=\int_a^x\left(t-\frac{5a+b}{6}\right)f'(t)\,dt+\int_x^b\left(t-\frac{a+5b}{6}\right)f'(t)\,dt
=\frac{b-a}{6}\,[f(a)+4f(x)+f(b)]-\int_a^b f(t)\,dt.\qquad(2)$$
If we set $x=(a+b)/2$ in (2) then we get the well-known Simpson quadrature rule
$$\int_a^b f(t)\,dt=\frac{b-a}{6}\left[f(a)+4f\!\left(\frac{a+b}{2}\right)+f(b)\right]-\int_a^b K_1\!\left(\frac{a+b}{2},t\right)f'(t)\,dt.\qquad(3)$$
The quadrature rule (3) is considered, for example, in [7,11]. We have
$$\left|\int_a^b K_1(x,t)f'(t)\,dt\right|\le\frac{(b-a)^2}{6}\,\|f'\|_\infty.\qquad(4)$$
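As a sanity check, identity (2) can be verified numerically. The sketch below assumes f(t) = e^t on [a, b] = [0, 1] with the interior point x = 0.3 (illustrative choices, not taken from the paper) and uses scipy.integrate.quad, splitting the integral at x where the kernel K1 has a jump.

```python
# A minimal numerical check of identity (2), assuming f(t) = exp(t) on
# [a, b] = [0, 1] and the interior point x = 0.3 (illustrative choices only).
import math
from scipy.integrate import quad

a, b, x = 0.0, 1.0, 0.3
f = math.exp            # f(t) = e^t, hence f'(t) = e^t as well
fp = math.exp

def K1(t):
    # Peano kernel (1): t - (5a+b)/6 on [a, x], t - (a+5b)/6 on (x, b]
    return t - (5 * a + b) / 6 if t <= x else t - (a + 5 * b) / 6

# Left-hand side of (2), split at x because K1 has a jump there
lhs = quad(lambda t: K1(t) * fp(t), a, x)[0] + quad(lambda t: K1(t) * fp(t), x, b)[0]

# Right-hand side of (2): three-point rule minus the exact integral
rhs = (b - a) / 6 * (f(a) + 4 * f(x) + f(b)) - quad(f, a, b)[0]

print(lhs, rhs)         # the two values agree to quadrature accuracy
```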
We also define the mapping
$$K_2(x,t)=\begin{cases} t-a, & t\in[a,x],\\ t-b, & t\in(x,b], \end{cases}\qquad(5)$$
where $x\in[a,b]$. Integrating by parts, we obtain
$$\int_a^b K_2(x,t)f'(t)\,dt=\int_a^x(t-a)f'(t)\,dt+\int_x^b(t-b)f'(t)\,dt=(b-a)f(x)-\int_a^b f(t)\,dt.\qquad(6)$$
We also have
$$\int_a^b K_2(x,t)\,dt=(b-a)\left(x-\frac{a+b}{2}\right)\qquad(7)$$
and
$$\int_a^b f'(t)\,dt=f(b)-f(a).\qquad(8)$$
From (6)–(8) it follows that
$$\int_a^b\left[K_2(x,t)-\frac{1}{b-a}\int_a^b K_2(x,s)\,ds\right]f'(t)\,dt=(b-a)f(x)-\left(x-\frac{a+b}{2}\right)[f(b)-f(a)]-\int_a^b f(t)\,dt.\qquad(9)$$
If we set $x=(a+b)/2$ in (9) then we get the well-known mid-point quadrature rule, and if we set $x=a$ then we get the well-known trapezoidal quadrature rule. The quadrature rule (9) is considered, for example, in [5,12]. We have
$$\left|\int_a^b K_2(x,t)f'(t)\,dt-\frac{1}{b-a}\int_a^b K_2(x,s)\,ds\int_a^b f'(t)\,dt\right|
=\left|\int_a^b\left[K_2(x,t)-\frac{1}{b-a}\int_a^b K_2(x,s)\,ds\right]f'(t)\,dt\right|
\le\frac{(b-a)^2}{4}\,\|f'\|_\infty.\qquad(10)$$
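The analogous numerical check applies to identity (9); the sketch below again assumes the illustrative choices f(t) = e^t, [a, b] = [0, 1], x = 0.3, which are ours and not from the paper.

```python
# A minimal check of identity (9), with the same illustrative choices.
import math
from scipy.integrate import quad

a, b, x = 0.0, 1.0, 0.3
f = fp = math.exp       # f(t) = f'(t) = e^t

def K2(t):
    # Peano kernel (5): t - a on [a, x], t - b on (x, b]
    return t - a if t <= x else t - b

mean_K2 = x - (a + b) / 2               # (1/(b-a)) * integral of K2, by (7)
g = lambda t: (K2(t) - mean_K2) * fp(t)

lhs = quad(g, a, x)[0] + quad(g, x, b)[0]
rhs = (b - a) * f(x) - (x - (a + b) / 2) * (f(b) - f(a)) - quad(f, a, b)[0]

print(lhs, rhs)         # equal up to quadrature accuracy
```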
If we denote
$$R_1(x)=\int_a^b K_1(x,t)f'(t)\,dt,\qquad
R_2(x)=\int_a^b\left[K_2(x,t)-\frac{1}{b-a}\int_a^b K_2(x,s)\,ds\right]f'(t)\,dt,$$
then from (2) and (9) it follows that
$$\int_a^b f(t)\,dt=\frac{b-a}{6}\,[f(a)+4f(x)+f(b)]-R_1(x),$$
$$\int_a^b f(t)\,dt=(b-a)f(x)-\left(x-\frac{a+b}{2}\right)[f(b)-f(a)]-R_2(x).$$
From (4), (10) and the above two relations we see that
$$\int_a^b f(t)\,dt\approx\frac{b-a}{6}\,[f(a)+4f(x)+f(b)],$$
$$\int_a^b f(t)\,dt\approx(b-a)f(x)-\left(x-\frac{a+b}{2}\right)[f(b)-f(a)],$$
if $a$ is sufficiently close to $b$. Thus,
$$\frac{b-a}{6}\,[f(a)+4f(x)+f(b)]\approx(b-a)f(x)-\left(x-\frac{a+b}{2}\right)[f(b)-f(a)],$$
if $a$ is sufficiently close to $b$.
If we suppose that
$$f(b)=0,\qquad(11)$$
then we have
$$\frac{b-a}{6}\,[f(a)+4f(x)]\approx(b-a)f(x)+\left(x-\frac{a+b}{2}\right)f(a).$$
We now consider the equation ($b\to\bar b$)
$$\frac{\bar b-a}{6}\,[f(a)+4f(x)]=(\bar b-a)f(x)+\left(x-\frac{a+\bar b}{2}\right)f(a).\qquad(12)$$
From the above considerations we conclude that $\bar b\approx b$ if $a$ is sufficiently close to $b$. The solution $\bar b$ of Eq. (12) is given by
$$\bar b=a+3(x-a)\,\frac{f(a)}{2f(a)-f(x)}.\qquad(13)$$
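The algebra leading from (12) to (13) is easy to confirm with a computer algebra system. The following sketch (using sympy, with symbol names of our choosing) solves the linear equation (12) for $\bar b$ and checks that the result coincides with (13).

```python
# A sketch confirming that (13) solves (12): solve the linear equation for
# b_bar symbolically and compare with a + 3(x - a) f(a)/(2 f(a) - f(x)).
import sympy as sp

a, x, b_bar, fa, fx = sp.symbols('a x b_bar f_a f_x')

eq12 = sp.Eq((b_bar - a) / 6 * (fa + 4 * fx),
             (b_bar - a) * fx + (x - (a + b_bar) / 2) * fa)

sol = sp.solve(eq12, b_bar)[0]
formula13 = a + 3 * (x - a) * fa / (2 * fa - fx)

print(sp.simplify(sol - formula13))     # prints 0, so (13) indeed solves (12)
```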
We choose $x$ as
$$x=a-\lambda\,\frac{f(a)}{f'(a)},\qquad 0<\lambda<1.\qquad(14)$$
If we set $\bar b=x_{k+1}$, $a=x_k$ and $x=z_k$ then from (13) and (14) we get
$$x_{k+1}=x_k+3(z_k-x_k)\,\frac{f(x_k)}{2f(x_k)-f(z_k)}\qquad(15)$$
and
$$z_k=x_k-\lambda\,\frac{f(x_k)}{f'(x_k)},\qquad 0<\lambda<1.\qquad(16)$$
Relations (15) and (16) give an iterative method for finding a root of the nonlinear equation $f(x)=0$. We now write a corresponding algorithm for this method.

Algorithm 1.
Step 1: Choose $x_0$, $\lambda\in(0,1)$, $\varepsilon>0$ and a positive integer $m$. Set $k=0$.
Step 2: Calculate
$$z_k=x_k-\lambda\,\frac{f(x_k)}{f'(x_k)},\qquad x_{k+1}=x_k+3(z_k-x_k)\,\frac{f(x_k)}{2f(x_k)-f(z_k)}.$$
Step 3: If $|x_{k+1}-x_k|<\varepsilon$ or $k>m$ then stop.
Step 4: Set $k\to k+1$ and go to Step 2.

The main aim of this paper is to prove that the sequence $(x_k)$ given by (15)–(16), i.e., obtained by Algorithm 1, converges to $b$, the solution of the equation $f(x)=0$. We also use the well-known algorithm for the Newton method.

Algorithm 2.
Step 1: Choose $x_0\in\mathbb{R}$, $\varepsilon>0$ and a positive integer $n$. Set $k=0$.
Step 2: Calculate
$$x_{k+1}=x_k-\frac{f(x_k)}{f'(x_k)}.$$
Step 3: If $|x_{k+1}-x_k|<\varepsilon$ or $k>n$ then stop.
Step 4: Set $k\to k+1$ and go to Step 2.
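For concreteness, a straightforward Python sketch of Algorithms 1 and 2 is given below. The function names, the tuple return value and the guard against an exact zero of f are our own; the iterations themselves follow (15)–(16) and the Newton step. The trailing usage example employs f(x) = exp(1 − x) − 1 with x0 = 3, the data of Example 7 in Section 4.

```python
# A minimal sketch of Algorithm 1 (method (15)-(16)) and Algorithm 2 (Newton).

def algorithm1(f, fprime, x0, lam=0.5, eps=1e-10, max_iter=10_000):
    """Method (15)-(16): z_k = x_k - lam*f(x_k)/f'(x_k),
    x_{k+1} = x_k + 3*(z_k - x_k)*f(x_k)/(2*f(x_k) - f(z_k))."""
    xk = x0
    for k in range(max_iter):
        fx = f(xk)
        if fx == 0.0:                 # landed exactly on a root
            return xk, k
        zk = xk - lam * fx / fprime(xk)                      # (16)
        xnext = xk + 3 * (zk - xk) * fx / (2 * fx - f(zk))   # (15)
        if abs(xnext - xk) < eps:
            return xnext, k + 1
        xk = xnext
    return xk, max_iter

def algorithm2(f, fprime, x0, eps=1e-10, max_iter=10_000):
    """Classical Newton iteration x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    xk = x0
    for k in range(max_iter):
        fx = f(xk)
        if fx == 0.0:
            return xk, k
        xnext = xk - fx / fprime(xk)
        if abs(xnext - xk) < eps:
            return xnext, k + 1
        xk = xnext
    return xk, max_iter

if __name__ == "__main__":
    import math
    # f and x0 as in Example 7 of Section 4: f(x) = exp(1 - x) - 1, x0 = 3
    f = lambda x: math.exp(1 - x) - 1
    fp = lambda x: -math.exp(1 - x)
    print(algorithm1(f, fp, 3.0))   # converges to the root b = 1
    print(algorithm2(f, fp, 3.0))
```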
3. Convergence results

Here we suppose that $f\in C^2(c,d)$, $f(b)=0$, $b\in(c,d)$, $f'(x)\ne 0$, $f''(x)\ne 0$, $x\in(c,d)$. We define the function
$$\varphi(a)=a+3(x-a)\,\frac{f(a)}{2f(a)-f(x)},\qquad(17)$$
where
$$x=a-\lambda\,\frac{f(a)}{f'(a)},\qquad 0<\lambda<1.\qquad(18)$$
(Compare (17) and (18) with (13) and (14).) From the Newton method we know that
$$\lim_{a\to b}x=\lim_{a\to b}\left(a-\lambda\,\frac{f(a)}{f'(a)}\right)=b.\qquad(19)$$
We now consider the derivative $\varphi'(a)$ in a neighborhood of the point $b$. We have
$$\varphi'(a)=1+3(x_a-1)\,\frac{f(a)}{2f(a)-f(x)}+3(x-a)\left[\frac{f'(a)}{2f(a)-f(x)}-\frac{f(a)\,(2f'(a)-f'(x)x_a)}{(2f(a)-f(x))^2}\right],\qquad(20)$$
where $x_a$ denotes the derivative of $x$ with respect to $a$.
We also have
$$\lim_{a\to b}x_a=\lim_{a\to b}\left[1-\lambda\,\frac{f'(a)^2-f(a)f''(a)}{f'(a)^2}\right]
=1-\lambda\left[1-\lim_{a\to b}\frac{f(a)f''(a)}{f'(a)^2}\right]=1-\lambda.\qquad(21)$$
Using the L'Hospital rule we get
$$\lim_{a\to b}\frac{f(x)}{f(a)}=\lim_{a\to b}x_a\,\frac{f'(x)}{f'(a)}=1-\lambda\qquad(22)$$
and
$$\lim_{a\to b}\frac{f(a)}{2f(a)-f(x)}=\frac{1}{2-\lim_{a\to b}\big(f(x)/f(a)\big)}=\frac{1}{1+\lambda}.\qquad(23)$$
From (18)–(23) it follows that
$$\lim_{a\to b}\varphi'(a)=1-3\lambda\lim_{a\to b}\frac{f(a)}{2f(a)-f(x)}-3\lambda\lim_{a\to b}\frac{f(a)}{f'(a)}\,\frac{f'(a)}{2f(a)-f(x)}+3\lambda\lim_{a\to b}\frac{f(a)}{f'(a)}\,\frac{f(a)\,(2f'(a)-f'(x)x_a)}{(2f(a)-f(x))^2}$$
$$=1-\frac{3\lambda}{1+\lambda}-\frac{3\lambda}{1+\lambda}+3\lambda\lim_{a\to b}\frac{2-(f'(x)/f'(a))\,x_a}{[2-f(x)/f(a)]^2}
=1-\frac{6\lambda}{1+\lambda}+\frac{3\lambda}{1+\lambda},\qquad(24)$$
i.e.,
$$\lim_{a\to b}\varphi'(a)=1-\frac{3\lambda}{1+\lambda}.\qquad(25)$$
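The limit (25) can also be observed numerically: a central-difference estimate of $\varphi'(a)$ should approach $1-3\lambda/(1+\lambda)$ as $a\to b$. The sketch below assumes the illustrative function f(x) = exp(x) − 1 (with root b = 0) and λ = 0.3; neither choice is taken from the paper.

```python
# A quick numerical look at (25): a central difference approximates phi'(a)
# for a close to the root b.  Illustrative choices: f(x) = exp(x) - 1, lam = 0.3.
import math

lam = 0.3
f = lambda x: math.exp(x) - 1
fp = lambda x: math.exp(x)

def phi(a):
    # phi(a) from (17)-(18)
    x = a - lam * f(a) / fp(a)
    return a + 3 * (x - a) * f(a) / (2 * f(a) - f(x))

h = 1e-6
for a in (0.1, 0.01, 0.001):
    dphi = (phi(a + h) - phi(a - h)) / (2 * h)
    print(a, dphi)                      # approaches 1 - 3*lam/(1 + lam)

print(1 - 3 * lam / (1 + lam))          # the limit predicted by (25), about 0.3077
```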
It is not very difficult to verify that
$$\left|1-\frac{3\lambda}{1+\lambda}\right|<1,$$
if $0<\lambda\le 1$. From the above considerations we conclude that there exists a neighborhood $U(b)$ of $b$ such that
$$|\varphi'(a)|<\alpha<1\quad\text{for each }a\in U(b).\qquad(26)$$
Then from
$$\varphi(x)-\varphi(y)=\varphi'(\xi)(x-y),\qquad \xi\in(x,y),$$
and $x,y\in U(b)$, we find that
$$|\varphi(x)-\varphi(y)|<\alpha|x-y|\quad\text{for }0<\alpha<1,\ x,y\in U(b).\qquad(27)$$
Hence, $\varphi$ is a contraction mapping. Thus, there exists a unique fixed point $\hat b$,
$$\varphi(\hat b)=\hat b.\qquad(28)$$
We now have
$$\varphi(\hat b)=\hat b-3\lambda\,\frac{f(\hat b)}{f'(\hat b)}\,\frac{f(\hat b)}{2f(\hat b)-f(\hat x)}=\hat b,$$
where
$$\hat x=\hat b-\lambda\,\frac{f(\hat b)}{f'(\hat b)},$$
such that $f(\hat b)=0$. Now it is not difficult to see that
$$\hat b=b.\qquad(29)$$
We have proved the following result.

Theorem 3. Let $f\in C^2(c,d)$, $f(b)=0$, $b\in(c,d)$, $f'(x)\ne 0$, $f''(x)\ne 0$, $x\in(c,d)$, and let $a_0\in U(b)$, where $U(b)$ is defined above. Then the sequence $x_{k+1}=\varphi(x_k)$, where $x_0=a_0$ and $\varphi$ is defined by (17), converges to $b$.

We now consider the problem of choosing the parameter $\lambda$. We suppose that $f(x)<0$ for $x<b$, and that $f'(x)>0$ and $f''(x)<0$ for $x\in U(b)$. First, we require that $\bar b\le b$, i.e.,
$$a+3(x-a)\,\frac{f(a)}{2f(a)-f(x)}\le b$$
or
$$-3\lambda\,\frac{f(a)}{f'(a)}\,\frac{f(a)}{2f(a)-f(x)}\le b-a.\qquad(30)$$
Inequality (30) is equivalent to the inequality
$$-3\lambda\,\frac{1}{b-a}\,\frac{f(a)}{f'(a)}\,\frac{1}{2-f(x)/f(a)}\le 1.$$
Using the L'Hospital rule, we have
$$\lim_{a\to b}\frac{f(a)}{b-a}=\lim_{a\to b}\frac{f'(a)}{-1}=-f'(b)$$
and
$$\lim_{a\to b}\frac{f(x)}{f(a)}=1-\lambda.$$
From the above two relations it follows that
$$-3\lambda\lim_{a\to b}\frac{1}{b-a}\,\frac{f(a)}{f'(a)}\,\frac{1}{2-f(x)/f(a)}=\frac{3\lambda}{1+\lambda}.$$
We have
$$\frac{3\lambda}{1+\lambda}\le 1\quad\text{iff}\quad \lambda\le\tfrac12.\qquad(31)$$
Second, we require that
$$b-\left[a+3(x-a)\,\frac{f(a)}{2f(a)-f(x)}\right]\le b-\left[a-\frac{f(a)}{f'(a)}\right],\qquad(32)$$
i.e., that $\bar b$ is a better approximation of $b$ than the element $a-f(a)/f'(a)$ of the Newton sequence. Inequality (32) is equivalent to the inequality
$$3\lambda\,\frac{f(a)}{2f(a)-f(x)}\ge 1.$$
Using the previous results (passing to the limit) we get
$$\frac{3\lambda}{1+\lambda}\ge 1\quad\text{or}\quad \lambda\ge\tfrac12.\qquad(33)$$
From (31) and (33) we see that it is appropriate to choose $\lambda=\tfrac12$. Then we have
$$a-\frac{f(a)}{f'(a)}\le\bar b\le b.$$
In fact, in most cases we have
$$a-\frac{f(a)}{f'(a)}<\bar b\le b.$$
Thus, method (15)–(16) is better than the Newton method. In a similar way we can consider the other possible cases, such as $f(x)>0$ for $x<b$ with $f'(x)<0$, $f''(x)<0$ for $x\in U(b)$, etc.

Remark 4. To perform the Newton method we have to calculate $f(x_k)$ and $f'(x_k)$ at each step, while to perform method (15)–(16) we have to calculate $f(x_k)$, $f(z_k)$ and $f'(x_k)$ at each step. If $n$ is the number of iterations required by the Newton method and $m$ is the number of iterations required by method (15)–(16), then (in principle) we shall use the latter method if $3m<2n$, i.e., $m<\tfrac23 n$. A few examples with this property are given in Section 4.
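The evaluation count of Remark 4 can be illustrated with a short self-contained script that wraps f and f' in call counters and applies the 3m < 2n criterion. The test function and starting point are those of Example 7 in Section 4; the counting wrapper and all names are our own.

```python
# A sketch of the cost criterion 3m < 2n from Remark 4: Newton uses f(x_k),
# f'(x_k) per step; method (15)-(16) uses f(x_k), f(z_k), f'(x_k) per step.
import math

def counted(fun):
    def wrapper(t):
        wrapper.calls += 1
        return fun(t)
    wrapper.calls = 0
    return wrapper

def run(new_method, x0, lam=0.5, eps=1e-10, max_iter=10_000):
    f = counted(lambda t: math.exp(1 - t) - 1)     # Example 7 data
    fp = counted(lambda t: -math.exp(1 - t))
    xk = x0
    for k in range(max_iter):
        fx, dfx = f(xk), fp(xk)
        if fx == 0.0:
            return k, f.calls + fp.calls
        if new_method:                              # one step of (15)-(16)
            zk = xk - lam * fx / dfx
            xnext = xk + 3 * (zk - xk) * fx / (2 * fx - f(zk))
        else:                                       # one Newton step
            xnext = xk - fx / dfx
        if abs(xnext - xk) < eps:
            return k + 1, f.calls + fp.calls
        xk = xnext
    return max_iter, f.calls + fp.calls

m, cost_new = run(True, 3.0)
n, cost_newton = run(False, 3.0)
print(m, cost_new, n, cost_newton, 3 * m < 2 * n)   # prefer (15)-(16) when 3m < 2n
```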
4. Numerical examples

In this section we give a few numerical examples and compare method (15)–(16) with the Newton method. We use Algorithms 1 and 2 with $\varepsilon=1.0\mathrm{E}{-}10$, $\lambda=\tfrac12$ and $m=n=10\,000$ for finding a solution of the equation $f(x)=0$.

Example 5. Let $f(x)=\sqrt[3]{x}$. A simple calculation gives
$$x_{k+1}=\frac{2-5\sqrt[3]{2}}{4\sqrt[3]{2}+2}\,x_k,$$
if we apply method (15)–(16). Since
$$\left|\frac{2-5\sqrt[3]{2}}{4\sqrt[3]{2}+2}\right|<1,$$
it is obvious that the above sequence converges to $0$, the exact solution of the equation $f(x)=0$. On the other hand, the Newton method gives the sequence $x_{k+1}=-2x_k$. It is also clear that this sequence diverges for each initial approximation $x_0\ne 0$. The above example shows that method (15)–(16) can give the solution when the Newton method fails to do so.

Example 6. Let $f(x)=1/x-1$ and $x_0=2.7$. Algorithm 1 gives the solution $b=1$ after $m=6$ iterations, while the Newton method stops after $n=10\,001$ iterations and does not give the solution. (If we choose, for example, $x_0=1.7$, then both methods give the solution.)

Example 7. Let $f(x)=\exp(1-x)-1$ and $x_0=3$. Algorithm 1 gives the solution $b=1$ after $m=5$ iterations and the Newton method gives the solution after $n=11$ iterations.

Example 8. Let $f(x)=x\exp(-x)$ and $x_0=0.92$. Algorithm 1 gives the solution $b=0$ after $m=10$ iterations and the Newton method gives the solution after $n=20$ iterations.

Example 9. Let $f(x)=\exp(x^2+7x-30)-1$ and $x_0=2.8$. Algorithm 1 gives the solution $b=3$ after $m=7$ iterations and the Newton method gives the solution after $n=17$ iterations.

Example 10. Let $f(x)=1/x-\sin x+1$ and $x_0=-1.3$. Algorithm 1 gives the solution $b=-0.629446$ after $m=7$ iterations and the Newton method gives the solution $b=-10.5568$ after $n=75$ iterations.

Examples 5, 7 and 8 are taken from [4] and Example 9 is taken from [14].

5. Concluding remarks

Of course, we can derive many quadrature rules (using different Peano kernels) and use the procedure presented in this paper to obtain methods for solving nonlinear equations. It is also clear that it makes sense to consider only those rules (and their combinations) which give better results than some existing methods (for example, the Newton method). Many analytical and numerical experiments have shown that only a few of these combinations satisfy the above requirement. For example, one of the successful combinations is given in [13].

References

[1] S. Abbasbandy, Improving Newton–Raphson method for nonlinear equations by modified Adomian decomposition method, Appl. Math. Comput. 145 (2003) 887–893.
[2] S. Amat, S. Busquier, J.M. Gutiérrez, Geometric constructions of iterative functions to solve nonlinear equations, J. Comput. Appl. Math. 157 (2003) 197–205.
[3] E. Babolian, J. Biazar, Solution of nonlinear equations by Adomian decomposition method, Appl. Math. Comput. 132 (2002) 167–172.
[4] A. Ben-Israel, Newton's method with modified functions, Contemp. Math. 204 (1997) 39–50.
[5] X.L. Cheng, Improvement of some Ostrowski–Grüss type inequalities, Comput. Math. Appl. 42 (2001) 109–114.
[6] F. Costabile, M.I. Gualtieri, S.S. Capizzano, An iterative method for the solutions of nonlinear equations, Calcolo 30 (1999) 17–34.
[7] S.S. Dragomir, P. Cerone, J. Roumeliotis, A new generalization of Ostrowski's integral inequality for mappings whose derivatives are bounded and applications in numerical integration and for special means, Appl. Math. Lett. 13 (2000) 19–25.
[8] M. Frontini, Hermite interpolation and a new iterative method for the computation of the roots of non-linear equations, Calcolo 40 (2003) 109–119.
[9] M. Frontini, E. Sormani, Third-order methods from quadrature formulae for solving systems of nonlinear equations, Appl. Math. Comput. 149 (2004) 771–782.
[10] Mamta, V. Kanwar, V.K. Kukreja, S. Singh, On some third-order methods for solving nonlinear equations, Appl. Math. Comput., to appear.
[11] C.E.M. Pearce, J. Pečarić, N. Ujević, S. Varošanec, Generalizations of some inequalities of Ostrowski–Grüss type, Math. Inequal. Appl. 3 (1) (2000) 25–34.
[12] N. Ujević, New bounds for the first inequality of Ostrowski–Grüss type and applications, Comput. Math. Appl. 46 (2003) 421–427.
[13] N. Ujević, A method for solving nonlinear equations, submitted for publication.
[14] S. Weerakoon, T.G. Fernando, A variant of Newton's method with accelerated third-order convergence, Appl. Math. Lett. 13 (2000) 87–93.
[15] T. Yamamoto, Historical development in convergence analysis for Newton's and Newton-like methods, J. Comput. Appl. Math. 124 (2000) 1–23.