Applied Mathematics and Computation 182 (2006) 1149–1153 www.elsevier.com/locate/amc
Fourth-order convergent iterative method for nonlinear equation

Muhammad Aslam Noor *, Faizan Ahmad

Mathematics Department, COMSATS Institute of Information Technology, Sector H-8/1, Islamabad, Pakistan
Abstract

In this paper, we suggest and analyze a new iterative method for finding an approximate solution of the nonlinear equation f(x) = 0. It is shown that the proposed method has fourth-order convergence. Several numerical examples are given to illustrate that the method developed in this paper gives better results than other methods, including Newton's method.

© 2006 Elsevier Inc. All rights reserved.

Keywords: Nonlinear equation; Predictor–corrector method; Numerical example; Newton method
1. Introduction

In recent years, several iterative methods have been developed for finding numerical solutions of the nonlinear equation f(x) = 0; see [1–8] and the references therein. Inspired and motivated by the research going on in this area, we suggest a new iterative method which has quadratic convergence. Combining this new method with Newton's method, we obtain a predictor–corrector method for solving the nonlinear equation f(x) = 0. Several numerical examples are given to illustrate the efficiency of these new methods. We show that the new method can be applied in some cases where Newton's method fails to give the desired result.

2. Iterative methods

Consider the nonlinear equation

\[ f(x) = 0. \tag{2.1} \]
Let r be the exact root and x_0 a known initial guess for the required root. Assume that

\[ x_1 = x_0 + h, \qquad h \ll 1, \tag{2.2} \]

is the first approximation to the root.
* Corresponding author.
E-mail addresses: [email protected], [email protected] (M.A. Noor), [email protected] (F. Ahmad).

0096-3003/$ - see front matter © 2006 Elsevier Inc. All rights reserved.
doi:10.1016/j.amc.2006.04.068
Consider the following auxiliary equation with a parameter p:

\[ g(x) = p^3 (x - x_0)^2 f^2(x) - f(x) = 0, \tag{2.3} \]

where p \in \mathbb{R} and |p| < \infty. It is clear that the root of (2.1) is also a root of (2.3) and vice versa. If x_1 = x_0 + h is the better approximation to the required root, then (2.3) gives

\[ p^3 h^2 f^2(x_0 + h) - f(x_0 + h) = 0. \tag{2.4} \]

Expanding by Taylor's theorem and simplifying, we get

\[ h = -\frac{2f(x_0)}{f'(x_0) \pm \sqrt{f'^2(x_0) + 4p^3 f^3(x_0)}}, \tag{2.5} \]
in which the sign should be chosen so as to make the denominator largest in magnitude. Combining (2.2) and (2.5), we suggest the following iterative method for solving the nonlinear equation f(x) = 0.

Algorithm 2.1. Here p is chosen so that f(x_n) and p have the same sign. For a given x_0, calculate x_1, x_2, ... by the iterative scheme

\[ x_{k+1} = x_k - \frac{2f(x_k)}{f'(x_k) \pm \sqrt{f'^2(x_k) + 4p^3 f^3(x_k)}}, \]
where the sign is chosen so as to make the denominator largest in magnitude. For p = 0, Algorithm 2.1 reduces to the well-known Newton method. We now suggest a predictor–corrector iterative method by combining Algorithm 2.1 and the well-known Newton method.

Algorithm 2.2. Here p is chosen so that f(x_n) and p have the same sign. For a given x_0, calculate x_1, x_2, ... such that

\[ z_k = x_k - \frac{2f(x_k)}{f'(x_k) \pm \sqrt{f'^2(x_k) + 4p^3 f^3(x_k)}}, \]
\[ x_{k+1} = z_k - \frac{f(z_k)}{f'(z_k)}, \]
where the sign should be chosen so as to make the denominator largest in magnitude.

3. Convergence analysis

In this section, we consider the convergence analysis of the iterative technique given in Algorithm 2.2.

Theorem 3.1. Let r \in I be a simple zero of a sufficiently differentiable function f : I \subset \mathbb{R} \to \mathbb{R} on an open interval I. If x_0 is sufficiently close to r, then the iterative method defined by Algorithm 2.2 has fourth-order convergence.

Proof. The technique is given by

\[ y_n = x_n - \frac{2f(x_n)}{f'(x_n) \pm \sqrt{f'^2(x_n) + 4p^3 f^3(x_n)}}, \tag{3.1} \]
\[ x_{n+1} = y_n - \frac{f(y_n)}{f'(y_n)}. \tag{3.2} \]
From (3.1), we get

\[ y_n = x_n - \frac{f(x_n)}{f'(x_n)\left[1 + p^3 \dfrac{f^3(x_n)}{f'^2(x_n)}\right]}. \tag{3.3} \]
Let r be a simple zero of f. Since f is sufficiently differentiable, expanding f(x_n) and f'(x_n) about r, we get

\[ f(x_n) = f'(r)\left[e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4 + \cdots\right], \tag{3.4} \]
\[ f'(x_n) = f'(r)\left[1 + 2c_2 e_n + 3c_3 e_n^2 + 4c_4 e_n^3 + 5c_5 e_n^4 + \cdots\right], \tag{3.5} \]

where c_k = \frac{1}{k!}\frac{f^{(k)}(r)}{f'(r)}, k = 1, 2, 3, \ldots, and e_n = x_n - r. From (3.4) and (3.5), we have

\[ \frac{f(x_n)}{f'(x_n)} = e_n - c_2 e_n^2 + 2(c_2^2 - c_3) e_n^3 + (7c_2 c_3 - 4c_2^3 - 3c_4) e_n^4 + \cdots \tag{3.6} \]

and

\[ \frac{f^3(x_n)}{f'^2(x_n)} = f'(r)\left[e_n^3 - c_2 e_n^4 + \cdots\right]. \tag{3.7} \]
From (3.3), (3.6) and (3.7), we get

\[ y_n = r + c_2 e_n^2 + 2(c_3 - c_2^2) e_n^3 + \left(4c_2^3 + 3c_4 - 7c_2 c_3 + p^3 f'(r)\right) e_n^4 + \cdots \tag{3.8} \]
Expanding f(y_n) and f'(y_n) about r and using (3.8), we get

\[ f(y_n) = f(r) + (y_n - r) f'(r) + \frac{(y_n - r)^2}{2!} f''(r) + \cdots = f'(r)\left[c_2 e_n^2 + 2(c_3 - c_2^2) e_n^3 + \left(5c_2^3 + 3c_4 - 7c_2 c_3 + p^3 f'(r)\right) e_n^4 + \cdots\right], \tag{3.9} \]
\[ f'(y_n) = f'(r) + (y_n - r) f''(r) + \frac{(y_n - r)^2}{2!} f'''(r) + \cdots = f'(r)\left[1 + 2c_2^2 e_n^2 + 4c_2(c_3 - c_2^2) e_n^3 + \left(8c_2^4 + 6c_2 c_4 - 11c_2^2 c_3 + 2p^3 f'(r) c_2\right) e_n^4 + \cdots\right]. \tag{3.10} \]
From (3.8), (3.9), (3.10) and e_{n+1} = x_{n+1} - r, we get

\[ e_{n+1} = c_2^3 e_n^4 + \cdots, \tag{3.11} \]

which shows that the method has fourth-order convergence. □
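Algorithm 2.2 translates directly into code. The following is a minimal Python sketch (not the authors' program): it fixes |p| = 1, since the paper prescribes only the sign of p, picks the sign of the square root that maximises the magnitude of the denominator, and falls back to a plain Newton step in the case that the radicand is negative.

```python
import math

def noor_ahmad_step(f, df, x, p):
    """One step of Algorithm 2.2: the predictor of Algorithm 2.1,
    followed by a Newton corrector step."""
    fx, dfx = f(x), df(x)
    rad = dfx * dfx + 4 * p**3 * fx**3
    if rad < 0:                           # guard: fall back to plain Newton
        return x - fx / dfx
    s = math.sqrt(rad)
    # choose the sign making the denominator largest in magnitude
    denom = dfx + s if abs(dfx + s) >= abs(dfx - s) else dfx - s
    z = x - 2 * fx / denom                # predictor
    return z - f(z) / df(z)               # corrector (Newton step at z)

def solve(f, df, x0, eps=1e-14, maxit=50):
    """Iterate Algorithm 2.2 until |x_{k+1} - x_k| < eps."""
    x = x0
    for _ in range(maxit):
        # p takes the sign of f(x_k); the magnitude |p| = 1 is our choice
        p = 1.0 if f(x) >= 0 else -1.0
        x_new = noor_ahmad_step(f, df, x, p)
        if abs(x_new - x) < eps:
            return x_new
        x = x_new
    return x

# f5(x) = x^3 - 10 with x0 = 1.5, as in Table 4.1
root = solve(lambda x: x**3 - 10, lambda x: 3 * x**2, 1.5)
print(root)   # close to 10**(1/3) = 2.15443469...
```

Note that with the sign rule for p, the radicand f'^2(x_k) + 4p^3 f^3(x_k) stays nonnegative, so the Newton fallback is rarely needed.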
4. Numerical examples

We present some examples to illustrate the efficiency of the predictor–corrector iterative method developed in this paper. We compare the Newton method (NM), the method of Abbasbandy [1] (AM), the method of Chun [3] (CM) and NA (Algorithm 2.2), the method introduced in the present paper. We used \epsilon = 10^{-15}, and the following stopping criteria for the computer programs:

(i) |x_{n+1} - x_n| < \epsilon,
(ii) |f(x_{n+1})| < \epsilon.

The examples are the same as in Chun [3]:

f_1(x) = \sin^2 x - x^2 + 1,
f_2(x) = x^2 - e^x - 3x + 2,
f_3(x) = \cos x - x,
f_4(x) = (x - 1)^3 - 1,
f_5(x) = x^3 - 10,
f_6(x) = x e^{x^2} - \sin^2 x + 3\cos x + 5,
f_7(x) = e^{x^2 + 7x - 30} - 1.
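The test problems are easy to reproduce. Below is a small Python harness (a sketch, not the authors' program) that encodes three of the seven problems with hand-computed derivatives and runs plain Newton iteration (NM) under stopping criteria (i) and (ii), counting two function evaluations (f and f') per iteration as in Table 4.1.

```python
import math

# Three of the seven test problems: (name, f, f', x0 from Table 4.1)
problems = [
    ("f1", lambda x: math.sin(x)**2 - x**2 + 1,
           lambda x: math.sin(2 * x) - 2 * x, 1.0),
    ("f3", lambda x: math.cos(x) - x,
           lambda x: -math.sin(x) - 1, 1.7),
    ("f5", lambda x: x**3 - 10,
           lambda x: 3 * x**2, 1.5),
]

def newton(f, df, x0, eps=1e-15, maxit=100):
    """Newton's method with the paper's two stopping criteria.

    Returns (x_n, IT, NOFE), counting two evaluations per iteration."""
    x, it = x0, 0
    for _ in range(maxit):
        fx = f(x)
        if abs(fx) < eps:                 # criterion (ii)
            break
        x_new = x - fx / df(x)
        it += 1
        if abs(x_new - x) < eps:          # criterion (i)
            x = x_new
            break
        x = x_new
    return x, it, 2 * it

results = {name: newton(f, df, x0) for name, f, df, x0 in problems}
for name, (xn, it, nofe) in results.items():
    print(f"{name}: xn = {xn:.15f}, IT = {it}, NOFE = {nofe}")
```

The iteration counts this sketch reports may differ slightly from Table 4.1 depending on where the criteria are checked inside the loop; the computed roots, however, agree with the tabulated values of x_n.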
For the convergence criterion, it was required that the distance d between two consecutive approximations to the zero be less than 10^{-15}. Table 4.1 displays the number of iterations required to approximate the zero (IT), the approximate zero x_n, the value f(x_n), the distance d, and the number of function evaluations required for the desired accuracy (NOFE). From Table 4.1 we can see that the new scheme performs better than the methods taken for comparison: although the methods of Chun and Abbasbandy both have fourth-order convergence, Algorithm 2.2 outperforms them. For every example, Algorithm 2.2 requires fewer iterations than any other method, and in most cases its NOFE is also the smallest.
Table 4.1
Examples and comparison

             IT    x_n                                f(x_n)      d           NOFE

f1, x0 = 1
NM            7    1.4044916482153412260350868178     1.04e-50    7.33e-26    14
AM            5    1.4044916482153412260350868178     5.81e-55    1.39e-18    15
CM            5    1.4044916482153412260350868178     2.0e-63     1.31e-17    20
NA            3    1.4044916482153412260350868178     2.0e-63     1.0e-34     12

f2, x0 = 2
NM            6    0.25753028543986076045536730494    2.93e-55    9.1e-28     12
AM            5    0.25753028543986076045536730494    1.0e-63     1.45e-26    15
CM            4    0.25753028543986076045536730494    1.0e-63     9.46e-29    16
NA            3    0.2575302854398607604553673049     1.0e-63     7.6e-21     12

f3, x0 = 1.7
NM            5    0.73908513321516064165537208767    2.03e-32    2.34e-16    10
AM            4    0.73908513321516064165537208767    7.14e-47    8.6e-16     15
CM            4    0.73908513321516064165537208767    5.02e-59    9.64e-20    16
NA            3    0.7390851332151606416553120877     0           1.22e-35    12

f4, x0 = 3.5
NM            8    2                                  2.06e-42    8.28e-22    16
AM            5    2                                  0           4.3e-22     15
CM            5    2                                  0           1.46e-24    16
NA            4    2                                  0           1.60e-20    16

f5, x0 = 1.5
NM            7    2.1544346900318837217592935665     2.06e-54    5.64e-28    14
AM            5    2.1544346900318837217592935665     5.0e-63     1.18e-25    15
CM            4    2.1544346900318837217592935665     5.0e-63     9.8e-23     16
NA            2    2.1544346900318837217592935665     5.0e-63     2.19e-24    8

f6, x0 = 2
NM            9    -1.2076478271309189270094167584    2.27e-40    2.73e-21    18
AM            6    -1.2076478271309189270094167584    4.0e-63     4.35e-45    18
CM            6    -1.2076478271309189270094167584    4.0e-63     2.57e-32    24
NA            4    -1.2076478271309189270094167584    4.0e-63     4.25e-37    16

f7, x0 = 3.5
NM           13    3                                  1.52e-47    4.2e-25     26
AM            7    3                                  4.33e-48    2.25e-17    21
CM            8    3                                  2.0e-62     2.43e-33    32
NA            6    3                                  0           3.34e-42    24
Acknowledgement

This research is supported by the Higher Education Commission, Pakistan, through Research Grant No. i-28/HEC/HRD/2005/90.

References

[1] S. Abbasbandy, Improving Newton–Raphson method for nonlinear equations by modified Adomian decomposition method, Appl. Math. Comput. 145 (2003) 887–893.
[2] R.L. Burden, J.D. Faires, Numerical Analysis, PWS Publishing Company, Boston, 2002.
[3] C. Chun, Iterative methods improving Newton's method by the decomposition method, Comput. Math. Appl. 50 (2005) 1559–1568.
[4] Mamta, V. Kanwar, V.K. Kukreja, S. Singh, On a class of quadratically convergent iteration formulae, Appl. Math. Comput. 166 (3) (2005) 633–637.
[5] M. Aslam Noor, Numerical Analysis and Optimization, Lecture Notes, COMSATS Institute of Information Technology, Islamabad, Pakistan, 2006.
[6] M. Aslam Noor, F. Ahmad, Numerical comparison of iterative methods for solving nonlinear equations, Appl. Math. Comput., in press, doi:10.1016/j.amc.2005.11.151.
[7] M. Aslam Noor, F. Ahmad, S. Javeed, Two-step iterative methods for nonlinear equations, Appl. Math. Comput., in press, doi:10.1016/j.amc.2006.01.065.
[8] X. Wu, H.W. Wu, On a class of quadratic convergence iteration formulae without derivatives, Appl. Math. Comput. 107 (2000) 77–80.