Applied Mathematics and Computation 251 (2015) 378–386
Some numerical methods for solving nonlinear equations by using decomposition technique

Farooq Ahmed Shah (Department of Mathematics, COMSATS Institute of Information Technology, Attock, Pakistan; corresponding author) and Muhammad Aslam Noor (Department of Mathematics, COMSATS Institute of Information Technology, Islamabad, Pakistan)
Keywords: Decomposition technique; Iterative method; Convergence; Newton method; Auxiliary function; Coupled system of equations

Abstract. In this paper, we use a system of coupled equations involving an auxiliary function, together with a decomposition technique, to suggest and analyze some new classes of iterative methods for solving nonlinear equations. These new methods include the Halley method and its variant forms as special cases. Various numerical examples are given to illustrate the efficiency and performance of the new methods. These new iterative methods may be viewed as an addition to and a generalization of the existing methods for solving nonlinear equations. © 2014 Elsevier Inc. All rights reserved.
1. Introduction

It is well known that a wide class of problems arising in diverse disciplines of mathematical and engineering science can be studied by means of a nonlinear equation of the form $f(x) = 0$. Numerical methods for finding approximate solutions of such nonlinear equations are developed by using several different techniques, including Taylor series, quadrature formulas, homotopy and decomposition techniques; see [1-17] and the references therein.

In this paper, we use an alternative decomposition technique to suggest the main iterative schemes, which generate iterative methods of higher order. First of all, we rewrite the given nonlinear equation, together with an auxiliary function, as an equivalent coupled system of equations by using the Taylor series. This approach enables us to express the given nonlinear equation as a sum of linear and nonlinear equations. This way of writing the given equation is known as the decomposition, and it plays the central role in suggesting iterative methods for solving the nonlinear equation $f(x) = 0$. In this work, we use the system of coupled equations to express the given nonlinear equation as a sum of linear and nonlinear operators involving an auxiliary function $g(x)$. This auxiliary function helps to deduce several iterative methods for solving nonlinear equations; its effectiveness and efficiency in deriving robust iterative methods can be observed in the next section. The results obtained in this paper suggest that this new decomposition technique is a promising tool. In Section 2, we sketch out the main ideas of this technique and suggest some multi-step iterative methods for solving nonlinear equations.

One can notice that if the derivative of the function vanishes during the iterative process, that is, if $f'(x_n) = 0$, then the sequence generated by the Newton method or by the methods derived in [1-17] is not defined, since division by zero leads to a mathematical breakdown. This is another motivation of the paper: the higher-order methods derived here remain well defined even if the derivative vanishes during the iterative process.

Corresponding author: F.A. Shah. E-mail addresses: [email protected] (F.A. Shah), [email protected] (M.A. Noor).
We also show that the new methods include the Newton method and the Halley method and their variant forms as special cases. Several numerical examples are given to illustrate the efficiency and the performance of the new iterative methods. Our results can be considered as an important improvement and refinement of previously known results.

2. Iterative methods

In this section, we suggest some new iterative methods for solving nonlinear equations by using a decomposition technique involving an auxiliary function. This auxiliary function diversifies the main recurrence relations so that the methods can be implemented in the form best suited for obtaining the approximate solution of a nonlinear equation. The technique produces main iterative schemes that provide higher-order convergent iterative methods. Consider the nonlinear equation of the type
$$f(x) = 0. \qquad (1)$$
Assume that $p$ is a simple root of the nonlinear equation (1) and that $c$ is an initial guess sufficiently close to $p$. Let $g(x)$ be an auxiliary function such that
$$f(x)\,g(x) = 0. \qquad (2)$$
We can rewrite the nonlinear equation (2) as a system of coupled equations by using the Taylor series technique as
$$f(c)g(c) + \big[f'(c)g(c) + f(c)g'(c)\big](x - c) + h(x) = 0. \qquad (3)$$
Equation (3) can be written in the following form:
$$h(x) = f(x)g(c) - f(c)g(c) - \big[f'(c)g(c) + f(c)g'(c)\big](x - c), \qquad (4)$$
where $c$ is the initial approximation for a zero of (1). We can rewrite equation (4) in the following form:
$$x = c - \frac{f(c)g(c)}{f'(c)g(c) + f(c)g'(c)} - \frac{h(x)}{f'(c)g(c) + f(c)g'(c)}. \qquad (5)$$
We express (5) in the following form:
$$x = \bar{c} + N(x), \qquad (6)$$
where
$$\bar{c} = c - \frac{f(c)g(c)}{f'(c)g(c) + f(c)g'(c)}, \qquad (7)$$
and
$$N(x) = -\,\frac{h(x)}{f'(c)g(c) + f(c)g'(c)}. \qquad (8)$$
Here $N(x)$ is a nonlinear function. We now construct a sequence of higher-order iterative methods by using the following decomposition technique, which is mainly due to Daftardar-Gejji and Jafari [5]. This decomposition of the nonlinear function $N(x)$ is quite different from the Adomian decomposition. The main idea of this technique is to look for a solution having the series form
$$x = \sum_{i=0}^{\infty} x_i. \qquad (9)$$
The nonlinear operator N can be decomposed as
$$N(x) = N(x_0) + \sum_{i=1}^{\infty}\left\{ N\!\left(\sum_{j=0}^{i} x_j\right) - N\!\left(\sum_{j=0}^{i-1} x_j\right)\right\}. \qquad (10)$$
Combining (7), (9) and (10), we have
$$\sum_{i=0}^{\infty} x_i = \bar{c} + N(x_0) + \sum_{i=1}^{\infty}\left\{ N\!\left(\sum_{j=0}^{i} x_j\right) - N\!\left(\sum_{j=0}^{i-1} x_j\right)\right\}. \qquad (11)$$
380
F.A. Shah, M.A. Noor / Applied Mathematics and Computation 251 (2015) 378–386
Thus we have the following iterative scheme:
$$
\begin{cases}
x_0 = \bar{c},\\
x_1 = N(x_0),\\
x_2 = N(x_0 + x_1) - N(x_0),\\
\quad\vdots\\
x_{m+1} = N\!\left(\sum_{j=0}^{m} x_j\right) - N\!\left(\sum_{j=0}^{m-1} x_j\right), \quad m = 1, 2, \ldots
\end{cases} \qquad (12)
$$
Then
$$x_1 + x_2 + \cdots + x_{m+1} = N(x_0 + x_1 + \cdots + x_m), \quad m = 1, 2, \ldots, \qquad (13)$$
and
$$x = \bar{c} + \sum_{i=1}^{\infty} x_i. \qquad (14)$$
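The recursion (12) can be sketched generically. The following Python fragment is an illustrative sketch of ours, not part of the paper; it computes the partial sum $x_0 + x_1 + \cdots + x_m$ for a user-supplied nonlinear map $N$ and starting value $x_0 = \bar{c}$, and the function name and arguments are hypothetical.

```python
def decomposition_partial_sum(N, x0, m):
    """Partial sum x0 + x1 + ... + xm of the scheme (12):
    x1 = N(x0), x_{k+1} = N(x0 + ... + xk) - N(x0 + ... + x_{k-1})."""
    partial = x0      # running sum S_k = x0 + ... + xk
    prev = 0.0        # N at the previous partial sum; 0 so that the first added term is x1 = N(x0)
    for _ in range(m):
        current = N(partial)
        partial += current - prev   # add the next term x_{k+1}
        prev = current
    return partial
```

By construction, the successive partial sums reproduce the telescoping identity (13).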
It can be shown that the series $\sum_{i=0}^{\infty} x_i$ converges absolutely and uniformly to a unique solution of equation (1). From (5) and (12), we get
$$x_0 = \bar{c} = c - \frac{f(c)g(c)}{f'(c)g(c) + f(c)g'(c)}. \qquad (15)$$
From (4) and (15), it can be easily obtained that
$$h(x_0) = f(x_0)\,g(c), \qquad (16)$$
and
$$x_1 = N(x_0) = -\,\frac{h(x_0)}{f'(c)g(c) + f(c)g'(c)} = -\,\frac{f(x_0)g(c)}{f'(c)g(c) + f(c)g'(c)}. \qquad (17)$$
Note that $x$ is approximated by
$$X_m = x_0 + x_1 + x_2 + \cdots + x_m, \qquad (18)$$
where
$$\lim_{m\to\infty} X_m = x. \qquad (19)$$
For $m = 0$,
$$x \approx X_0 = x_0 = \bar{c} = c - \frac{f(c)g(c)}{f'(c)g(c) + f(c)g'(c)}. \qquad (20)$$
This formulation allows us to suggest the following one-step iterative method for solving the nonlinear equation (1).

Algorithm 2.1. For a given $x_0$, compute the approximate solution $x_{n+1}$ by the following iterative scheme:
$$x_{n+1} = x_n - \frac{f(x_n)g(x_n)}{f'(x_n)g(x_n) + f(x_n)g'(x_n)}, \quad n = 0, 1, 2, \ldots$$
This is the main iterative scheme, which was also introduced by He [8] and Noor [11] for generating different iterative methods for solving nonlinear equations.
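For illustration, a minimal Python sketch of Algorithm 2.1 for user-supplied $f$, $f'$ and auxiliary function $g$, $g'$ is given below; the tolerance, the iteration cap and the function names are our own choices, not prescribed by the paper.

```python
def algorithm_2_1(f, df, g, dg, x0, tol=1e-15, max_iter=100):
    """One-step scheme: x_{n+1} = x_n - f(x_n)g(x_n) / [f'(x_n)g(x_n) + f(x_n)g'(x_n)]."""
    x = x0
    for _ in range(max_iter):
        denom = df(x) * g(x) + f(x) * dg(x)
        x_new = x - f(x) * g(x) / denom
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

With $g \equiv 1$ (so $g' \equiv 0$) this sketch is simply Newton's method; other choices of $g$ give the variants discussed below.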
For $m = 1$,
$$x \approx X_1 = x_0 + x_1 = \bar{c} + N(x_0) = c - \frac{f(c)g(c)}{f'(c)g(c) + f(c)g'(c)} - \frac{f(x_0)g(c)}{f'(c)g(c) + f(c)g'(c)}.$$
This formulation allows us to suggest the following recurrence relation for solving nonlinear equations.

Algorithm 2.2. For a given $x_0$, compute the approximate solution $x_{n+1}$ by the following iterative schemes:
$$y_n = x_n - \frac{f(x_n)g(x_n)}{f'(x_n)g(x_n) + f(x_n)g'(x_n)}, \qquad (21)$$
$$x_{n+1} = y_n - \frac{f(y_n)g(x_n)}{f'(x_n)g(x_n) + f(x_n)g'(x_n)}, \quad n = 0, 1, 2, \ldots$$
In a similar way, for $m = 2$, we have
$$x \approx X_2 = x_0 + x_1 + x_2 = \bar{c} + N(x_0 + x_1) = c - \frac{f(c)g(c)}{f'(c)g(c) + f(c)g'(c)} - \frac{h(x_0 + x_1)}{f'(c)g(c) + f(c)g'(c)}$$
$$= c - \frac{f(c)g(c)}{f'(c)g(c) + f(c)g'(c)} - \frac{\big[f(x_0 + x_1) + f(x_0)\big]g(c)}{f'(c)g(c) + f(c)g'(c)}. \qquad (22)$$
This formulation enables us to suggest the following recurrence relation for solving nonlinear equations.

Algorithm 2.3. For a given $x_0$, compute the approximate solution $x_{n+1}$ by the following iterative schemes:
$$y_n = x_n - \frac{f(x_n)g(x_n)}{f'(x_n)g(x_n) + f(x_n)g'(x_n)},$$
$$z_n = y_n - \frac{f(y_n)g(x_n)}{f'(x_n)g(x_n) + f(x_n)g'(x_n)},$$
$$x_{n+1} = z_n - \frac{f(z_n)g(x_n)}{f'(x_n)g(x_n) + f(x_n)g'(x_n)}, \quad n = 0, 1, 2, \ldots$$
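A Python sketch of the three-step scheme of Algorithm 2.3 follows; it is again an illustration of ours, and dropping the $z_n$-update gives Algorithm 2.2. Note that the denominator is evaluated only once per outer iteration, at $x_n$.

```python
def algorithm_2_3(f, df, g, dg, x0, tol=1e-15, max_iter=50):
    """Three-step scheme y_n -> z_n -> x_{n+1} of Algorithm 2.3 with auxiliary function g."""
    x = x0
    for _ in range(max_iter):
        denom = df(x) * g(x) + f(x) * dg(x)   # f'(x_n)g(x_n) + f(x_n)g'(x_n), frozen at x_n
        y = x - f(x) * g(x) / denom
        z = y - f(y) * g(x) / denom
        x_new = z - f(z) * g(x) / denom
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```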
Algorithm 2.1, Algorithm 2.2 and Algorithm 2.3 are the main iterative schemes, which can be used to generate several iterative methods of higher order for different choices of the auxiliary function $g(x_n)$. The contribution of the auxiliary function is what makes this modification attractive: a proper selection of the auxiliary function adapts the main recurrence relations to the form best suited for obtaining the solution of a given nonlinear equation. To convey the basic idea, we consider only one value of the auxiliary function, namely $g(x_n) = e^{-a x_n}$. In this case, from Algorithm 2.1, Algorithm 2.2 and Algorithm 2.3, we obtain the following iterative methods for solving the nonlinear equation (1).

Algorithm 2.4. For a given $x_0$, compute the approximate solution $x_{n+1}$ by the following iterative scheme:
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n) - a f(x_n)}, \quad n = 0, 1, 2, \ldots$$
Algorithm 2.4 was also obtained by He [8] and Noor [11] by using the variational iteration technique. For $a = 0$, Algorithm 2.4 reduces to the well-known Newton method for solving the nonlinear equation (1).

Algorithm 2.5. For a given $x_0$, compute the approximate solution $x_{n+1}$ by the following iterative schemes:
$$y_n = x_n - \frac{f(x_n)}{f'(x_n) - a f(x_n)},$$
$$x_{n+1} = y_n - \frac{f(y_n)}{f'(x_n) - a f(x_n)}, \quad n = 0, 1, 2, \ldots$$
Algorithm 2.5 is called the two-step predictor–corrector method. For $a = 0$, Algorithm 2.5 is exactly the same as the method obtained in [3,9] for solving the nonlinear equation (1).

Algorithm 2.6. For a given $x_0$, compute the approximate solution $x_{n+1}$ by the following iterative schemes:
$$y_n = x_n - \frac{f(x_n)}{f'(x_n) - a f(x_n)},$$
$$z_n = y_n - \frac{f(y_n)}{f'(x_n) - a f(x_n)},$$
$$x_{n+1} = z_n - \frac{f(z_n)}{f'(x_n) - a f(x_n)}, \quad n = 0, 1, 2, \ldots$$
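The family of Algorithms 2.4–2.6 can be collected in one sketch, since $g(x_n) = e^{-a x_n}$ cancels from numerator and denominator and only the factor $f'(x_n) - a f(x_n)$ remains. The following Python fragment is an illustrative transcription of ours (parameter names and defaults are our own); setting steps = 1, 2, 3 gives Algorithms 2.4, 2.5 and 2.6, and a = 0 with steps = 1 recovers the Newton method.

```python
def exp_auxiliary_method(f, df, x0, a=0.0, steps=3, tol=1e-15, max_iter=50):
    """Algorithms 2.4-2.6: one, two or three steps with denominator f'(x_n) - a f(x_n)."""
    x = x0
    for _ in range(max_iter):
        denom = df(x) - a * f(x)          # fixed within one outer iteration
        u = x - f(x) / denom              # predictor step (Algorithm 2.4 / y_n)
        for _ in range(steps - 1):
            u = u - f(u) / denom          # corrector steps of Algorithms 2.5 and 2.6
        if abs(u - x) < tol:
            return u
        x = u
    return x
```

For example, exp_auxiliary_method(lambda t: (t - 1)**3 - 1, lambda t: 3*(t - 1)**2, 1.5, a=0.5, steps=3) should approximate the root $x = 2$ of the test function $f_3$ used in Section 4.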
Remark 2.1. Algorithms 2.4–2.6 are derived for one special value of the auxiliary function. For other values of the auxiliary function one can derive diverse iterative schemes for solving nonlinear equations.

Remark 2.2. We would like to point out that, if we take $a = 0$, $a = \tfrac{1}{2}$, $a = 1, \ldots$ in the iterative methods derived above, we obtain various known and new classes of iterative methods for solving nonlinear equations. All the methods derived above can be viewed as alternatives to the Newton method and its variant forms.

Remark 2.3. It is important to note that one should never choose a value of $a$ that makes the denominator zero during the iterative search for an approximate solution within the required tolerance. The sign of $a$ should be chosen so as to keep the denominator as large in magnitude as possible in the algorithms derived above.
3. Convergence analysis

In this section, we consider the convergence criteria of the main iterative schemes presented in Section 2.

Theorem 3.1. Let $p \in I$ be a simple zero of a sufficiently differentiable function $f : I \subseteq \mathbb{R} \to \mathbb{R}$ on an open interval $I$. If $x_0$ is sufficiently close to $p$, then the iterative method defined by Algorithm 2.3 has at least fourth-order convergence.
Proof. Let $p$ be a simple zero of $f(x)$. Expanding $f(x_n)$ and $f'(x_n)$ in Taylor series about $p$, we have
$$f(x_n) = f'(p)\big[e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4 + c_5 e_n^5 + c_6 e_n^6 + O(e_n^7)\big], \qquad (23)$$
and
$$f'(x_n) = f'(p)\big[1 + 2c_2 e_n + 3c_3 e_n^2 + 4c_4 e_n^3 + 5c_5 e_n^4 + 6c_6 e_n^5 + O(e_n^6)\big], \qquad (24)$$
where
$$c_k = \frac{1}{k!}\,\frac{f^{(k)}(p)}{f'(p)}, \quad k = 2, 3, \ldots, \quad \text{and} \quad e_n = x_n - p.$$
Now expanding $f(x_n)g(x_n)$, $f(x_n)g'(x_n)$ and $f'(x_n)g(x_n)$ in Taylor series, we obtain
$$f(x_n)g(x_n) = f'(p)\big[g(p)e_n + \big(g'(p) + c_2 g(p)\big)e_n^2 + g_1 e_n^3 + O(e_n^4)\big], \qquad (25)$$
$$f(x_n)g'(x_n) = f'(p)\big[g'(p)e_n + \big(g''(p) + c_2 g'(p)\big)e_n^2 + \big(\tfrac{1}{2}g'''(p) + c_2 g''(p) + c_3 g'(p)\big)e_n^3 + O(e_n^4)\big], \qquad (26)$$
and
$$f'(x_n)g(x_n) = f'(p)\big[g(p) + \big(g'(p) + 2c_2 g(p)\big)e_n + \big(\tfrac{1}{2}g''(p) + 2c_2 g'(p) + 3c_3 g(p)\big)e_n^2 + g_2 e_n^3 + O(e_n^4)\big], \qquad (27)$$
where
$$g_1 = \frac{1}{2}g''(p) + c_2 g'(p) + c_3 g(p), \qquad (28)$$
and
$$g_2 = \frac{1}{6}g'''(p) + c_2 g''(p) + 3c_3 g'(p) + 4c_4 g(p). \qquad (29)$$
From (25), (26) and (27), we get
$$\frac{f(x_n)g(x_n)}{f'(x_n)g(x_n) + f(x_n)g'(x_n)} = e_n - \left(\frac{g'(p)}{g(p)} + c_2\right)e_n^2 - \left(\frac{g''(p)}{g(p)} + 2c_3 - 2c_2^2 - 2c_2\frac{g'(p)}{g(p)} - 2\left(\frac{g'(p)}{g(p)}\right)^{2}\right)e_n^3 + O(e_n^4). \qquad (30)$$
Using (30), we have
$$y_n = p + \left(\frac{g'(p)}{g(p)} + c_2\right)e_n^2 + \left(\frac{g''(p)}{g(p)} + 2c_3 - 2c_2^2 - 2c_2\frac{g'(p)}{g(p)} - 2\left(\frac{g'(p)}{g(p)}\right)^{2}\right)e_n^3 + O(e_n^4). \qquad (31)$$
Expanding $f(y_n)$ in Taylor series about $p$ and using (31), we have
$$f(y_n) = f'(p)\left[\left(\frac{g'(p)}{g(p)} + c_2\right)e_n^2 + \left(\frac{g''(p)}{g(p)} + 2c_3 - 2c_2^2 - 2c_2\frac{g'(p)}{g(p)} - 2\left(\frac{g'(p)}{g(p)}\right)^{2}\right)e_n^3 + O(e_n^4)\right]. \qquad (32)$$
Now using (32), we obtain
$$f(y_n)g(x_n) = f'(p)\left[\big(g'(p) + c_2 g(p)\big)e_n^2 - \left(\frac{(g'(p))^{2}}{g(p)} + c_2 g'(p) - g''(p) + 2c_2^2 g(p) - 2c_3 g(p)\right)e_n^3 + O(e_n^4)\right]. \qquad (33)$$
Now using (26), (27) and (33), we get
$$\frac{f(y_n)g(x_n)}{f'(x_n)g(x_n) + f(x_n)g'(x_n)} = \left(\frac{g'(p)}{g(p)} + c_2\right)e_n^2 + \left(\frac{g''(p)}{g(p)} + 2c_3 - 4c_2^2 - 5c_2\frac{g'(p)}{g(p)} - 3\left(\frac{g'(p)}{g(p)}\right)^{2}\right)e_n^3 + O(e_n^4). \qquad (34)$$
From (30) and (34), we obtain
$$z_n = p + \left(2c_2^2 + 3c_2\frac{g'(p)}{g(p)} + \left(\frac{g'(p)}{g(p)}\right)^{2}\right)e_n^3 + \left(3c_2\frac{g''(p)}{g(p)} + 2\frac{g'(p)}{g(p)}\frac{g''(p)}{g(p)} + 5c_3\frac{g'(p)}{g(p)} + 7c_2 c_3 - 9c_2^3 - 15c_2^2\frac{g'(p)}{g(p)} - 12c_2\left(\frac{g'(p)}{g(p)}\right)^{2} - 4\left(\frac{g'(p)}{g(p)}\right)^{3}\right)e_n^4 + O(e_n^5). \qquad (35)$$
Expanding $f(z_n)$ in Taylor series about $p$ and using (35), we have
$$f(z_n) = f'(p)\left[\left(2c_2^2 + 3c_2\frac{g'(p)}{g(p)} + \left(\frac{g'(p)}{g(p)}\right)^{2}\right)e_n^3 + \left(3c_2\frac{g''(p)}{g(p)} + 2\frac{g'(p)}{g(p)}\frac{g''(p)}{g(p)} + 5c_3\frac{g'(p)}{g(p)} + 7c_2 c_3 - 9c_2^3 - 15c_2^2\frac{g'(p)}{g(p)} - 12c_2\left(\frac{g'(p)}{g(p)}\right)^{2} - 4\left(\frac{g'(p)}{g(p)}\right)^{3}\right)e_n^4 + O(e_n^5)\right]. \qquad (36)$$
Now using (26), (27) and (36), we get
$$\frac{f(z_n)g(x_n)}{f'(x_n)g(x_n) + f(x_n)g'(x_n)} = \left(2c_2^2 + 3c_2\frac{g'(p)}{g(p)} + \left(\frac{g'(p)}{g(p)}\right)^{2}\right)e_n^3 - \left(13c_2^3 + 23c_2^2\frac{g'(p)}{g(p)} + 17c_2\left(\frac{g'(p)}{g(p)}\right)^{2} + 5\left(\frac{g'(p)}{g(p)}\right)^{3} - 3c_2\frac{g''(p)}{g(p)} - 2\frac{g'(p)}{g(p)}\frac{g''(p)}{g(p)} - 7c_2 c_3 - 5c_3\frac{g'(p)}{g(p)}\right)e_n^4 + O(e_n^5). \qquad (37)$$
From (35) and (37), we obtain
$$x_{n+1} = p + \left(4c_2^3 + 8c_2^2\frac{g'(p)}{g(p)} + 5c_2\left(\frac{g'(p)}{g(p)}\right)^{2} + \left(\frac{g'(p)}{g(p)}\right)^{3}\right)e_n^4 + O(e_n^5). \qquad (38)$$
Finally, the error equation is
$$e_{n+1} = \left(\frac{g'(p)}{g(p)} + c_2\right)\left(\frac{g'(p)}{g(p)} + 2c_2\right)^{2} e_n^4 + O(e_n^5). \qquad (39)$$
The error equation (39) shows that Algorithm 2.3, the main iterative scheme, has at least fourth-order convergence. Consequently, the methods generated from this scheme also have at least fourth-order convergence. □

In a similar way, one can prove the convergence of Algorithm 2.2 and of the methods developed from that scheme.

Remark 3.1. From the convergence analysis, we note that for some special values of the auxiliary function $g(x)$, several iterative methods can be derived from Algorithm 2.1, Algorithm 2.2 and Algorithm 2.3. If
$$\frac{g'(x_n)}{g(x_n)} = -\,\frac{f''(x_n)}{2f'(x_n)},$$
then Algorithm 2.1 reduces to the following iterative method.
Algorithm 3.1. For a given $x_0$, compute the approximate solution $x_{n+1}$ by the following iterative scheme:
$$x_{n+1} = x_n - \frac{2f(x_n)f'(x_n)}{2[f'(x_n)]^{2} - f(x_n)f''(x_n)}, \quad n = 0, 1, 2, \ldots,$$
which is the well-known Halley method [17] of third-order convergence. Similarly, Algorithm 2.2 and Algorithm 2.3 reduce to the following iterative methods, respectively, for the above-mentioned particular value of the auxiliary function.

Algorithm 3.2. For a given $x_0$, compute the approximate solution $x_{n+1}$ by the following iterative schemes:
$$y_n = x_n - \frac{2f(x_n)f'(x_n)}{2[f'(x_n)]^{2} - f(x_n)f''(x_n)},$$
$$x_{n+1} = y_n - \frac{2f(y_n)f'(x_n)}{2[f'(x_n)]^{2} - f(x_n)f''(x_n)}, \quad n = 0, 1, 2, \ldots,$$
which is a fourth-order convergent method for solving nonlinear equations.

Algorithm 3.3. For a given $x_0$, compute the approximate solution $x_{n+1}$ by the following iterative schemes:
$$y_n = x_n - \frac{2f(x_n)f'(x_n)}{2[f'(x_n)]^{2} - f(x_n)f''(x_n)},$$
$$z_n = y_n - \frac{2f(y_n)f'(x_n)}{2[f'(x_n)]^{2} - f(x_n)f''(x_n)},$$
$$x_{n+1} = z_n - \frac{2f(z_n)f'(x_n)}{2[f'(x_n)]^{2} - f(x_n)f''(x_n)}, \quad n = 0, 1, 2, \ldots,$$
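As a sketch of ours (with hypothetical names and defaults), Algorithm 3.3 can be written in Python as follows; it requires $f$, $f'$ and $f''$.

```python
def algorithm_3_3(f, df, d2f, x0, tol=1e-15, max_iter=50):
    """Three-step Halley-type scheme with fixed denominator 2 f'(x_n)^2 - f(x_n) f''(x_n)."""
    x = x0
    for _ in range(max_iter):
        denom = 2.0 * df(x) ** 2 - f(x) * d2f(x)
        y = x - 2.0 * f(x) * df(x) / denom      # Halley step (Algorithm 3.1)
        z = y - 2.0 * f(y) * df(x) / denom      # first correction (as in Algorithm 3.2)
        x_new = z - 2.0 * f(z) * df(x) / denom  # second correction
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

For instance, algorithm_3_3(lambda t: t**3 - 10, lambda t: 3*t**2, lambda t: 6*t, 0.5) should converge to $10^{1/3} \approx 2.1544$ in a few iterations.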
which is a fifth-order convergent method for solving nonlinear equations and appears to be new. If
$$\frac{g'(x)}{g(x)} = -\,\frac{f''(x)}{f'(x)},$$
then Algorithm 2.3 reduces to the following iterative method.
Table 4.1 (Numerical comparison of the examples for a = 1).

Method  IT   x_n                      |f(x_n)|   d          q        TOC

f_1, x_0 = 1
NM      9    1.4044916482153412260    0.00e-01   4.21e-51   2.00000  0.0160
HM      5    1.4044916482153412260    0.00e-01   6.98e-32   3.00643  0.0160
CN      5    1.4044916482153412260    0.00e-01   1.31e-17   3.95150  0.0160
NR1     4    1.4044916482153412260    0.00e-01   2.78e-26   3.00659  0.0160
NR2     4    1.4044916482153412260    0.00e-01   1.58e-51   4.00041  0.0150

f_2, x_0 = 1.5
NM      8    2.2041177331716202959    1.00e-59   1.44e-52   2.00000  0.0160
HM      13   2.2041177331716202959    1.00e-59   2.77e-23   3.00213  0.0160
CN      12   2.2041177331716202959    3.00e-59   1.58e-33   3.98124  0.0150
NR1     8    2.2041177331716202959    1.00e-59   0.00e-01   2.99471  0.0150
NR2     5    2.2041177331716202959    1.00e-59   9.95e-29   3.97654  0.0150

f_3, x_0 = 1.5
NM      9    2.0000000000000000000    0.00e-01   1.32e-45   2.00000  0.0160
HM      6    2.0000000000000000000    0.00e-01   7.96e-40   3.00109  0.0160
CN      11   2.0000000000000000000    0.00e-01   2.28e-56   3.92479  0.0160
NR1     4    2.0000000000000000000    0.00e-01   2.77e-26   3.00324  0.0150
NR2     4    2.0000000000000000000    0.00e-01   1.10e-42   4.25777  0.0160

f_4, x_0 = 0.5
NM      12   2.1544346900318837217    0.00e-01   5.73e-34   2.00000  0.0160
HM      11   2.1544346900318837217    8.00e-59   1.00e-59   3.00047  0.0000
CN      16   2.1544346900318837217    1.00e-58   4.65e-42   3.83801  0.0160
NR1     4    2.1544346900318837217    8.00e-59   1.55e-32   2.99488  0.0160
NR2     4    2.1544346900318837217    1.00e-58   4.49e-42   3.92436  0.0160

f_5, x_0 = 2
NM      11   0.9153066008102487851    9.00e-60   4.70e-48   2.00000  0.0320
HM      7    0.9153066008102487851    1.00e-59   0.28e-50   2.99935  0.0470
CN      7    0.9153066008102487851    2.00e-60   2.00e-64   3.85152  0.0320
NR1     7    0.9153066008102487851    1.00e-60   0.00e-01   3.00133  0.0310
NR2     6    0.9153066008102487851    2.00e-60   1.59e-49   4.08338  0.0470

f_6, x_0 = 3.5
NM      14   3.0000000000000000000    0.00e-01   1.17e-48   2.00000  0.0160
HM      9    3.0000000000000000000    0.00e-01   6.57e-39   2.99432  0.0160
CN      9    3.0000000000000000000    0.00e-01   1.00e-63   3.98889  0.0160
NR1     9    3.0000000000000000000    2.00e-58   0.00e-01   2.99946  0.0150
NR2     8    3.0000000000000000000    0.00e-01   1.85e-35   3.99994  0.0310
Algorithm 3.4. For a given $x_0$, compute the approximate solution $x_{n+1}$ by the following iterative schemes:
$$y_n = x_n - \frac{f(x_n)f'(x_n)}{[f'(x_n)]^{2} - f(x_n)f''(x_n)},$$
$$z_n = y_n - \frac{f(y_n)f'(x_n)}{[f'(x_n)]^{2} - f(x_n)f''(x_n)},$$
$$x_{n+1} = z_n - \frac{f(z_n)f'(x_n)}{[f'(x_n)]^{2} - f(x_n)f''(x_n)}, \quad n = 0, 1, 2, \ldots,$$
which is a sixth-order convergent method for solving nonlinear equations and also appears to be new. Additionally, Algorithm 3.4 can also be implemented for obtaining approximate solutions of nonlinear equations having roots of unknown multiplicity greater than or equal to one.
4. Numerical results

We now present some examples to illustrate the efficiency of the two-step and three-step iterative methods developed in this paper. We compare the Newton method (NM), the method of Hasanov et al. [6] (HM), the method of Chun [3] (CN), Algorithm 2.5 (NR1) and Algorithm 2.6 (NR2) introduced in this paper. All the numerical experiments were performed on an Intel(R) Core(TM) 2 processor at 2.1 GHz with 1 GB of RAM; the computer uses 32-bit real numbers, and all the codes were written in Maple. In Tables 4.1 and 4.2, results are shown for all the considered examples.
Table 4.2 (Numerical comparison of the examples for a = 1/2).

Method  IT   x_n                      |f(x_n)|   d          q        TOC

f_1, x_0 = 1
NM      9    1.4044916482153412260    0.00e-01   4.21e-51   2.00000  0.0160
HM      5    1.4044916482153412260    0.00e-01   6.98e-32   3.00643  0.0160
CN      5    1.4044916482153412260    0.00e-01   1.31e-17   3.95150  0.0160
NR1     5    1.4044916482153412260    0.00e-01   1.58e-37   3.00659  0.0160
NR2     4    1.4044916482153412260    0.00e-01   1.58e-51   3.98144  0.0150

f_2, x_0 = 1.5
NM      8    2.2041177331716202959    1.00e-59   1.44e-52   2.00000  0.0160
HM      13   2.2041177331716202959    1.00e-59   2.77e-23   3.00213  0.0160
CN      12   2.2041177331716202959    3.00e-59   1.58e-33   3.98124  0.0150
NR1     6    2.2041177331716202959    0.00e-01   1.19e-32   2.99991  0.0150
NR2     5    2.2041177331716202959    1.00e-59   9.95e-29   3.97654  0.0150

f_3, x_0 = 1.5
NM      9    2.0000000000000000000    0.00e-01   1.32e-45   2.00000  0.0160
HM      6    2.0000000000000000000    0.00e-01   7.96e-40   3.00109  0.0160
CN      11   2.0000000000000000000    0.00e-01   2.28e-56   3.92479  0.0160
NR1     7    2.0000000000000000000    0.00e-01   3.59e-26   3.00324  0.0150
NR2     5    2.0000000000000000000    0.00e-01   4.34e-46   3.99977  0.0160

f_4, x_0 = 0.5
NM      12   2.1544346900318837217    0.00e-01   5.73e-34   2.00000  0.0160
HM      11   2.1544346900318837217    8.00e-59   1.00e-59   3.00047  0.0160
CN      16   2.1544346900318837217    1.00e-58   4.65e-42   3.83801  0.0160
NR1     5    2.1544346900318837217    8.00e-59   1.64e-49   3.00000  0.0160
NR2     4    2.1544346900318837217    0.00e-01   6.67e-24   3.92436  0.0160

f_5, x_0 = 2
NM      11   0.9153066008102487851    9.00e-60   4.70e-48   2.00000  0.0320
HM      7    0.9153066008102487851    1.00e-59   0.28e-50   2.99935  0.0470
CN      7    0.9153066008102487851    2.00e-60   2.00e-64   3.85152  0.0320
NR1     7    0.9153066008102487851    1.00e-60   1.26e-29   3.00133  0.0310
NR2     6    0.9153066008102487851    0.00e-01   5.03e-30   4.08338  0.0470

f_6, x_0 = 3.5
NM      14   3.0000000000000000000    0.00e-01   1.17e-48   2.00000  0.0160
HM      9    3.0000000000000000000    0.00e-01   6.57e-39   2.99432  0.0160
CN      9    3.0000000000000000000    0.00e-01   1.00e-63   3.98889  0.0160
NR1     9    3.0000000000000000000    0.00e-01   4.67e-39   2.99946  0.0160
NR2     8    3.0000000000000000000    0.00e-01   1.40e-22   3.99994  0.0160
In the tables, IT stands for the number of iterations, $d = |x_{n+1} - x_n|$ is the difference of the last two consecutive iterations, and the last column TOC gives the CPU time, taking one second as the unit. The computational order of convergence $q$ is approximated (see [17]) by means of
$$q \approx \frac{\ln\big(|x_{n+1} - x_n| / |x_n - x_{n-1}|\big)}{\ln\big(|x_n - x_{n-1}| / |x_{n-1} - x_{n-2}|\big)}.$$
The following stopping criteria are used in the computer programs:
$$\text{(i)}\ |x_{n+1} - x_n| < \epsilon, \qquad \text{(ii)}\ |f(x_n)| < \epsilon.$$
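As an illustration (our own Python sketch, not the authors' Maple code), the computational order of convergence $q$ and the stopping tests can be evaluated from the stored iterates as follows; how the two criteria are combined is our assumption.

```python
import math

def computational_order(xs):
    """Approximate q from the last four iterates xs[-4:] = [x_{n-2}, x_{n-1}, x_n, x_{n+1}]."""
    a, b, c, d = xs[-4:]
    return math.log(abs(d - c) / abs(c - b)) / math.log(abs(c - b) / abs(b - a))

def stop(x_new, x_old, f_value, eps=1e-15):
    """Stopping criteria: (i) |x_{n+1} - x_n| < eps or (ii) |f(x_n)| < eps (combination assumed)."""
    return abs(x_new - x_old) < eps or abs(f_value) < eps
```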
We consider the following nonlinear equations as test problems for the different numerical methods:
$$f_1(x) = \sin^2 x - x^2 + 1,$$
$$f_2(x) = x^2 - e^x + x + 2,$$
$$f_3(x) = (x - 1)^3 - 1,$$
$$f_4(x) = x^3 - 10,$$
$$f_5(x) = -x e^{x^2} - \sin^2 x + 3\cos x + x,$$
$$f_6(x) = e^{x^2 + 7x - 30} - 1.$$
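The test problems can be reproduced, for example, with SymPy supplying the derivatives. The short driver below is our illustrative sketch (it re-implements the three-step method NR2 of Algorithm 2.6 inline) and is not the Maple code used to produce the tables.

```python
import sympy as sp

def nr2(f, df, x0, a=0.5, tol=1e-15, max_iter=60):
    """Three-step method of Algorithm 2.6 with g(x) = exp(-a*x), i.e. denominator f'(x_n) - a f(x_n)."""
    x = x0
    for _ in range(max_iter):
        denom = df(x) - a * f(x)
        y = x - f(x) / denom
        z = y - f(y) / denom
        x_new = z - f(z) / denom
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

t = sp.symbols('t')
tests = {                                   # (expression, starting point x0)
    'f1': (sp.sin(t)**2 - t**2 + 1, 1.0),
    'f2': (t**2 - sp.exp(t) + t + 2, 1.5),
    'f3': ((t - 1)**3 - 1, 1.5),
    'f4': (t**3 - 10, 0.5),
    'f5': (-t*sp.exp(t**2) - sp.sin(t)**2 + 3*sp.cos(t) + t, 2.0),
    'f6': (sp.exp(t**2 + 7*t - 30) - 1, 3.5),
}

for name, (expr, x0) in tests.items():
    f = sp.lambdify(t, expr, 'math')
    df = sp.lambdify(t, sp.diff(expr, t), 'math')
    print(name, nr2(f, df, x0, a=0.5))      # a = 1/2 corresponds to Table 4.2
```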
5. Conclusion

In this paper, we have studied some classes of one-step, two-step and three-step iterative methods for solving nonlinear equations by using a different decomposition technique. Our derivation of the iterative methods is very simple as compared to other decomposition techniques, which is another aspect of its simplicity. If we consider the definition of the efficiency index [17] as $p^{1/m}$, where $p$ is the order of the method and $m$ is the number of functional evaluations per iteration required by the method, then Algorithm 2.5 has efficiency index $3^{1/3} \approx 1.44224957$, while Algorithm 2.6 has efficiency index $4^{1/4} \approx 1.414213562$, which is the same as that of the Newton method. The methods derived above are well defined and robust even when $f'(x_n) \approx 0$, and the numerical stability of the methods can also be observed during the numerical experiments. Using the technique and ideas of this paper, one can suggest and analyze higher-order multi-step iterative methods for solving nonlinear equations as well as systems of nonlinear equations.

Acknowledgement

The authors would like to thank Dr. S.M. Junaid Zaidi, Rector, CIIT, for providing excellent research facilities in both campuses of CIIT.

References

[1] S. Abbasbandy, Improving Newton-Raphson method for nonlinear equations by modified Adomian decomposition method, Appl. Math. Comput. 145 (2003) 887-893.
[2] G. Adomian, Nonlinear Stochastic Systems and Applications to Physics, Kluwer Academic Publishers, Dordrecht, 1989.
[3] C. Chun, Iterative methods improving Newton's method by the decomposition method, Comput. Math. Appl. 50 (2005) 1559-1568.
[4] C. Chun, Y. Ham, A one-parameter fourth-order family of iterative methods for nonlinear equations, Appl. Math. Comput. 189 (2007) 610-614.
[5] V. Daftardar-Gejji, H. Jafari, An iterative method for solving nonlinear functional equations, J. Math. Anal. Appl. 316 (2006) 753-763.
[6] V.I. Hasanov, I.G. Ivanov, G. Nedzhibov, A new modification of Newton method, Appl. Math. Eng. 27 (2002) 278-286.
[7] J.H. He, A new iteration method for solving algebraic equations, Appl. Math. Comput. 135 (2003) 81-84.
[8] J.H. He, Variational iteration method - some recent results and new interpretations, J. Appl. Math. Comput. 207 (2007) 3-17.
[9] M.A. Noor, K.I. Noor, Three-step iterative methods for nonlinear equations, Appl. Math. Comput. 183 (2006) 322-327.
[10] M.A. Noor, K.I. Noor, Some iterative schemes for nonlinear equations, Appl. Math. Comput. 183 (2006) 774-779.
[11] M.A. Noor, New classes of iterative methods for nonlinear equations, Appl. Math. Comput. 191 (2007) 128-131.
[12] M.A. Noor, F.A. Shah, Variational iteration technique for solving nonlinear equations, J. Appl. Math. Comput. 31 (2009) 247-254.
[13] M.A. Noor, F.A. Shah, K.I. Noor, E. Al-Said, Variational iteration technique for finding multiple roots of nonlinear equations, Sci. Res. Essays 6 (6) (2011) 1344-1350.
[14] M.A. Noor, F.A. Shah, A family of iterative schemes for finding zeros of nonlinear equations having unknown multiplicity, Appl. Math. Inf. Sci. 8 (5) (2014) 1-7.
[15] F.A. Shah, M.A. Noor, M. Batool, Derivative-free iterative methods for solving nonlinear equations, Appl. Math. Inf. Sci. 8 (5) (2014) 1-5.
[16] F.A. Shah, M.A. Noor, Variational iteration technique and some methods for the approximate solution of nonlinear equations, Appl. Math. Inf. Sci. Lett. 2 (3) (2014) 85-93.
[17] J.F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, NJ, 1964.