Applied Mathematics and Computation 217 (2010) 2608–2618
A novel cubically convergent iterative method for computing complex roots of nonlinear equations

R. Oftadeh*, M. Nikkhah-Bahrami, A. Najafi
School of Mechanical Engineering, College of Engineering, University of Tehran, Tehran, Iran
Abstract

A fast and simple iterative method with cubic convergence is proposed for determining the real and complex roots of any function F(x) = 0. The idea is based on passing a defined function G(x) tangent to F(x) at an arbitrary starting point. Choosing G(x) in the form of x^k or k^x, where k is obtained for the best correlation with the function F(x), gives an added freedom which, in contrast to existing methods, accelerates the convergence. Moreover, the new method can find complex roots from a purely real initial guess. This is in contrast to many other methods, such as the well-known Newton method, which need complex initial guesses to find complex roots. The proposed method is compared with several classical and recent methods, including Newton's method and a modern solver, the fsolve command in MATLAB. The results show the effectiveness and robustness of the new method compared to the other methods.
© 2010 Elsevier Inc. All rights reserved.

Keywords: Root of continuous functions; Taylor expansion; Real and complex roots; Number of iterations
1. Introduction

Solving for the roots of equations of the form F(x) = 0 is an old and well-known problem. The most famous and commonly used method is Newton's method, defined by:
x_{n+1} = x_n - \frac{F(x_n)}{F'(x_n)}, \qquad n \ge 0.   (1)
Other familiar methods are Bisection, Secant, False position, Brent, Halley, Schroder, Householder, Ridders, Muller, Laguerre, etc., which can be found in the literature. In recent years, many methods have been developed for solving nonlinear equations. These methods were developed using Taylor interpolating polynomials [1,2], quadrature formulas [3,4], decomposition [5,6], the homotopy perturbation method [7,8], and other techniques [9,10]. Many Newton-type iterative methods have also been developed for finding roots of nonlinear equations. From one point of view, these methods can be categorized as one-step [11,12], two-step [13,14] and three-step [15] iterative methods. Each of these methods has a different rate of convergence: second order [16,17], third order [11,18] and higher than third order [15,19]. Most of these methods need a proper first guess of the root. Some of them compute only the real roots and cannot operate in complex mode, or, if they can, the initial guess must be complex (as in Newton's method). The method introduced here does not have those weaknesses and can find both the real and the complex roots of any nonlinear function even when the initial guess is a real number. Recently, the authors developed a similar method for computing complex roots of systems of nonlinear equations [20]. We show here that a modified version of that method can effectively be used for finding the roots of a single nonlinear equation.

* Corresponding author. E-mail addresses: [email protected], [email protected] (R. Oftadeh).
doi:10.1016/j.amc.2010.07.075
2. The proposed method

Let the nonlinear function be represented by F(x). Therefore, the nonlinear equation can be written as:

F(x) = 0   (2)
Using the Taylor series expansion, we express the function in terms of an arbitrary function G(x), which will be defined later:

F(x) = F(x_n) + \frac{a_1(x_n)}{1!}(G(x) - G(x_n)) + \frac{a_2(x_n)}{2!}(G(x) - G(x_n))^2 + \ldots   (3)
where

a_1(x_n) = \frac{F'(x_n)}{G'(x_n)},   (4)

a_{i+1}(x_n) = \frac{a_i'(x_n)}{G'(x_n)}, \qquad i = 1, 2, 3, \ldots   (5)
Therefore,

F(x) = F(x_n) + \frac{F'(x_n)}{G'(x_n)}(G(x) - G(x_n)) + \frac{1}{2!}\left(\frac{F''(x_n)}{G'^2(x_n)} - \frac{F'(x_n)G''(x_n)}{G'^3(x_n)}\right)(G(x) - G(x_n))^2 + \ldots   (6)
By considering the above equation we can approximate F(x) with:

F(x) \approx F(x_n) + \frac{F'(x_n)}{G'(x_n)}(G(x) - G(x_n))   (7)
Let the right-hand side of Eq. (7) be represented by H(x):

H(x) = F(x_n) + \frac{F'(x_n)}{G'(x_n)}(G(x) - G(x_n))   (8)
In order for H(x) to be compatible with F(x) around x = x_n we must have:

H(x_n) = F(x_n),   (9)

H'(x_n) = F'(x_n),   (10)

H''(x_n) = F''(x_n).   (11)
Conditions (9) and (10) are automatically satisfied. For condition (11) to be satisfied we must have:

\frac{F''(x_n)}{F'(x_n)} = \frac{G''(x_n)}{G'(x_n)}   (12)
Now, setting F(x) = 0 in Eq. (7), we obtain:

x_{n+1} = G^{-1}\left(G(x_n) - G'(x_n)\frac{F(x_n)}{F'(x_n)}\right)   (13)
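As a concreteness check, the general iteration (13) can be written in a few lines. The sketch below is ours, in Python (the paper's own experiments use MATLAB); the caller supplies G, its inverse and its derivative, and with G(x) = x the step collapses to Newton's method.

```python
# Sketch of the general iteration (13):
#   x_{n+1} = G^{-1}( G(x_n) - G'(x_n) * F(x_n) / F'(x_n) ).
# The caller supplies the tangent function G, its inverse Ginv and its
# derivative dG.  With G(x) = x this is exactly one Newton step.

def g_step(F, dF, x, G, Ginv, dG):
    """One step of Eq. (13) for a user-chosen tangent function G."""
    return Ginv(G(x) - dG(x) * F(x) / dF(x))

# Example: G(x) = x reduces the step to Newton's method x - F(x)/F'(x).
F = lambda x: x * x - 2.0
dF = lambda x: 2.0 * x
identity = lambda x: x
newton_like = g_step(F, dF, 1.5, identity, identity, lambda x: 1.0)
print(newton_like)  # identical to the Newton step 1.5 - 0.25/3.0
```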
Note that Eq. (13) reduces to the Newton formula if we let G(x) = x.

3. Selection of G(x)

G(x) must be selected such that it can approximate any function. In addition, according to Eq. (13), the inverse of G(x) must be obtainable. Polynomials and exponential functions are usually appropriate for these purposes, so G(x) can be expressed in one of the forms k^x, x^k or exp(kx). If G(x) = x^k, then from Eqs. (12) and (13) we have:
\frac{F''(x_n)}{F'(x_n)} = \frac{(k-1)k\,x_n^{k-2}}{k\,x_n^{k-1}} = \frac{k-1}{x_n} \quad\Rightarrow\quad k = 1 + \frac{x_n F''(x_n)}{F'(x_n)}

and, substituting into Eq. (13),

x_{n+1} = \left(x_n^k - k x_n^{k-1}\frac{F(x_n)}{F'(x_n)}\right)^{1/k} \quad\Rightarrow\quad x_{n+1} = x_n\left(1 - \frac{k F(x_n)}{x_n F'(x_n)}\right)^{1/k}, \qquad n = 0, 1, 2, \ldots   (14)

If G(x) = k^x, then:
\frac{F''(x_n)}{F'(x_n)} = \frac{k^{x_n}(\ln k)^2}{k^{x_n}\ln k} = \ln k \quad\Rightarrow\quad k = e^{F''(x_n)/F'(x_n)}   (15)

and Eq. (13) gives

x_{n+1} = \log_k\left(k^{x_n} - k^{x_n}\ln(k)\,\frac{F(x_n)}{F'(x_n)}\right) = x_n + \frac{1}{\ln k}\ln\left(1 - \ln(k)\,\frac{F(x_n)}{F'(x_n)}\right)

Therefore:

x_{n+1} = x_n + \frac{F'(x_n)}{F''(x_n)}\ln\left(1 - \frac{F(x_n)F''(x_n)}{F'(x_n)^2}\right), \qquad n = 0, 1, 2, \ldots   (16)
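The two update formulas (14) and (16) can be sketched compactly. The Python sketch below is ours (the names step_xk and step_kx are not from the paper); it works in complex arithmetic throughout, which is what lets a purely real starting value reach a complex root.

```python
import cmath

def step_xk(F, dF, d2F, x):
    """One step of Eq. (14), i.e. G(x) = x^k with k = 1 + x F''(x)/F'(x)."""
    x = complex(x)
    k = 1 + x * d2F(x) / dF(x)
    return x * (1 - k * F(x) / (x * dF(x))) ** (1 / k)

def step_kx(F, dF, d2F, x):
    """One step of Eq. (16), i.e. G(x) = k^x with ln k = F''(x)/F'(x)."""
    x = complex(x)
    return x + dF(x) / d2F(x) * cmath.log(1 - F(x) * d2F(x) / dF(x) ** 2)

# F(x) = x^2 + 1: a real Newton iteration can never leave the real axis,
# but the x^k update jumps to the complex root i from the real guess 0.7.
root = step_xk(lambda z: z * z + 1, lambda z: 2 * z, lambda z: 2, 0.7)
print(root)  # ~1j

# F(x) = e^x - 3: here G(x) = k^x matches the form of F exactly,
# so Eq. (16) lands on the root ln 3 in a single step.
root2 = step_kx(lambda z: cmath.exp(z) - 3, cmath.exp, cmath.exp, 1.0)
print(root2)  # ~ln(3) = 1.0986...
```

Both steps need F'' as well as F'; that is the price of enforcing H''(x_n) = F''(x_n). When F'' vanishes at an iterate, the formulas degenerate and a plain Newton step is a natural fallback.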
Note that the proper choice of G(x) depends on F(x). If F(x) contains an exponential term, G(x) = k^x may offer a good rate of convergence; on the other hand, G(x) = x^k is especially suitable for polynomial equations. Also note that setting G(x) = e^{kx} gives the same iteration x_{n+1} = φ(x_n) as G(x) = k^x (Eq. (16)).

4. Illustrative examples
Example 4.1. To further elaborate on the proposed method, consider the following nonlinear equation:

F(x) = \tan\left(\frac{x}{x^2+1}\right)e^{-x} - x^5 + 100   (17)
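Reading Eq. (17) as F(x) = tan(x/(x²+1))·e^{−x} − x⁵ + 100, the entries of Table 1 can be cross-checked directly. The short script below is ours, not the paper's; it evaluates F at the starting point x₀ = 1800 and at the converged iterate reported in Table 1.

```python
import math

# Evaluate Eq. (17), F(x) = tan(x/(x^2+1)) * exp(-x) - x^5 + 100,
# at the starting point and at the converged iterate from Table 1.
def F(x):
    return math.tan(x / (x * x + 1)) * math.exp(-x) - x ** 5 + 100

x0 = 1800.0
r = 2.512032208395826  # fourth iterate reported in Table 1

print(abs(F(x0)))  # ~1.89e+16, dominated by the x^5 term
print(abs(F(r)))   # close to zero (Table 1 reports 4.26e-14)
```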
For computing a root of this equation, an arbitrary initial value x_0 = 1800 is used, and G(x) is set to x^k. Table 1 shows the convergence history of the proposed method. To clarify the procedure, the comparison between F(x) and H(x) is also illustrated in Fig. 1; the charts in this figure show the compatibility between F(x) and H(x) around the starting point in each step. It can be seen from Table 1 that, although the initial value is far from the root, the algorithm finds the root in just 4 steps with |F(x_i)| = 4.26 × 10^{-14}.

Example 4.2. Consider the following nonlinear equation:
F(z) = \sin(z) - z = 0   (18)
This equation is a special case of the Kepler equation with e = 1 and M = 0 [21]. For this equation G(x) is set to e^{kx}. To determine the ability of the proposed method to find different roots, especially complex ones, the initial values for z are set to a + bi, where a and b are real numbers that vary from −20 to 20 with steps of 1. Therefore the algorithm runs for 41² = 1681 different initial values. The stopping criterion for each run is set to |F(x_i)| < 5 × 10^{-12}. The algorithm found 13 different roots (one real and twelve complex), shown in Fig. 2 and numbered from 1 to 13. Fig. 3 shows the regions of initial values that converge to each of the 13 roots; a regular pattern is visible in how the initial values converge to the different roots. Therefore, if the initial value domain is extended along the real axis, the algorithm may find further complex roots of (18). It is interesting to note that with purely real initial values in the domain [−20, 20] the algorithm can still find all 13 roots listed above. Note that we examine only this special case of the Kepler equation here; one can vary e and M and find all of the related roots in the domain of initial values.

5. Rate of convergence

Let e_i = r − x_i be the actual error between x_i and the root r. By means of Eq. (6) and according to Eq. (12) we can write:
F(r) \approx F(x_i) + \frac{F'(x_i)}{G'(x_i)}(G(r) - G(x_i)) + \frac{a_3}{3!}(G(r) - G(x_i))^3 = 0   (19)
Table 1
Convergence history of the proposed method for (17).

Step   x_i                  k                    |F(x_i)|
1      1800                 5.000000000000000    1.889567e+16
2      2.504806159963997    4.998552959678708    1.430728223
3      2.512032208495923    4.998581738776183    1.993302e-08
4      2.512032208395826    4.998581738775789    4.263256e-14
[Fig. 1. Compatibility between F(x) and H(x) around the starting point in each step for Eq. (17). Four panels (Steps 1–4), each plotting H(x) and F(x).]
[Fig. 2. Roots of Eq. (18) found by the algorithm (one real and twelve complex roots); the 13 roots are numbered and plotted in the complex plane (real part vs. imaginary part).]
From Eqs. (13) and (19), we have:

x_{i+1} = G^{-1}\left(G(x_i) - G'(x_i)\frac{F(x_i)}{F'(x_i)}\right) = G^{-1}\left(G(r) + \frac{a_3}{3!}(G(r) - G(x_i))^3\,\frac{G'(x_i)}{F'(x_i)}\right)   (20)
On the other hand, from the Taylor series:

G(r) - G(x_i) \approx e_i\,G'(x_i)   (21)
Therefore from Eqs. (20) and (21) we can conclude:

x_{i+1} = G^{-1}\left(G(r) + \frac{a_3}{3!}\frac{(G'(x_i))^4}{F'(x_i)}\,e_i^3\right) \approx G^{-1}(G(r)) + \frac{a_3}{3!}\frac{(G'(x_i))^4}{F'(x_i)}\,e_i^3\,\frac{1}{G'(r)}   (22)
Fig. 3. Convergence of initial value domain to different roots of Eq. (18).
Therefore:

r - x_{i+1} = e_{i+1} = -\frac{a_3}{3!}\,\frac{(G'(x_i))^4}{F'(x_i)\,G'(r)}\,e_i^3   (23)

so the error is cubed at each step and the method converges with order three.
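The cubic rate in Eq. (23) can be checked numerically. The sketch below is our construction, not from the paper: it applies the G(x) = k^x update (16) to F(x) = cos x − x and estimates the observed order from three consecutive errors.

```python
import math

# Apply the G(x) = k^x update of Eq. (16) to F(x) = cos(x) - x and
# estimate the order of convergence p from three consecutive errors:
#   p ~ ln(e2/e1) / ln(e1/e0),  expected close to 3 by Eq. (23).
F = lambda x: math.cos(x) - x
dF = lambda x: -math.sin(x) - 1
d2F = lambda x: -math.cos(x)

r = 0.7390851332151607  # fixed point of cos, to machine precision
x = 1.0
errors = []
for _ in range(3):
    errors.append(abs(r - x))
    x = x + dF(x) / d2F(x) * math.log(1 - F(x) * d2F(x) / dF(x) ** 2)

e0, e1, e2 = errors
p = math.log(e2 / e1) / math.log(e1 / e0)
print(p)  # observed order of convergence, close to 3
```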
6. Numerical examples

We now present some examples to illustrate the efficiency of the newly developed method. This section compares the proposed method with the fsolve command in MATLAB, the methods of Grau-Sánchez (w42 and w63) [22], the method of Ujevic [23], the method of Jisheng et al. [24] and Newton's method (NM). All computations were done using MATLAB. The following stopping criterion is used for the computer programs:
|F(x_i)| < \epsilon   (24)
with \epsilon = 5 × 10^{-12}. Here, the goal is to compare the power of the methods in finding roots for a large set of initial values. In the following examples, NR indicates the number of roots that a method has found in the prescribed initial-value range, NF the number of fails, Ave IT the average number of iterations for the method to find a root, and Ave F the average number of function evaluations to find a root. (Note that if a method fails to find any root, that run incurs a 30-iteration penalty.)

Example 6.1. Consider the following polynomial equation:
F = z^{100} - 1 = 0   (25)
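A toy version of a multi-start scan on this equation can be sketched as follows (our Python construction, far smaller than the paper's 4000-run experiment): the G(x) = x^k update (14) is run in complex arithmetic from a small grid of complex starting points, and the distinct roots hit are collected.

```python
import cmath

# Scan a small grid of complex initial guesses with the G(x) = x^k update
# of Eq. (14) applied to F(z) = z^100 - 1, collecting the distinct roots.
F = lambda z: z ** 100 - 1
dF = lambda z: 100 * z ** 99
d2F = lambda z: 9900 * z ** 98

def solve(z, tol=5e-12, itmax=50):
    for _ in range(itmax):
        if abs(F(z)) < tol:
            return z
        k = 1 + z * d2F(z) / dF(z)               # k = 1 + z F''/F'
        z = z * (1 - k * F(z) / (z * dF(z))) ** (1 / k)
    return None

roots = set()
for a in (-0.9, -0.3, 0.4, 1.1):
    for b in (-0.8, -0.2, 0.5, 0.9):
        z = solve(complex(a, b))
        if z is not None:
            roots.add(round(z.real, 6) + 1j * round(z.imag, 6))

print(len(roots))  # several distinct 100th roots of unity
```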
Obviously, this equation has 100 different roots (2 real and 98 complex). Initial values in this example are set to ±0.025 + bi, where b is a real number that varies from −0.999 to 1 with steps of 0.001. Therefore the algorithm runs for 2 × 2000 = 4000 different initial values. Table 2 shows the comparison between the methods. As can be seen from the table, the proposed method with both G(x) = x^k and G(x) = k^x finds all 100 roots of (25). Moreover, the number of failures for our method is zero, while the other methods fail in many cases, and the average numbers of iterations and of function evaluations for our method are lower than those of the compared methods. Fig. 4 shows the 100 roots of Eq. (25) found by the new method, and Fig. 5 shows the number of iterations versus the different runs.

Example 6.2. Consider another polynomial equation:
F = z^{75} - 3z^{50} + z^{25} - 2 = 0   (26)
Obviously, this equation has 75 different roots (1 real and 74 complex). Initial values in this example are set to a + bi, where a and b are real numbers that vary from −1.4 to 1.5 with steps of 0.1. Therefore the algorithm runs for 30 × 30 = 900 different
Table 2
Comparison of methods for example 1.

Method                          NR^a   NF^b   Ave IT^c   Ave F^d
Present study (G(x) = x^k)      100    0      2          6
Present study (G(x) = k^x)      100    0      5.57       16.73
fsolve                          19     3380   28.27      56.09
Sánchez (w42) [22]              54     3578   35.21      105.63
Sánchez (w63) [22]              56     3850   36.09      144.36
Ujevic [23]                     32     3952   30.43      91.30
Jisheng et al. [24]             54     3670   36.14      108.43
NM                              48     3692   53.41      106.83

^a Number of roots. ^b Number of failures. ^c Average number of iterations. ^d Average number of function evaluations.
[Fig. 4. Roots of Eq. (25) found by the new method, plotted in the complex plane (real part vs. imaginary part).]

[Fig. 5. Iteration cost for example 1: iteration number versus run number for G(x) = k^x and G(x) = x^k.]
Table 3
Comparison of methods for example 2.

Method                          NR     NF     Ave IT   Ave F
Present study (G(x) = x^k)      25     1      6.03     18.08
Present study (G(x) = k^x)      75     1      8.23     24.7
fsolve                          0      900    –        –
Sánchez (w42) [22]              0      900    –        –
Sánchez (w63) [22]              0      900    –        –
Ujevic [23]                     0      900    –        –
Jisheng et al. [24]             0      900    –        –
NM                              0      900    –        –
[Fig. 6. Roots of Eq. (26) found by the new method, plotted in the complex plane (real part vs. imaginary part).]
[Fig. 7. Iteration cost for example 2: iteration number versus run number for G(x) = k^x and G(x) = x^k.]
Table 4
Comparison of methods for example 3 (real initial-value range).

Method                          NR     NF     Ave IT   Ave F
Present study (G(x) = x^k)      16     6      18.91    56.73
Present study (G(x) = k^x)      12     0      12.58    37.68
fsolve                          1      9      55.72    112.58
Sánchez (w42) [22]              1      25     29.06    87.18
Sánchez (w63) [22]              1      28     30.53    122.12
Ujevic [23]                     1      27     53.14    159.42
Jisheng et al. [24]             1      25     26.19    78.57
NM                              1      25     59.64    119.28
Table 5
Comparison of methods for example 3 (complex initial-value range).

Method                          NR     NF     Ave IT   Ave F
Present study (G(x) = x^k)      303    3942   16.47    49.41
Present study (G(x) = k^x)      286    212    8.07     24.22
fsolve                          32     6902   26.25    52.32
Sánchez (w42) [22]              128    6821   26.57    79.72
Sánchez (w63) [22]              60     8877   27.97    111.88
Ujevic [23]                     47     8729   28.64    85.91
Jisheng et al. [24]             72     8283   26.75    80.24
NM                              91     8603   30.64    61.27
Table 3 shows the comparison between the methods. As can be seen from the table, the proposed method with G(x) = k^x finds all 75 roots of Eq. (26), while all the other methods fail to find any root. Fig. 6 shows the 75 roots of Eq. (26) found by the new method, and Fig. 7 shows the number of iterations versus the different runs.

Example 6.3. Consider the following nonlinear equation:
F = e^{z^2 + 7z - 30} - 1 = 0   (27)
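Reading Eq. (27) as F = e^{z²+7z−30} − 1, its complex roots have a closed form, which gives an independent check: F = 0 exactly when z² + 7z − 30 = 2πin for an integer n, i.e. z = (−7 ± √(169 + 8πin))/2; n = 0 gives the real roots 3 and −10. The short check below is ours, not the paper's.

```python
import cmath
import math

# Roots of Eq. (27): e^{z^2 + 7z - 30} = 1  <=>  z^2 + 7z - 30 = 2*pi*i*n,
# so z = (-7 +/- sqrt(169 + 8*pi*i*n)) / 2.  n = 0 gives 3 and -10.
def root(n, sign=1):
    return (-7 + sign * cmath.sqrt(169 + 8j * math.pi * n)) / 2

for n in range(-3, 4):
    z = root(n)
    residual = abs(cmath.exp(z * z + 7 * z - 30) - 1)
    print(n, z, residual)  # residual is ~0 for every n
```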
In this example we use two sets of initial values. The first set contains real numbers from 0.1 to 10 with steps of 0.1. The second set contains complex numbers of the form a + bi, where a and b are real numbers that vary from −4.9 to 5 with steps of 0.1. Therefore the algorithm runs for 100 different initial values in the first set and for 100 × 100 = 10000 different initial values in the second. Tables 4 and 5 show the comparison between the methods.
[Fig. 8. Roots of Eq. (27) found by the new method, plotted in the complex plane (real part vs. imaginary part).]
Table 6
Comparison of methods for Eq. (28).

Method                          NR     NF     Ave IT   Ave F
Present study (G(x) = x^k)      67     11     6.65     19.95
Present study (G(x) = k^x)      23     1      6.64     19.93
fsolve                          1      9988   29.97    59.95
Sánchez (w42) [22]              1      1989   25.84    77.52
Sánchez (w63) [22]              1      8606   26.72    100.87
Ujevic [23]                     1      2528   37.33    111.99
Jisheng et al. [24]             1      2852   37.59    112.77
NM                              1      3203   37.23    111.70
Table 7
Comparison of methods for Eq. (29).

Method                          NR     NF     Ave IT   Ave F
Present study (G(x) = x^k)      28     257    13.93    41.78
Present study (G(x) = k^x)      22     5      6.27     18.82
fsolve                          1      400    16.33    13.63
Sánchez (w42) [22]              1      417    32.25    96.76
Sánchez (w63) [22]              1      921    32.55    130.19
Ujevic [23]                     1      827    26.86    80.58
Jisheng et al. [24]             1      229    55.86    167.57
NM                              1      446    131.52   263.05
Table 8
Comparison of methods for Eq. (30).

Method                          NR     NF     Ave IT   Ave F
Present study (G(x) = x^k)      1224   2177   12.03    36.10
Present study (G(x) = k^x)      1267   2330   12.25    36.75
fsolve                          320    2478   14.33    29.51
Sánchez (w42) [22]              466    5602   19.81    59.43
Sánchez (w63) [22]              345    7027   22.60    90.41
Ujevic [23]                     354    5770   20.93    62.79
Jisheng et al. [24]             959    4639   18.71    56.14
NM                              636    5250   20.64    41.28
[Fig. 9. Roots of Eq. (28) found by the new method, plotted in the complex plane (real part vs. imaginary part).]
As can be seen from these tables, the proposed method with both G(x) = x^k and G(x) = k^x finds more roots than the other compared methods. It is interesting that the new method finds complex roots from real initial values, while the other methods find only a single real root. Fig. 8 shows the roots of Eq. (27) found by the new method.

Examples 6.4–6.6. Consider the following nonlinear equations:
F = 2\sin z + 1 - z = 0   (28)

F = z e^{z^2} - \sin^2 z + 3\cos z + 5 = 0   (29)

F = \tan\left(\frac{z}{z+1}\right)\ln(z) - \frac{z\sin(z/10)}{2+\cos z} = 0   (30)
In these three examples we test the methods with real initial-value ranges. For Eq. (28) the initial values range from −9.999 to 0 with steps of 0.001; for Eq. (29) they vary from 0.001 to 1 with steps of 0.001; and for Eq. (30) they vary from 1 to 10000 with steps of 1. The results for these three examples are presented in Tables 6–8. These tables show that the new method is the winner in all cases with respect to the number of roots found, the number of failures, the number of iterations and the number of function evaluations. The roots found by the proposed method are illustrated in Figs. 9–11. It is interesting to note that, for Eq. (30), our method finds more than 1200 roots, many more than the roots found by the other compared methods.
[Fig. 10. Roots of Eq. (29) found by the new method, plotted in the complex plane (real part vs. imaginary part).]
[Fig. 11. Roots of Eq. (30) found by the new method, plotted in the complex plane (real part vs. imaginary part).]
7. Conclusion

In this paper a new, simple, effective and flexible method is presented for computing the real and complex roots of nonlinear equations. Numerical examples show the power of the proposed method, which can find many more roots (especially complex roots) than other existing methods. The method is robust to the location of the initial value and can therefore find roots far from the given initial guess. The proposed method also handles complex initial values more efficiently than the other compared methods and can find complex roots from real initial values, an ability most methods lack in many cases. Finally, the number of failures is much lower than for the other methods, and the number of iterations is comparable to other fast methods.

References
[1] J.M. Ortega, W.C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, 1970.
[2] R.L. Burden, J.D. Faires, Numerical Analysis, seventh ed., PWS Publishing Company, Boston, 2001.
[3] A. Cordero, J.R. Torregrosa, Variants of Newton's method using fifth-order quadrature formulas, Appl. Math. Comput. 190 (2007) 686–698.
[4] M. Aslam Noor, Fifth-order convergent iterative method for solving nonlinear equations using quadrature formula, J. Math. Control Sci. Appl. 1 (2007).
[5] M.T. Darvishi, A. Barati, A third-order Newton-type method to solve systems of nonlinear equations, Appl. Math. Comput. 187 (2007) 630–635.
[6] S. Abbasbandy, Extended Newton's method for a system of nonlinear equations by modified Adomian decomposition method, Appl. Math. Comput. 170 (2005) 648–656.
[7] A. Golbabai, M. Javidi, A new family of iterative methods for solving system of nonlinear algebraic equations, Appl. Math. Comput. 190 (2007) 1717–1722.
[8] Jian-Lin Li, Adomian's decomposition method and homotopy perturbation method in solving nonlinear equations, J. Comput. Appl. Math. 228 (2009) 168–173.
[9] M. Grau-Sánchez, J.M. Peris, J.M. Gutiérrez, Accelerated iterative methods for finding solutions of a system of nonlinear equations, Appl. Math. Comput. 190 (2007) 1815–1823.
[10] H.H.H. Homeier, A modified Newton method with cubic convergence: the multivariate case, J. Comput. Appl. Math. 169 (2004) 161–169.
[11] M. Çetin Koçak, A class of iterative methods with third-order convergence to solve nonlinear equations, J. Comput. Appl. Math. 218 (2008) 290–306.
[12] M.V. Kanwar, V.K. Kukreja, S. Singh, On a class of quadratically convergent iteration formulae, Appl. Math. Comput. 166 (3) (2005) 633–637.
[13] N. Ujevic, A method for solving nonlinear equations, Appl. Math. Comput. 174 (2006) 1416–1426.
[14] YoonMee Ham, Changbum Chun, Sang-Gu Lee, Some higher-order modifications of Newton's method for solving nonlinear equations, J. Comput. Appl. Math. 222 (2008) 477–486.
[15] Weihong Bi, Hongmin Ren, Qingbiao Wu, Three-step iterative methods with eighth-order convergence for solving nonlinear equations, J. Comput. Appl. Math. 225 (2009) 105–112.
[16] Xinlong Feng, Yinnian He, Parametric iterative methods of second-order for solving nonlinear equation, Appl. Math. Comput. 173 (2006) 1060–1067.
[17] Xinyuan Wu, Hongwei Wu, On a class of quadratic convergence iteration formulae without derivatives, Appl. Math. Comput. 107 (2000) 77–80.
[18] Liang Fang, Guoping He, Zhongyong Hu, A cubically convergent Newton-type method under weak conditions, J. Comput. Appl. Math. 220 (2008) 409–412.
[19] Liang Fang, Guoping He, Some modifications of Newton's method with higher-order convergence for solving nonlinear equations, J. Comput. Appl. Math. 228 (2009) 296–303.
[20] M. Nikkhah-Bahrami, R. Oftadeh, An effective iterative method for computing real and complex roots of systems of nonlinear equations, Appl. Math. Comput. 215 (2009) 1813–1820.
[21] John P. Boyd, Rootfinding for a transcendental equation without a first guess: polynomialization of Kepler's equation through Chebyshev polynomial expansion of the sine, Appl. Numer. Math. 57 (2007) 12–18.
[22] Miquel Grau-Sánchez, Improving order and efficiency: composition with a modified Newton's method, J. Comput. Appl. Math. 231 (2009) 592–597.
[23] Nenad Ujevic, An iterative method for solving nonlinear equations, J. Comput. Appl. Math. 201 (2007) 208–216.
[24] Kou Jisheng, Li Yitian, Wang Xiuhua, Third-order modification of Newton's method, J. Comput. Appl. Math. 205 (2007) 1–5.