Applied Mathematics and Computation 217 (2010) 3190–3198
A comparison of the Newton–Krylov method with high order Newton-like methods to solve nonlinear systems

Byeong-Chun Shin (a), M.T. Darvishi (b), Chang-Hyun Kim (a)

(a) Department of Mathematics, Chonnam National University, Republic of Korea
(b) Department of Mathematics, Razi University, Kermanshah 67149, Iran
Abstract

We compare the CPU times and error estimates of several third- and fourth-order variants of Newton's method with those of the Newton–Krylov method for solving systems of nonlinear equations. A set of numerical experiments shows that, when the system is sparse and its size is large, the Newton–Krylov method outperforms the other high order Newton-like methods in both cost and accuracy. © 2010 Elsevier Inc. All rights reserved.
Keywords: Newton–Krylov method; System of nonlinear equations; GMRES; CPU time; Iteration number
1. Introduction

Consider the following system of nonlinear equations
F(x) = 0,   (1)

where F = (f_1(x), ..., f_m(x))^t with nonlinear m-variable functions f_i : ℝ^m → ℝ for i = 1, 2, ..., m. Many relationships in nature are inherently nonlinear, so solving nonlinear equations occurs frequently in scientific work, and finding a root of Eq. (1) has become one of the most important problems in numerical analysis. There are many numerical methods for solving Eq. (1). Among them, Newton's method is one of the most powerful and basic: it converges quadratically if F is continuously differentiable and a good initial guess x_0 is provided. However, each Newton iteration step requires the exact solution of the linear Newton system, which can be expensive in actual applications, in particular when the problem size m is very large. To reduce the computational cost of Newton's method, Dembo et al. proposed an inexact Newton method, the so-called Newton–Krylov method, in [1]. Frontini and Sormani proposed a third-order method based on a quadrature formula for solving systems of nonlinear equations in [2], and Cordero and Torregrosa developed some variants of Newton's method based on the trapezoidal and midpoint quadrature rules in [3]. Also, Darvishi and Barati presented some high order iterative methods in [4–6], and Babajee et al. proposed a fourth-order iterative technique in [7]. In this paper we show, through numerical experiments, that the Newton–Krylov method is better in both cost and accuracy than the other high order Newton-like methods when the system size is large. The paper is organized as follows. In the following section we give a brief description of the Newton–Krylov method for solving systems of nonlinear equations. Some interesting high order methods for solving systems of nonlinear equations are then presented in Section 3.
In Section 4 we provide numerical comparisons of the high order methods and the Newton–Krylov method on several examples.

2. Preliminaries

A Newton–Krylov method is an implementation of Newton's method in which a Krylov subspace method is used to approximately solve the linear subproblem at each Newton step. Krylov subspace methods constitute a broad and widely used class of iterative linear algebra methods that includes the classical conjugate gradient method for symmetric positive-definite systems [8] and methods developed for nonsymmetric linear systems such as the generalized minimal residual method (GMRES) [9], the bi-conjugate gradient stabilized method (Bi-CGSTAB) [10] and the transpose-free quasi-minimal residual method (TFQMR) [11].

2.1. Newton–Krylov method

Consider the system (1) of nonlinear equations, in which F is assumed to be continuously differentiable in a neighborhood of a zero x*. We also assume that the Jacobian F′(x*) is non-singular. Denote by x_n the approximate solution of Eq. (1) at the nth Newton iteration step. Newton's method requires, for the next approximate solution, the evaluation of F′(x_n) and then the solution s_n of the following linear system
F′(x_n) s_n = −F(x_n).   (2)
Then the next Newton iterate is x_{n+1} = x_n + s_n. In Eq. (2), F′(x_n) denotes the Jacobian matrix of F(x) at the current iterate x_n. The main strength of Newton's method is that if x_0 is sufficiently close to x* then {x_n} converges quadratically to x*. However, at each Newton iteration we have to solve the linear system F′(x_n)s_n = −F(x_n), and the computational cost of finding such exact solutions s_n can be very high when the system is large. From the viewpoint of computational cost the Newton–Krylov method (or inexact Newton method) was developed; it uses an approximate solution instead of the exact solution of Eq. (2) (see [12] for more details). In the Newton–Krylov method, the solution s_n of Eq. (2) is approximated subject to the following condition
‖F′(x_n) s_n + F(x_n)‖ ≤ η_n ‖F(x_n)‖,   (3)
where η_n ∈ [0, 1) is called the forcing term; condition (3) is called the inexact Newton condition. The forcing term reflects how accurately s_n solves F′(x_n)s_n = −F(x_n). At each iteration step of the inexact Newton method we only need to solve the linear system approximately, using an efficient iterative method, e.g., the matrix splitting methods [13–15] or the modern Krylov subspace methods [16]. The computational cost of the Newton–Krylov method can therefore be considerably lower than that of the standard Newton method. Furthermore, the Newton–Krylov method converges quadratically if the sequence of forcing terms goes to zero as ‖F(x_n)‖ goes to zero (see [17] for details). From this point of view, the Newton–Krylov method can be more practical and effective than Newton's method in actual applications. However, the resulting s_n may not be useful if x_n is far from a solution. In fact, the Newton–Krylov method, like all Newton-like methods, must usually be globalized, i.e., augmented with certain auxiliary procedures (globalizations) that increase the likelihood of convergence when a good initial approximation is not available. Globalizations are typically structured to test whether a step gives satisfactory progress towards a solution and, if necessary, to modify it so that it does. One way to improve global convergence is backtracking, which gives the inexact Newton backtracking method [12].

Inexact Newton Backtracking Algorithm (N–K).
1. Let x_0 be given.
2. Let η_max ∈ [0, 1), t ∈ (0, 1) and 0 < θ_min < θ_max < 1 be given.
3. For n = 0, 1, ... until convergence:
   Choose an initial η_n ∈ [0, η_max] and s_n such that
      ‖F′(x_n) s_n + F(x_n)‖ ≤ η_n ‖F(x_n)‖;
   While ‖F(x_n + s_n)‖ > [1 − t(1 − η_n)] ‖F(x_n)‖:
      choose θ ∈ [θ_min, θ_max],
      update s_n = θ s_n and η_n = 1 − θ(1 − η_n).
   Set x_{n+1} = x_n + s_n.

The idea of backtracking is that if a step is not acceptable, it is shortened until it is acceptable. The reduction s_n = θ s_n with θ ∈ [θ_min, θ_max] is called safeguarded backtracking. The condition θ ≤ θ_max guarantees that the backtracking loop makes progress in a finite number of reductions, while the condition θ ≥ θ_min guarantees that the steps do not become too short. One may choose t small, e.g. t = 10^{−4}, so that a step is accepted whenever there is minimal progress. Typically θ_min = 0.1 and θ_max = 0.5 are chosen in practice [18].

Previous studies [19–21] have shown that the forcing terms may affect the robustness of the Newton–Krylov method, globalization notwithstanding. In this paper we adopt the forcing term called "Choice 1" in [19], which is determined as follows. Select η_0 ∈ [0, 1) and set
η_n = | ‖F(x_n)‖ − ‖F(x_{n−1}) + F′(x_{n−1}) s_{n−1}‖ | / ‖F(x_{n−1})‖,   n = 1, 2, ....   (4)
In keeping with practice elsewhere (e.g., see [19,20]), we follow (4) with the safeguard
η_n ← min{ η_max, max{ η_n, η_{n−1}^{(1+√5)/2} } }   whenever   η_{n−1}^{(1+√5)/2} > 0.1.   (5)
In our implementations we used η_0 = 0.01 and η_max = 0.9. The exponent (1+√5)/2 is related to a local convergence rate associated with the forcing terms [22].

There are many Krylov subspace methods, including the generalized minimal residual method (GMRES) [9], the bi-conjugate gradient stabilized method (Bi-CGSTAB) [10] and the conjugate gradient squared method (CGS) [23]. GMRES is an iterative method for the numerical solution of systems of linear equations: it approximates the solution by the vector in a Krylov subspace with minimal residual, and the Arnoldi iteration is used to construct that subspace [24]. In this paper, in order to solve the linear system (2) within the Newton–Krylov method, we use the alternative implementation of GMRES given in [24].
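To make the preceding description concrete, the following is a minimal sketch, in Python/NumPy rather than the authors' Matlab, of the inexact Newton backtracking algorithm with the "Choice 1" forcing terms. The names newton_krylov, F and J are ours; J(x) is assumed to return the Jacobian as a dense or sparse matrix, and SciPy's gmres plays the role of the Krylov solver (its relative tolerance is measured against ‖b‖, which matches condition (3)).

```python
import numpy as np
from scipy.sparse.linalg import gmres

def newton_krylov(F, J, x0, tol=1e-13, eta0=0.01, eta_max=0.9,
                  t=1e-4, theta=0.5, max_iter=100):
    x = np.asarray(x0, dtype=float)
    eta = eta0
    Fx = F(x)
    for n in range(max_iter):
        nF = np.linalg.norm(Fx)
        if nF < tol:
            break
        Jx = J(x)
        # Inexact Newton step: ||F'(x_n) s_n + F(x_n)|| <= eta_n ||F(x_n)||.
        # (In SciPy releases older than 1.12 the keyword is tol, not rtol.)
        s, _ = gmres(Jx, -Fx, rtol=eta, atol=0.0)
        # Safeguarded backtracking; here theta is fixed inside [theta_min, theta_max].
        while np.linalg.norm(F(x + s)) > (1.0 - t * (1.0 - eta)) * nF:
            s = theta * s
            eta = 1.0 - theta * (1.0 - eta)
        lin_res = np.linalg.norm(Fx + Jx @ s)   # ||F(x_n) + F'(x_n) s_n||
        x = x + s
        Fx = F(x)
        # "Choice 1" forcing term (4) followed by the safeguard (5).
        eta_new = abs(np.linalg.norm(Fx) - lin_res) / nF
        gamma = (1.0 + np.sqrt(5.0)) / 2.0
        if eta ** gamma > 0.1:
            eta_new = max(eta_new, eta ** gamma)
        eta = min(eta_new, eta_max)
    return x, n
```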
3. High order Newton-like methods

Recently, several high order Newton-like iterative methods for solving the nonlinear system (1) have been studied. Some of them are very attractive because they do not use the second derivative of the function F. In [2], Frontini and Sormani proposed a modified Newton method (mNm) with third-order convergence and the following iteration scheme:

x_{n+1} = x_n − [ F′( x_n − (1/2) F′(x_n)^{−1} F(x_n) ) ]^{−1} F(x_n).   (mNm)
In [5], Darvishi and Barati introduced the following fourth-order method (FM):
x_{n+1} = x_n − [ (1/6) F′(x_n) + (2/3) F′( (x_n + g(x_n))/2 ) + (1/6) F′(g(x_n)) ]^{−1} F(x_n),   (FM)

where

g(x_n) = x_n − F′(x_n)^{−1} [ F(x_n) + F(x̄_{n+1}) ],   with   x̄_{n+1} = x_n − F′(x_n)^{−1} F(x_n).

With the same x̄_{n+1}, Babajee et al. also proposed the following third-order Chebyshev-like iterative method (CL) in [25]:

x_{n+1} = x̄_{n+1} − F′(x_n)^{−1} F(x̄_{n+1}).   (CL)
Nedzhibov [26] presented some interesting third and fourth-order methods such as
x_{n+1} = x_n − [ F′( x_n − (1/2) h(x_n) ) ]^{−1} F(x_n),   (Ned1)

x_{n+1} = x_n − F′(x_n)^{−1} F′( x_n + (1/2) h(x_n) ) h(x_n)   (Ned2)

and

x_{n+1} = x_n − (1/2) [ 3F′(y(x_n)) − F′(x_n) ]^{−1} [ 3F′(y(x_n)) + F′(x_n) ] h(x_n),   (Ned3)

where h(x) = F′(x)^{−1} F(x) and y(x) = x − (2/3) h(x). The first two methods (Ned1) and (Ned2) are third-order methods, while the method (Ned3) has fourth-order convergence. In the following section we provide numerical results for several systems of nonlinear equations using the Newton–Krylov method with backtracking (N–K) and the high order methods mentioned above.
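For contrast with the inexact approach of Section 2, here is a sketch (again our own illustration, under the same F and J conventions as the newton_krylov sketch) of the fourth-order scheme (Ned3). Each iteration forms two Jacobians and performs two direct linear solves, which is exactly the work that becomes expensive as m grows.

```python
import numpy as np

def ned3(F, J, x0, tol=1e-13, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for n in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        Jx = J(x)
        h = np.linalg.solve(Jx, Fx)        # h(x) = F'(x)^{-1} F(x)
        Jy = J(x - (2.0 / 3.0) * h)        # F'(y(x)) with y(x) = x - (2/3) h(x)
        rhs = (3.0 * Jy + Jx) @ h
        x = x - 0.5 * np.linalg.solve(3.0 * Jy - Jx, rhs)
    return x, n
```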
4. Numerical results

In this section we report solutions of several systems of nonlinear equations. To solve the linear systems that arise we used Gaussian elimination, except for the Newton–Krylov (N–K) method, where we used GMRES. All computations were performed in Matlab. The stopping criterion is ‖F(x_n)‖ < 10^{−13}, where x_n is the nth approximation of the solution.

Example 1. The first example has the following form [6]:
e^{x_1} e^{x_2} + x_1 cos x_2 = 0,
x_1 + x_2 = 1.

Its exact solution is x* = (−4.38161975, 5.38161976)^t. As initial guess we set x_0 = (−4, 5)^t; the numerical results are given in Table 1.
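As a usage illustration, Example 1 can be coded as follows and passed to the newton_krylov sketch from Section 2 (F1 and J1 are our names; the Jacobian is worked out by hand):

```python
import numpy as np

def F1(x):
    x1, x2 = x
    return np.array([np.exp(x1) * np.exp(x2) + x1 * np.cos(x2),
                     x1 + x2 - 1.0])

def J1(x):                                   # Jacobian of F1
    x1, x2 = x
    e = np.exp(x1 + x2)
    return np.array([[e + np.cos(x2), e - x1 * np.sin(x2)],
                     [1.0, 1.0]])

# x, iters = newton_krylov(F1, J1, np.array([-4.0, 5.0]))
# expected: x close to (-4.38161975, 5.38161976)
```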
Example 2. The second test problem takes the form:

x_1 x_3 + x_4 x_1 + x_4 x_3 = 0,
x_2 x_3 + x_4 x_2 + x_4 x_3 = 0,
x_1 x_2 + x_1 x_3 + x_2 x_3 = 1,
x_1 x_2 + x_4 x_1 + x_4 x_2 = 0.
Its exact solution is given in [6] as:

x_1 = 0.577380952380952380952380,
x_2 = 0.577380952380952380952380,
x_3 = 0.577380952380952380952380,
x_4 = −0.289115646258503401360544.

To solve this system we set x_0 = (0.5, 0.5, 0.5, −0.2)^t. Table 2 shows the results.

Example 3. The third test problem is
x_1² + x_2² + x_3² = 9,
x_1 x_2 x_3 = 1,
x_1 + x_2 − x_3² = 0.
Table 1. Results for Example 1.

Method   ‖error‖₂        CPU time (s)   Iter.
N–K      8.005932e−15    0.187500       4
Ned1     1.776357e−15    0.093750       3
Ned2     1.776357e−15    0.125000       3
Ned3     1.776357e−15    0.078125       2
mNm      1.776357e−15    0.109375       3
FM       1.776357e−15    0.125000       2
CL       1.243450e−14    0.078125       2
Table 2. Results for Example 2.

Method   ‖error‖₂        CPU time (s)   Iter.
N–K      4.002966e−16    0.359375       7
Ned1     0               0.140625       3
Ned2     3.925231e−17    0.156250       3
Ned3     2.220446e−16    0.109375       2
mNm      0               0.156250       3
FM       6.153481e−15    0.156250       2
CL       9.614813e−17    0.140625       3
Table 3. Results for Example 3.

Method   ‖error‖₂        CPU time (s)   Iter.
N–K      3.675597e−15    0.421875       8
Ned1     1.790235e−15    0.171875       4
Ned2     1.387779e−17    0.218750       5
Ned3     1.790235e−15    0.171875       4
mNm      1.790235e−15    0.171875       4
FM       1.790235e−15    0.328125       4
CL       1.790235e−15    0.218750       5
Table 4. Results for Example 4.

Method   ‖error‖₂        CPU time (s)   Iter.
N–K      3.5318e−15      0.093750       5
Ned1     0.000000        0.109375       4
Ned2     0.000000        0.109375       4
Ned3     0.000000        0.109375       3
mNm      0.000000        0.140625       4
FM       0.000000        0.125000       3
CL       2.6680e−10      0.171875       3
Its approximate solution is given in [3] as (−2.0902946, 2.1402581, −0.2235251)^t. We set x_0 = (−2.5, 1, −1)^t. Table 3 shows the results.

Example 4. The fourth test problem is
x_1⁴ + x_2⁴ + x_3⁴ = 3,
x_1³ − x_2³ + x_3³ = 1,
x_1² + x_2² − x_3² = 1.

To obtain the exact solution (1, 1, 1)^t of the system, we set x_0 = (1.25, 1.25, 1.25)^t. Table 4 shows the results.

4.1. Numerical tests for sparse systems of nonlinear equations

Example 5. Consider F(x) = (f_1(x), f_2(x), ..., f_m(x))^t with
f_i(x) = x_i x_{i+1} − 1,   i = 1, 2, ..., m − 1,
f_m(x) = x_m x_1 − 1.

When m is odd, the exact zeros of F(x) are (1, 1, ..., 1)^t and (−1, −1, ..., −1)^t. We solve this system for various values of m, in each case with initial guess (0.5, 0.5, ..., 0.5)^t. The numerical results are given in Table 5.
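The structure of this system is what favors the Krylov approach: the Jacobian has only two nonzero entries per row, so each GMRES matrix-vector product costs O(m), while the dense direct solves used by the high order methods cost O(m³). A sketch of the residual and its sparse Jacobian (our names F5 and J5, under the conventions of the newton_krylov sketch above):

```python
import numpy as np
from scipy.sparse import csr_matrix

def F5(x):
    return x * np.roll(x, -1) - 1.0          # f_i = x_i x_{i+1} - 1, cyclically

def J5(x):
    m = x.size
    i = np.arange(m)
    rows = np.repeat(i, 2)
    cols = np.empty(2 * m, dtype=int)
    cols[0::2] = i                           # d f_i / d x_i     = x_{i+1}
    cols[1::2] = (i + 1) % m                 # d f_i / d x_{i+1} = x_i
    vals = np.empty(2 * m)
    vals[0::2] = np.roll(x, -1)
    vals[1::2] = x
    return csr_matrix((vals, (rows, cols)), shape=(m, m))

# x, iters = newton_krylov(F5, J5, np.full(201, 0.5))  # should approach (1, ..., 1)
```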
Example 6. Consider F(x) = (f_1(x), f_2(x), ..., f_m(x))^t with

f_i(x) = x_i² − 1,   i = 1, 2, ..., m.

The exact zeros of F(x) are (1, 1, ..., 1)^t and (−1, −1, ..., −1)^t. For various values of m we use the same initial guess (0.5, 0.5, ..., 0.5)^t. The numerical results are provided in Table 6.

Example 7. The next test problem is F(x) = (f_1(x), f_2(x), ..., f_m(x))^t with
f_i(x) = cos(x_i) − 1,   i = 1, 2, ..., m.

To obtain the exact solution (0, 0, ..., 0)^t for different values of m, we use the same initial guess (0.5, 0.5, ..., 0.5)^t · (π/180) = (0.0087, 0.0087, ..., 0.0087)^t. For this example the stopping criterion is ‖F(x_n)‖ < 10^{−6}. The numerical results are provided in Table 7.

The Jacobian matrices of the nonlinear systems in Examples 5–7 are sparse. In the last example we present a system of nonlinear equations with a full Jacobian matrix.
Table 5. Results for Example 5.

m     Method   CPU time (s)   Iter.
13    N–K      0.484375       5
      Ned1     0.500000       4
      Ned2     0.531250       4
      Ned3     0.375000       3
      mNm      0.546875       4
      FM       0.640625       3
      CL       0.421875       4
71    N–K      7.828125       5
      Ned1     11.890625      4
      Ned2     14.468750      5
      Ned3     10.281250      3
      mNm      12.109375      4
      FM       13.750000      3
      CL       8.609375       5
151   N–K      75.031250      5
      Ned1     121.578125     4
      Ned2     172.046875     5
      Ned3     166.718750     3
      mNm      138.281250     4
      FM       199.046875     3
      CL       109.093750     5
201   N–K      272.546875     5
      Ned1     461.812500     4
      Ned2     588.906250     5
      Ned3     431.984375     3
      mNm      790.218750     4
      FM       802.640625     3
      CL       501.546875     5
Table 6. Results for Example 6.

m     Method   CPU time (s)   Iter.
20    N–K      0.953125       5
      Ned1     1.000000       4
      Ned2     0.968750       4
      Ned3     0.750000       3
      mNm      1.015625       4
      FM       11.156250      3
      CL       0.734375       4
50    N–K      3.953125       5
      Ned1     5.312500       4
      Ned2     6.593750       5
      Ned3     4.031250       3
      mNm      5.390625       4
      FM       6.093750       3
      CL       4.015625       5
100   N–K      20.812500      5
      Ned1     31.546875      4
      Ned2     42.078125      4
      Ned3     28.140625      3
      mNm      38.000000      4
      FM       49.500000      3
      CL       27.546875      5
135   N–K      116.437500     5
      Ned1     196.812500     4
      Ned2     238.812500     5
      Ned3     132.234375     3
      mNm      190.828125     4
      FM       211.015625     3
      CL       114.453125     5
Table 7. Results for Example 7.

m     Method   CPU time (s)   Iter.
30    N–K      0.531250       4
      Ned1     0.593750       3
      Ned2     0.593750       3
      Ned3     0.390625       2
      mNm      0.593750       3
      FM       0.921875       3
      CL       0.390625       3
50    N–K      1.750000       5
      Ned1     1.812500       3
      Ned2     1.812500       3
      Ned3     1.843750       3
      mNm      1.859375       3
      FM       2.890625       3
      CL       1.078125       3
100   N–K      16.343750      5
      Ned1     17.734375      3
      Ned2     23.218750      4
      Ned3     17.500000      3
      mNm      16.859375      3
      FM       24.734375      3
      CL       12.562500      4
200   N–K      185.078125     5
      Ned1     223.296875     3
      Ned2     300.062500     4
      Ned3     220.984375     3
      mNm      225.937500     3
      FM       336.265625     3
      CL       151.671875     4
Example 8 (Chandrasekhar H-equation [27]). The Chandrasekhar integral equation [27], which arises in radiative transfer theory, is a nonlinear integral equation that yields a full nonlinear system of equations when discretized. It is given by
F(P, c) = 0,   P : [0, 1] → ℝ,   (6)

with parameter c and the operator F defined by

F(P, c)(y) = P(y) − [ 1 − (c/2) ∫₀¹ y P(v) / (y + v) dv ]^{−1}.   (7)
If we discretize the integral equation (7) using the midpoint rule with m grid points,

∫₀¹ f(x) dx ≈ (1/m) Σ_{j=1}^{m} f(x_j),   x_j = (j − 0.5) h,   h = 1/m,   j = 1, ..., m,

we obtain the following system of nonlinear equations:

F_i(y; c) = y_i − [ 1 − (c/2m) Σ_{j=1}^{m} x_i y_j / (x_i + x_j) ]^{−1},   i = 1, ..., m.   (8)
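A sketch of the discretized residual (8) (chandrasekhar_F is our name; c = 0.5 as in the experiment below). Since the kernel x_i/(x_i + x_j) couples every pair of grid points, the Jacobian of this F is full, which is the situation in which Table 8 shows the high order methods to be competitive.

```python
import numpy as np

def chandrasekhar_F(y, c=0.5):
    m = y.size
    x = (np.arange(1, m + 1) - 0.5) / m          # midpoint grid x_j = (j - 1/2)/m
    A = x[:, None] / (x[:, None] + x[None, :])   # A[i, j] = x_i / (x_i + x_j)
    return y - 1.0 / (1.0 - (c / (2.0 * m)) * (A @ y))

# residual at the starting vector (1, ..., 1) for m = 100:
# chandrasekhar_F(np.ones(100))
```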
The system (8) has a solution for all c ∈ (0, 1), and we start the iterations from the vector (1, 1, ..., 1)^t. We set c = 0.5; Table 8 shows the results. We note that in this case the Jacobian is a full matrix, so we changed the stopping criterion to ‖F(x_n)‖ < 10^{−6}.

5. Conclusion

We have compared some high order iterative methods with the Newton–Krylov method. As the numerical results show, when the size of the nonlinear system is small, the run time of the Newton–Krylov method is not better than that of the other methods. But as the size of the problem increases, the run time of the Newton–Krylov method becomes better than that of the other methods, provided the system is sparse. We are preparing a paper on a fusion method that combines the Newton–Krylov method with high order methods to solve systems of nonlinear equations.
Table 8. Results for Example 8.

m     Method   CPU time (s)   Iter.
5     N–K      0.125000       4
      Ned1     0.093750       2
      Ned2     0.093750       2
      Ned3     0.078125       2
      mNm      0.093750       2
      FM       0.109375       2
      CL       0.093750       2
15    N–K      0.296875       5
      Ned1     0.171875       2
      Ned2     0.171875       2
      Ned3     0.156250       2
      mNm      0.171875       2
      FM       0.234375       2
      CL       0.125000       2
50    N–K      2.828125       6
      Ned1     1.562500       2
      Ned2     1.625000       2
      Ned3     1.515625       2
      mNm      1.453125       2
      FM       2.265625       2
      CL       0.921875       2
75    N–K      9.703125       6
      Ned1     5.718750       2
      Ned2     5.859375       2
      Ned3     5.687500       2
      mNm      5.375000       2
      FM       7.812500       2
      CL       3.015625       2
100   N–K      21.718750      6
      Ned1     13.593750      2
      Ned2     15.406250      2
      Ned3     13.484375      2
      mNm      14.968750      2
      FM       19.812500      2
      CL       6.953125       2
200   N–K      264.453125     6
      Ned1     198.812500     2
      Ned2     189.609375     2
      Ned3     193.750000     2
      mNm      198.640625     2
      FM       250.390625     2
      CL       92.953125      2
References

[1] R.S. Dembo, S.C. Eisenstat, T. Steihaug, Inexact Newton methods, SIAM J. Numer. Anal. 19 (1982) 400–408.
[2] M. Frontini, E. Sormani, Third-order methods from quadrature formulae for solving systems of nonlinear equations, Appl. Math. Comput. 149 (2004) 771–782.
[3] A. Cordero, J.R. Torregrosa, Variants of Newton's method for functions of several variables, Appl. Math. Comput. 183 (2006) 199–208.
[4] M.T. Darvishi, A. Barati, A third-order Newton-type method to solve systems of nonlinear equations, Appl. Math. Comput. 187 (2007) 630–635.
[5] M.T. Darvishi, A. Barati, A fourth-order method from quadrature formulae to solve systems of nonlinear equations, Appl. Math. Comput. 188 (2007) 257–261.
[6] M.T. Darvishi, A. Barati, Super cubic iterative methods to solve systems of nonlinear equations, Appl. Math. Comput. 188 (2007) 1678–1685.
[7] D.K.R. Babajee, M.Z. Dauhoo, M.T. Darvishi, A. Barati, A note on the local convergence of iterative methods based on Adomian decomposition method and 3-node quadrature rule, Appl. Math. Comput. 200 (2008) 452–458.
[8] M.R. Hestenes, E. Stiefel, Methods of conjugate gradients for solving linear systems, J. Res. Nat. Bur. Stand. 49 (1952) 409–435.
[9] Y. Saad, M.H. Schultz, GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Statist. Comput. 7 (1986) 856–869.
[10] H.A. van der Vorst, Bi-CGSTAB: a fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems, SIAM J. Sci. Statist. Comput. 13 (1992) 631–644.
[11] R.W. Freund, A transpose-free quasi-minimal residual algorithm for non-Hermitian linear systems, SIAM J. Sci. Comput. 14 (1993) 470–482.
[12] S.C. Eisenstat, H.F. Walker, Globally convergent inexact Newton methods, SIAM J. Optim. 4 (1994) 393–422.
[13] Z.-Z. Bai, G.H. Golub, L.-Z. Lu, J.-F. Yin, Block triangular and skew-Hermitian splitting methods for positive-definite linear systems, SIAM J. Sci. Comput. 26 (3) (2005) 844–863.
[14] Z.-Z. Bai, G.H. Golub, M.K. Ng, Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. Appl. 24 (2003) 603–626.
[15] Z.-Z. Bai, J.-C. Sun, D.-R. Wang, A unified framework for the construction of various matrix multisplitting iterative methods for large sparse system of linear equations, Comput. Math. Appl. 32 (12) (1996) 51–76.
[16] Y. Saad, Iterative Methods for Sparse Linear Systems, PWS Publishing Company, Boston, 1996.
[17] H.-B. An, Z.-Z. Bai, A globally convergent Newton-GMRES method for large sparse systems of nonlinear equations, Appl. Numer. Math. 57 (2007) 235–252.
[18] J.E. Dennis Jr., R.B. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Series in Automatic Computation, Prentice-Hall, Englewood Cliffs, NJ, 1983.
[19] S.C. Eisenstat, H.F. Walker, Choosing the forcing terms in an inexact Newton method, SIAM J. Sci. Comput. 17 (1996) 16–32.
[20] J.N. Shadid, R.S. Tuminaro, H.F. Walker, An inexact Newton method for fully coupled solution of the Navier–Stokes equations with heat and mass transport, J. Comput. Phys. 137 (1997) 155–185.
[21] R.S. Tuminaro, H.F. Walker, J.N. Shadid, On backtracking failure in Newton-GMRES methods with a demonstration for the Navier–Stokes equations, J. Comput. Phys. 180 (2002) 549–558.
[22] R.P. Pawlowski, J.N. Shadid, J.P. Simonis, H.F. Walker, Globalization techniques for Newton–Krylov methods and applications to the fully coupled solution of the Navier–Stokes equations, SIAM Rev. 48 (4) (2006) 700–721.
[23] P. Sonneveld, CGS, a fast Lanczos-type solver for nonsymmetric linear systems, SIAM J. Sci. Stat. Comput. 10 (1989) 36–52.
[24] H.F. Walker, L. Zhou, A simpler GMRES, Numer. Linear Algebra Appl. 1 (6) (1994) 571–581.
[25] D.K.R. Babajee, M.Z. Dauhoo, M.T. Darvishi, A. Karami, A. Barati, Analysis of two Chebyshev-like third order methods free from second derivatives for solving systems of nonlinear equations, J. Comput. Appl. Math. 233 (2010) 2002–2012.
[26] G.H. Nedzhibov, A family of multi-point iterative methods for solving systems of nonlinear equations, J. Comput. Appl. Math. 222 (2) (2008) 244–250.
[27] C.T. Kelley, Solution of the Chandrasekhar H-equation by Newton's method, J. Math. Phys. 21 (1980) 1625–1628.