Journal of Computational and Applied Mathematics 218 (2008) 290 – 306 www.elsevier.com/locate/cam
A class of iterative methods with third-order convergence to solve nonlinear equations
M. Çetin Koçak*
Chemical Engineering Department, Engineering Faculty, Ankara University, Tandoğan, 06100 Ankara, Turkey
Received 25 July 2006; received in revised form 3 February 2007
Abstract
Algebraic and differential equations generally co-build mathematical models. Either lack or intractability of their analytical solution often forces workers to resort to an iterative method and face the likely challenges of slow convergence, non-convergence or even divergence. This manuscript presents a novel class of third-order iterative techniques in the form xk+1 = gu(xk) = xk + f(xk)u(xk) to solve a nonlinear equation f with the aid of a weight function u. The class currently contains an invert-and-average (gKia), an average-and-invert (gKai), and an invert-and-exponentiate (gKe) branch. Each branch has several members, some of which embed second-order Newton's (gN), third-order Chebychev's (gC) or Halley's (gH) solvers. Class members surpassed stand-alone applications of these three well-known methods. Other methods are also permitted as auxiliaries provided they are at least of second order. Asymptotic convergence constants are calculated. Assignment of class parameters to non-members carries them to a common basis for comparison. This research also generated a one-step "solver" that is usable for a posteriori analysis, troubleshooting, and comparison.
© 2007 Elsevier B.V. All rights reserved.

Keywords: Algebraic equation solvers; Iterative methods; Fixed-point iterations; Simulation; Convergence order; Direct substitution; Partial substitution; Newton's method; Halley's method; Nonlinear equations; Convergence acceleration
1. Introduction
Efficient techniques are required to solve the large number of nonlinear equations encountered, for instance, in scientific and engineering models. Dynamic simulation applications in particular may need multiple runs, each demanding zeroes of algebraic equations coupled to differential equations thousands of times. Either lack or intractability of an analytical solution frequently directs one to harness a numerical scheme and face a possibility of slow convergence or divergence, in other words, inefficiency or failure.

Let f(x) = 0 denote a nonlinear equation to be solved. Efforts continue worldwide to develop new iterative methods and improve old ones. This paper presents a novel class of fixed-point techniques in the form xk+1 = gu(xk) = xk + f(xk)u(xk), where k is an iteration counter and u is a weight function. A fixed-point method starts from one or more guessed values of the independent variable x, supposedly close to the target solution z, and thereafter repeatedly uses direct substitution xk+1 = g(xk) until it reaches a point where two consecutive x values coincide within a pre-specified

* Tel.: +90 312 212 67 20.
E-mail address:
[email protected]. 0377-0427/$ - see front matter © 2007 Elsevier B.V. All rights reserved. doi:10.1016/j.cam.2007.02.001
tolerance, and reports this point as z. Partial substitution is an attempt to improve convergence through a gain function G multiplying the correction to x at each iteration, such that xk+1 = gps(xk) = xk + G(xk)(g(xk) − xk). The ideal gain Gid needed to jump immediately to z varies with xk according to Gid(xk) = (z − xk)/(g(xk) − xk). Gid is of course not known a priori. However, it is clear from this formula that a variable gain would be more appropriate than a constant one. Beyond this point in the text, when functions are written without an argument the latter is x. For instance, gu = x + fu is the iteration form studied in this research, gN = x − f/f′ is the well-known Newton's method [1,3,4,14], and gNps = x − Gf/f′ represents partial substitution variants of gN. More about gN and its gNps variants later. Atkinson [1] gave the following theorem pertinent to convergence order n:

Theorem. Assume that z is a root of x = g(x), and that g is n times continuously differentiable for all x near z, for some n ≥ 2. Furthermore, assume g′(z) = ··· = g^(n−1)(z) = 0. Then if the initial guess x1 is chosen sufficiently close to z, the iteration xk+1 = g(xk), k ≥ 1, will have order of convergence n, and

lim_{k→∞} (z − xk+1)/(z − xk)^n = (−1)^(n−1) g^(n)(z)/n!,

where εk = xk − z.
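The direct-substitution loop just described can be sketched in a few lines of code (an illustration of ours, not part of the paper; the driver and its names are hypothetical):

```python
# Generic fixed-point driver: iterate x_{k+1} = g(x_k) until two
# consecutive iterates coincide within a pre-specified tolerance.

def fixed_point(g, x0, tol=1e-12, max_iter=100):
    """Return (approximate fixed point, number of iterations taken)."""
    x = x0
    for k in range(1, max_iter + 1):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new, k
        x = x_new
    raise RuntimeError("no convergence within max_iter iterations")

# Example: f(x) = x**2 - 2 solved with Newton's g_N = x - f/f'
root, iters = fixed_point(lambda x: x - (x * x - 2) / (2 * x), 1.5)
```

Any of the g forms discussed below can be passed as `g`; only the iteration map changes, not the driver.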
According to this theorem, for n = 2,

lim_{k→∞} (z − xk+1)/(z − xk)² = −lim_{k→∞} εk+1/εk² = −(1/2!) g″(z).    (1)
More pertinently, for third-order methods, including the class developed in this research,

lim_{k→∞} (z − xk+1)/(z − xk)³ = lim_{k→∞} εk+1/εk³ = (1/3!) g‴(z),    (2)

with the implication that in some neighbourhood of z

εk+1 = (1/3!) g‴(z) εk³.
Thus, the relative smallness of g‴(z) can be a criterion for selection among third-order methods. Linear or first-order methods are those where n = 1, g′(z) ≠ 0, and εk+1 is proportional to εk in the neighbourhood of z. For second order, the only requirement is g′(z) = 0. For n > 2, this becomes g′(z) = g″(z) = ··· = g^(n−1)(z) = 0 and g^(n)(z) ≠ 0.

Remark 1. nth-order solvers are a subset of (n − 1)th-order solvers. Higher n may mean fewer but not necessarily cheaper iterations. Irrespective of n, there is always a risk of divergence if the iteration is not started close enough to z. Iteration equations obtained from a rearrangement of f(x) = 0 are mostly linear.

2. Solvers of gu type

Direct differentiation of gu with respect to x gives

gu′ = 1 + f′u + fu′,
gu″ = f″u + 2f′u′ + fu″,
gu‴ = f‴u + 3f″u′ + 3f′u″ + fu‴.
As x goes to z the terms containing f disappear because f(z) = 0, and so these derivatives tend to

gu′(z) = 1 + f′(z)u(z),    (3a)
gu″(z) = f″(z)u(z) + 2f′(z)u′(z),    (3b)
gu‴(z) = f‴(z)u(z) + 3f″(z)u′(z) + 3f′(z)u″(z).    (3c)
Equating them to zero and rearranging, one obtains

gu′(z) = 0 ⇒ u(z) = −1/f′(z),    (4a)
gu″(z) = 0 ⇒ u′(z) = −0.5f″(z)u(z)/f′(z) = 0.5f″(z)/f′²(z),    (4b)
gu‴(z) = 0 ⇒ u″(z) = −(f‴(z)u(z) + 3f″(z)u′(z))/(3f′(z)) = f‴(z)/(3f′²(z)) − f″²(z)/(2f′³(z)).    (4c)
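Conditions (4a) and (4b) are easy to check numerically. The sketch below (ours, not the paper's; the test function is our choosing) verifies them for the weight u = −(1/f′)(1 − L/2)^(−1) of Halley's solver, which appears in Section 2.1:

```python
# Check conditions (4a) and (4b) for Halley's weight function on
# f(x) = x^2 - 2 at the simple root z = sqrt(2).

import math

def f(x):   return x * x - 2
def fp(x):  return 2 * x      # f'
def fpp(x): return 2.0        # f''

def u_halley(x):
    L = f(x) * fpp(x) / fp(x) ** 2        # logarithmic degree of convexity
    return -1.0 / (fp(x) * (1 - 0.5 * L))

z = math.sqrt(2)
h = 1e-5
u_z = u_halley(z)
up_z = (u_halley(z + h) - u_halley(z - h)) / (2 * h)   # central difference

cond_4a = -1.0 / fp(z)                 # (4a): u(z) = -1/f'(z)
cond_4b = 0.5 * fpp(z) / fp(z) ** 2    # (4b): u'(z) = 0.5 f''(z)/f'^2(z)
```

Both starred conditions are met to within discretization error, consistent with gH being third order.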
The implicit condition f′(z) ≠ 0 in (4) means that z must be a simple root (r = 1). As expected, these conditions are the first three rungs up the steep ladder of convergence order. Violation of (4a) is enough to make gu a first-order process. It is relatively easy to find a function u that tends to −1/f′(z) in the limit, thereby satisfying (4a) and rendering a second-order method, but it is a formidable task to fulfil (4b) in addition and rise to third order. It is even more difficult to satisfy (4c) as an extra step and climb to fourth order.

Newton's method gN is a simple one-point technique without memory. It is a piecewise linearization of f since it extends the tangent at the current iterate to intersect the x-axis and suggests this intercept as the next approximation to z. Since gN = x + fuN with uN = −1/f′,

uN(z) = −1/f′(z),  uN′(z) = f″(z)/f′²(z),  uN″(z) = f‴(z)/f′²(z) − 2f″²(z)/f′³(z).
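The second-order behaviour of gN, εk+1 ≈ (f″(z)/(2f′(z)))εk², can be observed directly (our sketch; the cube-root benchmark of Section 4.1.2 is the test case):

```python
# Newton's method g_N = x - f/f' on f(x) = x^3 - 1000 (z = 10), checking
# that eps_{k+1}/eps_k^2 approaches f''(z)/(2 f'(z)) = 60/600 = 0.1.

def newton_step(x):
    return x - (x ** 3 - 1000) / (3 * x ** 2)

z = 10.0
x = 12.0
errors = []
for _ in range(4):
    x = newton_step(x)
    errors.append(abs(x - z))

const = 60.0 / (2 * 300.0)              # f''(z)/(2 f'(z))
ratio = errors[2] / errors[1] ** 2      # empirical error ratio
```

The observed ratio settles near 0.1 well before roundoff dominates.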
Comparison with the above conditions reveals immediately that uN(z) satisfies (4a) but uN′(z) fails (4b). Thus, gN is at most second order unless f″(z) = 0. Note that uN′(z) and uN″(z), respectively, match the form of (4b) and (4c) but fail their coefficients. This is an indication that gN has a good prospect of improvement and acceleration, as exemplified by numerous previous modifications surveyed below in Section 2.1. Repetition of z is known to demote convergence of gN from quadratic to linear (geometric). If r is the multiplicity of z, then g′(z) = (r − 1)/r ≠ 0, and gNr = x − rf/f′ restores second-order convergence [4]. Note that gNr is a gNps with a fixed G = r.

Assume that z is a simple zero of f. Differentiating gNps = x − Gf/f′ twice gives

gNps′ = 1 − G′ f/f′ − G(1 − ff″/f′²),
gNps″ = −G″ f/f′ − 2G′(1 − ff″/f′²) − G d/dx(1 − ff″/f′²).

At z, f(z) = 0 and hence f/f′ → 0, 1 − ff″/f′² → 1, and d/dx(1 − ff″/f′²) → −f″(z)/f′(z), so that

gNps′(z) = 1 − G(z),  gNps″(z) = (f″(z)/f′(z))G(z) − 2G′(z).

It follows that gNps is (at least) of third order if and only if

G(z) = 1,    (5a)
G′(z) = 0.5f″(z)/f′(z).    (5b)

Since gNps = x + fuNps with uNps = −G/f′, these conditions are as a matter of fact redressed (4a) and (4b). Let L be the so-called logarithmic degree of convexity: L = ff″/f′². Differentiation gives

L′ = f″/f′ + ff‴/f′² − 2ff″²/f′³,  L′(z) = f″(z)/f′(z).
According to (5), a gNps with G = H(L) can achieve third-order convergence only if H(0) = 1 and H′(0) = 0.5, because G(z) = H(0) and G′(z) = H′(0)L′(z).

2.1. Previous works

Many solvers in the literature are in fact of gNps and hence of gu type. Their recent abundance is evidence that these two types and the choice of G in partial substitution are key research topics. In an attempt to translate Halley's own derivation into modern mathematics, Gander [5] put many well-known third-order methods in such a gNps form, namely,

gG = x − Gf/f′,  G = H(L),  L = ff″/f′².

The first gG method was Halley's solver¹

gH = x − H(L)f/f′,  H(L) = (1 − L/2)^(−1) = 1 + L/2 + L²/4 + ···.

The second was Euler's method²

gE = x − H(L)f/f′,  H(L) = 2(1 + √(1 − 2L))^(−1) = 1 + L/2 + L²/2 + ···.

There was a Hansen–Patrick family

gH.P = x − H(L)f/f′,  H(L) = (b + 1)(b + √(1 − (b + 1)L))^(−1) = 1 + L/2 + ((b + 3)/8)L² + ···.

Then came Ostrowski's square root iteration

gO = x − H(L)f/f′,  H(L) = (1 − L)^(−0.5) = 1 + L/2 + (3/8)L² + ···,

and quadratic inverse interpolation

gqii = x − H(L)f/f′,  H(L) = 1 + L/2.

Regarding them as quadratic curves tangent to f at the current iterate, Sharma [13] collected in a one-parameter family several solvers including gN, gE, gH, super-Halley and Chebychev's methods. The latter two were, respectively,

gs-H = x − H(L)f/f′,  H(L) = (1 − L/2)(1 − L)^(−1) = 1 + L/2 + L²/2 + ···,
gC = x − H(L)f/f′,  H(L) = 1 + L/2.

The family was third order with the single exception of gN.

Remark 2. Notice that gE ≡ gs-H and gC ≡ gqii.
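All of the gG solvers above differ only in the choice of H(L), so a single routine covers them. The sketch below (ours; the function names and the benchmark f(x) = x³ − 1000 are our assumptions) demonstrates the Halley and Chebychev choices of H:

```python
# Gander's form g_G = x - H(L) f/f', L = f f''/f'^2, demonstrated with
# H(L) for Halley's and Chebychev's methods on f(x) = x^3 - 1000 (z = 10).

def g_gander(x, H):
    f, fp, fpp = x ** 3 - 1000, 3 * x ** 2, 6 * x
    L = f * fpp / fp ** 2
    return x - H(L) * f / fp

H_halley = lambda L: 1.0 / (1.0 - 0.5 * L)   # (1 - L/2)^(-1)
H_cheb   = lambda L: 1.0 + 0.5 * L           # Chebychev / qii

def solve(H, x0, n=6):
    x = x0
    for _ in range(n):
        x = g_gander(x, H)
    return x

root_halley = solve(H_halley, 15.0)
root_cheb = solve(H_cheb, 15.0)
```

Swapping in the Euler, Hansen–Patrick or Ostrowski H(L) requires changing only the one-line gain function.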
1 Deiters [2] called it “Kepler’s method”. 2 Pakdemirli and Boyacı [12] reported this as “Householder’s method”.
Hernandez and Romero [7] created an nth-order method (n ≥ 2) for approximating square roots:

gH.R = x − H(L)f/f′,  H(L) = (1 − √(1 − 2L))/L = Σ_{j=0}^{n−2} (1/2 choose j+1)(−1)^j 2^(j+1) L^j.

They asserted that for n = 2 and 3 the method was, respectively, equivalent to gN and gC.

An iterative process of type gu can be accelerated by a scheme due to Nedzhibov [11] as follows:

gNd = x + fu · f^(1/r)/(f^(1/r) − f^(1/r)(gu)).

This acceleration raises the convergence order by at least 1. So gNr pushed in this fashion, or

gNdNr = x − r(f/f′) · f^(1/r)/(f^(1/r) − f^(1/r)(gNr)),
is a third-order gNps.

A modern approach to boost convergence of gN is to switch from f to a product of f times fs, where the latter is a suitable secondary function. For instance, Gerlach [6] reported third-order convergence using fs = C/√f′, where C is a real constant. Hernandez and Romero [7] noted that, for C = 1, this rendered gH. According to Traub [14], "gH is one of the most frequently rediscovered methods in the literature"!

Koçak [8] devised a remarkable partial substitution to transform g into a gm* that is at least of third order:

gm* = x + G(g − x),  G = 1/(1 − 0.5(g′ + g′(z))).

This transformation converts gN to gH, that is, gmN* ≡ gH. Boosted by the same technique, gNr, which is second order, yielded the following solver:

gmNr* = x − 2rff′/((r + 1)f′² − rff″).
2.2. A new gu class with third-order convergence

All third-order gu methods must satisfy (4a) and (4b). This work has added a novel such class which presently contains three branches (gKia, gKai and gKe), each based on a different solution. Section 2.2.4 is assigned to their asymptotic error constants.

2.2.1. The invert-and-average subclass gKia
This subclass harnesses the average of two derivative reciprocals in u as follows:

gKia = x + fu,  u = −0.5(1/f′ + a),  a = 1/f′(xs).

The difference between the subclass members arises from the choice of the secondary first derivative f′(xs). Remember that gN employs uN = −1/f′. For the forerunner gKiaz, a is a simple constant equal to the reciprocal of f′(z):

gKiaz = x + fu,  u = −0.5(1/f′ + a),  a = 1/f′(z).

The drawback of gKiaz is that f′(z) may be inestimable until the target z is reached! A compromise is to replace this unknown quantity with the value of the first derivative at a point xX that hopefully tends to z. In gKiaX, xX comes from an auxiliary iterator gX which is at least second order. Formally,

gKiaX = x + fu,  u = −0.5(1/f′ + a),  a = 1/f′(xX),  xX = gX,  gX(z) = z,  gX′(z) = 0.
Current members of gKiaX type are

gKiaC = x + fu,  u = −0.5(1/f′ + a),  a = 1/f′(xX),  xX = gC,
gKiaH = x + fu,  u = −0.5(1/f′ + a),  a = 1/f′(xX),  xX = gH,
gKiaN = x + fu,  u = −0.5(1/f′ + a),  a = 1/f′(xX),  xX = gN.
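As an illustration (our sketch, not the paper's code; benchmark and names are ours), gKiaN amounts to one Newton sub-step followed by the invert-and-average weight:

```python
# g_KiaN = x + f u with u = -0.5 (1/f'(x) + 1/f'(x_X)), x_X a Newton point.
# Benchmark: f(x) = x^2 - 100, positive root z = 10.

def f(x):  return x * x - 100
def fp(x): return 2 * x

def g_kian(x):
    xX = x - f(x) / fp(x)                  # auxiliary point from g_N
    u = -0.5 * (1 / fp(x) + 1 / fp(xX))    # invert, then average
    return x + f(x) * u

x = 15.0
for _ in range(5):
    x = g_kian(x)
```

Replacing the internal Newton step with gC or gH yields gKiaC or gKiaH with no other change.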
2.2.2. The average-and-invert subclass gKai
This subclass harnesses the reciprocal of the average of two derivatives in u as follows:

gKai = x + fu,  u = −1/(0.5(f′ + d)),  d = f′(xs).

The difference between subclass members arises from the choice of the secondary first derivative f′(xs). For the forerunner gKaiz, d is a constant equal to f′(z):

gKaiz = x + fu,  u = −1/(0.5(f′ + d)),  d = f′(z).

The drawback here is that f′(z) may be inestimable until the target z is reached! Other members accrue from replacement of this unknown quantity with the value of the first derivative at a point xX that hopefully tends to z. xX comes from an auxiliary iterator gX which is at least second order:

gKaiX = x + fu,  u = −1/(0.5(f′ + d)),  d = f′(xX),  xX = gX,  gX(z) = z,  gX′(z) = 0.
Members of gKaiX type are

gKaiC = x + fu,  u = −1/(0.5(f′ + d)),  d = f′(xX),  xX = gC,
gKaiH = x + fu,  u = −1/(0.5(f′ + d)),  d = f′(xX),  xX = gH,
gKaiN = x + fu,  u = −1/(0.5(f′ + d)),  d = f′(xX),  xX = gN.
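The gKaiN member can be sketched as follows (our illustration; the cube-root benchmark of Section 4.1.2 is our choice):

```python
# g_KaiN = x + f u with u = -1/(0.5 (f'(x) + f'(x_X))), x_X a Newton point.
# Benchmark: f(x) = x^3 - 1000, z = 10.

def f(x):  return x ** 3 - 1000
def fp(x): return 3 * x ** 2

def g_kain(x):
    xX = x - f(x) / fp(x)                    # auxiliary point from g_N
    d = fp(xX)                               # secondary first derivative
    return x - f(x) / (0.5 * (fp(x) + d))    # average, then invert

x = 12.0
for _ in range(5):
    x = g_kain(x)
```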
It is noteworthy that gKaiN is identical with the "variant of Newton's method" (VNM) derived by Weerakoon and Fernando [15] utilizing a Newton–Cotes quadrature formula, namely the trapezoidal rule.

2.2.3. The invert-and-exponentiate subclass gKe
The u of this subclass incorporates as a factor an exponentiated ratio of some f″ and f′ values:

gKe = x + fu,  u = −E/f′,  E = e^(cf),  c = 0.5f₂/f₁²,

where f₁ and f₂ stand for estimates of the first and second derivatives (each member below specifies its own choice). gKe is a gNps with a variable gain: G = E = e^(cf) = 1 + (cf) + (1/2!)(cf)² + ···, G(z) = E(z) = 1. This expansion of E may be beneficially truncated in the neighbourhood of z where |cf| is sufficiently small. For the forerunner gKez, c is a constant dependent on the values of f″ and f′ at z:

gKez = x + fu,  u = −E/f′,  E = e^(cf),  c = 0.5f″(z)/f′(z)².

Various rescue substitutions are envisaged for cases where f″(z) and f′(z) are unavailable till z is known. This is how other subclass members come about. gKea uses analytical f″ and f′ in c:

gKea = x + fu,  u = −E/f′,  E = e^(cf),  c = 0.5f″/f′².

Since cf = (0.5f″/f′²)f = 0.5L here,

E = e^(cf) = e^(0.5L) = 1 + (1/2)L + (1/2!)(1/2)²L² + (1/3!)(1/2)³L³ + ··· ≈ H(L).

Thus, gKea appears to be akin to gH, gC ≡ gqii, gE ≡ gs-H, gH.P, gO and other methods in Gander's third-order gNps form gG = x − Gf/f′, G = H(L), L = ff″/f′².
Table 1
A list of gu‴(z) results for the three subclasses^a

Subclass gKia:
  gKiaz:         f‴(z)/(2f′(z)) − 3f″²(z)/(2f′²(z))
  gKiaC, gKiaH:  f‴(z)/(2f′(z)) − 3f″²(z)/(2f′²(z))
  gKiaN:         f‴(z)/(2f′(z))

Subclass gKai:
  gKaiz:         f‴(z)/(2f′(z))
  gKaiC, gKaiH:  f‴(z)/(2f′(z))
  gKaiN:         f‴(z)/(2f′(z)) + 3f″²(z)/(2f′²(z))

Subclass gKe:
  gKez:          2f‴(z)/f′(z) − 15f″²(z)/(4f′²(z))
  gKea:          −f‴(z)/f′(z) + 9f″²(z)/(4f′²(z))

^a To obtain the asymptotic error constant, divide gu‴(z) by 3!.
A further departure from gKez is to approximate f″ in c numerically using an auxiliary point:

gKenX = x + fu,  u = −E/f′,  E = e^(cf),  c = 0.5D2/f′²,  D2 = (f′(xX) − f′)/(xX − x),  xX = gX.

In gKenN, for instance, help comes from gN:

gKenN = x + fu,  u = −E/f′,  E = e^(cf),  c = 0.5D2/f′²,  D2 = (f′(xX) − f′)/(xX − x),  xX = gN.

The extra variation in gKenN2 is the replacement of f′ in c by f′(xX):

gKenN2 = x + fu,  u = −E/f′,  E = e^(cf),  c = 0.5D2/f′(xX)²,  D2 = (f′(xX) − f′)/(xX − x),  xX = gN.

gKenNav differs from gKenN and gKenN2 in replacing f′ in c by 0.5(f′ + f′(xX)):

gKenNav = x + fu,  u = −E/f′,  E = e^(cf),  c = 0.5D2/f′av²,  f′av = 0.5(f′ + f′(xX)),  D2 = (f′(xX) − f′)/(xX − x),  xX = gN.
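A sketch of gKenN (ours, not the paper's implementation; the guard against f = 0 and the cf limiter anticipate Section 4.1.4):

```python
import math

# g_KenN = x - e^{cf} f/f' with c = 0.5 D2/f'^2 and D2 a divided-difference
# estimate of f'' from an internal Newton point. Benchmark: x^3 - 1000.

def f(x):  return x ** 3 - 1000
def fp(x): return 3 * x ** 2

def g_kenn(x):
    fx, fpx = f(x), fp(x)
    if fx == 0.0:                        # already at the root
        return x
    xX = x - fx / fpx                    # auxiliary point from g_N
    D2 = (fp(xX) - fpx) / (xX - x)       # numerical second derivative
    c = 0.5 * D2 / fpx ** 2
    cf = min(0.5, max(-2.0, c * fx))     # limiter: -2 <= cf <= 0.5
    return x - math.exp(cf) * fx / fpx

x = 12.0
for _ in range(6):
    x = g_kenn(x)
```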
2.2.4. Calculation of asymptotic error constants
The following procedure was adopted to obtain these values:

(a) Find u(z), u′(z), and u″(z) for the selected member.
(b) Insert these into (3c): gu‴(z) = f‴(z)u(z) + 3f″(z)u′(z) + 3f′(z)u″(z).
(c) Finally, employ (2) to determine the constant required:

lim_{k→∞} (z − xk+1)/(z − xk)³ = lim_{k→∞} εk+1/εk³ = (1/3!) gu‴(z).    (2)

Remember that the class commonly renders the u(z) and u′(z), respectively, defined in (4a) and (4b). Members may differ, however, in u″(z) and hence in gu‴(z), as displayed in Table 1. Calculation details are presented further down. Deduction of the error constant from (2) is a straightforward step.
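The procedure can be confirmed numerically. The sketch below (ours) checks the Table 1 entry for gKaiN, gu‴(z) = f‴(z)/(2f′(z)) + 3f″²(z)/(2f′²(z)), against an observed error ratio on f(x) = x³ − 1000:

```python
# Empirical check of eps_{k+1}/eps_k^3 -> g_u'''(z)/3! for g_KaiN.

def f(x):  return x ** 3 - 1000
def fp(x): return 3 * x ** 2

def g_kain(x):
    xX = x - f(x) / fp(x)
    return x - f(x) / (0.5 * (fp(x) + fp(xX)))

z, fp_z, fpp_z, fppp_z = 10.0, 300.0, 60.0, 6.0
C = (fppp_z / (2 * fp_z) + 3 * fpp_z ** 2 / (2 * fp_z ** 2)) / 6  # ~0.01167

x1 = g_kain(12.0)
x2 = g_kain(x1)
ratio = abs(x2 - z) / abs(x1 - z) ** 3   # should approximate C
```

The match is only asymptotic, so the agreement is to within a modest relative error at this distance from z.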
The invert-and-average subclass gKia. To start with, consider gKiaz:

u = −0.5(1/f′ + 1/f′(z)),  u′ = 0.5f″/f′²,  u″ = 0.5(f‴f′² − 2f′f″²)/f′⁴,

u(z) = −1/f′(z),  u′(z) = 0.5f″(z)/f′²(z),  u″(z) = 0.5(f‴(z)/f′²(z) − 2f″²(z)/f′³(z)),

gu‴(z) = f‴(z)u(z) + 3f″(z)u′(z) + 3f′(z)u″(z)
       = −f‴(z)/f′(z) + 1.5f″²(z)/f′²(z) + 1.5f‴(z)/f′(z) − 3f″²(z)/f′²(z)
       = 0.5(f‴(z)/f′(z) − 3f″²(z)/f′²(z)).

Now consider gKiaX, where u = −0.5(1/f′ + 1/f′(gX)) and gX′(z) = 0. Since
u′ = 0.5(f″/f′² + f″(gX)gX′/f′²(gX)),
u″ = 0.5((f‴f′² − 2f′f″²)/f′⁴ + ((f‴(gX)gX′² + f″(gX)gX″)f′²(gX) − 2f″²(gX)f′(gX)gX′²)/f′⁴(gX)),

it follows that

u(z) = −1/f′(z),  u′(z) = 0.5f″(z)/f′²(z),
u″(z) = 0.5((f‴(z)f′²(z) − 2f′(z)f″²(z))/f′⁴(z) + f″(z)f′²(z)gX″(z)/f′⁴(z)).    (6)
For gKiaN, gX = gN = x − f/f′,

gN′ = ff″/f′² = L,  gN′(z) = L(z) = 0,
gN″ = L′ = ((f′f″ + ff‴)f′² − 2ff″²f′)/f′⁴,  gN″(z) = L′(z) = f″(z)/f′(z).

Insertion of gN″(z) = L′(z) = f″(z)/f′(z) into (6) yields, after simplification,

u″(z) = 0.5(f‴(z)/f′²(z) − f″²(z)/f′³(z)).
Therefore,

gu‴(z) = f‴(z)u(z) + 3f″(z)u′(z) + 3f′(z)u″(z)
       = −f‴(z)/f′(z) + 1.5f″²(z)/f′²(z) + 1.5f‴(z)/f′(z) − 1.5f″²(z)/f′²(z)
       = 0.5f‴(z)/f′(z).

On the other hand, for gKiaC, gKiaH, and combination with any other third-order auxiliary (gX′(z) = gX″(z) = 0),

u″(z) = 0.5(f‴(z)/f′²(z) − 2f″²(z)/f′³(z)),

gu‴(z) = f‴(z)u(z) + 3f″(z)u′(z) + 3f′(z)u″(z)
       = −f‴(z)/f′(z) + 1.5f″²(z)/f′²(z) + 1.5f‴(z)/f′(z) − 3f″²(z)/f′²(z)
       = 0.5(f‴(z)/f′(z) − 3f″²(z)/f′²(z)).
The average-and-invert subclass gKai. First, consider gKaiz:

u = −1/(0.5(f′ + f′(z))),  u′ = 2f″/(f′ + f′(z))²,  u″ = 2(f‴(f′ + f′(z))² − 2f″²(f′ + f′(z)))/(f′ + f′(z))⁴,

u(z) = −1/f′(z),  u′(z) = 0.5f″(z)/f′²(z),  u″(z) = 0.5(f‴(z)/f′²(z) − f″²(z)/f′³(z)),

gu‴(z) = f‴(z)u(z) + 3f″(z)u′(z) + 3f′(z)u″(z)
       = −f‴(z)/f′(z) + 1.5f″²(z)/f′²(z) + 1.5f‴(z)/f′(z) − 1.5f″²(z)/f′²(z)
       = 0.5f‴(z)/f′(z).

Now consider gKaiX, where u = −1/(0.5(f′ + f′(gX))) and gX′(z) = 0. Since
u′ = 2(f″ + f″(gX)gX′)/(f′ + f′(gX))²,
u″ = 2((f‴ + f‴(gX)gX′² + f″(gX)gX″)(f′ + f′(gX))² − 2(f″ + f″(gX)gX′)²(f′ + f′(gX)))/(f′ + f′(gX))⁴,

it follows that

u(z) = −1/f′(z),  u′(z) = 0.5f″(z)/f′²(z),
u″(z) = 2((f‴(z) + f″(z)gX″(z))(2f′(z))² − 2f″²(z)(2f′(z)))/(2f′(z))⁴
      = f‴(z)/(2f′²(z)) + f″(z)gX″(z)/(2f′²(z)) − f″²(z)/(2f′³(z)).    (7)
For gKaiN, insertion of gX″(z) = gN″(z) = f″(z)/f′(z) into (7) yields, after simplification,

u″(z) = 0.5f‴(z)/f′²(z).

Therefore,

gu‴(z) = f‴(z)u(z) + 3f″(z)u′(z) + 3f′(z)u″(z)
       = −f‴(z)/f′(z) + 1.5f″²(z)/f′²(z) + 1.5f‴(z)/f′(z)
       = 0.5f‴(z)/f′(z) + 1.5f″²(z)/f′²(z).

This result is congruent with the case reported for VNM in [15]. On the other hand, for gKaiC, gKaiH, and combination with any other third-order auxiliary (gX′(z) = gX″(z) = 0),

u″(z) = 0.5(f‴(z)/f′²(z) − f″²(z)/f′³(z)),

gu‴(z) = f‴(z)u(z) + 3f″(z)u′(z) + 3f′(z)u″(z)
       = −f‴(z)/f′(z) + 1.5f″²(z)/f′²(z) + 1.5f‴(z)/f′(z) − 1.5f″²(z)/f′²(z)
       = 0.5f‴(z)/f′(z).

The exponentiation subclass gKe. Consider gKez first, where

u = −E/f′,  E = e^(cf),  c = 0.5f″(z)/f′(z)²,  E(z) = 1,  E′ = cf′E.
It follows that

u′ = Ef″/f′² − E′/f′ = (f″/f′² − c)E,
u″ = ((f‴f′² − 2f′f″²)/f′⁴)E + (f″/f′² − c)cf′E = ((f‴/f′² − 2f″²/f′³) + (f″/f′² − c)cf′)E,

and hence

u(z) = −1/f′(z),  u′(z) = 0.5f″(z)/f′²(z),
u″(z) = f‴(z)/f′²(z) − 2f″²(z)/f′³(z) + 0.25f″²(z)/f′³(z) = f‴(z)/f′²(z) − (7/4)f″²(z)/f′³(z).

Thus,

gu‴(z) = f‴(z)u(z) + 3f″(z)u′(z) + 3f′(z)u″(z)
       = −f‴(z)/f′(z) + 1.5f″²(z)/f′²(z) + 3f‴(z)/f′(z) − (21/4)f″²(z)/f′²(z)
       = 2f‴(z)/f′(z) − (15/4)f″²(z)/f′²(z).
Consider gKea now, where u = −E/f′, E = e^(cf), c = 0.5f″/f′², E(z) = 1. It follows that

E′ = (c′f + cf′)E,  c′ = 0.5(f‴f′² − 2f′f″²)/f′⁴,  E′(z) = 0.5f″(z)/f′(z),

u′ = Ef″/f′² − E′/f′ = 0.5(f″/f′² − ff‴/f′³ + 2ff″²/f′⁴)E.

Differentiating once more yields u″ = (·)E + (·)E′ with coefficients cf1 and cf2 on terms that disappear with f as x goes to z. Then u(z) = −1/f′(z), u′(z) = 0.5f″(z)/f′²(z), and

u″(z) = 0.5(f‴(z)/f′²(z) − 2f″²(z)/f′³(z)) − 0.5(f‴(z)/f′²(z) − 2f″²(z)/f′³(z)) + 0.5(f″(z)/f′²(z)) · 0.5(f″(z)/f′(z))
      = 0.25f″²(z)/f′³(z).

Thus,

gu‴(z) = f‴(z)u(z) + 3f″(z)u′(z) + 3f′(z)u″(z)
       = −f‴(z)/f′(z) + 1.5f″²(z)/f′²(z) + 0.75f″²(z)/f′²(z)
       = −f‴(z)/f′(z) + (9/4)f″²(z)/f′²(z).
2.2.5. Complexity and cost of calculations per iteration
The forerunners gKiaz and gKaiz are handicapped by the special information f′(z) they require, which may or may not be available. The forerunner gKez wants f″(z)/f′(z)²! The class offers methods successfully approximating these three quantities. Now gN is simple, needing only one f and one f′ at each iteration, whereas both gC and gH require one f″ as an extra. As for the new methods, the following facts must be borne in mind:

• gKiaz and gKaiz have one averaging and one inversion to do in addition to one f and one f′.
• gKiaN and gKaiN implement an internal gN to supply xX and calculate f′ again prior to averaging and inversion.
• gKiaC and gKaiC employ gC as an auxiliary to supply xX, where f′ is re-estimated prior to averaging and inversion.
• gKiaH and gKaiH employ gH as an auxiliary to supply xX, where f′ is re-estimated prior to averaging and inversion.
• gKez comprises one f, one f′, and one exponentiation calculation.
• gKea contains one f, one f′, one f″, and one exponentiation estimation.
• gKenN and gKenNav involve an internal gN, one f′, and one exponentiation calculation.
2.2.6. Crosslinking of the subclasses through u
Given u, a = −2u − 1/f′, d = −(2/u + f′), E = −uf′ and c = (ln E)/f.
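These crosslink formulas translate directly into a small helper (our sketch; the function and argument names are ours):

```python
import math

def crosslink(u, f, fp):
    """Given a weight u at a point (with f and f' there), recover the
    equivalent subclass parameters a, d, E and c."""
    a = -2 * u - 1 / fp
    d = -(2 / u + fp)
    E = -u * fp
    c = math.log(E) / f   # assumes f != 0 at the evaluation point
    return a, d, E, c

# Example: Newton's weight u = -1/f' yields a = 1/f', d = f', E = 1, c = 0
a, d, E, c = crosslink(u=-1.0 / 432, f=728.0, fp=432.0)
```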
2.2.7. Class parameters for non-members or visitors
A non-member or outsider, say gV, may visit the class with u = (gV − x)/f and be equipped with the pertinent subclass parameter(s) as in Section 2.2.6. The original convergence order of the visitor is preserved. It is at least third order if its fictitious equivalent satisfies stipulations (4a) and (4b). Suppose gN is the visitor, for instance: u = −1/f′, a = 1/f′, E = 1, and c = 0. It obviously remains second order because its u′(z) = f″(z)/f′²(z) lacks the factor 0.5 and thus fails condition (4b).

3. A one-step solver g*
This is the simplest "solver": it leaps from any x directly to z, annihilating the error at once: g* = z. On the xg-plane, this curve is a horizontal line passing through z. It is, however, usable only a posteriori because z appears on the right. Its convergence order is the highest possible, for all its derivatives are zero. As shown later, it is a promising tool for a posteriori analysis, troubleshooting, and comparison. When g* visits the class, u* = (gV − x)/f = (g* − x)/f = (z − x)/f = −ε/f.

4. Numerical tests

4.1. Initial tests
The new methods gKaiN, gKiaC, gKiaH, gKiaN, gKiaz, gKenN, gKenNav, gKea, and gKez were initially tested and compared with the well-known old techniques gC ≡ gqii, gH, and gN. Selected from the literature were two frequently encountered, simple but interesting benchmarks: estimation of the square and cube root of a positive number N. The forerunner gKez is usable in both cases because c = 0.5f″(z)/f′(z)² was luckily available a priori. Remember that gKea appears to be related to gH = x −
f/(f′(1 − 0.5L)) = x − ff′/(f′² − 0.5ff″),
gC = x − (f/f′)(1 + 0.5L) = x − f/f′ − f″f²/(2f′³).
4.1.1. The square root problem
Required here is the square root of a positive number N. The pertinent f, f′, and f″ definitions are f = x² − N = (x − √N)(x + √N), f′ = 2x, and f″ = 2. Since c is luckily known a priori in this problem, gKez is readily applicable:

c = 0.5f″(z)/f′(z)² = [2/(2x)²]_{x=z} = 1/(4z²) = 0.25/N.

On the other hand, gKiaz has to harness the insider knowledge that a = 1/f′(z) = 1/(2z) = 0.5/z. The test choice N = 100 means that the roots are z1 = 10 and z2 = −10. Here, the focus is on the positive root, and this renders f′(z) = 20, 1/f′(z) = 0.05, a = 1/f′(z) = 0.05, and c = 0.25/N = 0.0025.
4.1.2. The cube root problem
The target is the cube root of a positive number N. The equations are f = x³ − N = (x − N^(1/3))(x² + xN^(1/3) + N^(2/3)), f′ = 3x² and f″ = 6x. As in the previous case, gKez is again readily applicable because

c = 0.5f″(z)/f′(z)² = [0.5 · 6x/(3x²)²]_{x=z} = 1/(3z³) = 1/(3N)

is known a priori. The test choice N = 1000 means that the only real root is z = 10 and c = 1/(3N) = 0.3333 × 10⁻³. On the other hand, gKiaz employs insider knowledge once again: f′(z) = 300 and a = 1/f′(z) = 0.3333 × 10⁻².
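As a quick exercise of the square-root benchmark of Section 4.1.1 (our sketch, not the paper's test program), the forerunner gKez with its a priori c = 0.25/N converges in a handful of iterations:

```python
import math

# g_Kez = x - e^{cf} f/f' on f(x) = x^2 - N, with c = 0.25/N known a priori.

N = 100.0
c = 0.25 / N                 # 0.0025, as in Section 4.1.1

def g_kez(x):
    f, fp = x * x - N, 2 * x
    return x - math.exp(c * f) * f / fp

x = 15.0
for _ in range(6):
    x = g_kez(x)
```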
Table 2
Analysis using g*

Square root case^a

x      u*        a*        E*       c*f       c*       0.5f″/f′²
1     −0.0909   −0.3182   0.1818   −1.7047   0.0172   0.2500
3     −0.0769   −0.0128   0.4615   −0.7732   0.0085   0.0278
5     −0.0667    0.0333   0.6667   −0.4055   0.0054   0.0100
7     −0.0588    0.0462   0.8235   −0.1942   0.0038   0.0051
9     −0.0526    0.0497   0.9474   −0.0541   0.0028   0.0031
11    −0.0476    0.0498   1.0476    0.0465   0.0022   0.0021
13    −0.0435    0.0485   1.1304    0.1226   0.0018   0.0015
15    −0.0400    0.0467   1.2000    0.1823   0.0015   0.0011
20    −0.0333    0.0417   1.3333    0.2877   0.0010   0.0006
30    −0.0250    0.0333   1.5000    0.4055   0.0005   0.0003
50    −0.0167    0.0233   1.6667    0.5108   0.0002   0.0001
75    −0.0118    0.0169   1.7647    0.5680   0.0001   0.0000
100   −0.0091    0.0132   1.8182    0.5978   0.0001   0.0000
250   −0.0038    0.0057   1.9231    0.6539   0.0000   0.0000
500   −0.0020    0.0029   1.9608    0.6733   0.0000   0.0000

Cube root case^b

x      u*        a*        E*       c*f       c*       0.5f″/f′²
1     −0.0090   −0.3153   0.0270   −3.6109   0.0036   0.3333
3     −0.0072   −0.0226   0.1942   −1.6386   0.0017   0.0123
5     −0.0057   −0.0019   0.4286   −0.8473   0.0010   0.0027
7     −0.0046    0.0023   0.6712   −0.3986   0.0006   0.0010
9     −0.0037    0.0033   0.8967   −0.1091   0.0004   0.0005
11    −0.0030    0.0033   1.0967    0.0923   0.0003   0.0003
13    −0.0025    0.0030   1.2707    0.2395   0.0002   0.0002
15    −0.0021    0.0027   1.4211    0.3514   0.0001   0.0001
20    −0.0014    0.0020   1.7143    0.5390   0.0001   0.0000
30    −0.0008    0.0012   2.0769    0.7309   0.0000   0.0000
50    −0.0003    0.0005   2.4194    0.8835   0.0000   0.0000
75    −0.0002    0.0002   2.6062    0.9579   0.0000   0.0000
100   −0.0001    0.0001   2.7027    0.9943   0.0000   0.0000
250   −0.0000    0.0000   2.8802    1.0579   0.0000   0.0000
500   −0.0000    0.0000   2.9400    1.0784   0.0000   0.0000

^a a = 1/f′(z) = 0.05 for gKiaz and c = 0.25/N = 0.0025 for gKez.
^b a = 1/f′(z) = 3.333 × 10⁻³ for gKiaz and c = 1/(3N) = 0.3333 × 10⁻³ for gKez.
4.1.3. Analysis using g*
Now, the one-step solver g* = z = x + fu* renders u* = (z − x)/f = −ε/f, a* = −2u* − 1/f′, E* = −u*f′ and c* = (ln E*)/f. The number of iterations that another solver needs to reach z depends on how far it is from these starred quantities; just one iteration should suffice at points where it intersects this ideal curve. As regards matching these changing ideals over a wide range, the forerunners (with constant parameters) are obviously at a disadvantage compared with members with varying parameters. Table 2 presents these starred values on the interval (1, 500) for the two test cases, together with the constants used in gKiaz, gKez, and gKea. Both cases display negative a* values at the low end, but f′ is positive throughout the interval of interest! Recall that a is a correction to 1/f′. So, subclass gKia may rather waste one or two iterations in this region. Here, temporary use of zero for a should help to some extent. gKiaz is reasonably close to a* when 5 ≤ x ≤ 30 in the square root case and when 7 ≤ x ≤ 20 in the cube root case. gKiaz and gKez intersect g* only at x = z = 10! In Section 2.2.3, it was mentioned that gKe was a gNps where G = E. So, E* is the ideal variable gain for gN. Table 2 indicates that the risk of a gN correction lessens as x increases.

4.1.4. Single iteration runs
Initially each method was allowed only one iteration in order to see trouble spots and take measures before initiating complete runs. Table 3 depicts the g values in the two cases, the guessed x varying between 1 and 50. Remember that the target is z = 10 in either case. Throughout, only gH and gKaiN consistently point towards z without overshooting. At the low end gC is negative, and gKenN and gKea are stuck at a false fixed-point (E → 0, u → 0). These troubles disappear a little later. Further on, with a view to being close to z, all new methods first beat second-order gN, secondly gC, and finally gH.
Almost all the time gKiaN is the best; it is slightly defeated by gKiaC at the low end and by gKenNav at the other. High values of x make gKiaz and gKez negative. Except for these two, it seems better to start with initial guesses above z. These initial tests led to the incorporation of two interventions into the iteration process. The first was to arbitrarily limit the product of c and f in the gKe subclass so that −2 ≤ cf ≤ 0.5. This eliminates false fixed-points and also excessive actions (E → ∞, u → ∞). The second was to chop off three quarters of x in the case of gKiaz instead of implementing a suggestion large enough to swing it below zero.

In the square root case, f″ is equal to D2, the numerical approximation of the second derivative, since D2 = (f′(xX) − f′)/(xX − x) = 2(xX − x)/(xX − x) = 2, irrespective of the auxiliary gX. This explains why gKenN and
Table 3 Single iteration performance of various methods x
gC
gH
gN
Square root case 1.0000 −1175 2.5000 −49 5.0000 6.8750 9.4000 9.9988 9.7000 9.9999 9.9000 10.0000 9.9500 10.0000 10.0500 10.0000 10.1000 10.0000 10.3000 10.0001 10.6000 10.0009 25.0000 12.2950 50.0000 20.2400
2.9223 6.4474 9.2857 9.9994 9.9999 10.0000 10.0000 10.0000 10.0000 10.0001 10.0005 11.7089 18.4211
Cube root case 1.0000 < − 110000 2.5000 −1048 5.0000 −11 9.4000 9.9957 9.7000 9.9995 9.9000 10.0000 9.9500 10.0000 10.0500 10.0000 10.1000 10.0000 10.3000 10.0004 10.6000 10.0030 25.0000 14.7664 50.0000 27.9996
1.9970 334.00 4.8864 55.0000 8.5000 16.6667 9.9984 10.0391 9.9998 10.0094 10.0000 10.0010 10.0000 10.0003 10.0000 10.0002 10.0000 10.0010 10.0002 10.0087 10.0013 10.0333 13.6628 17.2000 25.2988 33.4667
50.5000 21.2500 12.5000 10.0191 10.0046 10.0005 10.0001 10.0001 10.0005 10.0044 10.0170 14.5000 26.0000
gKaiN
gKiaz
gKenN
26.2401 12.9779 10.2500 10.0000 10.0000 10.0000 10.0000 10.0000 10.0000 10.0000 10.0000 10.6983 14.9231
28.2250 14.2188 10.6250 10.0006 10.0001 10.0000 10.0000 10.0000 10.0000 9.9999 9.9995 6.6250 −22
1.0000 2.9410 8.5427 9.9991 9.9999 10.0000 10.0000 10.0000 10.0000 10.0001 10.0007 12.0464 19.4900
1.0060 167.50 209.25 167.50 2.7165 28.7501 35.6213 28.8042 6.9266 12.1422 12.8518 11.3583 9.9971 10.0022 10.0020 9.9997 9.9997 10.0002 10.0002 10.0000 10.0000 10.0000 10.0000 10.0000 10.0000 10.0000 10.0000 10.0000 10.0000 10.0000 10.0000 10.0000 10.0000 10.0000 10.0000 10.0000 10.0003 9.9998 9.9998 10.0000 10.0022 9.9985 9.9984 10.0004 14.4119 9.9212 8.0423 12.8607 27.1640 15.3721 9.4432 23.2812
169.17 30.3906 12.2917 10.0019 10.0002 10.0000 10.0000 10.0000 10.0000 9.9998 9.99983 −3 −165
2.9223 6.4474 9.2857 9.9994 9.9999 10.0000 10.0000 10.0000 10.0000 10.0001 10.0005 11.7089 18.4211
gKiaC 25.7289 11.3973 11.4773 10.0006 10.0001 10.0000 10.0000 10.0000 10.0000 9.9999 9.9995 9.0749 8.3557
gKiaH 34.2193 15.5102 10.7692 10.0006 10.0001 10.0000 10.0000 10.0000 10.0000 9.9999 9.9995 8.5405 5.4286
gKiaN
gKenNav
gKea
gKez
48.6864 18.3789 10.8709 10.0003 10.0000 10.0000 10.0000 10.0000 10.0000 10.0000 9.9998 10.2998 13.6365
1.0000 2.9410 8.5427 9.9991 9.9999 10.0000 10.0000 10.0000 10.0000 10.0001 10.0007 12.0464 19.4900
39.6471 17.3325 11.2177 10.0014 10.0002 10.0000 10.0000 10.0000 10.0000 9.9998 9.9987 −14 −9632
1.0000 333.99 2.5000 54.7848 5.0744 15.1642 9.9957 10.0011 9.9995 10.0001 10.0000 10.0000 10.0000 10.0000 10.0000 10.0000 10.0000 10.0000 10.0004 9.9999 10.0031 9.9995 14.8502 12.3285 28.2117 22.0083
1.0000 2.5000 6.1313 9.9971 9.9997 10.0000 10.0000 10.0000 10.0000 10.0003 10.0022 14.3440 26.9873
239.68 40.3143 13.7152 10.0040 10.0005 10.0000 10.0000 10.0000 10.0000 9.9995 9.9961 −997 < − 1019
gKea columns are identical. Incidentally, gH and gKaiN are also identical here. These pairs differ in the cube root case but remain close to each other.
4.1.5. Iterating to convergence
Starting from a specified point, each method was now allowed to iterate by direct substitution, that is, by xk+1 = g(xk), until |f| < 10^−4 or k > 10. Tables 4 and 5, respectively, show the results in the square root and cube root cases. Of the old methods, gC makes a bad start at x = 1 and moves x far into the negative region. It takes 9 (30) iterations to converge to the negative (positive) root in the square (cube) root case. It is more successful when started with an x value greater than z, certainly doing better than gN, which is the worst of the whole lot. gH is remarkable in that it always moves steadily towards z. Thanks to the limiting actions described above, the new methods now all converge in 4 (6) iterations on average in the square (cube) root case. They are generally better than gN and close to gH. An exception is gKaiN, whose results exactly coincide with those of gH in the square root case but worsen in the cube root problem, especially at the low x end: gKaiN falls behind gN when the initial guess is 1 and requires 65 iterations to converge to z!
4.2. A more complicated problem
Here is a more involved case:
f(x) = A x^2 − B e^(Cx) cos x + D,
with A = 1, B = 5, C = 0.1, D = 4.65, and derivatives
f′(x) = 2Ax + B e^(Cx) (sin x − C cos x),
f″(x) = 2A + B e^(Cx) (2C sin x + cos x − C^2 cos x).
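A minimal sketch of the direct-substitution runs on this function follows. Newton's and Halley's steps are standard; for gKaiN we assume, per the average-and-invert description, the form x_{k+1} = x − 2f/(f'(x) + f'(xN)) with Newton's point xN as auxiliary (our reading, since the defining section is not in this excerpt). Under that assumption the first iterates agree with the gN, gH, and gKaiN columns of Table 6.

```python
import math

# Section 4.2 test function and its derivatives (A=1, B=5, C=0.1, D=4.65).
A, B, C, D = 1.0, 5.0, 0.1, 4.65

def f(x):
    return A * x * x - B * math.exp(C * x) * math.cos(x) + D

def fp(x):
    return 2 * A * x + B * math.exp(C * x) * (math.sin(x) - C * math.cos(x))

def fpp(x):
    return 2 * A + B * math.exp(C * x) * (
        2 * C * math.sin(x) + math.cos(x) - C * C * math.cos(x))

def newton(x):                       # gN
    return x - f(x) / fp(x)

def halley(x):                       # gH
    fx, d1, d2 = f(x), fp(x), fpp(x)
    return x - 2 * fx * d1 / (2 * d1 * d1 - fx * d2)

def kai_newton(x):                   # gKaiN (assumed average-and-invert form)
    xN = newton(x)                   # Newton auxiliary point
    return x - 2 * f(x) / (fp(x) + fp(xN))

def solve(step, x0=0.7, tol=1e-8, kmax=10):
    """Direct substitution until |f| < tol or the iteration budget runs out."""
    x, k = x0, 0
    while abs(f(x)) >= tol and k < kmax:
        x, k = step(x), k + 1
    return x, k

for step in (newton, halley, kai_newton):
    x, _ = solve(step)
    assert abs(x - 0.3927851) < 1e-6   # the root reported in Table 6

# First iterates from x = 0.7 (compare the gN, gH, gKaiN columns of Table 6):
assert abs(newton(0.7) - 0.4663395) < 1e-4
assert abs(halley(0.7) - 0.4159243) < 1e-4
assert abs(kai_newton(0.7) - 0.4140947) < 1e-4
```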
Table 4
Square root iterations^a: xk+1 = g(xk)
k
gC
gH
gN
gKaiN
gKiaC
Starting at x = 1 1 −1174.625 2.92233 50.50000 2.92233 25.72893 2 −440.5482 7.17764 26.24010 7.17764 9.01978 3 −165.3758 9.91168 15.02553 9.91168 10.00289 4 −62.46917 10.00000 10.84043 10.00000 10.00000 5 −24.62140 10.00000 10.03258 10.00000 10.00000 6 −12.19541 10.00005 7 −10.03397 10.00000 8 −10.00000 10.00000 9 −10.00000
gKiaH
gKiaN
gKiaz
gKenN
gKenNav
gKea
gKez
34.21927 7.17764 10.08911 10.00000 10.00000
26.24010 10.84043 10.00005 10.00000 10.00000
28.22500 4.63822 10.83083 9.99868 10.00000 10.00000
7.69910 48.68638 7.69910 39.64714 9.92644 13.41336 9.92644 9.04284 10.00000 9.98483 10.00000 10.00576 10.00000 10.00000 10.00000 10.00000 10.00000 10.00000
Starting at x = 25 1 12.29500 11.70886 2 10.03812 10.00976 3 10.00000 10.00000 4 10.00000 10.00000 5 6
14.50000 10.69276 10.02279 10.00003 10.00000 10.00000
11.70886 9.07493 8.54054 10.69828 10.00976 10.00240 10.00976 10.00003 10.00000 10.00000 10.00000 10.00000 10.00000 10.00000 10.00000 10.00000
6.62500 10.14507 9.99999 10.00000 10.00000
12.04638 10.29982 12.04638 7.68843 10.02247 9.99997 10.02247 10.08911 10.00000 10.00000 10.00000 10.00000 10.00000 10.00000 10.00000 10.00000
Starting at x = 50 1 20.24000 18.42105 2 11.14478 10.53414 3 10.00588 10.00035 4 10.00000 10.00000 5 10.00000 10.00000 6 7
26.00000 14.92308 10.81205 10.03040 10.00004 10.00000 10.00000
18.42105 8.35573 5.42857 10.53414 10.01583 10.53414 10.00035 10.00000 9.99965 10.00000 10.00000 10.00000 10.00000 10.00000
−22.0000 9.96875 10.00000 10.00000
19.49002 13.63647 19.49002 10.43069 10.85842 9.98315 10.85842 9.99951 10.00203 10.00000 10.00203 10.00000 10.00000 10.00000 10.00000 10.00000 10.00000 10.00000
14.92308 10.03050 10.00000 10.00000
a For gKiaz, the x sequence starting at 50 is (50, 12.5000, 9.96875, 10.00000).
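The gN and gH runs of Table 4 can be reproduced with a small driver (function and stopping rule as in Section 4.1.5; the names below are ours):

```python
# Square root case: f(x) = x^2 - 100, z = 10.  Direct-substitution driver
# of Section 4.1.5: iterate x_{k+1} = g(x_k) until |f| < 1e-4 or k exceeds 10.
f = lambda x: x * x - 100.0
fp = lambda x: 2.0 * x
fpp = lambda x: 2.0

def g_newton(x):
    return x - f(x) / fp(x)

def g_halley(x):
    fx, d1 = f(x), fp(x)
    return x - 2 * fx * d1 / (2 * d1 * d1 - fx * fpp(x))

def run(g, x0=1.0, tol=1e-4, kmax=10):
    """Return the iterate sequence produced by direct substitution."""
    xs, x = [], x0
    for _ in range(kmax):
        if abs(f(x)) < tol:
            break
        x = g(x)
        xs.append(x)
    return xs

newton_seq = run(g_newton)
halley_seq = run(g_halley)
# Leading iterates match the gN and gH columns of Table 4 (start x = 1):
assert abs(newton_seq[0] - 50.5) < 1e-3 and abs(newton_seq[1] - 26.2401) < 1e-3
assert abs(halley_seq[0] - 2.92233) < 1e-3 and abs(halley_seq[2] - 9.91168) < 1e-3
# Both drivers end near z = 10:
assert abs(newton_seq[-1] - 10.0) < 1e-3 and abs(halley_seq[-1] - 10.0) < 1e-3
```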
Table 6 compares selected methods starting at x = 0.7 and iterating until |f| < 10^−8. The insider data needed are f′(z) = 2.295624 and f″(z) = 7.154334 with z = 0.39278506. The winners are gKiaN and gKaiz in this case. The latter requires f″(z), however. Koçak [9,10] applied gKiaN to estimate the boiling temperature (T) of an N-component mixture at a specified pressure (P). The function to be solved was
f(T) = Σ_{i=1..N} yi − 1, yi = xi pi0(T)/P, pi0(T) = exp(A1i + A2i/(A3i + T)), N = 3.
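For illustration, here is a hedged sketch of this bubble-point function with hypothetical Antoine-type coefficients. The paper does not list its A1i, A2i, A3i, xi, or P, so the numbers below are invented solely to check the stated derivative formula against a central difference:

```python
import math

# Hypothetical 3-component data (illustrative values only, not the paper's):
# Antoine-type constants for p_i0(T) = exp(A1_i + A2_i/(A3_i + T)),
# liquid mole fractions x_i, and total pressure P.
A1 = [14.0, 13.5, 15.0]
A2 = [-3000.0, -2600.0, -3800.0]
A3 = [-40.0, -30.0, -50.0]
xliq = [0.3, 0.4, 0.3]
P = 101.325

def fT(T):
    """f(T) = sum_i y_i - 1 with y_i = x_i p_i0(T) / P."""
    return sum(x * math.exp(a1 + a2 / (a3 + T)) / P
               for x, a1, a2, a3 in zip(xliq, A1, A2, A3)) - 1.0

def fpT(T):
    """Analytic derivative f'(T) = -sum_i y_i A2_i / (A3_i + T)^2."""
    return -sum(x * math.exp(a1 + a2 / (a3 + T)) / P * a2 / (a3 + T) ** 2
                for x, a1, a2, a3 in zip(xliq, A1, A2, A3))

# Central-difference check of the derivative formula at an arbitrary T:
T, h = 350.0, 1e-4
numeric = (fT(T + h) - fT(T - h)) / (2 * h)
assert abs(numeric - fpT(T)) < 1e-6 * max(1.0, abs(fpT(T)))
```

The check passes for any smooth coefficient set, which is the point: the quoted f′(T) follows directly from differentiating pi0(T) inside the sum.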
Here its derivative was f′(T) = −Σ_{i=1..N} yi A2i/(A3i + T)^2. Compared in [10] were the results of applying gN, g∗mN ≡ gH, gKiaN, and g∗mKiaN, the latter being gKiaN accelerated by the g∗m transformation described above.
5. Conclusions
Equipped with suitable bounds, the newly created class of third-order methods is now fully operational and ready to be tested in further applications. The benchmark cases have proved the wide-range stability and success of the three current subclasses. With respect to decimal rectification and total number of iterations, most members are superior to second-order Newton's formula as well as third-order Chebychev's and Halley's. Further, gKiaN appears to be the most favourable member owing to its relative simplicity, low cost, overall success, and stability. This member of the invert-and-average subclass employs Newton's method as an auxiliary and then averages two derivative reciprocals per step. It is interesting that gKea is related to gG-type solvers like gC and gH. Assignment of class parameters carries non-members to a common basis for comparison. This research has a by-product: a one-step "solver" that is a promising tool for post-priori analysis, trouble shooting, and comparison.
Table 5
Cube root iterations^a: xk+1 = g(xk)
k
gC
gH
gN
Starting at x = 1 1 −110555 1.99701 2 −61419 3.94705 3 −34122 7.42570 4 −18957 9.79535 5 −10531 9.99994 6 −5851 10.00000 7 −3250 8 −1806 9 −1003 10 −557
334.0000 222.6697 148.4532 98.98390 66.02329 44.09199 29.56612 20.09207 14.22043 11.12865
Starting at x = 20 1 12.46528 11.76471 2 10.13136 10.02810 3 10.00004 10.00000 4 10.00000 10.00000 5 10.00000
14.16667 11.10534 10.10638 10.00112 10.00000
Starting at x = 250 1 138.8978 125.0120 2 77.19423 62.55398 3 42.97887 31.46827 4 24.17715 16.47956 5 14.36872 10.72391 6 10.49207 10.00227 7 10.00172 10.00000 8 10.00000 10.00000 9 10.00000 10
166.6720 111.1267 74.11144 49.46831 33.11509 22.38069 15.58594 11.76281 10.25098 10.00609
gKaiN 1.00597 1.01208 1.01835 1.02476 1.03135 1.03810 1.04503 1.05214 1.05945 1.06697
gKiaC
gKiaH
gKiaN
167.5000 49.19700 15.15932 9.74282 10.00015 10.00000 10.00000
209.2450 167.5014 34.93968 76.80418 8.00846 35.35707 10.09450 16.92690 9.99999 10.54650 10.00000 10.00031 10.00000 10.00000
12.23121 9.57501 8.65417 10.08193 10.00073 10.02590 10.00001 10.00000 10.00000 10.00000 10.00000 10.00000 10.00000 134.6250 72.52370 39.16590 21.48116 12.84193 10.15069 10.00004 10.00000 10.00000
73.36192 21.85010 9.64444 10.00004 10.00000 10.00000
41.71199 8.523116 10.03506 10.00000 10.00000 10.00000
11.27018 10.00422 10.00000 10.00000
114.5980 52.59385 24.43528 12.66294 10.04015 10.00000 10.00000
gKiaz
gKenN
gKenNav
gKea
169.1650 −7926 −89 9.99852 10.00000 10.00000
46.06665 26.04741 15.37052 10.77952 10.00639 10.00000 10.00000
5.41667 11.59615 9.97065 10.00000 10.00000
12.51635 10.92552 12.19117 10.38246 10.14116 9.99874 10.07689 9.99898 10.00004 10.00000 10.00001 10.00000 10.00000 10.00000 10.00000 10.00000 10.00000 10.00000
−25832 139.9925 −353 78.41427 9.01234 43.99440 10.00893 24.91141 10.00000 14.80659 10.00000 10.00333 10.00000 10.00000
333.9940 150.4446 67.78848 30.65331 14.39194 10.06499 10.00000 10.00000 10.00000
112.6154 50.76824 23.08058 11.74112 9.99682 10.00000 10.00000
46.06665 24.92767 14.31081 10.39638 10.00066 10.00000 10.00000
gKez
133.7089 71.54165 38.38127 20.94504 12.57111 10.11533 10.00002 10.00000 10.00000
239.6845 107.9697 48.67955 22.15846 11.10005 9.97574 10.00000 10.00000
112.6154 50.76825 23.08058 11.42775 9.946778 10.00002 10.00000 10.00000
a For gKiaz, the x sequence starting at 1 is (1, 169.165, 42.29125, 10.57281 . . .) and starting at 250 is (250, 62.50000, 62.50000, 15.62500, . . .).
Table 6
Iterations to solve the last case starting at x = 0.7
k
gC
gH
gN
gKiaz
gKiaN
gKaiz
gKaiN
gKez
gKea
1 2 3 4 5 6
0.4248716 0.3929155 0.3927851 0.3927851
0.4159243 0.3928125 0.3927851 0.3927851
0.4663395 0.3996174 0.3928563 0.3927851 0.3927851 0.3927851
0.3569780 0.3929115 0.3927851 0.3927851
0.3990491 0.3927851 0.3927851
0.3918444 0.3927851 0.3927851
0.4140947 0.3928062 0.3927851 0.3927851
0.2271407 0.4372970 0.3922718 0.3927851 0.3927851
0.4209642 0.3928552 0.3927851 0.3927851
References
[1] K.E. Atkinson, An Introduction to Numerical Analysis, Wiley, New York, 1978.
[2] U.K. Deiters, Calculation of densities from cubic equations of state, AIChE J. 48 (4) (2002) 882–886.
[3] L.V. Fausette, Numerical Methods: Algorithms and Applications, Prentice-Hall, Englewood Cliffs, NJ, 2003.
[4] C.-E. Fröberg, Introduction to Numerical Analysis, second ed., Addison-Wesley, Reading, MA, 1972.
[5] W. Gander, On Halley's iteration method, Amer. Math. Monthly 92 (1985) 131–134.
[6] J. Gerlach, Accelerated convergence in Newton's method, SIAM Rev. 36 (1994) 272–276.
[7] M.A. Hernandez, N. Romero, Accelerated convergence in Newton's method for approximating square roots, J. Comput. Appl. Math. 177 (1) (2005) 225–229.
[8] M.Ç. Koçak, Simple geometry facilitates iterative solution of a nonlinear equation via a special transformation to accelerate convergence to third order, J. Comput. Appl. Math., accepted for publication.
[9] M.Ç. Koçak, A new iterative method created and applied to equilibrium calculations, in: 17th International Congress of Chemical and Process Engineering CHISA 2006, Prague, Czech Republic, 27–31 August 2006.
[10] M.Ç. Koçak, Acceleration of equilibrium calculations via a new technique, in: 17th International Congress of Chemical and Process Engineering CHISA 2006, Prague, Czech Republic, 27–31 August 2006.
[11] G.A. Nedzhibov, An acceleration of iterative processes for solving nonlinear equations, Appl. Math. Comput. 168 (2005) 320–332.
[12] M. Pakdemirli, H. Boyacı, Generation of root finding algorithms via perturbation theory and some new formulas, Appl. Math. Comput. 184 (2) (2007) 783–788.
[13] J.R. Sharma, A family of third-order methods to solve nonlinear equations by quadratic approximation, Appl. Math. Comput. 184 (2) (2007) 210–215.
[14] J.F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, NJ, 1964.
[15] S. Weerakoon, T.G.I. Fernando, A variant of Newton's method with accelerated third-order convergence, Appl. Math. Lett. 13 (2000) 87–93.