Symbolic computation and computer graphics as tools for developing and studying new root-finding methods

Symbolic computation and computer graphics as tools for developing and studying new root-finding methods

Applied Mathematics and Computation 295 (2017) 95–113 Contents lists available at ScienceDirect Applied Mathematics and Computation journal homepage...

2MB Sizes 0 Downloads 26 Views

Applied Mathematics and Computation 295 (2017) 95–113

Contents lists available at ScienceDirect

Applied Mathematics and Computation journal homepage: www.elsevier.com/locate/amc

Symbolic computation and computer graphics as tools for developing and studying new root-finding methods I. Petkovic´ a,∗, Ð. Herceg b a

Faculty of Electronic Engineering, University of Niš, A. Medvedeva 14, Niš 18000, Serbia Department of Mathematics and Informatics, Faculty of Science, Trg Dositeja Obradovi´ca 4, University of Novi Sad, Novi Sad 21 000, Serbia

b

a r t i c l e

i n f o

Keywords: Symbolic computation Solving nonlinear equations Multiple zeros Computer graphics Dynamic study Basin of attraction

a b s t r a c t Many very difficult problems in applied mathematics and other scientific disciplines cannot be solved without powerful computational systems, such as symbolic computation and computer graphics. In this paper we construct two new families of the fourth order iterative methods for finding a multiple real or complex zero of a given function. For developing these methods, a recurrent formula for generating iterative methods of higher order for solving nonlinear equations is applied and implemented by symbolic computation through several programs in computer algebra system Mathematica. Symbolic computation was the only tool for solving the considered complex problem since it provides handling and manipulating complex mathematical expressions and other mathematical objects. The properties of the proposed rapidly convergent methods are illustrated by several numerical examples. To examine the convergence behavior of the presented methods, we also give the dynamic study of these methods using basins of attraction. Such a methodology, besides a visualization of iterative processes, deliveries very important features on iterations including running CPU time and average number of iterations, as a function of starting points. The program for plotting basins of attraction in Mathematica is included. © 2016 Elsevier Inc. All rights reserved.

1. Introduction In many scientific disciplines such as applied mathematics, engineering disciplines, physics, chemistry, computer science, biology, astronomy, geology, as well as many other fields of human activities, the solution to a lot of complex problems had to wait for the development of computer hardware and software to reach a more advanced level. Many difficult problems remained unsolved due to the lack of powerful hardware and advanced and sophisticated software. As commented in [1], “at the beginning of the 21st century the rapid development of computer power and accessibility, computer multi-precision arithmetics and symbolic computation enabled the construction, testing and analysis of very efficient numerical algorithms, even confirmation of analytically derived results.” Symbolic computation has successfully substituted lengthy manual calculation with computer-based computation and manipulation. In particular, in numerical mathematics further development of new algorithms of higher efficiency and higher accuracy has become possible. In this paper we concentrate on methods and procedures for the construction, analysis and practical application of algorithms for solving nonlinear equations with the support of symbolic computation. We emphasize that the construction of presented root-solvers would be hardly feasible and most likely impossible without the use of this specific computer software. ∗

Corresponding author. ´ [email protected] (Ð. Herceg). E-mail addresses: [email protected], [email protected] (I. Petkovic),

http://dx.doi.org/10.1016/j.amc.2016.09.025 0 096-30 03/© 2016 Elsevier Inc. All rights reserved.

I. Petkovi´c, Ð. Herceg / Applied Mathematics and Computation 295 (2017) 95–113

96

Symbolic computation (often called computer algebra systems) for solving mathematical problems deals with mathematical expressions and other mathematical objects, even when these expressions contain variables in a general form such as f(a, b, c, ...), where a, b, c are not numerical values and f need not be given explicitly. Among many manipulations, this specific software enables automatic simplification of expressions, differentiation, integration, polynomial manipulations and even exact computation, treating the mentioned variables as symbols. The interested reader is referred to the books by Cohen [2] and Beiley et al. [3] for more details on symbolic computation and its applications. Another tool for comparison and analysis of root-finding algorithms, presented in this paper, belongs to the area of computer graphics. It provides a deep and fruitful insight into visualization of approximating function zeros using basins of attraction, see, e.g. [4] and [5]. Such approach dramatically improves the quality estimation of iterative methods concerning areas of convergence. Combined with the study of computational efficiency of root-finding methods, it leads to considerably better understanding of iterative processes. Today, mathematicians and computer scientists carry out sophisticated mathematical operations employing powerful computer machines supported by the modern computer algebra systems such as Mathematica, Maple, Axiom, GAP, Maxima, Sage and SymPy. These computer algebra systems, which enable both symbolic computation and a dynamic study using basins of attraction, are available on Windows, Mac OS X and Linux. There is a vast number of papers devoted to iterative methods for finding multiple zeros, see, e.g., [6–17]. In this paper we are concerned with generating iterative methods of higher order for finding approximations to multiple roots of nonlinear equations. For this purpose, we use a suitable generating recurrence formula for the construction of an iterative method of order n + 1 starting from a known iterative method of order n. In this process there appear the derivative of complicated and lengthy mathematical expressions and necessary simplifications. For this reason, we were forced to employ symbolic computation through several programs written in computer algebra system Mathematica. Our attention was restricted to the two families of iterative processes for approximating multiple zeros: Laguerre’s type methods and Traub–Gander’s methods, both of the cubic convergence. The application of the mentioned generating recurrence relation to these families gives two new families of order four. In addition, in Section 3 it is shown that there is a connection between the Laguerre family and the Traub–Gander family. In this way we derive some new fourth order methods for multiple zeros. Convergence properties of the derived methods are demonstrated in Section 4 by four numerical examples. In Section 5 we use basins of attraction to study and compare the presented iterative methods as a function of starting points. Besides useful data about CPU time and average number of iterations for 360,0 0 0 points belonging to a square, a visual insight into convergence behavior of the tested methods was enabled in this way. There is also an Appendix which contains a complete program written in Mathematica for plotting basins of attraction and providing useful information on iteration process. 
We consider that this program will be of benefit to the readers since complete programs written in Mathematica are very seldom presented in the literature. 2. Laguerre-like methods Let f be a given sufficiently many times differentiable function in some complex domain S with a simple or multiple zero

α . One of the best known iterative methods for finding approximations to α is the third order Laguerre family of iterative methods

zˆ = z −

f  (z ) ±



(ν − 1 )

ν f (z ) , (z )2 − ν (ν − 1 ) f (z ) f  (z )

(1)

2 f

where ν (=0, 1) is a parameter and zˆ is a new approximation. There are a lot of papers and book chapters devoted to Laguerre’s method (1), see for instance [15–24]. This method possesses very good convergence properties and robustness so that it is often used in modern software packages. Note that Laguerre’s method shows extremely good behavior when |z| is large, see Parlett [22]. The main goal of this section is to construct Laguerre-like methods of fourth order for finding multiple zeros. We start with the family of third order iterative methods of Laguerre’s type for finding a multiple zero α of f of the known multiplicity m, presented in the form

Lm ( f, ν ; z ) := z −

 1+s

mν f ( z ) / f  ( z )



f (z ) f  (z ) (ν − 1 ) mν − 1 − mν  2 f (z )

 ( ν = 0, 1 ),

(2)

which was known to Bodewig [19]. The symbol s ∈ {−1, 1} denotes the sign which should be chosen in front of the square root. The following alternative form of (2) (setting mν = λ in (2)) has been presented in [16]

Lm ( f, λ; z ) := z −

λ f ( z )/ f  ( z )   ( λ = 0, m ). f (z ) f  (z ) λ − m  1 + sgn(λ − m ) λ−1−λ  2 m f (z )

(3)

I. Petkovi´c, Ð. Herceg / Applied Mathematics and Computation 295 (2017) 95–113

97

According to (3) and the relation mν = λ it follows that the sign s in (2) should be taken as s = sgn(ν − 1 ). We note that most computational softwares (including computer algebra system Wolfram’s Mathematica, used for the execution of numerical examples in Section 4) return the square root (of real or complex number) with positive real component automatically. However, we leave the sign s in forthcoming formulas of Laguerre’s type to be more general and to ensure correct implementation for every compiler. Choosing different values of the parameter ν in Laguerre’s formula (2), we obtain various cubically convergent methods, such as Euler’s method (ν = 2, s = 1) and Ostrowski’s method (ν → ∞, s = 1 or ν → −∞, s = −1, assuming that the limit process is applied in the latter case. If ν = 1, then (2) becomes

f (z ) Lm ( f, ν ; z ) = z − m  . f (z ) The last iterative formula defines quadratically convergent Newton-like method for multiple zeros, which was known to Schröder [25] many decades ago. In what follows we will always assume that ν = 1. Remark 1. According to the above discussion about the choice of the sign s in (2) it follows that if the parameter ν is very small in magnitude then s = sgn(ν − 1 ) = −1. Let us consider the following important particular case when ν → 0, yielding s = −1. After rationalization of the denominator of (2), we obtain in the limit process

Lm ( f, 0; z ) = z −

f (z ) . f (z ) f  (z ) m+1  f (z ) −  2m 2 f (z )

(4)

This is third order method of Halley’s type for multiple zeros, often studied in the literature (see, e.g., [20,26]). Remark 2. Starting from the factorization f (z ) = (z − α )m p(z ), p(α ) = 0, Hansen and Patrick [20] have applied original approach and derived the following third order method

zˆ = z −

mλ f  (z ) ±



m(mλ + 1 ) f (z ) m(λ(m − 1 ) + 1 ) f  (z )2 − m(mλ + 1 ) f (z ) f  (z )

.

(5)

Although this method was announced as a new one, it is only rediscovered Laguerre-like method (2). Indeed, putting ν = 1/(mλ ) + 1 in (2) we obtain (5). In order to construct fourth order methods of Laguerre’s type, we use the following assertion considered in [27]. Theorem 1. Let zk+1 = gn (zk ) (k = 0, 1, . . . ) be an iterative method of order n for finding a simple or multiple zero α of a given function f (sufficiently many times differentiable). Then the iterative method

zk+1 = gn+1 (zk ) := zk −

z k − gn ( z k ) 1 1 − gn (zk ) n

( n ≥ 2; k = 0, 1, . . . )

(6)

has the order of convergence n + 1. Let us introduce the abbreviation

A j = A j (z ) =

f ( j ) (z ) ( j = 2, 3, . . . ). j! f  (z )

In the sequel the term cor in programs will denote the correction ψ (z) in iteration function of the form ϕ (z ) = z − ψ (z ). Fourth order methods are obtained starting from the third order Laguerre family (2) by a short program in Mathematica using symbolic computation. In that case, we take g3 (z ) = Lm ( f, ν ; z ) and deal with the abbreviation

Q (z ) =



(ν − 1 )(ν m(1 − A2 u ) − 1 )

(see (2)). We will restrict to the case ν > 1, the asymptotic error constant for ν < 1 is determined in a similar way by a small modification of PROGRAM 1. For simplicity we replace ν by a. PROGRAM 1: CONSTRUCTION OF LAGUERRE-LIKE METHODS OF FOURTH ORDER g[z_ ] = z - (a* m *f[z]/f’[z])/(1 + s*Q[z]); g1[z_ ] = FullSimplify[D[g[z], z]]; Q’[z] = -a* m (a - 1)/(2 Q[z]) D[(f[z] f’’[z])/f’[z]^2, z]; f[z] = u*f’[z]; f’’[z] = 2*A2*f’[z]; f’’’[z] = 6*A3*f’[z]; cor = FullSimplify[(z - g[z])/(1 - g1[z]/3)] Out[cor] =

3amuQ[z](1 + sQ[z] ) (−1 + a )a2 m2 su(A2 − 4uA22 + 3uA3 ) + Q[z](1 + sQ[z] )(2 + a(m − 2muA2 ) + 2sQ[z] )

I. Petkovi´c, Ð. Herceg / Applied Mathematics and Computation 295 (2017) 95–113

98

Substituting a = ν we obtain the iterative formula

Lm ( f, ν ; z ) = z − cor

3ν muQ[z](1 + sQ[z] ) , (ν − 1 )ν 2 m2 su(A2 − 4uA22 + 3uA3 ) + Q [z](1 + sQ [z] )(2 + ν (m − 2muA2 ) + 2 sQ[z] )  Q = (ν − 1 )(ν m(1 − 2A2 u ) − 1 ), (ν = 0, 1 ).

= z−

(7)

Theorem 2. The order of convergence of Laguerre-like iteration (7) is four. The proof of Theorem 2 follows directly according to Theorem 1. Let us consider some particular methods which follow from (7) for the special choices of parameter ν = 2, ν → ∞, ν → 0. For the second and third case we apply a limit process. In this way we obtain the following iterative methods of the fourth order (omitting the argument z for simplicity). ν = 2, Euler-like method:

3mu(1 − Q + m(−2 + 4A2 u )) , 2 + 3m(1 + Q )(−1 + 2A2 u ) − m2 (2 − 6A2 u + 6A3 u2 )

Lm ( f, 2; z ) = z −

Q=



(8)

2m − 1 − 4mA2 u.

ν → ∞, Ostrowski-like method: Lm ( f, ∞; z ) = z −

3mu(1 − 2A2 u )



m(1 − 3A2 u + 3A3 u2 ) + 2(1 − 2A2 u ) m − 2mA2 u

.

(9)

ν → 0, Halley-like method: Lm ( f, 0; z ) = z −

3mu(1 + m(1 − 2A2 u )) . 1 + 3m ( 1 − 2A2 u ) + 2m2 ( 1 − 3A2 u + 3A3 u2 )

(10)

Remark 3. The iterative method (10) was derived in [28]. For m = 1 the method (10) reduces to Kiss’ method [29] for simple zeros, although it was explicitly presented in the fundamental paper [25] by Schröder many decades before Kiss. The determination of asymptotic error constant of Laguerre-like method (7) is a difficult task and we are forced to use symbolic computation presented in the program below. We will use the following development of the function f about the zero α of multiplicity m

f (z ) =





f (m ) ( α ) m ε 1 + C1 ε + C2 ε 2 + C3 ε 3 + · · · , m!

ε = z − α , Ck =

m! f ( m+k ) ( α ) ( k = 0, 1, 2, . . . ). ( m + k )! f ( m ) ( α )

PROGRAM 2: ASYMPTOTIC ERROR CONSTANT OF LAGUERRE-LIKE METHODS u = Sum[uk[k]*e^k,{k,1,5}]; Q = Sum[qk[k]*e^k, {k,0,5}]; g0 = Series[(a*m*u)/(1 + Q),{e,0,6}] // FullSimplify; p = Series[1 - 1/3 *(1 - D[g0, e]),{e,0,5}] // FullSimplify; g1 = Series[1/p,{e,0,5}] // FullSimplify; e1 = Series[e - g0*g1,{e,0,4}] // FullSimplify; uk[j_ ] := FullSimplify[Coefficient[e Series[(1 + Sum[C[k]*e^k,{k,1,6}])/ (m + Sum[(m+k) C[k] e^k,{k,1,6}]),{e,0,6}],e,j]]; s[j_ ] := FullSimplify[Coefficient[Series[(m(m-1) + Sum[(m+k-1)(m+k) C[k]*e^k,{k,1,6}])/(m + Sum[(m+k) C[k]*e^k,{k,1,6}]),{e,0,6}], e, j]]; t = (Sum[uk[k]*e^(k-1),{k,1,5}])* (Sum[s[k]*e^k, {k,0,5}]); qk[j_ ] := FullSimplify[Coefficient[Series[Sqrt[(a-1) (a*m-1) - a(a-1)*m*t],{e,0,6}],e,j],Assumptions -> a-1 > 0]; b[j_ ] := Coefficient[e1, e, j] // FullSimplify b[4] /. C[1] -> C1 /. C[2] -> C2 /. C[3] -> C3 // Apart Out[b[4]]=

C13 −C 13 + 2C 1C 2 C 13 − 3C 1C 2 + 3C 3 + + 3m 6(−1 + a )2 m3 2(−1 + a )m2

I. Petkovi´c, Ð. Herceg / Applied Mathematics and Computation 295 (2017) 95–113

99

Fig. 1. Influence of the parameter ν to (i) approximation errors (left) and (ii) variation rate (right) of Laguerre-like method (7) for ν ∈ (−30, 30 ).

In the above program

Bk =

a[4] gives the asymptotic error constant of Laguerre-like family (7). Introducing the abbreviation

f (k ) ( α ) k!

we find that Ck = Bm+k /Bm . Now we use the expression for the Laguerre-like family (7) in the form

AEC (7 ) =

(11)

a[4] given above and rewrite the asymptotic error constant for

 B3m+1 2Bm Bm+1 Bm+2 − B3m+1 B3m+1 − 3Bm Bm+1 Bm+2 + 3Bm+3 B2m  1 + + , 3m 2m2 ( ν − 1 ) B3m 6m3 (ν − 1 )2

where we put a = ν. It is assumed that ν = 1, which was previously stressed. The lengthy expression is the price which had to be paid in order to present a general result. In special cases for particular methods we have

AEC (8 ) = AEC (9 ) = AEC (10 ) =

 1 B2m Bm+3 (m − 1 )Bm Bm+1 Bm+2 (2m − 1 )(m − 1 )B3m+1  − + , m m2 6m3 B3m  B3  1 m+1 2 − B B B + B B , m m +1 m +2 m +3 m 3 mB3m

 1 B2m Bm+3 (m + 1 )Bm Bm+1 Bm+2 (2m + 1 )(m + 1 )B3m+1  − + . 3 m m2 6m3 Bm

We end this section with a short study of the dependence of the accuracy of zero approximations and convergence behavior of the new iterative method (7) as a function of the parameter ν . For this purpose we have investigated the approximation errors |Lm ( f, ν ; z ) − α| and the variation rate of Lm ( f, ν ; z ) (given by (7)) when the parameter ν and the approximation x vary within some (real) intervals. The variation rate is presented by the function

   ∂ Lm ( f, ν ; z )  .  v(z, ν ) =   ∂ν

We have shown that the Laguerre-like method (7) preserves the order of convergence four for arbitrary parameter ν = 1 and in the limit cases when ν → 0 (Halley-like method (10)) and ν → ±∞ (Ostrowski-like method (1)). However, the convergence rate of the method (7) is only cubic for ν = 1. In our experiment we have chosen the test function

f ( z ) = ( z 4 + z )2 which has the zeros 0, −1, eiπ /6 , e−iπ /6 , each of multiplicity 2. We will concentrate to the zero α = 0. We have plotted two graphs (see Fig. 1) taking the interval [−0.5, 0.5] for z (the neighborhood of α = 0) and the interval [−30, 30] for the parameter ν . Absolute values of the approximation errors |Lm ( f, ν ; z ) − α| are displayed in Fig. 1 (left) and the variation rate v(x, ν ) is presented in Fig. 1 (right). Approximation errors |Lm ( f, ν ; z ) − α| and the variation rate v(z, ν ) have small values for ν (excepting the vicinity of ν = 1), especially when z approaches the zero α = 0. Besides, both graphs are considerably smooth in this range including large values of |ν |. Since large |ν | just defines Ostrowski-like method (9), these facts point to a pretty stable convergence behavior and good convergence characteristics of the method (9). This inference fully matches the conclusions given in Sections 4 and 5. The line (colored in white) on both graphs shows that the program Mathematica could not produce a part of graphs for ν = 1, the value that was earlier mentioned as a critical point. In fact, the parameter ν = 1 defines the third order method (4) in a limit process, as shown before.

I. Petkovi´c, Ð. Herceg / Applied Mathematics and Computation 295 (2017) 95–113

100

Altogether, we may conclude that the Laguerre-like method (7) shows good convergence behavior for a considerably large range of values of the parameter ν . Also, it can be concluded that the choice of ν close to 1 (say, in the interval [1 − , 1 + ] for 0 < 1) should be avoided since this interval of ν gives only cubic convergence. We note that the above conclusions have been drawn according to the test function f (z ) = (z4 + z )2 and several other test-functions. For this reason, we may speak about typical convergence behavior rather than about absolutely true facts. Namely, the variation rate v(z, ν ) is given by a very complicated expression so that there might be examples of test-functions as exceptions. 3. Methods of Traub–Gander’s type Let us start this section with the following assertion given in [15, Theorem 2–5]. Theorem 3. Let G(z) be an iteration function which defines a zero-solver of order p ≥ 2 for some set of multiplicities m. Then for these values of m there exists a function ω(z) such that

G ( z ) = z − u ( z )ω ( z ),

u (z ) =

f (z ) , f  (z )

ω ( α ) = 0.

(12)

To provide a rich structure of the relation (12) and to construct a variety of iterative methods, it is suitable to choose

ω (z ) = h(r (z )) and then search for suitable forms of the functions h and r that produce the desired order of convergence of (12). Then the iteration function (12) gets the form

G(z ) = z − u(z )h(r (z )).

(13)

The form of the function r = r (z ) will be considered in what follows. For example, in some cubically convergent iterative methods for solving nonlinear equations we often meet the expression

r (z ) = R f (z ) :=

f (z ) f  (z ) . f  ( z )2

(14)

Taking h(r (z )) = 1 + r (z )/(2(1 − β r (z ))) in (13), where β is a parameter, Werner [30] derived the following class of iterative methods for finding simple zeros





R f (z ) f (z ) W ( f, β ; z ) = z −  1+ , f (z ) 2(1 − β R f (z ))

(15)

where Rf (z) is defined by (14). The family of iterative methods (15) is often called Chebyshev–Halley’s method; namely, setting β = 0 in (15) one obtains Chebyshev’s method, for β = 1/2 Halley’s method, for β = 1 Super Halley’s method and Newton’s method when β tends to ± ∞. For more details see, e.g., [31–33], and references cited in [33]. Note that Gander [34] generalized the class of methods (15) dealing with the form (13). He found that the family (13) is cubically convergent under the condition

h ( 0 ) = 1,

h ( 0 ) = 1 /2 ,

|h (0 )| < ∞,

(16)

see [34] and [35]. Due to the mentioned historical notes and Traub’s Theorem 3, families of methods in this section will be called Traub–Gander’s families. Let us now consider the case when α is the zero of f of the known order of multiplicity m ≥ 1. Note that α is a simple zero for the function

F (z ) = f (z )1/m . We find the first two derivatives of F:

F  (z ) =

1 F (z ) f  (z ) f (z )1/m−1 f  (z ) = , m m f (z )

F  (z ) = F (z ) ·

(1 − m ) f  (z )2 + m f (z ) f  (z ) . m2 f ( z )2

(17)

Using the expressions for the derivatives F and F  given by (17), let us replace u(z) and Rf (z), appearing in (13) and (14), by new functions v and Tf (z) defined by

v(z ) :=

F (z ) m f (z ) =  , F  (z ) f (z )

t = T f (z ) :=

F (z )F  (z ) m f (z ) f  (z ) =1−m+ .  2 F (z ) f  ( z )2

(18)

For simplicity, we will write sometimes only t instead of Tf (z) assuming that t is a function of z given by (18). Using the expressions given by (18) we can modify the iteration function (13) for simple zeros to the following iteration function for finding multiple zeros

f (z ) Gm ( f ; z ) = z − m  h(T f (z )). f (z )

(19)

I. Petkovi´c, Ð. Herceg / Applied Mathematics and Computation 295 (2017) 95–113

101

Let us show that T f (z ) = O(ε k ), k ≥ 1, otherwise if T f (z ) = O(1 ) then (19) would define linearly convergent quasiNewton’s method, which is not our goal. Taylor’s expansions of f(z), f (z) and f  (z) about the zero α of multiplicity m are given by

⎧ f (m ) ( α ) m 2 3 ⎪ ⎨ f (z ) = m! ε (C0 + C1 ε + C2 ε + O(ε )), (m ) f  (z ) = f m(!α ) ε m−1 (U0 + U1 ε + U2 ε 2 + O(ε 3 )), ⎪ ⎩  (m ) f (z ) = f m(!α ) ε m−2 (V0 + V1 ε + V2 ε 2 + O(ε 3 )),

(20)

where

Cj =

m! f ( m+ j ) ( α ) , U j = (m + j )C j , V j = (m + j − 1 )(m + j )C j , C0 = 1. ( m + j )! f ( m ) ( α )

Using (20) and developing the denominator of Tf (z) in Taylor’s series, we find f (m ) ( α ) m!

(m )

(C0 + C1 ε + O(ε 2 )) · f m(!α ) (V0 + V1 ε + O(ε 2 ))  2 f (m ) ( α ) 2 )) ( U + U ε + O ( ε 0 1 m! m − 1  2C1 = 1−m+m + 2 ε + O (ε 2 )

t = T f (z ) = 1 − m + m

m

=

m C0

2C1 2 f (m+1) (α ) ε + O (ε 2 ) = ε + O ( ε 2 ) = O ( ε ). m m ( m + 1 ) f (m ) ( α )

Therefore, the iterative formula (19) is meaningful. To find the asymptotic error constant for the third order family of Gander’s type for multiple zeros (19), we use symbolic computation in computational software Mathematica. First, we introduce the following notation: f = f (z ), f1 = f  (z ), f2 = f  (z ), fm = f (m ) (α ), Ck = e = ε = z − α , er = e1 = Gm ( f ; z ) − α .

m! f ( m+k ) ( α ) , ( m + k )! f ( m ) ( α )

PROGRAM 3: ASYMPTOTIC ERROR CONSTANT OF GANDER-LIKE FAMILY f = e^m*fm/m!*(1 + C1*e + C2*e^2 + C3*e^3); f1 = D[f, e]; f2 = D[f1, e]; t = 1 - m + m*Expand[f*f2]*Series[1/f1^2, {e, 0, 4}] // FullSimplify; er = e - m*f/f1*(h0 + h1*t + h2*t^2/2 + h3*t^3/6) // FullSimplify; e1 = Series[Collect[er, e], {e, 0, 4}] Out[e1]= e(1 − h0 ) +

+

e2 (h0C1 − 2h1C1 ) m

e3 2mC2(h0 − 3h1 ) − C12 (mh0 + h0 − 3mh1 − 5h1 + 2h2 ) m2





+ O e5

Regarding the last expression for e1 we conclude that necessary and sufficient conditions which guarantee cubic convergence of the iterative methods (19) are given by

h ( 0 ) = 1,

h ( 0 ) = 1 /2 ,

|h (0 )| < ∞,

(21)

which coincides with (16). This is quite expected since we have dealt with the function F (z ) = f (z )1/m which has a simple zero. We recall that the argument of h is t given by (18) in the case of multiple zeros. Therefore, we have obtained a general family of iterative methods for finding multiple zeros

f (z ) Gm ( f ; z ) = z − m  · h(T f (z )), f (z )

T f (z ) = 1 − m + m

f (z ) f  (z ) , f  ( z )2

(22)

which is cubically convergent under the condition (21). The method (22) will serve as a starting point for the construction of fourth order method using accelerating formula (6). From the expression for e1 from PROGRAM 3 we observe that the asymptotic error constant (AEC) of the method (22) is

AEC (22 ) =

(m + 3 − 4h (0 ))C12 − 2mC2 2m2

.

(23)

I. Petkovi´c, Ð. Herceg / Applied Mathematics and Computation 295 (2017) 95–113

102

Using (11) and the substitution Ck = Bm+k /Bm (see Section 2), the expression (23) becomes

AEC (22 ) =

(m + 3 − 4h (0 ))B2m+1 − 2mBm Bm+2 2m2 B2m

.

Some examples of simple functions h which satisfy the conditions (21) are given below:

⎧ h1 (t ) = (1 + t/4 )2 , ⎪ ⎪ ⎪ ⎪ h2 (t ) = 1 + t/2 + bt 2 , b is arbitrary, ⎪ ⎪ ⎪ ⎪ t ⎪ ⎪ ⎪ ⎪h3 (t ) = 1 + 2(1 + bt ) , b is arbitrary, ⎪ ⎨ h4 (t ) = (1 − 12 t )−1 , Halley-like method, ⎪ ⎪ ⎪ 1 + ( 12 + b)t + ct 2 ⎪ ⎪ h5 (t ) = , b, c, d are arbitrary ⎪ ⎪ 1 + bt + dt 2 ⎪ ⎪ √ ⎪h (t ) = 1/ 1 − t , Ostrowski-like method, ⎪ ⎪ 6 ⎪ √ ⎩ h7 (t ) = 2/(1 + 1 − 2t ), Euler-like method.

(24)

Note that h3 and h4 can be obtained from h5 setting c = d = 0. We also observe that the iterative formula of Werner’s type for multiple zeros



T f (z ) f (z ) Wm ( f, β ; z ) = z − m  1+ f (z ) 2(1 − β T f (z ))

 (25)

arises from h3 given by (24). In the case of simple zeros (m = 1) h3 leads to the Werner method (15). Our goal is to increase the convergence speed of the third order Traub–Gander’s type family (22) using the generating formula (6). PROGRAM 4: ACCELERATED FOURTH ORDER METHODS OF TRAUB–GANDER’S TYPE t[z_ ] = m (f[z] f’’[z])/f’[z]^2 - (m-1) g[z_ ] = z - m f[z]/f’[z] h[t[z]]; g1[z_ ] = FullSimplify[D[g[z], z]]; f[z] = u*f’[z]; f’’[z] = (t+m-1)/(m*u) f’[z]; f’’’[z] = 6*A3*f’[z]; p = FullSimplify[z - m f[z]/f’[z]*h[t]]; c = g1[z] // FullSimplify; cor = (z - p)/(1 - c/3) // FullSimplify Out[cor]=

3muh[t] 2 + h[t] − th[t] − (3m(−1 + t ) + 2(−1 + t )2 + m2 (1 − 6A3u2 ))h [t]

Therefore, the accelerating family of iterative methods of Traub–Gander’s type for multiple zeros is given by

Gm ( f ; z ) = z − cor = z−

3muh(t )



2 + (1 − t )h(t ) − 3m(t − 1 ) + 2(t − 1 )2 + m2 (1 − 6A3 u2 ) h (t )

,

(26)

where t(z) is replaced by t for brevity. Note that t in h (t) is treated as a simple argument (not as a function of z). Theorem 4. The order of convergence of Gander-like method (26) for finding multiple zeros is four. The proof of Theorem 4 follows directly according to Theorem 1. In a similar manner as in Section 2, using symbolic computation and a program in Mathematica, after laborious work we determine the asymptotic error constant of the method (26)

AEC (26 ) =

mBm Bm+3 − Bm+1 Bm+2 (−4h (0 ) + m + 3 ) m2 Bm

2

+

B3m+1 (−12h (0 )(3m + 4 ) + 8h (0 ) + 3(m + 1 )(2m + 7 )) 18m3 Bm 3

,

where Bk is given by (11). Remark 4. We note that PROGRAM 4 can be used for the construction of various fourth order methods for multiple zeros by defining h in different ways (for instance, choosing h from the list (24)). The iterative formula (26) is general enough and produces a lot of particular methods for finding multiple zeros. We give some particular iterative methods that follow from (26):

I. Petkovi´c, Ð. Herceg / Applied Mathematics and Computation 295 (2017) 95–113

103

h(t ) = 1 + t/(2 − 2β t ), Werner-like family of order four:

Wm ( f, β ; z ) = z −



3mu(β t − 1 )((2β − 1 )t − 2 )



m2 6A3 u2 − 1 + β t 2 3 + t + 6β − 2β t − 12β t + 3t (1 − t )

.

(27)

From Werner-like family (27) we obtain some particular methods taking various values for parameter β . β = 1/2, Halley-like method of order four, formula (10); β = 0, Chebyshev-like method of order four:

Cm ( f ; z ) = z −

3m ( 3 − m )u + 6m2 A2 u2 ; (m + 1 )(4 − m ) + 6m(m − 1 )A2 u + 6m2 (A3 − 2A22 )u2

(28)

β = 1, Super Halley-like method of order four: SHm ( f ; z ) = z −

3(m + 1 )u − 6(2m + 1 )A2 u2 + 12mA22 u3

m + 5 − 6(m + 3 )A2 u + 6(2A22 (m + 1 ) + A3 )u2 − 8mA32 u3

;

β → ∞, Halley-like method of order three: Hm ( f ; z ) = z −

f ( z )/ f  ( z ) f (z ) f  (z ) m+1 − 2m 2 f  ( z )2

(see, also (4)). Since the last iterative method Hm is obtained when β → ∞ and has the decreased convergence order three, it is recommendable to avoid the choice of β in (27) whose absolute value is large. This is obvious from Table 3 (except for f3 ) for Werner-like method dealing with β = 15; indeed, we face only a cubic convergence. We finish this section with an interesting question: Is there any connection between the Laguerre-like family (2) and the family of Traub–Gander’s type (22)? Comparing (2) and (22) we find that the function h(t) is of the form

h(t ) =

ν f (z ) f  (z ) , s ∈ {−1, 1}, t = 1 − m + m . f  ( z )2 1 + s (ν − 1 )2 − ν (ν − 1 )t 

(29)

First, let ν > 1, which means that we choose s = 1 in (29). Hence

h(t ) =

1+



ν (ν − 1 )2 − ν (ν − 1 )t

(30)

and

h (t ) =

( ν − 1 )ν 2 .  2 (ν − 1 ) − ν (ν − 1 )t (1 + (ν − 1 )2 − ν (ν − 1 )t )2



2

(31)

Setting t = 0 in the expressions for h(t) and h (t) given by (30) and (31), we find that

h ( 0 ) = 1 , h ( 0 ) = 1 /2 . Therefore, the conditions (21) are satisfied, which means that there is the connection between the families (2) and (22) if ν > 1, that is, assuming that the sign “+” should be taken in front of the square root in (2) (see the discussion at the beginning of Section 2). It is also clear (by taking m = 1) that the Laguerre family (1) can be obtained from the Traub– Gander family (13)–(14) for simple zeros (considering that the conditions (16) hold). Note that the functions h6 and h7 from the list (24) can be obtained as special cases of the function h given by (29). Now, let ν < 1 and, consequently, s = −1 should be taken in (29). It can be proved that the conditions (21) cannot be achieved. If ν is very small in magnitude, then h(0) ≈ 1 and h (0) ≈ 1/2, but this is not sufficient to confirm the equivalence of (2) and (22). However, in a limit case when ν → 0 (the case of Halley-like method in (2)) we choose the sign s = −1 and after rationalization of the denominator of (29) we find



h(t ) = and

h (t ) =

( ν − 1 )2 − t ( ν − 1 )ν + 1 t (ν − 1 ) − ν + 2

(ν < 1 )

 (1 − ν )(−(t − 1 )ν 2 + (t − 2 )ν + 2( −(ν − 1 )((t − 1 )ν + 1 ) + 1 )) ( ν < 1 ).  2(t (ν − 1 ) − ν + 2 )2 −(ν − 1 )((t − 1 )ν + 1 )

Hence, in the limit process, we obtain

h (0 ) =

1+



( ν − 1 )2 → 1 when ν = 0 2−ν

I. Petkovi´c, Ð. Herceg / Applied Mathematics and Computation 295 (2017) 95–113

104

Table 1 Tested functions. f(x)

2 2 2 f 1 (x ) = ex +6x−16 − 1 (x − 1 )3 − 1

x2  4 2 f 2 (x ) = xe − sin x + 3 cos x + 5

 √  2 f 3 (x ) = x sin x − 2 sin (x/ 2 ) x5 + x2 + 100

x2 +4x+5 3 2 − 1 sinh(2 + i + ix ) f 4 (x ) = e

α

m

x0

4

1.7

2

4 6

−0.7 −1.2

−1.207647827130918927009 . . . 0

5

−1.7

−2 + i

Table 2 Errors of approximations: Laguerre-like methods. Functions f1

f2

f3

f4



and

h ( 0 ) =

Methods

k=1

k=2

k=3

COC by (32)

Euler-like IM, ν = 2 Halley-like IM, ν → 0 Ostrowski-like IM, ν → ∞ Laguerre-like IM ν = −10

1.28(−2 ) 3.62(−3 ) 6.50(−3 ) 5.72(−3 )

3.91(−7 ) 2.95(−10 ) 7.23(−10 ) 2.08(−10 )

2.80(−25 ) 1.30(−38 ) 1.14(−37 ) 3.16(−40 )

4.0020 4.0034 3.9933 4.0049

Euler-like IM, ν = 2 Halley-like IM, ν → 0 Ostrowski-like IM, ν → ∞ Laguerre-like IM ν = −10

2.46(−3 ) 1.11(−2 ) 1.18(−2 ) 1.74(−2 )

1.005(−6 ) 5.67(−9 ) 4.21(−8 ) 2.44(−8 )

2.30(−24 ) 3.84(−34 ) 1.18(−30 ) 9.25(−32 )

4.0039 3.9970 3.9939 3.9927

Euler-like IM, ν = 2 Halley-like IM, ν → 0 Ostrowski-like IM, ν → ∞ Laguerre-like IM ν = −10

3.84(−2 ) 6.17(−2 ) 4.80(−2 ) 4.91(−2 )

6.53(−11 ) 4.82(−10 ) 1.61(−10 ) 1.76(−10 )

1.67(−55 ) 1.28(−50 ) 1.89(−53 ) 3.44(−53 )

5.0050∗ 5.0053∗ 5.0656∗ 5.0571∗

Euler-like IM, ν = 2 Halley-like IM, ν → 0 Ostrowski-like IM, ν → ∞ Laguerre-like IM ν = −10

0.138 0.468 7.27(−2 ) 6.06(−2 )

7.07(−5 ) 2.61(−3 ) 3.49(−6 ) 1.62(−6 )

4.56(−18 ) 2.71(−12 ) 1.87(−23 ) 8.34(−25 )

3.9928 3.9094 3.9985 4.0012

COC rc is 5 instead of 4 for f3 ; the explanation is given in Comment 1 below.

 (1 − ν )(ν 2 − 2ν + 2 + 2 (ν − 1 )2 ) → 1/2 when ν = 0.  2 ( ν − 2 )2 ( ν − 1 )2

Therefore, the conditions (21) are satisfied in the case of Halley-like method (10) (ν → 0). In fact, the choice ν = 0 in Laguerre-like method (2) corresponds to the choice of parameter β = 1/2 in the method (25). 4. Numerical examples We have tested iterative methods of the fourth order generated by the recurrence relation (6) in Sections 2 and 3. Since these methods possess very fast convergence and, thus, produce approximations of high accuracy, we have used multiprecision arithmetic employing computer algebra system Mathematica. The tested functions are displayed in Table 1, together with the multiplicity m, the initial approximation x0 and the zero α of fk . Sometimes, in Sections 4 and 5 we use the abbreviations IM for iterative method. Theorems 2 and 4 give the theoretical order of convergence of the accelerated iterative methods. However, it is always convenient to check the convergence behavior in practice. For this reason, in our numerical experiments we have calculated the so-called computational order of convergence rc (COC, for brevity) using the approximate formula

rc =

log | f (zk+1 )/ f (zk )| . log | f (zk )/ f (zk−1 )|

(32)

Note that the formula (32) is a special case of a general formula given in [36]. In Table 2 we have presented the errors of approximations εk = |zk − α| (k = 1, 2, 3 ) produced by four methods belonging to Laguerre’s class of iterative methods considered in Section 2. The stopping criterion has been given by the inequality | f (zk )|1/m < 10−5 . The denotation A(−h ) means A × 10−h and it was also used in Table 3. Comment 1. Many authors in their papers often assert that (their) proposed methods are equal or even better than the existing ones, deducing such conclusion according to only several numerical examples. In reality, such assertion is rarely true. Namely, the behavior of an iterative method depends on the quality of chosen initial approximation as well as on the asymptotic error constant of iteration function, depending directly on the characteristics of function whose zero is sought. For this reason it is hard to make a general classification of iteration functions in the sense of their ranking, even if they are considered in the class of methods with the same order of convergence. From Table 2 we observe that none of the employed

I. Petkovi´c, Ð. Herceg / Applied Mathematics and Computation 295 (2017) 95–113

105

Table 3 Errors of approximations: Traub–Gander-like methods. Functions f1

f2

f3

f4

∗∗

Methods

k=1

k=2

k=3

COC by (32)

Chebyshev-like IM Werner-like IM, β = 1 Werner-like IM, β = 2 Werner-like IM, β = 15

0.105 5.46(−3 ) 1.79(−2 ) 3.52(−2 )

2.56(−3 ) 2.84(−9 ) 9.95(−6 ) 2.83(−3 )

5.56(−10 ) 6.92(−35 ) 6.63(−19 ) 8.89(−7 )

4.4500 3.9962 4.0182 3.0782

Chebyshev-like IM∗∗ Werner-like IM β = 1 Werner-like IM, β = 2 Werner-like IM, β = 15

3.45(−2 ) 2.01(−2 ) 3.76(−2 ) 7.02(−2 )

6.61(−7 ) 1.95(−7 ) 3, 31(−5 ) 5.58(−3 )

7.49(−26 ) 1.64(−27 ) 1.30(−17 ) 1.64(−6 )

3.9964 3.9940 4.0266 3.0895

Chebyshev-like IM Werner-like IM β = 1 Werner-like IM, β = 2 Werner-like IM, β = 15

0.103 3.47(−2 ) 3.27(−3 ) 3.90(−2 )

7.11(−9 ) 3.63(−11 ) 5.40(−16 ) 1.54(−9 )

2.03(−44 ) 8.87(−57 ) 6.45(−80 ) 1.55(−46 )

4.9628∗ 5.0790∗ 5.0013∗ 4.9977

Chebyshev-like IM Werner-like IM β = 1 Werner-like IM, β = 2 Werner-like IM, β = 15

0.737 0.57 0.406 0.473

7.96(−2 ) 4.98(−2 ) 2.77(−2 ) 2.10(−2 )

4.18(−6 ) 1.20(−6 ) 2.39(−7 ) 5.47(−7 )

4.0600 4.2703 4.2980 3.3799

The initial approximation x0 = −0.8 was used instead of x0 = −0.7 given in Table 2.

14 000 12 000 10 000 8000 6000 4000 2000 1.5

1.0

0.5

0.5

1.0

1.5

Fig. 2. The graph of the function f2 .

methods have shown the best convergence behavior for all tested examples, which is quite expected according to the above discussion. In particular, from Table 2 we observe that all tested methods run with efforts at the beginning of iterative process in the case of complex zero α = −2 + i of the function f4 , especially Euler-like and Halley-like methods. The main reason for this relatively modest convergence behavior lies in the fact that the tested methods start with real initial approximation x0 = −1.7 although the zero α = −2 + i is complex. In the continuation of iterative procedure, the obtained results confirm the fourth order of convergence of these two methods. Similar and even worse behavior for f4 can be seen in Table 3 for iterative methods discussed in Section 3. In such situations values of COC rc , calculated by (32), are not quite reliable, see the first line and the last three lines in Table 3. However, one additional iteration confirms precisely the fourth order of convergence of all tested methods for all functions f1 − f4 . Furthermore, solving the equation f3 (z ) = 0 we notice unexpectedly fast convergence with COC very close to five (marked by ∗ in Tables 2 and 3, the last column), although the theoretical order is four for all methods. This increased convergence (4 ) comes from the fact that Lm ( f3 , ν ; z )|z=α = 0 and Gm(4) ( f, β ; z )|z=α = 0 in the case of the function f3 . Comment 2. Testing Chebyshev-like method for f2 we have observed a strange behavior of approximation accuracy (marked by ∗∗ in Table 3). Namely, this method (formula (28)) converges very slowly for the initial approximation x0 = −0.7, used for all numerical experiments. However, only small shift to x0 = −0.8 gives very fast convergence, as it can be seen from the results given in Table 3. The explanation lies in the shape of graph of function f2 on the interval [−1.3, −1], where this graph is almost flat, see Fig. 2. Various iteration functions also show strange behavior in such situations. Besides, comparing approximation errors in Tables 2 and 3 we observe that, at the first sight, the iterative methods constructed in Section 2 show better results in average. However, we immediately notice the exception; iteration functions from Section 3 produce more accurate approximations for the function f3 . All mentioned facts are in accordance with the remark discussed in Comment 1 that, in general, it is hard to make the rank of iterative methods concerning their convergence quality.

I. Petkovi´c, Ð. Herceg / Applied Mathematics and Computation 295 (2017) 95–113

106

5. Basins of attraction of the presented methods There are various approaches to compare and possibly rank iterative methods for solving nonlinear equations. For a long time the comparison of different algorithms was based on comparisons of (i) the number of iterations that provide the convergence (usually, controlled by a suitable termination criterion), (ii) the number of function evaluations per iteration, and (iii) the amount of CPU time required to satisfy the given termination criterion. The mentioned quantitative characteristics can be presented to a satisfactory extent by the computational efficiency defined by the efficiency index E = r 1/d , where r is the order of convergence and d is the computational cost which is defined as the number of new pieces of information needed per iteration. Usually, d is the number of function evaluations per iteration, see Traub [15]. The described comparison method by computational efficiency assumes that the chosen initial approximation to the sought zero of a given function is sufficiently good to provide the convergence, which is obviously a disadvantage. Actually, as emphasized in [37], convergence behavior of any iterative method does depend in a complicated and unpredictable way on the starting point. In practice, searching for a closed set(s) in the complex plane, formed from the starting points that converge to the zero, is a difficult task which often consumes a lot of CPU time. These sets for some algorithms have a smooth convergence pattern and others have a rather chaotic pattern, which points to their complex and intricate structure, see, e.g. [38]. Sometimes they are fractals in nature. At the beginning of the 21st century, the development of powerful computer hardware and computer graphics have enabled the development of a new methodology for visual study of convergence behavior of algorithms for solving nonlinear equations, based on the notion of basin of attraction. Definition 1. Let f be a given sufficiently many times differentiable function in some complex domain S ⊆ C with simple or multiple zeros α1 , α2 , . . . , αλ ∈ S, and a (convergent) root-finding iteration defined by

zk+1 = g(zk ), the basin of attraction for the zero α i is defined as follows:

B f,g (αi ) = {ζ ∈ S | the iteration zk+1 = g(zk ) with z0 = ζ converges to

αi }.

The basin of attraction is a method for visual study of the behavior of root-finding methods as a function of the various starting points. It is worth noting that a basin of attraction is not only a method to visually understand how a root-finding method behave, it also offers some qualitative issues as a criterion for comparison. For these useful properties, the visual study using basins of attraction is another fruitful and efficient measure together with the methodology based on computational efficiency, described at the beginning of this section. In general, a complete method for comparing root-solvers should use both methodologies. Let us emphasize that the methods considered in this paper are of the fourth order and use four function evaluations (of f, f , f  , f   ) per iteration, so that they have the same computational efficiency (neglecting, for simplicity, so called combined cost such as numbers of basic operations and square-rooting). For this reason, we use only the basins of attraction and results of numerical experiments to compare algorithms presented in this paper. The dynamic study for the comparison of root-finding algorithms for simple zeros was launched by Stewart [39] and Varona [5] and continued in the works of Amat et al. [40–42], Scott et al. [37], Chicharro et al. [43], Neta et al. [44], Argyros and Magreñan [45], and others. Basins of attraction for methods for finding multiple zeros can be found in the papers of Neta and Chun [16,38,46,47] and [48]. The dynamic study has also considered in the recent papers [49–54]. In this section we give the dynamic study of the iterative methods (8)–(10) and (28) using basins of attraction and six test polynomials with multiple zeros to generate their basins. Note that the terms zero and root are used interchangeably in the case of algebraic polynomials, the case appearing in this section. We assign a color to each attraction basin of a root in the following manner: each basin will have a different color and the shading is darker when the number of iterations is higher. Starting points for which a method does not converge are colored black. We allow a maximum of 40 iterations from every initial point; if the number of iterations exceeds 40 then we treat the initial point as divergent one and paint it black. All methods are tested on equally spaced points in the square S centered at the origin. Let us note that, in general, the square S can be always chosen in such a way that contains all roots of a tested polynomial. Namely, it is well known that all (simple or multiple) roots α1 , . . . , αq of the polynomial

Pn (z ) = an zn + an−1 zn−1 + · · · + a1 z + a0

( an = 0 )

of degree n (≥q) are contained in the disk D = {z | |z| ≤ R} centered in the origin and having the radius R, where

R = 2 max |an− j /an |1/ j 1≤ j≤n

(see, e.g., Henrici [18, p. 457]). Then S is the tangent square to the disk D with sides parallel to the coordinate axes. Our methods have been tested on the 360,0 0 0 equally spaced points of the square S = {−3, 3} × {−3, 3} centered at the origin. For each basin we collect data about the CPU time in seconds for all 360,0 0 0 points, average number of iterations (for all point of the square S) required to reach the accuracy |zk − α| < 10−8 and the number of dark (divergent) points for each method and each example. These data are given in Table 4. We implemented our experiments on PC Fujitsu P500 E85 with Intel processor i7-2600 working on 3.4 GHz. Basins of attraction were plotted for six algebraic polynomials running the methods (8), (9), (10) and (28), in this order. These basins are presented in Figs. 3–8 for the polynomials p1 − p6 .

I. Petkovi´c, Ð. Herceg / Applied Mathematics and Computation 295 (2017) 95–113

Fig. 3. Euler-like IM (8), Ostrowski-like IM (9), Halley-like IM (10), Chebyshev-like IM (28) for the roots of the polynomial (z5 − z )4 .

Fig. 4. Euler-like IM (8), Ostrowski-like IM (9), Halley-like IM (10), Chebyshev-like IM (28) for the roots of the polynomial (z4 − 0.75z2 − 0.25 )3 .

Fig. 5. Euler-like IM (8), Ostrowski-like IM (9), Halley-like IM (10), Chebyshev-like IM (28) for the roots of the polynomial (z4 − 1 )5 .

Fig. 6. Euler-like IM (8), Ostrowski-like IM (9), Halley-like IM (10), Chebyshev-like IM (28) for the roots of the polynomial (z3 + 3z2 + z − 5 )3 .

Fig. 7. Euler-like IM (8), Ostrowski-like IM (9), Halley-like IM (10), Chebyshev-like IM (28) for the roots of the polynomial (z4 − z )3 .

107

I. Petkovi´c, Ð. Herceg / Applied Mathematics and Computation 295 (2017) 95–113

108

Table 4 Iteration data for Examples 1–6. Examples

Methods

A

B

C (%)

Euler-like IM (8)

466.8

6.33

0

Example 1 p1 ( z ) = ( z 5 − z )4

Ostrowski-like IM (9) Halley-like IM (10) Chebyshev-like IM (28) Euler-like IM (8)

238.9 259.1 487.7

4.50 5.21 5.56

0 0 0 0

Example 2 p2 (z ) = (z4 − 0.75z2 − 0.25 )3

Ostrowski-like IM (9) Halley-like IM (10) Chebyshev-like IM (28) Euler-like IM (8)

253.2 282.0 341.0

4.08 4.95 5.83

0 0 0 0

Example 3 p3 ( z ) = ( z 4 − 1 )5

Ostrowski-like IM (9) Halley-like IM (10) Chebyshev-like IM (28) Euler-like IM (8)

163.6 197.3 460.9

3.70 4.64 5.04

0 0.26 0.31 0

Example 4 p4 ( z ) = ( z 3 + 3z 2 + z − 5 )3

Ostrowski-like IM (9) Halley-like IM (10) Chebyshev-like IM (28) Euler-like IM (8)

202.1 224.1 414.4

3.12 3.64 5.82

0 0 0 0

Example 5 p5 ( z ) = ( z 4 − z )3

Ostrowski-like IM (9) Halley-like IM (10) Chebyshev-like IM (28) Euler-like IM (8)

206.5 229.6 660.7

4.05 4.66 5.04

0 0 0 0

Example 6 p6 (z ) = (z5 + 2z4 + 2z3 + 10z2 + 25z )2

Ostrowski-like IM (9) Halley-like IM (10) Chebyshev-like IM (28) Euler-like IM (8)

294.4 345.7 471.9

3.21 3.87 5.60

0 0 0 0

Average over all examples

Ostrowski-like IM (9) Halley-like IM (10) Chebyshev-like IM (28)

226.5 256.3

3.78 4.49

0 0.043 0.052

A—CPU time in second; B—Average number of iterations per point; C— Percentage of divergent (“black”) points.

Fig. 8. Euler-like IM (8), Ostrowski-like IM (9), Halley-like IM (10), Chebyshev-like IM (28) for the roots of the polynomial (z5 + 2z4 + 2z3 + 10z2 + 25z )2 .

Example 1. In the first example we have taken the polynomial

p1 ( z ) = ( z 5 − z )4 with roots at 0, ± 1, ± i, each with multiplicity 4. The basins of attraction are given in Fig. 3 for all tested methods. Basins of attraction of all roots except α = 0 for the methods (8) and (9) are relatively large and unvaried. The basins of the root α = 0 for the remaining two methods (10) and (28) have interesting propeller-like shapes placed along diagonal lines. Between the areas of unvaried well-behaved convergence of the methods (9), (10) and (28) there are fractal shapes. In terms of CPU time and the number of iterations, Ostrowski-like IM (9) is the fastest, followed by (10), (28) and, finally, by Euler-like IM (8) that requires doubled CPU time and number of iterations in reference to the method (9). It is obvious that Ostrowski-like IM (9) is the best in spite of somewhat/a little complicated basin. None basin of attraction have dark points. Example 2. We have considered the polynomial

p2 (z ) = (z4 − 0.75z2 − 0.25 )3 with the zeros ± 1, ± 0.5i, each of multiplicity 3. The basins of attraction of Ostrowski-like IM (9), Halley-like IM (10) and Chebyshev-like IM (28), displayed in Fig. 4, are similar to a good extent for all roots concerning both size of areas and shapes. Among all six tested polynomials, the polynomial p2 requires the longest CPU time and number of iteration considering the

I. Petkovi´c, Ð. Herceg / Applied Mathematics and Computation 295 (2017) 95–113

109

employed methods. Large unvaried areas surrounding each root of the polynomial p2 are bonded with almost straight lines, which is a good convergence property. We observe several blobs on the boundaries of particular basins, from which larger are related to the roots −0.5i and 0.5i. These three methods are also pretty fast and require considerably less CPU time than Euler-like IM (8). Euler-like method (8) has a strange symmetric basin of attraction regarding all roots, with smaller area for imaginary roots ± 0.5i. According to these facts and the data from Table 4 concerning CPU time and average number of iterations, Euler-like IM (8) shows the worst convergence behavior for the polynomial p2 . On the other hand, Ostrowski-like IM (10) is the best, see Table 4 for p2 . None basin have black points. Example 3. The polynomial

p3 ( z ) = ( z 4 − 1 )5 has the zeros ± 1, ± i of multiplicity 5. Again, the methods (8), (9) and (28) show the best convergence behavior. Their basins are displayed in Fig. 5. The basin attractors of these methods for each root are divided into four disjoint areas by straight (diagonal) lines, which is very desirable property. These three methods have some small circular blubs along diagonal lines, the least method (9), the most method (28). It is interesting that the basins for the methods (10) and (28) have black points on real axis 962 and 1104, respectively, but their number does not exceed 0.31% of 360,0 0 0 tested points. Euler-like IM (8) has large basins divided by diagonal straight lines, but also several sub-domains of convergence for distant roots, placed symmetrically. In terms of CPU time and number of iterations, the method (9) is again the best, the rank of the remaining methods is (10), (28), (8). As in the case of Example 2, Euler-like IM (8) is again the worst, it consumes almost doubled CPU time and requires the greatest number of iterations in reference to other methods. Example 4. The next example is a polynomial with the roots 1, −2 ± i of multiplicity 3,

p4 ( z ) = ( z 3 + 3z 2 + z − 5 )3 . The basin attractors of these roots are displayed in Fig. 6. Basins of attraction consist of large unvaried area surrounding each root of the polynomial p4 , which points to a good convergence behavior of all tested methods. We observe nothing interesting in Fig. 6 except the following two not so important facts: (i) basins of the methods (8) and (9) are pretty similar and (ii) Halley-like IM (10) and Chebyshev-like IM (28) have few blobs in different colors on the boundary lines. According to data given in Table 1, as in the case of all six polynomials p1 − p6 Ostrowski-like IM (9) is the fastest, followed by (10), (28) and, finally, by Euler-like IM (8) that requires doubled CPU time and number of iterations in reference to the best method (9). There are none black points for any method. Example 5. The polynomial

p5 ( z ) = ( z 4 − z )3 . has four roots at 0, 1, −0.5 ± 0.866025...i, each with multiplicity 3. The basins are displayed in Fig. 7. Almost all conclusions given for the polynomial p1 (z ) = (z5 − z )4 hold for the polynomial p5 (z ) = (z4 − z )3 . This could be expected since these polynomials are of similar form. While the basins of attraction for the roots α2 = 1, α3,4 = −0.5 ± 0.866025i are large and unvaried, the basin for the root α1 = 0 is small for the Euler-like IM (8), separated to four disjoint area for Ostrowski-like IM (9) and has propeller-like shape for the methods (10) and (28). Again, Ostrowski-like IM (9) is the fastest, Euler-like IM (8) is the slowest, see Table 4. There are none black points for any method. Example 6. The polynomial

p6 (z ) = (z5 + 2z4 + 2z3 + 10z2 + 25z )2 has the roots 0, −2 ± i, 1 ± 2i of multiplicity 2. The basins of attraction for the polynomial p6 are more complicated than the basins of the previously tested polynomials, and has intricate regions of convergence, see Fig. 8. In the case of the roots −2 ± i and 1 ± 2i the corresponding basins are mainly large and have almost unvaried area. The most interesting basin appears for the root α = 0. Each of the basins for this root has blobs on the boundary lines. Besides, we observe the tendency of enlargement of the basins for α = 0 for Halley-like IM (10) and Chebyshev-like IM (28). In terms of CPU time and number of iterations, the rank remains the same as in the previous examples: Ostrowski-like IM (9) is the fastest, followed by the methods (10), (28) and (8). Again, Euler-like IM (8) consumes the longest CPU time in reference to the remaining methods. For all methods the basins of attraction have none dark points. Concluding remarks on basins of attraction: First, according to the size and shapes of basins of attractions for all methods and all 6 examples we can conclude that most basins consist of a large unvaried area, which points to the wellbehaved convergence of the considered methods. In the couple of examples convergence behavior of Euler-like method (8) is not very well. The absence of dark (divergent) points in 5 examples and very few dark points in Example 3 for the methods (10) and (28) (not exceeding 0.31% out of 360,0 0 0 points taken as starting points) is also a very good characteristic of the tested methods. According to the average CPU time for each basin and average number of iterations for 360,0 0 0 points listed in Table 4, we conclude that Ostrowski-like IM (8) is the fastest (215.1 s), followed by (10) (226.5 s), (28) (256.3 s) and (8) (471.9 s). The same rank is valid regarding average number of iterations. It is interesting to note that the same rank

I. Petkovi´c, Ð. Herceg / Applied Mathematics and Computation 295 (2017) 95–113

110

holds for all tested examples. Therefore, the presented dynamic study and tested functions have shown that Ostrowski-like IM is the best. We also observe that Euler-like IM (8) requires approximately doubled CPU time related to the remaining three methods. According to the above-mentioned facts it is clear that Euler-like IM (8) is the worst. 6. Conclusions We have presented a useful approach for constructing zero-finders of the increased order n + 1 starting from a method of order n. Combining symbolic computation and a powerful accelerating recurrent formula, we have generated two new fourth order families of iterative methods for approximating multiple zeros. The derivation process contains lengthy and pretty complex mathematical expressions so that we were forced to use symbolic computation through several programs realized in computer algebra system Mathematica. Choosing nontrivial test functions with real and complex zeros we have performed numerical examples using multi-precision arithmetic in computational software Mathematica in Section 4. These examples have shown that the theoretical order of convergence of the newly developed methods and the computational order rc (see (32)) match well. Numerical examples presented in Section 4 have not been sufficient to give the answer which of the considered methods is the best. For this reason we have applied in Section 5 another methodology for estimating the quality of root-solvers based on basin attractors. This is a method for visual study of the behavior of root-finding methods as a function of the various starting points. Besides visual insight into the convergence behavior of iterative methods, a suitably written program for plotting basins of attraction can also offer some qualitative issues as a criterion for comparison, such as CPU time and average number of iterations required to satisfy termination criterion (Table 4). Such a program, realized in Mathematica, is given in Appendix. In general, it is hard to make absolutely valid rank of iterative methods regarding their quality. However, the analysis of iterative methods based on the two described methodologies, computational efficiency (including numerical experiments) and dynamic study, provides quite satisfactory insight into the quality of the considered methods, which is certainly of benefit for the user. Acknowledgments The authors wish to thank anonymous reviewers for careful reading and valuable comments which improved the quality of the paper. This work is supported by the Serbian Ministry of Education and Science. Appendix PROGRAM—BASIN OF ATTRACTIONS AND INFORMATION OF ITERATIONS Domain for plotting basins of attractions is chosen to be the square S = {z = x + iy : −a ≤ x ≤ a , −b ≤ y ≤ b} centered at the origin. The program written in Mathematica gives the main result fractColor of type Fractal, which includes basin of attraction plotted with its step-parameter M defined as a number of points along x and y axis (M = 600 in the presented experiments), number of starting points M2 = MxM (=360,0 0 0) belonging to the square S, total CPU time of iterations for all M2 starting points from S, number of convergent points, number of divergent (black) points, average number of iterations for each method and each example, total CPU time (in seconds) needed for completion of iterative procedures for all M2 = 360,0 0 0 points belonging to the square S. A test example for Chebyshev-like IM is displayed in Fig. 9.

Fig. 9. Outcome of the program: basin of attraction for Chebyshev-like IM (28) applied to the roots of the polynomial (z^5 − z)^4.

A particular test function is selected by evaluating DefineFn[k] with a concrete value k ∈ {1, 2, 3, 4, 5, 6} (six test functions). The chosen test function and its derivatives are defined in this part. The maximum number of iterations maxiter must be specified beforehand.

maxiter = 40;
DefineFn[k_] := (
  Switch[k,
    1, f[z_] = (z^5 - z)^4; m = 4,
    2, f[z_] = ((z^2 - 1) (z^2 + 0.25))^3; m = 3,
    3, f[z_] = (z^4 - 1)^5; m = 5,
    4, f[z_] = (z^3 + 3 z^2 + z - 5)^3; m = 3,
    5, f[z_] = (z^4 - z)^3; m = 3,
    6, f[z_] = (z^5 + 2 z^4 + 2 z^3 + 10 z^2 + 25 z)^2; m = 2;
  ];
  f1[z_] = D[f[z], z]; f2[z_] = D[f[z], {z, 2}]; f3[z_] = D[f[z], {z, 3}];
  cf = Compile[{{z, _Complex}}, f[z]];
  cf1 = Compile[{{z, _Complex}}, f1[z]];
  cf2 = Compile[{{z, _Complex}}, f2[z]];
  cf3 = Compile[{{z, _Complex}}, f3[z]];
  u = Compile[{{z, _Complex}}, cf[z]/cf1[z]];
  a2 = Compile[{{z, _Complex}}, cf2[z]/2/cf1[z]];
  a3 = Compile[{{z, _Complex}}, cf3[z]/6/cf1[z]];
  Print["f[z]=", f[z]];
  Print["Function zeros"];
  rootf = N[Union[Map[(z /. #) &, Solve[f[z] == 0, z]]]])

A concrete iterative method is given by the corresponding expression. For demonstration, the Chebyshev-like method (cebis) is defined and applied.

method = "CHEBYSHEV-LIKE METHOD";
cebis = Compile[{{z, _Complex}},
  z - (3 m (3 - m) u[z] + 6 m^2 a2[z] u[z]^2)/
      ((m + 1) (4 - m) + 6 m (m - 1) a2[z] u[z] +
       6 m^2 (a3[z] - 2 a2[z]^2) u[z]^2)];

The rootPosition function determines the index of the root based on the value of the approximation z.

rootPosition = Compile[{{z, _Complex}},
  Module[{i = 1},
    While[i <= Length[rootf],
      If[Abs[z - rootf[[i]]] < 10^-6, Return[i]]; i++];
    0],
  {{rootf, _Complex, 1}}];

For a given root index (ap) and number of iterations (iter), the function col generates a color in the HSB format (hue, saturation, brightness). Brightness takes values from the interval [0, 1], where 0 corresponds to maxiter iterations and 1 to a single iteration.

col[{ap_, z_, iter_}] := Module[{pp},
  pp = N[iter/maxiter];
  Apply[List, ColorConvert[Hue[
     Switch[ap,
       5, {0.12, 1, 1 - pp},
       4, {0.47, 1, 1 - pp},
       3, {0.3, 1, 1 - pp},
       2, {0.6, 0.5, 1 - pp},
       1, {1, 0.5, 1 - pp},
       0, {0, 0, 0}]], "RGB"]]]
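Before running the full basin-plotting program given below, the iteration itself can be checked on a single starting point. The following is a minimal sketch under the definitions above; the helper name orbit is ours, not part of the published program.

(* First n iterates of the Chebyshev-like method from the starting point z0;
   assumes DefineFn[k] and cebis have already been evaluated *)
orbit[z0_, n_] := NestList[cebis, N[z0], n]

DefineFn[1];            (* selects f[z] = (z^5 - z)^4 with m = 4 *)
orbit[1.2 + 0.3 I, 6]   (* the iterates should approach the zero z = 1 *)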

The iterative procedure is defined in the iter function. It runs the chosen iterative method for a supplied starting value z0 and returns the root index, the final approximation and the number of iterations.

iter[z0_] := Module[{z = z0, z1 = z0, i = 0, ok = True, rp},
  While[ok,
    z1 = cebis[z];
    If[! NumberQ[z1], Return[{rootPosition[z], z, i}]];
    i++; z = z1;
    rp = rootPosition[z];
    If[(rp != 0) || (i >= maxiter), Return[{rp, z, i}]]]]

The following functions extract information on the iterations (CPU time, number of iterations, number of dark points).

Area[f_Fractal] := Area /. List @@ f
Basin[f_Fractal] := Basin /. List @@ f
CPU[f_Fractal] := CPU /. List @@ f
IterAverage[f_Fractal] := IterAverage /. List @@ f
Points[f_Fractal] := Points /. List @@ f
Fn[f_Fractal] := Fn /. List @@ f
ConvPoints[f_Fractal] := ConvPoints /. List @@ f
Info[f_Fractal] := Print["\nArea: ", Area[f], "\nCPU: ", CPU[f],
  "\nIteration average: ", IterAverage[f],
  "\nNumber of points: ", Points[f],
  "\nConvergent points: ", ConvPoints[f]]

Definition of the StandardForm for Fractal expressions, used by Mathematica when displaying the output.

Format[Fractal[list__], StandardForm] :=
  (Info[Fractal[list]]; Basin[Fractal[list]])

Drawing the basin of attraction; the x-step is determined by (x1-x0)/(xn-1) and the y-step by (y1-y0)/(yn-1), where xn and yn are the numbers of points along the x and y axes. The result of the function is an expression of type Fractal which contains the plot and the information on iterations.

fractColor[x0_, x1_, xn_, y0_, y1_, yn_] :=
 Module[{tim, tab, tabc, tabConv, xk = 1. (x1 - x0)/(xn - 1),
   yk = 1. (y1 - y0)/(yn - 1), gr, numpoints = xn yn, iterAverage,
   rootMarkers},
  {tim, tab} = Timing[Table[iter[x + I y], {y, y0, y1, yk}, {x, x0, x1, xk}]];
  tabc = Map[col, tab, {2}];
  tabConv = Select[Flatten[tab, 1], ((#[[1]] != 0) && (#[[3]] <= maxiter)) &];
  iterAverage = N[Plus @@ #/Length[#]] &[Transpose[tabConv][[3]]];
  rootMarkers = {White, (Circle[{Re[#], Im[#]}, Scaled[0.009]] &) /@ rootf,
    Black, (Circle[{Re[#], Im[#]}, Scaled[0.011]] &) /@ rootf};
  gr = Graphics[{Translate[
      Scale[Raster[tabc], {(x1 - x0)/xn, (y1 - y0)/yn}, {0, 0}], {x0, y0}],
     rootMarkers}, Frame -> True,
    BaseStyle -> {FontFamily -> "Times", FontSize -> 16}];
  Fractal[Area -> {{x0, x1, xn}, {y0, y1, yn}}, CPU -> tim, Basin -> gr,
   Points -> numpoints, ConvPoints -> Length[tabConv],
   IterAverage -> iterAverage, Tab0 -> tab, Fn -> f[z]]]

Some error messages need to be switched off before running the fractColor function.

Off[CompiledFunction::cfne]; Off[Power::infy]; Off[Infinity::indet];

After executing all of the above preparatory code, plots can be generated by first invoking DefineFn and then fractColor:

Print[method];
DefineFn[1]  (* setting k = 1, we choose test function 1 defined above *)
fractColor[-3, 3, 600, -3, 3, 600]
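Besides the formatted display produced by the Format rule above, the returned Fractal expression can be queried programmatically through the accessor functions. The following is a brief sketch; the variable name res and the file name basin.png are ours.

res = fractColor[-3, 3, 600, -3, 3, 600];
Info[res]                        (* prints CPU time, iteration average and point counts *)
Export["basin.png", Basin[res]]  (* saves the plotted basin of attraction to a file *)

The textual outcome of the direct invocation above is shown next.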

OUTCOME

CHEBYSHEV-LIKE METHOD
f[z]=(-z+z^4)^3
Function zeros
{0., 1., -0.5 - 0.866025 i, -0.5 + 0.866025 i}

Area: {{-3, 3, 600}, {-3, 3, 600}}
CPU: 242.062500
Iteration average: 4.65912
Number of points: 360000
Convergent points: 360000

References

[1] J. Borwein, D. Bailey, Mathematics by Experiment, A K Peters, Ltd., Wellesley, MA, 2008.
[2] J.S. Cohen, Computer Algebra and Symbolic Computation: Mathematical Methods, A K Peters, Ltd., 2003.
[3] D.H. Bailey, J.M. Borwein, N.J. Calkin, R. Girgensohn, D.R. Luke, V.H. Moll, Experimental Mathematics in Action, A K Peters, Ltd., Wellesley, MA, 2007.
[4] B. Kalantari, Polynomial Root-Finding and Polynomiography, World Scientific, New Jersey, 2009.
[5] J.L. Varona, Graphic and numerical comparison between iterative methods, Math. Intelligencer 24 (2002) 37–46.
[6] C. Chun, B. Neta, A third-order modification of Newton's method for multiple roots, Appl. Math. Comput. 211 (2009) 474–479.
[7] S. Li, L. Cheng, B. Neta, Some fourth-order nonlinear solvers with closed formulae for multiple roots, Comput. Math. Appl. 59 (2010) 126–135.
[8] S. Li, X. Liao, L. Cheng, A new fourth-order iterative method for finding multiple roots of nonlinear equations, Appl. Math. Comput. 215 (2009) 1288–1292.
[9] B. Neta, A.N. Johnson, High-order nonlinear solver for multiple roots, Comput. Math. Appl. 55 (2008) 2012–2017.
[10] J.R. Sharma, R. Sharma, Modified Jarratt method for computing multiple roots, Appl. Math. Comput. 217 (2010) 878–881.
[11] X. Zhou, X. Chen, Y. Song, Construction of higher order methods for multiple roots of nonlinear equations, J. Comput. Appl. Math. 235 (2011) 4199–4206.
[12] P. Kravanja, M. Van Barel, Computing the Zeros of Analytic Functions, Lecture Notes in Mathematics, vol. 1727, Springer, Berlin, 2000.
[13] B.I. Yun, A non-iterative method for solving nonlinear equations, Appl. Math. Comput. 198 (2008) 691–699.
[14] B.I. Yun, Transformation methods for finding multiple roots of nonlinear equations, Appl. Math. Comput. 217 (2010) 599–606.
[15] J.F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, New Jersey, 1964.
[16] B. Neta, C. Chun, On a family of Laguerre methods to find multiple roots of nonlinear equations, Appl. Math. Comput. 219 (2013) 10987–11004.
[17] C. Chun, H.J. Bae, B. Neta, New families of nonlinear third-order solvers for finding multiple roots, Comput. Math. Appl. 57 (2009) 1574–1582.
[18] P. Henrici, Applied and Computational Complex Analysis, vol. I, John Wiley and Sons Inc., New York, 1974.
[19] E. Bodewig, Sur la méthode de Laguerre pour l'approximation des racines de certaines équations algébriques et sur la critique d'Hermite, Indag. Math. 8 (1946) 570–580.
[20] E. Hansen, M. Patrick, A family of root finding methods, Numer. Math. 27 (1977) 257–269.
[21] T.Y. Li, Z. Zeng, The Laguerre iteration in solving the symmetric tridiagonal eigenproblem, revisited, SIAM J. Sci. Comput. 15 (1994) 1145–1173.
[22] B. Parlett, Laguerre's method applied to the matrix eigenvalue problem, Math. Comput. 18 (1964) 464–485.
[23] A.M. Ostrowski, Solution of Equations in Euclidean and Banach Space, Academic Press, New York, 1973.
[24] J. Skowron, A. Gould, General complex polynomial root solver and its further optimization for binary microlenses, 2012, arXiv:1203.1034v1.
[25] E. Schröder, Über unendlich viele Algorithmen zur Auflösung der Gleichungen, Math. Ann. 2 (1870) 317–365.
[26] N. Obreshkov, On the numerical solution of equations (in Bulgarian), Annu. Univ. Sofia Fac. Sci. Phys. Math. 56 (1963) 73–83.
[27] B. Jovanović, A method for obtaining iterative formulas of higher order, Mat. Vesnik 9 (24) (1972) 365–369.
[28] M.R. Farmer, G. Loizou, An algorithm for the total, or partial, factorization of a polynomial, Math. Proc. Camb. Phil. Soc. 82 (1977) 427–437.
[29] I. Kiss, Über eine Verallgemeinerung des Newtonschen Näherungsverfahrens, Z. Angew. Math. Mech. 34 (1954) 68–69.
[30] W. Werner, Some improvements of classical iterative methods for the solution of nonlinear equations, in: E.L. Allgower, K. Glashoff, H.O. Peitgen (Eds.), Numerical Solution of Nonlinear Equations (Proc., Bremen, 1980), Lecture Notes in Math., vol. 878, Springer, Berlin, 1981, pp. 427–440.
[31] J.M. Gutiérrez, M.A. Hernández, A family of Chebyshev–Halley type methods in Banach spaces, Bull. Aust. Math. Soc. 55 (1997) 113–130.
[32] D. Li, P. Liu, J. Kou, An improvement of Chebyshev–Halley methods free from second derivative, Appl. Math. Comput. 235 (2014) 221–225.
[33] A. Cordero, J.R. Torregrosa, P. Vindel, Dynamics of a family of Chebyshev–Halley type methods, Appl. Math. Comput. 219 (2013) 8568–8583.
[34] W. Gander, On Halley's iteration method, Am. Math. Monthly 92 (1985) 131–134.
[35] C. Chun, M.Y. Lee, B. Neta, J. Džunić, On optimal fourth-order iterative methods free from second derivative and their dynamics, Appl. Math. Comput. 218 (2012) 6427–6438.
[36] L.O. Jay, A note on Q-order of convergence, BIT 41 (2001) 422–429.
[37] M. Scott, B. Neta, C. Chun, Basin attractors for various methods, Appl. Math. Comput. 218 (2011) 2584–2599.
[38] B. Neta, M. Scott, C. Chun, Basin attractors for various methods for multiple roots, Appl. Math. Comput. 218 (2012) 5043–5066.
[39] B.D. Stewart, Attractor Basins of Various Root-finding Methods, M.S. thesis, Naval Postgraduate School, Department of Applied Mathematics, Monterey, CA, 2001.
[40] S. Amat, S. Busquier, S. Plaza, Review of some iterative root-finding methods from a dynamical point of view, Scientia 10 (2004) 3–35.
[41] S. Amat, S. Busquier, S. Plaza, Dynamics of a family of third-order iterative methods that do not require using second derivatives, Appl. Math. Comput. 154 (2004) 735–746.
[42] S. Amat, S. Busquier, S. Plaza, Dynamics of the King and Jarratt iterations, Aequ. Math. 69 (2005) 212–223.
[43] F. Chicharro, A. Cordero, J.M. Gutiérrez, J.R. Torregrosa, Complex dynamics of derivative-free methods for nonlinear equations, Appl. Math. Comput. 219 (2013) 7023–7035.
[44] B. Neta, M. Scott, C. Chun, Basins of attraction for several methods to find simple roots of nonlinear equations, Appl. Math. Comput. 218 (2012) 10548–10556.
[45] I.K. Argyros, Á.A. Magreñán, On the convergence of an optimal fourth-order family of methods and its dynamics, Appl. Math. Comput. 252 (2015) 336–346.
[46] B. Neta, C. Chun, Basins of attraction for several optimal fourth order methods for multiple roots, Math. Comput. Simul. 103 (2014) 39–59.
[47] C. Chun, B. Neta, Basins of attraction for Zhou–Chen–Song fourth order family of methods for multiple roots, Math. Comput. Simul. 109 (2015) 74–91.
[48] C. Chun, B. Neta, Basins of attraction for several third order methods to find multiple roots of nonlinear equations, Appl. Math. Comput. 268 (2015) 129–137.
[49] I. Petković, B. Neta, On an application of symbolic computation and computer graphics to root-finders: the case of multiple roots of unknown multiplicity, J. Comput. Appl. Math. 308 (2016) 215–230.
[50] A. Cordero, Á.A. Magreñán, C. Quemada, J.R. Torregrosa, Stability study of eighth-order iterative methods for solving nonlinear equations, J. Comput. Appl. Math. 291 (2016) 348–357.
[51] R. Behl, A. Cordero, S.S. Motsa, J.R. Torregrosa, Construction of fourth-order optimal families of iterative methods and their dynamics, Appl. Math. Comput. 271 (2015) 89–101.
[52] Á.A. Magreñán, A new tool to study real dynamics: the convergence plane, Appl. Math. Comput. 248 (2014) 215–224.
[53] J.L. Hueso, E. Martínez, C. Teruel, Convergence, efficiency and dynamics of new fourth and sixth order families of iterative methods for nonlinear systems, J. Comput. Appl. Math. 275 (2015) 412–420.
[54] Á.A. Magreñán, Different anomalies in a Jarratt family of iterative root-finding methods, Appl. Math. Comput. 233 (2014) 29–38.