On the Kurchatov method for solving equations under weak conditions


Applied Mathematics and Computation 273 (2016) 98–113


Ioannis K. Argyros (Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA), Hongmin Ren (College of Information and Engineering, Hangzhou Polytechnic, Hangzhou, Zhejiang 311402, PR China)

MSC: 65D10; 65D99; 65G99; 65J15; 65H10

Keywords: Kurchatov method; Newton's method; Banach space; Local-semilocal convergence; Divided difference

Abstract. We present a new convergence analysis for the Kurchatov method in order to solve nonlinear equations in a Banach space setting. In the semilocal convergence case, the sufficient convergence conditions are weaker than in earlier studies such as Argyros (2005, 2007), Ezquerro et al. (2013) and Kurchatov (1971). In this way we extend the applicability of the method. Moreover, in the local convergence case, our radius of convergence is larger, leading to a wider choice of initial guesses and fewer iterations to achieve a desired error tolerance. Numerical examples are also presented. © 2015 Elsevier Inc. All rights reserved.

1. Introduction

In this study we are concerned with the problem of approximating a locally unique solution $x^*$ of the equation

$$F(x) = 0, \qquad (1.1)$$

where $F$ is a Fréchet-differentiable operator defined on a convex subset of a Banach space $X$ with values in a Banach space $Y$. Many problems in the computational sciences and other disciplines can be brought into the form (1.1) using mathematical modelling [2,6,10]. The solutions of these equations can rarely be found in closed form, which is why most solution methods for them are iterative. The practice of numerical functional analysis for finding such solutions is essentially connected to Newton-like methods [1–14]. Newton's method converges quadratically to $x^*$ if the initial guess is close enough to the solution. The study of the convergence of iterative procedures is normally centered on two types of analysis: semilocal and local. The semilocal convergence analysis gives, based on information around an initial point, conditions ensuring the convergence of the iterative procedure; the local analysis finds, based on information around a solution, estimates of the radii of the convergence balls. In the present paper, we study the local as well as the semilocal convergence of the quadratically convergent Kurchatov method, defined for each $n = 0, 1, \ldots$ by

$$x_{n+1} = x_n - A_n^{-1} F(x_n), \qquad (1.2)$$

∗ Corresponding author. Tel.: +86 57128287202; fax: +86 57128287203. E-mail addresses: [email protected] (I.K. Argyros), [email protected], [email protected] (H. Ren).

http://dx.doi.org/10.1016/j.amc.2015.09.065


where $x_{-1}, x_0$ are initial points and $A_n = [2x_n - x_{n-1}, x_{n-1}; F]$, where $[x, y; F]$ denotes the divided difference of the operator $F$ at the points $x, y \in D$. We need the definition of divided differences of order one and two.

Definition 1.1. Let $x, y$ be two points in $D$. The linear operator $[x, y; F] \in L(X, Y)$ is called a divided difference of order one of the operator $F$ at the points $x, y$ if

$$[x, y; F](x - y) = F(x) - F(y) \quad \text{for any } x, y \in D \text{ with } x \neq y.$$

If $F$ is Fréchet-differentiable on $D$, then $F'(x) = [x, x; F]$. Similarly, $[x, y, z; F]$ is a divided difference of order two at the points $x, y, z \in D$ if

$$[x, y, z; F](y - z) = [x, y; F] - [x, z; F].$$

If $F$ is twice Fréchet-differentiable on $D$, then $\frac{1}{2} F''(x) = [x, x, x; F]$. The convergence of the Kurchatov method has been studied under hypotheses up to the divided difference of order two in [1,3] and up to the divided difference of order one in [2–7]. In particular, the condition

$$\|A_0^{-1}([u, x, y; F] - [v, x, y; F])\| \le \delta\|u - v\| \qquad (1.3)$$

for each $x, y, u, v \in D$ has been used. However, there are examples where (1.3) is violated or $[\cdot, \cdot, \cdot; F]$ does not exist. As an example, define the function $f: [-1, 1] \to (-\infty, \infty)$ by

$$f(x) = x^2 \ln x^2 + c_1 x^2 + c_2 x + c_3, \quad f(0) = c_3,$$

where $c_1, c_2, c_3$ are given real numbers. Then, we have that $\lim_{x \to 0} x^2 \ln x^2 = 0$, $\lim_{x \to 0} x \ln x^2 = 0$, $f'(x) = 2x \ln x^2 + 2(c_1 + 1)x + c_2$ and $f''(x) = 2(\ln x^2 + 3 + c_1)$. Since $f''$ is unbounded near $x = 0$, the function $f$ does not satisfy (1.3). Here, we first use hypotheses on the divided difference of order one. When we use the divided difference of order two, the convergence conditions are weaker than in [3–5,8,10]. Hence, the applicability of the Kurchatov method is extended.

The paper is organized as follows: Section 2 contains results on the semilocal convergence analysis of the Kurchatov method, while the local convergence analysis is presented in Section 3. Finally, numerical examples are presented in the concluding Section 4.

2. Semilocal convergence

We present the semilocal convergence of the Kurchatov method (1.2). First, we need two auxiliary results on sequences that will be shown to be majorizing for the Kurchatov method (1.2).

Lemma 2.1. Let $L > 0$, $L_i > 0$, $i = -1, 0, 1, 2, 3$, $t_0 \ge 0$, $t_1 > 0$ be given parameters. Denote by $\alpha$ the smallest root in the interval $(0, 1)$ of the polynomial $p$ defined by

$$p(t) = 2L_0 t^3 + (-L_0 + L_1 + L_2) t^2 + L_3 t - (L_2 + L_3).$$

Suppose that

$$0 < \frac{L_0(t_1 - t_{-1}) + L_1(t_0 - t_{-1})}{1 - (L_{-1}(2t_1 - t_0) + L t_0)} \le \alpha, \qquad (2.1)$$

$$0 < \frac{L_2(t_2 - t_0) + L_3(t_1 - t_0)}{1 - (L_0(2t_2 - t_1) + L_1 t_1)} \le \alpha \qquad (2.2)$$

and

$$0 < \alpha \le 1 - \frac{(L_0 + L_1)(t_1 - t_0)}{1 - (L_0 + L_1) t_0}, \qquad (2.3)$$

where $t_{-1} = 0$,

$$t_2 = t_1 + \frac{L_0(t_1 - t_{-1}) + L_1(t_0 - t_{-1})}{1 - (L_{-1}(2t_1 - t_0) + L t_0)}\,(t_1 - t_0).$$

Then, scalar sequence {tn } defined by

$$t_{n+2} = t_{n+1} + \frac{L_2(t_{n+1} - t_{n-1}) + L_3(t_n - t_{n-1})}{1 - (L_0(2t_{n+1} - t_n) + L_1 t_n)}\,(t_{n+1} - t_n) \quad \text{for each } n = 1, 2, \ldots \qquad (2.4)$$

is well defined, increasing, bounded above by

$$t^{**} = \frac{t_1 - t_0}{1 - \alpha} + t_0 \qquad (2.5)$$

and converges to its unique least upper bound $t^*$, which satisfies

$$t^* \in [t_1, t^{**}]. \qquad (2.6)$$


Moreover, the following estimates hold for each n = 0, 1, . . .

$$t_{n+2} - t_{n+1} \le \alpha(t_{n+1} - t_n). \qquad (2.7)$$
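The recurrence (2.4) is straightforward to tabulate. The Python sketch below generates $\{t_n\}$ for illustrative parameter values of our own choosing (they are not taken from the paper) and checks numerically that the sequence is increasing and contracts as in (2.7):

```python
# Tabulate the majorizing sequence of Lemma 2.1; the parameter values
# used below are illustrative choices of ours, not values from the paper.
def majorizing_sequence(L_m1, L, L0, L1, L2, L3, t0, t1, n_terms):
    """Return [t_0, t_1, ..., t_{n_terms}] following Lemma 2.1."""
    t = [0.0, t0, t1]  # t[0] stores t_{-1} = 0
    # t_2 uses the initial constants L_{-1} and L:
    num = L0 * (t[2] - t[0]) + L1 * (t[1] - t[0])
    den = 1.0 - (L_m1 * (2 * t[2] - t[1]) + L * t[1])
    t.append(t[2] + num / den * (t[2] - t[1]))
    # recurrence (2.4) for n >= 1:
    for k in range(2, n_terms):
        num = L2 * (t[k + 1] - t[k - 1]) + L3 * (t[k] - t[k - 1])
        den = 1.0 - (L0 * (2 * t[k + 1] - t[k]) + L1 * t[k])
        t.append(t[k + 1] + num / den * (t[k + 1] - t[k]))
    return t[1:]  # drop the auxiliary t_{-1}

seq = majorizing_sequence(0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.2, 10)
```

For these sample values the successive differences $t_{n+1} - t_n$ shrink rapidly, so the sequence converges to its least upper bound $t^*$ well below the closed-form bound (2.5).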

Proof. Using the definition of the polynomial $p$ we have that $p(0) = -(L_2 + L_3) < 0$ and $p(1) = L_0 + L_1 > 0$. It follows by the intermediate value theorem that $p$ has roots in the interval $(0, 1)$; denote by $\alpha$ the smallest such root. In view of (2.1) and (2.2) it follows that $t_2 - t_1 \le \alpha(t_1 - t_0)$, $t_3 - t_2 \le \alpha(t_2 - t_1) \le \alpha^2(t_1 - t_0)$, $t_2 \le \frac{1 - \alpha^2}{1 - \alpha}(t_1 - t_0) + t_0$ and $t_3 \le \frac{1 - \alpha^3}{1 - \alpha}(t_1 - t_0) + t_0$. That is,

$$0 < \frac{L_2(t_{k+1} - t_{k-1}) + L_3(t_k - t_{k-1})}{1 - (L_0(2t_{k+1} - t_k) + L_1 t_k)} \le \alpha \qquad (2.8)$$

holds for $k = 1$. Suppose that $t_{k+1} - t_k \le \alpha^k(t_1 - t_0)$ and $t_{k+1} \le \frac{1 - \alpha^{k+1}}{1 - \alpha}(t_1 - t_0) + t_0$ hold for each $k = 0, 1, \ldots, n$. Then, (2.8) holds for $k = 1, 2, \ldots, n$, if

$$L_2\big[\alpha^k(t_1 - t_0) + \alpha^{k-1}(t_1 - t_0)\big] + L_3\,\alpha^{k-1}(t_1 - t_0) + \alpha L_0\left[\alpha^k(t_1 - t_0) + \frac{1 - \alpha^{k+1}}{1 - \alpha}(t_1 - t_0) + t_0\right] + \alpha L_1\left[\frac{1 - \alpha^k}{1 - \alpha}(t_1 - t_0) + t_0\right] - \alpha \le 0 \qquad (2.9)$$

holds for k = 1, 2, . . . , n. Estimate (2.9) motivates us to introduce recurrent polynomials fk on the interval (0, 1) by



$$f_k(t) = \left[L_2(t^{k-1} + t^{k-2}) + L_3 t^{k-2} + L_0 t^k + L_0\frac{1 - t^{k+1}}{1 - t} + L_1\frac{1 - t^k}{1 - t}\right](t_1 - t_0) + (L_0 + L_1)t_0 - 1. \qquad (2.10)$$

We need a relationship between two consecutive polynomials fk . By some algebraic manipulation we get that

$$f_{k+1}(t) = f_k(t) + p(t)\,t^{k-2}(t_1 - t_0). \qquad (2.11)$$

By the definition of $\alpha$ and $p$ we get that

$$f_{k+1}(\alpha) = f_k(\alpha). \qquad (2.12)$$

Define the function $f_\infty$ on the interval $(0, 1)$ by

$$f_\infty(t) = \lim_{k \to \infty} f_k(t). \qquad (2.13)$$

Then, by (2.13) and (2.10) we get that

$$f_\infty(t) = \frac{(L_0 + L_1)(t_1 - t_0)}{1 - t} + (L_0 + L_1)t_0 - 1. \qquad (2.14)$$

In particular, we have

$$f_\infty(\alpha) = \lim_{k \to \infty} f_k(\alpha). \qquad (2.15)$$

In order to show (2.9) it suffices to show $f_k(\alpha) \le 0$ or, by (2.12) and (2.15), that

$$f_\infty(\alpha) \le 0,$$

which is true by (2.3) and (2.14). The induction for (2.8) (i.e., for (2.7)) is complete. Hence, the sequence $\{t_k\}$ is increasing, bounded above by $t^{**}$, and as such it converges to its unique least upper bound $t^*$ which satisfies (2.6). □

Next we present an alternative to Lemma 2.1.

Lemma 2.2. Let $L > 0$, $L_i > 0$, $i = -1, 0, 1, 2, 3$, $K \ge 0$, $s_0 \ge 0$, $s_1 > 0$ be given parameters. Assume $\beta$ is the smallest positive root in the interval $[0, 1]$ of the polynomial $p$ defined by

$$p(t) = 2L_0 t^3 + \left[\frac{L_2 + L_3}{2} + K(s_1 - s_0) - L_0 + L_1\right]t^2 - K(s_1 - s_0)t - \frac{L_2 + L_3}{2}. \qquad (2.16)$$

Suppose

$$0 < \frac{\frac{L_0 + L_1}{2}(s_1 - s_{-1}) + K(s_0 - s_{-1})^2}{1 - [L_{-1}(2s_1 - s_0) + L s_0]} \le \beta, \qquad (2.17)$$

$$0 < \frac{\frac{L_2 + L_3}{2}(s_2 - s_0) + K(s_1 - s_0)^2}{1 - [L_0(2s_2 - s_1) + L_1 s_1]} \le \beta, \qquad (2.18)$$

$$\beta \le 1 - \frac{(L_0 + L_1)(s_1 - s_0)}{1 - (L_0 + L_1)s_0}, \qquad (2.19)$$


where $s_{-1} = 0$,

$$s_2 = s_1 + \frac{\frac{L_0 + L_1}{2}(s_1 - s_{-1}) + K(s_0 - s_{-1})^2}{1 - [L_{-1}(2s_1 - s_0) + L s_0]}\,(s_1 - s_0).$$

Then, scalar sequence {sn } defined for each n = 1, 2, . . . by

$$s_{n+2} = s_{n+1} + \frac{\frac{L_2 + L_3}{2}(s_{n+1} - s_{n-1}) + K(s_n - s_{n-1})^2}{1 - [L_0(2s_{n+1} - s_n) + L_1 s_n]}\,(s_{n+1} - s_n) \qquad (2.20)$$

is well defined, increasing, bounded above by

$$s^{**} = \frac{s_1 - s_0}{1 - \beta} + s_0 \qquad (2.21)$$

and converges to its unique least upper bound $s^*$, which satisfies

$$s^* \in [s_1, s^{**}]. \qquad (2.22)$$

Moreover, the following estimates hold for each n = 0, 1, . . .

$$s_{n+2} - s_{n+1} \le \beta(s_{n+1} - s_n). \qquad (2.23)$$
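The sequence (2.20) can be tabulated in the same way as (2.4); the only change is the quadratic term $K(s_n - s_{n-1})^2$ in the numerator. The sketch below uses illustrative parameter values of our own (not taken from the paper) and checks the contraction ratios from (2.23) numerically:

```python
# Tabulate the majorizing sequence (2.20) of Lemma 2.2; the parameter
# values used below are illustrative choices of ours, not from the paper.
def majorizing_sequence_2(L_m1, L, L0, L1, L2, L3, K, s0, s1, n_terms):
    """Return [s_0, s_1, ..., s_{n_terms}] following Lemma 2.2."""
    s = [0.0, s0, s1]  # s[0] stores s_{-1} = 0
    # s_2 uses (L0 + L1)/2 and the initial constants L_{-1}, L:
    num = 0.5 * (L0 + L1) * (s[2] - s[0]) + K * (s[1] - s[0]) ** 2
    den = 1.0 - (L_m1 * (2 * s[2] - s[1]) + L * s[1])
    s.append(s[2] + num / den * (s[2] - s[1]))
    # recurrence (2.20) for n >= 1 uses (L2 + L3)/2:
    for k in range(2, n_terms):
        num = 0.5 * (L2 + L3) * (s[k + 1] - s[k - 1]) + K * (s[k] - s[k - 1]) ** 2
        den = 1.0 - (L0 * (2 * s[k + 1] - s[k]) + L1 * s[k])
        s.append(s[k + 1] + num / den * (s[k + 1] - s[k]))
    return s[1:]  # drop the auxiliary s_{-1}

seq = majorizing_sequence_2(0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.5, 0.1, 0.2, 10)
# contraction ratios (s_{n+2} - s_{n+1}) / (s_{n+1} - s_n), cf. (2.23)
ratios = [(b2 - b1) / (b1 - b0)
          for b0, b1, b2 in zip(seq, seq[1:], seq[2:]) if b1 > b0]
```

For these values every ratio stays strictly below 1, illustrating a valid bound $\beta < 1$ in (2.23).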

Proof. The proof follows as in Lemma 2.1 if we replace $\alpha$, $\{t_k\}$ and (2.8) ($k > 1$) by $\beta$, the new sequence $\{s_k\}$ and

$$0 < \frac{\frac{L_2 + L_3}{2}(s_{k+1} - s_{k-1}) + K(s_k - s_{k-1})^2}{1 - [L_0(2s_{k+1} - s_k) + L_1 s_k]} \le \beta, \quad k > 1,$$

to arrive at



$$\frac{L_2 + L_3}{2}(\beta^k + \beta^{k-1})(s_1 - s_0) + K\beta^{2k-2}(s_1 - s_0)^2 + L_0\beta\left[\beta^k(s_1 - s_0) + \frac{1 - \beta^{k+1}}{1 - \beta}(s_1 - s_0) + s_0\right] + L_1\beta\left[\frac{1 - \beta^k}{1 - \beta}(s_1 - s_0) + s_0\right] - \beta \le 0$$



and, since $\beta^{2k-3} \le \beta^{k-1}$, to the definition of the functions $f_k$ on the interval $(0, 1)$ by



$$f_k(t) = \frac{L_2 + L_3}{2}(t^{k-1} + t^{k-2})(s_1 - s_0) + Kt^{k-1}(s_1 - s_0)^2 + L_0\left[t^k(s_1 - s_0) + \frac{1 - t^{k+1}}{1 - t}(s_1 - s_0) + s_0\right] + L_1\left[\frac{1 - t^k}{1 - t}(s_1 - s_0) + s_0\right] - 1.$$

Then, we have that

$$f_{k+1}(t) = f_k(t) + p(t)\,t^{k-2}(s_1 - s_0).$$

The rest of the proof follows exactly as in Lemma 2.1. □

Next, we present two semilocal convergence results for the Kurchatov method (1.2). In the first one, we use Lemma 2.1 and hypotheses on the divided difference of order one for $F$. In the second result, we use Lemma 2.2 and divided differences up to order two. Denote by $U(r, \xi)$ and $\bar{U}(r, \xi)$ the open and closed balls in $X$, respectively, with center $r \in X$ and radius $\xi > 0$.

Theorem 2.3. Let $F: D \subset X \to Y$ be a Fréchet-differentiable operator. Suppose that there exists a divided difference $[\cdot, \cdot; F]$ of order one for the operator $F$ on $D \times D$. Moreover, suppose that there exist $x_{-1}, x_0 \in D$, $L > 0$, $L_i > 0$, $i = -1, 0, 1, 2, 3$, $t_0 \ge 0$, $t_1 > 0$ such that for each $x, y, z, v \in D$

$$A_0^{-1} \in L(Y, X), \qquad (2.24)$$

$$\|x_0 - x_{-1}\| \le t_0, \quad \|A_0^{-1}F(x_0)\| \le t_1 - t_0, \qquad (2.25)$$

$$\|A_0^{-1}(A_1 - A_0)\| \le L_{-1}\|2(x_1 - x_0) - (x_0 - x_{-1})\| + L\|x_0 - x_{-1}\|, \qquad (2.26)$$

$$\|A_0^{-1}([x, y; F] - A_0)\| \le L_0\|x - 2x_0 + x_{-1}\| + L_1\|y - x_{-1}\|, \qquad (2.27)$$

$$\|A_0^{-1}([x, y; F] - [z, v; F])\| \le L_2\|x - z\| + L_3\|y - v\|, \qquad (2.28)$$

$$\bar{U}(x_0, r_0) \subseteq D, \quad r_0 = \max(2(t_1 - t_0),\, t^* - t_0), \qquad (2.29)$$

and the hypotheses of Lemma 2.1 hold, where $x_1 = x_0 - A_0^{-1}F(x_0)$, $A_0 = [2x_0 - x_{-1}, x_{-1}; F]$ and $t^*$ is given in Lemma 2.1. Then, the sequence $\{x_n\}$ generated by the Kurchatov method (1.2) is well defined, remains in $\bar{U}(x_0, r_0)$ and converges to a solution $x^* \in \bar{U}(x_0, r_0)$ of the equation $F(x) = 0$. Moreover, the following estimates hold for each $n = 0, 1, \ldots$

$$\|x_n - x^*\| \le t^* - t_n. \qquad (2.30)$$


Furthermore, if there exists $R > r_0$ such that

$$\bar{U}(x_0, R) \subseteq D \qquad (2.31)$$

and

$$L_0 t^* + L_1(R + t_0) < 1, \qquad (2.32)$$

then the point $x^*$ is the only solution of $F(x) = 0$ in $\bar{U}(x_0, R)$.

Proof. We shall show by induction that the following assertions hold for $k = 0, 1, \ldots$:

$$(I_k) \quad \|x_k - x_{k-1}\| \le t_k - t_{k-1}.$$

These assertions hold for $k = 0, 1$ by (2.25). We also have that $x_{-1}, x_0 \in \bar{U}(x_0, t^*)$, $x_1$ is well defined and $x_1 \in \bar{U}(x_0, t^*)$. Then, we also have that $\|2x_1 - x_0 - x_0\| = 2\|x_1 - x_0\| \le 2(t_1 - t_0) \le r_0$, so $2x_1 - x_0 \in \bar{U}(x_0, r_0)$. Suppose $(I_k)$ holds for each $k = 1, 2, \ldots, n$. Using (2.26), (2.27) and the proof of Lemma 2.1, we get in turn

$$\|A_0^{-1}(A_k - A_0)\| \le \bar{L}_0\|2x_k - x_{k-1} - (2x_0 - x_{-1})\| + \bar{L}_1\|x_{k-1} - x_{-1}\| \le \bar{L}_0(\|x_k - x_{k-1}\| + \|x_k - x_0\| + \|x_0 - x_{-1}\|) + \bar{L}_1\|x_{k-1} - x_{-1}\| \le \bar{L}_0(t_k - t_{k-1} + t_k - t_0 + t_0 - t_{-1}) + \bar{L}_1(t_{k-1} - t_0 + t_0) = \bar{L}_0(2t_k - t_{k-1}) + \bar{L}_1 t_{k-1} < 1,$$

where

$$\bar{L}_0 = \begin{cases} L_{-1}, & k = 1, \\ L_0, & k > 1, \end{cases} \qquad \bar{L}_1 = \begin{cases} L, & k = 1, \\ L_1, & k > 1. \end{cases} \qquad (2.33)$$

It follows from (2.33) and the Banach lemma on invertible operators [2,6,9] that $A_k^{-1} \in L(Y, X)$ and

$$\|A_k^{-1}A_0\| \le \frac{1}{1 - [\bar{L}_0(2t_k - t_{k-1}) + \bar{L}_1 t_{k-1}]}. \qquad (2.34)$$

Using the Kurchatov method (1.2), we obtain the approximation

$$F(x_k) = F(x_k) - F(x_{k-1}) - A_{k-1}(x_k - x_{k-1}) = ([x_k, x_{k-1}; F] - A_{k-1})(x_k - x_{k-1}). \qquad (2.35)$$

It follows from the induction hypotheses and (2.28) that

$$\|A_0^{-1}F(x_k)\| \le (\bar{L}_2\|x_k - (2x_{k-1} - x_{k-2})\| + \bar{L}_3\|x_{k-1} - x_{k-2}\|)\|x_k - x_{k-1}\| \le (\bar{L}_2(\|x_k - x_{k-1}\| + \|x_{k-1} - x_{k-2}\|) + \bar{L}_3\|x_{k-1} - x_{k-2}\|)\|x_k - x_{k-1}\| \le (\bar{L}_2(t_k - t_{k-2}) + \bar{L}_3(t_{k-1} - t_{k-2}))(t_k - t_{k-1}),$$

where

$$\bar{L}_2 = \begin{cases} L_0, & k = 1, \\ L_2, & k > 1, \end{cases} \qquad \bar{L}_3 = \begin{cases} L_1, & k = 1, \\ L_3, & k > 1. \end{cases} \qquad (2.36)$$

Using the Kurchatov method (1.2), (2.4), (2.34) and (2.36), we get that

$$\|x_{k+1} - x_k\| \le \|A_k^{-1}A_0\|\,\|A_0^{-1}F(x_k)\| \le \frac{(\bar{L}_2(t_k - t_{k-2}) + \bar{L}_3(t_{k-1} - t_{k-2}))(t_k - t_{k-1})}{1 - [\bar{L}_0(2t_k - t_{k-1}) + \bar{L}_1 t_{k-1}]} = t_{k+1} - t_k$$

and

$$\|x_{k+1} - x_0\| \le \|x_{k+1} - x_k\| + \|x_k - x_0\| \le t_{k+1} - t_0 \le t^* - t_0 \le r_0,$$

which completes the induction. Hence, $\{x_k\}$ is a complete sequence in the Banach space $X$ and as such it converges to some $x^* \in \bar{U}(x_0, r_0)$ (since $\bar{U}(x_0, r_0)$ is a closed set). By letting $k \to \infty$ in (2.36) we obtain $F(x^*) = 0$. Estimate (2.30) follows from $(I_k)$ by using standard majorization techniques [2,6,9,14]. To complete the proof we show the uniqueness of the solution in $\bar{U}(x_0, R)$. Let $y^* \in \bar{U}(x_0, R)$ be such that $F(y^*) = 0$. Using (2.27), (2.32) and the proof of Lemma 2.1, we have in turn that

$$\|A_0^{-1}([x^*, y^*; F] - A_0)\| \le L_0\|x^* - (2x_0 - x_{-1})\| + L_1\|y^* - x_{-1}\| \le L_0(\|x^* - x_0\| + \|x_{-1} - x_0\|) + L_1(\|y^* - x_0\| + \|x_0 - x_{-1}\|) \le L_0(t^* - t_0 + t_0 - t_{-1}) + L_1(R + t_0 - t_{-1}) = L_0 t^* + L_1(R + t_0) < 1. \qquad (2.37)$$


It follows that $[x^*, y^*; F]^{-1}$ exists. Then, from the identity

$$[x^*, y^*; F](x^* - y^*) = F(x^*) - F(y^*) = 0$$

we deduce $x^* = y^*$. □

Theorem 2.4. Let $F: D \subset X \to Y$ be a Fréchet-differentiable operator. Suppose that there exist divided differences $[\cdot, \cdot; F]$, $[\cdot, \cdot, \cdot; F]$ of order one and order two for the operator $F$ on $D \times D$ and $D \times D \times D$, respectively. Moreover, suppose that there exist $x_{-1}, x_0 \in D$, $L > 0$, $L_i > 0$, $i = -1, 0, 1, 2, 3$, $K \ge 0$, $s_0 \ge 0$, $s_1 > 0$ such that for each $x, y, z, v \in D$

$$A_0^{-1} \in L(Y, X),$$

$$\|x_0 - x_{-1}\| \le s_0, \quad \|A_0^{-1}F(x_0)\| \le s_1 - s_0,$$

$$\|A_0^{-1}(A_1 - A_0)\| \le L_{-1}\|2(x_1 - x_0) - (x_0 - x_{-1})\| + L\|x_0 - x_{-1}\|,$$

$$\|A_0^{-1}([x, y; F] - A_0)\| \le L_0\|x - 2x_0 + x_{-1}\| + L_1\|y - x_{-1}\|,$$

$$\|A_0^{-1}([x, y; F] - [z, v; F])\| \le L_2\|x - z\| + L_3\|y - v\|,$$

$$\|A_0^{-1}([x, v, y; F] - [x, z, y; F])\| \le K\|v - z\|,$$

$$\bar{U}(x_0, R_0) \subseteq D, \quad R_0 = \max(2(s_1 - s_0),\, s^* - s_0),$$

and the hypotheses of Lemma 2.2 hold, where $x_1 = x_0 - A_0^{-1}F(x_0)$, $A_0 = [2x_0 - x_{-1}, x_{-1}; F]$ and $s^*$ is given in Lemma 2.2. Then, the sequence $\{x_n\}$ generated by the Kurchatov method (1.2) is well defined, remains in $\bar{U}(x_0, R_0)$ and converges to a solution $x^* \in \bar{U}(x_0, R_0)$ of the equation $F(x) = 0$. Moreover, the following estimates hold for each $n = 0, 1, \ldots$

$$\|x_n - x^*\| \le s^* - s_n. \qquad (2.38)$$

Furthermore, if there exists R > R0 such that

$$\bar{U}(x_0, R) \subseteq D$$

and

$$L_0 s^* + L_1(R + s_0) < 1,$$

then the point $x^*$ is the only solution of $F(x) = 0$ in $\bar{U}(x_0, R)$.

Proof. We shall show by induction that the following assertions hold for $k = 0, 1, \ldots$:

$$(I_k) \quad \|x_k - x_{k-1}\| \le s_k - s_{k-1}.$$

These assertions hold for $k = 0, 1$ by the hypotheses. We also have that $x_{-1}, x_0 \in \bar{U}(x_0, s^*)$, $x_1$ is well defined and $x_1 \in \bar{U}(x_0, s^*)$. Then, we also have that $\|2x_1 - x_0 - x_0\| = 2\|x_1 - x_0\| \le 2(s_1 - s_0) \le R_0$, so $2x_1 - x_0 \in \bar{U}(x_0, R_0)$. Suppose $(I_k)$ holds for each $k = 1, 2, \ldots, n$. It follows from the induction hypotheses and (2.35) that

$$\begin{aligned}
\|A_0^{-1}F(x_k)\| &= \|A_0^{-1}([x_k, x_{k-1}; F] - [2x_{k-1} - x_{k-2}, x_{k-2}; F])(x_k - x_{k-1})\| \\
&= \|A_0^{-1}([x_k, x_{k-1}; F] - [x_k, x_{k-2}; F] + [x_k, x_{k-2}; F] - [2x_{k-1} - x_{k-2}, x_{k-2}; F])(x_k - x_{k-1})\| \\
&= \|A_0^{-1}([x_k, x_{k-1}, x_{k-2}; F](x_{k-1} - x_{k-2}) + [x_k, 2x_{k-1} - x_{k-2}, x_{k-2}; F](x_k - 2x_{k-1} + x_{k-2}))(x_k - x_{k-1})\| \\
&= \big\|A_0^{-1}\big(([x_k, x_{k-1}, x_{k-2}; F] - [x_k, 2x_{k-1} - x_{k-2}, x_{k-2}; F])(x_{k-1} - x_{k-2}) + [x_k, 2x_{k-1} - x_{k-2}, x_{k-2}; F](x_k - x_{k-1})\big)(x_k - x_{k-1})\big\| \\
&\le K\|x_{k-1} - x_{k-2}\|^2\|x_k - x_{k-1}\| + \|A_0^{-1}[x_k, 2x_{k-1} - x_{k-2}, x_{k-2}; F](x_k - x_{k-1})\|\,\|x_k - x_{k-1}\|. \qquad (2.39)
\end{aligned}$$

On the other hand, by the definition of divided differences, we have that

$$[x_k, 2x_{k-1} - x_{k-2}, x_{k-2}; F](x_k - x_{k-1}) = [x_k, 2x_{k-1} - x_{k-2}, x_{k-2}; F]\big(x_k - (2x_{k-1} - x_{k-2}) + x_{k-1} - x_k + x_k - x_{k-2}\big), \qquad (2.40)$$

which means

$$\begin{aligned}
[x_k, 2x_{k-1} - x_{k-2}, x_{k-2}; F](x_k - x_{k-1}) &= \frac{1}{2}[x_k, 2x_{k-1} - x_{k-2}, x_{k-2}; F]\big(x_k - (2x_{k-1} - x_{k-2}) + x_k - x_{k-2}\big) \\
&= \frac{1}{2}\big([x_k, x_{k-2}; F] - [2x_{k-1} - x_{k-2}, x_{k-2}; F] + [2x_{k-1} - x_{k-2}, x_k; F] - [2x_{k-1} - x_{k-2}, x_{k-2}; F]\big). \qquad (2.41)
\end{aligned}$$


Substituting (2.41) into (2.39) and using the hypotheses, we get that

$$\begin{aligned}
\|A_0^{-1}F(x_k)\| &\le K\|x_{k-1} - x_{k-2}\|^2\|x_k - x_{k-1}\| + \frac{1}{2}\big\|A_0^{-1}\big([x_k, x_{k-2}; F] - [2x_{k-1} - x_{k-2}, x_{k-2}; F] + [2x_{k-1} - x_{k-2}, x_k; F] - [2x_{k-1} - x_{k-2}, x_{k-2}; F]\big)\big\|\,\|x_k - x_{k-1}\| \\
&\le K\|x_{k-1} - x_{k-2}\|^2\|x_k - x_{k-1}\| + \frac{1}{2}(\bar{L}_2\|x_k + x_{k-2} - 2x_{k-1}\| + \bar{L}_3\|x_k - x_{k-2}\|)\|x_k - x_{k-1}\| \\
&\le K\|x_{k-1} - x_{k-2}\|^2\|x_k - x_{k-1}\| + \frac{\bar{L}_2 + \bar{L}_3}{2}(\|x_k - x_{k-1}\| + \|x_{k-1} - x_{k-2}\|)\|x_k - x_{k-1}\| \\
&\le K(s_{k-1} - s_{k-2})^2(s_k - s_{k-1}) + \frac{\bar{L}_2 + \bar{L}_3}{2}(s_k - s_{k-1} + s_{k-1} - s_{k-2})(s_k - s_{k-1}) \\
&= \left[K(s_{k-1} - s_{k-2})^2 + \frac{\bar{L}_2 + \bar{L}_3}{2}(s_k - s_{k-2})\right](s_k - s_{k-1}), \qquad (2.42)
\end{aligned}$$

where $\bar{L}_2, \bar{L}_3$ are the same constants defined in the proof of Theorem 2.3. On the other hand, similarly to (2.34), we have that $A_k^{-1} \in L(Y, X)$ and

$$\|A_k^{-1}A_0\| \le \frac{1}{1 - [\bar{L}_0(2s_k - s_{k-1}) + \bar{L}_1 s_{k-1}]},$$

where $\bar{L}_0, \bar{L}_1$ are the same constants defined in the proof of Theorem 2.3. Using the Kurchatov method (1.2) and (2.42), we get that

$$\|x_{k+1} - x_k\| \le \|A_k^{-1}A_0\|\,\|A_0^{-1}F(x_k)\| \le \frac{K(s_{k-1} - s_{k-2})^2 + \frac{\bar{L}_2 + \bar{L}_3}{2}(s_k - s_{k-2})}{1 - [\bar{L}_0(2s_k - s_{k-1}) + \bar{L}_1 s_{k-1}]}\,(s_k - s_{k-1}) = s_{k+1} - s_k$$

and

$$\|x_{k+1} - x_0\| \le \|x_{k+1} - x_k\| + \|x_k - x_0\| \le s_{k+1} - s_0 \le s^* - s_0 \le R_0,$$

which completes the induction. Hence, $\{x_k\}$ is a complete sequence in the Banach space $X$ and as such it converges to some $x^* \in \bar{U}(x_0, R_0)$ (since $\bar{U}(x_0, R_0)$ is a closed set). By letting $k \to \infty$ in (2.42) we obtain $F(x^*) = 0$. Estimate (2.38) follows from $(I_k)$ by using standard majorization techniques [2,6,9,14]. The uniqueness part is identical to that of Theorem 2.3, since, again, (2.27) is used. □

Remark 2.5. (a) The limit point $t^*$ (or $s^*$) can be replaced by $t^{**}$ (or $s^{**}$), given in closed form by (2.5) (or (2.21)), in both theorems.

(b) We have that

$$L_{-1} \le L_0 \le L_2, \qquad (2.43)$$

$$L \le L_1 \le L_3 \qquad (2.44)$$

hold in general, and $\frac{L_2}{L_0}$, $\frac{L_3}{L_1}$ can be arbitrarily large [2,6,7]. If the divided difference $[x, y; F]$ is symmetric, then $L_2 = L_3$ and $L_0 = L_1$. Notice that in the literature $L_{-1} = L = L_0 = L_1 = L_2 = L_3$ is used to study iterative methods with divided differences [8–14]. However, if strict inequality holds in any of the inequalities in (2.43) or (2.44), then our approach leads to tighter majorizing sequences, weaker sufficient convergence conditions and at least as precise information on the location of the solutions [8–14].

(c) Condition (2.29) can be replaced by: for all $x, y \in D_0$, $2y - x \in D_0$, which holds, e.g., if $X = Y = D_0$ [3–5].

3. Local convergence

In this section, we present the local convergence of the Kurchatov method (1.2).

Theorem 3.1. Let $F$ be a continuous nonlinear operator defined on an open subset $D$ of a Banach space $X$ with values in a Banach space $Y$. Assume:

(a) the equation $F(x) = 0$ has a solution $x^* \in D$ at which the Fréchet derivative exists and is invertible;

(b) there exist $M_i \ge 0$, $i = 0, 1, 2$, $M \ge 0$ with $M_0 + M_1 + M_2 + M > 0$ such that the divided differences of order one and two of $F$ on $D_0 \subseteq D$ satisfy the following Lipschitz conditions:

$$\|F'(x^*)^{-1}([x, y; F] - F'(x^*))\| \le M_0\|x - x^*\| + M_1\|y - x^*\| \quad \text{for each } x, y \in D_0, \qquad (3.1)$$


$$\|F'(x^*)^{-1}([x, x^*; F] - [y, x^*; F])\| \le M_2\|x - y\| \quad \text{for each } x, y \in D_0, \qquad (3.2)$$

$$\|F'(x^*)^{-1}([x, v, y; F] - [x, z, y; F])\| \le M\|v - z\| \quad \text{for each } x, y, z, v \in D_0; \qquad (3.3)$$

(c) the ball

$$U^* = \bar{U}(x^*, 3r^*) \subseteq D_0, \qquad (3.4)$$

where $r^*$ is the unique positive root of the polynomial $q$ defined by

$$q(t) = 4M_2 t^3 + 2Mt^2 + (3M_0 + M_1)t - 1. \qquad (3.5)$$

Then, the sequence $\{x_n\}$ generated by method (1.2) is well defined, remains in $U(x^*, r^*)$ for each $n = 0, 1, 2, \ldots$ and converges to $x^*$, provided that

$$x_{-1}, x_0 \in U(x^*, r^*). \qquad (3.6)$$

Moreover, the following estimates hold for n ≥ 0:

$$\|x_{n+1} - x^*\| \le \frac{M\|x_{n-1} - x^*\|\|x_n - x_{n-1}\| + M_2\|x_n - x_{n-1}\|^2\|x_n - x^*\|}{1 - M_0(\|x_n - x^*\| + \|x_n - x_{n-1}\|) - M_1\|x_{n-1} - x^*\|}\,\|x_n - x^*\| \le \|x_n - x^*\| < r^*. \qquad (3.7)$$

Furthermore, if there exists $R^* \in [r^*, \frac{1}{M_0})$ ($M_0 \ne 0$) such that $\bar{U}(x^*, R^*) \subseteq D_0$, then the limit point $x^*$ is the only solution of the equation $F(x) = 0$ in $\bar{U}(x^*, R^*)$. If $M_0 = 0$, $x^*$ is unique in $U(x^*, r^*)$.

Proof. We shall show that $x_{n+1} \in U(x^*, r^*)$ and (3.7) hold for all $n \ge 0$ by induction. It is easy to see that the polynomial $q$ defined by (3.5) has a unique positive root $r^*$ under the hypotheses. We have from (3.5) that



$$1 - (3M_0 + M_1)r^* = 4M_2 r^{*3} + 2Mr^{*2} \begin{cases} > 0, & \text{if } M_2 + M > 0; \\ = 0, & \text{if } M_2 + M = 0. \end{cases}$$

Let us suppose (3.6) holds. Notice that for each $u, v \in U(x^*, r^*)$, we get that $\|2v - u - x^*\| \le 2\|v - x^*\| + \|u - x^*\| < 3r^*$. Then, we have from the hypotheses (3.1), (3.5) and (3.6) that

$$\begin{aligned}
\|I - F'(x^*)^{-1}A_0\| &= \|F'(x^*)^{-1}([2x_0 - x_{-1}, x_{-1}; F] - F'(x^*))\| \\
&\le M_0\|2x_0 - x_{-1} - x^*\| + M_1\|x_{-1} - x^*\| \\
&\le M_0(\|x_0 - x^*\| + \|x_0 - x_{-1}\|) + M_1\|x_{-1} - x^*\| \\
&\le M_0(2\|x_0 - x^*\| + \|x_{-1} - x^*\|) + M_1\|x_{-1} - x^*\| \\
&\begin{cases} < (3M_0 + M_1)r^* = 1, & \text{if } M_2 = M = 0 \ (\text{i.e., } 3M_0 + M_1 > 0); \\ \le (3M_0 + M_1)r^* < 1, & \text{else}. \end{cases} \qquad (3.8)
\end{aligned}$$

It follows from (3.8) and the Banach lemma on invertible operators that $A_0^{-1}$ exists, and the following estimate holds:

$$\|A_0^{-1}F'(x^*)\| \le \frac{1}{1 - M_0(\|x_0 - x^*\| + \|x_0 - x_{-1}\|) - M_1\|x_{-1} - x^*\|} \le \frac{1}{1 - M_0(2\|x_0 - x^*\| + \|x_{-1} - x^*\|) - M_1\|x_{-1} - x^*\|}. \qquad (3.9)$$

Hence, x1 is well defined, and we have from (1.2) that

$$\begin{aligned}
x_1 - x^* &= x_0 - x^* - A_0^{-1}F(x_0) \\
&= A_0^{-1}([2x_0 - x_{-1}, x_{-1}; F] - [x_0, x^*; F])(x_0 - x^*) \\
&= A_0^{-1}([2x_0 - x_{-1}, x_{-1}; F] - [2x_0 - x_{-1}, x^*; F] + [2x_0 - x_{-1}, x^*; F] - [x_0, x^*; F])(x_0 - x^*) \\
&= A_0^{-1}([2x_0 - x_{-1}, x_{-1}, x^*; F](x_{-1} - x^*) + [2x_0 - x_{-1}, x_0, x^*; F](x_0 - x_{-1}))(x_0 - x^*) \\
&= A_0^{-1}\big(([2x_0 - x_{-1}, x_{-1}, x^*; F] - [2x_0 - x_{-1}, x_0, x^*; F])(x_{-1} - x^*) + [2x_0 - x_{-1}, x_0, x^*; F](x_0 - x^*)\big)(x_0 - x^*) \\
&= A_0^{-1}F'(x^*)\,F'(x^*)^{-1}\big(([2x_0 - x_{-1}, x_{-1}, x^*; F] - [2x_0 - x_{-1}, x_0, x^*; F])(x_{-1} - x^*) + ([2x_0 - x_{-1}, x^*; F] - [x_0, x^*; F])(x_0 - x^*)\big)(x_0 - x^*). \qquad (3.10)
\end{aligned}$$

Using (3.2), (3.3), (3.5), (3.9) and (3.10), we have that




$$\begin{aligned}
\|x_1 - x^*\| &\le \frac{M\|x_0 - x_{-1}\|\|x_{-1} - x^*\| + M_2\|x_0 - x_{-1}\|^2\|x_0 - x^*\|}{1 - M_0(\|x_0 - x^*\| + \|x_0 - x_{-1}\|) - M_1\|x_{-1} - x^*\|}\,\|x_0 - x^*\| \\
&\le \frac{M(\|x_0 - x^*\| + \|x_{-1} - x^*\|)\|x_{-1} - x^*\| + M_2(\|x_0 - x^*\| + \|x_{-1} - x^*\|)^2\|x_0 - x^*\|}{1 - M_0(\|x_0 - x^*\| + \|x_0 - x_{-1}\|) - M_1\|x_{-1} - x^*\|}\,\|x_0 - x^*\| \\
&\le \frac{M\|x_0 - x^*\|\|x_{-1} - x^*\| + M\|x_{-1} - x^*\|^2 + 4M_2 r^{*2}\|x_0 - x^*\|}{1 - M_0(2\|x_0 - x^*\| + \|x_{-1} - x^*\|) - M_1\|x_{-1} - x^*\|}\,\|x_0 - x^*\| \\
&\begin{cases} = 0 \le \|x_0 - x^*\| < r^*, & \text{if } M = M_2 = 0, \\[4pt] \le \dfrac{2Mr^{*2} + 4M_2 r^{*3}}{1 - (3M_0 + M_1)r^*}\,\|x_0 - x^*\| = \|x_0 - x^*\| < r^*, & \text{else}, \end{cases}
\end{aligned} \qquad (3.11)$$

which means $x_1 \in U(x^*, r^*)$ and (3.7) is true for $n = 0$. By simply replacing $x_{-1}, x_0, x_1$ by $x_{k-1}, x_k, x_{k+1}$ in the preceding estimates, we arrive at the estimates (3.7). Then, in view of the estimate $\|x_{k+1} - x^*\| \le \|x_k - x^*\| < r^*$, we deduce that $\lim_{k \to \infty} x_k = x^*$ and $x_{k+1} \in U(x^*, r^*)$. Finally, to show the uniqueness part, let $y^* \in \bar{U}(x^*, R^*)$ be a solution of the equation $F(x) = 0$. Set $Q = [y^*, x^*; F]$. Then, if $M_0 > 0$, using (3.1), we get that

$$\|F'(x^*)^{-1}(Q - F'(x^*))\| \le M_0\|y^* - x^*\| \le M_0 R^* < 1. \qquad (3.12)$$

It follows from (3.12) that $Q$ is invertible. In view of the identity

$$0 = F(y^*) - F(x^*) = Q(y^* - x^*),$$

we deduce that $x^* = y^*$. If $M_0 = 0$ and $y^* \in U(x^*, r^*)$, it follows from (3.12) that $x^* = y^*$ and $x^*$ is the unique solution of the equation $F(x) = 0$ in $U(x^*, r^*)$. □

Remark 3.2. If $M + M_2 > 0$, the estimate (3.7) means that the sequence $\{x_n\}$ converges to $x^*$ quadratically. In fact, by (3.7), we have that the following estimates hold for all $n \ge 0$:

$$\begin{aligned}
\|x_{n+1} - x^*\| &\le \frac{M\|x_{n-1} - x^*\|\|x_n - x^*\| + M\|x_{n-1} - x^*\|^2 + M_2(\|x_n - x^*\| + \|x_{n-1} - x^*\|)^2\|x_n - x^*\|}{1 - M_0(2\|x_n - x^*\| + \|x_{n-1} - x^*\|) - M_1\|x_{n-1} - x^*\|}\,\|x_n - x^*\| \\
&\le \frac{Mr^*\|x_n - x^*\|^2 + M\|x_n - x^*\|\|x_{n-1} - x^*\|^2 + 4M_2 r^{*2}\|x_n - x^*\|^2}{1 - (3M_0 + M_1)r^*}, \qquad (3.13)
\end{aligned}$$

which means

$$\frac{\|x_{n+1} - x^*\|}{r^*} \le \frac{(Mr^{*2} + 4M_2 r^{*3})\left(\frac{\|x_n - x^*\|}{r^*}\right)^2 + Mr^{*2}\,\frac{\|x_n - x^*\|}{r^*}\left(\frac{\|x_{n-1} - x^*\|}{r^*}\right)^2}{1 - (3M_0 + M_1)r^*}, \quad n \ge 0. \qquad (3.14)$$

Denote

$$q_n = \frac{\|x_n - x^*\|}{r^*}, \quad n \ge -1. \qquad (3.15)$$

We have from (3.14) and (3.15) that

$$q_{n+1} \le c_1 q_n^2 + c_2 q_n q_{n-1}^2, \quad n \ge 0, \qquad (3.16)$$

where

$$c_1 = \frac{Mr^{*2} + 4M_2 r^{*3}}{1 - (3M_0 + M_1)r^*}, \qquad c_2 = \frac{Mr^{*2}}{1 - (3M_0 + M_1)r^*}.$$

In view of the definition of $r^*$, we have that $0 \le c_2 < c_1 < 1$ and $c_1 + c_2 = 1$. By (3.16), we deduce that

$$\begin{aligned}
q_1 &\le c_1 q_0^2 + c_2 q_0 q_{-1}^2 \le c_1 q_0 + c_2 q_0 = q_0, \\
q_2 &\le c_1 q_1^2 + c_2 q_1 q_0^2 \le c_1 q_1 q_0 + c_2 q_1 q_0 = q_1 q_0, \\
&\vdots \\
q_{n+1} &\le c_1 q_n^2 + c_2 q_n q_{n-1}^2 \le c_1 q_n q_{n-1} + c_2 q_n q_{n-1} = q_n q_{n-1}, \quad n \ge 1.
\end{aligned}$$

The last estimates show that $\{q_n\}$ converges to zero with order at least $\frac{1 + \sqrt{5}}{2} \approx 1.618$. Let us assume that $\{q_n\}$ converges to zero with order $q$ and that the following limit holds:

$$\frac{q_{n+1}}{q_n^q} \to A \quad \text{as } n \to \infty, \qquad (3.17)$$

which means

$$q_{n+1} \sim A q_n^q \quad \text{as } n \to \infty \qquad (3.18)$$

and

$$q_n \sim A q_{n-1}^q \quad \text{as } n \to \infty. \qquad (3.19)$$


Here, A > 0 is a constant. We can rewrite (3.19) as

$$\left(\frac{q_n}{A}\right)^{\frac{1}{q}} \sim q_{n-1} \quad \text{as } n \to \infty. \qquad (3.20)$$

Using (3.16), it is obvious that $q \le 2$. Let us suppose $1 < q < 2$. Then, using (3.16) and (3.20), we have that

$$\frac{q_{n+1}}{q_n^q} \le c_1 q_n^{2-q} + c_2\,\frac{q_n q_{n-1}^2}{q_n^q}, \quad n \ge 0. \qquad (3.21)$$

Let $n$ tend to $\infty$ on both sides of (3.21). The left side of (3.21) tends to the constant $A$, while the first term on the right side tends to zero. As for the second term on the right side of (3.21), we have that

$$c_2\,\frac{q_n q_{n-1}^2}{q_n^q} \sim c_2\,q_n^{1-q}\left(\frac{q_n}{A}\right)^{\frac{2}{q}} = c_2\left(\frac{1}{A}\right)^{\frac{2}{q}} q_n^{\frac{(2-q)(1+q)}{q}} \to 0 \quad \text{as } n \to \infty,$$

which leads to a contradiction (since $A > 0$). So, we must have $q = 2$.

If we drop the divided difference of order two from the hypotheses, we can show two more local convergence results along the same lines as Theorem 3.1. In the next result, we suppose that the divided difference of order one $[x, y; F]$ can be expressed as

$$[x, y; F] = \int_0^1 F'(tx + (1 - t)y)\,dt \quad \text{for } x, y \in D_0 \subseteq D,$$

which holds in many cases [2,6].

Theorem 3.3. Let $F: D \subseteq X \to Y$ be a Fréchet-differentiable operator. Suppose:

(a) the equation $F(x) = 0$ has a solution $x^* \in D$ at which the Fréchet derivative exists and is invertible;

(b) there exist $N_i \ge 0$, $i = 0, 1$, $N \ge 0$ with $N_0 + N_1 + N > 0$ such that the divided difference of order one of $F$ on $D_0 \subseteq D$ satisfies the following Lipschitz conditions:

$$\|F'(x^*)^{-1}(F'(x) - F'(x^*))\| \le N_0\|x - x^*\| \quad \text{for each } x \in D_0,$$

$$\|F'(x^*)^{-1}([x, y; F] - [x, x^*; F])\| \le N\|y - x^*\| \quad \text{for each } x, y \in D_0,$$

$$\|F'(x^*)^{-1}([x, x^*; F] - [y, x^*; F])\| \le N_1\|x - y\| \quad \text{for each } x, y \in D_0;$$

(c) the ball

$$\bar{U}(x^*, 3r_0) \subseteq D_0,$$

where

$$r_0 = \frac{1}{\frac{3}{2}N_0 + N + 2N_1}.$$

Then, the sequence $\{x_n\}$ generated by method (1.2) is well defined, remains in $U(x^*, r_0)$ for all $n \ge 0$ and converges to $x^*$, provided that

$$x_{-1}, x_0 \in U(x^*, r_0).$$

Moreover, the following estimates hold for each $n = 0, 1, 2, \ldots$:

$$\|x_{n+1} - x^*\| \le \frac{N\|x_{n-1} - x^*\| + N_1\|x_n - x_{n-1}\|}{1 - N_0\left(\|x_n - x^*\| + \frac{1}{2}\|x_{n-1} - x^*\|\right)}\,\|x_n - x^*\|. \qquad (3.22)$$

Furthermore, if there exists $R_0^* \in [r_0, \frac{1}{N_1})$, $N_1 \ne 0$, such that $\bar{U}(x^*, R_0^*) \subseteq D_0$, then the limit point $x^*$ is the only solution of the equation $F(x) = 0$ in $\bar{U}(x^*, R_0^*)$. If $N_1 = 0$, $x^*$ is unique in $U(x^*, r_0)$.

Proof. We shall show that $x_{n+1} \in U(x^*, r_0)$ and (3.22) hold for all $n \ge 0$ by induction. Let us suppose $x_{-1}, x_0 \in U(x^*, r_0)$ holds. Notice that for each $u, v \in U(x^*, r_0)$, we get that $\|2v - u - x^*\| \le 2\|v - x^*\| + \|u - x^*\| < 3r_0$. Then, we have from the hypotheses that

$$\begin{aligned}
\|I - F'(x^*)^{-1}A_0\| &= \|F'(x^*)^{-1}([2x_0 - x_{-1}, x_{-1}; F] - F'(x^*))\| \\
&= \left\|F'(x^*)^{-1}\int_0^1\big[F'(t(2x_0 - x_{-1}) + (1 - t)x_{-1}) - F'(x^*)\big]dt\right\| \\
&\le \int_0^1 N_0\|t(2x_0 - x_{-1} - x^*) + (1 - t)(x_{-1} - x^*)\|\,dt \\
&= \int_0^1 N_0\|2t(x_0 - x^*) + (1 - 2t)(x_{-1} - x^*)\|\,dt \\
&\le N_0\left(\|x_0 - x^*\| + \frac{1}{2}\|x_{-1} - x^*\|\right) < \frac{3}{2}N_0 r_0 < 1. \qquad (3.23)
\end{aligned}$$


It follows from (3.23) and the Banach lemma on invertible operators that $A_0^{-1}$ exists, and the following estimate holds:

$$\|A_0^{-1}F'(x^*)\| \le \frac{1}{1 - N_0\left(\|x_0 - x^*\| + \frac{1}{2}\|x_{-1} - x^*\|\right)}. \qquad (3.24)$$

Hence, x1 is well defined, and we have from (1.2) that

$$\begin{aligned}
x_1 - x^* &= x_0 - x^* - A_0^{-1}F(x_0) \\
&= A_0^{-1}([2x_0 - x_{-1}, x_{-1}; F] - [x_0, x^*; F])(x_0 - x^*) \\
&= A_0^{-1}([2x_0 - x_{-1}, x_{-1}; F] - [2x_0 - x_{-1}, x^*; F] + [2x_0 - x_{-1}, x^*; F] - [x_0, x^*; F])(x_0 - x^*). \qquad (3.25)
\end{aligned}$$

Then, we have that

$$\|x_1 - x^*\| \le \frac{N\|x_{-1} - x^*\| + N_1\|x_0 - x_{-1}\|}{1 - N_0\left(\|x_0 - x^*\| + \frac{1}{2}\|x_{-1} - x^*\|\right)}\,\|x_0 - x^*\|, \qquad (3.26)$$

which means $x_1 \in U(x^*, r_0)$ and (3.22) is true for $n = 0$. By simply replacing $x_{-1}, x_0, x_1$ by $x_{k-1}, x_k, x_{k+1}$ in the preceding estimates, we arrive at the estimates (3.22). Then, in view of the estimate $\|x_{k+1} - x^*\| \le \|x_k - x^*\| < r_0$, we deduce that $\lim_{k \to \infty} x_k = x^*$ and $x_{k+1} \in U(x^*, r_0)$. Finally, to show the uniqueness part, let $y^* \in \bar{U}(x^*, R_0^*)$ be a solution of the equation $F(x) = 0$. Set $Q = [y^*, x^*; F]$. Then, if $N_1 > 0$, we get that

$$\|F'(x^*)^{-1}([y^*, x^*; F] - [x^*, x^*; F])\| \le N_1\|y^* - x^*\| \le N_1 R_0^* < 1. \qquad (3.27)$$

It follows from (3.27) that Q is invertible. In view of the identity

$$0 = F(y^*) - F(x^*) = Q(y^* - x^*),$$

we deduce that $x^* = y^*$. If $N_1 = 0$ and $y^* \in U(x^*, r_0)$, it follows from (3.27) that $x^* = y^*$ and $x^*$ is the unique solution of the equation $F(x) = 0$ in $U(x^*, r_0)$. □

We omit the proof of the next result, since it is similar to that of Theorem 3.3.

Theorem 3.4. Let $F: D \subseteq X \to Y$ be a continuous operator. Suppose:

(a) the equation $F(x) = 0$ has a solution $x^* \in D$ at which the Fréchet derivative exists and is invertible;

(b) there exist $H_i \ge 0$, $i = 0, 1, 2$, $H \ge 0$ with $H_0 + H_1 + H_2 + H > 0$ such that the divided difference of order one of $F$ on $D_0 \subseteq D$ satisfies the following Lipschitz conditions:

$$\|F'(x^*)^{-1}([x, y; F] - F'(x^*))\| \le H_0\|x - x^*\| + H\|y - x^*\| \quad \text{for each } x, y \in D_0,$$

$$\|F'(x^*)^{-1}([y, x^*; F] - [2y - x, x; F])\| \le H_1\|y - x\| + H_2\|x - x^*\| \quad \text{for each } x, y \in D_0;$$

(c) the ball

$$\bar{U}(x^*, 3r_1) \subseteq D_0,$$

where

$$r_1 = \frac{1}{H + 3H_0 + 2H_1 + H_2}.$$

Then, the sequence $\{x_n\}$ generated by method (1.2) is well defined, remains in $U(x^*, r_1)$ for all $n \ge 0$ and converges to $x^*$, provided that

$$x_{-1}, x_0 \in U(x^*, r_1).$$

Moreover, the following estimates hold for each $n = 0, 1, 2, \ldots$:

$$\|x_{n+1} - x^*\| \le \frac{H_1\|x_n - x_{n-1}\| + H_2\|x_{n-1} - x^*\|}{1 - (H_0\|2x_n - x_{n-1} - x^*\| + H\|x_{n-1} - x^*\|)}\,\|x_n - x^*\|.$$

Furthermore, if there exists $R_1^* \in [r_1, \frac{1}{H_0 + H})$, $H_0 + H \ne 0$, such that $\bar{U}(x^*, R_1^*) \subseteq D_0$, then the limit point $x^*$ is the only solution of the equation $F(x) = 0$ in $\bar{U}(x^*, R_1^*)$. If $H_0 = H = 0$, $x^*$ is unique in $U(x^*, r_1)$.

4. Numerical examples

In this section, we present some numerical examples.

Example 4.1. Let $X = Y = \mathbb{R}$, $D_0 = D = (-1, 1)$ and define $F$ on $D$ by

$$F(x) = e^x - 1. \qquad (4.1)$$
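A direct run of the Kurchatov method (1.2) on (4.1) is easy to sketch in the scalar case, where $A_n$ reduces to the ordinary divided difference $\frac{F(2x_n - x_{n-1}) - F(x_{n-1})}{2(x_n - x_{n-1})}$. The starting points below are our own illustrative choices:

```python
import math

def kurchatov(F, x_prev, x, tol=1e-14, max_iter=50):
    """Kurchatov method (1.2): x_{n+1} = x_n - A_n^{-1} F(x_n), where
    A_n = [2x_n - x_{n-1}, x_{n-1}; F] is a first-order divided difference."""
    for _ in range(max_iter):
        u, v = 2 * x - x_prev, x_prev
        if u == v:  # the divided difference degenerates once converged
            break
        A = (F(u) - F(v)) / (u - v)
        x_prev, x = x, x - F(x) / A
        if abs(F(x)) < tol:
            break
    return x

root = kurchatov(lambda t: math.exp(t) - 1.0, 0.2, 0.15)
# root is close to the solution x* = 0 of (4.1)
```

A few iterations suffice here, reflecting the quadratic convergence established in Remark 3.2.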


Then, x = 0 is a solution of Eq. (1.1), and F  (x ) = 1. Note that for any x, y ∈ D0 , we have

  1    F  (tx + (1 − t )y)dt − F  (x )   0  1    =  (etx+(1−t )y − 1)dt  0  1  2   tx + ( 1 − t ) y ( tx + ( 1 − t ) y ) + + · · · )dt  =  (tx + (1 − t )y)(1 + 2! 3! 0  1      1 1 ≤  (tx + (1 − t )y) 1 + + + · · · dt 

|F  (x )−1 ([x, y; F ] − [x , x ; F ])| =

2!

0

3!

e−1 ≤ (|x − x | + |y − x |), 2

(4.2)

$$\begin{aligned}
|F'(x^*)^{-1}([x, x^*; F] - [y, x^*; F])| &= \left|\int_0^1 F'(tx+(1-t)x^*)\,dt - \int_0^1 F'(ty+(1-t)x^*)\,dt\right| \\
&= \left|\int_0^1 (e^{tx+(1-t)x^*} - e^{ty+(1-t)x^*})\,dt\right| \\
&= \left|\int_0^1 \Big(t(x-y) + \frac{t^2(x^2-y^2)}{2!} + \frac{t^3(x^3-y^3)}{3!} + \cdots\Big)\,dt\right| \\
&= \left|\int_0^1 t(x-y)\Big(1 + \frac{t(x+y)}{2!} + \frac{t^2(x^2+xy+y^2)}{3!} + \cdots\Big)\,dt\right| \\
&\le \int_0^1 t\Big(1 + t + \frac{t^2}{2!} + \cdots\Big)\,dt\,|x-y| = \int_0^1 t e^t\,dt\,|x-y| = |x - y| \qquad (4.3)
\end{aligned}$$

and



$$|F'(x^*)^{-1}([x, v, y; F] - [x, z, y; F])| = |[x, v, z, y; F](v - z)| = \left|\frac{F'''(\xi)}{3!}(v - z)\right| = \left|\frac{e^{\xi}}{6}(v - z)\right| \le \frac{e}{6}|v - z|, \quad \text{for some } \xi \in D_0. \qquad (4.4)$$

Then, we can choose M0 = M1 = (e − 1)/2, M2 = 1 and M = e/6 in Theorem 3.1. By (3.5), we get r* ≈ 0.254664790, and U(x*, 3r*) ⊆ D0 holds. That is, all conditions of Theorem 3.1 are satisfied and Theorem 3.1 applies. Note that

$$\begin{aligned}
|F'(x^*)^{-1}([x, y; F] - [z, u; F])| &= \left|\int_0^1 (F'(tx+(1-t)y) - F'(tz+(1-t)u))\,dt\right| \\
&= \left|\int_0^1\!\!\int_0^1 F''\big(\theta(tx+(1-t)y) + (1-\theta)(tz+(1-t)u)\big)\big(tx+(1-t)y - (tz+(1-t)u)\big)\,d\theta\,dt\right| \\
&= \left|\int_0^1\!\!\int_0^1 e^{\theta(tx+(1-t)y)+(1-\theta)(tz+(1-t)u)}\big(tx+(1-t)y - (tz+(1-t)u)\big)\,d\theta\,dt\right| \\
&\le \int_0^1 e\,|t(x-z) + (1-t)(y-u)|\,dt \\
&\le \frac{e}{2}(|x-z| + |y-u|), \qquad (4.5)
\end{aligned}$$

so, if we use only condition (4.5) instead of conditions (3.1) and (3.2) in Theorem 3.1, we get M0 = M1 = M2 = e/2, M = e/6 and a smaller convergence-ball radius r* ≈ 0.136065472 for method (1.2) than the r* given by (3.5). Let us choose x−1 = 0.136, x0 = 0.128, and let the sequence {xn} be generated by method (1.2). Table 1 compares the error estimates for Example 4.1; it shows that tighter error estimates are obtained from (3.7) by using both conditions (3.1) and (3.2) rather than only condition (4.5).

Example 4.2. Let X = Y = R, D0 = D = (0.72, 1.28) and define F on D by

F(x) = x³ − 0.75. (4.6)

Table 1. The comparison results of error estimates for Example 4.1.

n | First estimate of (3.7) using both conditions (3.1) and (3.2) | First estimate of (3.7) using only condition (4.5)
0 | 8.37016E-05 | 0.000102359
1 | 7.12968E-05 | 8.5783E-05
2 | 1.39545E-09 | 1.40671E-09
3 | 1.92636E-18 | 1.92646E-18
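The iterates behind Table 1 can be reproduced directly. In the scalar case, method (1.2) takes the Kurchatov form x_{n+1} = x_n − F(x_n)/[2x_n − x_{n−1}, x_{n−1}; F], the same step used to compute A0 and x1 in Example 4.2 below. A minimal Python sketch with the initial points x−1 = 0.136, x0 = 0.128 from the text (the loop count and printing are illustrative):

```python
import math

# Kurchatov iteration for Example 4.1, F(x) = e^x - 1, solution x* = 0:
# x_{n+1} = x_n - F(x_n) / [2 x_n - x_{n-1}, x_{n-1}; F]
def F(x):
    return math.exp(x) - 1.0

def dd(u, v):
    # scalar divided difference [u, v; F] = (F(u) - F(v)) / (u - v)
    return (F(u) - F(v)) / (u - v)

x_prev, x = 0.136, 0.128  # the initial points used for Table 1
errors = []
for _ in range(5):
    x_prev, x = x, x - F(x) / dd(2.0 * x - x_prev, x_prev)
    errors.append(abs(x))  # |x_n - x*|, since x* = 0

print(errors)  # the errors collapse rapidly toward 0
```

The printed errors shrink at the quadratic rate the theorems predict; no claim is made that they coincide with the (3.7) bounds in Table 1, which are upper estimates.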

Let x−1 = 0.95, x0 = 1 be two initial points for the Kurchatov method (1.2). Then, we have A0 = 3.0025, x1 ≈ 0.916736053, t0 = 0.05 and t1 = t0 + |A0^{−1} F(x0)| ≈ 0.133263947. Note that, for any x, y, z, v ∈ D, we have

$$\begin{aligned}
|A_0^{-1}([x, y; F] - [z, v; F])| &= |A_0^{-1}(x^2 + xy + y^2 - (z^2 + zv + v^2))| \\
&= |A_0^{-1}((x+z)(x-z) + xy - xv + xv - zv + (y+v)(y-v))| \\
&= |A_0^{-1}((x+z+v)(x-z) + (x+y+v)(y-v))| \\
&\le |A_0^{-1}|\,(|x+z+v|\,|x-z| + |x+y+v|\,|y-v|). \qquad (4.7)
\end{aligned}$$
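The values A0 = 3.0025, x1 ≈ 0.916736053 and t1 ≈ 0.133263947 quoted above can be checked in a few lines; a Python sketch of one Kurchatov step, computing the divided difference from its definition:

```python
# One Kurchatov step for Example 4.2, F(x) = x^3 - 0.75.
def F(x):
    return x ** 3 - 0.75

def dd(u, v):
    # [u, v; F] = (F(u) - F(v)) / (u - v)  (= u^2 + u*v + v^2 for this F)
    return (F(u) - F(v)) / (u - v)

x_m1, x0, t0 = 0.95, 1.0, 0.05
A0 = dd(2 * x0 - x_m1, x_m1)   # [2 x0 - x_{-1}, x_{-1}; F] = 3.0025
x1 = x0 - F(x0) / A0           # ≈ 0.916736053
t1 = t0 + abs(F(x0) / A0)      # t1 = t0 + |A0^{-1} F(x0)| ≈ 0.133263947
print(A0, x1, t1)
```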

Then, condition (2.28) in Theorem 2.3 holds for the constants L2 = L3 = 3 × 1.28 × |A0^{−1}| ≈ 1.278934221. Setting z = 2x0 − x−1, v = x−1 in (4.7), we deduce that condition (2.27) holds for L0 = 3.28 × |A0^{−1}| ≈ 1.092422981 and L1 = 3.51 × |A0^{−1}| ≈ 1.169025812. Furthermore, setting x = 2x1 − x0, y = x0, z = 2x0 − x−1, v = x−1 in (4.7), we deduce that the second condition of (2.26) holds for L−1 = |2x1 + x0| × |A0^{−1}| ≈ 0.943704282 and L = |2x1 + x−1| × |A0^{−1}| ≈ 0.927051493.

Using method (1.2), we get t2 ≈ 0.155936165, t3 ≈ 0.164388084, t4 ≈ 0.165312739, t5 ≈ 0.165346407, t6 ≈ 0.165346537 and t7 ≈ 0.165346537; that is, t* ≈ 0.165346537. Then, we have r0 = max(2(t1 − t0), t* − t0) ≈ 0.166527893, and U(x0, r0) ≈ (0.833472107, 1.166527893) ⊂ D. Next, we verify that all conditions of Lemma 2.1 hold. In fact, by the definition of the polynomial p, we get α ≈ 0.737640298. We also have

$$0 < \frac{L_0(t_1 - t_{-1}) + L_1(t_0 - t_{-1})}{1 - (L_{-1}(2t_1 - t_0) + L t_0)} \approx 0.272293345 \le \alpha,$$

$$0 < \frac{L_2(t_2 - t_0) + L_3(t_1 - t_0)}{1 - (L_0(2t_2 - t_1) + L_1 t_1)} \approx 0.372787434 \le \alpha$$

and

$$0 < \alpha \le 1 - \frac{(L_0 + L_1)(t_1 - t_0)}{1 - (L_0 + L_1)t_0} \approx 0.787697259.$$

By now, we see that all conditions of Theorem 2.3 are satisfied, so Theorem 2.3 applies. However, we will show that one condition of [13, Theorem 3.1] is not satisfied, so that theorem does not apply. In fact, we can obtain the constants in [13, Theorem 3.1] as follows:

a = 0.05, c ≈ 0.083263947, p0 ≈ 1.278934221, q0 ≈ 0.333055787, s ≈ 1.636117697, r ≈ 0.340814461, r0 ≈ 0.095245569.

Then, the condition V0 = U(x0, 3r0) ≈ (0.714263293, 1.285736707) ⊂ D of [13, Theorem 3.1] is not satisfied.

Next, we give two examples in several variables. For x = (x1, x2, …, xm) ∈ R^m (m > 1), we define ||x|| = ||x||∞ = max_{1≤i≤m} |xi| and the corresponding norm of A = (aij) ∈ R^{m×m} as

$$\|A\| = \max_{1 \le i \le m} \sum_{j=1}^{m} |a_{ij}|.$$

The divided difference [u, v; F] of the operator F = (F1, F2, …, Fm)^t at the points u = (u1, u2, …, um)^t, v = (v1, v2, …, vm)^t ∈ R^m is a matrix in R^{m×m}, whose entries can be expressed by [8]

$$[u, v; F]_{ij} = \frac{1}{u_j - v_j}\big(F_i(u_1, \ldots, u_j, v_{j+1}, \ldots, v_m) - F_i(u_1, \ldots, u_{j-1}, v_j, \ldots, v_m)\big), \quad 1 \le i, j \le m.$$
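The entrywise formula from [8] translates directly into code. A Python sketch (the function name divided_difference and the test operator are ours), checked against the diagonal divided difference of Example 4.3 below:

```python
import math

def divided_difference(F, u, v):
    # [u, v; F]_{ij} = (F_i(u_1,..,u_j, v_{j+1},..,v_m)
    #                   - F_i(u_1,..,u_{j-1}, v_j,..,v_m)) / (u_j - v_j)
    m = len(u)
    J = [[0.0] * m for _ in range(m)]
    for i in range(m):
        for j in range(m):
            a = list(u[: j + 1]) + list(v[j + 1:])
            b = list(u[:j]) + list(v[j:])
            J[i][j] = (F(a)[i] - F(b)[i]) / (u[j] - v[j])
    return J

# Check against Example 4.3: F(x) = (e^{x1} - 1, x2^2 + x2, x3)^t
# at u = (0.12, 0.12, 0.12), v = (0.1, 0.1, 0.1).
F = lambda x: [math.exp(x[0]) - 1, x[1] ** 2 + x[1], x[2]]
J = divided_difference(F, [0.12, 0.12, 0.12], [0.1, 0.1, 0.1])
print(J)  # diagonal ≈ 1.116296675, 1.22, 1; off-diagonal entries vanish
```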

Example 4.3. Let X = Y = R³, D0 = D = (−1, 1)³ and define F = (F1, F2, F3)^t on D by

F(x) = F(x1, x2, x3) = (e^{x1} − 1, x2² + x2, x3)^t.

For the points u = (u1, u2, u3)^t, v = (v1, v2, v3)^t ∈ D, we get

$$[u, v; F] = \begin{pmatrix} \dfrac{e^{u_1} - e^{v_1}}{u_1 - v_1} & 0 & 0 \\ 0 & u_2 + v_2 + 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}. \qquad (4.8)$$


Let x^{−1} = (0.1, 0.1, 0.1)^t, x^0 = (0.11, 0.11, 0.11)^t be two initial points for the Kurchatov method (1.2). Here, we write the iterates as x^n, n ≥ −1, to distinguish them from their components. Then, we have

2x^0 − x^{−1} = (0.12, 0.12, 0.12)^t, t0 = 0.01,

$$A_0 \approx \begin{pmatrix} 1.116296675 & 0 & 0 \\ 0 & 1.22 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad A_0^{-1} \approx \begin{pmatrix} 0.895819205 & 0 & 0 \\ 0 & 0.819672131 & 0 \\ 0 & 0 & 1 \end{pmatrix},$$

t1 = t0 + ||A0^{−1} F(x^0)|| = 0.12, x^1 ≈ (0.005835871, 0.009918033, 0)^t.
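Because each component of F depends on a single variable, [u, v; F] is diagonal here and the step A0^{−1}F(x^0) needs no general linear solve. A Python sketch of the first step (variable names are ours):

```python
import math

# First Kurchatov step for Example 4.3 in R^3.
def F(x):
    return [math.exp(x[0]) - 1.0, x[1] ** 2 + x[1], x[2]]

def dd_diag(u, v):
    # diagonal of [u, v; F] for this separable F (off-diagonal entries vanish)
    return [(math.exp(u[0]) - math.exp(v[0])) / (u[0] - v[0]),
            u[1] + v[1] + 1.0,
            1.0]

x_m1 = [0.1, 0.1, 0.1]
x0 = [0.11, 0.11, 0.11]
u = [2.0 * a - b for a, b in zip(x0, x_m1)]  # 2 x^0 - x^{-1} = (0.12, 0.12, 0.12)
A0 = dd_diag(u, x_m1)                        # ≈ (1.116296675, 1.22, 1)
x1 = [a - f / d for a, f, d in zip(x0, F(x0), A0)]
print(x1)  # ≈ (0.005835871, 0.009918033, 0)
```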

Note that, for any x = (x1, x2, x3)^t, y = (y1, y2, y3)^t, z = (z1, z2, z3)^t, v = (v1, v2, v3)^t ∈ D, we have

$$[x, y; F] - [z, v; F] = \begin{pmatrix} \dfrac{e^{x_1} - e^{y_1}}{x_1 - y_1} - \dfrac{e^{z_1} - e^{v_1}}{z_1 - v_1} & 0 & 0 \\ 0 & x_2 + y_2 - z_2 - v_2 & 0 \\ 0 & 0 & 0 \end{pmatrix}. \qquad (4.9)$$

In view of

$$\begin{aligned}
\left|\frac{e^{x_1} - e^{y_1}}{x_1 - y_1} - \frac{e^{z_1} - e^{v_1}}{z_1 - v_1}\right| &= \left|\int_0^1 (e^{y_1 + t(x_1 - y_1)} - e^{v_1 + t(z_1 - v_1)})\,dt\right| \\
&= \left|\int_0^1\!\!\int_0^1 e^{v_1 + t(z_1 - v_1) + \theta(y_1 + t(x_1 - y_1) - v_1 - t(z_1 - v_1))}\big(y_1 + t(x_1 - y_1) - v_1 - t(z_1 - v_1)\big)\,d\theta\,dt\right| \\
&\le \int_0^1 e\,|t(x_1 - z_1) + (1 - t)(y_1 - v_1)|\,dt \\
&\le \frac{e}{2}(|x_1 - z_1| + |y_1 - v_1|),
\end{aligned}$$

we have

$$\begin{aligned}
\|A_0^{-1}([x, y; F] - [z, v; F])\| &\le \max\Big(\frac{e \times 0.895819205}{2}(|x_1 - z_1| + |y_1 - v_1|),\ 0.819672131\,(|x_2 - z_2| + |y_2 - v_2|)\Big) \\
&\le \max\Big(\frac{e \times 0.895819205}{2}(\|x - z\| + \|y - v\|),\ 0.819672131\,(\|x - z\| + \|y - v\|)\Big) \\
&= \frac{e \times 0.895819205}{2}(\|x - z\| + \|y - v\|). \qquad (4.10)
\end{aligned}$$

In particular, setting z = 2x^0 − x^{−1} and v = x^{−1} in (4.10), we have

$$\|A_0^{-1}([x, y; F] - A_0)\| \le \frac{e \times 0.895819205}{2}(\|x - (2x^0 - x^{-1})\| + \|y - x^{-1}\|). \qquad (4.11)$$

That is, we can choose the constants L0 = L1 = L2 = L3 ≈ (e × 0.895819205)/2 ≈ 1.217544533 in Theorem 2.3. By the definition of A1, we have

$$A_1 \approx \begin{pmatrix} 1.007672865 & 0 & 0 \\ 0 & 1.019836066 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

and

$$\|A_0^{-1}(A_1 - A_0)\| \approx 0.384068799.$$

Since ||2x^1 − x^0 − (2x^0 − x^{−1})|| = 0.23 and ||x^0 − x^{−1}|| = 0.01, the inequality

$$\|A_0^{-1}(A_1 - A_0)\| \approx 0.384068799 \le L_{-1}\|2x^1 - x^0 - (2x^0 - x^{-1})\| + L\|x^0 - x^{-1}\| \qquad (4.12)$$

holds provided that we choose L−1 = L ≈ 0.384068799/0.24 ≈ 1.600286661. So, the second inequality of (2.26) is true.

Using method (1.2), we get t2 ≈ 0.148267584, t3 ≈ 0.161640408, t4 ≈ 0.163517484, t5 ≈ 0.163626179 and t6 ≈ 0.163627029; that is, t* ≈ 0.163627029. Then, we have r0 = max(2(t1 − t0), t* − t0) = 0.22, and U(x^0, r0) = (−0.11, 0.33)³ ⊂ D. Next, we verify that all conditions of Lemma 2.1 hold. In fact, by the definition of the polynomial p, we get α ≈ 0.722714177. We also have

$$0 < \frac{L_0(t_1 - t_{-1}) + L_1(t_0 - t_{-1})}{1 - (L_{-1}(2t_1 - t_0) + L t_0)} \approx 0.256978034 \le \alpha,$$


$$0 < \frac{L_2(t_2 - t_0) + L_3(t_1 - t_0)}{1 - (L_0(2t_2 - t_1) + L_1 t_1)} \approx 0.473079844 \le \alpha$$

and

$$0 < \alpha \le 1 - \frac{(L_0 + L_1)(t_1 - t_0)}{1 - (L_0 + L_1)t_0} \approx 0.725454782.$$

By now, we see that all conditions of Theorem 2.3 are satisfied, so Theorem 2.3 applies.

Example 4.4. Let X = Y = R⁶, D0 = D = (−1, 1)⁶ and define F = (F1, F2, …, F6)^t on D by

$$F(x) = F(x_1, x_2, \ldots, x_6) = \Big(\sum_{k=1}^{6} d x_k - d x_1 + e^{x_1} - 1,\ \sum_{k=1}^{6} d x_k - d x_2 + e^{x_2} - 1,\ \ldots,\ \sum_{k=1}^{6} d x_k - d x_6 + e^{x_6} - 1\Big)^t, \qquad (4.13)$$

where d ∈ R is a constant. For the points u = (u1, u2, …, u6)^t, v = (v1, v2, …, v6)^t ∈ D, we get

$$[u, v; F] = \begin{pmatrix}
\frac{e^{u_1} - e^{v_1}}{u_1 - v_1} & d & d & d & d & d \\
d & \frac{e^{u_2} - e^{v_2}}{u_2 - v_2} & d & d & d & d \\
d & d & \frac{e^{u_3} - e^{v_3}}{u_3 - v_3} & d & d & d \\
d & d & d & \frac{e^{u_4} - e^{v_4}}{u_4 - v_4} & d & d \\
d & d & d & d & \frac{e^{u_5} - e^{v_5}}{u_5 - v_5} & d \\
d & d & d & d & d & \frac{e^{u_6} - e^{v_6}}{u_6 - v_6}
\end{pmatrix}.$$

Let x^{−1} = (0.1, 0.1, 0.1, 0.1, 0.1, 0.1)^t, x^0 = (0.12, 0.12, 0.12, 0.12, 0.12, 0.12)^t be two initial points for the Kurchatov method (1.2). As before, we write the iterates as x^n to distinguish them from their components. Then, we have

2x^0 − x^{−1} = (0.14, 0.14, 0.14, 0.14, 0.14, 0.14)^t, t0 = 0.02,

$$A_0 = \begin{pmatrix}
c_1 & d & d & d & d & d \\
d & c_1 & d & d & d & d \\
d & d & c_1 & d & d & d \\
d & d & d & c_1 & d & d \\
d & d & d & d & c_1 & d \\
d & d & d & d & d & c_1
\end{pmatrix}, \qquad
A_0^{-1} = c_3\begin{pmatrix}
c_2 & d & d & d & d & d \\
d & c_2 & d & d & d & d \\
d & d & c_2 & d & d & d \\
d & d & d & c_2 & d & d \\
d & d & d & d & c_2 & d \\
d & d & d & d & d & c_2
\end{pmatrix},$$

where

$$c_1 = \frac{e^{0.14} - e^{0.1}}{0.14 - 0.1} \approx 1.12757202, \quad c_2 = -4d - c_1, \quad c_3 = \frac{1}{(d - c_1)(5d + c_1)}.$$

Setting d = 3, we have c2 ≈ −13.12757202 and c3 ≈ 0.033115086. Hence, we have

A0^{−1}F(x^0) ≈ (0.119515625, 0.119515625, 0.119515625, 0.119515625, 0.119515625, 0.119515625)^t,

t1 = t0 + ||A0^{−1}F(x^0)|| ≈ 0.139515625,

x^1 ≈ (0.000484375, 0.000484375, 0.000484375, 0.000484375, 0.000484375, 0.000484375)^t.

Note that, for any x = (x1, x2, …, x6)^t, y = (y1, y2, …, y6)^t, z = (z1, z2, …, z6)^t, v = (v1, v2, …, v6)^t ∈ D, we have

$$[x, y; F] - [z, v; F] = \begin{pmatrix}
c_{11} & 0 & 0 & 0 & 0 & 0 \\
0 & c_{22} & 0 & 0 & 0 & 0 \\
0 & 0 & c_{33} & 0 & 0 & 0 \\
0 & 0 & 0 & c_{44} & 0 & 0 \\
0 & 0 & 0 & 0 & c_{55} & 0 \\
0 & 0 & 0 & 0 & 0 & c_{66}
\end{pmatrix},$$

where

$$c_{kk} = \frac{e^{x_k} - e^{y_k}}{x_k - y_k} - \frac{e^{z_k} - e^{v_k}}{z_k - v_k}, \quad k = 1, 2, \ldots, 6.$$

Similarly to the last example, we have

$$|c_{kk}| \le \frac{e}{2}(|x_k - z_k| + |y_k - v_k|) \le \frac{e}{2}(\|x - z\| + \|y - v\|), \quad k = 1, 2, \ldots, 6. \qquad (4.14)$$
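The first step of this example is cheap to reproduce: F(x^0) has six equal components, and A0 = (c1 − d)I + dJ with J the all-ones matrix maps constant vectors to constant vectors, so A0^{−1}F(x^0) = F_i(x^0)/(c1 + 5d) · (1, …, 1)^t. A Python sketch with d = 3 as in the text (this componentwise shortcut is our observation, consistent with the formula for c3 above):

```python
import math

d = 3.0
c1 = (math.exp(0.14) - math.exp(0.1)) / (0.14 - 0.1)  # ≈ 1.12757202
# common component of F(x^0) for x^0 = (0.12, ..., 0.12)^t
f = 6 * d * 0.12 - d * 0.12 + math.exp(0.12) - 1.0
step = f / (c1 + 5 * d)   # common component of A0^{-1} F(x^0), ≈ 0.119515625
x1 = 0.12 - step          # each component of x^1, ≈ 0.000484375
t1 = 0.02 + step          # t1 = t0 + ||A0^{-1} F(x^0)|| ≈ 0.139515625
print(c1, step, x1, t1)
```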


Hence, we have

$$\|A_0^{-1}([x, y; F] - [z, v; F])\| \le \frac{e|c_3|(|c_2| + 5|d|)}{2}(\|x - z\| + \|y - v\|). \qquad (4.15)$$

That is, we can choose the constants L0 = L1 = L2 = L3 ≈ e|c3|(|c2| + 5|d|)/2 ≈ 1.265967683 in Theorem 2.3. By the definition of A1, we have

$$A_1 \approx \begin{pmatrix}
c_4 & d & d & d & d & d \\
d & c_4 & d & d & d & d \\
d & d & c_4 & d & d & d \\
d & d & d & c_4 & d & d \\
d & d & d & d & c_4 & d \\
d & d & d & d & d & c_4
\end{pmatrix},$$

where c4 ≈ 1.002868011. Then, we have

$$\|A_0^{-1}(A_1 - A_0)\| \approx 0.11615517.$$

Since ||2x^1 − x^0 − (2x^0 − x^{−1})|| ≈ 0.25903125 and ||x^0 − x^{−1}|| = 0.02, the inequality

$$\|A_0^{-1}(A_1 - A_0)\| \approx 0.11615517 \le L_{-1}\|2x^1 - x^0 - (2x^0 - x^{-1})\| + L\|x^0 - x^{-1}\| \qquad (4.16)$$

holds provided that we choose L−1 = L ≈ 0.11615517/0.27903125 ≈ 0.416280148. So, the second inequality of (2.26) is true.

Using method (1.2), we get t2 ≈ 0.139034507, t3 ≈ 0.138810276, t4 ≈ 0.138810795 and t5 ≈ 0.138810795; that is, t* ≈ 0.138810795. Then, we have r0 = max(2(t1 − t0), t* − t0) ≈ 0.23903125, and U(x^0, r0) ≈ (−0.11903125, 0.35903125)⁶ ⊂ D. Next, we verify that all conditions of Lemma 2.1 hold. In fact, by the definition of the polynomial p, we get α ≈ 0.722714177. We also have

$$0 < \frac{L_0(t_1 - t_{-1}) + L_1(t_0 - t_{-1})}{1 - (L_{-1}(2t_1 - t_0) + L t_0)} \approx 0.196138868 \le \alpha,$$

$$0 < \frac{L_2(t_2 - t_0) + L_3(t_1 - t_0)}{1 - (L_0(2t_2 - t_1) + L_1 t_1)} \approx 0.42698869 \le \alpha$$

and

$$0 < \alpha \le 1 - \frac{(L_0 + L_1)(t_1 - t_0)}{1 - (L_0 + L_1)t_0} \approx 0.734593002.$$

By now, we see that all conditions of Theorem 2.3 are satisfied, so Theorem 2.3 applies. Note that other choices of the parameter d, such as d = 5 and d = 10, can be verified to be suitable for this example as well.

References

[1] S. Amat, S. Busquier, M. Negra, Adaptive approximation of nonlinear operators, Numer. Funct. Anal. Optim. 25 (2004) 397–405.
[2] I.K. Argyros, Computational Theory of Iterative Methods, in: C.K. Chui, L. Wuytack (Eds.), Studies in Computational Mathematics 15, Elsevier Publ. Co., New York, USA, 2007.
[3] I.K. Argyros, On a two point Newton-like methods of convergent R-order two, Intern. J. Comput. Math. 82 (2005) 219–233.
[4] I.K. Argyros, A Kantorovich-type analysis for a fast iterative method for solving nonlinear equations, J. Math. Anal. Appl. 332 (2007) 97–108.
[5] I.K. Argyros, On a quadratically convergent iterative method using divided differences of order one, J. Korean Math. S.M.E. Ser. B 14 (3) (2007) 203–221.
[6] I.K. Argyros, S. Hilout, Computational Methods in Nonlinear Analysis (Efficient Algorithms, Fixed Point Theory and Applications), World Scientific, 2013.
[7] I.K. Argyros, S. Hilout, Weaker conditions for the convergence of Newton's method, J. Complex. 28 (2012) 364–387.
[8] J.A. Ezquerro, A. Grau, M. Grau-Sánchez, M.A. Hernández, On the efficiency of two variants of Kurchatov's method for solving nonlinear equations, Numer. Algor. 64 (2013) 685–698.
[9] L.V. Kantorovich, G.P. Akilov, Functional Analysis, Pergamon Press, Oxford, 1982.
[10] V.A. Kurchatov, On a method of linear interpolation for the solution of functional equations, Dokl. Akad. Nauk SSSR (Russian) 198 (3) (1971) 524–526; translation in Soviet Math. Dokl. 12 (1974) 835–838.
[11] H. Ren, Q. Wu, W. Bi, On convergence of a new secant-like method for solving nonlinear equations, Appl. Math. Comput. 217 (2010) 583–589.
[12] W.C. Rheinboldt, An adaptive continuation process for solving systems of nonlinear equations, Polish Acad. Sci. Banach Ctr. Publ. 3 (1977) 129–142.
[13] S.M. Shakhno, On a Kurchatov's method of linear interpolation for solving nonlinear equations, PAMM Proc. Appl. Math. Mech. 4 (2004) 650–651.
[14] J.F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, 1984.