The global convergence of the Polak–Ribière–Polyak conjugate gradient algorithm under inexact line search for nonconvex functions


PII: S0377-0427(18)30666-6
DOI: https://doi.org/10.1016/j.cam.2018.10.057
Reference: CAM 12004
To appear in: Journal of Computational and Applied Mathematics
Received date: 5 March 2018. Revised date: 14 October 2018.

Please cite this article as: G. Yuan, Z. Wei and Y. Yang, The global convergence of the Polak–Ribière–Polyak conjugate gradient algorithm under inexact line search for nonconvex functions, Journal of Computational and Applied Mathematics (2018), https://doi.org/10.1016/j.cam.2018.10.057.

Highlights

• A convex parabolic surface for projection is defined, and a projection PRP algorithm is proposed.
• Any unsatisfactory iteration points are projected onto this convex parabolic surface to obtain good points.
• A PRP conjugate gradient algorithm for general functions under inexact line search is obtained.
• Competitive numerical results are reported to show the good performance of the new algorithm.


The global convergence of the Polak–Ribière–Polyak conjugate gradient algorithm under inexact line search for nonconvex functions

Gonglin Yuan (corresponding author), Zengxin Wei, Yuning Yang
College of Mathematics and Information Science, Guangxi University, Nanning, Guangxi, P.R. China. Tel: +86-07713232055. E-mails: [email protected], [email protected], [email protected].
This work was supported by the National Natural Science Foundation of China (Grant No. 11661009), the Guangxi Natural Science Fund for Distinguished Young Scholars (No. 2015GXNSFGA139001), and the Guangxi Natural Science Key Fund (No. 2017GXNSFDA198046).

Abstract. Powell (Lecture Notes in Math. 1066 (1984)) and Dai (SIAM J. Optim. 13 (2003)) each constructed a counterexample showing that the Polak–Ribière–Polyak (PRP) conjugate gradient algorithm can fail to converge globally for nonconvex functions even when an exact line search is used, which implies the same failure under the weak Wolfe–Powell (WWP) inexact line search. Does another inexact line search technique exist that can ensure global convergence for nonconvex functions? This paper gives a positive answer to this question and proposes a modified WWP (MWWP) inexact line search technique. An algorithm using the MWWP technique is presented: the next point $x_{k+1}$ generated by the PRP formula is accepted if a positivity condition holds; otherwise, $x_{k+1}$ is defined by projection onto a parabolic surface. The proposed PRP conjugate gradient algorithm is shown to converge globally under this inexact line search for nonconvex functions, and its numerical performance is competitive with that of similar algorithms.

Keywords. PRP method; global convergence; inexact line search; nonconvex functions.

AMS 2010 subject classifications. 90C26.

1. Introduction

Consider the problem
$$\min\{f(x) \mid x \in \Re^n\}, \qquad (1.1)$$
where $f: \Re^n \to \Re$ and $f \in C^2$. A conjugate gradient method for (1.1) generates iterates
$$x_{k+1} = x_k + \alpha_k d_k, \quad k = 0, 1, 2, \cdots,$$
where $x_k$ is the $k$th iterate, $\alpha_k$ is the step size, and $d_k$ is the search direction, defined by
$$d_{k+1} = \begin{cases} -g_{k+1} + \beta_k d_k, & \text{if } k \ge 1, \\ -g_{k+1}, & \text{if } k = 0, \end{cases} \qquad (1.2)$$
where $g_{k+1} = \nabla f(x_{k+1})$ is the gradient of $f$ at $x_{k+1}$ and $\beta_k \in \Re$ is a scalar whose choice determines the particular CG formula (see [3, 5, 6, 11, 13, 14, 15], among others). The scalar of the well-known PRP formula [14, 15] is
$$\beta_k^{PRP} = \frac{g_{k+1}^T (g_{k+1} - g_k)}{\|g_k\|^2}, \qquad (1.3)$$
where $g_k = \nabla f(x_k)$ and $\|\cdot\|$ denotes the Euclidean norm.
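For illustration (this sketch is not part of the original manuscript, and the array names are assumptions), the PRP scalar (1.3) and the direction update (1.2) translate directly into code:

```python
import numpy as np

def prp_beta(g_new, g_old):
    # PRP scalar (1.3): beta_k = g_{k+1}^T (g_{k+1} - g_k) / ||g_k||^2
    return float(g_new @ (g_new - g_old)) / float(g_old @ g_old)

def prp_direction(g_new, g_old=None, d_old=None):
    # Direction update (1.2): steepest descent at k = 0, PRP direction for k >= 1.
    if g_old is None or d_old is None:
        return -g_new
    return -g_new + prp_beta(g_new, g_old) * d_old
```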

It has been proven that the PRP conjugate gradient formula is numerically very effective on many optimization problems but can fail to converge for general functions. The global convergence of the PRP algorithm under exact line search was shown by Polak and Ribière [15], and a similar result under the inexact Wolfe line search together with a descent assumption was established by Yuan [20] for strongly convex functions. However, Dai [2] reported an example proving that the PRP algorithm may fail to converge, even when the function $f$ is strongly convex, if the parameter $\sigma \in (0, 1)$ of the inexact Wolfe technique is sufficiently small. Furthermore, Powell [16] presented a counterexample showing the nonconvergence of the PRP method for nonconvex functions even when an exact line search is used, and a counterexample with fewer points was later proposed by Dai [1]. Powell also noted that the scalar should satisfy $\beta_k \ge 0$, which is crucial for global convergence [17, 19]. Later, Gilbert and Nocedal [7] proposed the modified PRP formula $\beta_k^{PRP+} = \max\{0, \beta_k^{PRP}\}$, which is globally convergent for general functions under the WWP line search and the assumption of a sufficient descent condition. Many modified conjugate gradient methods (see [9, 10, 18, 21, 22, 23, 25, 26], among others) have been proposed to achieve global convergence and good numerical performance based on the PRP algorithm. Grippo and Lucidi [8] designed a suitable line search technique that finds $\alpha_k$ with

$$\alpha_k = \max\left\{\rho^i \frac{\mu_1 |g_k^T d_k|}{\|d_k\|^2} : i = 0, 1, 2, \cdots\right\}$$
such that
$$f(x_{k+1}) \le f(x_k) - \mu_2 \alpha_k^2 \|d_k\|^2 \qquad (1.4)$$
and
$$-\mu_3 \|g_{k+1}\|^2 \le g_{k+1}^T d_{k+1} \le -\mu_4 \|g_{k+1}\|^2 \qquad (1.5)$$

are satisfied, where $\rho \in (0, 1)$, $\mu_1 > 0$, $\mu_2 > 0$, and $0 < \mu_4 < 1 < \mu_3$ are constants. This technique ensures that the PRP method converges globally for general functions. However, compared with the WWP line search technique presented below in (1.6) and (1.7), the technique given by (1.4) and (1.5) is not easy to implement, especially the inequalities in (1.5); therefore, it is not widely used in optimization algorithms. As an alternative, the WWP inexact line search technique finds $\alpha_k$ such that
$$f(x_k + \alpha_k d_k) \le f_k + \delta \alpha_k g_k^T d_k \qquad (1.6)$$
and
$$g(x_k + \alpha_k d_k)^T d_k \ge \sigma g_k^T d_k, \qquad (1.7)$$
where $\delta \in (0, 1/2)$ and $\sigma \in (\delta, 1)$. However, it has been proven that under the WWP line search, the PRP conjugate gradient algorithm and the BFGS quasi-Newton method can fail to converge globally for nonconvex functions [1]. To overcome this shortcoming, Yuan et al. [27] presented a modified WWP (MWWP) line search technique, defined by
$$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k + \alpha_k \min\left[-\delta_1 g_k^T d_k, \ \delta \frac{\alpha_k}{2} \|d_k\|^2\right] \qquad (1.8)$$
and
$$g(x_k + \alpha_k d_k)^T d_k \ge \sigma g_k^T d_k + \min\left[-\delta_1 g_k^T d_k, \ \delta \alpha_k \|d_k\|^2\right], \qquad (1.9)$$
where $\delta \in (0, 1/2)$, $\delta_1 \in (0, \delta)$, and $\sigma \in (\delta, 1)$.
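To make the two tests concrete, the following sketch (illustrative Python, not from the original manuscript; `f` and `grad` are assumed to be callables for the objective and its gradient) checks whether a trial step size satisfies (1.8) and (1.9):

```python
def mwwp_holds(f, grad, x, d, alpha, delta=0.2, delta1=0.15, sigma=0.85):
    # True iff the trial step size alpha satisfies the MWWP conditions (1.8)-(1.9).
    g = grad(x)
    gd = float(g @ d)        # g_k^T d_k (negative for a descent direction)
    dd = float(d @ d)        # ||d_k||^2
    decrease_ok = f(x + alpha * d) <= (
        f(x) + delta * alpha * gd
        + alpha * min(-delta1 * gd, 0.5 * delta * alpha * dd))   # (1.8)
    curvature_ok = float(grad(x + alpha * d) @ d) >= (
        sigma * gd + min(-delta1 * gd, delta * alpha * dd))      # (1.9)
    return decrease_ok and curvature_ok
```

In the limit $\delta_1 \to 0$, both min terms vanish (the minimum of zero and a positive quantity is zero), and the check reduces to the plain WWP conditions (1.6) and (1.7).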

Further applications can be found in [12, 24]. The global convergence of the PRP method for nonconvex functions under the MWWP technique had been established only for the case $-\delta_1 g_k^T d_k \ge \delta \frac{\alpha_k}{2} \|d_k\|^2$; it was known to fail in the case $-\delta_1 g_k^T d_k < \delta \frac{\alpha_k}{2} \|d_k\|^2$, since this case degenerates to the normal WWP line search. This paper presents a further study of the global convergence of the PRP algorithm for general functions under the MWWP technique that covers both cases. The proposed PRP algorithm, which uses the MWWP line search technique, has the following attributes:
♠ A parabolic surface is introduced for use as a projection surface.
♠ The PRP search direction is used at every iteration.
♠ If $-\delta_1 g_k^T d_k \ge \delta \frac{\alpha_k}{2} \|d_k\|^2$ holds, the next point $x_{k+1}$ is defined as usual.
♠ Otherwise, if $-\delta_1 g_k^T d_k < \delta \frac{\alpha_k}{2} \|d_k\|^2$, projection onto the given surface is used to obtain the next point $x_{k+1}$.
♠ The proposed PRP algorithm converges globally for nonconvex functions under the inexact MWWP line search technique.
♠ The numerical results for the proposed algorithm are competitive, as reported in this paper.

The next section introduces the motivation for and the details of the new algorithm. Section 3 presents the properties of the proposed algorithm, and Section 4 reports the numerical results. Conclusions are given in the last section.

2. Motivation and Algorithm

It has been proven that the PRP algorithm enjoys ideal global convergence under exact or inexact line search techniques when the objective function is uniformly convex, since such functions have many advantageous properties. Might it be possible to exploit some of these advantages in the PRP conjugate gradient algorithm to achieve global convergence? This question motivates us to construct a convex parabolic surface to serve as a projection surface: any unsatisfactory iteration point generated by the PRP algorithm is projected onto this surface to overcome the failure to converge. Let the next point generated by the normal PRP algorithm be
$$\upsilon_k = x_k + \alpha_k d_k.$$
We define a parabolic surface by
$$\{x \mid g(\upsilon_k)^T (\upsilon_k - x) + \zeta \|\upsilon_k - x\|^2 = 0\}, \qquad (2.1)$$
where $\zeta > 0$ is a positive constant. It is not difficult to see that $P_k = g(\upsilon_k)^T (\upsilon_k - x) + \zeta \|\upsilon_k - x\|^2$ can be regarded as the first- and second-order terms of the expansion of a uniformly convex function around $\upsilon_k$ whose Hessian is a diagonal matrix with eigenvalue $2\zeta$. For a point $\chi_k \in \Re^n$, a simple projection onto the parabolic surface (2.1) can be defined by
$$\chi_{k+1} = \chi_k - \frac{g(\upsilon_k)^T (\upsilon_k - \chi_k) + \zeta \|\upsilon_k - \chi_k\|^2}{\|g(\chi_k)\|}\, g(\chi_k). \qquad (2.2)$$
Then, following definition (2.2), we obtain the next point $x_{k+1}$ by projecting $x_k$ onto (2.1):
$$x_{k+1} = x_k - \frac{g(\upsilon_k)^T (\upsilon_k - x_k) + \zeta \|\upsilon_k - x_k\|^2}{\|g(x_k)\|}\, g(x_k). \qquad (2.3)$$
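As a concrete illustration of the projection step, under the reconstruction of (2.3) above (a minimal sketch; the helper name and arguments are assumptions, with `g_v` and `g_x` denoting $g(\upsilon_k)$ and $g(x_k)$):

```python
import numpy as np

def project_step(x_k, v_k, g_v, g_x, zeta):
    # Projection (2.3) of x_k onto the parabolic surface (2.1) built at v_k:
    # x_{k+1} = x_k - [g(v_k)^T (v_k - x_k) + zeta*||v_k - x_k||^2] / ||g(x_k)|| * g(x_k)
    diff = v_k - x_k
    num = float(g_v @ diff) + zeta * float(diff @ diff)
    return x_k - num / np.linalg.norm(g_x) * g_x
```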

The proposed algorithm is described below.

Algorithm 1: Projection PRP Algorithm (P-PRP-A)

Step 0: Start with an initial point $x_0 \in \Re^n$; positive constants $\epsilon \in (0, 1)$, $\delta \in (0, 1/2)$, $\delta_1 \in (0, \delta)$, $\sigma \in (\delta, 1)$, and $\zeta \in (0, \frac{\delta\sigma}{\delta_1})$; set $d_0 = -g_0$ and $k = 0$.
Step 1: Check whether $\|g_k\| \le \epsilon$ holds; if so, the algorithm stops.
Step 2: Find $\alpha_k$ using the MWWP technique described by (1.8) and (1.9).
Step 3: Set $\upsilon_k = x_k + \alpha_k d_k$.
Step 4: If $-\delta_1 g_k^T d_k \ge \delta \alpha_k \|d_k\|^2$ holds, let $x_{k+1} = \upsilon_k$ and go to Step 6.
Step 5: Otherwise (if $-\delta_1 g_k^T d_k < \delta \alpha_k \|d_k\|^2$), set $x_{k+1}$ as defined by (2.3).
Step 6: Check whether $\|g_{k+1}\| \le \epsilon$ holds; if so, the algorithm stops.
Step 7: Compute the search direction
$$d_{k+1} = \begin{cases} -g_{k+1} + \beta_k^{PRP} d_k, & \text{if } k \ge 1, \\ -g_{k+1}, & \text{if } k = 0. \end{cases} \qquad (2.4)$$
Step 8: Let $k = k + 1$, and go to Step 2.
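The whole method can be sketched as follows. This is a minimal illustration rather than the authors' MATLAB implementation: `_mwwp_search` is a crude Wolfe-style bracketing stand-in for a careful MWWP line search, and the default $\zeta = \sigma\delta/(2\delta_1)$ mirrors the choice reported in Section 4.

```python
import numpy as np

def p_prp_a(f, grad, x0, delta=0.2, delta1=0.15, sigma=0.85,
            zeta=None, eps=1e-6, max_iter=1000):
    if zeta is None:
        zeta = sigma * delta / (2.0 * delta1)       # Section 4 choice, < sigma*delta/delta1
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                          # Step 0
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:                # Steps 1/6: stopping test
            break
        alpha = _mwwp_search(f, grad, x, d,
                             delta, delta1, sigma)  # Step 2
        v = x + alpha * d                           # Step 3
        if -delta1 * float(g @ d) >= delta * alpha * float(d @ d):
            x_new = v                               # Step 4: accept the PRP point
        else:                                       # Step 5: project x_k via (2.3)
            gv, diff = grad(v), v - x
            num = float(gv @ diff) + zeta * float(diff @ diff)
            x_new = x - num / np.linalg.norm(g) * g
        g_new = grad(x_new)
        beta = float(g_new @ (g_new - g)) / float(g @ g)   # Step 7: PRP direction (2.4)
        d = -g_new + beta * d
        x, g = x_new, g_new                         # Step 8
    return x

def _mwwp_search(f, grad, x, d, delta=0.2, delta1=0.15, sigma=0.85,
                 alpha=1.0, max_tries=30):
    # Bracketing loop: shrink alpha while (1.8) fails, grow it while (1.9) fails.
    lo, hi = 0.0, np.inf
    fx, gd, dd = f(x), float(grad(x) @ d), float(d @ d)
    for _ in range(max_tries):
        if f(x + alpha * d) > fx + delta * alpha * gd + alpha * min(
                -delta1 * gd, 0.5 * delta * alpha * dd):
            hi, alpha = alpha, 0.5 * (lo + alpha)          # (1.8) fails: shrink
        elif float(grad(x + alpha * d) @ d) < sigma * gd + min(
                -delta1 * gd, delta * alpha * dd):
            lo = alpha                                     # (1.9) fails: grow
            alpha = 2.0 * alpha if hi == np.inf else 0.5 * (alpha + hi)
        else:
            break
    return alpha
```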


Remark A. (i) We distinguish Case i, defined by $-\delta_1 g_k^T d_k \ge \delta \alpha_k \|d_k\|^2$, and Case ii, defined by $-\delta_1 g_k^T d_k < \delta \alpha_k \|d_k\|^2$, corresponding to Step 4 and Step 5, respectively. For Case i, the global convergence of the PRP algorithm for nonconvex functions has been established by Yuan et al. [27]. (ii) In Case ii, it is easy to see that the MWWP line search technique degenerates to the normal WWP line search technique.

3. Convergence Analysis

This section proves the global convergence of the projection PRP algorithm. The proof requires the following assumptions.

Assumption A. (i) The level set $L_0 = \{x \mid f(x) \le f(x_0)\}$ is bounded, and the function $f$ is twice continuously differentiable and bounded below.
(ii) The gradient $g(x) = \nabla f(x)$ is Lipschitz continuous; namely, there exists a positive constant $L$ such that
$$\|g(x) - g(y)\| \le L \|x - y\|, \quad x, y \in \Re^n. \qquad (3.5)$$
(iii) The relations
$$d_k^T g_k < 0 \quad \text{and} \quad g_{k+1}^T d_k \le -\sigma_1 g_k^T d_k \qquad (3.6)$$
hold for all $k$, where $\sigma_1 \le \sigma$.

Remark B. (i) Regarding relation (3.6) in Assumption A(iii), even more stringent conditions are adopted in many papers (see [7, 20], among others) to ensure the global convergence of conjugate gradient methods. The inequality $d_k^T g_k < 0$ is a standard assumption (see [7, 17], among others); it ensures that the MWWP line search technique is reasonable and that Algorithm 1 is well defined. If the condition $g_{k+1}^T d_k \le -\sigma_1 g_k^T d_k$ holds, the WWP line search technique generates the strong Wolfe condition.
(ii) Assumption A(i) implies that there exist positive constants $X_s$ and $G_s$ such that
$$\|x\| \le X_s, \quad x \in L_0, \qquad (3.7)$$
and
$$\|g(x)\| \le G_s, \quad x \in L_0. \qquad (3.8)$$

In a manner similar to [27], it is not difficult to obtain the following two lemmas; their proofs are therefore omitted.

Lemma 3.1 Suppose that Assumption A holds and that the sequence $\{x_k, d_k, \alpha_k, g_k\}$ is generated by P-PRP-A. Then, we have either
$$\sum_{k=0}^{\infty} \frac{\|g_k\|^4}{\|d_k\|^2} < \infty$$
or
$$\liminf_{k \to \infty} \|g_k\| = 0.$$

Lemma 3.2 Suppose that Assumption A holds and that the sequence $\{x_k, d_k, \alpha_k, g_k\}$ is generated by P-PRP-A. If the relation
$$\sum_{k=0}^{\infty} \frac{1}{\|d_k\|^2} = \infty$$
holds, then
$$\liminf_{k \to \infty} \|g_k\| = 0$$
is true.


Now, we prove the global convergence of P-PRP-A for general functions.

Theorem 3.1 Suppose that the conditions of Lemma 3.1 hold. Then, we have
$$\liminf_{k \to \infty} \|g_k\| = 0.$$

Proof. This theorem will be proven by contradiction. Suppose that the theorem does not hold, namely, that there exists a positive constant $\varepsilon^*$ satisfying
$$\|g_k\| \ge \varepsilon^*, \quad \forall k \ge 0. \qquad (3.9)$$

By Step 2 of P-PRP-A, (1.8), and Assumption A, we have
$$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k + \alpha_k \min\left[-\delta_1 g_k^T d_k, \ \delta \frac{\alpha_k}{2} \|d_k\|^2\right] \le f(x_k) + \alpha_k (\delta - \delta_1) g_k^T d_k.$$
Then, we obtain $-\alpha_k (\delta - \delta_1) g_k^T d_k \le f(x_k) - f(x_{k+1})$, and summing these inequalities from $k = 0$ to $\infty$ yields
$$(\delta - \delta_1) \sum_{k=0}^{\infty} [-\alpha_k g_k^T d_k] < \infty. \qquad (3.10)$$

Now, we discuss the direction computation (2.4) in the following two cases.

Case i: $-\delta_1 g_k^T d_k \ge \delta \alpha_k \|d_k\|^2$. By Step 4 of Algorithm 1, we have $x_{k+1} = x_k + \alpha_k d_k$ and
$$\|d_{k+1}\| \le \|g_{k+1}\| + |\beta_k^{PRP}| \|d_k\| \le \|g_{k+1}\| + \frac{\|g_{k+1}\| \|g_{k+1} - g_k\|}{\|g_k\|^2} \|d_k\| \le G_s + \frac{G_s L}{(\varepsilon^*)^2} \|s_k\| \|d_k\|, \qquad (3.11)$$
where $s_k = x_{k+1} - x_k = \alpha_k d_k$ and the last inequality follows from (3.9), (3.8), and (3.5). Combining (3.10) with the definition of this case, namely $-\delta_1 g_k^T d_k \ge \delta \alpha_k \|d_k\|^2$, we obtain
$$\sum_{k=0}^{\infty} \delta(\delta - \delta_1) \|s_k\|^2 = \sum_{k=0}^{\infty} \alpha_k \big((\delta - \delta_1) \delta \alpha_k \|d_k\|^2\big) \le \sum_{k=0}^{\infty} [-\delta_1 (\delta - \delta_1) \alpha_k g_k^T d_k] < \infty.$$
Thus, $\|s_k\| \to 0$ as $k \to \infty$, which implies that there exist an integer $k_1 \ge 0$ and a constant $s_1 \in (0, 1)$ satisfying
$$\frac{G_s L}{(\varepsilon^*)^2} \|s_k\| \le s_1, \quad \forall k \ge k_1. \qquad (3.12)$$

Case ii: $-\delta_1 g_k^T d_k < \delta \alpha_k \|d_k\|^2$. By Step 5 of Algorithm 1, we have
$$x_{k+1} = x_k - \frac{g(\upsilon_k)^T (\upsilon_k - x_k) + \zeta \|\upsilon_k - x_k\|^2}{\|g(x_k)\|}\, g(x_k).$$

Similar to (3.11), we obtain
$$\|d_{k+1}\| \le \|g_{k+1}\| + |\beta_k^{PRP}| \|d_k\| \le \|g_{k+1}\| + \frac{\|g_{k+1}\| \left\| g\!\left[x_k - \frac{g(\upsilon_k)^T (\upsilon_k - x_k) + \zeta \|\upsilon_k - x_k\|^2}{\|g(x_k)\|}\, g(x_k)\right] - g(x_k) \right\|}{\|g_k\|^2} \|d_k\| \le G_s + \frac{G_s L}{(\varepsilon^*)^2} \left\| \frac{g(\upsilon_k)^T (\upsilon_k - x_k) + \zeta \|\upsilon_k - x_k\|^2}{\|g(x_k)\|}\, g(x_k) \right\| \|d_k\| \le G_s + \frac{G_s L}{(\varepsilon^*)^2} \|s_k\| \|d_k\|, \qquad (3.13)$$
where $\|s_k\| = \|x_{k+1} - x_k\| = \left\| \frac{\alpha_k g(x_k + \alpha_k d_k)^T d_k + \zeta \alpha_k^2 \|d_k\|^2}{\|g_k\|}\, g_k \right\|$. By the definition of Case ii (namely, $-\delta_1 g_k^T d_k < \delta \alpha_k \|d_k\|^2$) and (1.9), we obtain
$$g(\upsilon_k)^T (\upsilon_k - x_k) + \zeta \|\upsilon_k - x_k\|^2 = \alpha_k g(x_k + \alpha_k d_k)^T d_k + \zeta \alpha_k^2 \|d_k\|^2 \ge \alpha_k \sigma g_k^T d_k + \alpha_k \min[-\delta_1 g_k^T d_k, \ \delta \alpha_k \|d_k\|^2] + \zeta \alpha_k^2 \|d_k\|^2 \ge \alpha_k \sigma g_k^T d_k - \frac{\delta_1}{\delta} \zeta \alpha_k g_k^T d_k = \frac{\sigma\delta - \zeta\delta_1}{\delta}\, \alpha_k g_k^T d_k > 0, \qquad (3.14)$$
where $\sigma\delta - \zeta\delta_1 > 0$ follows from $\zeta \in (0, \sigma\delta/\delta_1)$. Since $\delta \alpha_k \|d_k\|^2 > -\delta_1 g_k^T d_k > 0$, the relation $\delta \alpha_k \|d_k\|^2 = O(-\delta_1 g_k^T d_k)$ holds, which implies that there exists a constant $\delta^* > 0$ such that $\alpha_k \|d_k\|^2 \le -\delta^* g_k^T d_k$. So, by (3.6), we have
$$g(\upsilon_k)^T (\upsilon_k - x_k) + \zeta \|\upsilon_k - x_k\|^2 = \alpha_k g(x_k + \alpha_k d_k)^T d_k + \zeta \alpha_k^2 \|d_k\|^2 \le -\alpha_k \sigma_1 g_k^T d_k + \zeta \alpha_k^2 \|d_k\|^2 \le (\sigma_1 + \delta^* \zeta)(-\alpha_k g_k^T d_k). \qquad (3.15)$$

By the definition of $x_{k+1}$ in Step 5 of Algorithm 1, (3.15), and (3.10), we have
$$\sum_{k=0}^{\infty} \|s_k\| = \sum_{k=0}^{\infty} \|x_{k+1} - x_k\| = \sum_{k=0}^{\infty} \left\| \frac{g(\upsilon_k)^T (\upsilon_k - x_k) + \zeta \|\upsilon_k - x_k\|^2}{\|g(x_k)\|}\, g(x_k) \right\| \le \sum_{k=0}^{\infty} [-(\sigma_1 + \delta^* \zeta) \alpha_k g_k^T d_k] \le \frac{\sigma_1 + \delta^* \zeta}{\delta - \delta_1} \sum_{k=0}^{\infty} (\delta - \delta_1) [-\alpha_k g_k^T d_k] < \infty;$$
therefore, $\|s_k\| \to 0$ as $k \to \infty$, which means that there exist an integer $k_2 \ge 0$ and a constant $s_2 \in (0, 1)$ satisfying
$$\frac{G_s L}{(\varepsilon^*)^2} \|s_k\| \le s_2, \quad \forall k \ge k_2. \qquad (3.16)$$

By (3.12) and (3.16), there exist an integer $k_0 = \max\{k_1, k_2\}$ and a constant $s_0 = \max\{s_1, s_2\} \in (0, 1)$ such that
$$\frac{G_s L}{(\varepsilon^*)^2} \|s_k\| \le s_0, \quad \forall k \ge k_0.$$
Thus, combining this with (3.11) and (3.13), for all $k > k_0$ we obtain
$$\|d_{k+1}\| \le G_s + s_0 \|d_k\| \le G_s (1 + s_0 + s_0^2 + \cdots + s_0^{k-k_0-1}) + s_0^{k-k_0} \|d_{k_0}\| \le \frac{G_s}{1 - s_0} + \|d_{k_0}\|.$$
Let $D_s = \max\{\|d_1\|, \|d_2\|, \cdots, \|d_{k_0}\|, \frac{G_s}{1-s_0} + \|d_{k_0}\|\}$; then we obtain
$$\|d_k\| \le D_s, \quad \forall k \ge 1.$$
Hence $\sum_{k=0}^{\infty} 1/\|d_k\|^2 = \infty$, and Lemma 3.2 yields $\liminf_{k \to \infty} \|g_k\| = 0$, which contradicts (3.9). Thus, the proof is complete.

Remark. The proof does not depend on which case occurs at each iteration (Case i, Case ii, or an alternation of the two). The algorithm is an iterative numerical method, updating its information via $k := k + 1$ (for all $k = 0, 1, 2, \cdots$) until a point satisfying $\|g(x_k)\| \le \epsilon$ is found. In more detail, the information at iteration $k - 1$ may be generated by either Case i or Case ii; at iteration $k$, this information is used directly, and it does not matter which case produced it, because the bound $\|d_k\| \le D_s$ holds for all $k$ in either case, and this bound is exactly what the proof requires.

4. Numerical Results

This section reports numerical results obtained using the proposed algorithm and two similar conjugate gradient algorithms. The normal PRP algorithm under the MWWP line search technique is called Algorithm 2, and the PRP algorithm under the WWP line search technique is called Algorithm 3.

Tested problems: The problems, obtained from [28], and their initial points are listed in Tables 1 and 2.

Dimensionality: Large-scale instances with 3000, 9000, 15000, and 27000 variables are considered.

Parameters: In all three algorithms, $\delta = 0.2$, $\delta_1 = 0.15$, $\sigma = 0.85$, and $\zeta = \frac{\sigma\delta}{2\delta_1}$.

Stop rule (Himmelblau rule): If $|f(x_k)| > \epsilon_1$, set $Ter1 = \frac{|f(x_k) - f(x_{k+1})|}{|f(x_k)|}$; otherwise, set $Ter1 = |f(x_k) - f(x_{k+1})|$. The program is stopped if $\|g(x)\| < \epsilon$ or $Ter1 < \epsilon_2$ holds, where $\epsilon_1 = \epsilon_2 = 10^{-5}$ and $\epsilon = 10^{-6}$.

Experiment: The programs were written in MATLAB R2016a and run on a PC with an Intel Xeon(R) E5507 CPU @ 2.27 GHz, 6.00 GB of RAM, and the Windows 7 operating system.

Other settings: The line search for the step size $\alpha_k$ was terminated once the number of trial steps exceeded 10, and a run was stopped once the number of iterations exceeded 1000.

The numerical results are presented in Tables 3–5, with the following notation. Nr: tested problem number. Dim: number of variables. Ni: number of iterations. Nfg: total number of function and gradient evaluations. Time: CPU time in seconds.

We used the performance-profile tool of Dolan and Moré [4] to compare the three algorithms; Figures 1–3 show the profiles for Ni, Nfg, and Time, respectively. It is not difficult to see from these figures that Algorithm 1 exhibits the best performance among the three methods, whereas Algorithm 2 achieves superior performance and robustness compared with Algorithm 3. These findings yield two conclusions: (1) the MWWP line search technique is competitive with the normal WWP line search technique on these test problems, and (2) the PRP method with the projection technique is more effective than the normal method without it.
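For reference, a performance profile in the sense of Dolan and Moré [4] can be computed as in the following sketch. This is illustrative Python rather than the MATLAB script actually used for Figures 1–3; the cost matrix `T` and the encoding of failed runs as `np.inf` are assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

def performance_profile(T, labels, tau_max=20.0):
    # T: (n_problems, n_solvers) array of positive costs (Ni, Nfg, or Time);
    # failed runs encoded as np.inf. Assumes every problem is solved by at
    # least one solver.
    ratios = T / T.min(axis=1, keepdims=True)      # r_{p,s} >= 1
    taus = np.linspace(1.0, tau_max, 400)
    for s, label in enumerate(labels):
        # rho_s(tau): fraction of problems solver s solves within a factor tau
        rho = [(ratios[:, s] <= tau).mean() for tau in taus]
        plt.plot(taus, rho, label=label)
    plt.xlabel("t")
    plt.ylabel("Pp: r(p,s) <= t")
    plt.legend()
    plt.show()

# Hypothetical usage with a tiny cost matrix:
# performance_profile(np.array([[10., 12., 15.], [5., np.inf, 6.]]),
#                     ["Algorithm 1", "Algorithm 2", "Algorithm 3"])
```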


Table 1: The test problems

Nr | Problem | x0
1 | Extended Freudenstein and Roth Function | [0.5, -2, ..., 0.5, -2]
2 | Extended Trigonometric Function | [0.2, 0.2, ..., 0.2]
3 | Extended Rosenbrock Function | [-1.2, 1, -1.2, 1, ..., -1.2, 1]
4 | Extended White and Holst Function | [-1.2, 1, -1.2, 1, ..., -1.2, 1]
5 | Extended Penalty Function | [1, 2, 3, ..., n]
6 | Raydan 1 Function | [1, 1, ..., 1]
7 | Raydan 2 Function | [1, 1, ..., 1]
8 | Diagonal 1 Function | [1/n, 1/n, ..., 1/n]
9 | Diagonal 2 Function | [1/1, 1/2, ..., 1/n]
10 | Diagonal 3 Function | [1, 1, ..., 1]
11 | Hager Function | [1, 1, ..., 1]
12 | Generalized Tridiagonal 1 Function | [2, 2, ..., 2]
13 | Extended Tridiagonal 1 Function | [2, 2, ..., 2]
14 | Extended Three Exponential Terms Function | [0.1, 0.1, ..., 0.1]
15 | Generalized Tridiagonal 2 Function | [-1, -1, ..., -1, -1]
16 | Diagonal 4 Function | [1, 1, ..., 1, 1]
17 | Diagonal 5 Function | [1.1, 1.1, ..., 1.1]
18 | Extended Himmelblau Function | [1, 1, ..., 1]
19 | Generalized PSC1 Function | [3, 0.1, ..., 3, 0.1]
20 | Extended PSC1 Function | [3, 0.1, ..., 3, 0.1]
21 | Extended Powell Function | [3, -1, 0, 1, ...]
22 | Extended Block Diagonal BD1 Function | [0.1, 0.1, ..., 0.1]
23 | Extended Maratos Function | [1.1, 0.1, ..., 1.1, 0.1]
24 | Extended Cliff Function | [0, -1, ..., 0, -1]
25 | Extended Wood Function | [-3, -1, -3, -1, ..., -3, -1]
26 | Extended Hiebert Function | [0, 0, ..., 0]
27 | Extended Quadratic Penalty QP1 Function | [1, 1, ..., 1]
28 | Extended Quadratic Penalty QP2 Function | [1, 1, ..., 1]
29 | A Quadratic Function QF2 Function | [0.5, 0.5, ..., 0.5]
30 | Extended EP1 Function | [1.5, 1.5, ..., 1.5]
31 | Extended Tridiagonal-2 Function | [1, 1, ..., 1]
32 | BDQRTIC Function (CUTE) | [1, 1, ..., 1]


[Figure 1: Performance profiles of the three methods (Ni). Axes: t (horizontal) vs. Pp: r(p,s) <= t (vertical); one curve for each of Algorithms 1–3.]

Table 2: The test problems (continued)

Nr | Problem | x0
33 | ARWHEAD Function (CUTE) | [1, 1, ..., 1]
34 | NONDIA (Shanno-78) Function (CUTE) | [-1, -1, ..., -1]
35 | NONDQUAR Function (CUTE) | [1, -1, 1, -1, ..., 1, -1]
36 | DQDRTIC Function (CUTEr) | [3, 3, 3, ..., 3]
37 | EG2 Function (CUTE) | [1, 1, 1, ..., 1]
38 | DIXMAANA Function (CUTE) | [2, 2, 2, ..., 2]
39 | DIXMAANB Function (CUTE) | [2, 2, 2, ..., 2]
40 | DIXMAANC Function (CUTE) | [2, 2, 2, ..., 2]
41 | DIXMAANE Function (CUTE) | [2, 2, 2, ..., 2]
42 | Partial Perturbed Quadratic Function | [0.5, 0.5, ..., 0.5]
43 | Broyden Tridiagonal Function | [-1, -1, ..., -1]
44 | EDENSCH Function (CUTE) | [0, 0, ..., 0]
45 | VARDIM Function (CUTEr) | [1-1/n, 1-2/n, ..., 1-n/n]
46 | STAIRCASE S1 Function | [1, 1, ..., 1]
47 | LIARWHD Function (CUTEr) | [4, 4, ..., 4]
48 | DIAGONAL 6 Function | [1, 1, ..., 1]
49 | DIXMAANF Function (CUTE) | [2, 2, 2, ..., 2]
50 | DIXMAANG Function (CUTE) | [2, 2, 2, ..., 2]
51 | DIXMAANI Function (CUTE) | [2, 2, 2, ..., 2]
52 | DIXMAANJ Function (CUTE) | [2, 2, 2, ..., 2]
53 | DIXMAANK Function (CUTE) | [2, 2, 2, ..., 2]
54 | DIXMAANL Function (CUTE) | [2, 2, 2, ..., 2]
55 | DIXMAAND Function (CUTE) | [2, 2, 2, ..., 2]
56 | ENGVAL1 Function (CUTE) | [2, 2, 2, ..., 2]
57 | FLETCHCR Function (CUTEr) | [0, 0, ..., 0]
58 | COSINE Function (CUTE) | [1, 1, ..., 1]
59 | Extended DENSCHNB Function (CUTE) | [1, 1, ..., 1]
60 | DENSCHNF Function (CUTEr) | [2, 0, 2, 0, ..., 2, 0]
61 | SINQUAD Function (CUTE) | [0.1, 0.1, ..., 0.1]
62 | BIGGSB1 Function (CUTE) | [0, 0, ..., 0]


Table 3: Numerical results (problems Nr 1–22)

Layout: each problem Nr is run at Dim = 3000, 9000, 15000, and 27000, in that order, so every quantity below lists four consecutive values per problem; the "Nfg Time" lines list Nfg and Time as consecutive pairs in the same order.

Algorithm 1 Ni: 11 11 11 11 1000 62 60 63 11 19 24 38 30 22 22 23 85 101 101 108 21 21 21 21 7 8 8 8 2 2 2 2 3 2 2 2 14 15 15 15 26 2 3 2 5 5 4 4 116 31 10 123 21 9 9 9 45 40 33 35 7 33 42 46 3 16 13 14 3 3 3 3 19 19 18 19 5 5 5 5 104 1000 388 264 10 12 9 9

Algorithm 1 Nfg Time 39 0.374402 39 0.811205 39 1.310408 39 2.340015 2022 11.715675 145 2.620817 140 3.541223 148 6.770443 40 0.0624 69 0.171601 69 0.312002 141 0.873606 92 0.499203 59 0.936006 58 1.59121 66 2.995219 195 0.249602 228 0.780005 231 1.372809 247 2.527216 50 0.093601 50 0.312002 50 0.546003 50 1.045207 25 0 27 0.124801 27 0.187201 27 0.312002 10 0.0624 10 0.0624 10 0.0624 10 0.0624 16 0 9 0 9 0 9 0.0624 36 0.124801 38 0.312002 38 0.483603 38 0.873606 130 0.421203 9 0.0624 15 0.249602 9 0.187201 14 0.124801 14 0.374402 11 0.499203 11 0.811205 288 3.026419 101 2.464816 35 1.232408 398 27.612177 44 0.124801 23 0.187201 23 0.249602 23 0.436803 102 0.982806 98 2.870418 78 3.603623 82 6.786043 27 0 162 0.312002 225 0.592804 303 1.52881 13 0.0624 39 0.499203 39 0.748805 46 1.56001 13 0.0624 12 0.0624 12 0.109201 12 0.109201 57 0.171601 65 0.358802 60 0.624004 68 1.248008 22 0.0624 22 0.0624 22 0.187201 22 0.374402 322 1.762811 2200 40.357459 800 24.99136 548 30.732197 76 0.0624 87 0.312002 70 0.374402 70 0.561604

Algorithm 2 Ni: 11 11 11 11 56 61 60 63 22 52 53 38 32 43 63 23 85 101 101 108 21 21 21 21 5 5 5 5 2 2 2 2 8 4 4 4 14 15 15 15 26 2 3 2 5 5 4 4 30 29 22 22 21 9 9 9 45 40 33 35 3 3 3 3 3 3 3 3 34 35 36 37 21 22 23 21 5 5 5 5 110 993 394 338 27 37 27 55

Algorithm 2 Nfg Time 40 0.312002 40 0.811205 40 1.357209 40 2.371215 137 0.936006 143 2.152814 140 3.510022 148 6.19324 95 0.0624 190 0.343202 186 0.624004 149 0.748805 108 0.499203 126 1.872012 215 5.304034 63 2.917219 195 0.249602 228 0.733205 231 1.170008 247 2.340015 50 0.124801 50 0.312002 50 0.592804 50 0.873606 15 0 15 0.0624 15 0.124801 15 0.187201 9 0 9 0 9 0 9 0.0624 33 0.0624 17 0.0624 17 0.0624 17 0.124801 36 0.0624 38 0.312002 38 0.561604 38 0.982806 130 0.374402 9 0.0624 15 0.124801 9 0.187201 14 0.187201 14 0.483603 11 0.374402 11 0.811205 87 0.811205 88 2.215214 73 2.839218 66 4.71123 44 0.358802 23 0.140401 23 0.187201 23 0.358802 102 0.998406 98 2.730018 78 3.681624 82 6.801644 10 0.0624 10 0 10 0 10 0 10 0 13 0.0624 10 0.202801 11 0.312002 77 0.124801 79 0.249602 81 0.374402 83 0.670804 51 0.124801 56 0.374402 67 0.686404 61 1.138807 22 0.0468 22 0.0624 22 0.187201 22 0.312002 309 1.56001 2130 38.313846 804 24.507757 744 39.109451 163 0.187201 170 0.717605 148 0.811205 190 2.418015


Algorithm 3 Ni: 11 11 11 11 59 63 61 65 66 48 56 61 32 43 43 23 85 101 101 108 21 21 21 21 5 5 5 5 2 2 2 2 8 4 4 4 14 15 15 15 26 2 3 2 5 5 4 4 37 45 40 36 21 8 8 8 46 35 30 35 3 3 3 3 3 3 3 3 39 41 43 44 25 32 26 23 7 7 7 7 144 1000 665 432 24 37 31 33

Algorithm 3 Nfg Time 40 0.327602 40 0.998406 40 1.248008 40 2.340015 134 0.686404 147 2.277615 142 3.634823 150 6.598842 267 0.187201 164 0.312002 189 0.733205 221 1.123207 108 0.499203 126 1.872012 161 3.588023 63 2.979619 195 0.187201 228 0.780005 231 1.170007 247 2.823618 50 0.187201 50 0.312002 50 0.546003 50 0.873606 15 0.0624 15 0.0624 15 0.0624 15 0.109201 9 0.0624 9 0.0624 9 0.0624 9 0.0624 32 0.0624 17 0.0624 17 0.0624 17 0.234001 36 0.124801 38 0.312002 38 0.499203 38 0.904806 130 0.374402 9 0.0624 15 0.124801 9 0.124801 14 0.187201 14 0.592804 11 0.670804 11 0.873606 98 0.998406 116 3.19802 102 4.539629 95 7.332047 44 0.187201 21 0.124801 21 0.187201 21 0.312002 104 1.060807 85 2.371215 73 3.276021 82 6.864044 10 0.0624 10 0 10 0.0468 10 0.0624 10 0.0624 10 0.124801 10 0.124801 11 0.249602 86 0.171601 90 0.249602 94 0.436803 96 0.686404 61 0.187201 75 0.499203 63 0.670804 55 1.154407 36 0.187201 36 0.124801 36 0.249602 36 0.436803 429 2.152814 2007 39.530653 1407 42.931475 932 49.545918 144 0.187201 170 0.624004 167 0.982806 168 1.669211

Table 4: Numerical results (problems Nr 23–44)

Layout: as in Table 3 (Dim = 3000, 9000, 15000, 27000 per problem; "Nfg Time" lines list Nfg and Time as consecutive pairs).

Algorithm 1 Ni: 35 8 9 5 79 68 69 70 23 23 26 28 4 4 4 4 27 30 31 33 47 32 45 32 3 2 2 2 3 7 9 12 14 21 6 4 7 7 7 7 5 4 4 4 38 41 43 35 69 19 14 9 195 195 194 194 30 5 15 11 31 33 34 35 50 54 55 57 21 22 22 23 13 71 14 9 28 34 46 89 38 95 98 49 7 7 7 7

Algorithm 1 Nfg Time 82 0.0624 26 0.124801 41 0.124801 20 0.0624 180 0.436803 160 1.154407 162 1.700411 164 3.385222 62 0.124801 63 0.187201 73 0.312002 72 0.546003 17 0 17 0 17 0.0624 17 0.124801 62 0.124801 70 0.312002 72 0.608404 76 0.920406 98 0.358802 70 0.468003 96 1.232408 89 1.856412 7 0 5 0.0624 5 0 5 0 10 0.0624 14 0.124801 18 0.187201 24 0.374402 28 0.0624 42 0.187201 14 0.124801 13 0.109201 27 0.124801 27 0.374402 27 0.561604 26 1.107607 15 0.0624 11 0.0624 11 0.0624 11 0.0624 76 0.124801 82 0.296402 86 0.561604 70 0.811205 262 3.432022 64 2.230814 54 2.792418 34 2.714417 395 0.483603 395 1.357209 393 2.028013 393 3.494422 133 0.795605 19 0.124801 55 1.794011 49 1.045207 66 1.170007 70 3.307221 72 5.600436 74 9.31326 104 2.090413 112 5.350834 114 9.141659 118 14.632894 48 0.748805 50 2.246414 50 3.619223 52 6.115239 56 0.624004 330 10.233666 64 3.244821 27 2.652017 76 11.980877 97 81.994126 124 284.873426 225 1701.95531 88 0.499203 205 3.588023 211 5.943638 110 5.366434 19 0.124801 22 0.436803 22 0.748805 22 1.310408

Algorithm 2 Ni: 35 11 64 5 79 68 69 70 23 23 26 28 4 4 4 4 27 30 31 33 47 32 45 33 3 2 2 2 3 7 9 12 14 21 6 4 90 10 14 15 5 4 4 4 38 41 43 35 138 239 154 254 195 195 194 194 14 6 6 6 31 33 34 35 50 54 55 57 21 22 22 23 68 84 92 134 23 34 46 89 38 95 98 49 7 7 7 7

Algorithm 2 Nfg Time 82 0.109201 40 0.0624 257 0.702004 20 0.0624 180 0.374402 160 1.045207 162 1.747211 164 3.229221 62 0.0624 63 0.187201 74 0.405603 72 0.561604 17 0 17 0 17 0 17 0.0624 62 0.0624 70 0.249602 72 0.530403 76 0.858005 98 0.312002 70 0.499203 96 1.185608 97 1.887612 7 0.0624 5 0 5 0 5 0.0624 11 0.0624 14 0.0624 18 0.124801 24 0.312002 28 0.0624 42 0.187201 14 0.109201 13 0.124801 204 1.357209 43 0.608404 56 1.310408 57 2.184014 15 0.093601 11 0.0624 11 0.0624 11 0.202801 76 0.530403 82 0.951606 86 0.639604 70 0.795605 367 4.898431 666 25.786965 441 27.580977 720 75.442084 395 0.436803 395 1.170008 393 1.903212 393 3.322821 55 0.249602 26 0.187201 29 0.187201 29 0.374402 66 1.060807 70 3.338421 72 5.584836 74 9.094858 104 1.684811 112 5.584836 114 9.126058 118 15.007296 48 0.686404 50 2.215214 50 3.728424 52 6.30244 172 2.589617 208 9.188459 238 16.660907 349 39.811455 63 10.108865 97 81.760124 124 285.013827 225 1703.109717 88 0.546003 205 3.650423 211 4.976432 110 5.194833 19 0.124801 23 0.483603 23 0.858005 23 1.357209


Algorithm 3 Ni: 35 11 18 5 79 68 69 70 23 23 26 28 4 4 4 4 27 30 31 33 47 32 45 33 3 2 2 2 3 7 9 12 14 21 6 4 90 10 14 15 5 4 4 4 38 41 43 35 1000 1000 131 9 195 195 194 194 16 6 6 6 31 33 34 35 50 54 55 57 21 22 22 23 319 472 564 690 23 34 46 89 38 51 98 49 7 7 7 7

Algorithm 3 Nfg Time 82 0.124801 40 0.0624 73 0.187201 20 0.124801 180 0.436803 160 1.731611 162 1.747211 164 3.10442 62 0.171601 63 0.187201 74 0.312002 72 0.499203 17 0.0624 17 0.0624 17 0.0624 17 0.124801 62 0.124801 70 0.249602 72 0.436803 76 0.811205 98 0.312002 70 0.499203 96 1.279208 95 1.809612 7 0.0624 5 0.0624 5 0 5 0 11 0.0624 14 0.0624 18 0.124801 24 0.312002 28 0.0624 42 0.234001 14 0.124801 13 0.0624 204 1.372809 43 0.483603 56 1.123207 57 2.168414 15 0.0624 11 0 11 0.0624 11 0.124801 76 0.124801 82 0.452403 86 0.546003 70 0.764405 2207 32.869411 2085 94.895408 323 21.87134 36 3.07322 395 0.483603 395 1.092007 393 1.653611 393 3.213621 76 0.187201 29 0.124801 29 0.187201 29 0.374402 66 1.232408 70 3.307221 72 5.678436 74 9.172859 104 1.747211 112 5.538036 114 8.876457 118 14.960496 48 0.811205 50 2.215214 50 3.634823 52 6.22444 644 11.232072 950 49.436717 1134 93.990603 1386 191.600428 63 10.124465 97 82.337328 124 285.684631 225 1708.086149 86 0.561604 127 1.934412 211 5.787637 110 5.397635 19 0.124801 23 0.436803 23 0.748805 23 1.310408

Table 5: Numerical results (problems Nr 45–62)

Layout: as in Table 3 (Dim = 3000, 9000, 15000, 27000 per problem; "Nfg Time" lines list Nfg and Time as consecutive pairs).

Algorithm 1 Ni: 131 146 154 163 11 11 11 11 3 3 3 3 13 15 16 17 56 28 1000 22 14 14 14 13 13 26 14 9 55 29 127 30 45 25 25 25 50 32 63 196 35 37 38 39 10 10 10 9 19 19 19 19 12 20 11 12 19 20 20 21 35 37 38 39 19 48 9 10 1000 995 996 995

Algorithm 1 Nfg Time 296 1.248008 332 4.321228 350 7.316447 370 13.806088 31 0.0624 30 0.124801 30 0.124801 30 0.249602 12 0.0624 12 0.0624 12 0.0624 12 0.124801 37 0.187201 41 0.499203 43 0.873606 45 1.731611 129 2.043613 84 3.338421 4316 230.990681 67 6.786043 40 0.561604 40 1.59121 38 2.480416 35 3.775224 54 0.561604 119 3.697224 60 3.12002 27 2.636417 127 1.872012 93 3.556823 352 24.242555 94 9.547261 128 1.731611 70 2.683217 74 4.66443 74 7.628449 179 2.262014 112 4.336828 240 14.242891 885 79.014507 76 1.248008 80 3.790824 82 6.364841 84 10.498867 27 0.187201 27 0.561604 27 0.904806 22 1.372809 63 0.312002 63 0.811205 64 1.357209 64 2.246414 42 0.249602 70 1.669211 42 0.936006 43 1.700411 43 0.0624 45 0.187201 45 0.249602 47 0.436803 74 0.124801 78 0.483603 80 0.686404 82 1.248008 76 0.421203 149 2.652017 36 0.811205 49 1.981213 6702 5.662836 6634 12.52688 6655 18.408118 6634 31.169

Algorithm 2 Ni: 131 146 154 163 460 222 347 285 7 3 3 3 7 7 7 7 42 41 68 49 79 36 100 36 37 38 70 115 49 41 61 49 86 59 88 95 49 99 23 42 35 37 38 39 10 10 10 9 19 19 19 19 12 22 12 12 19 20 20 21 35 37 38 39 19 98 22 17 75 75 75 75

Algorithm 2 Nfg Time 296 1.248008 332 4.024826 350 7.254046 370 13.868489 1167 1.170008 582 1.450809 903 3.790824 741 5.397635 30 0.0624 13 0.0624 13 0.0624 13 0.0624 24 0.0624 24 0.358802 24 0.436803 24 0.748805 112 1.716011 109 4.508429 180 12.55808 128 14.383292 202 2.964019 91 3.806424 256 18.189717 103 11.044871 97 1.372809 98 4.227627 176 12.620481 295 33.774217 130 1.762811 108 4.63323 159 11.076071 130 14.461293 214 3.08882 153 6.27124 220 15.506499 253 28.12698 144 1.934412 279 11.731275 72 4.586429 113 12.651681 76 1.170007 80 3.790824 82 6.31804 84 10.186865 27 0.249602 27 0.561604 27 0.982806 22 1.372809 60 0.312002 60 0.733205 60 1.185608 60 2.168414 44 0.187201 101 2.496016 45 1.029607 45 1.856412 43 0.0624 45 0.109201 45 0.296402 47 0.374402 74 0.171601 78 0.374402 80 0.624004 82 1.185608 77 0.436803 293 5.382034 91 2.184014 71 3.07322 196 0.124801 196 0.421203 196 0.686404 196 1.294808


Algorithm 3 Ni: 131 146 154 163 1000 1000 1000 1000 7 3 3 3 7 7 7 7 77 101 61 164 201 226 101 286 169 133 136 141 75 66 47 171 154 160 97 61 421 938 1000 796 35 37 38 39 10 10 9 9 19 19 19 19 32 15 15 14 19 20 20 21 35 37 38 39 19 48 22 21 1000 1000 1000 1000

Algorithm 3 Nfg Time 296 1.419609 332 4.087226 350 7.113646 370 14.02449 2012 2.059213 2012 5.694037 2012 9.32886 2012 16.723307 30 0.0624 13 0.0624 13 0.0624 13 0.0624 24 0.124801 24 0.249602 24 0.374402 24 0.889206 172 2.745618 220 10.576868 156 10.95127 346 44.070282 419 6.910844 481 23.212949 230 17.316111 615 76.752492 354 5.990438 310 14.118091 327 24.008554 345 40.451059 174 2.714417 150 6.754843 120 8.268053 360 45.630292 335 5.350834 340 15.865302 247 17.425312 157 17.300511 869 16.021303 1906 111.166313 2025 196.888862 1632 224.922242 76 1.248008 80 3.634823 82 6.458441 84 10.452067 27 0.249602 27 0.561604 22 0.748805 22 1.310408 65 0.312002 65 0.842405 65 1.294808 65 2.293215 148 1.185608 68 0.998406 67 0.624004 67 2.745618 43 0.0624 45 0.124801 45 0.187201 47 0.312002 74 0.187201 78 0.436803 80 0.686404 82 1.232408 77 0.499203 146 2.558416 91 2.293215 92 3.900025 2042 1.684811 2042 5.241634 2042 8.143252 2042 14.320892

[Figure 2: Performance profiles of the three methods (Nfg). Axes: t (horizontal) vs. Pp: r(p,s) <= t (vertical); one curve for each of Algorithms 1–3.]

[Figure 3: Performance profiles of the three methods (CPU time). Axes: t (horizontal) vs. Pp: r(p,s) <= t (vertical); one curve for each of Algorithms 1–3.]

5. Conclusions

This paper has studied the global convergence of the PRP conjugate gradient algorithm under an inexact line search technique for nonconvex functions. A new algorithm incorporating a projection technique is proposed, with the following properties: (1) the next iterate $x_{k+1}$ defined by the PRP method is accepted if a positivity condition holds; otherwise, $x_{k+1}$ is generated from a convex parabolic surface; (2) regardless of which rule defines $x_{k+1}$, the same PRP conjugate gradient search direction is used to proceed to the next iteration; and (3) the global convergence for nonconvex functions and the competitive numerical performance of the proposed algorithm indicate that the new algorithm should be advantageous in certain applications.

Acknowledgments. The authors would like to thank the editor and the referees for their valuable comments, which greatly improved this paper.

References

[1] Y. Dai, Convergence properties of the BFGS algorithm, SIAM J. Optim., 13 (2003), pp. 693-701.
[2] Y. Dai, Analysis of conjugate gradient methods, Ph.D. Thesis, Institute of Computational Mathematics and Scientific/Engineering Computing, Chinese Academy of Sciences, 1997.
[3] Y. Dai and Y. Yuan, A nonlinear conjugate gradient method with a strong global convergence property, SIAM J. Optim., 10 (2000), pp. 177-182.
[4] E. D. Dolan and J. J. Moré, Benchmarking optimization software with performance profiles, Math. Program., 91 (2002), pp. 201-213.
[5] R. Fletcher, Practical Methods of Optimization, 2nd ed., John Wiley and Sons, New York, 1987.
[6] R. Fletcher and C. M. Reeves, Function minimization by conjugate gradients, Comput. J., 7 (1964), pp. 149-154.
[7] J. C. Gilbert and J. Nocedal, Global convergence properties of conjugate gradient methods for optimization, SIAM J. Optim., 2 (1992), pp. 21-42.
[8] L. Grippo and S. Lucidi, A globally convergent version of the Polak-Ribière-Polyak conjugate gradient method, Math. Program., 78 (1997), pp. 375-391.
[9] W. Hager and H. Zhang, A new conjugate gradient method with guaranteed descent and an efficient line search, SIAM J. Optim., 16 (2005), pp. 170-192.
[10] W. Hager and H. Zhang, Algorithm 851: CG_DESCENT, a conjugate gradient method with guaranteed descent, ACM Trans. Math. Softw., 32 (2006), pp. 113-137.
[11] M. R. Hestenes and E. Stiefel, Methods of conjugate gradients for solving linear systems, J. Res. Natl. Bur. Stand., 49 (1952), pp. 409-436.
[12] X. Li, S. Wang, Z. Jin, and H. Pham, A conjugate gradient algorithm under Yuan-Wei-Lu line search technique for large-scale minimization optimization models, Math. Probl. Eng., Volume 2018, Article ID 4729318, 11 pages, 2018.
[13] Y. Liu and C. Storey, Efficient generalized conjugate gradient algorithms, part 1: Theory, J. Appl. Math. Comput., 69 (1992), pp. 17-41.
[14] B. T. Polyak, The conjugate gradient method in extreme problems, USSR Comput. Math. Math. Phys., 9 (1969), pp. 94-112.
[15] E. Polak and G. Ribière, Note sur la convergence de méthodes de directions conjuguées, Rev. Fran. Inf. Rech. Opérat., 3 (1969), pp. 35-43.
[16] M. J. D. Powell, Nonconvex minimization calculations and the conjugate gradient method, Lecture Notes in Mathematics, Vol. 1066, Springer-Verlag, Berlin, 1984, pp. 122-141.
[17] M. J. D. Powell, Convergence properties of algorithms for nonlinear optimization, SIAM Rev., 28 (1986), pp. 487-500.
[18] Z. Wei, S. Yao, and L. Liu, The convergence properties of some new conjugate gradient methods, Appl. Math. Comput., 183 (2006), pp. 1341-1350.
[19] G. Yuan, Modified nonlinear conjugate gradient methods with sufficient descent property for large-scale optimization problems, Optim. Lett., 3 (2009), pp. 11-21.
[20] Y. Yuan, Analysis on the conjugate gradient method, Optim. Methods Softw., 2 (1993), pp. 19-29.
[21] G. Yuan and X. Lu, A modified PRP conjugate gradient method, Ann. Oper. Res., 166 (2009), pp. 73-90.
[22] G. Yuan, X. Lu, and Z. Wei, A conjugate gradient method with descent direction for unconstrained optimization, J. Comput. Appl. Math., 233 (2009), pp. 519-530.
[23] G. Yuan, Z. Meng, and Y. Li, A modified Hestenes and Stiefel conjugate gradient algorithm for large-scale nonsmooth minimizations and nonlinear equations, J. Optim. Theory Appl., 168 (2016), pp. 129-152.
[24] G. Yuan, Z. Sheng, P. Wang, W. Hu, and C. Li, The global convergence of a modified BFGS method for nonconvex functions, J. Comput. Appl. Math., 327 (2018), pp. 274-294.
[25] G. Yuan, Z. Wei, and G. Li, A modified Polak-Ribière-Polyak conjugate gradient algorithm for nonsmooth convex programs, J. Comput. Appl. Math., 255 (2014), pp. 86-96.
[26] G. Yuan and M. Zhang, A three-term Polak-Ribière-Polyak conjugate gradient algorithm for large-scale nonlinear equations, J. Comput. Appl. Math., 286 (2015), pp. 186-195.
[27] G. Yuan, Z. Wei, and X. Lu, Global convergence of the BFGS method and the PRP method for general functions under a modified weak Wolfe-Powell line search, Appl. Math. Model., 47 (2017), pp. 811-825.
[28] N. Andrei, An unconstrained optimization test functions collection, Adv. Model. Optim., 10 (2008), pp. 147-161.
