Applied Mathematics and Computation 194 (2007) 224–233 www.elsevier.com/locate/amc
A revised cut-peak function method for box constrained continuous global optimization

Zheng-Hai Huang*, Xin-He Miao, Ping Wang

Department of Mathematics, School of Science, Tianjin University, Tianjin 300072, PR China
Abstract

In this paper, we modify the concept of the cut-peak function given in Wang et al. [Y. Wang, W. Fang, T. Wu, A deterministic algorithm of global optimization using cut-peak functions, Technical Report, in: The Conference of Mathematical Programming of China, 2006], and propose a revised cut-peak function algorithm for solving box constrained continuous global optimization problems. A smoothing technique is used to overcome the difficulty arising from the non-smoothness of the constructed function. Using the exterior penalty function method, we iteratively find a better minimizer from the current local minimizer until a global minimizer of the concerned problem is found. Some preliminary numerical results are reported.
© 2007 Elsevier Inc. All rights reserved.

Keywords: Global optimization; Cut-peak function; Global minimizer
1. Introduction

Global optimization is concerned with the theory and algorithms for seeking global minimizers of multimodal functions. It has wide applications in almost all fields of engineering, finance, management, and social science. Many methods have been proposed in the literature for solving global optimization problems; how to solve such problems efficiently, however, remains a great challenge. The existing literature in the field of global optimization can usually be divided into two classes, stochastic methods and deterministic methods, and deterministic methods are usually the more efficient. In recent decades, some deterministic methods (such as the filled function method [1-3,7-11,13,14]) have attracted much interest in the field of global optimization. The basic idea of these methods is to find a sequence of local minimizers with monotonically decreasing objective values, where the sequence leaves a local minimizer for a better minimizer of the objective function by minimizing an auxiliary function constructed at the current local minimizer. In numerical experiments, one of the main difficulties is how to give a proper termination rule.
* Corresponding author. E-mail addresses: [email protected], [email protected] (Z.-H. Huang).
0096-3003/$ - see front matter 2007 Elsevier Inc. All rights reserved. doi:10.1016/j.amc.2007.04.045
Recently, Wang et al. [15] proposed a cut-peak function method for solving unconstrained continuous global optimization problems (see Section 2 for details). This method is relatively easy to implement compared with existing deterministic methods. However, when all other minimizers of the concerned problem are far away from the current local minimizer, it is possible that the global minimizers are cut down, so that a global minimizer cannot be found. In this paper, we modify the concept of cut-peak function introduced in [15] and propose a revised cut-peak function method for solving box constrained continuous global optimization problems. The revised method overcomes the disadvantage mentioned above. To show its efficiency, we report numerical results on fifteen test problems taken from the recent literature; for every problem tested in this paper, the method finds a global minimizer.

This paper is organized as follows. In Section 2, we review some concepts and the idea of the cut-peak function method of [15]. In Section 3, we describe the idea of the revised cut-peak function method and present a revised cut-peak function algorithm. In Section 4, we report on the numerical implementation of the algorithm. Some conclusions are drawn in Section 5.

2. Cut-peak function method

In this section, we review some concepts and the idea of the cut-peak function method introduced in [15]. In [15] the authors considered the following unconstrained continuous optimization problem:

    min_{x ∈ Ω} f(x),                                                      (2.1)
where f is Lipschitz continuous and differentiable on a compact region Ω ⊂ Rⁿ. The following important concepts were introduced.

Definition 2.1. w(r, x^(k); x) is said to be a cut-peak function of f at the point x^(k) with a positive parameter r if the following two conditions are satisfied:

(i) x^(k) is the unique maximum point of w(r, x^(k); x), and w(r, x^(k); x^(k)) = f(x^(k));
(ii) for any direction d ∈ Rⁿ, w(r, x^(k); x^(k) + λd) is strictly decreasing with respect to the step length λ, and

    lim_{λ→+∞} w(r, x^(k); x^(k) + λd) = f(x^(k)) − c(r) > −∞,

where c(r) is a positive scalar depending on the given constant r, called the maximum-cut of w(r, x^(k); x) at x^(k).

Definition 2.2. F(r, x^(k); x) = min(f(x), w(r, x^(k); x)) is said to be a choice function of f crossing through the point x^(k).

The cut-peak function method proposed in [15] can be described as follows:

Phase 1: Solve problem (2.1) and obtain a local minimizer x^(k).
Phase 2: Solve min F(r, x^(k); x) iteratively with the initial point x := x^(k) + λe (where λ is a proper real number, E is a set of search vectors defined on the unit sphere of Rⁿ, and e ∈ E) and obtain a local minimizer x̃.
Phase 3: If x̃ ∈ Ω and f(x̃) < f(x^(k)), then set x^(k) := x̃, k := k + 1, and go to Phase 2; otherwise, update the parameter λ and the search direction e, set k := k + 1, and go to Phase 2.

The algorithm terminates after all unit vectors in E have been used. It is possible that the global minimizers are cut down when all other minimizers of the concerned problem are far away from the current local minimizer, or when the constructed cut-peak function is not suitable. This can be seen from the following example.

Example 2.1. Choose the objective function f(x) = eˣ sin(2πx), where x ∈ [0, 4] and x^(k) = 1.275; the cut-peak function

    w(r, x^(k); x) = f(x^(k)) − 35(x − x^(k))² / (1 + (x − x^(k))²),

where x ∈ [0, 4] and x^(k) = 1.275; and the choice function
[Fig. 1. Objective function, cut-peak function and choice function.]
F(r, x^(k); x) = min(f(x), w(r, x^(k); x)), where x ∈ [0, 4] and x^(k) = 1.275. The objective function, cut-peak function, and choice function are shown in Fig. 1.

Remark 2.1. It is easy to show that the function w(r, x^(k); ·) given in Example 2.1 is a cut-peak function. However, it is not difficult to see from Fig. 1 that the global minimizer of the concerned problem is cut down, so that it cannot be found by the cut-peak function method if this cut-peak function is used. Of course, we may modify the cut-peak function so that the method becomes efficient for this particular problem; for a general global optimization problem, however, we do not know in advance where the global minimizers are. A natural question is whether there is a function, similar to the cut-peak function, for which the disadvantage mentioned above is overcome. We answer this question in the following section.

3. Revised cut-peak function method

3.1. Basic ideas and algorithm

In this paper, we consider the following box constrained continuous optimization problem:

    min f(x)  s.t.  l ≤ x ≤ u,                                             (3.1)
where l, u ∈ Rⁿ with l < u, and f is differentiable on [l, u]. Let x^(k) be a local minimizer of (3.1), which can be found by a usual local optimization method.
Definition 3.1. w(x^(k); x) := f(x^(k)) is said to be a revised cut-peak function of f at the point x^(k).

It should be noted that the function w(x^(k); ·) in Definition 3.1 does not satisfy the conditions of Definition 2.1, and hence it is not a cut-peak function according to that definition. However, it has a better property than the one defined in Definition 2.1, as will be seen later.

Definition 3.2. F(x^(k); x) := min(f(x), w(x^(k); x)) is said to be a revised choice function of f crossing through the point x^(k).

We use Example 2.1 with the revised cut-peak function given in Definition 3.1 to illustrate the related concepts; see Fig. 2.

Definition 3.3. Let x¹, x² be two minimizers of (3.1). If f(x¹) < f(x²), then x¹ is said to be a minimizer lower than x².

It is obvious from Fig. 2 that no minimizer lower than x^(k) can be cut down. This is the main reason why we revise the concept of the cut-peak function. Before describing the revised cut-peak function method, we need the following auxiliary problem:

    min F(x^(k); x)  s.t.  l ≤ x ≤ u,                                      (3.2)

where the function F(x^(k); ·) is defined in Definition 3.2. Now we describe the revised cut-peak function method.

Algorithm 3.1 (A Revised Cut-Peak Function Algorithm). Let eᵢ denote the ith column of the n × n identity matrix for each i ∈ {1, 2, ..., n}, and set E := {±eᵢ : i = 1, 2, ..., n}. Choose an initial point x^(0) ∈ Rⁿ and a tolerance parameter ε. Set k := 0 and m := 0.

Step 1. Find a local minimizer of problem (3.1), denoted by x*, by using the initial point x^(k).
Step 2. Choose a positive number τ and a search direction eᵢ ∈ E, and set x^(k+1) := x* + τeᵢ. Find a local minimizer of problem (3.2), denoted by x¹, by using the initial point x^(k+1).
[Fig. 2. Objective function, revised cut-peak function and choice function.]
Step 3. If f(x¹) < f(x*), then set x* := x¹, k := k + 1, and m := 0, and go to Step 2; otherwise, choose a positive number τ and a search direction eᵢ ∈ E different from the one used in Step 2, set m := m + 1, and go to Step 2.
Step 4. If m = 2n, then stop the algorithm and output x* as an approximate global minimizer of problem (3.1).

It is well known that giving a proper stopping criterion is very difficult for most global optimization algorithms. In fact, for the (revised) cut-peak function method, the correctness of the obtained global minimizer and the computational cost depend on the size of the search direction set E. Theoretically, E should be large; in practical numerical implementation, however, it need not be. In Algorithm 3.1, E is comprised of 2n directions. As seen later, this simple stopping criterion is satisfactory in our numerical implementation.

3.2. Smoothing of the revised choice function

The objective function of problem (3.2) is non-smooth. To overcome the difficulty caused by the non-smoothness, we use the following smoothing function [4,5,12]:
    φ(μ, a, b) = 0.5 [ (1 + μ)(a + b) − √((1 − μ)²(a − b)² + 4μ²) ],

where a, b ∈ R and μ is a non-negative number. The following properties are easily shown.

Proposition 3.1.
(i) min{a, b} = φ(0, a, b);
(ii) the function φ is continuously differentiable when μ > 0.

Thus, problem (3.2) can be solved by solving

    min φ(μ, f(x^(k)), f(x))  s.t.  l ≤ x ≤ u                              (3.3)

iteratively, letting μ → 0.

3.3. Local optimization methods

Let

    p̂(x) = Σ_{i=1}^{n} [ (min(0, xᵢ − lᵢ))² + (min(0, −xᵢ + uᵢ))² ].        (3.4)
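The box penalty term (3.4) is straightforward to implement. A minimal sketch (the helper name `p_hat` is ours) checks its defining behavior: the penalty vanishes exactly when l ≤ x ≤ u and grows quadratically outside the box.

```python
def p_hat(x, l, u):
    # Exterior box penalty (3.4):
    # p_hat(x) = sum_i [ min(0, x_i - l_i)^2 + min(0, u_i - x_i)^2 ]
    total = 0.0
    for xi, li, ui in zip(x, l, u):
        total += min(0.0, xi - li) ** 2 + min(0.0, ui - xi) ** 2
    return total

# Inside the box the penalty is zero; outside it is positive.
print(p_hat([0.5, 0.5], [-1, -1], [1, 1]))   # 0.0
print(p_hat([2.0, 0.0], [-1, -1], [1, 1]))   # 1.0, from the violation (2 - 1)^2
```

Because each term is the square of a min with zero, p̂ is continuously differentiable, which is what allows the smooth BFGS subproblems used below.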
We use the exterior penalty function method to solve problem (3.1), where the penalty function is defined by

    p(η, x) = f(x) + η p̂(x)                                                (3.5)

with η being the penalty parameter. Similarly, we use the exterior penalty function method to solve problem (3.3) iteratively, letting μ → 0, where the penalty function is defined by

    q(η, x) = φ(μ, f(x^(k)), f(x)) + η p̂(x)                                (3.6)

with η being the penalty parameter. In both penalty function methods, we use the BFGS algorithm to solve the penalty subproblems.
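The smoothing function of Section 3.2 can be checked numerically. The sketch below (the function name `phi` is ours) verifies property (i) of Proposition 3.1 and that a tiny μ > 0 changes the value only slightly while removing the kink of the min.

```python
import math

def phi(mu, a, b):
    # Smoothing function of Section 3.2:
    # phi(mu, a, b) = 0.5*[(1+mu)(a+b) - sqrt((1-mu)^2 (a-b)^2 + 4 mu^2)]
    return 0.5 * ((1 + mu) * (a + b)
                  - math.sqrt((1 - mu) ** 2 * (a - b) ** 2 + 4 * mu ** 2))

# Property (i): phi(0, a, b) recovers min(a, b) exactly.
print(phi(0.0, 3.0, -1.5))                            # -1.5
# For small mu > 0 the function is smooth and stays close to min(a, b).
print(abs(phi(1e-9, 3.0, -1.5) - (-1.5)) < 1e-6)      # True
```

This is exactly the role φ plays in (3.3) and (3.6): with a = f(x^(k)) and b = f(x), it replaces the non-smooth revised choice function by a smooth surrogate that converges to it as μ → 0.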
4. Algorithm implementation and numerical results

4.1. Test problems

We test the following problems, which have been tested in the literature on various global optimization methods.

Problem 4.1. The three-hump camelback problem [1,14]:

    f(x) = 2x₁² − 1.05x₁⁴ + x₁⁶/6 − x₁x₂ + x₂²,
    l = (−3, −3)ᵀ,  u = (3, 3)ᵀ.

The known global minimizer is x* = (0, 0)ᵀ with optimal value f(x*) = 0. The initial point in our numerical computation is (2, 2)ᵀ.

Problem 4.2. The six-hump camelback problem [6,1,8,14,15]:

    f(x) = 4x₁² − 2.1x₁⁴ + x₁⁶/3 + x₁x₂ − 4x₂² + 4x₂⁴,
    l = (−3, −1.5)ᵀ,  u = (3, 1.5)ᵀ.

The two known global minimizers are x* = (−0.089842, 0.712656)ᵀ and x* = (0.089842, −0.712656)ᵀ with optimal value f(x*) = −1.031628. The initial point in our numerical computation is (2, 2)ᵀ.

Problem 4.3. The Treccani problem [1,14]:

    f(x) = x₁⁴ + 4x₁³ + 4x₁² + x₂²,
    l = (−3, −3)ᵀ,  u = (3, 3)ᵀ.

The two known global minimizers are x* = (−2, 0)ᵀ and x* = (0, 0)ᵀ with optimal value f(x*) = 0. The initial point in our numerical computation is (1, 1)ᵀ.

Problem 4.4. The Goldstein–Price problem [1,8,14,15]:

    f(x) = [1 + (x₁ + x₂ + 1)²(19 − 14x₁ + 3x₁² − 14x₂ + 6x₁x₂ + 3x₂²)]
           × [30 + (2x₁ − 3x₂)²(18 − 32x₁ + 12x₁² + 48x₂ − 36x₁x₂ + 27x₂²)],
    l = (−3, −3)ᵀ,  u = (3, 3)ᵀ.

The known global minimizer is x* = (0, −1)ᵀ with optimal value f(x*) = 3. The initial point in our numerical computation is (0.5, 0.5)ᵀ.

Problem 4.5. The Branin problem [8,15]:

    f(x) = (x₂ − 1.275x₁²/π² + 5x₁/π − 6)² + 10(1 − 0.125/π)cos(x₁) + 10,
    l = (−5, 0)ᵀ,  u = (10, 15)ᵀ.

The three known global minimizers are x* = (3.1416, 2.2750)ᵀ, x* = (−3.14159, 12.2750)ᵀ, and x* = (9.42478, 2.47499)ᵀ with optimal value f(x*) = 0.3979. The initial point in our numerical computation is (5, 5)ᵀ.

Problem 4.6. The Rastrigin problem [8,15]:

    f(x) = x₁² + x₂² − cos(18x₁) − cos(18x₂),
    l = (−1, −1)ᵀ,  u = (1, 1)ᵀ.

It has about 50 minimizers. The known global minimizer is x* = (0, 0)ᵀ with optimal value f(x*) = −2. The initial point in our numerical computation is (2, 2)ᵀ.

Problem 4.7. The two-dimensional Shubert problem I [6,1,14]:

    f(x) = {Σ_{i=1}^{5} i cos((i+1)x₁ + i)} {Σ_{i=1}^{5} i cos((i+1)x₂ + i)},
    l = (−10, −10)ᵀ,  u = (10, 10)ᵀ.

It has about 760 minimizers. One of the known global minimizers is x* = (−1.42513, −0.80032)ᵀ with optimal value f(x*) = −186.730909. The initial point in our numerical computation is (2, 2)ᵀ.

Problem 4.8. The two-dimensional Shubert problem II [6,1,14]:

    f(x) = {Σ_{i=1}^{5} i cos((i+1)x₁ + i)} {Σ_{i=1}^{5} i cos((i+1)x₂ + i)}
           + (1/2)[(x₁ + 1.42513)² + (x₂ + 0.80032)²],
    l = (−10, −10)ᵀ,  u = (10, 10)ᵀ.

It exhibits the same characteristics as Problem 4.7. The known global minimizer is x* = (−1.42513, −0.80032)ᵀ with optimal value f(x*) = −186.730909. The initial point in our numerical computation is (3, 3)ᵀ.
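Shubert-type objectives are easy to get wrong when transcribing, so a quick sanity check is worthwhile. The sketch below (helper names ours) evaluates the product form of Problem 4.7 at x* = (−1.42513, −0.80032), the point at which the added quadratic terms of Problems 4.8 and 4.9 vanish, so the value there should match the stated optimum −186.730909.

```python
import math

def shubert_factor(y):
    # One factor of the Shubert objective: sum_{i=1..5} i*cos((i+1)*y + i)
    return sum(i * math.cos((i + 1) * y + i) for i in range(1, 6))

def shubert(x1, x2):
    # Problem 4.7's product form
    return shubert_factor(x1) * shubert_factor(x2)

x_star = (-1.42513, -0.80032)
print(round(shubert(*x_star), 4))   # about -186.7309, matching f(x*) in the text
```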
Problem 4.9. The two-dimensional Shubert problem III [6,1,8,14,15]:

    f(x) = {Σ_{i=1}^{5} i cos((i+1)x₁ + i)} {Σ_{i=1}^{5} i cos((i+1)x₂ + i)}
           + (x₁ + 1.42513)² + (x₂ + 0.80032)²,
    l = (−10, −10)ᵀ,  u = (10, 10)ᵀ.

It exhibits the same characteristics as Problem 4.7. The known global minimizer is x* = (−1.42513, −0.80032)ᵀ with optimal value f(x*) = −186.730909. The initial point in our numerical computation is (4, 4)ᵀ.

Problem 4.10. The Sine-Square problem I (n = 6) [6,1,8,14,15]:

    f(x) = (π/n){10 sin²(πx₁) + Σ_{i=1}^{n−1} (xᵢ − 1)²(1 + 10 sin²(πx_{i+1})) + (x_n − 1)²},
    l = (−10, −10, ..., −10)ᵀ,  u = (10, 10, ..., 10)ᵀ.

It has about 60 minimizers. The known global minimizer is x* = (1, 1, ..., 1)ᵀ with optimal value f(x*) = 0. The initial point in our numerical computation is (5, 5, 5, 5, 5, 5)ᵀ.

Problem 4.11. The Sine-Square problem II (n = 6) [6,8,15]:

    f(x) = (π/n){10 sin²(πy₁) + Σ_{i=1}^{n−1} (yᵢ − 1)²(1 + 10 sin²(πy_{i+1})) + (y_n − 1)²},
    yᵢ = 1 + (xᵢ − 1)/4,
    l = (−10, −10, ..., −10)ᵀ,  u = (10, 10, ..., 10)ᵀ.

It has about 30 minimizers. The known global minimizer is x* = (1, 1, ..., 1)ᵀ with optimal value f(x*) = 0. The initial point in our numerical computation is (3, 3, 3, 3, 3, 3)ᵀ.

Problem 4.12. The Sine-Square problem III (n = 6) [6,8,15]:

    f(x) = (1/10){sin²(3πx₁) + Σ_{i=1}^{n−1} (xᵢ − 1)²(1 + sin²(3πx_{i+1})) + (x_n − 1)²(1 + sin²(2πx_n))},
    l = (−10, −10, ..., −10)ᵀ,  u = (10, 10, ..., 10)ᵀ.

It has about 180 minimizers. The known global minimizer is x* = (1, 1, ..., 1)ᵀ with optimal value f(x*) = 0. The initial point in our numerical computation is (4, 4, 4, 4, 4, 4)ᵀ.

Problem 4.13. The simplified Rosenbrock problem [1]:

    f(x) = 0.5(x₁² − x₂)² + (x₁ − 1)²,
    l = (−3, −3)ᵀ,  u = (3, 3)ᵀ.
The known global minimizer is x* = (1, 1)ᵀ with optimal value f(x*) = 0. The initial point in our numerical computation is (2, 2)ᵀ.

Problem 4.14. The generalized Schwefel problem [15]:

    f(x) = −Σ_{i=1}^{30} xᵢ sin(√|xᵢ|),
    l = (−500, ..., −500)ᵀ,  u = (500, ..., 500)ᵀ.

The known global minimizer is x* = (420.9687, 420.9687, ..., 420.9687)ᵀ. The initial point in our numerical computation is (400, ..., 400)ᵀ.

Problem 4.15. The generalized Rastrigin problem [15]:

    f(x) = Σ_{i=1}^{30} [xᵢ² − 10 cos(2πxᵢ) + 10],
    l = (−5.12, ..., −5.12)ᵀ,  u = (5.12, ..., 5.12)ᵀ.

The known global minimizer is x* = (0, ..., 0)ᵀ. The initial point in our numerical computation is (2, ..., 2)ᵀ.

4.2. Algorithm implementation

In this subsection, we discuss the implementation of Algorithm 3.1 in MATLAB.

• The initial points are those given with the test problems in Section 4.1.
• In Step 1, we take the penalty parameter η = 100 and use the BFGS algorithm to find iteratively minimizers of the function p(η, x) defined by (3.5), starting from the initial point x^(k). The generated iteration point is denoted by x̂^(k). If η p̂(x̂^(k)) ≥ 1.0e−7 (where p̂(x) is defined by (3.4)), we let η = 1.5η and go to the next iteration; otherwise, a local minimizer of f is found.
• In Step 2, we take the step parameter τ = 1.2 and the penalty parameter η = 100, and we simply fix the smoothing parameter at μ = 1.0e−9. We use the BFGS algorithm to find iteratively minimizers of the function q(η, x) defined by (3.6), starting from the initial point x^(k+1). The generated iteration point is denoted by x̃^(k). If η p̂(x̃^(k)) ≥ 1.0e−7, we let η = 1.5η and go to the next iteration; otherwise, a local minimizer of F(x^(k); x) is found.
• In Step 3, we take the step parameter τ = 1.2.
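The penalty-parameter schedule described above (η = 100, then η ← 1.5η until η·p̂ < 1.0e−7) can be sketched in a few lines. For illustration only, we use a hypothetical one-dimensional instance, f(x) = (x − 2)² on the box [0, 1], whose constrained minimizer is x = 1, and a crude derivative-free local search standing in for BFGS; only the penalty schedule follows the text.

```python
def f(x):                       # hypothetical test objective; unconstrained min at x = 2
    return (x - 2.0) ** 2

def p_hat(x, l=0.0, u=1.0):     # one-dimensional box penalty (3.4)
    return min(0.0, x - l) ** 2 + min(0.0, u - x) ** 2

def local_min(g, x, step=0.25, tol=1e-10):
    # Crude coordinate search standing in for BFGS: move while improving,
    # then halve the step until it is below tol.
    while step > tol:
        if g(x + step) < g(x):
            x += step
        elif g(x - step) < g(x):
            x -= step
        else:
            step *= 0.5
    return x

eta, x = 100.0, 0.5             # eta = 100, as in Step 1
while True:
    x = local_min(lambda t: f(t) + eta * p_hat(t), x)
    if eta * p_hat(x) < 1.0e-7: # termination test from the implementation notes
        break
    eta *= 1.5                  # eta <- 1.5*eta, as in the text
print(round(x, 3))              # close to the constrained minimizer x = 1
```

Each outer iteration solves the unconstrained penalty problem from the previous solution, so the iterates approach the box from outside, which is the characteristic behavior of the exterior penalty method used in Section 3.3.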
4.3. Numerical results

The computational results are summarized in Table 1, where

• PROB denotes the number of the problem tested;
• DIM denotes the dimension of the problem tested;
• Nf denotes the number of objective function evaluations when the algorithm terminates;
• NJf denotes the number of gradient evaluations when the algorithm terminates;
• NNf denotes the number of objective function evaluations when a global minimizer is found (during the computation it is possible that a global minimizer has already been found without our knowing it is one, so the algorithm must continue until the termination rule is satisfied);
• NNJf denotes the number of gradient evaluations when a global minimizer is found;
• SOLU denotes the global minimizer obtained when the algorithm terminates;
• VOpt denotes the objective value at the obtained global minimizer.
Table 1
The numerical results of Problems 4.1–4.15

PROB  DIM  Nf   NJf  NNf  NNJf  SOLU                              VOpt
4.1   2    28   16   24   12    (0.0192, 0.2011)ᵀ × 10⁻⁷          4.504e−16
4.2   2    45   22   41   18    (−0.0895, 0.7126)ᵀ                −1.031628
4.3   2    28   16   24   12    (−2, 0)ᵀ                          1.230e−15
4.4   2    153  74   69   29    (0, −1)ᵀ                          3.000000
4.5   2    24   14   20   10    (3.1416, 2.2750)ᵀ                 0.3978874
4.6   2    21   12   17   8     (0.2776, 0.2776)ᵀ × 10⁻¹²         −2
4.7   2    155  42   90   32    (−1.4251, 5.4829)ᵀ                −186.7309
4.8   2    110  30   58   20    (−1.4251, −0.8003)ᵀ               −186.7309
4.9   2    75   34   62   24    (−1.4251, −0.8003)ᵀ               −186.7309
4.10  6    54   32   42   20    (1, ..., 1)ᵀ                      1.1531e−14
4.11  6    74   43   62   31    (1, ..., 1)ᵀ                      3.9028e−12
4.12  6    87   48   75   36    (1, ..., 1)ᵀ                      1.0482e−12
4.13  2    23   14   19   10    (1, 1)ᵀ                           1.6881e−19
4.14  30   72   67   12   7     (420.9687, ..., 420.9687)ᵀ        −12569.49
4.15  30   65   63   5    3     (0.1532, ..., 0.1532)ᵀ × 10⁻¹³    0
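Two of the optimal values in Table 1 can be verified directly from the closed-form objectives of Section 4.1. The following quick check (helper names ours) evaluates the Goldstein–Price and six-hump camelback functions at their known global minimizers.

```python
def goldstein_price(x1, x2):
    # Problem 4.4
    a = 1 + (x1 + x2 + 1) ** 2 * (19 - 14*x1 + 3*x1**2 - 14*x2 + 6*x1*x2 + 3*x2**2)
    b = 30 + (2*x1 - 3*x2) ** 2 * (18 - 32*x1 + 12*x1**2 + 48*x2 - 36*x1*x2 + 27*x2**2)
    return a * b

def six_hump(x1, x2):
    # Problem 4.2
    return 4*x1**2 - 2.1*x1**4 + x1**6/3 + x1*x2 - 4*x2**2 + 4*x2**4

print(goldstein_price(0.0, -1.0))                 # 3.0, matching VOpt for 4.4
print(round(six_hump(0.089842, -0.712656), 6))    # about -1.031628, matching 4.2
```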
5. Some remarks

In this paper, we have proposed a revised cut-peak function algorithm for solving box constrained continuous global optimization problems. The new algorithm has a simple termination rule. From our preliminary numerical results, it is not difficult to see that

• the algorithm found a global minimizer for every problem tested in this paper; and
• the numbers of evaluations of the objective function and its gradient are smaller than those of the cut-peak function method [15].

These preliminary numerical results show that the new algorithm is promising.

Acknowledgements

This work was partially supported by the National Nature Science Foundation of China (No. 10571134) and the Science and Technology Development Plan of Tianjin (No. 06YFGZGX05600). It was also partially supported by the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry, and the Scientific Research Foundation of Tianjin University for the Returned Overseas Chinese Scholars.

References

[1] R.P. Ge, A filled function method for finding a global minimizer of a function of several variables, Math. Program. 46 (1990) 191–204.
[2] R.P. Ge, C.B. Huang, A continuous approach to nonlinear integer programming, Appl. Math. Comput. 34 (1989) 39–60.
[3] R.P. Ge, Y.F. Qin, A class of filled functions for finding a global minimizer of a function of several variables, J. Optim. Theory Appl. 54 (2) (1987) 241–252.
[4] Z.H. Huang, Locating a maximally complementary solution of the monotone NCP by using non-interior-point smoothing algorithms, Math. Method Oper. Res. 61 (2005) 41–55.
[5] Z.H. Huang, J. Han, Z. Chen, A predictor–corrector smoothing Newton algorithm, based on a new smoothing function, for solving the nonlinear complementarity problem with a P0 function, J. Optim. Theory Appl. 117 (2003) 39–68.
[6] A.V. Levy, A. Montalvo, The tunneling algorithm for the global minimization of functions, SIAM J. Sci. Stat. Comput. 6 (1) (1985) 15–29.
[7] X. Liu, A computable filled function used for global optimization, Appl. Math. Comput. 126 (2002) 271–278.
[8] X. Liu, Finding global minima with a computable filled function, J. Global Optim. 19 (2001) 151–161.
[9] X. Liu, W. Xu, A new filled function applied to global optimization, Comput. Oper. Res. 31 (2004) 61–80.
[10] S. Lucidi, V. Piccialli, New classes of globally convexized filled functions for global optimization, J. Global Optim. 24 (2002) 219–236.
[11] Y.L. Shang, L.S. Zhang, A filled function method for finding a global minimizer on global integer optimization, J. Comput. Appl. Math. 181 (2005) 200–210.
[12] J. Sun, Z.H. Huang, A smoothing Newton algorithm for solving the LCP with a sufficient matrix that terminates finitely at a maximally complementary solution, Optim. Method Softw. 21 (4) (2006) 597–615.
[13] Z. Xu, H. Huang, P. Pardalos, C. Xu, Filled functions for unconstrained global optimization, J. Global Optim. 20 (2001) 49–65.
[14] W.X. Zhu, A class of filled functions for box constrained continuous global optimization, Appl. Math. Comput. 169 (2006) 129–145.
[15] Y. Wang, W. Fang, T. Wu, A deterministic algorithm of global optimization using cut-peak functions, Technical Report, in: The Conference of Mathematical Programming of China, 2006.