A filled function method with one parameter for unconstrained global optimization


Applied Mathematics and Computation 218 (2011) 3776–3785


Hongwei Lin a,*, Yuping Wang b, Lei Fan b

a Department of Mathematics, Faculty of Science, Xidian University, Xi'an, Shaanxi 710071, PR China
b School of Computer Science and Technology, Xidian University, Xi'an, Shaanxi 710071, PR China


Keywords: Filled function; Unconstrained global optimization; Filled function method; Global minima problem

Abstract. The filled function method is considered an efficient approach for solving global optimization problems. In this paper, a new filled function method is proposed. Its main idea is as follows: once a minimizer of the objective function has been found, a new continuously differentiable filled function with only one parameter is constructed; a minimizer of the filled function is then located in a lower basin of the objective function, from which a better minimizer of the objective function is obtained. This process is repeated until the global optimal solution is found. Numerical experiments show the efficiency of the proposed filled function method.
© 2011 Elsevier Inc. All rights reserved.

1. Introduction

Global optimization is very important in many practical applications such as social science, engineering, and finance management. Many efficient and stable methods have been reported since the 1970s. In general, the existing approaches can be classified into two categories: deterministic approaches and probabilistic approaches. The former mainly includes the covering method [1,2], the filled function method [3-5], and the tunneling method [6]; the latter includes simulated annealing, genetic algorithms, particle swarm optimization, differential evolution, and so on. Among these methods, the filled function method appears to have several advantages, mainly because it is easy to implement, it successively finds smaller local minimizers, and its convergence can be ensured.

The filled function method was initially introduced by Ge [4,5] to solve unconstrained global optimization problems and has been improved in recent years [7-12]. For minimization problems, the idea of the filled function method is to construct a filled function FF(x) and minimize it so as to escape from the current local minimizer to a point that falls into a lower basin of the objective function. The existing filled functions still have some drawbacks: the filled functions proposed in [9-11] are non-smooth, so the usual classical local optimization methods cannot be applied; the filled functions proposed in [4,7,8] have more than one parameter, which are difficult to control; and the filled functions proposed in [9,10] contain exponential or logarithmic terms, which can cause ill-conditioning. To eliminate all of the aforementioned drawbacks, a new filled function with a simple form, a single parameter, and no ill-conditioned terms is proposed in this paper.

The remainder of this paper is organized as follows. The basic concepts and some notations are given in Section 2. Section 3 proposes a new filled function and analyzes its properties. A global optimization algorithm with some practical

This work was supported by the National Natural Science Foundation of China (No. 60873099) and the Fundamental Research Funds for the Central Universities (No. 72005502).
* Corresponding author. E-mail addresses: [email protected] (H. Lin), [email protected] (Y. Wang).

doi:10.1016/j.amc.2011.09.022


considerations is given in Section 4, together with numerical results on some test examples. Finally, some conclusions are drawn in Section 5.

2. Preliminaries

Consider the following global minimization problem:

(P)   min_{x ∈ R^n} f(x),

where f(x) is a twice continuously differentiable function on R^n and satisfies

f(x) → +∞  as  ||x|| → +∞.    (1)

Then there exists a box X = ∏_{i=1}^{n} [l_i, u_i] ⊂ R^n whose interior contains all minimizers of f(x). Generally, we assume that X is known and that f(x) has only a finite number of minimizers in X. Then problem (P) is equivalent to the following problem:

(BCP)   min_{x ∈ X} f(x).

Definition 1. The basin [12] of f(x) at an isolated local minimizer x_1^* is a connected domain B(x_1^*) which contains x_1^* and in which, starting from any point, the steepest descent sequences of f(x) converge to x_1^*, while from any point outside B(x_1^*) the minimization sequences of f(x) do not converge to x_1^*. If x̄_1 is an isolated maximizer of f(x), the hill of f(x) at x̄_1 is the basin of −f(x) at its local minimizer x̄_1.

It is clear that f(x) > f(x_1^*) holds for any point x ∈ B(x_1^*) with x ≠ x_1^*. If there is another minimizer x_2^* of f(x) and f(x_2^*) < (>) f(x_1^*), then the basin B(x_2^*) of f(x) at x_2^* is said to be lower (higher) than B(x_1^*) of f(x) at x_1^*.

Definition 2. The S-basin [4] of f(x) at an isolated local minimizer x_1^* is a connected domain S(x_1^*) contained in B(x_1^*), in which for any x ≠ x_1^* the inequality

(x − x_1^*)^T ∇f(x) > 0    (2)

holds.

If ∇²f(x_1^*) is positive definite, it is obvious that the minimal radius of S(x_1^*),

r(x_1^*) = min_{x ∉ S(x_1^*)} ||x − x_1^*||,    (3)

is not zero. In what follows, we always assume that ∇²f(x_i^*) is positive definite for all local minimizers x_i^*, i = 1, 2, . . ., and then r(x_i^*) > 0.

Suppose that x_1^* is the local minimizer which has been found so far, and denote the set of the other local minimizers which are better than x_1^* by Bm = {x_i', i = 1, 2, . . .}. Let the index I satisfy

||x_I' − x_1^*|| = min_i ||x_i' − x_1^*||,  x_i' ∈ Bm.    (4)

Let

S_1 = {x ∈ X : ||x − x_1^*|| ≤ ||x_I' − x_1^*||}    (5)

and

S_2 = X \ S_1.    (6)

Additionally, some useful symbols are defined as follows:

M = sup_{x ∈ S_1} (f(x) − f(x_1^*)),    (7)

L_1 = sup_{x ∈ S_1} ||x − x_1^*||,    (8)

L_2 = sup_{x ∈ S_1} ||∇f(x)||.    (9)
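For intuition, the suprema in (7)-(9) can be estimated numerically by crude sampling. The helper below is an illustrative sketch only, not part of the paper's algorithm: it samples the whole box X as a stand-in for S_1 and uses finite-difference gradients. The sample maxima underestimate the true suprema M, L_1, L_2, which feed the bound P > L_1 L_2 / 2 of Theorem 2 below.

```python
import numpy as np

def numeric_grad(f, x, h=1e-6):
    """Central-difference gradient of f at x (for illustration only)."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def estimate_constants(f, x1_star, lo, hi, n_samples=2000, seed=0):
    """Monte-Carlo estimates of M, L1, L2 in (7)-(9).

    The box [lo, hi] is sampled as a stand-in for S1; the returned sample
    maxima are lower bounds for the true suprema."""
    rng = np.random.default_rng(seed)
    f1 = f(x1_star)
    M = L1 = L2 = 0.0
    for _ in range(n_samples):
        x = rng.uniform(lo, hi)
        M = max(M, f(x) - f1)
        L1 = max(L1, np.linalg.norm(x - x1_star))
        L2 = max(L2, np.linalg.norm(numeric_grad(f, x)))
    return M, L1, L2

# Two-dimensional example with a local minimizer at (0, 0); the box is [-3, 3]^2.
f = lambda x: x[0]**4 + 4*x[0]**3 + 4*x[0]**2 + x[1]**2
M, L1, L2 = estimate_constants(f, np.array([0.0, 0.0]),
                               np.array([-3.0, -3.0]), np.array([3.0, 3.0]))
# L1*L2/2 then gives a (sampled) lower bound for the parameter P in Theorem 2.
```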

Definition 3 (A modified definition of the filled function). A function FF(x) is called a filled function of f(x) at a local minimizer x_1^* if it satisfies the following properties:

1. x_1^* is a maximizer of FF(x), and the whole basin B(x_1^*) is part of a hill of FF(x) at x_1^*;
2. if there is another minimizer x_2^* which satisfies f(x_2^*) < f(x_1^*) and I is defined by (4), then every x in S_1 with f(x) > f(x_1^*) is not a stationary point of FF(x);


3. if there is another minimizer x_2^* which satisfies f(x_2^*) < f(x_1^*), then the basin B(x_2^*) is lower than B(x_1^*), and there is a point x̄ in B(x_2^*) that minimizes FF(x) on the line connecting x_1^* and x̄.

Based on Definition 3, a new filled function is proposed in Section 3.

3. A filled function and its properties

Consider the following function, which contains one parameter, for problem (BCP):

FF(x; x_1^*, P) = −(f(x) − f(x_1^*) + P) ||x − x_1^*||²,    (10)

where P > 0 is a parameter. It is obvious that FF(x; x_1^*, P) is twice continuously differentiable. The following theorems show that FF(x; x_1^*, P) is a filled function when P satisfies some conditions.

Theorem 1. Suppose x_1^* is a local minimizer of f(x) and FF(x; x_1^*, P) is defined by (10). Then x_1^* is a strict local maximizer of FF(x; x_1^*, P) for all P > 0.

Proof. When x ∈ B(x_1^*) and x ≠ x_1^*, one has f(x) > f(x_1^*). Since P > 0, then

FF(x; x_1^*, P) = −(f(x) − f(x_1^*) + P) ||x − x_1^*||² < 0 = FF(x_1^*; x_1^*, P).

Thus, x_1^* is a strict local maximizer of FF(x; x_1^*, P) for all P > 0. □

Theorem 2. Suppose x_1^* is a local minimizer of f(x) and x ∈ S_1 is a point different from x_1^*. If x satisfies f(x) > f(x_1^*), then x − x_1^* is a descent direction of FF(x; x_1^*, P) at x when P > L_1 L_2 / 2, where L_1 and L_2 are defined by (8) and (9), respectively.

Proof. By definition (10), one has

∇FF(x; x_1^*, P) = −[||x − x_1^*||² ∇f(x) + 2(f(x) − f(x_1^*) + P)(x − x_1^*)].    (11)

It is obvious that f(x) − f(x_1^*) + P > 0, since f(x) > f(x_1^*). If (x − x_1^*)^T ∇f(x) > 0, then it follows from (11) that

(x − x_1^*)^T ∇FF(x; x_1^*, P) = −[||x − x_1^*||² (x − x_1^*)^T ∇f(x) + 2(f(x) − f(x_1^*) + P) ||x − x_1^*||²] < 0.

Thus x − x_1^* is a descent direction of FF(x; x_1^*, P) at x.

When x ∉ S(x_1^*), namely ||x − x_1^*|| > r(x_1^*), it is possible that

(x − x_1^*)^T ∇f(x) ≤ 0.

To make x − x_1^* also a descent direction of FF(x; x_1^*, P) at x, it is required that

||x − x_1^*||² (x − x_1^*)^T ∇f(x) + 2(f(x) − f(x_1^*) + P) ||x − x_1^*||² > 0.    (12)

Note that

−(x − x_1^*)^T ∇f(x) < 2(f(x) − f(x_1^*) + P)    (13)

implies (12), and

−(x − x_1^*)^T ∇f(x) ≤ ||x − x_1^*|| ||∇f(x)||

even if (x − x_1^*)^T ∇f(x) ≤ 0. Because P > L_1 L_2 / 2, then

−(x − x_1^*)^T ∇f(x) ≤ ||x − x_1^*|| ||∇f(x)|| < 2P    (14)

and

2P < 2(f(x) − f(x_1^*) + P).

Inequality (13) follows from these two inequalities. Therefore, x − x_1^* is always a descent direction of FF(x; x_1^*, P) at x when f(x) ≥ f(x_1^*) and P > L_1 L_2 / 2. □
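To make (10) concrete, here is a small numerical illustration of Theorems 1 and 2 (an illustrative sketch, not from the paper), using the smooth two-dimensional function f(x) = x_1⁴ + 4x_1³ + 4x_1² + x_2², which has a local minimizer at (0, 0); P = 1 is an arbitrary choice.

```python
import numpy as np

def make_filled(f, x1_star, P):
    """Filled function (10): FF(x; x1*, P) = -(f(x) - f(x1*) + P) * ||x - x1*||^2."""
    f1 = f(x1_star)
    def FF(x):
        d = x - x1_star
        return -(f(x) - f1 + P) * np.dot(d, d)
    return FF

f = lambda x: x[0]**4 + 4*x[0]**3 + 4*x[0]**2 + x[1]**2
x1_star = np.array([0.0, 0.0])
FF = make_filled(f, x1_star, P=1.0)

# Theorem 1: x1* is a strict local maximizer of FF: FF(x1*) = 0 and FF < 0 nearby.
assert FF(x1_star) == 0.0
assert all(FF(x1_star + d) < 0.0
           for d in (np.array([0.1, 0.0]), np.array([0.0, 0.1]), np.array([-0.1, 0.1])))

# Theorem 2 (illustration): along a ray from x1* on which f(x) > f(x1*),
# FF keeps decreasing, so x - x1* stays a descent direction of FF.
ts = np.linspace(0.1, 1.5, 15)
vals = [FF(x1_star + t * np.array([1.0, 0.5])) for t in ts]
assert all(b < a for a, b in zip(vals, vals[1:]))
```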


Now suppose there is another local minimizer x_2^* which is better than x_1^*. Is there an x' ∈ B(x_2^*) such that FF(x; x_1^*, P) has a minimizer along x' − x_1^*? The following theorems answer this question.

Theorem 3. If P is chosen to satisfy the inequality

P > L_1' L_2' / 2 − f(x*) + f(x_1^*),    (15)

then FF(x; x_1^*, P) is always decreasing in the direction x − x_1^* and FF(x; x_1^*, P) has no minimizer or saddle point in X \ {x_1^*}, where x* is a global minimizer of f(x) in X, L_1' = sup_{x_1, x_2 ∈ X} ||x_1 − x_2||, and L_2' = sup_{x ∈ X} ||∇f(x)||.

Proof. Note that FF(x; x_1^*, P) being always decreasing in the direction x − x_1^* is equivalent to (12). If (x − x_1^*)^T ∇f(x) ≥ 0, then (12) holds when P > −f(x*) + f(x_1^*). If (x − x_1^*)^T ∇f(x) < 0, then (12) can be obtained from

||x − x_1^*|| ||∇f(x)|| < 2(P + f(x) − f(x_1^*)),    (16)

and (16) follows from (15). Therefore, FF(x; x_1^*, P) is always decreasing in the direction x − x_1^* for all x ∈ X. □

Theorem 3 means that if P is too large, the global minimizer x* of f(x) will be lost. Therefore, P has to have an upper bound in order to find an x' in a lower basin of f(x) and to guarantee that there is a minimizer of FF(x; x_1^*, P) along the direction x' − x_1^* when there is a local minimizer x_2^* better than x_1^*.

Theorem 4. Suppose x ∈ S_1 \ S(x_1^*) satisfies f(x) ≤ f(x_1^*) and

(x − x_1^*)^T ∇f(x) < 0,    (17)

and P satisfies

2P < −(x − x_1^*)^T ∇f(x).    (18)

Then FF(x; x_1^*, P) is increasing at x in the direction x − x_1^*.

Proof. It follows from f(x) ≤ f(x_1^*) and (18) that

−(x − x_1^*)^T ∇f(x) > 2P ≥ 2(f(x) − f(x_1^*) + P).    (19)

It then follows from (11) and (19) that

(x − x_1^*)^T ∇FF(x; x_1^*, P) > 0.    (20)

This completes the proof. □

As stated in the previous theorems, FF(x; x_1^*, P) is decreasing along the direction x − x_1^* at any x with f(x) > f(x_1^*) when P is sufficiently large. But for x with f(x) ≤ f(x_1^*), P must have an upper bound to guarantee that there is an x' in a lower basin of f(x) and that FF(x; x_1^*, P) is increasing in the direction x' − x_1^*. Therefore, FF(x; x_1^*, P) has a minimizer along the direction x' − x_1^* in a lower basin of f(x). Thus, we have proved that FF(x; x_1^*, P) is a filled function of f(x) at x_1^* when the parameter P is appropriately chosen.

4. Filled function algorithm and numerical implementations

4.1. Filled function algorithm

Based on the theorems in the previous sections, a new algorithm for finding a global minimizer of problem (P) with a parameter adjustment scheme is proposed as follows.

Algorithm

Step 0: Initialization. Choose a step size sb and an initial value for the parameter P. Directions d_i, i = 1, 2, . . . , 2n, are given in advance, where d_i = (0, . . . , 1, . . . , 0)^T (the ith unit vector) for i = 1, 2, . . . , n and d_i = −d_{i−n} for i = n + 1, . . . , 2n; n is the dimension of the optimization problem. Choose x_1 ∈ X as an initial point and set k = 1, nc = 0 and bbcs = 0.


Step 1: Minimize f(x) starting from an initial point x_k ∈ X and obtain a minimizer x_k^* of f(x).

Step 2: Construct the function

FF(x; x_k^*, P) = −(f(x) − f(x_k^*) + P) ||x − x_k^*||²

and go to Step 3.

Step 3: Sequentially minimize FF(x; x_k^*, P) with initial points x_k^i = x_k^* + δ d_i for i = 1, 2, . . . , 2n, where δ > 0 is small enough that x_k^i is sufficiently near x_k^*. If a minimizer x̄ ∈ X of FF(x; x_k^*, P) is found for some i, x̃ is the corresponding minimizer of f(x) obtained with initial point x̄, and f(x̃) < f(x_k^*), then set x_{k+1} = x̃, k = k + 1, P = P/cs and bbcs = 0, and go back to Step 2; otherwise, go to Step 4.

Step 4: If for each x_k^i = x_k^* + δ d_i, i = 1, 2, . . . , 2n, the minimization sequence of FF(x; x_k^*, P) goes out of X, then output x_k^* as a global minimizer of the objective function. Otherwise, for each minimization sequence in X of FF(x; x_k^*, P) with initial point x_k^i = x_k^* + δ d_i, a minimizer x̄ of FF(x; x_k^*, P) is obtained, and then a minimizer x̃ of f(x) is obtained with initial point x̄. If all these x̃'s are worse than x_k^*, set P = P + sb, cs = cs + 1, bbcs = bbcs + 1 and go back to Step 2.

Before the experiments, some explanations of the above filled function algorithm are necessary.

1° For the minimization of f(x) and FF(x; x_k^*, P), a local optimization method must be selected first. In the proposed algorithm, the quasi-Newton (BFGS) method is employed.
2° According to Theorem 3, sb is selected to prevent losing the global minimizer of the objective function.
3° In the algorithm, δ is taken small enough to guarantee that ||∇FF(x; x_k^*, P)|| is greater than a threshold. In this paper we take threshold = 10^{−2}.

4.2. Numerical experiments

In this section, the proposed algorithm is tested on some benchmark problems taken from the literature.

(a) Three-hump back camel function

min f(x) = 2x_1² − 1.05x_1⁴ + (1/6)x_1⁶ − x_1x_2 + x_2²,
s.t. −3 ≤ x_1 ≤ 3, −3 ≤ x_2 ≤ 3.

The global minimum solution is x* = (0, 0)^T and f(x*) = 0.

(b) Six-hump back camel function

min f(x) = 4x_1² − 2.1x_1⁴ + (1/3)x_1⁶ − x_1x_2 − 4x_2² + 4x_2⁴,
s.t. −3 ≤ x_1 ≤ 3, −3 ≤ x_2 ≤ 3.

The global minimum solutions are x* = (0.0898, 0.7127)^T and x* = (−0.0898, −0.7127)^T, and f(x*) = −1.0316.

(c) Goldstein–Price function

min f(x) = g(x)h(x),
s.t. −3 ≤ x_1 ≤ 3, −3 ≤ x_2 ≤ 3,

where g(x) = 1 + (x_1 + x_2 + 1)²(19 − 14x_1 + 3x_1² − 14x_2 + 6x_1x_2 + 3x_2²) and h(x) = 30 + (2x_1 − 3x_2)²(18 − 32x_1 + 12x_1² + 48x_2 − 36x_1x_2 + 27x_2²). The global minimum solution is x* = (0, −1)^T and f(x*) = 3.

(d) Treccani function

min f(x) = x_1⁴ + 4x_1³ + 4x_1² + x_2²,
s.t. −3 ≤ x_1 ≤ 3, −3 ≤ x_2 ≤ 3.

The global minimum solutions are x* = (0, 0)^T and x* = (−2, 0)^T, and f(x*) = 0.

(e) Rosenbrock function

min f(x) = 100(x_1² − x_2)² + (x_1 − 1)²,
s.t. −3 ≤ x_1 ≤ 3, −3 ≤ x_2 ≤ 3.

The global minimum solution is x* = (1, 1)^T and f(x*) = 0.

(f) Shubert function

min f(x) = (∑_{i=1}^{5} i cos[(i+1)x_1 + i]) (∑_{i=1}^{5} i cos[(i+1)x_2 + i]),
s.t. 0 ≤ x_1 ≤ 10, 0 ≤ x_2 ≤ 10.
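As a quick sanity check on (f), the function can be coded directly (an illustrative sketch, not part of the paper). Since |cos| ≤ 1 and ∑_{i=1}^{5} i = 15, each factor lies in [−15, 15], so f is bounded between −225 and 225; a coarse grid scan of the box already gets close to the global minimum value −186.7309.

```python
import numpy as np

def shubert(x1, x2):
    """Shubert function (f): the product of two cosine sums."""
    i = np.arange(1, 6)
    s1 = np.sum(i * np.cos((i + 1) * x1 + i))
    s2 = np.sum(i * np.cos((i + 1) * x2 + i))
    return s1 * s2

# Grid scan of [0, 10]^2 with step 0.05; the basins are narrow but this
# resolution is fine enough to land near a global minimizer.
grid = np.linspace(0.0, 10.0, 201)
best = min(shubert(a, b) for a in grid for b in grid)
assert -225.0 <= best < -170.0  # close to the global minimum value -186.7309
```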



This function has 760 minimizers in total. The global minimum value is f(x*) = −186.7309.

(g) Hartman function

min f(x) = −∑_{i=1}^{4} c_i exp[−∑_{j=1}^{n} a_{ij}(x_j − p_{ij})²],
s.t. 0 ≤ x_j ≤ 1, j = 1, . . . , n,

where c_i is the ith element of the vector C, and a_{ij} and p_{ij} are the elements in the ith row and jth column of the matrices A_n and P_n, respectively.

C = (1.0, 1.2, 3.0, 3.2)^T. For n = 3,

A_3 = ( 3     10  30
        0.1   10  35
        3     10  30
        0.1   10  35 ),

P_3 = ( 0.3689   0.1170  0.2673
        0.4699   0.4387  0.7470
        0.1091   0.8732  0.5547
        0.03815  0.5743  0.8828 ).

The best known global minimizer is (0.1148, 0.5557, 0.8526)^T. For n = 6,

A_6 = ( 10    3    17    3.5   1.7   8
        0.05  10   17    0.1   8     14
        3     3.5  1.7   10    17    8
        17    8    0.05  10    0.1   14 ),

P_6 = ( 0.1312  0.1696  0.5569  0.0124  0.8283  0.5886
        0.2329  0.4135  0.8307  0.3736  0.1004  0.9991
        0.2348  0.1451  0.3522  0.2883  0.3047  0.6650
        0.4047  0.8828  0.8732  0.5743  0.1091  0.0381 ).

The best known global minimizer is (0.2016, 0.1501, 0.4769, 0.2753, 0.3117, 0.6573)^T.

(h) Shekel function

min f(x) = −∑_{i=1}^{m} 1 / (||x − s_i||² + c_i),
s.t. 0 ≤ x_i ≤ 10, i = 1, 2, . . . , 4,

where c_i is the ith element of the vector C and s_i is the ith row of the matrix S,

C = (0.1, 0.2, 0.2, 0.4, 0.4, 0.6, 0.3, 0.7, 0.5, 0.5)^T

and

S = ( 4  4    4  4
      1  1    1  1
      8  8    8  8
      6  6    6  6
      3  7    3  7
      2  9    2  9
      5  5    3  3
      8  1    8  1
      6  2    6  2
      7  3.6  7  3.6 ).

For m = 5, 7 and 10, the best known global minimum solution is (4, 4, 4, 4)^T, and the corresponding minimum values are −10.1523, −10.4028 and −10.5363, respectively.

(i) Generalized Rosenbrock function

min f(x) = ∑_{i=1}^{n−1} [100(x_{i+1} − x_i²)² + (1 − x_i)²],
s.t. −3 ≤ x_i ≤ 3, i = 1, 2, . . . , n.
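Putting the pieces together, the Step 1-Step 4 loop of Section 4.1 can be sketched as follows. This is an illustrative sketch only: it assumes SciPy's BFGS as the local minimizer; the improvement tolerance, the iteration caps and the stopping rule on P are ad hoc additions, and the paper's cs/bbcs bookkeeping is omitted.

```python
import numpy as np
from scipy.optimize import minimize

def filled_function_search(f, bounds, x0, P=1.0, sb=0.5, delta=1e-2, max_rounds=50):
    """Sketch of the Section 4.1 loop: locally minimize f, then minimize the
    filled function (10) from 2n perturbed points to try to escape the basin."""
    lo, hi = bounds
    n = len(x0)
    res = minimize(f, x0, method="BFGS")
    xk, fk = res.x, res.fun
    # d_i = +/- unit vectors, i = 1, ..., 2n (Step 0)
    dirs = [s * np.eye(n)[i] for i in range(n) for s in (1.0, -1.0)]
    for _ in range(max_rounds):
        # Step 2: filled function at the current minimizer x_k^*
        FF = lambda x, xs=xk, fs=fk: -(f(x) - fs + P) * np.dot(x - xs, x - xs)
        improved = False
        for d in dirs:
            xbar = minimize(FF, xk + delta * d, method="BFGS",
                            options={"maxiter": 200}).x
            if (not np.all(np.isfinite(xbar))
                    or np.any(xbar < lo) or np.any(xbar > hi)):
                continue  # Step 4: this minimization sequence left the box X
            res2 = minimize(f, xbar, method="BFGS")
            if res2.fun < fk - 1e-10:
                xk, fk = res2.x, res2.fun  # Step 3: a better minimizer was found
                improved = True
                break
        if not improved:
            P += sb  # Step 4: parameter adjustment
            if P > 100.0:  # ad hoc stopping rule (not in the paper)
                break
    return xk, fk

# Six-hump camel function (problem (b)); its global minimum value is about -1.0316.
f = lambda x: 4*x[0]**2 - 2.1*x[0]**4 + x[0]**6/3 - x[0]*x[1] - 4*x[1]**2 + 4*x[1]**4
x_best, f_best = filled_function_search(
    f, (np.array([-3.0, -3.0]), np.array([3.0, 3.0])), np.array([2.0, 2.0]))
```

Depending on the P schedule, the sketch may still terminate at a non-global minimizer; the paper's parameter bounds (Theorems 2 and 3) are what make the escape step reliable.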


The global minimum solution is (1, . . . , 1)^T (n components). We take n = 10, 20, 30 in the experiments, respectively.

The numerical results of applying the proposed filled function method to the above test functions are presented below. Some symbols used in Tables 1 and 2 are as follows. No: the label of the problem; n: the dimension of the problem; Iter: the number of parameter adjustments needed to find the global minimizer, namely Iter = cs − bbcs; test(i): the ith initial point (to verify the efficiency of the proposed algorithm, we randomly take four initial points for every test problem, as in Table 1); x_1: an initial point x_1 ∈ X, produced randomly over the intervals specified; x*: the optimum solution obtained by the proposed method.

For problem (g) and n = 6, the four initial points are taken respectively as

cg1 = (0.8909, 0.9593, 0.5472, 0.1368, 0.1493, 0.2575)^T,
cg2 = (0.8407, 0.2543, 0.8143, 0.2435, 0.9293, 0.3500)^T,
cg3 = (0.1966, 0.2511, 0.6160, 0.4733, 0.3517, 0.8308)^T,
cg4 = (0.1966, 0.2511, 0.6160, 0.4733, 0.3517, 0.8308)^T.

For problem (h) and m = 5, the four initial points are taken respectively as

ch51 = (3.8045, 5.6782, 0.7585, 0.5395)^T,
ch52 = (5.3080, 7.7919, 9.3401, 1.2991)^T,
ch53 = (5.6882, 4.6939, 0.1190, 3.3712)^T,
ch54 = (1.6218, 7.9428, 3.1122, 5.2853)^T.

For problem (h) and m = 7, the four initial points are taken respectively as

Table 1. The selection of initial points.

No  n        Test (1)                    Test (2)                    Test (3)                    Test (4)
a   2        (1.8883, 2.4384)^T          (2.2381, 2.4803)^T          (0.7924, 2.4184)^T          (1.3290, 1.2813)^T
b   2        (2.7450, 1.7893)^T          (2.0543, 2.8236)^T          (2.7430, 0.0877)^T          (1.8017, 2.1478)^T
c   2        (0.4694, 2.4944)^T          (1.7532, 2.7570)^T          (0.9344, 2.7857)^T          (2.0948, 2.6040)^T
d   2        (1.0724, 1.5464)^T          (1.4588, 0.6466)^T          (0.9329, 1.4944)^T          (1.2363, 2.8090)^T
e   2        (1.3385, 2.7230)^T          (2.4172, 1.9407)^T          (1.1690, 1.0974)^T          (2.7013, 2.7933)^T
f   2        (5.3103, 5.9040)^T          (6.2625, 0.2047)^T          (1.0883, 2.9263)^T          (4.4795, 3.5941)^T
g   3        (0.6551, 0.1626, 0.1190)^T  (0.4984, 0.9597, 0.3404)^T  (0.5853, 0.2238, 0.7513)^T  (0.2551, 0.5060, 0.6991)^T
g   6        cg1                         cg2                         cg3                         cg4
h   4, m=5   ch51                        ch52                        ch53                        ch54
h   4, m=7   ch71                        ch72                        ch73                        ch74
h   4, m=10  ch101                       ch102                       ch103                       ch104
i   10       ci101                       ci102                       ci103                       ci104
i   20       ci201                       ci202                       ci203                       ci204
i   30       ci301                       ci302                       ci303                       ci304

Table 2. The results of experiments.

No  n        sb    x_1       x*                               Iter
a   2        0.5   Test (1)  1.0e−007 * (0.0279, 0.5301)^T    5
             0.3   Test (2)  1.0e−007 * (0.0093, 0.1309)^T    5
             1     Test (3)  1.0e−008 * (0.5375, 0.5591)^T    3
             0.8   Test (4)  1.0e−007 * (0.0279, 0.5301)^T    4
b   2        0.5   Test (1)  (0.0898, 0.7127)^T               9
             0.3   Test (2)  (0.0898, 0.7127)^T               12
             1     Test (3)  (0.0898, 0.7127)^T               8
             0.8   Test (4)  (0.0898, 0.7127)^T               8
c   2        1     Test (1)  (0.0000, −1.0000)^T              5
             0.5   Test (2)  (0.0000, −1.0000)^T              5
             0.5   Test (3)  (0.0000, −1.0000)^T              40
             0.3   Test (4)  (0.0000, −1.0000)^T              29
d   2        0.3   Test (1)  1.0e−007 * (0.0754, 0.1631)^T    89
             0.1   Test (2)  1.0e−005 * (0.1733, 0.0355)^T    61
             0.5   Test (3)  1.0e−005 * (0.1596, 0.0009)^T    121
             0.5   Test (4)  1.0e−005 * (0.0383, 0.7667)^T    121
e   2        0.8   Test (1)  (1.0000, 1.0000)^T               11
             1     Test (2)  (1.0000, 0.9999)^T               8
             1     Test (3)  (1.0000, 1.0000)^T               8
             1     Test (4)  (1.0000, 1.0000)^T               8
f   2        1     Test (1)  (4.8581, 5.4829)^T               18
             0.5   Test (2)  (7.7083, 0.8003)^T               16
             0.3   Test (3)  (1.4251, 5.4829)^T               12
             0.2   Test (4)  (7.7083, 5.4829)^T               18
g   3        0.5   Test (1)  (0.1146, 0.5556, 0.8525)^T       3
             0.8   Test (2)  (0.1146, 0.5556, 0.8525)^T       3
             0.2   Test (3)  (0.1146, 0.5556, 0.8525)^T       5
             1     Test (4)  (0.1146, 0.5556, 0.8525)^T       2
g   6        1     Test (1)  x*(g1)                           2
             0.5   Test (2)  x*(g2)                           3
             0.8   Test (3)  x*(g3)                           2
             1     Test (4)  x*(g4)                           2
h   4, m=5   1     Test (1)  x*(h51)                          29
             0.8   Test (2)  x*(h52)                          38
             0.5   Test (3)  x*(h53)                          20
             0.5   Test (4)  x*(h54)                          28
h   4, m=7   1     Test (1)  x*(h71)                          20
             1     Test (2)  x*(h72)                          25
             1     Test (3)  x*(h73)                          17
             1     Test (4)  x*(h74)                          13
h   4, m=10  1     Test (1)  x*(h101)                         19
             1     Test (2)  x*(h102)                         17
             1     Test (3)  x*(h103)                         13
             1     Test (4)  x*(h104)                         9
i   10       0.5   Test (1)  x*(i101)                         16
             1     Test (2)  x*(i102)                         10
             1     Test (3)  x*(i103)                         12
             1     Test (4)  x*(i104)                         9
i   20       1     Test (1)  x*(i201)                         19
             1     Test (2)  x*(i202)                         19
             1     Test (3)  x*(i203)                         14
             1     Test (4)  x*(i204)                         23
i   30       1     Test (1)  x*(i301)                         26
             1     Test (2)  x*(i302)                         18
             1     Test (3)  x*(i303)                         29
             1     Test (4)  x*(i304)                         21


ch71 = (1.6565, 6.0918, 2.6297, 6.5408)^T,
ch72 = (6.8912, 7.4815, 4.5054, 0.8382)^T,
ch73 = (2.2898, 9.1334, 1.5238, 8.2582)^T,
ch74 = (5.3834, 9.9613, 0.7818, 4.4268)^T.

For problem (h) and m = 10, the four initial points are taken respectively as

ch101 = (1.0665, 9.0169, 0.0463, 7.7491)^T,
ch102 = (8.1730, 8.6869, 0.8444, 3.9976)^T,
ch103 = (2.5987, 8.0007, 4.3141, 9.1065)^T,
ch104 = (1.8185, 2.6308, 1.4554, 1.3607)^T.

For problem (i) and n = 10, the four initial points are taken respectively as

ci101 = (0.2968, 2.4971, 1.6261, 2.4800, 2.0857, 1.9549, 0.2301, 2.9768, 2.5309, 0.3439)^T,
ci102 = (2.3601, 2.7714, 2.9722, 1.6495, 1.9038, 2.2122, 2.4934, 0.6013, 1.4408, 1.8004)^T,
ci103 = (0.4115, 2.4639, 1.9089, 1.4172, 2.1268, 2.1836, 2.2158, 0.4782, 0.2992, 2.1303)^T,
ci104 = (2.1182, 0.7323, 0.8943, 0.0795, 0.5892, 2.5442, 1.5605, 2.2601, 1.8966, 1.5603)^T.

For problem (i) and n = 20, the four initial points are taken respectively as

ci201 = (0.4964, 0.7021, 2.4163, 2.6687, 0.0548, 0.0645, 0.9737, 2.4003, 0.7845, 2.3328, 1.6851, 0.6616, 1.5499, 0.5765, 2.4213, 2.2082, 2.6523, 2.7368, 0.4513, 2.6413)^T,
ci202 = (1.5913, 0.8810, 1.9272, 2.9076, 2.7419, 1.9861, 0.8947, 1.3903, 0.8865, 0.2945, 0.2821, 1.2221, 1.4682, 1.8663, 1.1207, 1.8989, 0.7891, 0.7537, 1.6814, 2.5132)^T,
ci203 = (2.5763, 1.6543, 0.0793, 0.3848, 0.3193, 1.1619, 0.0511, 0.0646, 1.9058, 1.7690, 0.8659, 0.7283, 1.8695, 0.1970, 0.8956, 2.6340, 2.2557, 0.3009, 0.7349, 0.5223)^T,
ci204 = (1.7535, 1.1925, 0.1745, 1.6171, 2.0659, 1.8314, 1.6445, 1.9758, 1.6340, 0.3858, 1.1334, 2.5403, 0.4188, 1.8911, 2.4293, 2.8785, 0.3668, 2.3333, 1.4516, 0.5477)^T.

For problem (i) and n = 30, the four initial points are taken respectively as

ci301 = (2.7785, 0.2808, 0.1268, 1.6104, 0.0666, 0.7444, 1.0748, 0.6269, 0.7954, 2.9279, 2.7736, 2.3110, 2.4797, 1.7771, 2.4077, 1.4288, 0.9879, 1.0784, 2.1807, 1.3274, 2.3594, 0.9225, 0.0350, 1.6743, 1.2902, 2.4223, 2.3455, 0.9950, 1.1925, 1.8131)^T,
ci302 = (2.8168, 1.4644, 0.0001, 0.1205, 2.4283, 0.6592, 0.7060, 2.1567, 1.8329, 0.4603, 1.9025, 1.5604, 2.3191, 2.8280, 0.0606, 1.9924, 2.8721, 1.2762, 0.0028, 0.1735, 2.6423, 1.0918, 2.7454, 2.5713, 0.1299, 2.4196, 1.9089, 1.9053, 1.3346, 2.1008)^T,
ci303 = (0.9576, 0.1116, 2.8378, 0.8939, 1.8020, 0.2772, 0.4057, 0.9519, 2.4992, 2.2010, 1.9597, 0.6544, 1.9883, 1.8202, 2.6372, 0.6045, 0.1613, 0.4992, 0.9412, 0.7678, 1.2481, 0.4101, 2.9071, 2.9044, 1.9970, 2.3627, 0.7655, 1.8113, 0.0619, 0.9630)^T,
ci304 = (2.7098, 2.5220, 2.6839, 1.4271, 1.3853, 0.4630, 0.2872, 2.6564, 0.4935, 2.8983, 1.1913, 1.2066, 0.9980, 0.2348, 1.1886, 0.9992, 1.9312, 2.2319, 2.9945, 1.9733, 2.8044, 0.3672, 2.2912, 1.0151, 1.8574, 0.7865, 0.2356, 2.8898, 2.0616, 2.1331)^T.

Taking the above initial points, we list the experimental results of the proposed algorithm on the above test problems in Table 2 (the initial P is taken as 1 for all problems). In Table 2, the following notations are used:

x*(g1) = x*(g2) = x*(g3) = x*(g4) = (0.2017, 0.1500, 0.4769, 0.2753, 0.3117, 0.6573)^T,


x*(h51) = x*(h52) = x*(h53) = x*(h54) = (4.0000, 4.0001, 4.0000, 4.0001)^T,
x*(h71) = x*(h72) = x*(h73) = x*(h74) = (4.0006, 4.0007, 3.9995, 3.9996)^T,
x*(h101) = x*(h102) = x*(h103) = x*(h104) = (4.0007, 4.0006, 3.9997, 3.9995)^T,
x*(i101) = x*(i102) = x*(i103) = x*(i104) = (1.0000, . . . , 1.0000)^T (10 components),
x*(i201) = x*(i202) = x*(i203) = x*(i204) = (1.0000, . . . , 1.0000)^T (20 components),
x*(i301) = x*(i302) = x*(i303) = x*(i304) = (1.0000, . . . , 1.0000)^T (30 components).

It can be seen from Table 2 that for most problems and most cases only a relatively small number of parameter adjustments is needed to find a global optimal solution. This means that the amount of computation needed is relatively small, which indicates that the proposed algorithm is efficient and effective. Also note that the selection of sb is important for the experimental results, since it determines the parameter P. To avoid losing the global minimizer of the objective function, a sufficiently small sb should be selected; however, such a selection increases the number of parameter adjustments. A reasonable and practical approach is to choose the value of sb by trial and error so that the parameter P lies between the upper and lower bounds given in Theorems 2 and 3.

5. Conclusions

The filled function method is an efficient approach to global optimization. The existing filled functions have some drawbacks: some are non-differentiable, some contain more than one parameter, and some contain ill-conditioned terms. These drawbacks may make it difficult to find the global optimal solution.
To overcome these shortcomings, a continuously differentiable filled function with only one parameter is proposed in this paper. The effectiveness of the new filled function method is demonstrated by numerical experiments on some test optimization problems.

References

[1] P. Basso, Iterative methods for the localization of the global maximum, SIAM J. Numer. Anal. 19 (4) (1982) 781–792.
[2] R. Mladineo, An algorithm for finding the global maximum of a multimodal, multivariate function, Math. Program. 34 (2) (1986) 188–200.
[3] R. Ge, Y. Qin, A class of filled functions for finding global minimizers of a function of several variables, J. Optim. Theory Appl. 54 (2) (1987) 241–252.
[4] R. Ge, A filled function method for finding a global minimizer of a function of several variables, Math. Program. 46 (2) (1990) 191–204.
[5] R. Ge, The theory of filled function method for finding global minimizers of nonlinearly constrained minimization problems, J. Comput. Math. 5 (1987) 1–9.
[6] A. Levy, A. Montalvo, The tunneling algorithm for the global minimization of functions, SIAM J. Sci. Stat. Comput. 6 (1) (1985) 15–29.
[7] Xian Liu, Wilsun Xu, A new filled function applied to global optimization, Comput. Oper. Res. 31 (2004) 61–80.
[8] Xian Liu, The barrier attribute of filled functions, Appl. Math. Comput. 149 (2004) 641–649.
[9] Xiaoli Wang, Guobiao Zhou, A new filled function for unconstrained global optimization, Appl. Math. Comput. 174 (2006) 419–429.
[10] Yong-Jun Wang, Jiang-She Zhang, A new constructing auxiliary function method for global optimization, Math. Comput. Model. 47 (2008) 1396–1410.
[11] Weixiang Wang, Youlin Shang, Liansheng Zhang, A filled function method with one parameter for box constrained global optimization, Appl. Math. Comput. 194 (2007) 54–66.
[12] L.C.W. Dixon, J. Gomulka, S.E. Herson, Reflections on the global optimization problem, in: Optimization in Action, Academic Press, New York, 1976, pp. 198–435.