A global optimization algorithm using parametric linearization relaxation


Applied Mathematics and Computation 186 (2007) 763–771 www.elsevier.com/locate/amc

Shao-Jian Qu *, Ke-Cun Zhang, Ying Ji
Faculty of Science, Xi'an Jiaotong University, Xi'an 710049, PR China

Abstract

In this paper a global optimization algorithm based on a parametric linearizing method is proposed for generalized quadratic programming (GQP), i.e., quadratic programming with nonconvex quadratic constraints. By means of this linearizing method, the initial nonconvex problem GQP is reduced to a sequence of linear programming problems through the successive refinement of a linear relaxation of the feasible region and of the objective function. The proposed algorithm converges to the global minimum of GQP through the solutions of this series of linear programming problems. Test results indicate that the proposed algorithm is robust and can successfully find the global minimum of GQP on a microcomputer. © 2006 Elsevier Inc. All rights reserved.

Keywords: GQP; Linearizing method; Global optimization

1. Introduction

We shall be concerned with the well-known generalized quadratic problem

$$\mathrm{GQP}(X^0):\quad \begin{cases} \min\ f^0(x) \\ \text{s.t.}\ f^j(x)\le b_j,\quad j=1,\dots,m, \\ \quad\ x\in X^0=\{x:-\infty<\underline{x}_i^0\le x_i\le \bar{x}_i^0<+\infty,\ i=1,\dots,n\}, \end{cases}$$

where $f^j(x)=c_j^{\mathrm T}x+x^{\mathrm T}Q_jx$, $j=0,1,\dots,m$. Without loss of generality, the matrices $Q_j=(q_{ik}^j)_{n\times n}$, $j=0,1,\dots,m$, can be assumed symmetric. Quadratically constrained problems are worthy of study both because they frequently appear in applications [1,2] and because many other nonlinear problems can be transformed into this form [3,4]; solving problems of this class is known to be NP-hard. When each function $f^j(x)$, $j=1,\dots,m$, is convex, i.e., GQP is a quadratic program with convex constraints, efficient algorithms are available for solving GQP [5,6]. In the remainder of this paper we assume that at least one function $f^j(x)$, $j=1,\dots,m$, is nonconvex.

* Corresponding author. E-mail address: [email protected] (S.-J. Qu).

0096-3003/$ - see front matter © 2006 Elsevier Inc. All rights reserved. doi:10.1016/j.amc.2006.08.028


While solving problems of this sort is NP-hard, many practical applications possess a relatively favorable structure that can be exploited. In particular, the matrices $Q_j$, $j=0,1,\dots,m$, are often relatively sparse [1,3,4]. Computing provably global solutions to problems of this sort is difficult. The LagInt algorithm, a branch-and-bound method which exploits a convex underestimation of the Lagrangian dual to convert the original problem into a sequence of convex quadratic programs over subrectangles, has proven well suited to GQP [7]. Voorhis and Al-khayyal, combining an outer approximation algorithm with branch and bound, also proposed an efficient algorithm for GQP [8]. Provably convergent outer approximation algorithms can likewise be applied to GQP; however, thus far these methods have been shown to be practical only for very small problem sizes [9].

In this paper we present a new global optimization algorithm for GQP that solves a sequence of linear programming problems over partitioned subsets. The main features of our method are: (1) a convenient parametric linearization technique systematically converts a GQP problem into a series of linear programming problems whose solutions can be brought as close as desired to the global optimum of the original GQP problem by a successive refinement process; (2) compared with the linearization method in [11], the generated relaxed linear programming problems are embedded within a branch-and-bound algorithm without introducing new variables and constraints; (3) the proposed method can solve not only the general GQP problem but also bilinear programming [10] and quadratic programs with convex quadratic constraints [5,6]; (4) the proposed linear programming relaxation is computationally more convenient than the convex programs of [7], so any effective linear programming algorithm can be used. Finally, numerical experiments are presented which show that the proposed method stably solves all of the test problems, finding globally optimal solutions within a prespecified tolerance.

The remainder of this paper is organized as follows. The next section states the linearizing method and the linear relaxation program. Section 3 describes the global algorithm and its convergence. Section 4 gives computational results. Finally, the conclusion is presented.

2. Parametric linearizing method

The principal construction in the development of a solution procedure for GQP is that of lower bounds for this problem, as well as for its partitioned subproblems. A lower bound on the solution of GQP and its partitioned subproblems can be obtained by solving a linear programming relaxation of GQP.

Let $S=\{x\in\mathbb{R}:-\infty<l\le x\le u<+\infty\}$. For any $x\in S$, define

$$x(r)=l+r(u-l),\qquad(1)$$

where $r\in\{0,1\}$, so that $x(0)=l$ and $x(1)=u$.

Theorem 1. For any $x\in S$, assume that the vertices of $S$ are $x(r)$, in the form (1). Let $q(x)=x^2$ with gradient $q'(x)=\frac{\partial q(x)}{\partial x}=2x$. Then there exist vectors $\underline{z},\bar{z}\in\mathbb{R}$ such that the linear functions

$$q^l(x;S,r)=z(r)x+q(x(r))-z(r)x(r),\qquad(2)$$
$$q^u(x;S,r)=z(1-r)x+q(x(r))-z(1-r)x(r)\qquad(3)$$

satisfy, for any $x\in S$,

$$q^l(x;S,r)\le q(x)\le q^u(x;S,r)\qquad(4)$$

and moreover

$$q^l(x(r);S,r)=q^u(x(r);S,r)=q(x(r)),\qquad(5)$$

where $z(r)$, in the form (1), are vertices of the interval $Z=[\underline{z},\bar{z}]$, and the notation $q^l(x;S,r)$, $q^u(x;S,r)$ indicates that $q^l,q^u$ have the argument $x$ and depend on the two parameters $S$ and $r$.


Proof. Since $S$ is a bounded and closed set and the gradient $q'(x)=2x$ is continuous over $S$, there exist vectors $\underline{z},\bar{z}$ satisfying

$$\underline{z}\le q'(x)\le\bar{z}\quad\forall x\in S,\qquad(6)$$

where $\underline{z}=2l$ and $\bar{z}=2u$. By the mean value theorem we have, for all $x\in S$,

$$q(x)=q(x(r))+q'(\xi)(x-x(r)),$$

where $\xi=ux+(1-u)x(r)$ for some $u\in[0,1]$. Then (6) implies that for $r=0$ the inequalities

$$\frac{\partial q(\xi)}{\partial\xi}\ge z(r),\qquad x-x(r)\ge0\quad\forall x\in S$$

hold, and for $r=1$ the inequalities

$$\frac{\partial q(\xi)}{\partial\xi}\le z(r),\qquad x-x(r)\le0\quad\forall x\in S$$

are valid. Therefore, by the mean value theorem we get

$$q(x)=q(x(r))+\frac{\partial q(\xi)}{\partial\xi}(x-x(r))\ge q(x(r))+z(r)(x-x(r)).$$

Hence $q^l(x;S,r)\le q(x)$ for all $x\in S$ and $q^l(x(r);S,r)=q(x(r))$. Analogously, for $r=0$ the inequalities

$$\frac{\partial q(\xi)}{\partial\xi}\le z(1-r),\qquad x-x(r)\ge0\quad\forall x\in S$$

hold, and for $r=1$ the inequalities

$$\frac{\partial q(\xi)}{\partial\xi}\ge z(1-r),\qquad x-x(r)\le0\quad\forall x\in S$$

are valid. By the mean value theorem it follows as above that

$$q(x)\le z(1-r)x+q(x(r))-z(1-r)x(r).$$

Hence $q^u(x;S,r)\ge q(x)$ for all $x\in S$ and $q^u(x(r);S,r)=q(x(r))$. □
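To make the construction concrete, here is a minimal C++ sketch (ours, not from the paper) that evaluates $q^l$ and $q^u$ from (2)-(3) for $q(x)=x^2$ on an interval $[l,u]$ and spot-checks the sandwich inequality (4) and the vertex equality (5) on a grid:

#include <cassert>
#include <cmath>
#include <cstdio>

// Vertices of S = [l, u] as in (1): x(0) = l, x(1) = u.
double vertex(double l, double u, int r) { return l + r * (u - l); }

// z(r): vertices of Z = [2l, 2u], the range of q'(x) = 2x over S.
double z(double l, double u, int r) { return 2.0 * vertex(l, u, r); }

// Linear under/overestimators (2) and (3) of q(x) = x^2 on [l, u].
double ql(double x, double l, double u, int r) {
    double xr = vertex(l, u, r);
    return z(l, u, r) * x + xr * xr - z(l, u, r) * xr;
}
double qu(double x, double l, double u, int r) {
    double xr = vertex(l, u, r);
    return z(l, u, 1 - r) * x + xr * xr - z(l, u, 1 - r) * xr;
}

int main() {
    const double l = -1.5, u = 2.0;
    for (int r = 0; r <= 1; ++r) {
        for (int i = 0; i <= 100; ++i) {
            double x = l + (u - l) * i / 100.0;
            assert(ql(x, l, u, r) <= x * x + 1e-12);  // lower half of (4)
            assert(x * x <= qu(x, l, u, r) + 1e-12);  // upper half of (4)
        }
        double xr = vertex(l, u, r);                  // equality (5) at x(r)
        assert(std::fabs(ql(xr, l, u, r) - xr * xr) < 1e-12);
        assert(std::fabs(qu(xr, l, u, r) - xr * xr) < 1e-12);
    }
    std::printf("Theorem 1 bounds hold on [%g, %g].\n", l, u);
    return 0;
}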

Theorem 1 gives the possibility of constructing linear lower and upper bounding functions. To generate the linear programming relaxation, the proposed strategy is to underestimate every nonlinear function $f^j(x)$ $(j=0,1,\dots,m)$ by a linear function. From the above theorem we can give the linear relaxation of every function $f^j(x)$. Since $2x_ix_k=(x_i+x_k)^2-(x_i^2+x_k^2)$ $(i,k=1,\dots,n)$, the function $f^j(x)$ can be written in the summation form

$$f^j(x)=\sum_{i=1}^n c_i^jx_i+\sum_{i=1}^n\sum_{k=1}^n q_{ik}^jx_ix_k=\sum_{i=1}^n(c_i^jx_i+q_{ii}^jx_i^2)+\sum_{i=1,i\ne k}^n\sum_{k=1}^n q_{ik}^jx_ix_k$$
$$=\sum_{i=1}^n(c_i^jx_i+q_{ii}^jx_i^2)+\frac12\sum_{i=1,i\ne k}^n\sum_{k=1}^n q_{ik}^j\left[(x_i+x_k)^2-(x_i^2+x_k^2)\right].\qquad(7)$$

For convenience, for any $X=[\underline{x},\bar{x}]\subseteq X^0$ and any $x=(x_i)_{1\times n}\in X$, the following notation and functions are introduced:


$$X_{ik}=x_i+x_k,\quad \underline{X}_{ik}=\underline{x}_i+\underline{x}_k,\quad \bar{X}_{ik}=\bar{x}_i+\bar{x}_k,\quad i\ne k,$$
$$\varphi_{ik}^l(x;X,r)=z(r_{n+i+k})X_{ik}+X_{ik}^2(r_{n+i+k})-z(r_{n+i+k})X_{ik}(r_{n+i+k}),\quad i\ne k,$$
$$\varphi_{ik}^u(x;X,r)=z(1-r_{n+i+k})X_{ik}+X_{ik}^2(r_{n+i+k})-z(1-r_{n+i+k})X_{ik}(r_{n+i+k}),\quad i\ne k,$$
$$\varphi_i^l(x;X,r)=z(r_i)x_i+x_i^2(r_i)-z(r_i)x_i(r_i),$$
$$\varphi_i^u(x;X,r)=z(1-r_i)x_i+x_i^2(r_i)-z(1-r_i)x_i(r_i),$$

where $i,k=1,\dots,n$, $i\ne k$, $r\in\{0,1\}^{n^2}$; $x_i(r_i)$ and $X_{ik}(r_{n+i+k})$ are vertices of $[\underline{x}_i,\bar{x}_i]$ and $[\underline{X}_{ik},\bar{X}_{ik}]$, respectively, defined as in (1); and $z(r_i)$, $z(1-r_i)$, $z(r_{n+i+k})$, $z(1-r_{n+i+k})$ are defined as in Theorem 1 and are vertices of $[2\underline{x}_i,2\bar{x}_i]$ and $[2\underline{X}_{ik},2\bar{X}_{ik}]$, respectively. By Theorem 1, for a fixed vector $r\in\{0,1\}^{n^2}$ we have, for any $x\in X$,

$$\varphi_{ik}^l(x;X,r)\le(x_i+x_k)^2\le\varphi_{ik}^u(x;X,r)\quad\forall i\ne k,\ i,k\in\{1,\dots,n\},$$
$$\varphi_i^l(x;X,r)\le x_i^2\le\varphi_i^u(x;X,r)\quad\forall i.$$

Therefore, for any $j\in\{0,1,\dots,m\}$ we have

$$q_{ii}^jx_i^2\ge\psi_i^j(x;X,r):=\begin{cases}q_{ii}^j\varphi_i^l(x;X,r),&\text{if }q_{ii}^j>0,\\ q_{ii}^j\varphi_i^u(x;X,r),&\text{if }q_{ii}^j<0,\end{cases}$$
$$q_{ik}^j(x_i+x_k)^2\ge\psi_{ik}^j(x;X,r):=\begin{cases}q_{ik}^j\varphi_{ik}^l(x;X,r),&\text{if }q_{ik}^j>0,\\ q_{ik}^j\varphi_{ik}^u(x;X,r),&\text{if }q_{ik}^j<0,\end{cases}$$
$$-q_{ik}^j(x_i^2+x_k^2)\ge\phi_{ik}^j(x;X,r):=\begin{cases}-q_{ik}^j(\varphi_i^l(x;X,r)+\varphi_k^l(x;X,r)),&\text{if }-q_{ik}^j>0,\\ -q_{ik}^j(\varphi_i^u(x;X,r)+\varphi_k^u(x;X,r)),&\text{if }-q_{ik}^j<0.\end{cases}$$

Then, from (7) it follows that for any $x\in X\subseteq X^0$:

$$f^j(x)\ge\underline{f}^j(x;X,r):=\sum_{i=1}^n\left(c_i^jx_i+\psi_i^j(x;X,r)\right)+\frac12\sum_{i=1,i\ne k}^n\sum_{k=1}^n\left(\psi_{ik}^j(x;X,r)+\phi_{ik}^j(x;X,r)\right).\qquad(8)$$
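Each $\varphi$ term above is affine in $x$, so on a fixed box the underestimator (8) can be assembled as a coefficient vector plus a constant. The following C++ sketch performs this assembly under our own naming; for brevity it fixes every component of the parameter vector at 0 (one admissible choice of $r$), whereas the paper keeps $r\in\{0,1\}^{n^2}$ free:

#include <cstdio>
#include <initializer_list>
#include <vector>

struct Affine {              // represents a^T x + c on R^n
    std::vector<double> a;
    double c = 0.0;
    explicit Affine(int n) : a(n, 0.0) {}
};

// Affine lower/upper bounds of s^2 for s in [lo, hi], Theorem 1 with r = 0:
//   lower: 2*lo*s - lo^2,   upper: 2*hi*s + lo^2 - 2*hi*lo
struct Bound { double slope, shift; };
Bound sqLower(double lo, double /*hi*/) { return {2 * lo, -lo * lo}; }
Bound sqUpper(double lo, double hi)     { return {2 * hi, lo * lo - 2 * hi * lo}; }

// Assemble the affine underestimator (8) of
//   f(x) = sum_i c[i] x[i] + sum_{i,k} q[i][k] x[i] x[k]   (q symmetric)
// on the box [xlo, xhi], via the decomposition (7).
Affine underestimate(const std::vector<double>& c,
                     const std::vector<std::vector<double>>& q,
                     const std::vector<double>& xlo,
                     const std::vector<double>& xhi) {
    int n = (int)c.size();
    Affine f(n);
    auto addSq = [&](double coef, double lo, double hi,
                     std::initializer_list<int> idx) {
        // contributes coef * s^2, where s is the sum of x[i] over idx;
        // the branch matches the sign of coef, as in the case analysis above
        Bound b = (coef > 0) ? sqLower(lo, hi) : sqUpper(lo, hi);
        for (int i : idx) f.a[i] += coef * b.slope;
        f.c += coef * b.shift;
    };
    for (int i = 0; i < n; ++i) {
        f.a[i] += c[i];                                 // linear part c_i x_i
        if (q[i][i] != 0)                               // psi_i term
            addSq(q[i][i], xlo[i], xhi[i], {i});
        for (int k = 0; k < n; ++k) {
            if (k == i || q[i][k] == 0) continue;
            double lo = xlo[i] + xlo[k], hi = xhi[i] + xhi[k];
            addSq(0.5 * q[i][k], lo, hi, {i, k});       // psi_ik: q_ik (x_i+x_k)^2 / 2
            addSq(-0.5 * q[i][k], xlo[i], xhi[i], {i}); // phi_ik: -q_ik (x_i^2 ...
            addSq(-0.5 * q[i][k], xlo[k], xhi[k], {k}); //         ... + x_k^2) / 2
        }
    }
    return f;
}

int main() {
    // two-variable demo: f(x) = x1*x2 (off-diagonal entries 0.5), box [0,1]^2
    std::vector<double> c = {0, 0};
    std::vector<std::vector<double>> q = {{0, 0.5}, {0.5, 0}};
    Affine f = underestimate(c, q, {0, 0}, {1, 1});
    std::printf("lower bound: %g*x1 + %g*x2 + %g\n", f.a[0], f.a[1], f.c);
}

For the demo instance $f(x)=x_1x_2$ on $[0,1]^2$ this prints the valid, if loose, bound $-x_1-x_2\le x_1x_2$; Theorem 3 below quantifies how such bounds tighten as the box shrinks.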

Consequently, we construct the corresponding relaxation linear programming (RLP) problem of GQP as follows:

$$\mathrm{RLP}(X):\quad\begin{cases}\min\ \underline{f}^0(x;X,r)\\ \text{s.t.}\ \underline{f}^j(x;X,r)\le b_j,\quad j=1,\dots,m,\\ \quad\ x\in X\subseteq X^0.\end{cases}$$

Theorem 2. Denote the optimal objective function value of a problem (P) by V[P]. Then V[RLP(X)] ≤ V[GQP(X)], i.e., V[RLP(X)] provides a valid lower bound on the optimal value of problem GQP(X).

Proof. From the above discussion, the feasible region of problem GQP(X) is contained in the feasible region of problem RLP(X), and $\underline{f}^0(x;X,r)\le f^0(x)$ for all $x\in X$. Hence V[RLP(X)] provides a valid lower bound on the optimal value of problem GQP(X). □

3. Global optimization algorithm and its convergence

In this section a branch-and-bound algorithm is developed to solve GQP based on the foregoing linear relaxation method. The method partitions the set $X^0$ into subhyperrectangles, each concerned with a node of the branch-and-bound tree, and each node is associated with a relaxation linear subproblem on its subhyperrectangle. Hence, at any stage $k$ of the algorithm, suppose that we have a collection of active nodes denoted by $R_k$, each associated with a hyperrectangle $X\subseteq X^0$. For each such node $X$ we will have computed a lower bound $\beta(X)$ on the optimal value of GQP(X) via the solution of RLP(X), so that the lower bound on the optimal value of GQP over the whole initial box $X^0$ at stage $k$ is $\beta_k=\min\{\beta(X):X\in R_k\}$. Whenever the lower bounding solution of a node subproblem, i.e., the solution of the relaxation linear program RLP, turns out to be feasible for GQP, we update the incumbent upper bound $\alpha$ if necessary. The active node collection $R_k$ then satisfies $\beta(X)<\alpha$ for all $X\in R_k$ at each stage $k$. We next select an active node, partition its hyperrectangle into two subhyperrectangles as described below, and compute the lower bounds for each new node as before. Upon fathoming any nonimproving nodes, we obtain the collection of active nodes for the next stage, and this process is repeated until convergence.

The critical element in guaranteeing convergence to a global minimum is the choice of a suitable partitioning strategy. In this paper we choose a simple and standard bisection rule, which suffices to ensure convergence since it drives all variable intervals to zero. The branching rule is as follows. Consider any node subproblem identified by the rectangle $X=[\underline{x},\bar{x}]\subseteq X^0$. Let

$$t=\arg\max\{\bar{x}_i-\underline{x}_i:i=1,\dots,n\}\qquad(9)$$

and partition $X$ by bisecting the interval $[\underline{x}_t,\bar{x}_t]$ into the subintervals $[\underline{x}_t,\frac{\underline{x}_t+\bar{x}_t}{2}]$ and $[\frac{\underline{x}_t+\bar{x}_t}{2},\bar{x}_t]$.

The basic steps of the proposed global optimization algorithm are summarized below, with a sketch in code following the statement. Let $\beta(X_k)$ denote the optimal objective value of RLP($X_k$) and $x^k=x(X_k)$ an element of the corresponding argmin.

3.1. Algorithm statement

Step 0: Initialization. Set the active node collection $R_0=\{X^0\}$, the feasible set $F:=\emptyset$, an accuracy tolerance $\epsilon>0$, the upper bound $\alpha=\max\{f^0(x):x\in X^0\}$, and the iteration counter $k:=0$. Solve RLP($X^0$), obtaining the optimal objective value $\beta_0:=\beta(X^0)$ and $x^0:=x(X^0)$. If $f^j(x^0)\le b_j$ for all $j\in\{1,\dots,m\}$, set $\alpha=f^0(x^0)$ and $F=\{x^0\}$. If $\alpha\le\beta_0+\epsilon$, stop with $x^0$ as the prescribed solution of GQP($X^0$); otherwise proceed to Step 1.

Step 1: Updating the upper bound. Select the midpoint $x^{\mathrm{mid}}$ of $X_k$; if $x^{\mathrm{mid}}$ is feasible for GQP($X_k$), then $F:=F\cup\{x^{\mathrm{mid}}\}$. Define the upper bound $\alpha:=\min_{x\in F}f^0(x)$. If $F\ne\emptyset$, denote the best known feasible point by $x^{\mathrm{best}}:=\arg\min_{x\in F}f^0(x)$.

Step 2: Branching. Use branching rule (9) to obtain two new subhyperrectangles, and denote the set of new partition rectangles by $\bar{X}_k$. For each $X\in\bar{X}_k$, compute the lower bounds $f_L^j:=\min_{x\in X}\underline{f}^j(x;X,r)$. If $f_L^0>\alpha$ or $f_L^j>b_j$ for some $j\in\{1,\dots,m\}$, the corresponding subrectangle $X$ is eliminated, i.e., $\bar{X}_k:=\bar{X}_k\setminus X$, and we skip to the next element of $\bar{X}_k$.

Step 3: Bounding. If $\bar{X}_k\ne\emptyset$, solve RLP(X) to obtain $\beta(X)$ and $x(X)$ for each $X\in\bar{X}_k$. If $\beta(X)>\alpha$, set $\bar{X}_k:=\bar{X}_k\setminus X$; otherwise, if $x(X)$ is feasible for GQP, update $F:=F\cup\{x(X)\}$, $\alpha:=\min_{x\in F}f^0(x)$ and $x^{\mathrm{best}}:=\arg\min_{x\in F}f^0(x)$. The remaining partition set is now $R_k:=(R_k\setminus X_k)\cup\bar{X}_k$ and the new lower bound is $\beta_k:=\inf_{X\in R_k}\beta(X)$.

Step 4: Convergence check. Set $R_{k+1}=R_k\setminus\{X:\alpha-\beta(X)\le\epsilon,\ X\in R_k\}$. If $R_{k+1}=\emptyset$, stop: $\alpha$ is the optimal value of GQP($X^0$) and $x^{\mathrm{best}}$ is an optimal solution. Otherwise select an active node $X_{k+1}=\arg\min_{X\in R_{k+1}}\beta(X)$, set $x^{k+1}:=x(X_{k+1})$, set $k:=k+1$ and go to Step 1.
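The steps above translate directly into a best-first branch-and-bound loop. The following C++ skeleton is a sketch under our own naming, not the authors' code: solveRLP (which any LP solver can back), feasible and objective are supplied by the caller, and for simplicity the feasibility probe of Step 1 uses the RLP argmin instead of the box midpoint:

#include <algorithm>
#include <functional>
#include <limits>
#include <queue>
#include <vector>

struct Box { std::vector<double> lo, hi; };
struct Node { double bound; Box box; std::vector<double> xmin; };
struct ByBound {                          // order the active set by beta(X)
    bool operator()(const Node& a, const Node& b) const { return a.bound > b.bound; }
};

// solveRLP(X, bound, x) solves RLP(X), reporting beta(X) and x(X), and returns
// false if RLP(X) is infeasible; feasible/objective evaluate the original
// constraints f^j(x) <= b_j and the objective f^0(x). All three are caller-
// supplied placeholders here, not part of the paper.
using SolveRLP  = std::function<bool(const Box&, double&, std::vector<double>&)>;
using Feasible  = std::function<bool(const std::vector<double>&)>;
using Objective = std::function<double(const std::vector<double>&)>;

double branchAndBound(const Box& x0, double eps, const SolveRLP& solveRLP,
                      const Feasible& feasible, const Objective& objective) {
    double alpha = std::numeric_limits<double>::infinity();   // upper bound
    std::priority_queue<Node, std::vector<Node>, ByBound> active;
    double b0; std::vector<double> x;
    if (solveRLP(x0, b0, x)) active.push({b0, x0, x});
    while (!active.empty()) {
        Node node = active.top(); active.pop();               // argmin beta(X)
        if (alpha - node.bound <= eps) break;                 // Step 4: converged
        if (feasible(node.xmin))                              // Step 1-style probe
            alpha = std::min(alpha, objective(node.xmin));
        int t = 0;                                            // rule (9): longest edge
        for (int i = 1; i < (int)node.box.lo.size(); ++i)
            if (node.box.hi[i] - node.box.lo[i] > node.box.hi[t] - node.box.lo[t])
                t = i;
        double mid = 0.5 * (node.box.lo[t] + node.box.hi[t]);
        for (int side = 0; side < 2; ++side) {                // Step 2: bisect
            Box child = node.box;
            (side == 0 ? child.hi[t] : child.lo[t]) = mid;
            double bound; std::vector<double> xc;
            if (solveRLP(child, bound, xc) && bound < alpha - eps)
                active.push({bound, child, xc});              // Step 3: keep or fathom
        }
    }
    return alpha;
}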

3.2. Convergence analysis

In this subsection we establish the global convergence of the above algorithm. If the algorithm does not stop finitely, the branching rule drives all variable intervals to zero. It therefore remains to show that as $|X|\to0$ the linear relaxation RLP(X) approaches GQP(X), from which convergence of the algorithm to a global solution follows. Given $X=[\underline{x},\bar{x}]\subseteq X^0$, fix $r\in\{0,1\}^{n^2}$; for any $x\in X$ and $j\in\{0,1,\dots,m\}$, define

$$q_{ii}^jx_i^2\le\bar{\psi}_i^j(x;X,r):=\begin{cases}q_{ii}^j\varphi_i^u(x;X,r),&\text{if }q_{ii}^j>0,\\ q_{ii}^j\varphi_i^l(x;X,r),&\text{if }q_{ii}^j<0,\end{cases}$$
$$q_{ik}^j(x_i+x_k)^2\le\bar{\psi}_{ik}^j(x;X,r):=\begin{cases}q_{ik}^j\varphi_{ik}^u(x;X,r),&\text{if }q_{ik}^j>0,\\ q_{ik}^j\varphi_{ik}^l(x;X,r),&\text{if }q_{ik}^j<0,\end{cases}$$
$$-q_{ik}^j(x_i^2+x_k^2)\le\bar{\phi}_{ik}^j(x;X,r):=\begin{cases}-q_{ik}^j(\varphi_i^u(x;X,r)+\varphi_k^u(x;X,r)),&\text{if }-q_{ik}^j>0,\\ -q_{ik}^j(\varphi_i^l(x;X,r)+\varphi_k^l(x;X,r)),&\text{if }-q_{ik}^j<0.\end{cases}$$

Then, from (7) it follows that for any $x\in X\subseteq X^0$:

$$f^j(x)\le\bar{f}^j(x;X,r):=\sum_{i=1}^n\left(c_i^jx_i+\bar{\psi}_i^j(x;X,r)\right)+\frac12\sum_{i=1,i\ne k}^n\sum_{k=1}^n\left(\bar{\psi}_{ik}^j(x;X,r)+\bar{\phi}_{ik}^j(x;X,r)\right).\qquad(10)$$

Let

$$E_{\max}^j=\sum_{i=1}^n|q_{ii}^j|(\bar{x}_i-\underline{x}_i)^2+\frac12\sum_{i=1,i\ne k}^n\sum_{k=1}^n|q_{ik}^j|\left[(\bar{x}_i-\underline{x}_i)^2+(\bar{x}_k-\underline{x}_k)^2+(\bar{X}_{ik}-\underline{X}_{ik})^2\right],$$
$$\Delta^j(x)=f^j(x)-\underline{f}^j(x;X,r),\qquad \nabla^j(x)=\bar{f}^j(x;X,r)-f^j(x).$$

Theorem 3. Let $E_{\max}^j$, $\Delta^j(x)$ and $\nabla^j(x)$ be defined as above and fix a vector $r\in\{0,1\}^{n^2}$. Then for any $X=[\underline{x},\bar{x}]\subseteq X^0$, $x\in X$ and $j\in\{0,1,\dots,m\}$ we have:

(a) $\max_{x\in X}\Delta^j(x)=\max_{x\in X}\nabla^j(x)=E_{\max}^j$;
(b) $\lim_{\|\bar{x}-\underline{x}\|\to0}\max_{x\in X}\Delta^j(x)=\lim_{\|\bar{x}-\underline{x}\|\to0}\max_{x\in X}\nabla^j(x)=0$.

Proof. (a) First, for a fixed vector $r\in\{0,1\}^{n^2}$ and any $j=0,1,\dots,m$, $i,k=1,\dots,n$, $i\ne k$, we prove

$$\max_{x_i\in[\underline{x}_i,\bar{x}_i]}\left(x_i^2-\varphi_i^l(x;X,r)\right)=\max_{x_i\in[\underline{x}_i,\bar{x}_i]}\left(\varphi_i^u(x;X,r)-x_i^2\right)=(\bar{x}_i-\underline{x}_i)^2,\qquad(11)$$
$$\max_{X_{ik}\in[\underline{X}_{ik},\bar{X}_{ik}]}\left(X_{ik}^2-\varphi_{ik}^l(x;X,r)\right)=\max_{X_{ik}\in[\underline{X}_{ik},\bar{X}_{ik}]}\left(\varphi_{ik}^u(x;X,r)-X_{ik}^2\right)=(\bar{X}_{ik}-\underline{X}_{ik})^2.\qquad(12)$$

Since $H_i^l(x):=x_i^2-\varphi_i^l(x;X,r)$ is convex in $x_i$, over $[\underline{x}_i,\bar{x}_i]$ it attains its maximum $H_i^{l,\max}$ at $\underline{x}_i$ or $\bar{x}_i$. Considering $r_i=0$ or $r_i=1$, by computation we obtain

$$H_i^{l,\max}=(\bar{x}_i-\underline{x}_i)^2.\qquad(13)$$

On the other hand, $H_i^u(x):=\varphi_i^u(x;X,r)-x_i^2$ is concave in $x_i$, so over $[\underline{x}_i,\bar{x}_i]$ it attains its maximum $H_i^{u,\max}$ at the point $\frac12z(1-r_i)$. Considering $r_i=0$ or $r_i=1$, by computation we obtain

$$H_i^{u,\max}=(\bar{x}_i-\underline{x}_i)^2.\qquad(14)$$

Therefore, (13) and (14) establish (11), and (12) follows by the same argument. Using (11) and (12), from the definitions of $\psi_i^j$, $\psi_{ik}^j$, $\phi_{ik}^j$, $\bar{\psi}_i^j$, $\bar{\psi}_{ik}^j$ and $\bar{\phi}_{ik}^j$ we have

$$\max_{x_i\in[\underline{x}_i,\bar{x}_i]}\left(q_{ii}^jx_i^2-\psi_i^j(x;X,r)\right)=\max_{x_i\in[\underline{x}_i,\bar{x}_i]}\left(\bar{\psi}_i^j(x;X,r)-q_{ii}^jx_i^2\right)=|q_{ii}^j|(\bar{x}_i-\underline{x}_i)^2,\qquad(15)$$
$$\max_{X_{ik}\in[\underline{X}_{ik},\bar{X}_{ik}]}\left(q_{ik}^jX_{ik}^2-\psi_{ik}^j(x;X,r)\right)=\max_{X_{ik}\in[\underline{X}_{ik},\bar{X}_{ik}]}\left(\bar{\psi}_{ik}^j(x;X,r)-q_{ik}^jX_{ik}^2\right)=|q_{ik}^j|(\bar{X}_{ik}-\underline{X}_{ik})^2,\qquad(16)$$
$$\max_{x\in[\underline{x},\bar{x}]}\left(-q_{ik}^j(x_i^2+x_k^2)-\phi_{ik}^j(x;X,r)\right)=\max_{x\in[\underline{x},\bar{x}]}\left(\bar{\phi}_{ik}^j(x;X,r)+q_{ik}^j(x_i^2+x_k^2)\right)=|q_{ik}^j|\left[(\bar{x}_i-\underline{x}_i)^2+(\bar{x}_k-\underline{x}_k)^2\right].\qquad(17)$$


According to (15)-(17) and the definitions of $\underline{f}^j(x;X,r)$ and $\bar{f}^j(x;X,r)$, we have

$$\max_{x\in X}\Delta^j(x)=\max_{x\in X}\nabla^j(x)=E_{\max}^j.\qquad(18)$$

(b) From the definitions of $X_{ik}$, $\underline{X}_{ik}$ and $\bar{X}_{ik}$, we have

$$(\bar{X}_{ik}-\underline{X}_{ik})^2\le(\bar{x}_i-\underline{x}_i)^2+(\bar{x}_k-\underline{x}_k)^2+2|\bar{x}_i-\underline{x}_i||\bar{x}_k-\underline{x}_k|\le4\|\bar{x}-\underline{x}\|^2.\qquad(19)$$

Therefore,

$$0\le E_{\max}^j\le\left(\sum_{i=1}^n|q_{ii}^j|+3\sum_{i=1,i\ne k}^n\sum_{k=1}^n|q_{ik}^j|\right)\|\bar{x}-\underline{x}\|^2,\qquad(20)$$

which completes the proof. □
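Bound (20) says that $E_{\max}^j$, and with it the relaxation gap, shrinks quadratically with the box width. A small C++ illustration (our code, with an arbitrary example matrix) evaluates $E_{\max}^j$ on successively halved boxes:

#include <cmath>
#include <cstdio>
#include <vector>

// E^j_max from Theorem 3 for a quadratic with coefficient matrix q on [lo, hi].
double Emax(const std::vector<std::vector<double>>& q,
            const std::vector<double>& lo, const std::vector<double>& hi) {
    int n = (int)q.size();
    double e = 0.0;
    for (int i = 0; i < n; ++i) {
        double wi = hi[i] - lo[i];
        e += std::fabs(q[i][i]) * wi * wi;
        for (int k = 0; k < n; ++k) {
            if (k == i) continue;
            double wk = hi[k] - lo[k];
            double wik = wi + wk;   // width of the interval for X_ik = x_i + x_k
            e += 0.5 * std::fabs(q[i][k]) * (wi * wi + wk * wk + wik * wik);
        }
    }
    return e;
}

int main() {
    std::vector<std::vector<double>> q = {{1, -2}, {-2, 3}};
    std::vector<double> lo = {0, 0}, hi = {4, 4};
    for (int it = 0; it < 4; ++it) {
        std::printf("width %g  Emax %g\n", hi[0] - lo[0], Emax(q, lo, hi));
        for (int i = 0; i < 2; ++i) hi[i] = 0.5 * (lo[i] + hi[i]);  // shrink box
    }
}

Each halving of the box width divides the printed value by exactly four, in line with part (b) of the theorem.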

The theorem shows that when the subhyperrectangle $X\subseteq X^0$ is small enough, the solution of RLP(X) approaches the solution of GQP(X) arbitrarily closely; this guarantees the global convergence of the method, stated in the following theorem.

Theorem 4. Assume the feasible set of GQP($X^0$) is nonempty. Then the algorithm either stops finitely with a global solution, or generates an infinite sequence $\{x^k\}$ every accumulation point of which is a global solution.

Proof. From the construction of the algorithm, $\beta_k$ is a nondecreasing sequence bounded above by $\min_{x\in F}f^0(x)$, which guarantees the existence of the limit $\beta:=\lim_{k\to\infty}\beta_k\le\min_{x\in F}f^0(x)$. Since $\{x^k\}$ is contained in the compact set $X^0$, there is a convergent subsequence $\{x^s\}\subseteq\{x^k\}$; suppose $\lim_{s\to\infty}x^s=\hat{x}$. Then, from the proposed algorithm, there exists a decreasing sequence of rectangles $\{X_r\}$ with $X_r\in R_s$, $x^r\in X_r$, $\beta_r=\beta(X_r)=\underline{f}^0(x^r;X_r,r)$ and $\lim_{r\to\infty}X_r=\{\hat{x}\}$. According to Theorem 3, we have $\lim_{r\to\infty}\beta_r=\lim_{r\to\infty}\underline{f}^0(x^r;X_r,r)=\lim_{r\to\infty}f^0(x^r)=f^0(\hat{x})$. It remains to prove that $\hat{x}$ is feasible for GQP($X^0$). First, $\hat{x}\in X^0$, since $X^0$ is closed. Second, we prove by contradiction that $f^j(\hat{x})\le b_j$, $j=1,\dots,m$. Assume $f^h(\hat{x})>b_h$ for some $h\in\{1,\dots,m\}$. Since $f^h$ is continuous and, again by Theorem 3, the sequence $\{\underline{f}^h(x^r;X_r,r)\}$ converges to $f^h(\hat{x})$, by the definition of convergence there exists $\bar{r}$ such that $|\underline{f}^h(x^r;X_r,r)-f^h(\hat{x})|<f^h(\hat{x})-b_h$ for all $r>\bar{r}$. Therefore, for all $r>\bar{r}$ we have $\underline{f}^h(x^r;X_r,r)>b_h$, which implies that $x^r$ is infeasible for RLP($X_r$), contradicting the assumption that $x^r=x(X_r)$. This completes the proof. □

4. Numerical experiments

We now report numerical experiments with the proposed global optimization algorithm to illustrate its efficiency. The experiments were carried out in the C++ programming language with convergence tolerance $\epsilon=10^{-6}$; the simplex method was applied to solve the linear relaxation programming problems.

Example 1 (Ref. [13, p. 1513]).

$$\begin{cases}\min\ x_1^2+x_2^2\\ \text{s.t.}\ 0.3x_1x_2\ge1,\\ \quad\ x_1-x_2\le1,\\ \quad\ x\in X^0=\{x\mid2\le x_1\le5,\ 1\le x_2\le3\}.\end{cases}$$

Although the objective function of this problem is convex, a local solution need not be the global one because of the nonconvexity of the constraint set. Using our method, the minimum point $(x_1^*,x_2^*)=(2.02344,1.65625)$ is reached from all starting points for different values of $r$.
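As a quick arithmetic check (our snippet, with the constraint direction as reconstructed above), the reported point is feasible and makes the nonconvex constraint essentially active:

#include <cstdio>

int main() {
    // reported minimizer of Example 1 (values from the paper)
    double x1 = 2.02344, x2 = 1.65625;
    std::printf("objective x1^2 + x2^2 = %.5f\n", x1 * x1 + x2 * x2);
    std::printf("0.3*x1*x2 = %.5f (>= 1 required)\n", 0.3 * x1 * x2);
    std::printf("x1 - x2   = %.5f (<= 1 required)\n", x1 - x2);
    return 0;
}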


Example 2 (Ref. [12, p. 361]).

$$\begin{cases}\min\ x_1\\ \text{s.t.}\ \frac14x_1+\frac12x_2-\frac1{16}x_1^2-\frac1{16}x_2^2\le1,\\ \quad\ \frac1{14}x_1^2+\frac1{14}x_2^2-\frac37x_1-\frac37x_2\le-1,\\ \quad\ 1\le x_1\le5.5,\ 1\le x_2\le5.5.\end{cases}\qquad(21)$$

These two nonlinear constraints define a nonconvex feasible region, and even though the objective function is linear there exist three distinct KKT points: (3.832, 4.823), (4.0, 4.0) and (1.178, 2.178). A local optimization approach can therefore at best converge to one of them, depending on which basin of attraction contains the initial point. In our computation the linear relaxation is used. With the chosen tolerance, the global minimum point $(x_1^*,x_2^*)=(1.17707564,2.17708437)$ is obtained from all starting points within 0.01 CPU seconds, whereas the method of [12] obtains the solution $(x_1^*,x_2^*)=(1.178,2.178)$.
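A short C++ check (ours) confirms that each reported KKT point lies on the boundary of at least one of the two constraints in (21), up to the rounding of the printed digits:

#include <cstdio>

int main() {
    // the three KKT points reported for Example 2
    double pts[3][2] = {{3.832, 4.823}, {4.0, 4.0}, {1.178, 2.178}};
    for (auto& p : pts) {
        double x1 = p[0], x2 = p[1];
        double g1 = x1 / 4 + x2 / 2 - x1 * x1 / 16 - x2 * x2 / 16;       // <= 1
        double g2 = x1 * x1 / 14 + x2 * x2 / 14 - 3 * x1 / 7 - 3 * x2 / 7; // <= -1
        std::printf("x=(%.3f,%.3f): g1=%.4f  g2=%.4f  f=x1=%.3f\n",
                    x1, x2, g1, g2, x1);
    }
    return 0;
}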

Example 3 (Ref. [14]).

$$\begin{cases}\min\ 4x_2+(x_1-1)^2+x_2^2-10x_3^2\\ \text{s.t.}\ x_1^2+x_2^2+x_3^2\le2,\\ \quad\ (x_1-2)^2+x_2^2+x_3^2\le2.\end{cases}\qquad(22)$$

As we can see, the unknowns of this problem are contained in the box

$$B_0:=\{(x_1,x_2,x_3)\mid 2-\sqrt2\le x_1\le\sqrt2,\ -\sqrt2\le x_2\le\sqrt2,\ -\sqrt2\le x_3\le\sqrt2\}.$$

In our computation the linear relaxation is used. Applying our algorithm, we obtained the global minimum point $(x_1^*,x_2^*,x_3^*)=(1,-0.220971,0.972272)$ and the optimal objective value $-11.2882$ for different values of $r$ (after 3 iterations).

Example 4. Consider the following problem:

$$\begin{cases}\min\ f^0(x):=\frac12\langle x,Q^0x\rangle+\langle x,c^0\rangle\\ \text{s.t.}\ f^j(x):=\frac12\langle x,Q^jx\rangle+\langle x,c^j\rangle\le b_j,\quad j=1,\dots,m,\\ \quad\ 0\le x_i\le10,\ i=1,\dots,n.\end{cases}\qquad(23)$$

All elements of $Q^0$ were randomly generated between 0 and 1; all elements of $Q^j$ $(j=1,\dots,m)$ were randomly generated between $-1$ and 0; all elements of $c^0$ were randomly generated between 0 and 1; all elements of $c^j$ $(j=1,\dots,m)$ were randomly generated between $-1$ and 0; the $b_j$ were randomly generated between $-300$ and $-90$; and $r\in\{0,1\}^{n^2}$ was randomly generated. The results are given in Table 1, where $n$ stands for the dimension of the problem, $m$ for the number of constraints, and $T$ for the CPU time (in seconds) of our algorithm.

Table 1. Results of our method

n     m     T
4     6     2.37678
5     11    6.39897
14    6     9.22732
18    7     15.841
20    5     11.9538
35    10    74.8853
37    9     77.1476
45    8     86.7174
46    5     44.2502
60    11    315.659
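For concreteness, here is a sketch of the random instance generation just described (our code; the coefficient ranges, including $b_j\in[-300,-90]$, are as read from the text, and the explicit symmetrization of $Q^j$ is our assumption, consistent with the paper's symmetric $Q_j$):

#include <random>
#include <vector>

struct Instance {
    std::vector<std::vector<double>> Q0;               // objective quadratic
    std::vector<std::vector<std::vector<double>>> Qj;  // constraint quadratics
    std::vector<double> c0, b;
    std::vector<std::vector<double>> cj;
};

Instance makeInstance(int n, int m, unsigned seed) {
    std::mt19937 gen(seed);
    std::uniform_real_distribution<double> pos(0.0, 1.0), neg(-1.0, 0.0),
                                           rhs(-300.0, -90.0);
    auto matrix = [&](auto& dist) {        // random symmetric n x n matrix
        std::vector<std::vector<double>> q(n, std::vector<double>(n));
        for (int i = 0; i < n; ++i)
            for (int k = 0; k <= i; ++k) q[i][k] = q[k][i] = dist(gen);
        return q;
    };
    Instance ins;
    ins.Q0 = matrix(pos);
    ins.c0.resize(n);
    for (auto& v : ins.c0) v = pos(gen);
    for (int j = 0; j < m; ++j) {
        ins.Qj.push_back(matrix(neg));
        ins.cj.emplace_back(n);
        for (auto& v : ins.cj.back()) v = neg(gen);
        ins.b.push_back(rhs(gen));
    }
    return ins;
}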


5. Concluding remarks

In this paper a global optimization algorithm based on a parametric linearizing method is proposed for solving GQP. The linear relaxation program is obtained by underestimating the objective and constraint functions of GQP with linear functions. The algorithm was shown to converge to the global minimum through the successive refinement of a linear relaxation of the feasible region and of the objective function, and the subsequent solution of a series of linear programming problems. Numerical tests show that the algorithm is efficient.

References

[1] M.H. Khammash, Synthesis of globally optimal controllers for robust performance to unstructured uncertainty, IEEE Transactions on Automatic Control 41 (1996) 189-198.
[2] M.V. Salapaka, M.H. Khammash, T.V. Voorhis, Synthesis of globally optimal controllers in l1 using the reformulation-linearization technique, in: Proceedings of the IEEE Conference on Decision and Control, Tampa, FL, December 1998.
[3] W.A. Lodwick, Preprocessing nonlinear functional constraints with applications to the pooling problem, ORSA Journal on Computing 4 (1992) 119-131.
[4] C.A. Floudas, V. Visweswaran, Primal-relaxed dual global optimization approach, Journal of Optimization Theory and Applications 78 (1993) 187-225.
[5] L.T.H. An, An efficient algorithm for globally minimizing a quadratic function under convex quadratic constraints, Mathematical Programming Series A 87 (2000) 401-426.
[6] Y.Y. Ye, Approximating global quadratic optimization with convex quadratic constraints, Journal of Global Optimization 15 (1999) 1-17.
[7] T.V. Voorhis, A global optimization algorithm using Lagrangian underestimates and the interval Newton method, Journal of Global Optimization 24 (2002) 349-370.
[8] T.V. Voorhis, F.A. Al-khayyal, Difference of convex solution of quadratically constrained optimization problems, European Journal of Operational Research 148 (2003) 349-362.
[9] R. Horst, U. Raber, Convergent outer approximation algorithms for solving unary problems, Journal of Global Optimization 13 (1998) 123-149.
[10] P.M. Pardalos, J.B. Rosen, Constrained Global Optimization: Algorithms and Applications, Springer-Verlag, Berlin, 1987.
[11] H.D. Sherali, C.H. Tuncbilek, A global optimization algorithm for polynomial programming problems using a reformulation-linearization technique, Journal of Global Optimization 2 (1992) 101-112.
[12] C.D. Maranas, C.A. Floudas, Global optimization in generalized geometric programming, Computers and Chemical Engineering 21 (4) (1997) 351-369.
[13] Y.J. Wang, K.C. Zhang, Y.L. Gao, Global optimization of generalized geometric programming, Computers and Mathematics with Applications 48 (2004) 1505-1516.
[14] J.M. Peng, Y. Yuan, Optimality conditions for the minimization of a quadratic with two quadratic constraints, SIAM Journal on Optimization 7 (1997) 579-594.