Linearization method for a class of multiplicative programming with exponent


Applied Mathematics and Computation 183 (2006) 328–336 www.elsevier.com/locate/amc

Peiping Shen a,*, Hongwei Jiao b

a College of Mathematics and Information Science, Henan Normal University, Xinxiang 453007, PR China
b Department of Mathematics, Henan Institute of Science and Technology, Xinxiang 453003, PR China

* Corresponding author. E-mail address: [email protected] (P. Shen).

doi:10.1016/j.amc.2006.05.074

Abstract

This paper presents a global optimization algorithm for a class of multiplicative programming problems with exponent under multiplicative constraints (MPE). By utilizing an equivalent problem of MPE, in the sense that the two problems share the same optimal solution, together with tangential hypersurfaces and concave envelope approximations, a linear relaxation of the equivalent problem is derived. The initial nonconvex programming problem (MPE) is thus reduced to a sequence of linear programming problems through the successive refinement of a linear relaxation of the feasible region and of the objective function. The proposed algorithm converges to the globally optimal solution of MPE via the solutions of this series of linear programming problems. Numerical results indicate that the proposed algorithm is robust and can successfully find the global minimum of MPE on a microcomputer.
© 2006 Elsevier Inc. All rights reserved.

Keywords: Multiplicative programming; Global optimization; Linear relaxation; Branch and bound

1. Introduction

Consider the following class of multiplicative programming problems with exponent:

$$(\mathrm{MPE}):\quad \begin{cases} \min & G_0(x) \\ \text{s.t.} & G_j(x)\le b_j,\quad j=1,\ldots,M, \\ & x\in X=[\underline{x},\overline{x}]\subset\mathbb{R}^N, \end{cases}$$

where

$$G_j(x)=\prod_{t=1}^{T_j}\Big(\sum_{i=1}^{N}c_{jti}x_i+d_{jt}\Big)^{\alpha_{jt}},\qquad j=0,1,\ldots,M.$$


Here $c_{jti}$, $d_{jt}$, $\alpha_{jt}$ are arbitrary real constants; $\sum_{i=1}^{N}c_{jti}x_i+d_{jt}>0$ and $b_j>0$ for $j=0,1,\ldots,M$, $t=1,\ldots,T_j$; $\underline{x}>0$, and $\underline{x}$, $\overline{x}$ are finite.

In general, MPE is a nonlinear optimization problem with a nonconvex objective function and a nonconvex constraint set. MPE has attracted considerable attention in the literature because of its large number of practical applications in various fields, including financial optimization [1], plant layout design [2], robust optimization [3], and so on. It is therefore important to develop good algorithms for the MPE problem. Since MPE may possess many local minima, it is known to be among the hardest global optimization problems.

In the last decade, many algorithms have been proposed for globally solving special forms of MPE. They can be classified as follows: outer-approximation methods [4], decomposition methods [5], finite branch and bound algorithms [6,7], primal and dual simplex methods [8], cutting plane methods [9], heuristic methods [10], etc. Although optimization methods for special forms of MPE are ubiquitous, global optimization algorithms for the general form of MPE have received little attention in the literature.

In this paper, a new branch and bound algorithm is given that finds a global optimal solution of MPE by solving a sequence of linear relaxations over partitioned subsets. The proposed method is based on an equivalent problem that has the same optimal solution as MPE. By utilizing the logarithm, tangential hypersurfaces and concave envelope approximations, a linear relaxation of the equivalent problem is derived. The initial nonconvex nonlinear programming problem MPE is thereby systematically converted into a series of linear programming problems, whose solutions can be made arbitrarily close to the globally optimal solution of MPE by a successive refinement process. In this method, (1) a new linear relaxation for the equivalent problem of MPE is proposed, so any effective linear programming algorithm can be used to solve the nonlinear programming problem MPE; (2) the relaxed linear programming problems are embedded within a branch-and-bound algorithm without introducing new variables or constraints; (3) numerical computation shows that the proposed method is practical.

This paper is organized as follows. In Section 2 the equivalent transformation and the linearization technique are presented for generating the relaxed linear programming problem. In Section 3 the branch-and-bound algorithm in which the relaxed subproblems are embedded is described, and the convergence of the algorithm is established. Numerical results for some problems from the application area are reported in Section 4, and Section 5 provides a summary.

2. Linear relaxation programming

In order to solve the problem MPE, we first derive an equivalent problem (P1) of MPE, in the sense that the two problems have the same optimal solution; in what follows we consider only problem (P1). The principal step in the development of a solution procedure for problem P1 is the construction of lower bounds for this problem, as well as for its partitioned subproblems. A lower bound on the solution of problem P1 and of its partitioned subproblems can be obtained by solving a linear programming relaxation. Toward this end, we take logarithms to obtain the following equivalent problem:

$$(\mathrm{P1}):\quad \begin{cases} \min & \Psi_0(x) \\ \text{s.t.} & \Psi_j(x)\le \ln b_j,\quad j=1,2,\ldots,M, \\ & x\in X^0=[\underline{x}^0,\overline{x}^0]\subset\mathbb{R}^N, \end{cases}$$

where

$$\Psi_j(x)=\sum_{t=1}^{T_j}\alpha_{jt}\ln\Big(\sum_{i=1}^{N}c_{jti}x_i+d_{jt}\Big),\qquad j=0,1,\ldots,M. \qquad (1)$$
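To make the transformation concrete, the sketch below (Python; the data layout, a list of (c, d, alpha) triples per function, is our own illustrative choice, not the paper's) evaluates $G_j(x)$ and its logarithmic counterpart $\Psi_j(x)$ and checks that $\Psi_j=\ln G_j$ whenever all linear factors are positive, as MPE requires:

    import math

    def G(terms, x):
        """Evaluate a multiplicative-with-exponent function
        G(x) = prod_t (c_t . x + d_t)^alpha_t.
        `terms` is a list of (c, d, alpha) triples: coefficient list c,
        scalar offset d, real exponent alpha (an illustrative layout)."""
        val = 1.0
        for c, d, alpha in terms:
            base = sum(ci * xi for ci, xi in zip(c, x)) + d
            assert base > 0, "MPE assumes every linear factor is positive"
            val *= base ** alpha
        return val

    def Psi(terms, x):
        """Logarithmic transform (1): Psi(x) = sum_t alpha_t ln(c_t . x + d_t)."""
        return sum(alpha * math.log(sum(ci * xi for ci, xi in zip(c, x)) + d)
                   for c, d, alpha in terms)

    # Objective of Example 1 (Section 4), evaluated at x = (1, 1):
    g0 = [([1, 1], 1, 2.5), ([2, 1], 1, 1.1), ([1, 2], 1, 1.9)]
    assert abs(math.log(G(g0, [1.0, 1.0])) - Psi(g0, [1.0, 1.0])) < 1e-12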

The linear relaxation of problem (P1) is realized by underestimating every function $\Psi_j(x)$ with a linear function $\Psi_j^L(x)$ for every $j=0,1,\ldots,M$. This linear function is constructed by underestimating every separable term $\alpha_{jt}\ln\big(\sum_{i=1}^{N}c_{jti}x_i+d_{jt}\big)$ with a linear function. The details of this linearization technique for generating relaxations are given in Theorem 1 below.


For any $X=[\underline{x},\overline{x}]\subseteq X^0\subset\mathbb{R}^N$ and every $x=(x_i)_{N\times 1}\in X$, the following notation and functions are introduced:

$$X_{jt}=\sum_{i=1}^{N}c_{jti}x_i+d_{jt},$$

$$X_{jt}^l=\sum_{i=1}^{N}\min(c_{jti}\underline{x}_i,\ c_{jti}\overline{x}_i)+d_{jt},\qquad X_{jt}^u=\sum_{i=1}^{N}\max(c_{jti}\underline{x}_i,\ c_{jti}\overline{x}_i)+d_{jt},$$

$$A_{jt}=\frac{\ln(X_{jt}^u)-\ln(X_{jt}^l)}{X_{jt}^u-X_{jt}^l},$$

$$f_{jt}(x)=\ln\Big(\sum_{i=1}^{N}c_{jti}x_i+d_{jt}\Big)=\ln(X_{jt}),$$

$$h_{jt}(x)=\ln(X_{jt}^l)+A_{jt}\Big(\sum_{i=1}^{N}c_{jti}x_i+d_{jt}-X_{jt}^l\Big)=\ln(X_{jt}^l)+A_{jt}(X_{jt}-X_{jt}^l),$$

$$g_{jt}(x)=A_{jt}\Big(\sum_{i=1}^{N}c_{jti}x_i+d_{jt}\Big)-1-\ln A_{jt}=A_{jt}X_{jt}-1-\ln A_{jt}.$$
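A minimal sketch of these quantities for a single factor follows (Python; `factor_bounds` and `envelope_pair` are our hypothetical helper names). It computes $X_{jt}^l$, $X_{jt}^u$, $A_{jt}$, the chord $h$ and the parallel tangent $g$, and verifies the sandwich property that Theorem 1 formalizes:

    import math

    def factor_bounds(c, d, lo, hi):
        """Range [X^l, X^u] of the factor c . x + d over the box lo <= x <= hi."""
        Xl = sum(min(ci * l, ci * h) for ci, l, h in zip(c, lo, hi)) + d
        Xu = sum(max(ci * l, ci * h) for ci, l, h in zip(c, lo, hi)) + d
        return Xl, Xu

    def envelope_pair(c, d, lo, hi):
        """Chord h (affine convex envelope of ln on [X^l, X^u]) and the
        parallel tangent g, both as functions of X = c . x + d."""
        Xl, Xu = factor_bounds(c, d, lo, hi)
        A = (math.log(Xu) - math.log(Xl)) / (Xu - Xl)
        h = lambda X: math.log(Xl) + A * (X - Xl)    # underestimates ln X
        g = lambda X: A * X - 1 - math.log(A)        # overestimates ln X
        return h, g

    # Sandwich property h(X) <= ln X <= g(X) for the factor x1 + 2x2 + 1
    # over the box [1, 3] x [1, 3] (so X ranges over [4, 10]):
    h, g = envelope_pair([1, 2], 1, [1, 1], [3, 3])
    for X in [4.5, 6.0, 8.0, 9.5]:
        assert h(X) <= math.log(X) <= g(X)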

Theorem 1. Consider the functions $f_{jt}(x)$, $g_{jt}(x)$ and $h_{jt}(x)$ for any $x\in X\subseteq X^0$, where $j=0,1,\ldots,M$ and $t=1,\ldots,T_j$. Then the following two statements hold.

(i) The function $h_{jt}(x)$ is the (affine) convex envelope of $f_{jt}(x)$ over $X$, and $g_{jt}(x)$ is a supporting hyperplane of $f_{jt}(x)$ parallel to $h_{jt}(x)$. Moreover, the functions $f_{jt}(x)$, $g_{jt}(x)$ and $h_{jt}(x)$ satisfy
$$h_{jt}(x)\le f_{jt}(x)\le g_{jt}(x)\qquad\text{for all }x\in X.$$

(ii) The differences between $g_{jt}(x)$ and $f_{jt}(x)$, and between $f_{jt}(x)$ and $h_{jt}(x)$, satisfy $\max_{x\in X}\Delta_{jt}^1(x)\to 0$ and $\max_{x\in X}\Delta_{jt}^2(x)\to 0$ as $\omega_{jt}\to 0$, where
$$\Delta_{jt}^1(x)=g_{jt}(x)-f_{jt}(x),\qquad \Delta_{jt}^2(x)=f_{jt}(x)-h_{jt}(x),\qquad \omega_{jt}=X_{jt}^u-X_{jt}^l,\qquad z_{jt}=\frac{X_{jt}^u}{X_{jt}^l}.$$
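Before the proof, a quick numerical illustration of (ii), as a sketch only: the proof below derives the common maximal gap in closed form as $\frac{\ln z}{z-1}-1+\ln\frac{z-1}{\ln z}$, which vanishes as $z_{jt}\to 1$, i.e. as the box shrinks:

    import math

    def max_gap(z):
        """Common maximal gap of Theorem 1(ii) for an interval with
        endpoint ratio z = X^u / X^l (> 1)."""
        r = math.log(z) / (z - 1.0)
        return r - 1.0 + math.log((z - 1.0) / math.log(z))

    for z in [4.0, 2.0, 1.5, 1.1, 1.01]:
        print(z, max_gap(z))   # 0.2341, 0.0597, ... decreasing toward 0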

Proof. (i) Since $X_{jt}\in[X_{jt}^l,X_{jt}^u]$, consider the function $\ln(X_{jt})$. Since $f_{jt}(x)=\ln(X_{jt})$ is a concave and monotonically increasing function of $X_{jt}$ over $[X_{jt}^l,X_{jt}^u]$, its (affine) convex envelope is $\ln(X_{jt}^l)+A_{jt}(X_{jt}-X_{jt}^l)=h_{jt}(x)$. Moreover, since the tangential supporting function for $\ln(X_{jt})$ is parallel to $h_{jt}(x)$, the point of tangential support occurs at $X_{jt}=1/A_{jt}$, and the corresponding tangential supporting function is $g_{jt}(x)=A_{jt}X_{jt}-1-\ln A_{jt}$. Hence, by the geometric properties of $f_{jt}(x)$ we have
$$\ln(X_{jt}^l)+A_{jt}(X_{jt}-X_{jt}^l)=h_{jt}(x)\le f_{jt}(x)=\ln(X_{jt}),\qquad \ln(X_{jt})\le g_{jt}(x)=A_{jt}X_{jt}-1-\ln A_{jt}.$$
This completes the proof of (i). Next, we prove (ii).

(ii) Write $H_{jt}^1(X_{jt})=\Delta_{jt}^1(x)$ and $H_{jt}^2(X_{jt})=\Delta_{jt}^2(x)$, so that
$$H_{jt}^1(X_{jt})=A_{jt}X_{jt}-1-\ln A_{jt}-\ln(X_{jt}),\qquad H_{jt}^2(X_{jt})=\ln(X_{jt})-\ln(X_{jt}^l)-A_{jt}(X_{jt}-X_{jt}^l).$$


Since $H_{jt}^2(X_{jt})$ is a concave function of $X_{jt}$ over $[X_{jt}^l,X_{jt}^u]$, it attains its maximum $H_{jt:\max}^2$ at the point $X_{jt}=1/A_{jt}$, and
$$H_{jt:\max}^2=\ln(1/A_{jt})-\ln(X_{jt}^l)-A_{jt}(1/A_{jt}-X_{jt}^l)=-\ln A_{jt}-\ln(X_{jt}^l)-1+A_{jt}X_{jt}^l=\frac{\ln z_{jt}}{z_{jt}-1}-1+\ln\frac{z_{jt}-1}{\ln z_{jt}}.$$

Since $z_{jt}\to 1$ and $\frac{\ln z_{jt}}{z_{jt}-1}\to 1$ as $\omega_{jt}\to 0$, it follows that $H_{jt:\max}^2\to 0$ as $\omega_{jt}\to 0$.

On the other hand, since $H_{jt}^1(X_{jt})$ is a convex function of $X_{jt}$ on $[X_{jt}^l,X_{jt}^u]$, $H_{jt}^1(X_{jt})$ attains its maximum $H_{jt:\max}^1$ at one of the endpoints $X_{jt}^l$ or $X_{jt}^u$. Thus,
$$H_{jt}^1(X_{jt}^l)=-\ln A_{jt}-\ln(X_{jt}^l)-1+A_{jt}X_{jt}^l=H_{jt:\max}^2,\qquad H_{jt}^1(X_{jt}^u)=-\ln A_{jt}-\ln(X_{jt}^u)-1+A_{jt}X_{jt}^u.$$
Since $A_{jt}X_{jt}^u-\ln X_{jt}^u=A_{jt}X_{jt}^l-\ln X_{jt}^l$ (by the definition of $A_{jt}$), we have $H_{jt}^1(X_{jt}^u)=H_{jt}^1(X_{jt}^l)$; therefore $H_{jt:\max}^1=H_{jt}^1(X_{jt}^u)=H_{jt}^1(X_{jt}^l)=H_{jt:\max}^2$, and it follows that $H_{jt:\max}^1\to 0$ and $H_{jt:\max}^2\to 0$ as $\omega_{jt}\to 0$. This completes the proof of Theorem 1. □

Remark. From Theorem 1 it follows that the functions $g_{jt}(x)$ and $h_{jt}(x)$ approximate the function $f_{jt}(x)$ arbitrarily well as $\omega_{jt}\to 0$.

Next, by means of Theorem 1 we can give the linear relaxation of problem P1. Assume that $X^k=[\underline{x}^k,\overline{x}^k]$ represents either the initial bounds on the variables of the problem, or bounds modified as defined for some partitioned subproblem in a branch and bound scheme, with $X^k=[\underline{x}^k,\overline{x}^k]\subseteq X^0$. Denoting by $\Psi_{jt}^L(x)$ the underestimating function of each nonlinear term $\alpha_{jt}\ln\big(\sum_{i=1}^{N}c_{jti}x_i+d_{jt}\big)$, Theorem 1 gives, for any $x\in X^k$,
$$\alpha_{jt}\ln\Big(\sum_{i=1}^{N}c_{jti}x_i+d_{jt}\Big)\ \ge\ \Psi_{jt}^L(x)=\begin{cases}\alpha_{jt}h_{jt}(x), & \text{if }\alpha_{jt}>0,\\ \alpha_{jt}g_{jt}(x), & \text{if }\alpha_{jt}<0,\end{cases}\qquad\forall j,t. \qquad (2)$$

Thus, summing over all terms $t$ ($t=1,\ldots,T_j$) and denoting the resulting right-hand side $\sum_{t=1}^{T_j}\Psi_{jt}^L(x)$ by $\Psi_j^R(x)$, we have
$$\Psi_j(x)\ \ge\ \Psi_j^R(x)=\sum_{t=1}^{T_j}\Psi_{jt}^L(x),\qquad \forall x\in X^k,\ j=0,1,\ldots,M. \qquad (3)$$
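Since $h_{jt}$ and $g_{jt}$ are affine in $x$, each $\Psi_j^R$ can be assembled as a coefficient vector plus a scalar offset, ready for an LP solver. A sketch under the same assumed data layout, building on `factor_bounds` from the earlier sketch (helper name is ours):

    import math

    def linear_underestimator(terms, lo, hi):
        """Coefficients (a, b) of an affine function a . x + b <= Psi(x)
        on the box [lo, hi], following (2)-(3): the chord h is used for
        alpha > 0 and the tangent g for alpha < 0."""
        n = len(lo)
        a = [0.0] * n
        b = 0.0
        for c, d, alpha in terms:
            Xl, Xu = factor_bounds(c, d, lo, hi)
            A = (math.log(Xu) - math.log(Xl)) / (Xu - Xl)
            for i in range(n):
                a[i] += alpha * A * c[i]           # h and g share the slope A
            if alpha > 0:                          # intercept of alpha * h
                b += alpha * (math.log(Xl) + A * (d - Xl))
            else:                                  # intercept of alpha * g
                b += alpha * (A * d - 1.0 - math.log(A))
        return a, b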

Consequently, we construct the corresponding linear programming relaxation P2 of P1 on $X^k$ as follows:

$$(\mathrm{P2}):\quad \begin{cases} \min & \Psi_0^R(x) \\ \text{s.t.} & \Psi_j^R(x)\le \ln b_j,\quad j=1,\ldots,M, \\ & x\in X^k. \end{cases}$$

Based on the linear underestimators, every point that is feasible to P1 in the subdomain $X^k$ is feasible to P2, and the objective value of P2 is less than or equal to that of P1 at every point of $X^k$. Thus, P2 provides a valid lower bound for the solution of P1 over the partition set $X^k$. It should be noted that problem P2 contains only the constraints necessary to guarantee convergence of the algorithm.

3. Algorithm and its convergence

In this section a branch and bound algorithm based on the above linear relaxation is developed to solve P1. This method solves a sequence of linear programming relaxations over partitioned subsets of $X^0$ in order to find a global optimal solution.


The branch and bound approach is based on partitioning the set $X^0$ into sub-hyperrectangles, each associated with a node of the branch and bound tree and with a linear programming relaxation on that sub-hyperrectangle. At any stage $k$ of the algorithm, suppose that we have a collection of active nodes denoted $Q_k$, each associated with a hyperrectangle $X\subseteq X^0$, $\forall X\in Q_k$. For each such node $X$ we will have computed a lower bound $\mathrm{LB}(X)$ of the optimal value of P1 via the solution of P2, so that the lower bound on the optimal value of P1 over the whole initial box $X^0$ at stage $k$ is given by $\mathrm{LB}_k=\min\{\mathrm{LB}(X):X\in Q_k\}$. Whenever the solution of the relaxation P2 turns out to be feasible to problem P1, we update the upper bound UB of the incumbent solution if necessary. The active node collection $Q_k$ then satisfies $\mathrm{LB}(X)<\mathrm{UB}$, $\forall X\in Q_k$, at each stage $k$. We now select an active node, partition its associated hyperrectangle into two sub-hyperrectangles as described below, and compute the lower bounds for each new node as before. Upon fathoming any nonimproving nodes, we obtain the collection of active nodes for the next stage, and this process is repeated until convergence is obtained.

The critical element in guaranteeing convergence to a global minimum is the choice of a suitable partitioning strategy. In this paper we choose the simple and standard bisection rule, which is sufficient to ensure convergence since it drives all the interval widths to zero. The branching rule is as follows (it also appears in the code sketch after the algorithm statement): assume that the sub-hyperrectangle $Y'=[\underline{y}',\overline{y}']\subseteq X^0$ is to be divided. Select the branching variable $y_p'$ satisfying $p=\arg\max\{\overline{y}_i'-\underline{y}_i':i=1,\ldots,N\}$ and partition $Y'$ by bisecting the interval $[\underline{y}_p',\overline{y}_p']$ into the subintervals $[\underline{y}_p',(\underline{y}_p'+\overline{y}_p')/2]$ and $[(\underline{y}_p'+\overline{y}_p')/2,\overline{y}_p']$.

The basic steps of the proposed global optimization algorithm are summarized as follows. Let $\mathrm{LB}(X^k)$ denote the optimal objective value of P2 for the sub-hyperrectangle $X^k$, and let $x^k=x(X^k)$ denote an element of the corresponding argmin.

3.1. Algorithmic statement

Step 0: Initialization.
0.1: Initialize the iteration counter $k:=0$, the set of active nodes $Q_0=\{X^0\}$, the upper bound $\mathrm{UB}=+\infty$, and the set of feasible points $F:=\emptyset$.
0.2: Solve problem P2 for $X=X^0$, obtaining $\mathrm{LB}_0:=\mathrm{LB}(X)$ and $x^0:=x(X)$. If $x^0$ is feasible to P1, update $F$ and UB if necessary. If $\mathrm{UB}\le\mathrm{LB}_0+\epsilon$, where $\epsilon>0$ is some accuracy tolerance, stop with $x^0$ as the prescribed solution to problem P1. Otherwise, proceed to Step 1.

Step 1: Select the midpoint $x^m$ of $X^k$; if $x^m$ is feasible to P1, then $F:=F\cup\{x^m\}$. Define the upper bound $\mathrm{UB}:=\min_{x\in F}\Psi_0(x)$. If $F\ne\emptyset$, denote the best known feasible point by $b:=\arg\min_{x\in F}\Psi_0(x)$.

Step 2: Choose a branching variable $x_p$ to partition $X^k$ into two new sub-hyperrectangles according to the branching rule above. Call the set of new partition rectangles $\overline{X}^k$.
2.1: For each $X\in\overline{X}^k$, calculate the lower bound $\underline{\Psi}_j^R$ of $\Psi_j^R(x)$ over the rectangle $X$, i.e., $\underline{\Psi}_j^R:=\sum_{t=1}^{T_j}\underline{\Psi}_{jt}^L$, $j=0,1,\ldots,M$, where (regarding $h_{jt}$ and $g_{jt}$ as functions of $X_{jt}$)
$$\underline{\Psi}_{jt}^L=\begin{cases}\alpha_{jt}h_{jt}(X_{jt}^l), & \text{if }\alpha_{jt}>0,\\ \alpha_{jt}g_{jt}(X_{jt}^u), & \text{if }\alpha_{jt}<0.\end{cases}$$
If one of the lower bounds satisfies $\underline{\Psi}_0^R>\mathrm{UB}$, or $\underline{\Psi}_j^R>\ln b_j$ for some $j=1,\ldots,M$, then remove the corresponding sub-rectangle $X$ from $\overline{X}^k$, i.e. $\overline{X}^k:=\overline{X}^k\setminus X$, and skip to the next element of $\overline{X}^k$.
2.2: If $\overline{X}^k\ne\emptyset$, solve P2 to obtain $\mathrm{LB}(X)$ and $x(X)$ for each $X\in\overline{X}^k$. If $\mathrm{LB}(X)>\mathrm{UB}$, set $\overline{X}^k:=\overline{X}^k\setminus X$; otherwise, update the best available solution UB, $F$ and $b$ if possible, as in Step 1.

Step 3: The remaining partition set is now $Q_k:=(Q_k\setminus X^k)\cup\overline{X}^k$, giving a new lower bound $\mathrm{LB}_k:=\inf_{X\in Q_k}\mathrm{LB}(X)$.

Step 4: Fathom any nonimproving nodes by setting $Q_{k+1}=Q_k\setminus\{X:\mathrm{UB}-\mathrm{LB}(X)\le\epsilon,\ X\in Q_k\}$. If $Q_{k+1}=\emptyset$, stop: $\exp(\mathrm{UB})$ is the optimal value of MPE, and $b$ is an optimal solution. Otherwise, set $k:=k+1$ and go to Step 5.

Step 5: Select an active node $X^k$ such that $X^k=\arg\min_{X\in Q_k}\mathrm{LB}(X)$, set $x^k:=x(X^k)$, and return to Step 1.
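The sketch below is our own compact rendering of Steps 0-5 (Python, using scipy.optimize.linprog and the helpers G, Psi, factor_bounds and linear_underestimator from the earlier sketches); it simplifies the statement above, e.g. by re-deriving the relaxation on every node rather than caching bounds:

    import heapq, itertools, math
    from scipy.optimize import linprog

    def solve_relaxation(obj_terms, cons, lo, hi):
        """Solve the LP relaxation P2 over [lo, hi]; returns (LB, x) or None."""
        c, c0 = linear_underestimator(obj_terms, lo, hi)
        A_ub, b_ub = [], []
        for terms, bj in cons:                      # Psi_j^R(x) <= ln b_j
            a, b = linear_underestimator(terms, lo, hi)
            A_ub.append(a)
            b_ub.append(math.log(bj) - b)
        res = linprog(c, A_ub=A_ub or None, b_ub=b_ub or None,
                      bounds=list(zip(lo, hi)), method="highs")
        return (res.fun + c0, list(res.x)) if res.success else None

    def branch_and_bound(obj_terms, cons, lo, hi, eps=1e-8):
        def feasible(x):
            return all(Psi(t, x) <= math.log(bj) + 1e-10 for t, bj in cons)
        tick = itertools.count()                    # heap tie-breaker
        UB, best = math.inf, None                   # Step 0
        root = solve_relaxation(obj_terms, cons, lo, hi)
        heap = [(root[0], next(tick), lo, hi)] if root else []
        while heap and UB - heap[0][0] > eps:       # Step 4: bound-based stop
            LB, _, blo, bhi = heapq.heappop(heap)   # Step 5: best-bound node
            xm = [(l + h) / 2 for l, h in zip(blo, bhi)]  # Step 1: midpoint
            if feasible(xm) and Psi(obj_terms, xm) < UB:
                UB, best = Psi(obj_terms, xm), xm
            p = max(range(len(blo)), key=lambda i: bhi[i] - blo[i])  # Step 2
            mid = (blo[p] + bhi[p]) / 2
            for clo, chi in ((blo, bhi[:p] + [mid] + bhi[p + 1:]),
                             (blo[:p] + [mid] + blo[p + 1:], bhi)):
                r = solve_relaxation(obj_terms, cons, clo, chi)   # Step 2.2
                if r is not None and r[0] < UB - eps:   # prune nonimproving
                    heapq.heappush(heap, (r[0], next(tick), clo, chi))
        return UB, best                             # exp(UB) estimates MPE

With Example 1's data from Section 4 this sketch should return best ≈ (1, 1) with exp(UB) ≈ 997.66, matching Table 1; iteration counts will differ from the paper's, since node-handling details differ.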


3.2. Convergence of the algorithm

The partition sets used by the proposed algorithm are all rectangular and compact. Tuy [11] showed that such rectangular subdivisions are exhaustive; hence, by the above branching rule, the proposed algorithm is exhaustive. Therefore, if the algorithm is infinite, the subdivision of $X$ produces an infinite nested sequence of partition sets $\{X^k\}$ and an associated sequence of diameters $d(X^k)$ satisfying
$$X^k\supseteq X^{k+1}\ (\forall k=0,1,\ldots),\qquad \lim_{k\to\infty}d(X^k)=0,\qquad \lim_{k\to\infty}X^k=\bigcap_k X^k=\{\hat{x}\}.$$

Lemma 1. Given a function $\Psi_j(x)$ of the form (1), the function $\Psi_j^R(x)$ given by (3) is strongly consistent on $X^0$, i.e., there exist a subsequence $\{X^q\}$ of the subrectangles $\{X^k\}$ and a corresponding sequence $\{x^q\}$ with $x^q\in X^q$ satisfying $\Psi_j^R(x^q)\to\Psi_j(\hat{x})$ as $q\to\infty$.

Proof. The branching rule provides a sequence of subrectangles $\{X^k\}$ produced by a subdivision that is exhaustive on $X^0$, and a corresponding sequence $\{x^k\}$ with $x^k\in X^k$. By exhaustiveness, $X^k\to\{\hat{x}\}$ and $x^k\to\hat{x}$. The upper and lower bounds of the $X^k$ form sequences in a compact space, so there exists a convergent subsequence. Thus $X^q=[\underline{x}^q,\overline{x}^q]\to[\hat{x},\hat{x}]$ as $q\to\infty$, and corresponding to this subsequence, $\{(\underline{x}^q,x^q,\overline{x}^q)\}\to(\hat{x},\hat{x},\hat{x})$ as $q\to\infty$ with $x^q\in[\underline{x}^q,\overline{x}^q]$. In addition, by (1)-(3) we have
$$\Psi_j(x)=\sum_{t=1}^{T_j}\alpha_{jt}f_{jt}(x)=\sum_{t=1,\ \alpha_{jt}>0}^{T_j}\alpha_{jt}f_{jt}(x)+\sum_{t=1,\ \alpha_{jt}<0}^{T_j}\alpha_{jt}f_{jt}(x), \qquad (4)$$
$$\Psi_j^R(x)=\sum_{t=1}^{T_j}\Psi_{jt}^L(x)=\sum_{t=1,\ \alpha_{jt}>0}^{T_j}\alpha_{jt}h_{jt}(x)+\sum_{t=1,\ \alpha_{jt}<0}^{T_j}\alpha_{jt}g_{jt}(x). \qquad (5)$$

From Theorem 1, it follows for $x\in X^q:=[\underline{x}^q,\overline{x}^q]$ that
$$0\le g_{jt}(x)-f_{jt}(x)\le \frac{\ln z_{jt}}{z_{jt}-1}-1+\ln\frac{z_{jt}-1}{\ln z_{jt}}\qquad\forall j,t, \qquad (6)$$
where $z_{jt}=X_{jt}^u/X_{jt}^l$ and $\omega_{jt}=X_{jt}^u-X_{jt}^l=\sum_{i=1}^{N}|c_{jti}|(\overline{x}_i^q-\underline{x}_i^q)$.

Since $g_{jt}(x^q)$ and $f_{jt}(x^q)$ can be regarded as continuous functions of $x^q$, and $\omega_{jt}$ is a function of $(\underline{x}^q,\overline{x}^q)$, (6) yields
$$\lim_{q\to\infty}\big(g_{jt}(x^q)-f_{jt}(x^q)\big)=g_{jt}(\hat{x})-f_{jt}(\hat{x})=0. \qquad (7)$$
By an argument similar to the proof of (7),
$$\lim_{q\to\infty}\big(f_{jt}(x^q)-h_{jt}(x^q)\big)=f_{jt}(\hat{x})-h_{jt}(\hat{x})=0. \qquad (8)$$
Therefore, it follows from (4), (5), (7) and (8) that $\Psi_j^R(x^q)\to\Psi_j(\hat{x})$ as $q\to\infty$, which completes the requirements for a strongly consistent underestimator. □

Lemma 2. Given a function $\Psi_j(x)$ of the form (1), $\Psi_j(x)$ has tight bounds, i.e., there exist lower and upper bounding functions $\underline{\Psi}_j(x)$ and $\overline{\Psi}_j(x)$ of $\Psi_j(x)$ such that, for any infinite sequence $\{X^k\}$ produced by an exhaustive subdivision with $X^k\to\{\hat{x}\}$, we have $\lim_{k\to\infty}\underline{\Psi}_j(x^k)=\lim_{k\to\infty}\Psi_j(x^k)=\lim_{k\to\infty}\overline{\Psi}_j(x^k)=\Psi_j(\hat{x})$.

Proof. Just as in Lemma 1, for any infinite subrectangle sequence $\{X^k\}$ of $X^0$, $X^k=[\underline{x}^k,\overline{x}^k]$, produced by an exhaustive subdivision with $X^k\to\{\hat{x}\}$ as $k\to\infty$, there exists a sequence $\{x^k\}$ with $x^k\in X^k$ such that $x^k\to\hat{x}$ as $k\to\infty$; correspondingly, $\{(\underline{x}^k,x^k,\overline{x}^k)\}\to(\hat{x},\hat{x},\hat{x})$ as $k\to\infty$. Based on (4) and (5), define the functions $\underline{\Psi}_j(x):=\Psi_j^R(x)$, as in (5), and
$$\overline{\Psi}_j(x):=\sum_{t=1,\ \alpha_{jt}>0}^{T_j}\alpha_{jt}g_{jt}(x)+\sum_{t=1,\ \alpha_{jt}<0}^{T_j}\alpha_{jt}h_{jt}(x). \qquad (9)$$


By Theorem 1, we have
$$\underline{\Psi}_j(x)\le \Psi_j(x)\le \overline{\Psi}_j(x)\qquad\text{for all }x\in X^k=[\underline{x}^k,\overline{x}^k]. \qquad (10)$$

On the other hand, by the definitions of $X_{jt}^l$ and $X_{jt}^u$, for any $X^k$ we have
$$\lim_{k\to\infty}X_{jt}^l=\lim_{k\to\infty}\Big(\sum_{i=1,\ c_{jti}>0}^{N}c_{jti}\underline{x}_i^k+\sum_{i=1,\ c_{jti}<0}^{N}c_{jti}\overline{x}_i^k+d_{jt}\Big)=\sum_{i=1}^{N}c_{jti}\hat{x}_i+d_{jt}, \qquad (11)$$
$$\lim_{k\to\infty}X_{jt}^u=\lim_{k\to\infty}\Big(\sum_{i=1,\ c_{jti}>0}^{N}c_{jti}\overline{x}_i^k+\sum_{i=1,\ c_{jti}<0}^{N}c_{jti}\underline{x}_i^k+d_{jt}\Big)=\sum_{i=1}^{N}c_{jti}\hat{x}_i+d_{jt}. \qquad (12)$$

It follows from (ii) of Theorem 1 that, for any $x\in X^k$,
$$g_{jt}(x)-\max_{x\in X^k}\Delta_{jt}^1(x)\le f_{jt}(x)\le h_{jt}(x)+\max_{x\in X^k}\Delta_{jt}^2(x)\qquad\forall j,t, \qquad (13)$$
where $\max_{x\in X^k}\Delta_{jt}^1(x)$ and $\max_{x\in X^k}\Delta_{jt}^2(x)$ are given by Theorem 1. Since $\omega_{jt}\to 0$ as $k\to\infty$ by (11) and (12), we have $\max_{x\in X^k}\Delta_{jt}^1(x)\to 0$ and $\max_{x\in X^k}\Delta_{jt}^2(x)\to 0$ as $k\to\infty$. Hence, by continuity of the functions $f_{jt}(x)$, $\ln(\cdot)$ and $X_{jt}$, it follows from inequality (13) that
$$\lim_{k\to\infty}\Big(g_{jt}(x^k)-\max_{x\in X^k}\Delta_{jt}^1(x)\Big)=\lim_{k\to\infty}\Big(h_{jt}(x^k)+\max_{x\in X^k}\Delta_{jt}^2(x)\Big)=\ln\Big(\sum_{i=1}^{N}c_{jti}\hat{x}_i+d_{jt}\Big)=f_{jt}(\hat{x}),$$
i.e.,
$$\lim_{k\to\infty}g_{jt}(x^k)=\lim_{k\to\infty}h_{jt}(x^k)=\lim_{k\to\infty}f_{jt}(x^k)=\ln\Big(\sum_{i=1}^{N}c_{jti}\hat{x}_i+d_{jt}\Big)=f_{jt}(\hat{x}). \qquad (14)$$

Consequently, from (4), (5), (9), (10) and (14) we obtain
$$\lim_{k\to\infty}\underline{\Psi}_j(x^k)=\lim_{k\to\infty}\Psi_j(x^k)=\lim_{k\to\infty}\overline{\Psi}_j(x^k)=\Psi_j(\hat{x}),$$
that is, the function $\Psi_j$ has tight upper and lower bounds $\overline{\Psi}_j$, $\underline{\Psi}_j$. □

Let $X_a$ be the set of accumulation points of $\{x^k\}$, and let $X^*=\arg\min_{x\in D}\Psi_0(x)$, where $D\ne\emptyset$ is the feasible region of problem P1.

Theorem 2. The above algorithm either terminates finitely with the incumbent solution being optimal to problem P1, or it generates an infinite branch and bound tree such that
$$\mathrm{LB}:=\lim_{k\to\infty}\mathrm{LB}_k=\min_{x\in D}\Psi_0(x),\qquad\text{and}\qquad X_a\subseteq X^*.$$

Proof. At every iteration $k=0,1,2,\ldots$, the algorithm guarantees $\mathrm{LB}_k\le\min_{x\in D}\Psi_0(x)$, $X^k\in\arg\min_{X\in Q_k}\mathrm{LB}(X)$, and $x^k=x(X^k)\in X^k\subseteq X^0$. Horst [12] shows that $\{\mathrm{LB}_k\}$ is a nondecreasing sequence bounded above by $\min_{x\in D}\Psi_0(x)$, which guarantees the existence of the limit $\mathrm{LB}:=\lim_{k\to\infty}\mathrm{LB}_k\le\min_{x\in D}\Psi_0(x)$. Since $\{x^k\}$ is a sequence on a compact set, it has a convergent subsequence. For any $\hat{x}\in X_a$ there exists a subsequence $\{x^r\}$ of $\{x^k\}$ with $\lim_{r\to\infty}x^r=\hat{x}$. By Ref. [11] and Lemma 1, the subdivision of partition sets in Step 2 is exhaustive on $X^0$, and the selection of elements to be partitioned in Step 2 is bound improving. Thus there exists a decreasing subsequence $\{X^q\}$ of $\{X^r\}$, where $X^r$ is the partition element of $Q_r$ containing $x^r$, with $x^q\in X^q$, $\mathrm{LB}_q=\mathrm{LB}(X^q)=\Psi_0^R(x^q)$, and $\lim_{q\to\infty}x^q=\hat{x}$. By Lemma 2, it follows that $\lim_{q\to\infty}\mathrm{LB}_q=\mathrm{LB}=\Psi_0(\hat{x})$.

It remains to prove that $\hat{x}\in D$. Since $X^0$ is closed, $\hat{x}\in X^0$. The remainder of the proof is by contradiction. Assume $\hat{x}\notin D$; then $\Psi_j(\hat{x})-\ln b_j=\delta>0$ for some $j\in\{1,\ldots,M\}$. Since $\Psi_j$ is continuous, the sequence $\{\Psi_j(x^q)\}$ converges to $\Psi_j(\hat{x})$. By the definition of convergence, there exists $q_\delta$ such that $|\Psi_j(x^q)-\Psi_j(\hat{x})|<\delta$ for $q>q_\delta$; therefore, for $q>q_\delta$, $\Psi_j(x^q)>\ln b_j$. Since, by Lemma 1, $\Psi_j^R(x^q)\to\Psi_j(\hat{x})$ as well, eventually $\Psi_j^R(x^q)>\ln b_j$, violating the feasibility of $x^q=x(X^q)$ for P2. This contradiction shows that $\hat{x}\in D$. Hence $\mathrm{LB}=\Psi_0(\hat{x})=\min_{x\in D}\Psi_0(x)$ and $\hat{x}\in X^*$. □


Table 1
Computational results of the test examples

Example   Optimal solution (x1, x2, x3)   Optimal value   Iter   Max-node   Time
1         (1.0, 1.0)                      997.661265      1      1          0
2         (1.0, 1.0)                      275.074284      1      1          0
3         (1.6875, 2.375)                 0.227609428     6      3          <1
4         (1.0, 1.0, 1.0)                 60.0            1      1          0

4. Numerical experiment

To verify the performance of the proposed global optimization algorithm, some test problems were run on a Pentium III (433 MHz) microcomputer. The algorithm was coded in C++ and uses the elementary simplex method to solve the linear programming relaxations; the convergence tolerance was set to $\epsilon=10^{-8}$ in our experiments. The results obtained by the proposed algorithm on the test problems are reported in Table 1; they show that our algorithm can globally solve the MPE problem effectively. In Table 1 the column headers have the following meaning: Iter, number of algorithm iterations; Max-node, maximal number of active nodes needed; Time, execution time in seconds.

Example 1.
$$\begin{aligned} \min\quad & G_0(x)=(x_1+x_2+1)^{2.5}(2x_1+x_2+1)^{1.1}(x_1+2x_2+1)^{1.9} \\ \text{s.t.}\quad & G_1(x)=(x_1+2x_2+1)^{1.1}(2x_1+2x_2+2)^{1.3}\le 50, \\ & 1\le x_1\le 3,\quad 1\le x_2\le 3. \end{aligned}$$
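As a consistency check (a sketch reusing `G` and the (c, d, alpha) data layout from the Section 2 sketch; the encoding itself is ours), evaluating Example 1 at the reported optimum $x=(1,1)$ reproduces the objective value in Table 1 and confirms feasibility:

    # Data for Example 1:
    g0 = [([1, 1], 1, 2.5), ([2, 1], 1, 1.1), ([1, 2], 1, 1.9)]
    g1 = [([1, 2], 1, 1.1), ([2, 2], 2, 1.3)]
    x_opt = [1.0, 1.0]

    print(G(g0, x_opt))          # ~997.661..., the optimal value in Table 1
    assert G(g1, x_opt) <= 50    # G1(x) <= 50 holds at the reported optimum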

Example 2.
$$\begin{aligned} \min\quad & G_0(x)=(2x_1+x_2+1)^{1.5}(2x_1+x_2+1)^{2.1}(0.5x_1+2x_2+1)^{0.5} \\ \text{s.t.}\quad & G_1(x)=(x_1+2x_2+1)^{1.2}(2x_1+2x_2+2)^{0.1}\le 18, \\ & G_2(x)=(1.5x_1+2x_2+1)(2x_1+2x_2+1)^{0.5}\le 25, \\ & 1\le x_1,\ x_2\le 3. \end{aligned}$$

Example 3.
$$\begin{aligned} \min\quad & G_0(x)=169\cdot\frac{x_1+x_2+1}{37x_1+73x_2+13}\cdot\frac{x_1+2x_2+1}{63x_1-18x_2+39} \\ \text{s.t.}\quad & G_1(x)=5x_1-3x_2\le 3, \\ & 1.5\le x_1\le 3,\quad 2\le x_2\le 3.5. \end{aligned}$$

Example 4.
$$\begin{aligned} \min\quad & G_0(x)=(x_1+x_2+x_3)(2x_1+x_2+x_3)(x_1+2x_2+2x_3) \\ \text{s.t.}\quad & G_1(x)=(x_1+2x_2+x_3)^{1.1}(2x_1+2x_2+x_3)^{1.3}\le 100, \\ & 1\le x_1,\ x_2,\ x_3\le 3. \end{aligned}$$

5. Conclusion

In this paper, a global optimization algorithm is presented for a class of multiplicative programming problems with exponent. A transformation is employed to convert the initial nonconvex problem (MPE) into an equivalent problem P1; a linear relaxation of P1 is then obtained by linearly lower-bounding the objective function and the constraints. The algorithm was shown to attain finite $\epsilon$-convergence to the global minimum through the successive refinement of a linear relaxation of the feasible region and the subsequent solution of a series of linear programming problems. The proposed approach was applied to several test problems, and in all cases convergence to the global minimum was achieved. The numerical results illustrate the feasibility and robustness of the present algorithm.

Acknowledgement

This paper is supported by the National Social Science Foundation of China (05XRK008), the Natural Science Foundation (0511011500), the Foundation of the Research Program for Soft Science (0513030920), and the Natural Science Foundation of the Education Department (2004110007) of Henan Province.

References

[1] C.D. Maranas, I.P. Androulakis, C.A. Floudas, A.J. Berger, J.M. Mulvey, Solving long-term financial planning problems via global optimization, Journal of Economic Dynamics and Control 21 (1997) 1405-1425.
[2] I. Quesada, I.E. Grossmann, Alternative bounding approximations for the global optimization of various engineering design problems, in: I.E. Grossmann (Ed.), Global Optimization in Engineering Design, Nonconvex Optimization and Its Applications, Vol. 9, Kluwer Academic Publishers, Norwell, MA, 1996, pp. 309-331.
[3] J.M. Mulvey, R.J. Vanderbei, S.A. Zenios, Robust optimization of large-scale systems, Operations Research 43 (1995) 264-281.
[4] T. Kuno, Y. Yajima, H. Konno, An outer approximation method for minimizing the product of several convex functions on a convex set, Journal of Global Optimization 3 (3) (1993) 325-335.
[5] H.P. Benson, Decomposition branch and bound based algorithm for linear programs with additional multiplicative constraints, Journal of Optimization Theory and Applications 126 (1) (2005) 41-46.
[6] T. Kuno, A finite branch and bound algorithm for linear multiplicative programming, Computational Optimization and Applications 20 (2001) 119-135.
[7] H.S. Ryoo, N.V. Sahinidis, Global optimization of multiplicative programs, Journal of Global Optimization 26 (2003) 387-418.
[8] S. Schaible, C. Sodini, Finite algorithm for generalized linear multiplicative programming, Journal of Optimization Theory and Applications 87 (2) (1995) 441-455.
[9] H.P. Benson, G.M. Boger, Outcome-space cutting-plane algorithm for linear multiplicative programming, Journal of Optimization Theory and Applications 104 (2) (2000) 301-322.
[10] X.J. Liu, T. Umegaki, Y. Yamamoto, Heuristic methods for linear multiplicative programming, Journal of Global Optimization 4 (15) (1999) 433-447.
[11] H. Tuy, Effect of the subdivision strategy on convergence and efficiency of some global optimization algorithms, Journal of Global Optimization 1 (1991) 23-36.
[12] R. Horst, Deterministic global optimization with partition sets whose feasibility is not known: Application to concave minimization, reverse convex constraints, DC-programming, and Lipschitzian optimization, Journal of Optimization Theory and Applications 58 (1) (1988) 11-37.