A new global optimization algorithm for signomial geometric programming via Lagrangian relaxation

Applied Mathematics and Computation 184 (2007) 886–894
www.elsevier.com/locate/amc

Shao-Jian Qu *, Ke-Cun Zhang, Ying Ji
Faculty of Science, Xi’an Jiaotong University, Xi’an 710049, PR China

Abstract

In this paper, a global optimization algorithm is proposed for solving the signomial geometric programming (SGP) problem; it relies on the exponential variable transformation of SGP and on the Lagrangian duality of the transformed program. The difficulty in utilizing Lagrangian duality within a global optimization context is that the restricted Lagrangian function for a given estimate of the Lagrange multipliers is often nonconvex. Minimizing a linear underestimation of the restricted Lagrangian overcomes this difficulty and facilitates the use of Lagrangian duality within a global optimization framework. In the new algorithm, lower bounds are obtained by minimizing a linear relaxation of the restricted Lagrangian for a given estimate of the Lagrange multipliers. A branch-and-bound algorithm is presented that relies on these Lagrangian relaxations to provide lower bounds and on the interval Newton method to facilitate convergence in the neighborhood of the global solution. Computational results show that the algorithm is efficient.
© 2006 Elsevier Inc. All rights reserved.

Keywords: Signomial geometric programming; Lagrangian duality; Branch-and-bound algorithm

1. Introduction

In this paper, we consider the global optimization of the signomial geometric programming (SGP) problem of the following form:

$$
\mathrm{SGP}(X^0)\quad
\begin{cases}
\min\; h_0(t) \\
\text{s.t.}\; h_j(t) \le 1, \quad j = 1, \ldots, m, \\
\quad\; X^0 = \{\, t : 0 < t^l \le t \le t^u \,\},
\end{cases}
$$

where

$$
h_j(t) = \sum_{t=1}^{T_j} a_{jt} \prod_{i=1}^{n} t_i^{c_{jti}}, \qquad j = 0, 1, \ldots, m,
$$

$T_j$ is a positive integer, and each $a_{jt}$ and each $c_{jti}$ is assumed to be a real number.

* Corresponding author. E-mail address: [email protected] (S.-J. Qu).

0096-3003/$ - see front matter © 2006 Elsevier Inc. All rights reserved. doi:10.1016/j.amc.2006.05.208

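Before proceeding, a minimal sketch may make the problem format concrete. It is illustrative Python, not part of the paper: the (coefficient, exponent-vector) representation and all names are assumptions made here, with Example 1 of Section 5 used as data.

```python
import math

# One possible representation of a signomial h_j(t) = sum_t a_jt * prod_i t_i^{c_jti}:
# a list of (coefficient, exponent-vector) pairs. Illustrative only.

def signomial(terms, t):
    """Evaluate sum_k a_k * prod_i t_i^{c_ki} at a positive point t."""
    return sum(a * math.prod(ti ** ci for ti, ci in zip(t, c)) for a, c in terms)

# Example 1 of Section 5 written in this representation:
h0 = [(0.5, (1, -1, 0)), (-1.0, (1, 0, 0)), (-5.0, (0, -1, 0))]
h1 = [(0.01, (0, 1, -1)), (0.01, (1, 0, 0)), (0.0005, (1, 0, 1))]
t = (88.6274, 7.9621, 1.3215)      # the solution reported in Table 1
print(signomial(h0, t))            # ~ -83.69 (the reported objective value)
print(signomial(h1, t))            # ~ 1.005  (the constraint is essentially active)
```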

In general, formulation SGP corresponds to a nonlinear optimization problem with a nonconvex objective function and a nonconvex constraint set. Note that if $a_{jt} \ge 0$ for all $t = 1, \ldots, T_j$, $j = 0, 1, \ldots, m$, then SGP reduces to the classical posynomial geometric programming formulation, which laid the foundation for the theory of SGP. SGP has found a wide range of applications since its initial development. Though SGP is a special class of nonlinear programming, as noted in Refs. [1,2], many nonlinear programs may be restated as geometric programs with very little additional effort, by simple techniques such as a change of variables or by straightforward algebraic manipulation of terms. Its greatest impact has been in the areas of

(1) engineering design [3–5];
(2) manufacturing [6,7];
(3) chemical equilibrium [8,9];
(4) economics and statistics [10–12].

Hence, it is necessary to develop good SGP algorithms. Local optimization approaches for solving SGP problems comprise three kinds of methods in general. First, successive approximation by posynomials, called ‘condensation’, has been the most popular [13]. Second, Passy and Wilde [14] developed a weaker type of duality, called ‘pseudo-duality’, to accommodate this class of nonlinear optimization problems. Third, general nonlinear programming methods have been adapted [15]. Though local optimization methods for SGP are ubiquitous, finding a provably global solution to problems of this sort is difficult. Some specialized algorithms have been developed for the global optimization of SGP when each $c_{jti}$ is a positive integer or a rational number [16–18]. For the case where each $c_{jti}$ is real, Maranas and Floudas [8] proposed a global optimization algorithm (RCA) based on the exponential variable transformation of SGP, convex relaxation, and branch-and-bound over hyperrectangles.

In this paper, a global optimization algorithm is proposed for problem SGP, in which each $c_{jti}$ is assumed to be real; it is based on using the exponential transformation and Lagrangian duality to generate lower bounds on the optimal objective function value. Lagrangian duality is a well-known optimization tool that can be employed in a wide variety of contexts. Two difficulties arise in attempting to use Lagrangian duality to solve a nonconvex problem. The first is the duality gap: solving the dual does not necessarily yield the primal solution. Recently many researchers have studied utilizing Lagrangian duality within a branch-and-bound framework that partitions the feasible region. They have proved that, for very general classes of nonconvex programs with a suitably refined partitioning of the feasible set, the duality gap is less than any specified tolerance $\varepsilon$ [19–21]. In each paper, these results motivate a convergent branch-and-bound algorithm that uses Lagrangian duality to generate bounds. While a suitable partitioning strategy can overcome the duality gap, the second difficulty with using Lagrangian duality for global optimization is that it requires the minimization of a nonconvex function. Thus, it is typically proposed for problems whose structure ensures a tractable minimization subproblem. The papers mentioned above provide a nice illustration of this. To minimize a general nonconvex function over a polytope, Ben-Tal et al. [19] and Dür and Horst [20] use a convex envelope construction to ensure that the dual function generates a valid lower bound. Barrientos and Correa [21] transform quadratic programs so that their objective functions are separable; the Lagrangian subproblem thus reduces to the minimization of a separable quadratic function over variable bounds. In our paper, (1) we propose a linear relaxation of the Lagrangian over bound constraints, which is computationally more convenient than the convex relaxation of [8] and is naturally incorporated into a branch-and-bound scheme; (2) an exhaustive partitioning process guarantees that the linear relaxation of the Lagrangian approaches the Lagrangian, so the algorithm can be shown to converge to the global solution; (3) the resulting relaxed linear program does not introduce new variables or constraints, in contrast with [16–18].

The plan of the paper is as follows. In the next section, the linear relaxation of the Lagrangian is presented for generating lower bounds for SGP. In Section 3, the proposed branch-and-bound algorithm, in which the relaxed subproblems are embedded, is described. Section 4 establishes the convergence of the algorithm. Numerical results for some problems from the area of engineering design are reported in Section 5, and Section 6 provides a summary.


2. Lagrangian relaxation

Applying the exponential transformation $t_i = \exp(x_i)$, $i = 1, \ldots, n$, to the original formulation $\mathrm{SGP}(X^0)$, we obtain the following equivalent optimization problem:

$$
\mathrm{SGP}(B^0)\quad
\begin{cases}
\min\; f_0(x) \\
\text{s.t.}\; f_j(x) \le 1, \quad j = 1, \ldots, m, \\
\quad\; B^0 = \{\, x : \underline{x}^0 = \ln t^l \le x \le \overline{x}^0 = \ln t^u \,\},
\end{cases}
\tag{2.1}
$$

where

$$
f_j(x) = \sum_{t=1}^{T_j} a_{jt} \exp\Big(\sum_{i=1}^{n} c_{jti} x_i\Big)
       = \sum_{t \in P_j} a_{jt} \exp\Big(\sum_{i=1}^{n} c_{jti} x_i\Big)
       + \sum_{t \in N_j} a_{jt} \exp\Big(\sum_{i=1}^{n} c_{jti} x_i\Big),
\tag{2.2}
$$

$j = 0, 1, \ldots, m$, with $P_j = \{t : a_{jt} \ge 0,\; t = 1, \ldots, T_j\}$ and $N_j = \{t : a_{jt} < 0,\; t = 1, \ldots, T_j\}$. According to (2.2), every function $f_j$ can be expressed as the sum of a convex function and a concave function.
The principal structure in the development of a solution procedure for problem $\mathrm{SGP}(B^0)$ is the construction of lower bounds for this problem, as well as for its partitioned subproblems. A lower bound on the solution of problem (2.1) and its partitioned subproblems can be obtained by solving a Lagrangian relaxation problem. Assuming that only the complicating constraints of (2.1) are dualized, the Lagrangian function is given by

$$
\phi(x, \lambda) = f_0(x) + \sum_{j=1}^{m} \lambda_j \big(f_j(x) - 1\big),
\tag{2.3}
$$

where $\lambda$ is the vector of dual variables. The Lagrangian dual problem of (2.1) is

$$
\max_{\lambda \ge 0}\ \Big\{\, \Phi(\lambda) = \min_{x \in B^0} \phi(x, \lambda) \,\Big\},
\tag{2.4}
$$

where $B^0$ is defined as in problem (2.1). Any feasible dual solution $\lambda$ yields a lower bound $\Phi(\lambda)$ on $f^*$, the optimal objective function value of problem (2.1). Since problem (2.1) is nonconvex, $\Phi(\lambda^*)$, the objective function value of the optimal dual solution $\lambda^*$, will not necessarily equal $f^*$, so there may be a duality gap. Fortunately, a suitable partitioning strategy can overcome the duality gap.

The linear relaxation of problem (2.4) can be realized by underestimating $\phi(x, \lambda)$ with a linear function $Lf(x, \lambda)$. This linear function is constructed by underestimating every implicitly separable term $a_{jt}\exp(\sum_{i=1}^{n} c_{jti} x_i)$ with a linear function. We describe the method as follows. First, $\exp(x)$ is a convex, increasing function of the single variable $x$. We can therefore construct the affine concave envelope $U(\exp(x))$ of $\exp(x)$ over an interval $x^l \le x \le x^u$:

$$
U(\exp(x)) = k(x - x^l) + \exp(x^l),
\tag{2.5}
$$

where $k = (\exp(x^u) - \exp(x^l))/(x^u - x^l)$. Moreover, the tangential supporting function $L(\exp(x))$ of $\exp(x)$ that is parallel to $U(\exp(x))$ over the interval $x^l \le x \le x^u$ touches $\exp$ at the point $x = \ln k$ and is given by

$$
L(\exp(x)) = k(1 + x - \ln k).
$$

The functions $U(\exp(x))$, $L(\exp(x))$ and $\exp(x)$ satisfy

$$
L(\exp(x)) \le \exp(x) \le U(\exp(x)).
\tag{2.6}
$$
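The bounds (2.5) and (2.6) are straightforward to verify numerically. The following sketch, with illustrative names of my own choosing, constructs both functions for one interval and checks the sandwich inequality at a few points.

```python
import math

# Chord overestimator U and parallel tangent underestimator L of exp on
# [xl, xu], as in (2.5)-(2.6); names are illustrative.

def chord_and_tangent(xl, xu):
    k = (math.exp(xu) - math.exp(xl)) / (xu - xl)   # slope of the chord
    U = lambda x: k * (x - xl) + math.exp(xl)       # concave envelope of exp
    L = lambda x: k * (1.0 + x - math.log(k))       # tangent at x = ln(k)
    return L, U

L, U = chord_and_tangent(0.0, 1.0)
for x in (0.0, 0.3, 0.541, 0.7, 1.0):               # 0.541 ~ ln(k), the touch point
    assert L(x) - 1e-12 <= math.exp(x) <= U(x) + 1e-12
```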


According to the above discussion, the linear relaxation $Lf(x, \lambda)$ of $\phi(x, \lambda)$, for any given $\lambda \ge 0$, is obtained by underestimating every implicitly separable term $a_{jt}\exp(\sum_{i=1}^{n} c_{jti} x_i)$ with a linear function. For any $B = [\underline{x}, \overline{x}] \subseteq B^0$ and all $x \in B$, the following notation is introduced:

$$
Lf(x, \lambda) = \sum_{j=0}^{m} \lambda_j \Big(\sum_{t \in P_j} a_{jt} h_{jt}(x) - 1\Big)
              + \sum_{j=0}^{m} \lambda_j \Big(\sum_{t \in N_j} a_{jt} g_{jt}(x) - 1\Big), \qquad \lambda_0 = 1,
$$

$$
Uf(x, \lambda) = \sum_{j=0}^{m} \lambda_j \Big(\sum_{t \in P_j} a_{jt} g_{jt}(x) - 1\Big)
              + \sum_{j=0}^{m} \lambda_j \Big(\sum_{t \in N_j} a_{jt} h_{jt}(x) - 1\Big), \qquad \lambda_0 = 1,
$$

where

$$
X_{jt} = \sum_{i=1}^{n} c_{jti} x_i, \qquad
X^l_{jt} = \sum_{i=1}^{n} \min\{c_{jti}\underline{x}_i,\; c_{jti}\overline{x}_i\}, \qquad
X^u_{jt} = \sum_{i=1}^{n} \max\{c_{jti}\underline{x}_i,\; c_{jti}\overline{x}_i\},
$$

$$
A_{jt} = \frac{\exp(X^u_{jt}) - \exp(X^l_{jt})}{X^u_{jt} - X^l_{jt}},
$$

$$
h_{jt}(x) = A_{jt}\Big(1 + \sum_{i=1}^{n} c_{jti} x_i - \ln A_{jt}\Big) = A_{jt}\big(1 + X_{jt} - \ln A_{jt}\big),
$$

$$
g_{jt}(x) = \exp(X^l_{jt}) + A_{jt}\Big(\sum_{i=1}^{n} c_{jti} x_i - X^l_{jt}\Big) = \exp(X^l_{jt}) + A_{jt}\big(X_{jt} - X^l_{jt}\big).
$$

According to the results (2.5) and (2.6), it can be seen that

$$
h_{jt}(x) \le \exp(X_{jt}) \le g_{jt}(x).
\tag{2.7}
$$
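To make the per-term construction concrete, the sketch below (illustrative names and arbitrarily chosen data, not from the paper) computes $X^l_{jt}$, $X^u_{jt}$ and $A_{jt}$ for one exponent vector over a box and checks (2.7) at a few values of $X_{jt}$.

```python
import math

# Per-term relaxation used in Lf and Uf: for X_jt = c . x with x in a box
# [lo, hi], compute X^l, X^u and A_jt, then the affine bounds
# h_jt(X) <= exp(X) <= g_jt(X).

def term_bounds(c, lo, hi):
    Xl = sum(min(ci * l, ci * h) for ci, l, h in zip(c, lo, hi))
    Xu = sum(max(ci * l, ci * h) for ci, l, h in zip(c, lo, hi))
    A = (math.exp(Xu) - math.exp(Xl)) / (Xu - Xl)
    h = lambda X: A * (1.0 + X - math.log(A))       # underestimates exp(X)
    g = lambda X: math.exp(Xl) + A * (X - Xl)       # overestimates exp(X)
    return Xl, Xu, h, g

c, lo, hi = (1.0, -2.0), (0.0, -1.0), (1.0, 1.0)
Xl, Xu, h, g = term_bounds(c, lo, hi)               # here Xl = -2, Xu = 3
for X in (-2.0, 0.0, 1.5, 3.0):
    assert h(X) - 1e-9 <= math.exp(X) <= g(X) + 1e-9
```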

We can then derive the following theorem.

Theorem 2.1. Consider the functions $\phi(x, \lambda)$, $Lf(x, \lambda)$ and $Uf(x, \lambda)$ for any $x \in B \subseteq B^0$ and $\lambda \ge 0$. Then the following three statements are valid:

$$
\text{(i)}\quad Lf(x, \lambda) \le \phi(x, \lambda) \le Uf(x, \lambda), \quad \forall x \in B;
\tag{2.8}
$$

$$
\text{(ii)}\quad \lim_{\|\overline{x} - \underline{x}\| \to 0} \big|\phi(x, \lambda) - Lf(x, \lambda)\big| = 0, \quad \forall x \in B;
\tag{2.9}
$$

$$
\text{(iii)}\quad \lim_{\|\overline{x} - \underline{x}\| \to 0} \big|\phi(x, \lambda) - Uf(x, \lambda)\big| = 0, \quad \forall x \in B.
\tag{2.10}
$$

Proof. (i) From the definition of $Lf(x, \lambda)$ and (2.7), it is obvious that (2.8) holds.

(ii) First we show that the maximal error of the linear relaxation of $\exp(X_{jt})$ by $h_{jt}(x)$ and $g_{jt}(x)$ is

$$
E_{\max} = \exp(X^l_{jt})\big(1 - C_{jt} + C_{jt}\ln C_{jt}\big),
\tag{2.11}
$$

where

$$
C_{jt} = \frac{\exp(X^u_{jt} - X^l_{jt}) - 1}{X^u_{jt} - X^l_{jt}}, \qquad \forall t = 1, \ldots, T_j,\; j = 0, 1, \ldots, m.
$$

890

S.-J. Qu et al. / Applied Mathematics and Computation 184 (2007) 886–894

The errors of the linear relaxation of $\exp(X_{jt})$ by $h_{jt}(x)$ and $g_{jt}(x)$ can be written, respectively, as

$$
\Delta h_{jt}(x) = \exp(X_{jt}) - h_{jt}(x) = \exp(X_{jt}) - A_{jt}\big(1 + X_{jt} - \ln A_{jt}\big),
$$
$$
\Delta g_{jt}(x) = g_{jt}(x) - \exp(X_{jt}) = \exp(X^l_{jt}) - \exp(X_{jt}) + A_{jt}\big(X_{jt} - X^l_{jt}\big).
$$

Since $\Delta h_{jt}(x)$ is a convex function of $X_{jt}$, for any $X_{jt} \in [X^l_{jt}, X^u_{jt}]$ its maximal error, denoted by $E^h_{\max}$, occurs at the point $X^l_{jt}$ or $X^u_{jt}$. By computation we get

$$
\Delta h_{jt}(X^u_{jt}) = \exp(X^u_{jt}) - A_{jt}\big(1 + X^u_{jt} - \ln A_{jt}\big) = E_{\max},
$$
$$
\Delta h_{jt}(X^l_{jt}) = \exp(X^l_{jt}) - A_{jt}\big(1 + X^l_{jt} - \ln A_{jt}\big) = E_{\max}.
$$

On the other hand, since $\Delta g_{jt}(x)$ is a concave function of $X_{jt}$, for any $X_{jt} \in [X^l_{jt}, X^u_{jt}]$ its maximal error, denoted by $E^g_{\max}$, occurs at the point $\ln A_{jt}$, and its value is

$$
E^g_{\max} = \exp(X^l_{jt}) - A_{jt}\big(1 + X^l_{jt} - \ln A_{jt}\big) = E_{\max}.
$$

So $E^h_{\max} = E^g_{\max} = E_{\max}$, i.e., assertion (2.11) is true. Since $C_{jt} \to 1$ as $X^u_{jt} - X^l_{jt} \to 0$, (2.11) gives $E_{\max} \to 0$ as $X^u_{jt} - X^l_{jt} \to 0$. Therefore, by the definitions of $Lf(x, \lambda)$ and $Uf(x, \lambda)$, assertions (ii) and (iii) are both true. □

Remark. From Theorem 2.1, we can state the relaxed linear program of problem (2.4). Let $B^k = [\underline{x}^k, \overline{x}^k] \subseteq B^0$. We construct the corresponding approximating relaxed linear program of (2.4) on $B^k$, with $\lambda^k = (\lambda^k_1, \ldots, \lambda^k_m)^T \ge 0$ as the corresponding dual variables, as

$$
\alpha_{B^k} = \min_{x \in B^k} Lf(x, \lambda^k).
\tag{2.12}
$$
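The closed-form bound (2.11) can be spot-checked numerically. The short script below, an illustration over one arbitrarily chosen interval, compares $E_{\max}$ with the worst gap of each affine bound on a fine grid.

```python
import math

# Numeric check of (2.11): the worst gap between exp and either affine bound
# over [Xl, Xu] equals exp(Xl) * (1 - C + C*ln(C)), C = (exp(Xu - Xl) - 1)/(Xu - Xl).

Xl, Xu = -1.0, 2.0
A = (math.exp(Xu) - math.exp(Xl)) / (Xu - Xl)
C = (math.exp(Xu - Xl) - 1.0) / (Xu - Xl)
Emax = math.exp(Xl) * (1.0 - C + C * math.log(C))

grid = [Xl + (Xu - Xl) * i / 10_000 for i in range(10_001)]
gap_h = max(math.exp(X) - A * (1 + X - math.log(A)) for X in grid)
gap_g = max(math.exp(Xl) + A * (X - Xl) - math.exp(X) for X in grid)
assert abs(gap_h - Emax) < 1e-3 and abs(gap_g - Emax) < 1e-3
```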

3. Algorithmic statement

The algorithm presented below will subsequently be referred to as the GDCAB algorithm. The branch-and-bound approach is based on partitioning the set $B^0$ into sub-hyperrectangles, each associated with a node of the branch-and-bound tree, and each node is associated with a Lagrangian relaxation subproblem on its sub-hyperrectangle. Hence, at any stage $k$ of the algorithm, suppose that we have a collection of active nodes denoted by $Q_k$, each associated with a hyperrectangle $B \subseteq B^0$. For each such node $B$, we will have computed a lower bound $\alpha_B$ on the optimal value of problem (2.1) via the solution of problem (2.12), with $\lambda_B$ the current estimate of the Lagrange multipliers within $B$. Let $\alpha$ and $\beta$ be the overall lower and upper bounds on the problem's optimal objective function value. Initial values for $\alpha$ and $\beta$ can be computed by minimizing $Lf(x, \lambda)$ and $Uf(x, \lambda)$ over $B^0$, respectively. No true upper bound can be computed in advance, since any given problem instance could be infeasible; whenever the solution of the Lagrangian relaxation turns out to be feasible for problem (2.1), we update the incumbent upper bound $\beta$ if necessary. The active node collection $Q_k$ then satisfies $\alpha_B < \beta$ for all $B \in Q_k$ at each stage $k$. We select an active node, partition its associated hyperrectangle into two sub-hyperrectangles as described below, and compute the lower bounds for each new node as before. Upon fathoming any nonimproving nodes, we obtain the collection of active nodes for the next stage, and this process is repeated until convergence.

Algorithm GDCAB (a branch-and-bound algorithm for problem (2.1))

Initialization. Let $Q_0 = \{B^0\}$ and $\lambda^1 = 0$. Compute initial values for $\alpha$ and $\beta$.

Iteration $k$.

(1) Select $l = \arg\min_j \{\alpha_{B_j} : B_j \in Q_k\}$.

(2) Find lower bounds for region $B_l$: set $h = 1$, $\lambda^1 = \lambda_{B_l}$, and $f^0 = \alpha_{B_l}$;


while $\alpha_{B_l} \le \beta$ and $f^h > f^{h-1}$ do

$$
\phi(x, \lambda^h) = f_0(x) + \sum_{j=1}^{m} \lambda^h_j \big(f_j(x) - 1\big),
$$

with $Lf(x, \lambda^h)$ a Lagrangian relaxation of $\phi(x, \lambda^h)$. Set

$$
x^h = \arg\min\{Lf(x, \lambda^h) : x \in B_l\} \quad\text{and}\quad f^h = \min\{Lf(x, \lambda^h) : x \in B_l\}.
$$

If $h = 1$ or $f^h > \alpha_{B_l}$, let $\tilde{x} = x^h$ and $\tilde{\lambda} = \lambda^h$.
If $f^h > \alpha_{B_l}$, set $\alpha_{B_l} = f^h$.
If $\alpha_{B_l} > \beta$, then $Q_k = Q_k \setminus B_l$ and go to (5).
If $x^h$ is feasible and $f_0(x^h) \le \beta$, then set $\beta = f_0(x^h)$ and update the incumbent solution. Let

$$
\xi^h_j =
\begin{cases}
f_j(x^h) - 1, & \text{if } f_j(x^h) > 1 \text{ or } \lambda^h_j > 0, \\
0, & \text{otherwise.}
\end{cases}
\tag{3.1}
$$

Let $\lambda^{h+1}_j = \max\{0,\; \lambda^h_j + \nu_h \xi^h_j\}$, where $\nu_h$ is the step-size parameter; set $h = h + 1$. end while

(3) Using $\tilde{x}$ and $\tilde{\lambda}$ as primal and dual starting points, apply Newton's method to solve the Karush–Kuhn–Tucker necessary conditions. If a feasible solution is found with a better objective function value than the incumbent solution, update $\beta$ and the incumbent solution.

(4) Apply the interval Newton method to find all solutions of the Fritz John optimality conditions in $B_l$. If the interval Newton method identifies a feasible solution with a better objective function value than the incumbent solution, update $\beta$ and the incumbent solution. If the interval Newton method converges to a single point or proves that no solution of the optimality conditions exists within $B_l$, then set $Q_k = Q_k \setminus B_l$. Otherwise, partition $B_l$ into two subregions $B_{2k}$ and $B_{2k+1}$ using any exhaustive partitioning process, let $\alpha_{B_{2k}} = \alpha_{B_{2k+1}} = \alpha_{B_l}$ and $\lambda_{B_{2k}} = \lambda_{B_{2k+1}} = \lambda_{B_l}$, and set

$$
Q_k = (Q_k \setminus B_l) \cup \{B_{2k}, B_{2k+1}\}.
$$

(5) If $Q_k = \emptyset$, stop: if an incumbent solution has been found, the algorithm has found an optimal solution; otherwise, the problem is infeasible. If $|Q_k| \ge 1$, go to (1).
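For concreteness, the following is a greatly simplified sketch of the GDCAB main loop, written here for illustration only: it is not the authors' C++ implementation, all names are assumptions, it performs a single crude subgradient step per node with step size $1/k$, it bisects the longest box edge, and it omits the inner while loop and the Newton and interval-Newton refinements of steps (3) and (4).

```python
import math
import heapq
from itertools import count

def fval(terms, x):
    """f(x) = sum_t a_t * exp(c_t . x), with terms a list of (a, c) pairs."""
    return sum(a * math.exp(sum(ci * xi for ci, xi in zip(c, x)))
               for a, c in terms)

def affine_term(a, c, lo, hi):
    """Gradient and constant of an affine underestimator of a*exp(c.x) on a box."""
    Xl = sum(min(ci * l, ci * h) for ci, l, h in zip(c, lo, hi))
    Xu = sum(max(ci * l, ci * h) for ci, l, h in zip(c, lo, hi))
    if Xu - Xl < 1e-12:                      # degenerate box: term is constant
        return [0.0] * len(c), a * math.exp(Xl)
    A = (math.exp(Xu) - math.exp(Xl)) / (Xu - Xl)
    # tangent h_jt if the term enters positively, chord g_jt otherwise
    b = A * (1.0 - math.log(A)) if a > 0 else math.exp(Xl) - A * Xl
    return [a * A * ci for ci in c], a * b

def lower_bound(funcs, lam, lo, hi):
    """Minimize Lf(x, lam) over the box [lo, hi]; lam[0] = 1 as in the paper."""
    w, const = [0.0] * len(lo), -sum(lam[1:])
    for lj, terms in zip(lam, funcs):
        if lj > 0.0:
            for a, c in terms:
                g, b = affine_term(lj * a, c, lo, hi)
                w = [wi + gi for wi, gi in zip(w, g)]
                const += b
    x = [l if wi > 0 else h for wi, l, h in zip(w, lo, hi)]  # vertex rule
    return const + sum(wi * xi for wi, xi in zip(w, x)), x

def gdcab_sketch(funcs, lo, hi, iters=5000, tol=1e-4):
    """funcs[0] = f_0, funcs[1:] = constraints f_j(x) <= 1; box in x-space."""
    beta, tick = math.inf, count()
    heap = [(-math.inf, next(tick), lo, hi, [0.0] * (len(funcs) - 1))]
    for k in range(1, iters + 1):
        if not heap:
            break
        _, _, lo, hi, lam = heapq.heappop(heap)
        alpha, x = lower_bound(funcs, [1.0] + lam, lo, hi)
        if alpha >= beta - tol:
            continue                                     # fathom this node
        if all(fval(f, x) <= 1.0 + 1e-8 for f in funcs[1:]):
            beta = min(beta, fval(funcs[0], x))          # new incumbent
        lam = [max(0.0, lj + (fval(f, x) - 1.0) / k)     # crude subgradient step
               for lj, f in zip(lam, funcs[1:])]
        i = max(range(len(lo)), key=lambda j: hi[j] - lo[j])
        mid = 0.5 * (lo[i] + hi[i])                      # bisect longest edge
        for lo_i, hi_i in ((lo[i], mid), (mid, hi[i])):
            clo, chi = list(lo), list(hi)
            clo[i], chi[i] = lo_i, hi_i
            heapq.heappush(heap, (alpha, next(tick), clo, chi, lam))
    return beta
```

Because $h_{jt}$ and $g_{jt}$ share the slope $A_{jt}$, $Lf(\cdot, \lambda)$ is affine in $x$ on each box, so its exact minimizer is a box vertex read off from the signs of the gradient components; this is what makes the linear relaxation cheaper to minimize than a general convex relaxation.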

Remark. This algorithm integrates three techniques for solving SGP problems. (1) Lagrangian relaxation is used to provide lower bounds. (2) Newton's method is used to obtain local solutions of the primal problem. (3) The interval Newton method is used to facilitate convergence to the global solution. As its name indicates, the interval Newton method is similar to Newton's method for solving systems of equations. While Newton's method begins with a vector of values $x^k$ and finds a new vector of values $x^{k+1}$, the interval Newton method begins with a vector of intervals $B^k = [\underline{x}^k, \overline{x}^k]$ and finds a new vector of intervals $B^{k+1} = [\underline{x}^{k+1}, \overline{x}^{k+1}]$. A full description of the method can be found in [24,25]. Like Newton's method, the interval Newton method does not always converge. However, unlike Newton's method, when the interval Newton method is successful, it allows a strong conclusion to be made regarding the original bounded region $B^0$: (1) if $B^k \cap B^{k+1} = \emptyset$, then $B^0$ does not contain any solution of the given system of equations; (2) if the sequence of interval vectors collapses to a single point, then that point is the only solution of the given system of equations within $B^0$.
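Since the remark leans on the interval Newton method, a minimal univariate sketch may help. It is an illustration only: the method used in step (4) is multivariate (see [24,25]), and the non-contracting fallback below when the derivative interval contains zero is an assumption of this sketch rather than part of the real method, which uses extended interval division there.

```python
def interval_newton_step(f, dlo, dhi, a, b):
    """One interval Newton step for f on [a, b]; [dlo, dhi] must enclose f' on [a, b]."""
    m = 0.5 * (a + b)
    fm = f(m)
    if dlo <= 0.0 <= dhi:            # extended division would be needed here;
        return a, b                  # this sketch simply declines to contract
    lo, hi = sorted((m - fm / dlo, m - fm / dhi))
    return max(a, lo), min(b, hi)    # an empty result (lo > hi) proves: no root

# Contract an enclosure of sqrt(2), the root of f(x) = x*x - 2 on [1, 2];
# f'(x) = 2x is enclosed by [2a, 2b] on a positive interval [a, b].
a, b = 1.0, 2.0
for _ in range(6):
    a, b = interval_newton_step(lambda x: x * x - 2.0, 2.0 * a, 2.0 * b, a, b)
print(a, b)   # brackets 1.41421356... ever more tightly
```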


4. The convergence of the algorithm

To obtain global convergence we always suppose that the feasible set of problem (2.1) is nonempty and that the step-size parameters chosen in the algorithm satisfy the well-known divergent-series rule. For each region $B_l$, let $t_l$ be its level within the branch-and-bound tree (i.e., if $B_l$ has three ancestors, then $t_l = 4$). Let $\nu_h = t_l^{-1}$ and $\lambda^{h+1}_j = \max\{0, \lambda^h_j + \nu_h \xi^h_j\}$; then, if the algorithm does not stop finitely, $\nu_h \to 0$ and $\sum_{h=1}^{\infty} \nu_h = \infty$.

Theorem 4.1. If the algorithm stops finitely, then it terminates with a global optimal solution.

Proof. If $Q_k = \emptyset$, then one of the following three statements must be true for each region $B_l$ at the bottom of the branch-and-bound tree:

(1) the interval Newton method demonstrates that there are no points within the region satisfying the necessary conditions for optimality;
(2) the interval Newton method identifies the only point within the region satisfying the necessary conditions;
(3) the lower bound generated by the linear underestimate of the Lagrangian, $\alpha_{B_l}$, is greater than $\beta$. If an incumbent solution has been found, then $\alpha_{B_l}$ is greater than the objective function value of a feasible solution of the primal problem; otherwise, the lower bound $\alpha_{B_l}$ is greater than the largest possible objective function value of any feasible solution.

If an incumbent solution exists, its optimality follows immediately. Otherwise, the problem must be infeasible. □

For the convenience of our discussion, suppose that $\omega$ is any infinite path down the branch-and-bound tree, and that $\{\lambda^{\omega(k)}\}_{k=1}^{\infty}$ and $\{x^{\omega(k)}\}_{k=1}^{\infty}$ are the infinite sequences of dual and primal solutions, respectively, found for the nested partitions $B_{\omega(1)} \supset B_{\omega(2)} \supset B_{\omega(3)} \supset \cdots$ along path $\omega$. Furthermore, suppose $\lim_{k\to\infty} x^{\omega(k)} = \tilde{x}^{\omega}$ and $\lim_{k\to\infty} \lambda^{\omega(k)} = \tilde{\lambda}^{\omega}$. Next, we show that the limit points of the sequences of dual and primal solutions satisfy the strict complementarity condition, and that the limit point of the sequence of primal solutions is both feasible and globally optimal.

Theorem 4.2. The limit points $\tilde{x}^{\omega}$ and $\tilde{\lambda}^{\omega}$ satisfy:

(i) if $f_j(\tilde{x}^{\omega}) < 1$, then $\tilde{\lambda}^{\omega}_j = 0$, $j = 1, \ldots, m$;
(ii) $f_j(\tilde{x}^{\omega}) \le 1$, $j = 1, \ldots, m$;
(iii) $\tilde{x}^{\omega}$ is a global optimal solution.

Proof. (i) Define $\varepsilon_j = 1 - f_j(\tilde{x}^{\omega})$ and let $0 < \rho < 1$. Since $f_j(x)$ is continuous and $\lim_{k\to\infty}(\overline{x}^{\omega(k)} - \underline{x}^{\omega(k)}) = 0$, there will eventually be some $\bar{k}$ such that $1 - f_j(x) \ge \rho\varepsilon_j$ for all $x \in B_{\omega(k)}$, $k \ge \bar{k}$. Thus $\lambda^{\omega(\bar{k}+1)}_j \le \max\{0, \lambda^{\omega(\bar{k})}_j - \nu^{\omega(\bar{k})}(\rho\varepsilon_j)\}$ and $\lambda^{\omega(\bar{k}+r)}_j \le \max\{0, \lambda^{\omega(\bar{k})}_j - (\rho\varepsilon_j)\sum_{i=0}^{r-1}\nu^{\omega(\bar{k}+i)}\}$. Since $\sum_{i=0}^{\infty}\nu^{\omega(\bar{k}+i)} = \infty$, it is clear that $\lim_{k\to\infty}\lambda^{\omega(k)}_j = 0$.

Next, we prove that the limit point of any infinite subsequence is feasible.

(ii) If $\tilde{x}^{\omega}$ is infeasible, then there must be some constraint $j$ for which $f_j(\tilde{x}^{\omega}) - 1 = \varepsilon_j > 0$. As in the proof of (i), there is some integer $\bar{k}$ such that $f_j(x) - 1 \ge \rho\varepsilon_j$ for some constant $\rho \in (0, 1)$ and all $x \in B_{\omega(k)}$, $k \ge \bar{k}$. Thus the dual solutions satisfy $\lambda^{\omega(k+1)}_j \ge \lambda^{\omega(k)}_j + \nu^{\omega(k)}(\rho\varepsilon_j)$, and it is clear that $\lim_{k\to\infty}\lambda^{\omega(k)}_j = \infty$. According to (i), as $k \to \infty$ no constraint that is satisfied with slack has a nonzero multiplier. Hence $\lim_{k\to\infty}\phi(x^{\omega(k)}, \lambda^{\omega(k)}) \ge \lim_{k\to\infty}\big(f_0(x^{\omega(k)}) + \lambda^{\omega(k)}_j[f_j(x^{\omega(k)}) - 1]\big) = \infty$, where $\phi(x, \lambda)$ is defined as in (2.3). But Theorem 2.1(ii) implies that $\lim_{k\to\infty} Lf(x^{\omega(k)}, \lambda^{\omega(k)}) = \lim_{k\to\infty}\phi(x^{\omega(k)}, \lambda^{\omega(k)}) = \infty$. This is impossible, since the region $B_{\omega(k)}$ would be fathomed once its lower bound exceeded the value of $\beta$.
Thus, there cannot be an infinite subsequence whose limit point is infeasible. Third, we show that $\tilde{x}^{\omega}$ is a global optimal solution.


(iii) The conclusion of (ii) ensures that $f_j(\tilde{x}^{\omega}) \le 1$, $j = 1, \ldots, m$. Hence $f_0(\tilde{x}^{\omega}) \ge f^*$, where $f^*$ is the global optimal value of (2.1). From the algorithm we have $Lf(x^{\omega(k)}, \lambda^{\omega(k)}) \le f^*$ for all $k$. This and Theorem 2.1(ii) yield $\lim_{k\to\infty} Lf(x^{\omega(k)}, \lambda^{\omega(k)}) = \lim_{k\to\infty}\phi(x^{\omega(k)}, \lambda^{\omega(k)}) \le f^*$. By the conclusion of (i), $\lim_{k\to\infty}\phi(x^{\omega(k)}, \lambda^{\omega(k)}) = f_0(\tilde{x}^{\omega})$. Thus $f_0(\tilde{x}^{\omega}) = f^*$ and $\tilde{x}^{\omega}$ is a global optimal solution. □

5. Numerical experiments

In this section, we apply Algorithm GDCAB to the following problems. Examples 3 and 4 are common test problems (including engineering process control and design problems). The numerical experiments show that our method is efficient. The algorithm is coded in C++ and the tests were run on a PC (CPU main frequency 1.43 GHz, 256 MB memory) under VC++ 6.0. The numerical results obtained by Algorithm GDCAB are given in Table 1. In this table, IN denotes the iteration number, t* denotes the approximate solution satisfying the stopping rule, and h0 denotes the value of the function h0 at t*.

Example 1 [23]

$$
\begin{cases}
\min\; h_0(t) = 0.5\, t_1 t_2^{-1} - t_1 - 5\, t_2^{-1} \\
\text{s.t.}\; h_1(t) = 0.01\, t_2 t_3^{-1} + 0.01\, t_1 + 0.0005\, t_1 t_3 \le 1, \\
\quad 70 \le t_1 \le 150, \quad 1 \le t_2 \le 30, \quad 0.5 \le t_3 \le 21.
\end{cases}
\tag{5.1}
$$

Example 2 [22]

$$
\begin{cases}
\min\; h_0(t) = t_3^{0.8} t_4^{1.2} \\
\text{s.t.}\; h_1(t) = t_1 t_4^{-1} + t_2^{-1} t_4^{-1} \le 1, \\
\quad h_2(t) = t_1^{-2} t_3^{-1} - t_2 t_3^{-1} \le 1, \\
\quad 0.1 \le t_1 \le 1, \quad 5 \le t_2 \le 10, \quad 8 \le t_3 \le 15, \quad 0.01 \le t_4 \le 1.
\end{cases}
\tag{5.2}
$$

Example 3 [23]

$$
\begin{cases}
\min\; h_0(t) = 1.68 \times 10^2\, t_1 t_2 + 3.6512 \times 10^4\, t_1 t_2 t_3^{-1} + 4 \times 10^4\, t_4^{-1} \\
\text{s.t.}\; h_1(t) = 1.0425\, t_1 t_2^{-1} \le 1, \\
\quad h_2(t) = 3.5 \times 10^{-4}\, t_1 t_2 \le 1, \\
\quad h_3(t) = 1.25\, t_1^{-1} t_4 + 41.63\, t_1^{-1} \le 1, \\
\quad 40 \le t_1 \le 44, \quad 40 \le t_2 \le 45, \quad 60 \le t_3 \le 70, \quad 0.1 \le t_4 \le 1.4.
\end{cases}
\tag{5.3}
$$

Example 4 [23]

$$
\begin{cases}
\min\; h_0(t) = 5 t_1 + 5 \times 10^4\, t_1^{-1} + 20 t_2 + 7.2 \times 10^4\, t_2^{-1} + 1.44 \times 10^5\, t_3^{-1} \\
\text{s.t.}\; h_1(t) = 4\, t_1^{-1} + 32\, t_2^{-1} + 120\, t_3^{-1} \le 1, \\
\quad 1 \le t_1, t_2, t_3 \le 100.
\end{cases}
\tag{5.4}
$$

Table 1
Numerical report

Example   IN     t*                                      h0
1         1754   (88.6274, 7.9621, 1.3215)               −83.6898
2         175    (0.1015, 7.31972, 8.0169, 0.2395)       0.9514
3         3014   (43.0187, 44.8491, 66.4581, 1.1082)     1.4202791556 × 10^5
4         654    (107.9543, 85.4785, 204.4784)           4.2590484 × 10^3
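As a quick sanity check of Table 1, substituting the reported t* into h0 of (5.1), (5.2) and (5.4) reproduces the tabulated objective values. The script below is illustrative and not part of the original experiments.

```python
import math

# Substitute the reported solutions t* of Table 1 into h0 of (5.1), (5.2), (5.4).

t = (88.6274, 7.9621, 1.3215)                          # Example 1
assert abs((0.5 * t[0] / t[1] - t[0] - 5.0 / t[1]) - (-83.6898)) < 1e-3

t = (0.1015, 7.31972, 8.0169, 0.2395)                  # Example 2
assert abs(t[2] ** 0.8 * t[3] ** 1.2 - 0.9514) < 1e-3

t = (107.9543, 85.4785, 204.4784)                      # Example 4
h0 = 5 * t[0] + 5e4 / t[0] + 20 * t[1] + 7.2e4 / t[1] + 1.44e5 / t[2]
assert abs(h0 - 4.2590484e3) < 0.1
print("reported objective values reproduced")
```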


6. Conclusions

In this paper, a new branch-and-bound algorithm based on Lagrangian duality is proposed for solving signomial geometric programming problems. The algorithm relies on the fact that a linear relaxation of the Lagrangian can be constructed and that an exhaustive partitioning strategy can reduce the duality gap to any specified tolerance. The algorithm was shown to converge to the global minimum through the successive refinement of a linear relaxation of the Lagrangian and the solution of a series of linear programming problems. The proposed approach was applied to several test problems; in all cases, convergence to the global minimum was achieved.

References

[1] P. Hansen, B. Jaumard, Reduction of indefinite quadratic programs to bilinear programs, Journal of Global Optimization 2 (1) (1992) 41–60.
[2] C.S. Beightler, D.T. Philips, Applied Geometric Programming, John Wiley and Sons, New York, NY, 1976.
[3] M. Avriel, A.C. Williams, An extension of geometric programming with applications in engineering optimization, Journal of Engineering Mathematics 5 (3) (1971) 187–199.
[4] T.R. Jefferson, C.H. Scott, Generalized geometric programming applied to problems of optimal control: I. Theory, Journal of Optimization Theory and Applications 26 (1978) 117–129.
[5] N.K. Jha, Geometric programming based robot control design, Computers and Industrial Engineering 29 (1–4) (1995) 631–635.
[6] A.I. Sönmez, A. Baykasoglu, T. Dereli, I.H. Filiz, Dynamic optimization of multipass milling operations via geometric programming, International Journal of Machine Tools and Manufacture 39 (1999) 297–320.
[7] C.H. Scott, T.R. Jefferson, Allocation of resources in project management, International Journal on Systems Science 26 (1995) 413–420.
[8] C.D. Maranas, C.A. Floudas, Global optimization in generalized geometric programming, Computers and Chemical Engineering 21 (4) (1997) 351–369.
[9] M.J. Rijckaert, X.M. Martens, Analysis and optimization of the Williams–Otto process by geometric programming, AIChE Journal 20 (4) (1974) 742–750.
[10] J.C. Choi, D.L. Bricker, Effectiveness of a geometric programming algorithm for optimization of machining economics models, Computers & Operations Research 23 (10) (1996) 961–975.
[11] H. El Barmi, R.L. Dykstra, Restricted multinomial maximum likelihood estimation based upon Fenchel duality, Statistics and Probability Letters 21 (1994) 121–130.
[12] D.L. Bricker, K.O. Kortanek, L. Xu, Maximum likelihood estimates with order restrictions on probabilities and odds ratios: a geometric programming approach, in: Applied Mathematical and Computational Science, The University of Iowa, Iowa City, IA, 1995.
[13] U. Passy, Generalized weighted mean programming, SIAM Journal on Applied Mathematics 20 (1971) 763–778.
[14] U. Passy, D.J. Wilde, Generalized polynomial optimization, SIAM Journal on Applied Mathematics 15 (5) (1967) 1344–1356.
[15] K.O. Kortanek, X. Xu, Y. Ye, An infeasible interior-point algorithm for solving primal and dual geometric programs, Mathematical Programming 76 (1997) 155–181.
[16] C.A. Floudas, V. Visweswaran, Quadratic optimization, in: R. Horst, P.M. Pardalos (Eds.), Handbook of Global Optimization, Nonconvex Optimization and its Applications, Kluwer Academic Publishers, 1995, pp. 217–270.
[17] H.D. Sherali, Global optimization of nonconvex polynomial programming problems having rational exponents, Journal of Global Optimization 12 (1998) 267–283.
[18] H.D. Sherali, A reformulation–convexification approach for solving nonconvex quadratic programming problems, Journal of Global Optimization 7 (1995) 1–31.
[19] A. Ben-Tal, G. Eiger, V. Gershovitz, Global minimization by reducing the duality gap, Mathematical Programming 63 (1994) 193–212.
[20] M. Dür, R. Horst, Lagrange duality and partitioning techniques in nonconvex global optimization, Journal of Optimization Theory and Applications 95 (1997) 347–369.
[21] O. Barrientos, R. Correa, An algorithm for global minimization of linearly constrained quadratic functions, Journal of Global Optimization 16 (2000) 77–93.
[22] R.J. Duffin, E.L. Peterson, Geometric programming with signomials, Journal of Optimization Theory and Applications 11 (1) (1973) 3–35.
[23] M.J. Rijckaert, X.M. Martens, Comparison of generalized geometric programming algorithms, Journal of Optimization Theory and Applications 26 (1978) 205–241.
[24] Y. Lin, M.A. Stadtherr, Advances in interval methods for deterministic global optimization in chemical engineering, Journal of Global Optimization 29 (2004) 281–296.
[25] N. Revol, Interval Newton iteration in multiple precision for the univariate case, Numerical Algorithms 34 (2003) 417–426.