Solving sum of quadratic ratios fractional programs via monotonic function

Applied Mathematics and Computation 212 (2009) 234–244

Contents lists available at ScienceDirect

Applied Mathematics and Computation
journal homepage: www.elsevier.com/locate/amc
Peiping Shen *, Yongqiang Chen, Yuan Ma
College of Mathematics and Information Science, Henan Normal University, Xinxiang 453007, PR China


Abstract

In this paper, a branch-reduce-bound algorithm is proposed for globally solving a sum of quadratic ratios fractional program with nonconvex quadratic constraints. Owing to its intrinsic difficulty, little work has so far been devoted to globally solving this problem. The proposed algorithm is based on reformulating the problem as a monotonic optimization problem, and the optimal solution it provides is guaranteed to be feasible and to be close to the actual optimal solution. Convergence of the algorithm is shown, and numerical experiments are reported to demonstrate the feasibility of the proposed algorithm. © 2009 Elsevier Inc. All rights reserved.

Keywords: Fractional programming; Sum of ratios; Nonisolated optimal solution; Branch-reduce-bound; Monotonic optimization

1. Introduction

Consider the following sum of quadratic ratios problem with nonconvex quadratic constraints:

(P1)  min F(y) = Σ_{j=1}^{p} δ_j n_j(y)/d_j(y)
      s.t.  F_m(y) ≥ 0,  m = 1, …, M,
            y ∈ X^0 = {y | 0 < y_i^l ≤ y_i ≤ y_i^u < ∞, i = 1, …, n_0},

where for each j = 1, …, p and each m = 1, …, M,

n_j(y) = c_j + b_j^T y + y^T A_j y,  d_j(y) = r_j + e_j^T y + y^T D_j y,  F_m(y) = h_m + w_m^T y + y^T G_m y,

b_j, e_j, w_m ∈ R^{n_0}; δ_j, c_j, r_j and h_m are arbitrary real numbers; and A_j, D_j, G_m are n_0 × n_0 matrices. It follows that the constraint set of problem (P1) is nonconvex.

Sum of ratios fractional programs have attracted the interest of practitioners and researchers since at least the 1970s. During the past 10 years, interest in these problems has been especially strong. From a practical point of view, this is partly because these problems have spawned a wide variety of important applications, especially in transportation planning, government contracting, and finance and investment [1–7]. In addition, from a research point of view, these problems pose significant theoretical and computational challenges, mainly because they are global optimization problems: they generally possess multiple local optima that are not global optima [8].

In recent years, many global optimization algorithms have been proposed for solving special cases of problem (P1), most of them intended for the sum of linear ratios problem without the coefficients δ_j. For instance, Charnes and Cooper

* Corresponding author. E-mail address: [email protected] (P. Shen).
0096-3003/$ - see front matter © 2009 Elsevier Inc. All rights reserved. doi:10.1016/j.amc.2009.02.024


showed that linear fractional programming problems can be solved by a variant of the simplex algorithm [9,10]. At present there exist a number of algorithms for globally solving the sum of ratios problem in which the numerators and denominators are affine functions or the feasible region is a polyhedron [11–13]. In addition, several algorithms have been proposed for solving nonlinear fractional programming problems [14–18], but in most of the problems considered the feasible regions are polyhedra or convex sets. Recently, using Lagrangian relaxation, Qu et al. [19] developed a branch and bound algorithm for solving problem (P1).

In this paper a branch-reduce-bound algorithm is proposed for globally solving problem (P1), based on the recently developed theory of monotonic optimization [20,21]. The main features of this algorithm are as follows. First, differently from the other methods reviewed above (see [14–19]), we present a transformation of the problem based on the characteristics of problem (P1): the original problem (P1) is equivalently reformulated as a monotonic optimization problem (P3) in the form studied in the recent papers [20,21]. Second, by using a special procedure for the monotonic optimization problem (P3), we propose a branch-reduce-bound algorithm for problem (P1). This algorithm consists in seeking the best nonisolated feasible solution. This solution, i.e. the nonisolated optimal solution computed by the algorithm, is guaranteed to be feasible and to be close to the actual optimal solution. Hence, the proposed algorithm can find a more appropriate approximate optimal solution, one that is also stable under small perturbations of the constraints. This underlines the importance of robustness for practical implementations of global optimization methods, because finding feasible and stable solutions is a fundamental question in global optimization. Third, compared with the method of Qu et al. [19], the numerical results are clearly improved by the proposed algorithm, both in the number of iterations and in the quality of the optimal solution.

The remainder of this paper is organized as follows. The next section converts problem (P1) into a monotonic optimization problem. Section 3 introduces the concept of nonisolated optimality; in addition, a method for finding such a nonisolated optimal solution is presented there. The rectangular branching process, the reducing process and the upper bounding process used in this approach are defined and studied in Section 4. Section 5 incorporates this approach into an algorithm for solving problem (P1) and shows the convergence properties of the proposed algorithm. In Section 6, we give the results of solving some numerical examples with the proposed algorithm.

2. Equivalent reformulation

A function f : R^n → R is said to be increasing if f(x') ≤ f(x) for all x', x ∈ R^n satisfying x' ≤ x, i.e. x'_i ≤ x_i for all i = 1, …, n. Any function that can be represented as the difference of two increasing functions is said to be a d.m. function. In this section we show that any problem (P1) can be transformed into an equivalent monotonic optimization problem with an increasing objective function and d.m. constraint functions.

For convenience of the following discussion, without loss of generality, let δ_j > 0 (j = 1, …, K) and δ_j < 0 (j = K+1, …, p).

Assumption 1. For each j = 1, …, p, suppose that n_j(y) > 0 and d_j(y) > 0 for all y ∈ X^0.

Based on Assumption 1, by using the Bernstein algorithm [12,13] or the algorithm GBQ [22], which serve as important tools for bounding nonconvex quadratic functions over a rectangle, we can obtain positive scalars L_j and U_j such that 0 < L_j ≤ d_j(y) ≤ U_j for all y ∈ X^0, for each j = 1, …, p. Next, by introducing an additional vector w = (w_1, …, w_p)^T ∈ R^p, we can convert problem (P1) into

(P2)  min G(y, w) = Σ_{j=1}^{K} δ_j n_j(y) w_j + Σ_{j=K+1}^{p} δ_j n_j(y) w_j
      s.t.  F_m(y) ≥ 0,  m = 1, …, M,
            H_j(y, w) := w_j d_j(y) − 1 ≥ 0,  j = 1, …, K,
            H_j(y, w) := 1 − w_j d_j(y) ≥ 0,  j = K+1, …, p,
            y ∈ X^0,  1/U_j ≤ w_j ≤ 1/L_j,  j = 1, …, p.

The key equivalence result for problems (P1) and (P2) is given by the following theorem.

Theorem 2.1. If (y*, w*) is a global optimal solution for problem (P2), then y* is a global optimal solution for problem (P1). Conversely, if y* is a global optimal solution for problem (P1), then (y*, w*) is a global optimal solution for problem (P2), where w*_j = 1/d_j(y*), j = 1, …, p.

Proof. The proof is similar to that of Theorem 2.1 in Ref. [15] and is omitted here. □
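The substitution w*_j = 1/d_j(y*) behind Theorem 2.1 is easy to check numerically. The following Python sketch (the paper's own experiments use MATLAB; the instance data below are invented purely for illustration) verifies that F(y) and G(y, w) coincide once w_j = 1/d_j(y):

```python
# Sketch: numerically check the (P1) -> (P2) link
#   F(y) = sum_j delta_j * n_j(y) / d_j(y)  equals  G(y, w) with w_j = 1/d_j(y).
# The instance (delta, n_j, d_j) is made up purely for illustration.

def F(y, delta, n_funcs, d_funcs):
    # objective of (P1): sum of weighted quadratic ratios
    return sum(dl * n(y) / d(y) for dl, n, d in zip(delta, n_funcs, d_funcs))

def G(y, w, delta, n_funcs):
    # objective of (P2): ratios replaced by products with w_j
    return sum(dl * n(y) * wj for dl, n, wj in zip(delta, n_funcs, w))

delta = [1.0, -2.0]
n_funcs = [lambda y: 1 + y[0] + y[0] ** 2, lambda y: 2 + y[1] ** 2]
d_funcs = [lambda y: 3 + y[0] * y[1],      lambda y: 1 + y[0] + y[1]]

y = [0.5, 1.5]
w = [1.0 / d(y) for d in d_funcs]          # w_j = 1/d_j(y)
assert abs(F(y, delta, n_funcs, d_funcs) - G(y, w, delta, n_funcs)) < 1e-12
```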


From Theorem 2.1, notice that in order to globally solve problem (P1) we may globally solve problem (P2) instead. In addition, it is easy to see that the global optimal values of problems (P1) and (P2) are equal. Hence, we only consider problem (P2) below.

Theorem 2.2. Any polynomial Q(y_1, y_2, …, y_n) (in particular any affine or quadratic function) is a d.m. function on any box [a, b] ⊂ R^n_+.


Proof. By grouping separately the terms with positive coefficients and those with negative coefficients, one can write Q(y) = Q^+(y) − Q^−(y), where each of Q^+, Q^− is a polynomial with nonnegative coefficients, hence an increasing function, on [a, b] ⊂ R^n_+. In the case of a linear function Q(y) = ⟨c, y⟩ we can write c = c^+ − c^− with c^+_i = max{0, c_i}, c^−_i = max{0, −c_i}, so that Q^+(y) = ⟨c^+, y⟩, Q^−(y) = ⟨c^−, y⟩. □

Note that the objective function and the constraint functions of problem (P2) are all polynomials on R^{n_0+p}. Let G(y, w) = G^+(y, w) − G^−(y, w), F_m(y) = F^+_m(y) − F^−_m(y) and H_j(y, w) = H^+_j(y, w) − H^−_j(y, w) be the d.m. representations of G(y, w), F_m(y) and H_j(y, w) as described in problem (P2). Then, by introducing an additional variable z ∈ R into (P2), we obtain the following equivalent problem:
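The sign-splitting device used in this proof can be sketched in a few lines of Python (a hypothetical polynomial represented as a dict from exponent tuples to coefficients; this is not code from the paper):

```python
# Sketch of the proof device in Theorem 2.2: split a polynomial (given as
# {monomial exponents: coefficient}) into Q+ - Q-, each with nonnegative
# coefficients and hence increasing on the nonnegative orthant.
def dm_split(poly):
    q_plus  = {m: c for m, c in poly.items() if c > 0}
    q_minus = {m: -c for m, c in poly.items() if c < 0}
    return q_plus, q_minus

def evaluate(poly, y):
    # evaluate a bivariate polynomial stored as {(i, j): coeff}
    return sum(c * (y[0] ** i) * (y[1] ** j) for (i, j), c in poly.items())

# Q(y) = 2*y1^2 - 3*y1*y2 + y2  (an illustrative quadratic)
Q = {(2, 0): 2.0, (1, 1): -3.0, (0, 1): 1.0}
Qp, Qm = dm_split(Q)
y = (1.5, 0.5)
assert all(c > 0 for c in Qp.values()) and all(c > 0 for c in Qm.values())
assert abs(evaluate(Q, y) - (evaluate(Qp, y) - evaluate(Qm, y))) < 1e-12
```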

(P3)  min G^+(y, w) + z
      s.t.  F^+_m(y) − F^−_m(y) ≥ 0,  m = 1, …, M,
            H^+_j(y, w) − H^−_j(y, w) ≥ 0,  j = 1, …, p,
            G^−(y, w) + z ≥ 0,
            y ∈ X^0 = {y | 0 < y_i^l ≤ y_i ≤ y_i^u < ∞, i = 1, …, n_0},
            1/U_j ≤ w_j ≤ 1/L_j,  j = 1, …, p,
            −G^−(y^u, 1/L) ≤ z ≤ −G^−(y^l, 1/U).

Clearly the objective function of (P3) is increasing and each constraint function is a d.m. function. To globally solve problem (P2), the branch and bound algorithm to be proposed will globally solve problem (P3). The validity of this approach follows from the following result.

Theorem 2.3. If (y*, w*, z*) is a global optimal solution for problem (P3), then (y*, w*) is a global optimal solution for problem (P2). Conversely, if (y*, w*) is a global optimal solution for problem (P2), then (y*, w*, z*) is a global optimal solution for problem (P3), where z* = −G^−(y*, w*).

Proof. The proof of this theorem follows easily from the definitions of problems (P2) and (P3); it is therefore omitted. □

In addition, for the sake of simplicity, let x = (y, w, z) ∈ R^{n_0+p+1} with y ∈ R^{n_0}, w ∈ R^p, z ∈ R, and let n = n_0 + p + 1. Then, without loss of generality, by changing the notation, problem (P3) can be rewritten in the form

(P)  min {g(x) | h(x) ≥ 0, x ∈ X^0 = [x^l, x^u]},

where

X^0 = {x ∈ R^n | x^l_i ≤ x_i ≤ x^u_i, i = 1, …, n}
    = {x ∈ R^n | y^l_i ≤ x_i = y_i ≤ y^u_i, i = 1, …, n_0;
                 1/U_{i−n_0} ≤ x_i = w_{i−n_0} ≤ 1/L_{i−n_0}, i = n_0+1, …, n_0+p;
                 −G^−(y^u, 1/L) ≤ x_i = z ≤ −G^−(y^l, 1/U), i = n_0+p+1}

and

g(x) = G^+(y, w) + z    (1)

is an increasing polynomial (a polynomial with positive coefficients), while

h(x) = min_{k=1,…,k_0} {u_k(x) − v_k(x)},  k_0 = M + p + 1,    (2)

with u_k(x), v_k(x) increasing polynomials such that

u_k(x) = F^+_k(y)            if k = 1, …, M,
       = H^+_{k−M}(y, w)     if k = M+1, …, M+p,
       = G^−(y, w) + z       if k = M+p+1,    (3)

and

v_k(x) = F^−_k(y)            if k = 1, …, M,
       = H^−_{k−M}(y, w)     if k = M+1, …, M+p,
       = 0                   if k = M+p+1.    (4)

Given an ε > 0, for the monotonic optimization problem (P) and for any x^0 ∈ R^n, the approach described below terminates after finitely many steps, either with a nonisolated feasible solution x̄ of (P) satisfying g(x̄) ≤ g(x^0) − ε, or with evidence that no such x̄ exists. Hence, from now on we assume that the original problem (P1) has been converted to the form (P) with g(x) increasing and h(x) defined as in (1)–(4); a global optimization algorithm will then be presented for problem (P).
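The aggregation of all k_0 = M + p + 1 d.m. constraints into the single function h(x) = min_k {u_k(x) − v_k(x)} can be sketched as follows (a Python sketch, with two invented (u_k, v_k) pairs standing in for the polynomials of (3) and (4)):

```python
# Sketch: the single d.m. constraint h(x) >= 0 that aggregates the k0 = M+p+1
# constraints of (P3).  u_k, v_k are increasing; h(x) = min_k (u_k(x)-v_k(x)).
# The (u_k, v_k) pairs below are illustrative stand-ins, not the paper's data.
def make_h(u_list, v_list):
    def h(x):
        return min(u(x) - v(x) for u, v in zip(u_list, v_list))
    return h

u_list = [lambda x: x[0] + x[1] ** 2, lambda x: 2 * x[0] * x[1] + 1.0]
v_list = [lambda x: 0.5 * x[0],       lambda x: x[1]]

h = make_h(u_list, v_list)
# x is feasible for the aggregated constraint iff every u_k(x) >= v_k(x)
x = (1.0, 1.0)
assert abs(h(x) - 1.5) < 1e-12   # min(2 - 0.5, 3 - 1) = 1.5
```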


3. Nonisolated optimal solution

An isolated optimal solution, even if computable, is often difficult to implement in practice because of its instability under small perturbations of the constraints. Therefore, for solving problem (P) we consider only nonisolated feasible solutions of (P), which are the ones of interest from a practical point of view. This motivates the following definitions.

Definition 3.1. A nonisolated feasible solution x* of (P) is called a nonisolated optimal solution if g(x*) ≤ g(x) for all nonisolated feasible solutions x of (P), i.e. if

g(x*) = min {g(x) | x ∈ X*},

where X* denotes the set of all nonisolated feasible solutions of (P).

Definition 3.2. Assume

{x ∈ X^0 | h(x) > 0} ≠ ∅.    (5)

For ε ≥ 0, a nonisolated feasible solution x̄ of (P) is called a nonisolated ε-optimal solution if it satisfies

g(x̄) − ε ≤ inf {g(x) | h(x) ≥ ε, x ∈ X^0}.    (6)

Clearly, for ε = 0, a nonisolated feasible solution which is nonisolated ε-optimal is optimal.

The search for a nonisolated ε-optimal solution of (P) can be organized as follows: start from an initial nonisolated feasible solution (the best known so far), find a better nonisolated feasible solution, then reiterate the operation until evidence is obtained that no feasible solution better than the current best exists. Next, we show how this approach is implemented.

Let X = [a, b] be any subrectangle of X^0, and let V be the objective function value of the best nonisolated feasible solution x^0 ∈ X of (P) known so far (of course V ≤ g(b)). Given an ε > 0, we want to find a nonisolated feasible solution x̄ ∈ X such that g(x̄) ≤ V − ε, or else establish that no such x̄ exists.

Clearly, if g(a) ≥ V − ε then, since g is increasing, g(x) ≥ V − ε for all x ∈ X, so there is no x̄ ∈ X with g(x̄) < V − ε. If g(a) < V − ε and h(a) > 0, then a is a nonisolated feasible solution with objective function value less than V − ε. Therefore, we shall assume without loss of generality that

h(a) ≤ 0,  g(a) < V − ε.    (7)

Under this assumption, we consider the following auxiliary problem:

(Q)  max {h(x) | g(x) ≤ V − ε, x ∈ X^0 = [x^l, x^u]}.

Since the function h(x) is continuous, g(x) is continuous and increasing, and

{x ∈ X^0 | g(x) ≤ V − ε} = cl {x ∈ int X^0 | g(x) < V − ε},

it is clear that problem (Q) has no isolated feasible point. Therefore, solving problem (Q) is simpler than solving the original problem (P) from the nonisolated-optimality point of view. Furthermore, the key results relating (P) and (Q) are as follows.

Theorem 3.1. Under assumptions (5) and (7), let min(P) and max(Q) be the optimal values of problems (P) and (Q), respectively. Then:

(i) Any feasible solution x̃ of problem (Q) satisfying h(x̃) > 0 is a nonisolated feasible solution of problem (P) with g(x̃) ≤ V − ε.
(ii) If max(Q) < ε and x̂ is some nonisolated feasible solution of problem (P) with g(x̂) = V, then x̂ is a nonisolated ε-optimal solution of problem (P). If max(Q) < ε and V = g(x^u) + ε, then problem (P) has no nonisolated feasible solution.

Proof

(i) Since h(x^l) ≤ 0 < h(x̃), we have x̃ ≠ x^l, and every x = x^l + λ(x̃ − x^l) with 0 ≤ λ ≤ 1 satisfies x^l ≤ x ≤ x̃. Then, for every λ sufficiently close to 1, i.e. every such x sufficiently close to x̃, we have h(x) > 0, so x is a feasible solution of (P). Furthermore, g(x̃) ≤ V − ε because x̃ is feasible to (Q). Therefore, x̃ is a nonisolated feasible solution of (P) satisfying g(x̃) ≤ V − ε.
(ii) If max(Q) < ε, then

sup {h(x) | g(x) ≤ V − ε, x ∈ X^0} < ε,

so for every x ∈ X^0 satisfying h(x) ≥ ε we must have g(x) > V − ε = g(x̂) − ε. Therefore,

inf {g(x) | h(x) ≥ ε, x ∈ X^0} ≥ g(x̂) − ε.

This means that if x̂ is a nonisolated feasible solution of (P), then it is a nonisolated ε-optimal solution of (P).


If V = g(x^u) + ε, then {x ∈ X^0 | h(x) ≥ ε} = ∅, i.e. problem (P) has no nonisolated feasible solution, and the proof is complete. □

Theorem 3.1 provides valuable information: under assumptions (5) and (7), by solving (Q) we can determine whether or not a nonisolated feasible solution x of (P) exists such that g(x) ≤ V − ε. Thus, for solving problem (P) we only need to consider problem (Q) in what follows.

4. Key algorithm processes

To globally solve problem (Q), a branch-reduce-bound (BRB) algorithm will be proposed. This algorithm proceeds according to the standard branch and bound scheme with three key processes: branching, reducing (the partition sets) and bounding.

The branching process consists in a successive rectangular partition of the initial box X^0 = [x^l, x^u] following an exhaustive subdivision rule, i.e. one such that any infinite nested sequence of partition sets generated by the algorithm shrinks to a singleton. A commonly used exhaustive subdivision rule is the standard bisection.

The reducing process consists in applying valid cuts (referred to as reduction cuts) to reduce the size of the current partition set X = [a, b] ⊂ X^0 = [x^l, x^u]. The cuts aim at tightening the box containing the feasible portion currently still of interest.

The bounding process consists in estimating an upper bound UB(X) for the objective function value h(x) over the feasible portion contained in the valid reduction set X' = [a', b'] of a given partition set X = [a, b].

Next, we establish the reduction cut and upper bounding processes.

4.1. Reduction cut

At a given stage of the BRB algorithm for (Q), let X = [a, b] ⊂ X^0 be a rectangle generated during the partitioning procedure and still of interest. Restrict problem (P) to X = [a, b]:

min {g(x) | h(x) ≥ 0, x ∈ X = [a, b]}.

Given an ε > 0, if a nonisolated feasible solution of (P) in X^0 is known with objective function value V, then we would like to recognize whether or not the box [a, b] contains a nonisolated feasible solution of (P) with objective function value at most equal to V. Therefore, the search for a nonisolated feasible solution of (P) in [a, b] such that g(x) ≤ V − ε can be restricted to the set H_V ∩ [a, b], where

H_V := {x | g(x) ≤ V − ε, h(x) ≥ 0}.    (8)

Since h(x) = min_{k=1,…,k_0} {u_k(x) − v_k(x)} with u_k(x), v_k(x) increasing functions (see (2)–(4)), we can also write

H_V = {x | g(x) ≤ V − ε, u_k(x) − v_k(x) ≥ 0, k = 1, …, k_0}.

The reduction cut aims at replacing the rectangle [a, b] with a smaller rectangle [a', b'] ⊂ [a, b] without losing any point x ∈ H_V ∩ [a, b], i.e. such that H_V ∩ [a', b'] = H_V ∩ [a, b]. The rectangle [a', b'] satisfying this condition is denoted by red[a, b], i.e. red[a, b] = [a', b'].

To explain how red[a, b] = [a', b'] is obtained by the reduction cut, we first define the following three functions φ^i_k, ψ^i_k and π^i for each i = 1, …, n.

Definition 4.1. Given two boxes [a, b] and [a', b'] with [a', b'] ⊂ [a, b], the functions φ^i_k, ψ^i_k, π^i : [0, 1] → R are defined by

φ^i_k(α) = u_k(b − α(b_i − a_i)e^i),  k = 1, …, k_0,
ψ^i_k(β) = v_k(a' + β(b_i − a'_i)e^i),  k = 1, …, k_0,
π^i(β) = g(a' + β(b_i − a'_i)e^i),

where e^i denotes the ith unit vector, i.e. the vector with 1 in the ith position and 0 everywhere else, and the functions g(x), u_k(x), v_k(x) are given in (1), (3) and (4), respectively.

Theorem 4.1

(i) If g(a) > V − ε or min_{k=1,…,k_0} {u_k(b) − v_k(a)} < 0, then H_V ∩ [a, b] = ∅, i.e. red[a, b] = ∅.
(ii) If g(a) ≤ V − ε and min_{k=1,…,k_0} {u_k(b) − v_k(a)} ≥ 0, then red[a, b] = [a', b'] with

a' = b − Σ_{i=1}^{n} min_{k=1,…,k_0} {α^i_k} (b_i − a_i) e^i

and

b' = a' + Σ_{i=1}^{n} min_{k=1,…,k_0+1} {β^i_k} (b_i − a'_i) e^i,

where, for i = 1, …, n, k = 1, …, k_0,

α^i_k = ᾱ^i_k  if φ^i_k(α) is not constant and φ^i_k(ᾱ^i_k) = v_k(a) with ᾱ^i_k ∈ (0, 1),
      = 1     otherwise;

β^i_k = β̄^i_k  if ψ^i_k(β) is not constant and ψ^i_k(β̄^i_k) = u_k(b) with β̄^i_k ∈ (0, 1),
      = 1     otherwise;

and

β^i_{k_0+1} = β̄^i_{k_0+1}  if π^i(β) is not constant and π^i(β̄^i_{k_0+1}) = V − ε with β̄^i_{k_0+1} ∈ (0, 1),
            = 1            otherwise.

Proof

(i) If g(a) > V − ε, then g(x) ≥ g(a) > V − ε for every x ∈ [a, b]; if min_{k=1,…,k_0} {u_k(b) − v_k(a)} < 0, then

min_{k=1,…,k_0} {u_k(x) − v_k(x)} ≤ min_{k=1,…,k_0} {u_k(b) − v_k(a)} < 0

for every x ∈ [a, b]. In both cases, H_V ∩ [a, b] = ∅.

(ii) Let x = (x_1, …, x_i, …, x_n)^T ∈ [a, b] be any point satisfying u_k(x) ≥ v_k(x), k = 1, …, k_0, and g(x) ≤ V − ε. Let

α^i_{k'} := min_{k=1,…,k_0} {α^i_k},  β^i_{k''} := min_{k=1,…,k_0+1} {β^i_k},  i = 1, …, n.    (9)

We first show that x ≥ a'. Suppose, on the contrary, that x ≱ a'. Then there exists some i such that

x_i < a'_i = b_i − α^i_{k'}(b_i − a_i),  i.e.  x_i = b_i − α(b_i − a_i) with α > α^i_{k'}.    (10)

By the definition of α^i_{k'}, we consider the following two cases.

Case 1. If α^i_{k'} = 1, then from (10) we have x_i < a'_i = b_i − α^i_{k'}(b_i − a_i) = a_i, conflicting with x ∈ [a, b].

Case 2. If 0 < α^i_{k'} < 1, then from the definitions of α^i_k and α^i_{k'} we obtain

φ^i_{k'}(α^i_{k'}) = v_{k'}(a).    (11)

In addition, by the definition of φ^i_k(α), the function φ^i_{k'}(α) is strictly decreasing in the single variable α. It then follows from (10) and (11) that

u_{k'}(b − (b_i − x_i)e^i) = u_{k'}(b − α(b_i − a_i)e^i) = φ^i_{k'}(α) < φ^i_{k'}(α^i_{k'}) = v_{k'}(a).

Hence

u_{k'}(x) ≤ u_{k'}(b − (b_i − x_i)e^i) < v_{k'}(a) ≤ v_{k'}(x) with x_i = b_i − α(b_i − a_i),

conflicting with u_{k'}(x) − v_{k'}(x) ≥ 0. Based on the above results, in either case we must have x ≥ a', i.e. x ∈ [a', b].

Similarly, we can show that x ≤ b'. Suppose that x ≰ b'. Then there exists some i such that

x_i > b'_i = a'_i + β^i_{k''}(b_i − a'_i),  i.e.  x_i = a'_i + β(b_i − a'_i) with β > β^i_{k''}.    (12)

By the definition of β^i_{k''}, we consider two cases as follows.

Case 1. If β^i_{k''} = 1, then from (12), x_i > b'_i = a'_i + β^i_{k''}(b_i − a'_i) = b_i, conflicting with x ∈ [a, b].

Case 2. If 0 < β^i_{k''} < 1, then from the definitions of β^i_k and β^i_{k''} we obtain

ψ^i_{k''}(β^i_{k''}) = u_{k''}(b)    (13)

or

π^i(β^i_{k''}) = V − ε.    (14)

v k ða0 þ ðxi  a0i Þei Þ ¼ v k ða0 þ bðbi  a0i Þei Þ ¼ wik ðbÞ > wik ðbik Þ ¼ uk ðbÞ; 00

00

00

00

00

00


and hence

v_{k''}(x) ≥ v_{k''}(a' + (x_i − a'_i)e^i) > u_{k''}(b) ≥ u_{k''}(x) with x_i = a'_i + β(b_i − a'_i).

This conflicts with u_{k''}(x) ≥ v_{k''}(x).

Assume now that (14) holds. By the definition of π^i(β), it is clear that π^i(β) is strictly increasing in the single variable β. Then by (12) and (14) we deduce

g(a' + (x_i − a'_i)e^i) = g(a' + β(b_i − a'_i)e^i) = π^i(β) > π^i(β^i_{k''}) = V − ε,

which means that

g(x) ≥ g(a' + (x_i − a'_i)e^i) > V − ε with x_i = a'_i + β(b_i − a'_i).

This conflicts with g(x) ≤ V − ε. From the above, in either case we must have x ≤ b', i.e. x ∈ [a', b'], and the proof is complete. □

Remark 1. In order to obtain red[a, b], the computation of ᾱ^i_k (k = 1, …, k_0) and β̄^i_k (k = 1, …, k_0+1) involves finding the roots of several nonlinear or linear equations in a single variable. These equations all have a similar structure, so their roots are easily obtained in a uniform computational fashion.

Remark 2. It is easily verified that the rectangle [a', b'] = red[a, b] still satisfies

g(a') ≤ V − ε,  min_{k=1,…,k_0} {u_k(b') − v_k(a')} ≥ 0.
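As Remark 1 notes, each ᾱ^i_k and β̄^i_k is the root of a monotone equation in a single variable, so plain bisection suffices. A Python sketch (the φ below is an invented strictly decreasing function, not one from the paper):

```python
# Sketch of Remark 1: each alpha_k^i (resp. beta_k^i) solves a monotone
# single-variable equation phi(t) = c on [0, 1]; plain bisection suffices.
def solve_monotone(phi, c, lo=0.0, hi=1.0, tol=1e-10):
    """Find t with phi(t) = c, assuming phi is monotone on [lo, hi]."""
    f_lo = phi(lo) - c
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        f_mid = phi(mid) - c
        if abs(f_mid) < tol:
            return mid
        # keep the sub-interval on which the sign change occurs
        if (f_lo < 0) == (f_mid < 0):
            lo, f_lo = mid, f_mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# illustrative phi: strictly decreasing, as phi_k^i(alpha) is in the proof
phi = lambda t: (2.0 - t) ** 2      # decreasing on [0, 1]
root = solve_monotone(phi, 2.25)    # (2 - t)^2 = 2.25  ->  t = 0.5
assert abs(root - 0.5) < 1e-8
```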

4.2. Upper bound

For each rectangle X ⊂ X^0, we intend to compute an upper bound UB(X) for the objective function of (Q) over X. Since

h(x) = min_{k=1,…,k_0} {u_k(x) − v_k(x)}

and u_k(x), v_k(x) are increasing, an obvious upper bound over X = [a, b] is

UB(X) = min_{k=1,…,k_0} {u_k(b) − v_k(a)}.

Although very simple, this bound suffices to ensure convergence of the algorithm. However, the following theorem gives a tighter bound.

Theorem 4.2

(i) If h(a) > 0 and g(a) ≤ V − ε, then a is a nonisolated feasible solution with g(a) ≤ V − ε.
(ii) If g(b) > V − ε, let z(X) = a + θ(b − a) with θ = max{θ | g(a + θ(b − a)) < V − ε} and q^i = b + (z_i(X) − b_i)e^i, i = 1, …, n. Then an upper bound of h(x) over all x ∈ [a, b] satisfying g(x) ≤ V − ε is

UB(X) = max_{i=1,…,n} min_{k=1,…,k_0} {u_k(q^i) − v_k(a)}.

Proof

(i) Obvious.
(ii) Let

X_i = [a, q^i] = {x | a ≤ x ≤ q^i} = {x ∈ [a, b] | a_i ≤ x_i ≤ z_i(X)}.

The function g(x) is increasing, and from assumption (7) it follows that g(a) ≤ V − ε < g(b); therefore 0 ≤ θ < 1 and g(z(X)) = V − ε. From the definitions of z(X) and g(x) it is clear that g(x') > g(z(X)) = V − ε for all x' = a + ξ(b − a) with ξ > θ. Since for each x > z(X) there exists x' = a + ξ(b − a) with ξ > θ such that x ≥ x', it follows that g(x) ≥ g(x') > V − ε.

Let G = {x ∈ [a, b] | g(x) ≤ V − ε}, K = {x | z(X) < x ≤ b}, K_i = {x ∈ [a, b] | x_i > z_i(X)}. Then

G ⊂ [a, b] \ K = [a, b] \ ∩_{i=1}^{n} K_i = ∪_{i=1}^{n} {x ∈ [a, b] | a_i ≤ x_i ≤ z_i(X)} = ∪_{i=1}^{n} X_i.


Since UB(X_i) ≥ max{h(x) | x ∈ X_i}, it follows that

UB(X) = max{UB(X_i) | i = 1, …, n} ≥ max{h(x) | x ∈ ∪_{i=1}^{n} X_i} ≥ max{h(x) | g(x) ≤ V − ε, x ∈ [a, b]}.  □

An important property of the value UB(X) is that it satisfies

max{h(x) | g(x) ≤ V − ε, x ∈ X} ≤ UB(X) ≤ min_{k=1,…,k_0} {u_k(b) − v_k(a)}.    (15)
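The simple bound on the right of (15) is straightforward to compute; a Python sketch with invented increasing u_k, v_k (not the paper's polynomials):

```python
# Sketch of the simple bound UB(X) = min_k { u_k(b) - v_k(a) } over X = [a,b]:
# since every u_k, v_k is increasing, u_k(x) - v_k(x) <= u_k(b) - v_k(a) on X.
def upper_bound(a, b, u_list, v_list):
    return min(u(b) - v(a) for u, v in zip(u_list, v_list))

u_list = [lambda x: x[0] + x[1], lambda x: x[0] * x[1] + 1.0]
v_list = [lambda x: 0.5 * x[1], lambda x: x[0]]

a, b = (1.0, 1.0), (2.0, 3.0)
ub = upper_bound(a, b, u_list, v_list)          # min(5 - 0.5, 7 - 1) = 4.5
# h(x) = min_k(u_k - v_k) at any point of [a,b] never exceeds UB(X)
h = lambda x: min(u(x) - v(x) for u, v in zip(u_list, v_list))
assert abs(ub - 4.5) < 1e-12
assert h(a) <= ub and h(b) <= ub
```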

This follows from the fact that h(x) = min_{k=1,…,k_0} {u_k(x) − v_k(x)} with u_k(x), v_k(x) increasing. More generally, we shall show in the next section that any upper bound UB(X) satisfying (15) ensures convergence of the algorithm.

5. Algorithm and its convergence

Based on the key algorithm processes of Section 4, an algorithm is now presented for solving (P). Its basic steps are summarized in the following statement.

Algorithm statement

Step 0. Initialization. Choose a convergence tolerance ε > 0. If no feasible solution is known, let V = g(x^u) + ε with X^0 = [x^l, x^u]; otherwise, let x̂ be the best nonisolated feasible solution available and V = g(x̂). Let Q_0 = {X^0}, F_0 = ∅. Set q = 0.

Step 1. Reduction cut. For each rectangle X ∈ Q_q, compute its valid reduction red X by the reduction cut. If red X = ∅, delete X; otherwise, replace X by red X, compute an upper bound UB(X) for h(x) over the feasible solutions in X, and delete X if UB(X) < 0.

Step 2. Fathoming step. Let Q'_q be the collection of rectangles resulting from Q_q after completion of Step 1, and let F'_q = F_q ∪ Q'_q. If F'_q = ∅, terminate: x̂ is a nonisolated ε-optimal solution of (P) if V = g(x̂), or problem (P) is nonisolated infeasible if V = g(x^u) + ε. Otherwise, let [a^q, b^q] := X_q ∈ argmax{UB(X) | X ∈ F'_q} and UB_q = UB(X_q).

Step 3. Optimality check. If UB_q < ε, terminate: x̂ is a nonisolated ε-optimal solution of (P) if V = g(x̂), or problem (P) is ε-nonisolated infeasible if V = g(x^u) + ε.

Step 4. Updating the feasible solution. If UB_q ≥ ε and g(b^q) > V − ε, compute x^q = a^q + γ_q(b^q − a^q) with g(x^q) = V − ε; if UB_q ≥ ε and g(b^q) ≤ V − ε, let x^q = a^q.
(4.1) If h(x^q) ≥ 0, then x^q is a new nonisolated feasible solution of (P) with g(x^q) ≤ V − ε; reset x̂ ← x^q, V ← g(x̂), and go to Step 5.
(4.2) If h(x^q) < 0, go to Step 5 with x̂ unchanged.

Step 5. Partitioning step. Divide X_q into two subrectangles by the branching process. Let Q_{q+1} be the collection of these two subrectangles of X_q, and let F_{q+1} = F'_q \ {X_q}. Reset q ← q + 1 and return to Step 1.

Algorithm convergence

Theorem 5.1 (Convergence result). The above algorithm terminates after finitely many steps, yielding either a nonisolated ε-optimal solution of (P) or evidence that the problem is nonisolated infeasible.

Proof. In Step 2, the event F'_q = ∅ implies that we cannot find any feasible solution x with g(x) ≤ V − ε = g(x̂) − ε, hence the conclusion in Step 2 is correct. If UB_q < ε, then max(Q) < ε (see (15)), hence by Theorem 3.1 the conclusion in Step 3 is also correct. Observe that in Step 4 the point x^q exists and satisfies g(x^q) ≤ V − ε, so if h(x^q) ≥ 0, then x^q is a nonisolated feasible solution with g(x^q) ≤ g(x̂) − ε, justifying Step (4.1). Thus the conclusion is correct if one of the following events occurs:

F'_q = ∅;  UB_q < ε;  h(x^q) ≥ 0.

It remains to show that at least one of these events must occur, i.e. that Steps 4 and 5 cannot repeat indefinitely. To this end, suppose that the algorithm is infinite. Since each occurrence of Step (4.1) decreases the current best value by at least ε > 0, while g(x) is bounded below, Step (4.1) cannot occur infinitely often. Consequently, for all q sufficiently large, x̂ is unchanged and h(x^q) ≤ 0, while UB_q ≥ ε. But, as q → ∞, by the exhaustiveness of the subdivision, diam X_q → 0, i.e. ‖b^q − a^q‖ → 0. Denote by x̃ the common limit of b^q and a^q as q → ∞. Since

ε ≤ UB_q ≤ min_{k=1,…,k_0} [u_k(b^q) − v_k(a^q)],

it follows that

ε ≤ lim_{q→∞} UB_q ≤ min_{k=1,…,k_0} [u_k(x̃) − v_k(x̃)] = h(x̃).

But by continuity, h(x̃) = lim_{q→∞} h(x^q) ≤ 0, a contradiction. Therefore, the algorithm must be finite, and the proof is complete. □
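The overall scheme can be illustrated on a one-dimensional toy instance. The Python sketch below keeps only branching and the simple bound of Section 4.2 (reduction cuts and the nonisolated-solution bookkeeping are omitted), so it illustrates the branch and bound skeleton only, not the authors' full algorithm:

```python
# A stripped-down sketch of the branch-(reduce-)bound scheme of Section 5 on a
# one-dimensional toy instance: g increasing, h = u - v with u, v increasing.
# Here min{ g(x) | h(x) >= 0, x in [0,2] } has optimal solution x* = sqrt(2).
import math

g = lambda x: x                 # increasing objective
u = lambda x: x * x             # increasing
v = lambda x: 2.0               # increasing (constant)
h = lambda x: u(x) - v(x)       # feasible iff x >= sqrt(2)

def brb(lo, hi, eps=1e-6):
    best_x, best_v = None, g(hi) + eps
    boxes = [(lo, hi)]
    while boxes:
        a, b = boxes.pop()
        # fathom: box infeasible (u(b) - v(a) < 0) or cannot improve on best
        if u(b) - v(a) < 0 or g(a) >= best_v - eps:
            continue
        if h(a) >= 0:                  # a feasible: g(a) is minimal on [a,b]
            best_x, best_v = a, g(a)
            continue
        if b - a > eps:                # branch: bisect the box
            m = 0.5 * (a + b)
            boxes += [(a, m), (m, b)]
    return best_x, best_v

x_star, v_star = brb(0.0, 2.0)
assert abs(x_star - math.sqrt(2.0)) < 1e-4
```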

6. Numerical results

To verify the performance of the proposed global optimization algorithm, we now report numerical experiments. Based on Assumption 1, for each j = 1, …, p we obtain positive scalars L_j and U_j such that 0 < L_j ≤ d_j(y) ≤ U_j for all y ∈ Y^0 = {y ∈ X^0 | F_m(y) ≥ 0, m = 1, …, M}. The algorithm is coded in MATLAB, and the test problems were run on a Pentium (R) 4 CPU 2.66 GHz microcomputer with 512 MB memory. The numerical results show that the proposed algorithm can globally solve problem (P1). Below we describe some of these sample problems and the corresponding computational results. For these problems, the numerical results, compared with those of the paper [19], are given in Tables 1 and 2.

Example 1 (Ref. [19]).

(P1)  min F(y) = (y_1^2 + y_2^2 + 2y_1 − y_3)/(y_3^2 + 5y_1y_2) + (y_2 + 2y_1)/(y_1^2 − 2y_1 + y_2^2 − 8y_2 + 20)
      s.t.  y_1^2 + y_2^2 + y_3 ≤ 5,
            (y_1 − 2)^2 + y_2^2 + y_3^2 ≤ 5,
            y ∈ Y = {y | 1 ≤ y_1 ≤ 3, 1 ≤ y_2 ≤ 3, 1 ≤ y_3 ≤ 2}.

To solve this problem, we first give the upper and lower bounds of d_1(y) and d_2(y): they are easily obtained as U_1 = 49, L_1 = 6 and U_2 = 16, L_2 = 4, respectively. With ε = 0.1, the algorithm found the nonisolated ε-optimal solution

ŷ = (1.04159, 1.39906, 1.06607)

with objective function value 0.8339 at iteration 43.

Example 2 (Ref. [19]).

(P1)  min F(y) = y_2/(y_1^2 − 2y_1 + y_2^2 − 8y_2 + 20) − (y_1^2 + 2y_2^2 − 3y_1 − 3y_2 − 10)/(y_1 + 1.0)
      s.t.  2y_1 + y_2^2 − 6y_2 ≤ 0,
            3y_1 + y_2 ≤ 8,
            y_1^2 − y_2 − y_1 ≤ 0,
            y ∈ Y = {y | 1 ≤ y_1 ≤ 3, 1 ≤ y_2 ≤ 3}.

Clearly, the upper and lower bounds of d_2(y) are U_2 = 4 and L_2 = 2, respectively. For d_1(y) = (y_1 − 1)^2 + (y_2 − 4)^2 + 3, the upper and lower bounds are easily obtained as U_1 = 16, L_1 = 4. With ε = 0.001, the algorithm found the nonisolated ε-optimal minimum 1.8836 after 138 iterations at the nonisolated ε-optimal solution ŷ = (1.66645, 2.99996).

In Table 1, the following notations are used for the column headers: Ref.: reference; Iter: number of algorithm iterations.

Remark 3. Example 1 was solved in [19] by a branch and bound algorithm based on rectangular partition and Lagrangian relaxation. With a fixed ε > 0, an approximate solution was given as y = (1.32547, 1.42900, 1.20109) with objective function value 0.9020. However, this solution is only ε-approximately feasible, in the sense that

y_1^2 + y_2^2 + y_3 ≤ 5 + ε,  (y_1 − 2)^2 + y_2^2 + y_3^2 ≤ 5 + ε.
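The bounds L_1 = 4, U_1 = 16 used in Example 2 can be checked independently: over the box [1, 3] × [1, 3], d_1(y) = (y_1 − 1)^2 + (y_2 − 4)^2 + 3 ranges exactly over [4, 16]. A quick Python check by dense grid evaluation (the paper instead uses the Bernstein machinery; this sketch is ours):

```python
# Quick check of the bounds used in Example 2: over the box [1,3] x [1,3],
# d1(y) = (y1-1)^2 + (y2-4)^2 + 3 attains exactly L1 = 4 and U1 = 16.
import itertools

def d1(y1, y2):
    return (y1 - 1.0) ** 2 + (y2 - 4.0) ** 2 + 3.0

grid = [1.0 + 2.0 * k / 200 for k in range(201)]   # includes both endpoints
vals = [d1(a, b) for a, b in itertools.product(grid, grid)]
assert abs(min(vals) - 4.0) < 1e-9    # attained at y = (1, 3)
assert abs(max(vals) - 16.0) < 1e-9   # attained at y = (3, 1)
```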

Table 1
Computational results for test examples.

Example   Refs.    Iter   ε       Optimal solution                Optimal value
1         [ours]   43     0.1     (1.04159, 1.39906, 1.06607)     0.8339
          [19]     84     0.1     (1.32547, 1.42900, 1.20109)     0.9020
2         [ours]   138    0.001   (1.66645, 2.99996)              1.8836
          [19]     142    0.001   (1.66598, 2.99899)              1.8867

243

Table 2
Computational results for randomly produced examples.

n    M    K   p   T
3    2    1   2   0.6357
5    4    2   2   2.3168
8    6    3   3   6.5843
8    5    4   4   6.1562
9    5    5   5   17.2635
12   6    6   6   21.0623
15   12   8   8   35.7631

In contrast, to show that a feasible solution with a better objective function value can be found, the algorithm proposed in this paper obtained the nonisolated ε-optimal solution ȳ = (1.04159, 1.39906, 1.06607) with objective function value 0.8339, and this solution is feasible for the problem.

Example 3. Consider the problem

        min  F(y) = sum_{j=1}^{K} nj(y)/dj(y) - sum_{j=K+1}^{p} nj(y)/dj(y)
(P1)    s.t.  Fm(y) <= bm,  m = 1, ..., M,
              y in X^0 = {y | 0 <= yi <= 10, i = 1, ..., n},

where for j = 1, ..., p and m = 1, ..., M,

nj(y) = (1/2)<y, Qj1 y> + <y, cj1>,   dj(y) = (1/2)<y, Qj2 y> + <y, cj2>,   Fm(y) = (1/2)<y, Qm y> + <y, cm>.
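One way to generate random instances of this form is sketched below in Python. The sampling ranges follow the description given next in the text; the signs of the ranges for Qm, cm, and bm are inferred from context where the extraction dropped minus signs, and the matrices are filled elementwise without symmetrization, since the description only specifies elementwise ranges:

```python
import random

# Sketch: generate one random instance of Example 3's problem data.
# Assumptions: entries of Qj1, Qj2, cj1, cj2 uniform on [0, 1];
# entries of Qm, cm uniform on [-1, 0]; bm uniform on [-300, -90].
def random_instance(n, M, p, seed=0):
    rng = random.Random(seed)
    box = [(0.0, 10.0)] * n  # X^0 = {0 <= yi <= 10}
    Q1 = [[[rng.random() for _ in range(n)] for _ in range(n)] for _ in range(p)]
    Q2 = [[[rng.random() for _ in range(n)] for _ in range(n)] for _ in range(p)]
    c1 = [[rng.random() for _ in range(n)] for _ in range(p)]
    c2 = [[rng.random() for _ in range(n)] for _ in range(p)]
    Qm = [[[-rng.random() for _ in range(n)] for _ in range(n)] for _ in range(M)]
    cm = [[-rng.random() for _ in range(n)] for _ in range(M)]
    b = [rng.uniform(-300.0, -90.0) for _ in range(M)]
    return Q1, Q2, c1, c2, Qm, cm, b, box
```

With Qm negative elementwise and bm strongly negative, each constraint Fm(y) <= bm cuts into the interior of the box, which is what makes the constraint set nonconvex and nontrivial.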

All elements of Qj1, Qj2, cj1, and cj2 (j = 1, ..., p) were randomly generated between 0 and 1; all elements of Qm, cm (m = 1, ..., M) were randomly generated between -1 and 0; the bm were randomly generated in the interval [-300, -90]. Numerical results are summarized in Table 2, where the average CPU times in seconds, denoted by T, were obtained by running the proposed algorithm on 10 test problems for each row. Table 2 shows the average performance of the proposed algorithm when the convergence tolerance ε was fixed at 0.05.

From the above computational results, we see that solving all of the test problems by the proposed algorithm yields nonisolated ε-optimal solutions that are feasible and have better objective function values. In addition, compared with the method of Qu et al. [19], the numerical results illustrate the potential advantage of the proposed solution approach: not only is a nonisolated optimal solution obtained, but less computational effort may be required to reach a better objective function value.

Acknowledgement

Research supported by the Innovation Scientists and Technicians Troop Construction Projects of Henan Province and the Program for Science and Technology Innovation Talents in Universities of Henan Province.

References

[1] Y. Almogy, O. Levin, Parametric analysis of a multistage stochastic shipping problem, in: Proceedings of the 5th IFORS Conference, 1964, pp. 359–370.
[2] C.S. Colantoni, R.P. Manes, A. Whinston, Programming, profit rates, and pricing decisions, Accounting Review 44 (1969) 467–481.
[3] H. Konno, M. Inori, Bond portfolio optimization by bilinear fractional programming, Journal of the Operations Research Society of Japan 32 (1989) 143–158.
[4] H. Konno, H. Watanabe, Bond portfolio optimization problems and their application to index tracking: a partial optimization approach, Journal of the Operations Research Society of Japan 39 (1996) 295–306.
[5] M.R. Rao, Cluster analysis and mathematical programming, Journal of the American Statistical Association 66 (1971) 622–626.
[6] S. Schaible, Fractional programming, in: R. Horst, P.M. Pardalos (Eds.), Handbook of Global Optimization, Kluwer Academic, Dordrecht, 1995.
[7] S. Schaible, Fractional programming with sums of ratios, in: E. Castagnoli, G. Giorgi (Eds.), Scalar and Vector Optimization in Economic and Financial Problems, Proceedings of the Workshop in Milan, 1995, Elioprint, Milan, 1996.
[8] S. Schaible, A note on the sum of a linear and linear fractional function, Naval Research Logistics Quarterly 24 (1977) 691–693.
[9] A. Charnes, W.W. Cooper, Programming with linear fractional functions, Naval Research Logistics Quarterly 9 (1962) 181–186.
[10] S. Schaible, Fractional programming II, on Dinkelbach's algorithm, Management Science 22 (1976) 868–873.
[11] T. Kuno, A branch-and-bound algorithm for maximizing the sum of several linear ratios, Journal of Global Optimization 22 (2002) 155–174.


[12] P.P. Shen, C.F. Wang, Global optimization for sum of linear ratios problem with coefficients, Applied Mathematics and Computation 176 (2006) 219–229.
[13] H.P. Benson, On the global optimization of sums of linear fractional functions over a convex set, Journal of Optimization Theory and Application 121 (2004) 19–39.
[14] H.P. Benson, Using concave envelopes to globally solve the nonlinear sum of ratios problem, Journal of Global Optimization 22 (2002) 343–364.
[15] H.P. Benson, Global optimization algorithm for the nonlinear sum of ratios problem, Journal of Optimization Theory and Application 112 (2002) 1–29.
[16] R.W. Freund, F. Jarre, Solving the sum-of-ratios problem by an interior-point method, Journal of Global Optimization 19 (2001) 83–102.
[17] Y. Wang, K. Zhang, Global optimization of nonlinear sum of ratios problem, Applied Mathematics and Computation 158 (2004) 319–330.
[18] P.-P. Shen, G.-X. Yuan, Global optimization for the sum of generalized polynomial fractional functions, Mathematical Methods of Operations Research 65 (2007) 445–459.
[19] S.-J. Qu, K.-C. Zhang, J.-K. Zhao, An efficient algorithm for globally minimizing sum of quadratic ratios problem with nonconvex quadratic constraints, Applied Mathematics and Computation 189 (2007) 1624–1636.
[20] H. Tuy, Monotonic optimization: problems and solution approaches, SIAM Journal on Optimization 11 (2) (2000) 464–494.
[21] H. Tuy, F. Al-Khayyal, P.T. Thach, Monotonic optimization: branch and cut methods, in: C. Audet, P. Hansen, G. Savard (Eds.), Essays and Surveys on Global Optimization, Springer, Berlin, 2005, pp. 38–39.
[22] L.T.H. An, P.D. Tao, A branch and bound method via d.c. optimization algorithm and ellipsoidal technique for box constrained nonconvex quadratic problems, Journal of Global Optimization 13 (1998) 171–206.