A global optimization algorithm for sum of quadratic ratios problem with coefficients

Applied Mathematics and Computation 218 (2012) 9965–9973
Ying Ji (a,*), Yijun Li (b), Pengyu Lu (b)

(a) Academy of Fundamental & Interdisciplinary Sciences, Harbin Institute of Technology, China
(b) School of Management, Harbin Institute of Technology, China


Abstract

In this paper a global optimization algorithm for solving the sum of quadratic ratios problem with coefficients and nonconvex quadratic function constraints (NSP) is proposed. First, the problem NSP is converted into an equivalent sum of linear ratios problem with nonconvex quadratic constraints (LSP). Using a linearization technique, a linear relaxation of LSP is obtained, and the original problem can then be solved by a branch and bound method. In the algorithm, lower bounds are derived by solving a sequence of linear programs built from linear lower bounding functions of the objective function and the constraint functions of problem NSP over the feasible region. The proposed algorithm converges to the global minimum through the successive refinement of the solutions of a series of linear programming problems. Numerical examples demonstrate that the proposed algorithm can easily be applied to solve problem NSP.

Keywords: Quadratic ratios problem; Quadratic constraints problem; Linearization relaxation; Branch and bound; Global convergence

1. Introduction

In this paper, we consider the global optimization of the sum of quadratic ratios problem with coefficients and nonconvex quadratic function constraints:

\mathrm{NSP}(X): \quad
\begin{cases}
\min\ f(x) = \sum_{j=1}^{p} f_j(x) = \sum_{j=1}^{p} v_j \dfrac{n_j(x)}{d_j(x)}, \\
\text{s.t.}\ g_m(x) \le 0, \quad m = 1, \dots, M, \\
\phantom{\text{s.t.}}\ x \in X = \{x : \underline{x}_n \le x_n \le \overline{x}_n,\ n = 1, \dots, N\},
\end{cases}

where

n_j(x) = c_j + b_j^T x + \tfrac{1}{2} x^T A_j x, \qquad
d_j(x) = r_j + e_j^T x + \tfrac{1}{2} x^T D_j x, \qquad
g_m(x) = h_m + w_m^T x + \tfrac{1}{2} x^T G_m x,

the c_j, r_j and h_m are arbitrary real numbers, b_j, e_j, w_m \in R^N, and A_j, D_j, G_m \in R^{N \times N} are symmetric matrices that need not be positive semidefinite, j = 1, \dots, p, m = 1, \dots, M; the constraint set is therefore nonconvex. The v_j, j = 1, \dots, p, are real constant coefficients.

(This work was supported by the National Science Foundation Grant of China (No. 71071041), the China Post-doctoral Science Foundation (No. 01107172) and the Heilongjiang Province Post-doctoral Science Foundation (No. 01106961). Corresponding author: Ying Ji, [email protected].)



The minimization of a sum of ratios of two functions subject to constraints has been studied extensively during the last several decades [1–3]. Since its initial development, the problem has spawned a wide variety of applications, especially in transportation planning, government contracting, economics and finance [1,3]. A special case of NSP is the conic trust-region subproblem,

\mathrm{CTR}(M): \quad
\begin{cases}
\min\ \dfrac{g^T s}{1 + a^T s} + \dfrac{1}{2}\, \dfrac{s^T B s}{(1 + a^T s)^2}, \\
\text{s.t.}\ 1 + a^T s > 0, \\
\phantom{\text{s.t.}}\ \|s\| \le M,
\end{cases}

where g, a \in R^N and B \in R^{N \times N} is a symmetric matrix that need not be positive semidefinite. It arises as the main subproblem in a class of nonlinear programming algorithms called conic trust-region methods, which were introduced by Davidon [4] and successfully developed in the works of Sun and Yuan [5] and Ariyawansa [6]. To our knowledge, however, no efficient algorithm for globally solving it is available, so the conic trust-region subproblem can be recast in the NSP format and treated by the branch and bound algorithm proposed in this paper.
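To make the connection concrete, the following sketch (illustrative only; the paper's own implementation is in C++, and all data here are randomly generated) checks numerically that the CTR objective is a sum of two quadratic ratios in the NSP format: since (1 + a^T s)^2 = 1 + 2 a^T s + s^T (a a^T) s is itself quadratic, both denominators fit the form d_j(s) = r_j + e_j^T s + \tfrac{1}{2} s^T D_j s.

```python
import numpy as np

# Sketch: the conic trust-region objective as a sum of two quadratic ratios.
# Data are random; both coefficients v_j equal 1 here.
rng = np.random.default_rng(0)
N = 4
g = rng.standard_normal(N)
a = 0.1 * rng.standard_normal(N)                       # small, so 1 + a^T s > 0 nearby
B = rng.standard_normal((N, N)); B = 0.5 * (B + B.T)   # symmetric, generally indefinite

def ctr_objective(s):
    d = 1.0 + a @ s
    return (g @ s) / d + 0.5 * (s @ B @ s) / d**2

# NSP data (c_j, b_j, A_j, r_j, e_j, D_j) for each ratio n_j(s)/d_j(s):
ratios = [
    (0.0, g, np.zeros((N, N)), 1.0, a, np.zeros((N, N))),        # g^T s / (1 + a^T s)
    (0.0, np.zeros(N), B, 1.0, 2.0 * a, 2.0 * np.outer(a, a)),   # (1/2) s^T B s / (1 + a^T s)^2
]

def nsp_objective(s):
    return sum((c + b @ s + 0.5 * s @ A @ s) / (r + e @ s + 0.5 * s @ D @ s)
               for c, b, A, r, e, D in ratios)

s = 0.05 * rng.standard_normal(N)
assert np.isclose(ctr_objective(s), nsp_objective(s))
```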
From a research point of view, these problems pose significant theoretical and computational challenges: they are known to generally possess multiple local optima that are not global optima. Over the past years, various algorithms have been proposed for solving special cases of problem NSP, most of which are intended only for the sum of linear ratios problem without the coefficients v_j. For instance, algorithmic and computational results for single-ratio fractional programming can be found in [2,7,8] and in the literature cited therein. At present there exist a number of algorithms for globally solving the sum of ratios problem in which the numerators and denominators are affine functions or the feasible region is a polyhedron [9,10,17,19]. To our knowledge, three algorithms have been proposed for solving nonlinear fractional programming [11–13], but in most of the problems considered the feasible regions are polyhedra or convex sets. Gotoh and Konno [16] formulated a maximization problem in the form of a convex-convex quadratic fractional programming problem over a polytope. Recently, Qu et al. [19] proposed an efficient algorithm for solving problem NSP.

The purpose of this paper is to introduce a new global algorithm to solve the problem NSP. The algorithm works by solving a sum of linear ratios problem with coefficients and nonconvex constraints, LSP, that is equivalent to problem NSP. The main features of this algorithm are summarized as follows. First, at each iteration a linear programming problem is solved, and its optimal objective value provides a lower bound on the optimal objective value of the original fractional programming problem NSP; this linear programming approach is computationally more convenient than the parametric programming (or concave minimization) methods of [16,17], and any effective linear programming algorithm can be used within it. Second, the given method can solve not only the general problem NSP but also the sum of linear ratios problem [9,10], whereas the methods reviewed above (see [16,17], for example) can treat only special cases of problem NSP; our method also differs from Qu's method [19], which is based on Lagrangian relaxation. Third, the main computation involves solving a sequence of linear programming problems, for which the standard simplex algorithm is available. Finally, numerical computation shows that the proposed method is superior to the method in [19].

The paper is organized in the following way. In Section 2 we derive the fractional programming problem LSP equivalent to problem NSP and the linear relaxation programming of problem LSP. In Section 3, the new algorithm is introduced and convergence results are discussed. Section 4 discusses some computational considerations and reports computational results obtained by solving several examples. Finally, concluding remarks are given.

2. Linear relaxation programming

In this section the problem LSP equivalent to problem NSP is derived, and a linearizing method is introduced to construct the linear relaxation programming of problem LSP, which provides the lower bounds on the optimal value in the branch and bound algorithm.

2.1. Equivalent nonconvex programming

Assumption 1. Without loss of generality, let v_j > 0 (j = 1, \dots, T) and v_j < 0 (j = T+1, \dots, p). For each j = 1, \dots, p, there exist positive scalars l_j, u_j, L_j and U_j satisfying 0 < l_j \le n_j(x) \le L_j and 0 < u_j \le d_j(x) \le U_j for all x in the feasible region.
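Assumption 1 presupposes that the scalars l_j, L_j, u_j, U_j are available. Section 4 obtains them with the algorithm GBQ of [18]; the sketch below is not GBQ but a much cruder interval-arithmetic bound, shown only to illustrate what is required (the bounds are valid, though typically loose):

```python
import numpy as np

def quad_bounds_on_box(c, b, A, lo, hi):
    """Crude interval-arithmetic bounds for q(x) = c + b^T x + (1/2) x^T A x
    over the box lo <= x <= hi (much looser than GBQ [18], but safe)."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    qlo = qhi = c
    # linear part: b_i * x_i ranges over [b_i*lo_i, b_i*hi_i], ordered by sign
    qlo += np.sum(np.minimum(b * lo, b * hi))
    qhi += np.sum(np.maximum(b * lo, b * hi))
    # quadratic part: (1/2) A_ij x_i x_j, interval product of x_i and x_j
    n = len(lo)
    for i in range(n):
        for j in range(n):
            prods = np.array([lo[i]*lo[j], lo[i]*hi[j], hi[i]*lo[j], hi[i]*hi[j]])
            qlo += 0.5 * A[i, j] * (prods.min() if A[i, j] > 0 else prods.max())
            qhi += 0.5 * A[i, j] * (prods.max() if A[i, j] > 0 else prods.min())
    return qlo, qhi

# e.g. bounds l_j, L_j for a numerator n_j(x) = x_1^2 + x_2^2 on X = [1,3] x [1,3]:
A = np.array([[2.0, 0.0], [0.0, 2.0]])
l_j, L_j = quad_bounds_on_box(0.0, np.zeros(2), A, [1, 1], [3, 3])
print(l_j, L_j)   # 2.0 18.0; Assumption 1 additionally requires l_j > 0
```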


Introducing positive variables t_j and s_j and setting t_j = n_j(x), s_j = d_j(x), problem NSP leads to the following equivalent problem LSP:

\mathrm{LSP}(X, \Omega): \quad
\begin{cases}
\min\ f(x,t,s) = \sum_{j=1}^{T} v_j \dfrac{t_j}{s_j} + \sum_{j=T+1}^{p} v_j \dfrac{t_j}{s_j}, \\
\text{s.t.}\ n_j(x) - t_j \le 0, \quad j = 1, \dots, T, \\
\phantom{\text{s.t.}}\ t_j - n_j(x) \le 0, \quad j = T+1, \dots, p, \\
\phantom{\text{s.t.}}\ s_j - d_j(x) \le 0, \quad j = 1, \dots, T, \\
\phantom{\text{s.t.}}\ d_j(x) - s_j \le 0, \quad j = T+1, \dots, p, \\
\phantom{\text{s.t.}}\ g_m(x) \le 0, \quad m = 1, \dots, M, \\
\phantom{\text{s.t.}}\ x \in X = \{x : \underline{x}_i \le x_i \le \overline{x}_i,\ i = 1, \dots, N\}, \\
\phantom{\text{s.t.}}\ (t,s) \in \Omega = \{(t,s) \in R^{2p} : 0 < l_j \le t_j \le L_j,\ 0 < u_j \le s_j \le U_j\}.
\end{cases}

Theorem 1. If (\bar x, \bar t, \bar s) is a global optimal solution of problem LSP, then \bar t_j = n_j(\bar x) and \bar s_j = d_j(\bar x), j = 1, \dots, p, and \bar x is a global optimal solution of problem NSP. Conversely, if \bar x is a global optimal solution of problem NSP, then (\bar x, \bar t, \bar s) is a global optimal solution of problem LSP, where \bar t_j = n_j(\bar x) and \bar s_j = d_j(\bar x), j = 1, \dots, p.

Proof. The proof is similar to that of Theorem 1 in [11]. \square

Let y = (x, t, s) \in R^{2p+N}. Then, without loss of generality, we can rewrite problem LSP as the following problem:

\mathrm{LSP}(Y^0): \quad
\begin{cases}
\min\ f_0(y) = \sum_{j=1}^{T} v_j y_{N+j}\, y_{p+N+j}^{-1} + \sum_{j=T+1}^{p} v_j y_{N+j}\, y_{p+N+j}^{-1}, \\
\text{s.t.}\ f_m(y) \le 0, \quad m = 1, \dots, 2p+M, \\
\phantom{\text{s.t.}}\ y \in Y^0 = \{y : \underline{y}_j^0 \le y_j \le \overline{y}_j^0,\ j = 1, \dots, 2p+N\} \\
\phantom{\text{s.t.}\ y \in Y^0} = \{\underline{x}_i \le x_i \le \overline{x}_i,\ i = 1, \dots, N;\ 0 < l_j \le t_j \le L_j,\ 0 < u_j \le s_j \le U_j,\ j = 1, \dots, p\},
\end{cases}

where f_m(y) = c_m + a_m^T y + \tfrac{1}{2} y^T H_m y, m = 1, \dots, 2p+M.

By Theorem 1, in order to globally solve problem NSP we can globally solve problem LSP instead; in particular, the global optimal values of problems NSP and LSP coincide. In the following, the main work is to globally solve problem LSP.

2.2. Linear relaxation programming

We need to construct linear lower bounding functions of the objective function and the constraint functions of problem LSP, which possess a particular structure. For convenience of exposition, in the following we assume that Y^k = [\underline{y}^k, \overline{y}^k] represents either the initial bounds on the variables of the problem, or bounds modified for some partitioned subproblem in a branch-and-bound scheme. First we show that any quadratic function can be decomposed into the difference of two convex functions.

Theorem 2. Given any quadratic function q(y) = c + a^T y + \tfrac{1}{2} y^T H y, there is a nonnegative number \lambda such that \tfrac{\lambda}{2}\|y\|^2 + q(y) is convex. The quadratic function can then be represented as the difference of two convex functions, i.e., q(y) = \left( \tfrac{\lambda}{2}\|y\|^2 + q(y) \right) - \tfrac{\lambda}{2}\|y\|^2.

Proof. Suppose \lambda_{\min} is the minimum eigenvalue of H. Then we can let

\lambda = \begin{cases} 0.01 & \text{if } \lambda_{\min} \ge 0, \\ |\lambda_{\min}| + 0.01 & \text{otherwise}. \end{cases} \qquad (1)
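A minimal sketch of rule (1) with a dense symmetric eigensolver (for large matrices, Section 4 instead uses the implicitly restarted Lanczos method [15]):

```python
import numpy as np

def dc_shift(H):
    """lambda as in (1): (lambda/2)||y||^2 + q(y) becomes convex, i.e. lambda*I + H > 0."""
    lam_min = np.linalg.eigvalsh(H)[0]        # smallest eigenvalue of symmetric H
    return 0.01 if lam_min >= 0 else abs(lam_min) + 0.01

rng = np.random.default_rng(1)
H = rng.standard_normal((5, 5)); H = 0.5 * (H + H.T)   # generally indefinite
lam = dc_shift(H)
assert np.linalg.eigvalsh(lam * np.eye(5) + H)[0] > 0  # positive definite, as claimed
```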

Thus \lambda defined by (1) guarantees that the matrix \lambda I + H is positive definite, where I is the (2p+N) \times (2p+N) identity matrix. It is then obvious that the function \tfrac{\lambda}{2}\|y\|^2 + q(y) is convex, and the conclusion follows. \square

Given any Y = [\underline{y}, \overline{y}] \subseteq Y^0 \subseteq R^{2p+N} and any convex function \varphi(y) defined on Y that is at least subdifferentiable, we have the following property: for any z \in Y, the subdifferential set \partial\varphi(z) of \varphi(y) at z is \partial\varphi(z) = \{t : \varphi(y) \ge \varphi(z) + \langle t, y - z \rangle \text{ for all } y \in Y\}. Additionally, if \varphi(y) is differentiable, then \partial\varphi(z) = \{\nabla\varphi(z)\}.

Theorem 3. For any z \in Y, we have \varphi(y) \ge \varphi(z) + \langle t, y - z \rangle for all y \in Y, where t \in \partial\varphi(z).



Proof. Since \varphi(y) is convex on Y, the conclusion follows. \square

The linear relaxation of problem LSP is realized by underestimating every function f_m(y) with a linear function Lf_m(y) for every m = 0, 1, \dots, 2p+M. The linear functions are constructed by applying the properties of quadratic functions above together with the particular structure of problem LSP. The details of this linearization technique are given in Theorem 4 below.

Given any Y = [\underline{y}, \overline{y}] \subseteq Y^0 \subseteq R^{2p+N}, z = (z_n)_{(2p+N) \times 1} \in Y and y = (y_n)_{(2p+N) \times 1} \in Y, the following notation is introduced:

Lf_0(y) = \sum_{j=1}^{T} v_j y_{N+j} \min\left\{ \frac{1}{u_j}, \frac{1}{U_j} \right\} + \sum_{j=T+1}^{p} v_j y_{N+j} \max\left\{ \frac{1}{u_j}, \frac{1}{U_j} \right\},

Uf_0(y) = \sum_{j=1}^{T} v_j y_{N+j} \max\left\{ \frac{1}{u_j}, \frac{1}{U_j} \right\} + \sum_{j=T+1}^{p} v_j y_{N+j} \min\left\{ \frac{1}{u_j}, \frac{1}{U_j} \right\},

Lf_m(y, z) = Lf_m^1(y, z) - \frac{\lambda_m}{2} Uq(y),

Uf_m(y, \bar z) = f_m^1(y) - \frac{\lambda_m}{2} Lq(y, \bar z),

Lf_m^1(y, z) = f_m^1(z) + \langle \nabla f_m^1(z), y - z \rangle,

f_m^1(y) = \frac{\lambda_m}{2}\|y\|^2 + f_m(y), \quad m = 1, \dots, 2p+M,

Lq(y, \bar z) = 2\langle \bar z, y \rangle - \|\bar z\|^2,

Uq(y) = (\underline{y} + \overline{y})^T y - \underline{y}^T \overline{y},

where \lambda_m is defined as in (1) so that \lambda_m I + H_m is positive definite, z, \bar z \in Y are fixed vectors, and the functions Lf_m^1(y, z) and Uf_m(y, \bar z) have the argument y and depend on the parameters z and \bar z, respectively.

Theorem 4. Given z, \bar z \in Y \subseteq Y^0, consider the functions f_0(y), Lf_0(y), Uf_0(y), f_m(y), Lf_m(y, z) and Uf_m(y, \bar z) for y \in Y, where m = 1, \dots, 2p+M. Then the following two statements are valid.

(i) The functions Lf_0(y), Uf_0(y) and Lf_m(y, z) are all linear in y, and Uf_m(y, \bar z) is convex in y. Moreover, these functions satisfy

Lf_0(y) \le f_0(y) \le Uf_0(y), \qquad (2)

Lf_m(y, z) \le f_m(y) \le Uf_m(y, \bar z), \quad m = 1, \dots, 2p+M. \qquad (3)

(ii) The maximal errors of bounding f_0(y) by Lf_0(y) and Uf_0(y), and of bounding f_m(y) by Lf_m(y, z) and Uf_m(y, \bar z), m = 1, \dots, 2p+M, satisfy

\lim_{\|\overline{y} - \underline{y}\| \to 0} LE_{\max}^0 = \lim_{\|\overline{y} - \underline{y}\| \to 0} UE_{\max}^0 = 0, \qquad (4)

\lim_{\|\overline{y} - \underline{y}\| \to 0} LE_{\max}^m = \lim_{\|\overline{y} - \underline{y}\| \to 0} UE_{\max}^m = 0, \qquad (5)

where

LE_{\max}^0 = \max_{y \in Y} \left( f_0(y) - Lf_0(y) \right), \qquad
UE_{\max}^0 = \max_{y \in Y} \left( Uf_0(y) - f_0(y) \right),

LE_{\max}^m = \max_{y \in Y} \left( f_m(y) - Lf_m(y, z) \right), \qquad
UE_{\max}^m = \max_{y \in Y} \left( Uf_m(y, \bar z) - f_m(y) \right).

Proof. (i) Since z \in Y is given beforehand, the functions Lf_0(y), Uf_0(y) and Lf_m(y, z) are all linear in y. Moreover, f_m^1(y) is convex over Y, so Uf_m(y, \bar z), being the difference of a convex function and a linear function, is convex.

Since 0 < l_j = \underline{y}_{N+j} \le y_{N+j} \le \overline{y}_{N+j} = L_j and 0 < u_j = \underline{y}_{p+N+j} \le y_{p+N+j} \le \overline{y}_{p+N+j} = U_j, j = 1, \dots, p, we have

\frac{y_{N+j}}{\overline{y}_{p+N+j}} \le y_{N+j}\, y_{p+N+j}^{-1} \le \frac{y_{N+j}}{\underline{y}_{p+N+j}}, \quad j = 1, \dots, p, \qquad (6)

and therefore

\sum_{j=1}^{p} \frac{y_{N+j}}{\overline{y}_{p+N+j}} \le \sum_{j=1}^{p} y_{N+j}\, y_{p+N+j}^{-1} \le \sum_{j=1}^{p} \frac{y_{N+j}}{\underline{y}_{p+N+j}}. \qquad (7)

According to (7) and the signs of the v_j, we obtain Lf_0(y) \le f_0(y) \le Uf_0(y), i.e., (2).

Since f_m^1(y) = \tfrac{\lambda_m}{2}\|y\|^2 + f_m(y) is convex, Theorem 3 gives, for any z \in Y,

f_m^1(y) \ge Lf_m^1(y, z), \quad \forall y \in Y. \qquad (8)

It is obvious that for any y, \bar z \in Y = [\underline{y}, \overline{y}] we have \langle y - \underline{y}, \overline{y} - y \rangle \ge 0 and \|y\|^2 \ge 2\langle \bar z, y \rangle - \|\bar z\|^2, so

Lq(y, \bar z) = 2\langle \bar z, y \rangle - \|\bar z\|^2 \le \|y\|^2 \le (\underline{y} + \overline{y})^T y - \underline{y}^T \overline{y} = Uq(y). \qquad (9)

Then, since f_m(y) = \tfrac{\lambda_m}{2}\|y\|^2 + f_m(y) - \tfrac{\lambda_m}{2}\|y\|^2 = f_m^1(y) - \tfrac{\lambda_m}{2}\|y\|^2, inequalities (8) and (9) show that (3) is true.

(ii) Since 0 < l_j = \underline{y}_{N+j} \le y_{N+j} \le \overline{y}_{N+j} = L_j and 0 < u_j = \underline{y}_{p+N+j} \le y_{p+N+j} \le \overline{y}_{p+N+j} = U_j, we have

0 \le y_{N+j}\, y_{p+N+j}^{-1} - y_{N+j}\, \overline{y}_{p+N+j}^{-1} \le L_j \frac{\overline{y}_{p+N+j} - y_{p+N+j}}{y_{p+N+j}\, \overline{y}_{p+N+j}} \le L_j \frac{\|\overline{y} - \underline{y}\|}{p_{\min}^2},

0 \le y_{N+j}\, \underline{y}_{p+N+j}^{-1} - y_{N+j}\, y_{p+N+j}^{-1} \le L_j \frac{y_{p+N+j} - \underline{y}_{p+N+j}}{\underline{y}_{p+N+j}\, y_{p+N+j}} \le L_j \frac{\|\overline{y} - \underline{y}\|}{p_{\min}^2},

where j = 1, \dots, p and p_{\min} = \min\{\underline{y}_{p+N+j} : j \in \{1, \dots, p\}\}. Therefore

0 \le \lim_{\|\overline{y} - \underline{y}\| \to 0} LE_{\max}^0 = \lim_{\|\overline{y} - \underline{y}\| \to 0} \max_{y \in Y} \left( f_0(y) - Lf_0(y) \right) \le \lim_{\|\overline{y} - \underline{y}\| \to 0} \sum_{j=1}^{p} |v_j|\, L_j \frac{\|\overline{y} - \underline{y}\|}{p_{\min}^2} = 0, \qquad (10)

i.e., \lim_{\|\overline{y} - \underline{y}\| \to 0} LE_{\max}^0 = 0. Similarly to the proof of (10), \lim_{\|\overline{y} - \underline{y}\| \to 0} UE_{\max}^0 = 0.

To prove

\lim_{\|\overline{y} - \underline{y}\| \to 0} LE_{\max}^m = 0, \qquad (11)

it suffices to prove that \lim_{\|\overline{y} - \underline{y}\| \to 0} \max_{y \in Y} \left( f_m^1(y) - Lf_m^1(y, z) \right) = 0 and \lim_{\|\overline{y} - \underline{y}\| \to 0} \max_{y \in Y} \left( Uq(y) - \|y\|^2 \right) = 0. By calculation we have

0 \le f_m^1(y) - Lf_m^1(y, z) = \frac{\lambda_m}{2}\|y - z\|^2 + \frac{1}{2}(y - z)^T H_m y + \frac{1}{2}(z - y)^T H_m z
\le \frac{\lambda_m}{2}\|\overline{y} - \underline{y}\|^2 + \frac{1}{2}\|\overline{y} - \underline{y}\| \left( \|H_m y\| + \|H_m z\| \right)
\le \|\overline{y} - \underline{y}\| \left( \frac{\lambda_m}{2}\|\overline{y} - \underline{y}\| + \|H_m y\| + \|H_m z\| \right) \le K_m \|\overline{y} - \underline{y}\|, \qquad (12)

where the last inequality holds because y, z \in Y, so there is a positive number K_m such that \tfrac{\lambda_m}{2}\|\overline{y} - \underline{y}\| + \|H_m y\| + \|H_m z\| \le K_m. From (12) we have

\lim_{\|\overline{y} - \underline{y}\| \to 0} \max_{y \in Y} \left( f_m^1(y) - Lf_m^1(y, z) \right) = 0. \qquad (13)

Since 0 \le Uq(y) - \|y\|^2 = (y - \underline{y})^T (\overline{y} - y) \le \|\overline{y} - \underline{y}\|^2, we also have

\lim_{\|\overline{y} - \underline{y}\| \to 0} \max_{y \in Y} \left( Uq(y) - \|y\|^2 \right) = 0. \qquad (14)

Combining (13) and (14), (11) is true.

Finally, since UE_{\max}^m = \tfrac{\lambda_m}{2} \max_{y \in Y} \left( \|y\|^2 - Lq(y, \bar z) \right) and 0 \le \|y\|^2 - Lq(y, \bar z) = \|y - \bar z\|^2 \le \|\overline{y} - \underline{y}\|^2, it follows that UE_{\max}^m \to 0 as \|\overline{y} - \underline{y}\| \to 0. \square
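Inequalities (2) and (3) are easy to spot-check numerically. The following sketch (an illustrative check on randomly generated data, not the authors' code) builds f_m^1, its tangent Lf_m^1, and the box bounds Lq and Uq, then verifies the sandwich (3) at sampled points of Y:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
ylo, yhi = -np.ones(n), 2.0 * np.ones(n)              # box Y = [ylow, ybar]
H = rng.standard_normal((n, n)); H = 0.5 * (H + H.T)  # indefinite in general
a, c = rng.standard_normal(n), 0.3
fm = lambda y: c + a @ y + 0.5 * y @ H @ y

lam = max(0.0, -np.linalg.eigvalsh(H)[0]) + 0.01      # rule (1)
f1 = lambda y: 0.5 * lam * (y @ y) + fm(y)            # convex part f_m^1
grad_f1 = lambda y: lam * y + a + H @ y

z = zbar = 0.5 * (ylo + yhi)                          # fixed vectors in Y
Uq = lambda y: (ylo + yhi) @ y - ylo @ yhi            # linear overestimator of ||y||^2
Lq = lambda y: 2.0 * zbar @ y - zbar @ zbar           # tangent underestimator of ||y||^2

Lfm = lambda y: f1(z) + grad_f1(z) @ (y - z) - 0.5 * lam * Uq(y)  # linear in y
Ufm = lambda y: f1(y) - 0.5 * lam * Lq(y)                         # convex in y

for _ in range(1000):                                 # spot-check (3) at random points
    y = rng.uniform(ylo, yhi)
    assert Lfm(y) <= fm(y) + 1e-9 and fm(y) <= Ufm(y) + 1e-9
```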

Next, by means of Theorem 4, we can give the linear relaxation of problem LSP. Let Y^k = [\underline{y}^k, \overline{y}^k] \subseteq Y^0; then we construct the corresponding relaxation linear programming (LP) of problem LSP on Y^k as follows:

\mathrm{LP}(Y^k): \quad
\begin{cases}
\min\ Lf_0(y), \\
\text{s.t.}\ Lf_m(y, z^k) \le 0, \quad m = 1, \dots, 2p+M, \\
\phantom{\text{s.t.}}\ y \in Y^k,
\end{cases}
\qquad (15)

where z^k = (z_n^k)_{(2p+N) \times 1} \in Y^k is a fixed vector; in our computation we choose z_n^k = (\underline{y}_n^k + \overline{y}_n^k)/2, n = 1, \dots, 2p+N.

Based on the discussion above, every point of LSP feasible in the subdomain Y^k is feasible in LP, and the objective of LP is less than or equal to that of LSP at all points of Y^k. Thus, LP provides a valid lower bound for the solution of LSP over the partition set Y^k. It should be noted that problem LP contains only the constraints necessary to guarantee convergence of the algorithm.

Theorem 5. The linear program LP(Y^k) provides a lower bound on the optimal value of problem LSP over the rectangle Y^k.

Proof. The conclusion follows from the discussion above. \square
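Since Lf_0 and each Lf_m(\cdot, z^k) are linear, LP(Y^k) can be handed to any LP solver. The sketch below uses SciPy's linprog as a stand-in for the simplex code employed in the paper: a helper extracts the coefficients (\alpha_m, \beta_m) of Lf_m(y, z) = \alpha_m^T y + \beta_m from the data (c_m, a_m, H_m) and the box [\underline{y}, \overline{y}], and an infeasible LP signals that the node can be fathomed (names and data here are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import linprog

def linearize(c, a, H, lam, z, ylo, yhi):
    """(alpha, beta) with Lf_m(y, z) = alpha^T y + beta, for
    f_m(y) = c + a^T y + (1/2) y^T H y and lam chosen as in (1)."""
    grad = lam * z + a + H @ z                        # gradient of f_m^1 at z
    f1z = 0.5 * lam * (z @ z) + c + a @ z + 0.5 * z @ H @ z
    alpha = grad - 0.5 * lam * (ylo + yhi)            # tangent minus (lam/2)*Uq
    beta = f1z - grad @ z + 0.5 * lam * (ylo @ yhi)
    return alpha, beta

def solve_lp_relaxation(c0, quads, ylo, yhi, z):
    """min c0^T y s.t. Lf_m(y, z) <= 0, y in [ylo, yhi]; quads is a list of
    (c, a, H, lam). Returns (LBV, y), or (inf, None) so the node is fathomed."""
    rows = [linearize(c, a, H, lam, z, ylo, yhi) for c, a, H, lam in quads]
    A_ub = np.array([alpha for alpha, _ in rows])
    b_ub = np.array([-beta for _, beta in rows])      # alpha^T y <= -beta
    res = linprog(c0, A_ub=A_ub, b_ub=b_ub,
                  bounds=list(zip(ylo, yhi)), method="highs")
    return (res.fun, res.x) if res.success else (np.inf, None)

# toy usage on a random box with one linearized quadratic constraint
rng = np.random.default_rng(5)
d = 3
H = rng.standard_normal((d, d)); H = 0.5 * (H + H.T)
lam = max(0.0, -np.linalg.eigvalsh(H)[0]) + 0.01
ylo, yhi = np.zeros(d), np.ones(d)
z = 0.5 * (ylo + yhi)                                 # z^k = midpoint, as in (15)
LBV, ystar = solve_lp_relaxation(np.array([1.0, -2.0, 0.5]),
                                 [(0.1, rng.standard_normal(d), H, lam)],
                                 ylo, yhi, z)
```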

3. Branch and bound algorithm and its convergence

In this section a branch and bound algorithm is developed, relying on the linear relaxation programming above, to globally solve problem LSP, which is equivalent to the original problem NSP. The method needs to solve a sequence of linear programs over the initial rectangle Y^0 or partitioned subrectangles Y^k in order to find a global optimal solution.


The basic idea of branch and bound methods is to generate sequences of nondecreasing lower bounds and nonincreasing upper bounds until they approach each other closely enough. Furthermore, in order to ensure convergence to a global optimum, bound tightening strategies can be applied to enhance the solution procedure.

The branch and bound approach relies on partitioning the set Y^0 into subrectangles. Each subrectangle is associated with a node of the branch and bound tree, and each node is concerned with a relaxation linear program on the corresponding subrectangle. Hence at any stage k of the algorithm, suppose that we have a collection of active nodes indexed by R_k, each associated with a rectangle Y^q \subseteq Y^0, q \in R_k. For each such node q we will have computed a lower bound value LBV_q via the solution of LP(Y^q), so that the lower bound on the optimal value of problem LSP at stage k is LBV(k) = \min\{LBV_q : q \in R_k\}. We select the active node q(k) = \mathrm{argmin}\{LBV_q : q \in R_k\} for further consideration and partition its associated rectangle into two subrectangles according to the branching rule below. A feasibility check is applied to the two subrectangles in order to identify whether they should be eliminated, and lower bounds are computed for each new undeleted node by solving the corresponding LP as before. If necessary and possible, we update the upper bound H^* of the incumbent solution. The active node collection R_k then satisfies LBV_q < H^*, q \in R_k, at each stage k. Upon fathoming any nonimproving nodes, we obtain the collection of active nodes for the next stage, and this process is repeated until convergence is obtained.

3.1. Branching rule

The critical element in guaranteeing convergence to a global minimum is the choice of a suitable partitioning strategy. For rectangular subdivisions, bisection along a longest edge, as thoroughly discussed in Horst and Tuy [14], is often used, and we adopt this bisection rule here. It is sufficient to ensure convergence, since it drives to zero the intervals of all variables associated with the term that yields the greatest discrepancy in the employed approximation along any infinite branch of the branch-and-bound tree. The rule is as follows (a small sketch is given after this paragraph). Assume that the rectangle Y^k = \{y : \underline{y}_n^k \le y_n \le \overline{y}_n^k,\ n = 1, \dots, 2p+N\} \subseteq Y^0 is to be partitioned. Select the branching variable y_l possessing the maximum edge length in Y^k, i.e., l = \mathrm{argmax}_n \{\overline{y}_n^k - \underline{y}_n^k : n = 1, \dots, 2p+N\}, and partition Y^k by bisecting the interval [\underline{y}_l^k, \overline{y}_l^k] into the subintervals [\underline{y}_l^k, (\underline{y}_l^k + \overline{y}_l^k)/2] and [(\underline{y}_l^k + \overline{y}_l^k)/2, \overline{y}_l^k].
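A sketch of this rule, with boxes represented as NumPy bound vectors (hypothetical helper, not the authors' code):

```python
import numpy as np

def bisect_longest_edge(ylo, yhi):
    """Branching rule of Section 3.1: bisect Y^k along its longest edge l."""
    l = int(np.argmax(yhi - ylo))            # l = argmax_n (ybar_n - ylow_n)
    mid = 0.5 * (ylo[l] + yhi[l])
    left_hi = yhi.copy();  left_hi[l] = mid  # [ylow_l, (ylow_l + ybar_l)/2]
    right_lo = ylo.copy(); right_lo[l] = mid # [(ylow_l + ybar_l)/2, ybar_l]
    return (ylo.copy(), left_hi), (right_lo, yhi.copy())
```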

3.2. Algorithm statement

Using the notions and procedures presented above, we can now state a branch and bound algorithm for problem LSP.

ALGORITHM LBBF

Step 0 (Initialization)

0.1: Choose a sufficiently small positive number \varepsilon and set k := 0, R_k = \{1\}, q(k) = 1, Y^{q(k)} = Y^1 = Y^0. Set the initial upper bound H^* = +\infty.

0.2: Solve LP(Y^{q(k)}) and denote the optimal solution and minimum value by (y^{q(k)}, LBV_{q(k)}); then H_k = f_0(y^{q(k)}). Set the initial lower bound LBV(k) = LBV_{q(k)} and the initial feasible point y^k = y^{q(k)}.

0.3: If H_k - LBV(k) \le \varepsilon, stop with y^{q(k)} as the approximate global solution of problem LSP.

Step 1 (Partitioning step): According to the branching rule of Section 3.1, choose a branching variable y_l to partition Y^{q(k)} into two subrectangles Y^{q(k,1)} and Y^{q(k,2)}. Replace q(k) by the two new node indices q(k,1) and q(k,2) in R_k.

Step 2 (Feasibility check): For each new node index q(k,s), s = 1, 2, with corresponding rectangle Y^{q(k,s)}, compute the lower bounds \bar{L}f_0 = \min_{y \in Y^{q(k,s)}} Lf_0(y) and \bar{L}f_m = \min_{y \in Y^{q(k,s)}} Lf_m(y, z^k) of Lf_0(y) and Lf_m(y, z^k) over Y^{q(k,s)}, m = 1, \dots, 2p+M. If \bar{L}f_0 > LBV(k), or \bar{L}f_m > 0 for some m = 1, \dots, 2p+M, the corresponding node index q(k,s) is removed. If both q(k,1) and q(k,2) have been removed, return to Step 4.

Step 3 (Updating upper bound and deleting step): For each remaining subrectangle, update the corresponding parameters: solve LP(Y^{q(k,s)}) (for s = 1, s = 2, or both) and denote the optimal solutions and optimal values by (y^{q(k,s)}, LBV_{q(k,s)}). If possible, update the best available upper bound H_k = \min\{H_k, f_0(y^{q(k,s)})\}. If LBV_{q(k,s)} > H_k, remove the corresponding node.

Step 4 (Convergence test): Fathom any nonimproving nodes by setting R_{k+1} = R_k - \{q \in R_k : LBV_q \ge H_k - \varepsilon\}. If R_{k+1} = \emptyset, terminate with H_k as the optimal value and the points y^*(s), s \in \Lambda, as global solutions, where \Lambda = \{s : f_0(y^*(s)) = H_k\}; otherwise set k = k + 1.

Step 5: Set the lower bound LBV(k) = \min\{LBV_q : q \in R_k\}, select an active node q(k) \in \mathrm{argmin}\{LBV_q : q \in R_k\}, let y^k = y^{q(k)}, and go to Step 1.
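To convey the mechanics of LBBF compactly, here is a deliberately tiny one-dimensional, unconstrained analogue (so the feasibility check is vacuous and no t, s variables are needed); it uses exactly the linear underestimator of Section 2.2 with the midpoint choice of z^k, and is an illustrative sketch rather than the authors' implementation:

```python
import heapq

# Minimize the nonconvex quadratic f(y) = c + a*y + (h/2)*y^2, h < 0, over [lo0, hi0].
c, a, h = 1.0, 2.0, -3.0
f = lambda y: c + a * y + 0.5 * h * y * y
lam = 0.01 if h >= 0 else abs(h) + 0.01            # rule (1)
f1 = lambda y: 0.5 * lam * y * y + f(y)            # convex part f^1
df1 = lambda y: lam * y + a + h * y                # derivative of f^1

def lower_bound(lo, hi):
    z = 0.5 * (lo + hi)                            # z^k = box midpoint, as in (15)
    # Lf(y) = f^1(z) + f^1'(z)(y - z) - (lam/2)*Uq(y), linear in y
    Lf = lambda y: f1(z) + df1(z) * (y - z) - 0.5 * lam * ((lo + hi) * y - lo * hi)
    return min(Lf(lo), Lf(hi))                     # linear, so attained at an endpoint

eps, lo0, hi0 = 1e-6, -1.0, 2.0
UB, best = f(0.5 * (lo0 + hi0)), 0.5 * (lo0 + hi0) # incumbent from the midpoint
heap = [(lower_bound(lo0, hi0), lo0, hi0)]         # active nodes, keyed by LBV_q
while heap:
    LB, lo, hi = heapq.heappop(heap)
    if LB >= UB - eps:                             # fathom nonimproving nodes (Step 4)
        continue
    mid = 0.5 * (lo + hi)                          # bisection branching (Section 3.1)
    if f(mid) < UB:
        UB, best = f(mid), mid                     # update the upper bound (Step 3)
    for sub_lo, sub_hi in ((lo, mid), (mid, hi)):
        lb = lower_bound(sub_lo, sub_hi)
        if lb < UB - eps:
            heapq.heappush(heap, (lb, sub_lo, sub_hi))
print(best, UB)   # approaches y = -1, f = -2.5, the global minimum of this instance
```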


3.3. Convergence analysis

Theorem 6 (Convergence of algorithm LBBF).

(i) If the algorithm terminates finitely at iteration k, then y^k is a global optimal solution of problem LSP.

(ii) Otherwise, the algorithm generates an infinite sequence of iterations such that, along any infinite branch of the branch and bound tree, any accumulation point of the sequence \{y^k\} is a global optimal solution of problem LSP. Moreover, H_k is nonincreasing and LBV(k) is nondecreasing, and they satisfy

\lim_{k \to \infty} H_k = \lim_{k \to \infty} LBV(k) = f^*, \qquad (16)

where f^* stands for the optimal value of problem LSP.

Proof. (i) If the algorithm terminates finitely at iteration k, then H_k = LBV(k). Thus part (i) follows from the definitions of H_k, LBV(k) and y^k.

(ii) Assume now that the algorithm is infinite. Then it must generate an infinite sequence \{Y^k\} of rectangles whose volumes decrease to zero; by the branching procedure, this sequence shrinks to a singleton. On the other hand, from the construction one sees that the sequence \{H_k\} is nonincreasing while the sequence \{LBV(k)\} is nondecreasing, and therefore \{H_k - LBV(k)\} is a nonincreasing sequence of nonnegative numbers, which must tend to zero. This and LBV(k) \le f^* \le H_k for every k imply that \lim_{k \to \infty} H_k = \lim_{k \to \infty} LBV(k) = f^*. Since y^k is feasible for problem LSP and H_k = f_0(y^k), any cluster point of the sequence \{y^k\} is feasible for problem LSP and attains the value f^*, i.e., solves the problem. The theorem is proved. \square

4. Numerical experiment

There are at least three computational issues that may arise in using this global optimization algorithm. The first is how to decompose the quadratic function f_m(y) into the difference of two convex functions, i.e., how to compute the positive number \lambda_m such that \lambda_m I + H_m is positive definite; such a \lambda_m can be computed by the quite inexpensive implicitly restarted Lanczos method [15]. The second issue concerns Assumption 1 for problem NSP: positive scalars l_j, L_j, u_j and U_j must be available such that 0 < l_j \le n_j(x) \le L_j and 0 < u_j \le d_j(x) \le U_j for all x \in X. The algorithm GBQ has established itself as an important tool for finding bounds on nonconvex quadratic programs over rectangular sets [18], so here we use GBQ to obtain or estimate the values of l_j, L_j, u_j and U_j on the given initial rectangle X. The third issue is how to solve the linear relaxation programming; in this paper we adopt the simplex algorithm, so the implementation of the proposed global algorithm depends upon the simplex algorithm.

The algorithms were coded in C++ under a Windows system with convergence tolerance \varepsilon = 10^{-6}. In this section we present computational tests of the performance of our algorithm on a number of sample problems chosen from [19]. The numerical experiments show that our method is superior to the method in [19]; the computational results are reported in the tables below, where x^* denotes the optimal solution, f^* the corresponding optimal function value, and IN the number of iterations.
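As an illustration of the first of these issues, SciPy exposes ARPACK's implicitly restarted Lanczos iteration, so the computation of \lambda_m by rule (1) can be sketched as follows (a Python illustration; the paper's implementation is C++):

```python
import numpy as np
from scipy.sparse.linalg import eigsh   # ARPACK: implicitly restarted Lanczos [15]

rng = np.random.default_rng(3)
n = 200
H = rng.standard_normal((n, n)); H = 0.5 * (H + H.T)     # symmetric, indefinite

# smallest algebraic eigenvalue via Lanczos, then rule (1)
lam_min = eigsh(H, k=1, which="SA", return_eigenvectors=False)[0]
lam_m = 0.01 if lam_min >= 0 else abs(lam_min) + 0.01
assert np.linalg.eigvalsh(lam_m * np.eye(n) + H)[0] > 0  # lam_m*I + H_m is PD
```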

Example 7.

\begin{cases}
\min\ f(x) = \dfrac{x_1^2 + x_2^2 + 2 x_1 x_3}{x_1 + 1.0} + \dfrac{-x_2 + 5 x_3}{x_2^2 - 2 x_2 x_3 + x_3^2 - 8 x_2 + 20}, \\
\text{s.t.}\ x_1^2 + x_2^2 + x_3^2 \le 5, \\
\phantom{\text{s.t.}}\ (x_1 - 2)^2 + x_2^2 + x_3^2 \le 5, \\
\phantom{\text{s.t.}}\ X = \{x : 1 \le x_1 \le 3,\ 1 \le x_2 \le 3,\ 1 \le x_3 \le 2\}.
\end{cases}

Table 1
Test results.

        This paper                                       [19]
Pro.    x*                            f*      IN         x*                            f*      IN
1       (1.62988, 1.64453, 1.0025)    0.8929  108        (1.32547, 1.42900, 1.20109)   0.8792  84
2       (1.66649, 2.99998)            1.8831  114        (1.66598, 2.99899)            1.8867  142


Table 2
Test results.

                      This paper    [19]
n     M     T     p   t             t
3     2     1     2   0.358         1.547
5     4     2     2   1.0           13.21
8     6     3     3   22.5          28.857
8     5     4     4   26.247        19.256
9     5     5     5   55.638        101.587

Example 8.

\begin{cases}
\min\ f(x) = \dfrac{x_1^2 + 2 x_2^2 - 3 x_1 - 3 x_2 - 10}{x_1^2 - 2 x_1 + x_2^2 - 8 x_2 + 20} - \dfrac{x_2}{x_1 + 1.0}, \\
\text{s.t.}\ 2 x_1 + x_2^2 - 6 x_2 \le 0, \\
\phantom{\text{s.t.}}\ 3 x_1 + x_2 \le 8, \\
\phantom{\text{s.t.}}\ x_1^2 - x_2 - x_1 \le 0, \\
\phantom{\text{s.t.}}\ X = \{x : 1 \le x_i \le 3,\ i = 1, 2\}.
\end{cases}

Example 9. Consider the following problem

\begin{cases}
\min\ f_0(x) := \sum_{j=1}^{T} \dfrac{\tfrac{1}{2}\langle x, Q_{j1} x \rangle + \langle x, c_{j1} \rangle}{\tfrac{1}{2}\langle x, Q_{j2} x \rangle + \langle x, c_{j2} \rangle} - \sum_{j=T+1}^{p} \dfrac{\tfrac{1}{2}\langle x, Q_{j1} x \rangle + \langle x, c_{j1} \rangle}{\tfrac{1}{2}\langle x, Q_{j2} x \rangle + \langle x, c_{j2} \rangle}, \\
\text{s.t.}\ f_m(x) := \tfrac{1}{2}\langle x, Q_m x \rangle + \langle x, c_m \rangle \le b_m, \quad m = 1, \dots, M, \\
\phantom{\text{s.t.}}\ 0 \le x_i \le 10, \quad i = 1, \dots, n.
\end{cases}
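A hypothetical generator for these random instances (the coefficient ranges follow the description in the next paragraph; the symmetrization step is our addition, since the data matrices are required to be symmetric):

```python
import numpy as np

def make_instance(n, M, p, rng):
    """Random Example 9 data: Q_{j1}, Q_{j2}, c_{j1}, c_{j2} in [0, 1],
    Q_m, c_m in [-1, 0], b_m in [-300, -90]."""
    sym = lambda Q: 0.5 * (Q + Q.transpose(0, 2, 1))   # symmetrize each matrix
    Q1 = sym(rng.uniform(0.0, 1.0, (p, n, n)))
    Q2 = sym(rng.uniform(0.0, 1.0, (p, n, n)))
    c1 = rng.uniform(0.0, 1.0, (p, n))
    c2 = rng.uniform(0.0, 1.0, (p, n))
    Qm = sym(rng.uniform(-1.0, 0.0, (M, n, n)))
    cm = rng.uniform(-1.0, 0.0, (M, n))
    bm = rng.uniform(-300.0, -90.0, M)
    return Q1, Q2, c1, c2, Qm, cm, bm

Q1, Q2, c1, c2, Qm, cm, bm = make_instance(n=3, M=2, p=2, rng=np.random.default_rng(4))
```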

All elements of Q_{j1}, Q_{j2} (j = 1, \dots, p) were randomly generated between 0 and 1; all elements of Q_m (m = 1, \dots, M) were randomly generated between -1 and 0; all elements of c_{j1}, c_{j2} (j = 1, \dots, p) were randomly generated between 0 and 1; all elements of c_m (m = 1, \dots, M) were randomly generated between -1 and 0; and the b_m were randomly generated between -300 and -90. The results are reported in Table 2, where n stands for the dimension of the problem, M for the number of constraints, t for the CPU time in seconds, and T and p for the numbers of ratios entering the objective function with positive and negative sign, respectively. From Tables 1 and 2 we can see that for most problems our method is superior to the method in [19].

5. Concluding remarks

In this paper a global optimization algorithm is proposed for solving fractional programming problems with quadratic functions. First, an equivalent problem LSP is derived. Then the linear relaxation programming is obtained by underestimating the objective and constraint functions of problem LSP with linear functions. The algorithm is shown to attain finite \varepsilon-convergence to the global minimum through the successive refinement of a linear relaxation of the feasible region and/or of the objective function and the subsequent solution of a series of linear programming problems. Numerical tests show that the algorithm is efficient.

References

[1] H. Konno, M. Inori, Bond portfolio optimization problems and their application to index tracking: a partial optimization approach, J. Oper. Res. Soc. Jpn. 39 (1996) 295–306.
[2] C.T. Chang, An approximate approach for fractional programming with absolute-value functions, Appl. Math. Comput. 161 (2005) 171–179.
[3] C.S. Colantoni, R.P. Manes, A. Whinston, Programming, profit rates and pricing decisions, Account. Rev. 44 (1969) 467–481.
[4] W.C. Davidon, Conic approximation and collinear scaling for optimizers, SIAM J. Numer. Anal. 17 (1980) 268–281.
[5] Wenyu Sun, Ya-xiang Yuan, A conic trust-region method for nonlinearly constrained optimization, Ann. Oper. Res. 103 (2001) 175–191.
[6] K.A. Ariyawansa, Deriving collinear scaling algorithms as extensions of quasi-Newton methods and the local convergence of DFP and BFGS related collinear scaling algorithms, Math. Program. 49 (1990) 23–48.
[7] T. Ibaraki, Parametric approaches to fractional programs, Math. Program. 26 (1983) 345–362.
[8] S.S. Chadha, Fractional programming with absolute-value functions, Eur. J. Oper. Res. 141 (2002) 233–238.
[9] T. Kuno, A branch-and-bound algorithm for maximizing the sum of several linear ratios, J. Global Optim. 22 (2002) 155–174.
[10] H. Konno, N. Abe, Minimization of the sum of three linear fractional functions, J. Global Optim. 15 (1999) 419–432.
[11] H.P. Benson, Using concave envelopes to globally solve the nonlinear sum of ratios problem, J. Global Optim. 22 (2002) 343–364.
[12] H.P. Benson, Global optimization algorithm for the nonlinear sum of ratios problem, J. Optim. Theor. Appl. 112 (2002) 1–29.
[13] R.W. Freund, F. Jarre, Solving the sum-of-ratios problem by an interior-point method, J. Global Optim. 19 (2001) 83–102.
[14] R. Horst, H. Tuy, Global Optimization: Deterministic Approaches, Springer, Berlin, 1990.

[15] D.C. Sorensen, Implicit application of polynomial filters in a k-step Arnoldi method, SIAM J. Matrix Anal. Appl. 13 (1992) 357–385.
[16] J.Y. Gotoh, H. Konno, Maximization of the ratio of two convex quadratic functions over a polytope, Comput. Optim. Appl. 20 (2001) 43–60.
[17] H.P. Benson, On the global optimization of sums of linear fractional functions over a convex set, J. Optim. Theor. Appl. 121 (2004) 19–39.
[18] L.T.H. An, P.D. Tao, A branch and bound method via d.c. optimization algorithms and ellipsoidal technique for box constrained nonconvex quadratic problems, J. Global Optim. 13 (1998) 171–206.
[19] S.J. Qu, K.C. Zhang, J.K. Zhao, An efficient algorithm for globally minimizing sum of quadratic ratios problem with nonconvex quadratic constraints, Appl. Math. Comput. 189 (2007) 1624–1636.