
European Journal of Operational Research 173 (2006) 351–369 www.elsevier.com/locate/ejor

Continuous Optimization

Fractional programming with convex quadratic forms and functions

Harold P. Benson

Department of Decision and Information Sciences, University of Florida, Gainesville, FL 32611, USA

Received 7 March 2004; accepted 7 February 2005; available online 13 June 2005

Abstract

This article is concerned with two global optimization problems (P1) and (P2). Each of these problems is a fractional programming problem involving the maximization of a ratio of a convex function to a convex function, where at least one of the convex functions is a quadratic form. First, the article presents and validates a number of theoretical properties of these problems. Included among these properties is the result that, under a mild assumption, any globally optimal solution for problem (P1) must belong to the boundary of its feasible region. Also among these properties is a result that shows that problem (P2) can be reformulated as a convex maximization problem. Second, the article presents for the first time an algorithm for globally solving problem (P2). The algorithm is a branch and bound algorithm in which the main computational effort involves solving a sequence of convex programming problems. Convergence properties of the algorithm are presented, and computational issues that arise in implementing the algorithm are discussed. Preliminary indications are that the algorithm can be expected to provide a practical approach for solving problem (P2), provided that the number of variables is not too large.
© 2005 Elsevier B.V. All rights reserved.

Keywords: Global optimization; Fractional programming; Quadratic form; Branch and bound

1. Introduction

Consider the single-ratio fractional program

\[
\sup\ \frac{n(x)}{d(x)}, \quad \text{s.t. } x \in X, \tag{1}
\]


where X is a nonempty, closed convex set in ℝⁿ and n and d are real-valued functions defined on X such that d(x) > 0 for all x ∈ X.

Systematic studies and applications of single-ratio fractional programs generally began to appear in the literature in the early 1960s. Since then, a rich body of work on the classification, theory, solution and applications of these problems has been published. For an overview of this work, see, for instance, the monographs by Craven [5], Martos [21] and Stancu-Minasian [27], the surveys contained in the articles by Frenk and Schaible [10,11] and Schaible [23,25] and references therein.

Suppose that, in addition to the assumptions given above for problem (1), n is concave on X, d is convex on X and, if d is not affine, then n is nonnegative on X. Then problem (1) is called a concave fractional program. A concave fractional program shares many important properties with general concave nonlinear programs. In particular, any local maximum for a concave fractional program is a global maximum, and, if n and d are differentiable on an open set containing X, then a solution of the Karush–Kuhn–Tucker optimality conditions is a maximum [1,21]. In addition, the objective function of a concave fractional program is semistrictly quasiconcave [1], and, by using an appropriate variable transformation, a concave fractional program can be formulated as an equivalent concave program [10,26]. Because of these properties, there are a number of approaches available for globally solving concave fractional programs. In fact, by transforming a concave fractional program (1) into an equivalent concave program, a great number of the methods of concave programming become available for solving problem (1).

When the additional assumptions for problem (1) given in the previous paragraph do not hold, problem (1) is called a nonconcave fractional program. Nonconcave fractional programs arise in certain important applications. Included among these applications, for instance, are certain portfolio selection problems [19,22], stochastic decision making problems [27] and problems in economics [25]. In a nonconcave fractional program, a local maximum need not be a global maximum, and in general, many locally optimal solutions will exist that are not globally optimal solutions. Nonconcave fractional programs therefore fall into the domain of global optimization [16,28]. Most of the theoretical and algorithmic work in fractional programming applies only to concave fractional programs or to special cases of concave fractional programs. For this reason, Frenk and Schaible [10] and Schaible [24] have encouraged that more research be done into the solutions of nonconcave fractional programs.

In this article, we are concerned with two nonconcave fractional programming problems. Each of these fractional programs involves maximizing a ratio of a convex function to a convex function, where at least one of the convex functions is a quadratic form. Let Q be an n × n positive semidefinite matrix of real numbers. Assume without loss of generality that Q is symmetric. The first problem (P1) of concern is given by

\[
\max\ q_1(x) := \frac{x^T Q x}{x^T P x}, \quad \text{s.t. } x \in X, \tag{P1}
\]

where P is an n × n positive definite matrix of real numbers, and X is a nonempty, compact convex set in ℝⁿ such that 0 ∉ X. We assume without loss of generality that P is symmetric. The second problem (P2) of interest is the problem

\[
\max\ q_2(x) := \frac{x^T Q x}{g(x)}, \quad \text{s.t. } x \in X, \tag{P2}
\]

where X is a nonempty, compact convex set in ℝⁿ, and g: ℝⁿ → ℝ is a convex function such that g(x) > 0 for all x ∈ X. Notice that problem (P1) is a special case of problem (P2).

Our study of problem (P1) is motivated by the work of Lo and MacKinlay [19] and of Gotoh and Konno [13]. In [19], Lo and MacKinlay formulate a "maximally predictable portfolio" problem that has the form of problem (P1) with X equal to a polyhedron. The goal of the maximally predictable portfolio problem is to pick a portfolio of investments that maximizes the ratio of the variability in the portfolio return predicted


by a linear regression model to the variability of the portfolio return. Lo and MacKinlay argue that a portfolio that maximizes this ratio gives important new insights into asset return findings that are based upon less formal methods. In [13], Gotoh and Konno propose an algorithm for globally solving problem (P1) under the additional assumption that X is polyhedral. This algorithm implements the fractional programming approach of Dinkelbach [6] by using Ibaraki's binary search algorithm [17]. In general, the main computational effort required in the Gotoh–Konno algorithm arises from the need to solve a sequence of separable, nonconvex optimization problems by a global optimization algorithm of Falk and Soland [9]. We study problem (P2) because it is a natural extension of the nonconvex fractional program (P1), and because of the need expressed by others for more research results for nonconvex fractional programs [10,24].

The purpose of this article is two-fold. First, the article presents and validates some theoretical properties of problems (P1) and (P2). Included among these properties is the result that, under a mild assumption, any globally optimal solution for problem (P1) must belong to the boundary of X. Also among these properties is a result showing that problem (P2) can be reformulated as a maximization of a convex function over a nonempty, compact convex set. The second purpose of the article is to propose an algorithm for globally solving problem (P2). This algorithm uses the branch and bound approach. In contrast to the Gotoh–Konno algorithm, the branch and bound algorithm calls for solving a single, nonconvex program rather than a sequence of nonconvex programs. Indications are that the branch and bound algorithm can be expected to provide a practical approach for globally solving problem (P2), at least for values of n that are not too large.

The article is organized as follows. Theoretical results for problems (P1) and (P2) are given in Section 2. Section 3 presents and discusses the branch and bound algorithm for globally solving problem (P2). In Section 4, the solution of a sample problem via the algorithm is described. Some concluding observations are given in Section 5.

2. Theoretical properties

First, we concretely demonstrate that problems (P1) and (P2) are global optimization problems. To help accomplish this, we need the following definitions and result.

Definition 1. Let f be a real-valued function defined on a convex set C, where C ⊆ ℝⁿ. Then f is a quasiconcave (quasiconvex) function on C when

\[
f[\lambda x^1 + (1-\lambda)x^2] \ge \min[f(x^1), f(x^2)] \quad \bigl(f[\lambda x^1 + (1-\lambda)x^2] \le \max[f(x^1), f(x^2)]\bigr)
\]

for every x¹, x² ∈ C and λ such that 0 ≤ λ ≤ 1.

Quasiconcave functions belong to the set of generalized concave functions, i.e., they possess some, but not all, of the properties of concave functions. In fact, quasiconcave functions constitute the weakest class of generalized concave functions. Similar comments apply to quasiconvex functions [1]. The next result is well known (see, e.g., [20]).

Proposition 1. Let f be a real-valued function defined on the convex set C ⊆ ℝⁿ. Then f is quasiconcave (quasiconvex) on C if and only if {x ∈ C | f(x) ≥ α} ({x ∈ C | f(x) ≤ α}) is a convex set for each α ∈ ℝ.


The following two examples demonstrate that the objective function of problem (P1) is not, in general, quasiconcave over X, and it is not, in general, quasiconvex over X.

Example 1. Consider problem (P1) with

X = {(x₁, x₂) ∈ ℝ² | x₁ + 4x₂ ≥ 2, −2 ≤ x₁ ≤ 2, x₂ ≤ 2},

\[
Q = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad
P = \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}.
\]

Let

LU = {x ∈ X | q₁(x) ≥ 5/6}.

Then, by Proposition 1, since (2, 1), (−2, 1) ∈ LU but (0, 1) ∉ LU, q₁ is not quasiconcave on X.

Example 2. In problem (P1), let

X = {(x₁, x₂) ∈ ℝ² | x₁ + x₂ ≥ 1, xⱼ ≤ 4, j = 1, 2},

\[
Q = \begin{bmatrix} 1 & 2 \\ 2 & 6 \end{bmatrix}, \qquad
P = \begin{bmatrix} 1 & 0 \\ 0 & 6 \end{bmatrix}.
\]

Consider

LL = {x ∈ X | q₁(x) ≤ 1}.

Then (1, 0), (0, 1) ∈ LL, but (1/2, 1/2) ∉ LL. Therefore, by Proposition 1, q₁(x) is not quasiconvex on X.

From Examples 1 and 2, we surmise that the nonconcave fractional programs (P1) and (P2) do not share the basic properties of concave maximization or, even, of convex maximization problems. For instance, we do not expect problems (P1) and (P2) to have the property that any locally maximal solution is globally maximal. In fact, as we shall see, they do not have this property, and a stronger statement can be made. To make this statement, we need the following definition.

Definition 2. Let f be a real-valued function defined on a set C ⊆ ℝⁿ. A point x* ∈ C is called a strict locally maximal solution of f over C when there exists a δ > 0 such that for every x ≠ x* that belongs to the set N_δ(x*) ∩ C, where

N_δ(x*) = {x ∈ ℝⁿ | ‖x − x*‖ < δ},

we have f(x) < f(x*).

From [1], if f is a quasiconcave function defined on a convex set C ⊆ ℝⁿ, then any strict locally maximal solution of f over C is a (strict) globally maximal solution of f on C. However, for problems (P1) and (P2), the corresponding result does not hold, even if X is polyhedral. For instance, in Example 1, it can be shown that x̄ = (−2, 1) is a strict locally maximal solution for q₁(x) over X with q₁(x̄) = 5/6. However, since q₁(2, 0) = 1 and (2, 0) ∈ X, x̄ is not a globally maximal solution of q₁ over X. We have thus shown that in problems (P1) and (P2), locally maximal solutions, even in the strict sense, need not be globally maximal solutions. From [1], we may view the reason for this to be the fact that q₁(x) and q₂(x) need not be quasiconcave functions on X.
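The failure of quasiconcavity and quasiconvexity in these two examples is easy to verify numerically. The following sketch (a NumPy check using exactly the data of Examples 1 and 2) evaluates q₁ at the level-set points cited above:

```python
import numpy as np

def q1(x, Q, P):
    """Ratio of quadratic forms x^T Q x / x^T P x."""
    x = np.asarray(x, dtype=float)
    return (x @ Q @ x) / (x @ P @ x)

# Example 1: q1 equals 5/6 at (2,1) and (-2,1) but only 1/2 at their
# midpoint (0,1), so the upper level set LU is not convex (q1 is not
# quasiconcave on X).
Q, P = np.diag([1.0, 1.0]), np.diag([1.0, 2.0])
for x in [(2, 1), (-2, 1), (0, 1)]:
    print(x, q1(x, Q, P))            # 5/6, 5/6, 1/2

# Example 2: q1 equals 1 at (1,0) and (0,1) but 11/7 > 1 at the midpoint
# (1/2,1/2), so the lower level set LL is not convex (q1 is not
# quasiconvex on X).
Q = np.array([[1.0, 2.0], [2.0, 6.0]])
P = np.diag([1.0, 6.0])
for x in [(1, 0), (0, 1), (0.5, 0.5)]:
    print(x, q1(x, Q, P))            # 1, 1, ~1.571
```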


For certain classes of global optimization problems, locally or globally optimal solutions are guaranteed to exist on the boundaries or on certain subsets of the boundaries of their feasible regions. For instance, if a function g is continuous and quasiconvex on ℝⁿ, then the global maximum of g over a nonempty, compact convex set Y in ℝⁿ is attained at some extreme point of Y [15]. However, for problem (P2), there is no guarantee that the global maximum is attained at an extreme point of X, or even on the boundary of X. This is shown by the following example. For any set Y, let ∂Y and (int Y) denote the boundary and the interior of Y, respectively.

Example 3. In problem (P2), let

X = {(x₁, x₂) ∈ ℝ² | 0 ≤ xⱼ ≤ 2, j = 1, 2},

\[
Q = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},
\]

and, for each (x₁, x₂) ∈ ℝ², define g(x₁, x₂) by

g(x₁, x₂) = x₁⁴ + x₂⁴ + 1.

Then (1, 1) ∈ (int X) and q₂(1, 1) = 2/3. For any a such that 0 ≤ a ≤ 2, it is not difficult to show that q₂(a, 0), q₂(0, a), q₂(2, a) and q₂(a, 2) are all less than 2/3. This implies that, in this problem, the global maximum of q₂(x₁, x₂) over X is not attained on ∂X.
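A brute-force check of Example 3 (a small NumPy sketch; the grid resolution is arbitrary) confirms that every boundary value of q₂ stays below the interior value q₂(1, 1) = 2/3:

```python
import numpy as np

# q2 for Example 3: Q = I, g(x1,x2) = x1^4 + x2^4 + 1.
q2 = lambda x1, x2: (x1**2 + x2**2) / (x1**4 + x2**4 + 1)

print(q2(1.0, 1.0))                   # 2/3, attained at an interior point
grid = np.linspace(0.0, 2.0, 2001)
boundary_max = max(
    max(q2(a, 0.0), q2(0.0, a), q2(2.0, a), q2(a, 2.0)) for a in grid
)
print(boundary_max < 2/3)             # True: the maximum is not on the boundary
```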

For problem (P1), certain guarantees exist concerning the locations of globally and locally optimal solutions. To help establish the first of these guarantees, we need the following result. For a proof, see Proposition 3.18 and its proof in [28].

Proposition 2. Let M be an n × n symmetric indefinite matrix of real numbers, and let Y ⊆ ℝⁿ be a nonempty, compact set. Then any globally optimal solution to the problem

\[
\max\ y^T M y, \quad \text{s.t. } y \in Y
\]

must belong to ∂Y.

Theorem 1. Let v₁ denote the globally optimal value of problem (P1). Suppose that

\[
v_1 < \max\ q_1(x), \quad \text{s.t. } x \in \mathbb{R}^n \setminus \{0\}. \tag{2}
\]

Then any globally optimal solution for problem (P1) must belong to ∂X.

Proof. Let x* be a globally optimal solution for problem (P1). Then, from Dinkelbach [6], x* is a globally optimal solution to the problem

\[
\max\ x^T Q x - v_1 x^T P x, \quad \text{s.t. } x \in X.
\]

Notice that this problem may be written

\[
\max\ x^T (Q - v_1 P) x, \quad \text{s.t. } x \in X. \tag{3}
\]

From Gantmacher [12],

\[
\bar{\lambda} = \max\ q_1(x), \quad \text{s.t. } x \in \mathbb{R}^n \setminus \{0\},
\]


where λ̄ is the largest eigenvalue of the matrix P⁻¹Q. Thus, by assumption (2), v₁ < λ̄. From Gotoh and Konno [13], this implies that (Q − v₁P) is an indefinite matrix. Since x* is a globally optimal solution to the problem (3), by Proposition 2, this implies that x* ∈ ∂X. □

Remark 1. Assumption (2) in Theorem 1 assumes that the condition x ∈ X in problem (P1) is essential in the sense that if this condition were removed, then values for q₁(x) larger than v₁ could be achieved. Thus, Theorem 1 states that in problem (P1), whenever the condition x ∈ X is essential, every globally optimal solution for problem (P1) must belong to ∂X.

Remark 2. Let T = P⁻¹Q. As explained in the proof of Theorem 1,

\[
\bar{\lambda} = \max\ q_1(x), \quad \text{s.t. } x \in \mathbb{R}^n \setminus \{0\},
\]

where λ̄ is the largest eigenvalue of the matrix T. Furthermore, from Gantmacher [12], λ̄ is attained only by eigenvectors associated with the eigenvalue λ̄ of T. To check whether or not assumption (2) holds in Theorem 1, we may therefore proceed by computing λ̄ and then solving the convex programming problem

\[
\max\ x_1, \quad \text{s.t. } (T - \bar{\lambda} I)x = 0,\ x \in X,
\]

where I denotes the n × n identity matrix. This problem either has an optimal solution or it is infeasible. If the former holds, then assumption (2) fails, while if the latter holds, then assumption (2) holds.
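As a small illustration of this check, λ̄ and its eigenvectors can be computed from the symmetric generalized eigenproblem Qw = λPw, which SciPy's `eigh` handles directly. The sketch below uses the data of Remark 3 that follows, for which assumption (2) fails:

```python
import numpy as np
from scipy.linalg import eigh

def largest_generalized_eigenpair(Q, P):
    """Largest eigenvalue of T = P^{-1}Q and an associated eigenvector,
    obtained from the symmetric generalized problem Q w = lam P w."""
    lams, vecs = eigh(Q, P)            # eigenvalues in ascending order
    return lams[-1], vecs[:, -1]

# Data of Remark 3: Q = diag(1, 1/2), P = I, so lam_bar = 1 and the
# eigenvectors associated with lam_bar span the x1-axis.
Q = np.diag([1.0, 0.5])
P = np.eye(2)
lam_bar, w = largest_generalized_eigenpair(Q, P)
print(lam_bar, w)                      # 1.0, a multiple of (1, 0)

# Assumption (2) fails iff the convex program max x1 s.t.
# (T - lam_bar I)x = 0, x in X is feasible; with X = [1,2] x [-1,2]
# the x1-axis meets X, so assumption (2) fails here.
```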

Remark 3. Theorem 1 is not valid if assumption (2) is deleted. To see this, consider problem (P1) with

X = {(x₁, x₂) ∈ ℝ² | 1 ≤ x₁ ≤ 2, −1 ≤ x₂ ≤ 2},

\[
Q = \begin{bmatrix} 1 & 0 \\ 0 & \tfrac{1}{2} \end{bmatrix}, \qquad
P = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.
\]

In this case, λ̄ = 1 is the largest eigenvalue of T, and the eigenvectors associated with λ̄ are all points in ℝ² of the form (x₁, 0), where x₁ ≠ 0. Therefore, in this case,

\[
1 = \max\ q_1(x), \quad \text{s.t. } x \in \mathbb{R}^2 \setminus \{0\}, \tag{4}
\]

and the maximum in (4) is attained by any point in ℝ² of the form (x₁, 0), where x₁ ≠ 0. From this it is easy to see that the globally optimal solution set for the problem (P1) is given by {(x₁, 0) | 1 ≤ x₁ ≤ 2}. Notice that this set contains many points in (int X), so that in this problem, the conclusion of Theorem 1 does not hold. This is because in this problem, v₁ = 1, so that, by (4), assumption (2) does not hold.

Remark 4. The example given in Remark 3 also shows that problems (P1) and (P2) need not have globally optimal solutions that are extreme points of X.

Even without assumption (2), there must exist a globally optimal solution for problem (P1) that belongs to ∂X. In fact, we have the following stronger result.

Theorem 2. Let X′ be a nonempty compact set in ℝⁿ such that 0 ∉ X′. Let Q and P be n × n, symmetric positive semidefinite and positive definite matrices, respectively. Then any locally optimal value for the maximum of q₁(x) over X′ is attained by at least one point that belongs to ∂X′.


Proof. Let v̄₁ denote a locally optimal value for the maximum of q₁(x) over X′, and let x̄ ∈ X′ satisfy q₁(x̄) = v̄₁. If x̄ ∈ ∂X′, then the desired result follows. Therefore, suppose that x̄ ∉ ∂X′. Then x̄ ∈ (int X′). Consider the set F given by

F = {β ∈ ℝ | β ≥ 0, (1 + β)x̄ ∈ X′}.

Since x̄ ∈ (int X′), F ≠ ∅. Also, because X′ is compact, it is easy to show that F is compact. Therefore, we may choose an optimal solution β̄ > 0 to the problem

\[
\max\ \beta, \quad \text{s.t. } \beta \in F. \tag{5}
\]

Notice that (1 + β̄)x̄ ∈ ∂X′, because if this were not the case, then for some ε > 0, (1 + β̄ + ε)x̄ would belong to X′, which contradicts the optimality of β̄ in (5). Also, by the definition of q₁(x), q₁[(1 + β̄)x̄] = q₁(x̄) = v̄₁. Thus, the point (1 + β̄)x̄ has the desired properties, and the proof is complete. □

It can be shown that for a concave fractional program (1) in which n(x) and d(x) are continuous functions, the globally optimal solution set is a convex set. Since problems (P1) and (P2) are nonconcave fractional programs, we may not expect this property to hold for these problems. Indeed, it does not, as can be shown by examples. For brevity, however, we omit providing such an example.

Although the objective functions of problems (P1) and (P2) are generally neither quasiconcave nor quasiconvex over X, the next result shows that these problems can be reformulated as maximizations of quasiconvex functions over nonempty, compact convex sets. To show this, let m and M satisfy

\[
m = \min\ g(x), \quad \text{s.t. } x \in X; \qquad M \ge \max\ g(x), \quad \text{s.t. } x \in X,
\]

and consider the problem (P3) given by

\[
\max\ q_3(x, t) := \frac{x^T Q x}{t}, \quad
\text{s.t. } t - g(x) \ge 0,\ \ m \le t \le M,\ \ x \in X, \tag{P3}
\]

where X, g(x) and Q are as given in problem (P2). Then problems (P2) and (P3) are equivalent in the sense of the following result.

Proposition 3. If (x*, t*) ∈ ℝⁿ⁺¹ is a globally optimal solution for problem (P3), then x* is a globally optimal solution for problem (P2). Conversely, if x* is a globally optimal solution for problem (P2), then (x*, t*) ∈ ℝⁿ⁺¹ is a globally optimal solution for problem (P3), where t* = g(x*).

Proof. The proof of this result is straightforward and therefore is omitted. □

Since X is a nonempty, compact convex set in ℝⁿ and g: ℝⁿ → ℝ is a convex function, it is easy to see that the feasible region Z of problem (P3) is a nonempty, compact convex set in ℝⁿ⁺¹. Also, from Propositions 3.30 and 5.20 in [1], since Q is a symmetric, positive semidefinite matrix and g(x) is positive for all x ∈ X, q₃(x, t) is a continuous, quasiconvex function on Z. From [16], this implies that the global maximum of problem (P3) is attained at an extreme point of Z, and that various global optimization algorithms for convex maximization problems, including, for example, outer approximation algorithms, can potentially be effectively used to globally solve problem (P3). By Proposition 3, this provides a potential avenue for globally solving problems (P1) and (P2).


To close this section, we will show that problem (P2) and, therefore, problem (P1), can be reformulated as a maximization of a convex function over a nonempty, compact convex set. To yield a more concrete result, we assume henceforth in problem (P2) that X is given by

\[
X = \{x \in \mathbb{R}^n \mid g_i(x) \le 0,\ i = 1, 2, \ldots, q;\ L \le x \le U\}, \tag{6}
\]

where L, U ∈ ℝⁿ and, for each i = 1, 2, …, q, gᵢ(x) is a convex function on ℝⁿ. The assumption that L ≤ x ≤ U for all x ∈ X can be made without loss of generality, since X is a compact set.

Since Q is an n × n symmetric positive semidefinite matrix of real numbers, it is well known that we may choose an orthonormal basis {w¹, w², …, wⁿ} of ℝⁿ such that for each j ∈ {1, 2, …, n}, wʲ is an eigenvector of Q with eigenvalue αⱼ ≥ 0 [14]. Let W be the n × n matrix with columns w¹, w², …, wⁿ. The matrix W and the eigenvalues αⱼ, j = 1, 2, …, n, of Q can be found, for instance, by using the characteristic polynomial of Q to compute the eigenvalues α₁, α₂, …, αₙ. The corresponding orthonormal eigenvectors w¹, w², …, wⁿ can then be easily found.

Let h(y) and hᵢ(y), i = 1, 2, …, q, be functions defined for each y ∈ ℝⁿ by

\[
h(y) = g(Wy) \tag{7}
\]

and

\[
h_i(y) = g_i(Wy), \quad i = 1, 2, \ldots, q, \tag{8}
\]

respectively.
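In practice the basis W and the eigenvalues αⱼ are obtained numerically rather than from the characteristic polynomial; the following sketch (NumPy, with illustrative data) computes them with `eigh` and verifies that the quadratic form separates in the y-coordinates:

```python
import numpy as np

# Illustrative symmetric positive semidefinite Q.
Q = np.array([[2.0, 1.0],
              [1.0, 2.0]])
alphas, W = np.linalg.eigh(Q)          # columns of W: orthonormal eigenvectors
assert np.all(alphas >= -1e-12)        # eigenvalues alpha_j >= 0

# Under the change of variables x = W y, x^T Q x = sum_j alpha_j y_j^2.
rng = np.random.default_rng(0)
y = rng.standard_normal(2)
x = W @ y
print(np.isclose(x @ Q @ x, np.sum(alphas * y**2)))   # True
```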

Now consider the problem (P4) given by

\[
\max\ q_4(y, t) := \frac{\sum_{j=1}^n \alpha_j y_j^2}{t}, \quad
\text{s.t. } t - h(y) \ge 0,\ \ m \le t \le M,\ \ h_i(y) \le 0,\ i = 1, 2, \ldots, q,\ \ L \le Wy \le U. \tag{P4}
\]

Theorem 3. (a) If (y*, t*) ∈ ℝⁿ⁺¹ is a globally optimal solution for problem (P4), then x* = Wy* is a globally optimal solution for problem (P2). (b) Conversely, if x* is a globally optimal solution for problem (P2), then (y*, t*) ∈ ℝⁿ⁺¹ is a globally optimal solution for problem (P4), where y* = W⁻¹x* and t* = g(x*).

Proof. (a) Let (y*, t*) ∈ ℝⁿ⁺¹ be a globally optimal solution for problem (P4), and let x* = Wy*. Then

\[
t^* - g(x^*) = t^* - g(Wy^*) = t^* - h(y^*) \ge 0, \tag{9}
\]

where the second equation is from (7) and the inequality follows from the feasibility of (y*, t*) in problem (P4). Similarly, it follows from (8) and the feasibility of (y*, t*) in problem (P4) that

\[
g_i(x^*) \le 0, \quad i = 1, 2, \ldots, q. \tag{10}
\]


Since (y*, t*) is feasible in problem (P4) and x* = Wy*, we also know that m ≤ t* ≤ M and L ≤ x* ≤ U. Together with (6), (9) and (10), this implies that (x*, t*) is a feasible solution for problem (P3).

Suppose that (x̄, t̄) is a feasible solution for problem (P3) such that q₃(x̄, t̄) > q₃(x*, t*). Since W is an invertible matrix, we may choose ȳ ∈ ℝⁿ such that Wȳ = x̄. Then

\[
\bar{t} - h(\bar{y}) = \bar{t} - g(W\bar{y}) = \bar{t} - g(\bar{x}) \ge 0, \tag{11}
\]

where the first equation is from (7) and the inequality holds because (x̄, t̄) is a feasible solution for problem (P3). Similarly, from (6), (8) and the feasibility of (x̄, t̄) in problem (P3), it follows that

\[
h_i(\bar{y}) \le 0, \quad i = 1, 2, \ldots, q. \tag{12}
\]

By (6), since x̄ = Wȳ and (x̄, t̄) is feasible in problem (P3), we also know that m ≤ t̄ ≤ M and L ≤ Wȳ ≤ U. Together with (11) and (12), this implies that (ȳ, t̄) is a feasible solution for problem (P4). Since Wȳ = x̄, by the definitions of q₃ and q₄, we obtain that

\[
q_3(\bar{x}, \bar{t}) = \frac{\bar{x}^T Q \bar{x}}{\bar{t}}
= \frac{\bar{y}^T (W^T Q W) \bar{y}}{\bar{t}}
= \frac{\bar{y}^T (W^{-1} Q W) \bar{y}}{\bar{t}}
= \frac{\sum_{j=1}^n \alpha_j \bar{y}_j^2}{\bar{t}} = q_4(\bar{y}, \bar{t}), \tag{13}
\]

where the third equation follows by the choice of W and the fourth equation follows because

\[
W^{-1} Q W = D, \tag{14}
\]

where D is the diagonal matrix with α₁, α₂, …, αₙ along the main diagonal [14]. Similarly, it follows that

\[
q_3(x^*, t^*) = q_4(y^*, t^*). \tag{15}
\]

From (13) and (15), since q₃(x̄, t̄) > q₃(x*, t*), we obtain that q₄(ȳ, t̄) > q₄(y*, t*). But since (ȳ, t̄) is a feasible solution for problem (P4), this contradicts that (y*, t*) is a globally optimal solution for problem (P4). Therefore, the supposition that (x̄, t̄) exists is false, and (x*, t*) is a globally optimal solution for problem (P3). By Proposition 3, x* is a globally optimal solution for problem (P2).

(b) To show this part of the theorem, let x* be a globally optimal solution for problem (P2), let y* = W⁻¹x*, and let t* = g(x*). By Proposition 3, (x*, t*) is a globally optimal solution for problem (P3). Therefore, x* = Wy*, and (x*, t*) is a feasible solution for problem (P3). By the same arguments used in part (a) of this proof for (x̄, t̄) and (ȳ, t̄), where x̄ = Wȳ, this implies that (y*, t*) is a feasible solution for problem (P4).

Suppose that (ȳ, t̄) is a feasible solution for problem (P4) such that q₄(ȳ, t̄) > q₄(y*, t*). Let x̄ = Wȳ. Then, since (ȳ, t̄) is a feasible solution for problem (P4) and x̄ = Wȳ, by the same arguments used in part (a) of this proof for (y*, t*) and (x*, t*), where x* = Wy*, this implies that (x̄, t̄) is a feasible solution for problem (P3). Since x* = Wy*, by the definitions of q₄ and q₃, we see that

\[
q_4(y^*, t^*) = \frac{\sum_{j=1}^n \alpha_j (y_j^*)^2}{t^*}
= \frac{(y^*)^T (W^{-1} Q W) y^*}{t^*}
= \frac{(y^*)^T (W^T Q W) y^*}{t^*}
= \frac{(x^*)^T Q x^*}{t^*} = q_3(x^*, t^*), \tag{16}
\]

where the second equation follows from (14) and the third equation follows by the choice of W. Similarly, it follows that

\[
q_4(\bar{y}, \bar{t}) = q_3(\bar{x}, \bar{t}). \tag{17}
\]

Since q₄(ȳ, t̄) > q₄(y*, t*), (16) and (17) imply that q₃(x̄, t̄) > q₃(x*, t*). This, however, contradicts the global optimality of (x*, t*) in problem (P3), because (x̄, t̄) is a feasible solution for problem (P3). Therefore, the supposition that (ȳ, t̄) exists is false, so that (y*, t*) is a globally optimal solution for problem (P4). □

Let Y ⊆ ℝⁿ⁺¹ denote the feasible region of problem (P4).

Theorem 4. The set Y is a nonempty, compact convex set, and the objective function q₄(y, t) of problem (P4) is a convex function on Y.


Proof. Since g(x) and gᵢ(x), i = 1, 2, …, q, are convex functions on ℝⁿ, it is easy to show via (7) and (8) that h(y) and hᵢ(y), i = 1, 2, …, q, are also convex functions on ℝⁿ. From this, it is easy to see that Y is a closed convex set. Since X ≠ ∅, we may choose a feasible solution (x̄, t̄) for the problem (P3). Suppose that we then choose ȳ ∈ ℝⁿ such that Wȳ = x̄. Then, as shown in the proof of part (a) of Theorem 3, (ȳ, t̄) ∈ Y. Therefore, Y ≠ ∅. By Theorem 1 in [7], this implies that Y is bounded.

Let j ∈ {1, 2, …, n}, and consider the function gⱼ(y, t) defined for each (y, t) ∈ Y by

\[
g_j(y, t) = \frac{y_j^2}{t}. \tag{18}
\]

The numerator of gⱼ(y, t) is the square of a linear function on Y, and the denominator of gⱼ(y, t) is a positive linear function on Y. From [2, p. 119], this implies that gⱼ(y, t) is a convex function on Y. By the choice of j, and since αⱼ ≥ 0, j = 1, 2, …, n, this implies that the function

\[
\sum_{j=1}^n \alpha_j g_j(y, t) \tag{19}
\]

is convex on Y. From (18), (19) is identical to q₄(y, t). □

3. Branch and bound algorithm

We assume henceforth that g(x) is nonconstant on X. When g(x) is constant on X, problem (P2) becomes a standard quadratic convex maximization problem for which many algorithms are available [3,15,16].

By Theorems 3 and 4, to globally solve problem (P2), one can instead globally solve problem (P4), which is a maximization of a convex function over a nonempty, compact convex set. There are a number of general-purpose convex maximization algorithms that can potentially be used to globally solve problem (P4) [3,16]. However, in an attempt to globally solve problem (P4) in as efficient a manner as possible, we propose in this section a specialized branch and bound algorithm for globally solving this problem.

For each j ∈ {1, 2, …, n}, let

\[
LY_j = \min\ y_j, \quad \text{s.t. } h_i(y) \le 0,\ i = 1, 2, \ldots, q,\ \ L \le Wy \le U,
\]

and let

\[
UY_j = \max\ y_j, \quad \text{s.t. } h_i(y) \le 0,\ i = 1, 2, \ldots, q,\ \ L \le Wy \le U.
\]

By using arguments similar to those used in the proof of Theorem 4, it can be seen that for each j ∈ {1, 2, …, n}, LYⱼ and UYⱼ are finite numbers, and each of them is the optimal value of a convex program, which can be solved by standard convex programming methods. Define the hyperrectangle H by

\[
H = H_1 \times H_2 \times \cdots \times H_n,
\]

where for each j ∈ {1, 2, …, n},

\[
H_j = \{(y_j, t_j) \in \mathbb{R}^2 \mid LY_j \le y_j \le UY_j,\ m \le t_j \le M\}.
\]
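When the constraints defining X are linear, the 2n programs defining LYⱼ and UYⱼ are linear programs. Below is a minimal SciPy sketch, assuming for illustration a box X = {L ≤ x ≤ U} so that the only constraints are L ≤ Wy ≤ U; general convex hᵢ would require a convex solver instead of `linprog`:

```python
import numpy as np
from scipy.optimize import linprog

n = 2
_, W = np.linalg.eigh(np.array([[2.0, 1.0], [1.0, 2.0]]))
L, U = np.zeros(n), np.ones(n)

# Write L <= W y <= U as A y <= b.
A = np.vstack([W, -W])
b = np.concatenate([U, -L])

LY, UY = np.zeros(n), np.zeros(n)
for j in range(n):
    c = np.zeros(n); c[j] = 1.0
    LY[j] = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] * n).fun
    UY[j] = -linprog(-c, A_ub=A, b_ub=b, bounds=[(None, None)] * n).fun
print(LY, UY)                          # finite bounds defining H
```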


For each j ∈ {1, 2, …, n}, define fⱼ: Hⱼ → ℝ for each (yⱼ, tⱼ) ∈ Hⱼ by

\[
f_j(y_j, t_j) = \frac{\alpha_j y_j^2}{t_j}.
\]

Let f: H → ℝ be defined for each (y, t) ∈ H, where yᵀ = (y₁, y₂, …, yₙ) ∈ ℝⁿ and tᵀ = (t₁, t₂, …, tₙ) ∈ ℝⁿ, by

\[
f(y, t) = \sum_{j=1}^n f_j(y_j, t_j). \tag{20}
\]

Define YT ⊆ ℝ²ⁿ as the set of all (y, t) ∈ ℝ²ⁿ that satisfy each of the conditions

\[
h_i(y) \le 0,\ i = 1, 2, \ldots, q; \quad L \le Wy \le U; \quad t_1 - h(y) \ge 0; \quad t_1 = t_2 = \cdots = t_n,
\]

where yᵀ = (y₁, y₂, …, yₙ) ∈ ℝⁿ and tᵀ = (t₁, t₂, …, tₙ) ∈ ℝⁿ. To globally solve problem (P4), the branch and bound algorithm to be presented in this section instead globally solves the problem (P5) given by

\[
\max\ f(y, t), \quad \text{s.t. } (y, t) \in YT \cap H. \tag{P5}
\]

By using arguments similar to those used in the proof of Theorem 4, it can be seen that problem (P5) involves the maximization of a convex function over a nonempty, compact convex set. It is also evident that if (y*, t*) is a globally optimal solution for problem (P5), where (y*)ᵀ = (y₁*, y₂*, …, yₙ*) ∈ ℝⁿ and (t*)ᵀ = (t₁*, t₂*, …, tₙ*) ∈ ℝⁿ, then, for any j ∈ {1, 2, …, n}, (y*, tⱼ*) ∈ ℝⁿ⁺¹ is a globally optimal solution for problem (P4). Conversely, if (y*, t**) ∈ ℝⁿ⁺¹ is a globally optimal solution for problem (P4), then (y*, t*) ∈ ℝ²ⁿ is a globally optimal solution for problem (P5), where t* ∈ ℝⁿ has t** as each of its components. Let v denote the optimal value of problem (P5).

Notice that problem (P5) has (n − 1) more variables than problem (P4), as well as some additional constraints. It is hoped, however, that by globally solving problem (P5) instead of problem (P4), the branch and bound algorithm can generate smaller upper bounds. This, in turn, would lead to finding globally optimal solutions for problem (P2) more quickly.

To globally solve problem (P5), the algorithm performs several key operations. These can be classified as either branching operations, bounding operations, or fathoming operations.

3.1. Branching

The branching process iteratively partitions the rectangle H into smaller and smaller partition elements. This process helps the algorithm identify a location in H of a globally optimal solution for problem (P5). We assume that H has full dimension. Later, we will explain how this assumption can be relaxed. To help explain the branching process, we will need the following definition.

Definition 3 [16]. Let V be a subset of ℝᵖ, and let J be a finite set of indices. A set {Mⱼ | j ∈ J} of subsets of V is said to be a partition of V when

(a) V = ⋃_{j∈J} Mⱼ, and
(b) Mᵢ ∩ Mⱼ = ∂Mᵢ ∩ ∂Mⱼ for all i, j ∈ J, i ≠ j, where ∂Mᵢ denotes the (relative) boundary of Mᵢ.


During each iteration of the algorithm, a more refined partition is constructed of a portion of H that has not yet been excluded from consideration. The initial partition P₀ of H consists of up to 2ⁿ partition elements. To explain the structure of these partition elements, for each j ∈ {1, 2, …, n}, let Sⱼ¹ denote the subset of Hⱼ consisting of the simplex with vertices (LYⱼ, M), (LYⱼ, m) and (UYⱼ, m), and let Sⱼ² denote the subset of Hⱼ consisting of the simplex with vertices (LYⱼ, M), (UYⱼ, M) and (UYⱼ, m). Consider now the set P′ of all 2ⁿ partition elements S of the form

\[
S = S_1 \times S_2 \times \cdots \times S_n, \tag{21}
\]

where, for each j ∈ {1, 2, …, n}, either Sⱼ = Sⱼ¹ or Sⱼ = Sⱼ². Notice that since, for each j ∈ {1, 2, …, n}, the set {Sⱼ¹, Sⱼ²} is a partition of Hⱼ, P′ is a partition of H. Also, from (21), it is not difficult to show that each partition element S of P′ is a nonempty, compact polyhedron in ℝ²ⁿ. It can also be shown that each of these partition elements has dimension 2n, since H has full dimension. We refer to these partition elements of P′ as polyhedral partition elements.

Typically, the initial partition P₀ of H equals P′. In some cases, however, the algorithm is able to eliminate one or more of the polyhedral partition elements of P′ from consideration by one of the fathoming operations to be explained below. In these cases, P₀ consists of the polyhedral partition elements of P′ that remain after this elimination process. In either case, the union of the elements in P₀ equals the portion of H that remains to be searched by the branch and bound algorithm.

To explain how the algorithm forms subsequent partitions, we need the following definition, which is derived from [16].

Definition 4. Let T ⊆ ℝ² be a full-dimensional simplex with vertices v¹, v², v³ ∈ ℝ². Assume without loss of generality that the edge connecting v¹ and v² is a longest edge of T. Then a subdivision of T into two full-dimensional simplices T₁ and T₂ is called a simplicial bisection of T when w, v², and v³ are the vertices of T₁ and v¹, w, and v³ are the vertices of T₂, where w denotes the midpoint of the edge connecting v¹ and v².

From Horst and Tuy [16], the simplicial bisection {T₁, T₂} of T in Definition 4 is a partition of T in the sense of Definition 3.

At the beginning of a typical step k ≥ 1 of the algorithm, we have available a partition Pₖ₋₁ of a subset of H that cannot yet be excluded from the search for a globally optimal solution. The partition Pₖ₋₁ consists of polyhedral partition elements S of the form (21) where for each j ∈ {1, 2, …, n}, either Sⱼ = Sⱼ¹, Sⱼ = Sⱼ², or Sⱼ is a two-dimensional simplex formed by applying simplicial bisection one or more times to Sⱼ¹ or to Sⱼ². During step k, an element S of Pₖ₋₁ is chosen for branching. The chosen element S is subdivided into two polyhedral partition elements. To explain the subdivision process, let S be given by (21) where for each j ∈ {1, 2, …, n}, the vertex set of Sⱼ is given by {vⱼ¹, vⱼ², vⱼ³}. The first step of this process is to choose, for each j = 1, 2, …, n, a longest edge of Sⱼ. Let us assume without loss of generality that, for each j = 1, 2, …, n, a longest edge of Sⱼ is the edge connecting vⱼ¹ and vⱼ². Next, we find the maximum of ‖vⱼ¹ − vⱼ²‖, j = 1, 2, …, n, and an index j* ∈ {1, 2, …, n} that achieves this maximum. We then subdivide S into two new polyhedral partition elements S¹ and S² given by

\[
S^1 = S_1 \times S_2 \times \cdots \times S_{j^*-1} \times S_{j^*,1} \times S_{j^*+1} \times \cdots \times S_n
\]

and

\[
S^2 = S_1 \times S_2 \times \cdots \times S_{j^*-1} \times S_{j^*,2} \times S_{j^*+1} \times \cdots \times S_n,
\]


where {S_{j*,1}, S_{j*,2}} is a simplicial bisection of S_{j*}. It is not difficult to show that S¹ and S² are nonempty, compact, full-dimensional polyhedra in ℝ²ⁿ. The sets S¹ and S² are then examined by a fathoming procedure, to be explained below, which attempts to detect whether (YT ∩ Sʲ) is the empty set, j = 1, 2. The new partition Pₖ created in step k is given by

\[
P_k = (P_{k-1} \setminus \{S\}) \cup P_{k0}, \tag{22}
\]

where Pₖ₀ is the subset of {S¹, S²} remaining after deleting each element Sʲ of this set for which the fathoming procedure detected that (YT ∩ Sʲ) = ∅. Given that Pₖ₋₁ is a partition of a subset of H that still must be searched for a globally optimal solution, it can be shown that Pₖ given by (22) is also a partition of a subset of H that still must be searched.
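The simplicial bisection used in this subdivision step is elementary to implement. Below is a minimal sketch of Definition 4 for a two-dimensional simplex (the vertex data are illustrative):

```python
import numpy as np

def bisect_simplex(v1, v2, v3):
    """Simplicial bisection (Definition 4): split a 2-simplex at the
    midpoint of a longest edge; returns the two child simplices."""
    verts = [np.asarray(v, dtype=float) for v in (v1, v2, v3)]
    pairs = [(0, 1), (0, 2), (1, 2)]
    a, b = max(pairs, key=lambda p: np.linalg.norm(verts[p[0]] - verts[p[1]]))
    c = ({0, 1, 2} - {a, b}).pop()
    w = 0.5 * (verts[a] + verts[b])    # midpoint of the longest edge
    return (w, verts[b], verts[c]), (verts[a], w, verts[c])

# Bisect an initial simplex S_j^1 with vertices (LY_j, M), (LY_j, m), (UY_j, m).
T1, T2 = bisect_simplex((0.0, 10.0), (0.0, 1.0), (4.0, 1.0))
print(T1, T2)
```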

3.2. Bounding

Three types of bounds are computed by the algorithm. The first is a local upper bound UB(S) for the objective function f(y, t) of problem (P5) over (YT ∩ S), where S ⊆ ℝ²ⁿ is a given polyhedral partition element. To help compute UB(S), the algorithm relies upon the concept of a concave envelope, which may be defined as follows.

Definition 5 [16]. Let V ⊆ ℝᵖ be a compact convex set, and let q: V → ℝ be upper semicontinuous on V. Then q^V: V → ℝ is called the concave envelope of q on V when

(a) q^V(x) is a concave function on V,
(b) q^V(x) ≥ q(x) for all x ∈ V, and
(c) there is no function r(x) satisfying (a) and (b) such that r(x̄) < q^V(x̄) for some point x̄ ∈ V.

Notice that the concave envelope q^V is the "tightest" concave function that majorizes q on V.

Let S ⊆ ℝ²ⁿ be a given polyhedral partition element created by the algorithm. Then S is of the form (21), where for each j ∈ {1, 2, …, n}, either Sⱼ = Sⱼ¹, Sⱼ = Sⱼ², or Sⱼ is a two-dimensional simplex formed by applying simplicial bisection one or more times to Sⱼ¹ or to Sⱼ². The local upper bound UB(S) computed by the algorithm is given by the optimal objective function value of the problem (PS) defined by

\[
\max\ f^S(y, t), \quad \text{s.t. } (y, t) \in YT \cap S, \tag{PS}
\]

where f^S(y, t) is the concave envelope of f(y, t) over S. We assume that UB(S) = −∞ if (YT ∩ S) = ∅. Notice that finding UB(S) involves solving the convex programming problem (PS). Since f(y, t) is given by (20) and S is given by (21), by using the same logic as used in the proof of Theorem IV.8 in [16], it can be seen that for each (y, t) ∈ S,

\[
f^S(y, t) = \sum_{j=1}^n f_j^{S_j}(y_j, t_j), \tag{23}
\]

where for each j ∈ {1, 2, …, n}, f_j^{S_j} denotes the concave envelope of fⱼ over the two-dimensional simplex Sⱼ. In addition, for each j ∈ {1, 2, …, n}, since fⱼ is a convex function on Sⱼ, f_j^{S_j} is the unique affine function that agrees with fⱼ at the vertices of Sⱼ [16]. Therefore, the objective function of problem (PS) is an affine function whose formula can be explicitly determined. However, to increase the efficiency of the algorithm, we will suggest later a method for solving problem (PS) that does not require the explicit determination of the formula for f^S(y, t).
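Since f_j^{S_j} is the affine function interpolating the convex fⱼ at the three vertices of Sⱼ, its coefficients follow from a 3 × 3 linear solve. A small sketch (with illustrative simplex data and αⱼ = 1):

```python
import numpy as np

def affine_envelope_coeffs(f, verts):
    """Coefficients (a, b, c) of the affine function a*y + b*t + c that
    agrees with f at the three vertices of a 2-simplex; for convex f this
    is the concave envelope of f over the simplex."""
    V = np.array([[y, t, 1.0] for (y, t) in verts])
    vals = np.array([f(y, t) for (y, t) in verts])
    return np.linalg.solve(V, vals)

f = lambda y, t: y * y / t             # f_j with alpha_j = 1
a, b, c = affine_envelope_coeffs(f, [(0.0, 10.0), (0.0, 1.0), (4.0, 1.0)])

# The envelope majorizes f on the simplex, e.g. at the centroid:
yc, tc = 4.0 / 3.0, 4.0
print(a * yc + b * tc + c >= f(yc, tc))   # True
```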


The second type of bound computed by the algorithm is a global upper bound for the optimal value v of problem (P5). During each step k, k ≥ 0, an upper bound βₖ of this type is computed according to the formula

\[
\beta_k = \max\{UB(S) \mid S \in P_k\}.
\]

The third type of bound computed by the algorithm is a global lower bound for v. For each step k, k ≥ 0, the lower bound αₖ of this type is given by

\[
\alpha_k = f(y^k, t^k),
\]

where (yᵏ, tᵏ) is the incumbent feasible solution for problem (P5), i.e., among all feasible solutions for problem (P5) found through step k of the algorithm, (yᵏ, tᵏ) achieves the largest value of f.

3.3. Fathoming

There are two means by which a polyhedral partition element S created by the algorithm can be fathomed (eliminated from further consideration). A fathomed partition element need not be searched further for a globally optimal solution for problem (P5).

The first means by which a partition element S may be fathomed is by passing the deletion by infeasibility test. To explain this test, assume that S is given by (21) where for each j ∈ {1, 2, …, n}, either Sⱼ = Sⱼ¹, Sⱼ = Sⱼ², or Sⱼ is a two-dimensional simplex formed by applying simplicial bisection one or more times to Sⱼ¹ or to Sⱼ². For each j ∈ {1, 2, …, n}, let V(Sⱼ) denote the vertices of Sⱼ. The test consists of determining, for each pair of simplices Sᵤ, S_w ∈ {S₁, S₂, …, Sₙ}, u < w, if either of the inequalities

\[
\max\{t_u \mid (y_u, t_u) \in V(S_u)\} < \min\{t_w \mid (y_w, t_w) \in V(S_w)\},
\]
\[
\min\{t_u \mid (y_u, t_u) \in V(S_u)\} > \max\{t_w \mid (y_w, t_w) \in V(S_w)\}
\]

holds. If at least one of these inequalities holds for at least one pair Sᵤ, S_w of such simplices, then the partition element S passes the test and is eliminated from further consideration. This is because in this case, for all points (y, t) ∈ ℝ²ⁿ such that ((y₁, t₁), (y₂, t₂), …, (yₙ, tₙ)) ∈ S, tᵤ ≠ t_w, so that (y, t) ∉ YT. Hence, in this case, (YT ∩ S) = ∅, so that problem (PS) is not feasible.

The second means by which a partition element S may be fathomed is by satisfying the inequality

\[
UB(S) \le \alpha_k
\]

for some k ≥ 0. When this inequality holds, there can be no feasible solution for problem (P5) that lies in (YT ∩ S) and has a larger objective function value than αₖ.

3.4. Algorithm

Based upon the operations described in Sections 3.1–3.3, the steps of the branch and bound algorithm for globally solving problem (P5) may be stated as follows.

Step 0.
0.1. Set P₀ = ∅, α₀ = −∞.
0.2. (i) Let P′ denote the set of polyhedral partition elements S of the form (21), where for each j = 1, 2, …, n, either Sⱼ = Sⱼ¹ or Sⱼ = Sⱼ².
(ii) If P′ = ∅, go to step 0.3. Otherwise, remove any element S from P′.
(iii) Find the optimal value UB(S) of the convex program (PS). If UB(S) = −∞, go to step 0.2(ii). Otherwise, find an optimal solution (yˢ, tˢ) to problem (PS) and continue.
(iv) If UB(S) ≤ α₀, go to step 0.2(ii). Otherwise, set


α₀ = max{α₀, f(yˢ, tˢ)} and choose (y⁰, t⁰) so that α₀ = f(y⁰, t⁰). Set P₀ = P₀ ∪ {S} and go to step 0.2(ii).
0.3. If P₀ = ∅, stop: v = α₀ and (y⁰, t⁰) is a globally optimal solution for problem (P5). Otherwise, set

\[
\beta_0 = \max\{UB(S) \mid S \in P_0\}.
\]

If β₀ − α₀ = 0, stop: v = α₀ = β₀ and (y⁰, t⁰) is a globally optimal solution for problem (P5). Otherwise, go to step 1.

Step k (k = 1, 2, …).
k.1. Delete from Pₖ₋₁ all polyhedral partition elements S such that UB(S) ≤ αₖ₋₁.
k.2. Select any polyhedral partition element S ∈ Pₖ₋₁ such that UB(S) = βₖ₋₁. Subdivide S via the branching process into two polyhedral partition elements S¹ and S². Set Pₖ₀ = {S¹, S²}.
k.3. Delete from Pₖ₀ each polyhedral partition element that passes the deletion by infeasibility test.
k.4. For each polyhedral partition element S ∈ Pₖ₀, find the optimal value UB(S) of the convex program (PS) and, if one exists, find an optimal solution (yˢ, tˢ) to this problem. Let

\[
\alpha_k = \max\{\alpha_{k-1},\ \max\{f(y^s, t^s) \mid S \in P_{k0}\}\}.
\]

Let (yᵏ, tᵏ) satisfy αₖ = f(yᵏ, tᵏ).
k.5. Set

\[
P_k = (P_{k-1} \setminus \{S\}) \cup P_{k0}, \qquad \beta_k = \max\{UB(S) \mid S \in P_k\}.
\]

k.6. If βₖ − αₖ = 0, stop: v = αₖ = βₖ and (yᵏ, tᵏ) is a globally optimal solution for problem (P5). Otherwise, go to step k + 1.

3.5. Convergence

It is easy to see in the branch and bound algorithm that for each k ≥ 0, αₖ ≤ v. In addition, by using Definitions 3–5, it can be shown for each k ≥ 0 that v ≤ βₖ. Using these facts, it is easy to show that if the algorithm is finite, then it is valid. In particular, if the algorithm stops in step k, k ≥ 0, then (yᵏ, tᵏ) is, indeed, a globally optimal solution for problem (P5), as claimed by the algorithm. When the algorithm is infinite, convergence is guaranteed by the following result.

Theorem 5. Suppose that the branch and bound algorithm in Section 3.4 is infinite. Then it generates a sequence {(yᵏ, tᵏ)}_{k=0}^∞ of feasible solutions for the problem (P5), every accumulation point of which is a globally optimal solution for problem (P5). In addition, it generates a nondecreasing sequence {αₖ}_{k=0}^∞ of lower bounds for v and a nonincreasing sequence {βₖ}_{k=0}^∞ of upper bounds for v such that

\[
\lim_{k \to \infty} \alpha_k = \lim_{k \to \infty} \beta_k = v.
\]

Proof. This result can be shown by using certain arguments and concepts commonly used in global optimization (see, e.g., [16]). For brevity, the proof is omitted. □

3.6. Computational issues

The branch and bound algorithm globally solves problem (P2) by solving the equivalent convex maximization problem (P5). To formulate problem (P5), values for m and M must be computed, where from Section 2,

\[
m = \min\ g(x), \quad \text{s.t. } x \in X, \tag{24}
\]


and

\[
M \ge \max\ g(x), \quad \text{s.t. } x \in X. \tag{25}
\]

From (24), we see that calculating m involves solving a single convex programming problem. Many efficient algorithms are available for this [2]. From (25), to calculate M, the optimal value of a convex maximization problem must either be found exactly or overestimated. There are at least three ways to accomplish this.

One possibility is to calculate the exact maximum Mg of the convex function g(x) over the nonempty, compact convex set X and to set M equal to Mg. There are many convex maximization algorithms available for calculating Mg, but doing so is relatively expensive computationally [3,16].

Another way to find M is to enclose X in a simplex and to set M equal to the maximum of g(x) over the vertices of this simplex. In [4], for example, a method for accomplishing this is given. Finding M in this way is expected to be less expensive computationally than finding Mg and setting M equal to Mg. However, the algorithm is expected to converge more slowly if M is found in this way, since, by this procedure, M > Mg will generally hold.

A third possibility is to perform one or more iterations of an outer approximation algorithm for convex maximization on the problem of maximizing g(x) over X. Algorithms of this type are described, for example, in [3,16] and in references therein. Since these algorithms use relaxation, at the end of each iteration of one of these algorithms, an overestimate of Mg is given. By stopping such an algorithm prematurely, then, a value of M greater than Mg is obtained. It is expected that this approach will require more computational effort than the second approach, but that it will provide a smaller value of M than that approach.

In steps 0.2 and k.4, k ≥ 1, of the algorithm, the optimal value UB(S) and, if one exists, an optimal solution (yˢ, tˢ) of the convex program (PS) must be found, where S is a polyhedral partition element of the form (21) created by the algorithm. This can be conveniently accomplished without explicitly determining the formula for the concave envelope f^S(y, t) as follows. For each j = 1, 2, …, n, let {(yⱼ¹, tⱼ¹), (yⱼ², tⱼ²), (yⱼ³, tⱼ³)} denote the vertex set of simplex Sⱼ ⊆ ℝ². To solve problem (PS), solve instead the equivalent problem (PS′) given by

\[
\max\ \sum_{j=1}^n \bigl[c_j^1 f_j(y_j^1, t_j^1) + c_j^2 f_j(y_j^2, t_j^2) + c_j^3 f_j(y_j^3, t_j^3)\bigr], \tag{PS′}
\]
\[
\text{s.t. } y_j = c_j^1 y_j^1 + c_j^2 y_j^2 + c_j^3 y_j^3, \quad j = 1, 2, \ldots, n,
\]
\[
t_j = c_j^1 t_j^1 + c_j^2 t_j^2 + c_j^3 t_j^3, \quad j = 1, 2, \ldots, n,
\]
\[
c_j^1 + c_j^2 + c_j^3 = 1, \quad j = 1, 2, \ldots, n,
\]
\[
(y, t) \in YT, \qquad c_j^1, c_j^2, c_j^3 \ge 0, \quad j = 1, 2, \ldots, n.
\]

Then, from (23), [8] and [16], since any point in the simplex Sⱼ, j = 1, 2, …, n, has a unique representation as a convex combination of the extreme points of the simplex, the optimal value of problem (PS′) equals UB(S), and, when UB(S) is finite, for any optimal solution (yˢ, tˢ, cˢ) to problem (PS′), (yˢ, tˢ) is an optimal solution to problem (PS).

Notice that problem (PS′) is a convex program with the same number of nonlinear constraints as problem (PS). Thus problem (PS′) can be solved essentially as easily as problem (PS) by any number of convex programming methods [2]. Notice also that as S varies, the only changes that occur in problem (PS′) are to subsets of the objective function coefficients and of the coefficients of the constraints that define yⱼ and tⱼ, j = 1, 2, …, n. Thus, to speed the solution of these problems, the branch and bound algorithm can potentially use an optimal solution to one problem to good advantage as a starting solution to the next problem. The benefit of this idea will depend upon the particular convex programming algorithm and code used to solve these problems.

Another issue that needs to be addressed is how to modify the algorithm computationally for problems (P5) where H does not have full dimension. Since g(x) is nonconstant on X, this can occur if and only if LYⱼ = UYⱼ for some j ∈ {1, 2, …, n}. Suppose that w ∈ {1, 2, …, n} and LY_w = UY_w = K. Then, from (20), by definition of f_w(y_w, t_w), term w in the objective function of problem (P5) is given by

\[
f_w(y_w, t_w) = f_w(K, t_w) = \frac{\alpha_w K^2}{t_w}
\]

for all (y_w, t_w) ∈ H_w, where

\[
H_w = \{(y_w, t_w) \in \mathbb{R}^2 \mid y_w = K,\ m \le t_w \le M\}.
\]

As a result, the polyhedral partition elements created by the algorithm will have less than full dimension. Nevertheless, since f_w(y_w, t_w) is a convex function on the line segment H_w, only minor adaptations in the branching and bounding operations of the algorithm are required. In the algorithm, in each polyhedral partition element S of the form (21) that is created, S_w ⊆ H_w will be a line segment. The branching operation is unchanged, except that simplicial bisection of a simplex S_w ⊆ H_w is understood to mean that S_w is divided into two line segments at its midpoint. The bounding operation is also unchanged, except for the definition of problem (PS′) used in the suggested implementation of this operation. If we let {(K, t_w¹), (K, t_w²)} denote the vertex set of the line segment S_w, then problem (PS′) is as given earlier, except that all terms in the problem involving the variable c_w³ are omitted, the constraint defining y_w is omitted and, in what remains, y_w¹, y_w² and y_w are all set equal to K.

In practice, to guarantee finiteness, the branch and bound algorithm can be modified to stop when

\[
\beta_k - \alpha_k \le \varepsilon \tag{26}
\]

or when

\[
\beta_k - \alpha_k \le \rho \beta_k \tag{27}
\]

for some k ≥ 0, instead of when βₖ − αₖ = 0, where ε and ρ are prechosen constants such that ε > 0 and 0 < ρ < 1. When (26) is satisfied, it is easy to show that the incumbent feasible solution (yᵏ, tᵏ) is an approximate globally optimal solution in the sense that f(yᵏ, tᵏ) ≥ v − ε. When (27) is satisfied, (yᵏ, tᵏ) satisfies f(yᵏ, tᵏ) ≥ (1 − ρ)v, so that at termination, (yᵏ, tᵏ) is guaranteed to attain at least 100(1 − ρ)% of the optimal value v.

Computationally, as in all branch and bound algorithms, a key to the speed with which the algorithm will solve problems in practice is the rate at which the global upper bounds βₖ, k = 0, 1, 2, …, decrease in value. The algorithm attempts to quickly decrease these bounds by searching over the hyperrectangle H, rather than over relaxations of H, and by using concave envelopes in the subproblems (PS). On the other hand, to accomplish this, the partitions created by the algorithm may contain a relatively large number of polyhedral partition elements, especially as n increases. Thus, it is expected that the algorithm will be practical for values of n that are not too large.
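For concreteness, a minimal sketch of the stopping rules (26) and (27) discussed above (the tolerance values are illustrative):

```python
def should_stop(alpha_k: float, beta_k: float,
                eps: float = 1e-6, rho: float = 0.01) -> bool:
    """Absolute gap test (26) or relative gap test (27)."""
    return (beta_k - alpha_k <= eps) or (beta_k - alpha_k <= rho * beta_k)
```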

4. Sample problem

We have globally solved a number of sample problems (P2) by using the branch and bound algorithm of Section 3. In each case, we used LINGO [18] to minimize g(x) over X and to solve the convex subproblems


(PS′). Below we describe solution results for one of these sample problems. In this problem, the feasible region X of the problem (P2) was sufficiently small so that the maximum of g(x) over X was found by extreme point search. All numerical results are given to the nearest one-hundredth.

Example 4. In this example, n = 2. The matrix Q is given by

\[
Q = \begin{bmatrix} 1.00 & 0.50 \\ 0.50 & 0.33 \end{bmatrix},
\]

and, for each (x₁, x₂) ∈ ℝ², the denominator function is

\[
g(x_1, x_2) = x_1^2 - 2x_1 + x_2^2 - 8x_2 + 20.
\]

The feasible region X consists of all (x₁, x₂) ∈ ℝ² that satisfy the inequalities

\[
2x_1 + x_2 - 6 \le 0, \quad 3x_1 + x_2 - 8 \le 0, \quad -\sqrt{x_1 x_2} + \sqrt{2} \le 0, \quad 1 \le x_1 \le 2.33, \quad 1 \le x_2 \le 4.00. \tag{28}
\]

Notice that the third constraint function g₃(x₁, x₂) = −√(x₁x₂) + √2 in (28) is not convex on ℝ². However, g₃(x₁, x₂) is convex on an open set defined by the other inequalities in X, which is sufficient for the algorithm.

The algorithm globally solved the sample problem by globally solving the equivalent convex maximization problem (P5). In problem (P5) the hyperrectangle H was given by

\[
H = H_1 \times H_2 = \{(y_1, t_1) \in \mathbb{R}^2 \mid -3.05 \le y_1 \le 0.22,\ 3.00 \le t_1 \le 13.78\} \times \{(y_2, t_2) \in \mathbb{R}^2 \mid 1.82 \le y_2 \le 2.77,\ 3.00 \le t_2 \le 13.78\}.
\]

In step 0, four polyhedral partition elements were created to initiate the search over H for a globally optimal solution. For each partition element S, the convex program (PS′) was solved. For two of the partition elements, the upper bound value UB(S) that was found was 3.44. For the other two partition elements, the upper bound was 2.78. During the course of solving the convex program (PS′) corresponding to the first partition element, the incumbent solution (y⁰, t⁰) was found, where

\[
y^0 = (y_1^0, y_2^0) = (-3.05, 2.77), \tag{29}
\]
\[
t^0 = (t_1^0, t_2^0) = (3.00, 3.00), \tag{30}
\]

and f(y⁰, t⁰) = 3.44. As a result, in step 0.3, P₀ = ∅ was detected, and the algorithm stopped with the conclusion that (y⁰, t⁰) given by (29) and (30) is a globally optimal solution for problem (P5) with v = 3.44. With the aid of Theorem 3, we found that the corresponding globally optimal solution for problem (P2) is x* = (1.00, 4.00) with the objective function value q₂(x*) = 3.44.
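The reported optimum is easy to check directly (a NumPy sketch; the paper's data are rounded to two decimals, and 0.33 is taken as 1/3 here):

```python
import numpy as np

Q = np.array([[1.0, 0.5],
              [0.5, 1.0 / 3.0]])       # 0.33 read as 1/3
g = lambda x: x[0]**2 - 2*x[0] + x[1]**2 - 8*x[1] + 20

x_star = np.array([1.0, 4.0])
print(round((x_star @ Q @ x_star) / g(x_star), 2))   # 3.44
# x* also satisfies all constraints in (28): 2(1)+4-6 = 0, 3(1)+4-8 = -1,
# -sqrt(1*4)+sqrt(2) < 0, and the bounds 1 <= 1 <= 2.33, 1 <= 4 <= 4.
```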


5. Concluding remarks

We have studied two difficult global optimization problems, (P1) and (P2), each of which involves maximizing a ratio of two convex functions, where at least one of the two convex functions is a quadratic form. Our first purpose was to present and validate several theoretical properties of these problems. Our second purpose was to propose a branch and bound algorithm for globally solving problem (P2). To our knowledge, this is the first algorithm that has been proposed for solving this problem. The algorithm has the advantage that it involves solving a sequence of convex, rather than nonconvex, programming problems. Preliminary indications are that the algorithm can be expected to provide a practical approach for solving problem (P2), provided that the number of variables is not too large. We anticipate performing and reporting the results of additional computational experiments with this algorithm in the future.

References

[1] M. Avriel, W.E. Diewert, S. Schaible, I. Zang, Generalized Concavity, Plenum, New York, 1988.
[2] M.S. Bazaraa, H.D. Sherali, C.M. Shetty, Nonlinear Programming: Theory and Algorithms, Wiley, New York, 1993.
[3] H.P. Benson, Concave minimization: Theory, applications and algorithms, in: R. Horst, P.M. Pardalos (Eds.), Handbook of Global Optimization, Kluwer, Dordrecht, 1995, pp. 43–148.
[4] H.P. Benson, An outcome space branch and bound-outer approximation algorithm for convex multiplicative programming, Journal of Global Optimization 15 (1999) 315–342.
[5] B.D. Craven, Fractional Programming, Heldermann, Berlin, 1988.
[6] W. Dinkelbach, On nonlinear fractional programming, Management Science 13 (1967) 492–498.
[7] S.S. Erenguc, H.P. Benson, An algorithm for indefinite integer quadratic programming, Journal of Computers and Mathematics with Applications 21 (1991) 99–106.
[8] J.E. Falk, K.R. Hoffman, A successive underestimation method for concave minimization problems, Mathematics of Operations Research 1 (1976) 251–259.
[9] J.E. Falk, R.M. Soland, An algorithm for separable nonconvex programming problems, Management Science 15 (1969) 550–569.
[10] J.B.G. Frenk, S. Schaible, Fractional programming, in: C.A. Floudas, P.M. Pardalos (Eds.), Encyclopedia of Optimization, vol. II, Kluwer, Dordrecht, 2001, pp. 162–172.
[11] J.B.G. Frenk, S. Schaible, Fractional programming, in: N. Hadjisavvas, S. Komlosi, S. Schaible (Eds.), Handbook of Generalized Convexity and Generalized Monotonicity, Springer, Berlin, 2004, pp. 333–384.
[12] F.R. Gantmacher, The Theory of Matrices, Chelsea, New York, 1959.
[13] J.-Y. Gotoh, H. Konno, Maximization of the ratio of two convex quadratic functions over a polytope, Computational Optimization and Applications 20 (2001) 43–60.
[14] K. Hoffman, R. Kunze, Linear Algebra, Prentice Hall, Englewood Cliffs, NJ, 1961.
[15] R. Horst, P.M. Pardalos, N.V. Thoai, Introduction to Global Optimization, Kluwer, Dordrecht, 1995.
[16] R. Horst, H. Tuy, Global Optimization: Deterministic Approaches, Springer, Berlin, 1993.
[17] T. Ibaraki, Parametric approaches to fractional programs, Mathematical Programming 26 (1983) 345–362.
[18] LINGO/PC, Release 6.0, Lindo Systems, Inc., Chicago, 2000.
[19] A.W. Lo, A.C. MacKinlay, Maximizing predictability in the stock and bond markets, Macroeconomic Dynamics 1 (1997) 102–134.
[20] O.L. Mangasarian, Nonlinear Programming, McGraw-Hill, New York, 1969.
[21] B. Martos, Nonlinear Programming: Theory and Methods, North-Holland, Amsterdam, 1975.
[22] J.A. Ohlson, W.T. Ziemba, Optimal portfolio policies for an investor with a power utility function facing a log normal securities market, Journal of Financial Quantitative Analysis 11 (1976) 57–71.
[23] S. Schaible, Fractional programming, in: R. Horst, P.M. Pardalos (Eds.), Handbook of Global Optimization, Kluwer, Dordrecht, 1995, pp. 495–608.
[24] S. Schaible, Fractional programming: Applications and algorithms, European Journal of Operational Research 7 (1981) 111–120.
[25] S. Schaible, Fractional programming—some recent developments, Journal of Information and Optimization Sciences 10 (1989) 1–14.
[26] S. Schaible, Parameter-free convex equivalent and dual programs of fractional programs, Zeitschrift fuer Operations Research 18 (1974) 187–196.
[27] I.M. Stancu-Minasian, Fractional Programming: Theory, Methods and Applications, Kluwer, Dordrecht, 1997.
[28] H. Tuy, Convex Analysis and Global Optimization, Kluwer, Dordrecht, 1998.