Unboundedness of a convex quadratic function subject to concave and convex quadratic constraints


European Journal of Operational Research 63 (1992) 114-123 North-Holland

Theory and Methodology

R.J. Caron and W. Obuchowska

Department of Mathematics and Statistics, University of Windsor, Windsor, Ont., Canada N9B 3P4

Received September 1989; revised October 1990

Abstract: We present conditions for the existence of upper and lower bounds on convex quadratic objective functions subject to concave and convex quadratic constraints. We also present techniques for determining whether or not the conditions are satisfied.

Keywords: Quadratic objective function; convex quadratic constraints; concave quadratic constraints; unboundedness

1. Introduction

The problem of quadratically constrained quadratic programming (QCQP) is significant because of its many applications, for example, to location and production planning problems [10]. The QCQP problem also has applications in methods for the solution of nonlinear programming problems [12]. Solution algorithms and theoretical results for the QCQP problem can be found in [1,2,4,5,7]. In this paper we are concerned with two related problems. The first problem is to determine when feasible regions defined by both concave and convex quadratic constraints are unbounded. The second problem is to determine whether or not a convex objective function is unbounded from above and below over an unbounded quadratically constrained region. These problems were first considered in [9] under the restrictive assumption that the rank of the Hessian matrix of each of the quadratic functions is equal to its number of nonzero diagonal elements. In the special case that all Hessian matrices are zero the feasible region is linear, and necessary and sufficient conditions for it to be unbounded can be found, for example, in [8]. If all the quadratic constraints are convex then results given by Rockafellar [11] imply that the feasible region is unbounded if and only if it contains a half-line. In this paper, we denote a half-line by

$$x(p, s) = \{ x \mid x = p + ts,\ t \ge 0 \},$$

where $s$ is nonzero. In order to define the regions of interest, we define the index sets $I_{m_1} = \{1, \dots, m_1\}$, $I_m = \{1, \dots, m\}$, and $I_{m \setminus m_1} = \{m_1 + 1, \dots, m\}$, and let $B_i$, $a_i$, and $c_i$ be real symmetric positive semidefinite $(d, d)$-matrices, $d$-vectors, and real scalars, respectively, for each $i \in I_m$. Then, the nonconvex quadratically constrained region is given by

0377-2217/92/$05.00 © 1992 - Elsevier Science Publishers B.V. All rights reserved


$$\mathcal{R} = \mathcal{R}^v \cap \mathcal{R}^c,$$

where

$$\mathcal{R}^v = \left\{ x \in \mathbb{R}^d \mid Q_i(x) = a_i^T x + \tfrac{1}{2} x^T B_i x \le c_i,\ i \in I_{m_1} \right\}$$

is a convex feasible region and where

$$\mathcal{R}^c = \left\{ x \in \mathbb{R}^d \mid Q_i(x) = a_i^T x - \tfrac{1}{2} x^T B_i x \le c_i,\ i \in I_{m \setminus m_1} \right\}$$

is a nonconvex feasible region. We assume that $\mathcal{R}$ is nonempty, which implies that both $\mathcal{R}^v$ and $\mathcal{R}^c$ are nonempty. We note that since the zero matrix is positive semidefinite, some of the constraints may be linear. Now let $B$ be a real symmetric positive semidefinite $(d, d)$-matrix and let $a$ be a $d$-vector. The convex quadratic objective function is given by

$$Q(x) = a^T x + \tfrac{1}{2} x^T B x.$$

In this paper we first establish necessary and sufficient conditions for $\mathcal{R}^v$ and $\mathcal{R}^c$ to contain half-lines, that is, to be unbounded. We also prove that $\mathcal{R}^c$ is unbounded if and only if it contains a half-line. Afterwards, we give necessary and sufficient conditions for such a half-line to exist in $\mathcal{R}$, and we show that the existence of a half-line is not a necessary condition for $\mathcal{R}$ to be unbounded. Finally, we give sufficient conditions for the existence of upper and lower bounds on $Q(x)$ over the regions $\mathcal{R}^v$, $\mathcal{R}^c$, and $\mathcal{R}$, and show that the conditions are necessary for the region $\mathcal{R}^c$.

In order to check the sufficient conditions for regions defined by concave constraints, we must be able to determine whether or not there exists a vector $s$ satisfying

$$s \ne 0, \qquad B_i s \ne 0 \ \vee\ a_i^T s \le 0 \quad \forall i \in I_{m \setminus m_1}. \tag{1}$$

This can be done using Algorithm A given in Section 2 of this paper. In order to implement the algorithm, it is only necessary to solve LPs where all the constraints are inequalities with zero right-hand sides. In Section 3 we discuss the unboundedness of $\mathcal{R}$, in Section 4 we discuss the unboundedness of $Q(x)$ over $\mathcal{R}$, and in Section 5 we give the concluding remarks.

2. System (1)

The following algorithm will determine whether or not system (1) has a solution. This algorithm will be used in subsequent sections to check the conditions for unboundedness. In this paper the set $Z$ (possibly with a subscript) is a subset of $I_{m \setminus m_1}$ and the set $Z^c$ (possibly with a subscript) is the complement of $Z$ in $I_{m \setminus m_1}$. Further, we always choose $Z$ so that if $B_i$ is a zero matrix, then $i \in Z$. The set $R_Z$ is given by

$$R_Z = \{ s \mid a_j^T s \le 0 \ \forall j \in Z \}.$$

In order to better understand the algorithm it is, perhaps, useful to first consider the following lemmas. The lemmas will later be used to prove that the algorithm terminates after a finite number of steps (cf. Theorem 1). First, the cone spanned by a set of vectors $s_i$, $i \in Z^c$, is given by

$$K_{Z^c} = \Big\{ x \in \mathbb{R}^d \mid x = \sum_{i \in Z^c} \alpha_i s_i, \ \alpha_i \ge 0 \ \forall i \in Z^c \Big\}.$$

Lemma 1. There exists a vector $s \in K_{Z^c}$ such that $s^T B_i s > 0$ $\forall i \in Z^c$, where for each $i \in Z^c$, $s_i^T B_i s_i > 0$.

Proof. Without loss of generality, we can assume that $Z^c = I_{k \setminus m_1}$ where $I_{k \setminus m_1} = \{m_1 + 1, \dots, k\}$. The proof is by induction. For $k = m_1 + 1$, we set $s = s_{m_1+1}$. For $k > m_1 + 1$, we suppose that a vector $\hat{s} \in K_{Z^c}$ satisfies $\hat{s}^T B_i \hat{s} > 0$ $\forall i \in I_{k-1 \setminus m_1}$. Now consider the point $\hat{s} + t s_k$. We have that

$$(\hat{s} + t s_k)^T B_i (\hat{s} + t s_k) = \hat{s}^T B_i \hat{s} + 2 t \hat{s}^T B_i s_k + t^2 s_k^T B_i s_k. \tag{2}$$

For each $i \in I_{k-1 \setminus m_1}$ we have that either $s_k^T B_i s_k = 0$ or $s_k^T B_i s_k > 0$, and for $i = k$ we have that $s_k^T B_k s_k > 0$. If $s_k^T B_i s_k = 0$, then $B_i s_k = 0$ since $B_i$ is positive semidefinite. Therefore, if $s_k^T B_i s_k = 0$, we have

$$(\hat{s} + t s_k)^T B_i (\hat{s} + t s_k) = \hat{s}^T B_i \hat{s} > 0 \quad \forall t \ge 0.$$

If $s_k^T B_i s_k > 0$, then, since the quadratic term in (2) will dominate, there exists a $\bar{t} > 0$ such that $(\hat{s} + t s_k)^T B_i (\hat{s} + t s_k) > 0$ $\forall i \in I_{k \setminus m_1}$, $\forall t \ge \bar{t}$. We set $s = \hat{s} + \bar{t} s_k$ and note that $s \in K_{Z^c}$. □

Lemma 2. Let $Z$ be arbitrary but fixed. For each $i \in Z^c$, there exists a vector $s_i \in R_Z$ such that $s_i^T B_i s_i > 0$ and $s_i^T B_j s_i = 0$ $\forall j \in Z$, if and only if there exists a vector $s \in R_Z$ such that $s^T B_i s > 0$ $\forall i \in Z^c$, and $s^T B_j s = 0$ $\forall j \in Z$.

Proof. Suppose that for each $i \in Z^c$ there exists a vector $p_i \in R_Z$ such that $p_i^T B_i p_i > 0$. From Lemma 1 it follows that there exists a vector $s \in K_{Z^c}$ such that $s^T B_i s > 0$ $\forall i \in Z^c$. Since $s \in K_{Z^c}$ and $p_i^T B_j p_i = 0$ $\forall i \in Z^c$, $\forall j \in Z$, we have $s^T B_j s = 0$ $\forall j \in Z$. Also, since $s \in K_{Z^c}$ and $p_i \in R_Z$, for each $j \in Z$ we have that

$$a_j^T s = \sum_{i \in Z^c} \alpha_i a_j^T p_i \le 0,$$

which implies that $s \in R_Z$. Now suppose that there exists an $s \in R_Z$ such that $s^T B_i s > 0$ $\forall i \in Z^c$, and $s^T B_j s = 0$ $\forall j \in Z$. The result follows by setting $p_i = s$ $\forall i \in Z^c$. □
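Lemma 1's proof is constructive: start from one generator and repeatedly add a sufficiently large multiple of the next generator, so that all quadratic forms considered so far stay positive. The following Python sketch is our illustration of that induction (the function name and inputs are assumptions, and a simple doubling search stands in for the threshold $\bar{t}$ of the proof):

```python
import numpy as np

def cone_direction(Bs, gens):
    """Constructive sketch of Lemma 1: given PSD matrices B_i and cone
    generators s_i with s_i^T B_i s_i > 0, build a vector s in the cone
    with s^T B_i s > 0 for every i, following the proof's induction."""
    s = np.array(gens[0], dtype=float)
    for k in range(1, len(gens)):
        sk = np.asarray(gens[k], dtype=float)
        t = 1.0
        # Grow t until every quadratic form up to index k is positive;
        # this terminates because either s_k^T B_i s_k > 0 (quadratic term
        # dominates) or B_i s_k = 0 (the form is constant and positive).
        while not all((s + t * sk) @ Bs[i] @ (s + t * sk) > 0
                      for i in range(k + 1)):
            t *= 2.0
        s = s + t * sk
    return s
```

The returned vector is a nonnegative combination of the generators, so it lies in $K_{Z^c}$ by construction.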

Algorithm A.
Purpose: To determine whether or not there exists a vector $s$ satisfying conditions (1).
Step 1. Set $Z_0 = \{ i \in I_{m \setminus m_1} \mid B_i = 0 \}$. If $Z_0 = \emptyset$, then stop since there is a solution to (1). Otherwise, set $n = 0$ and go to Step 2.
Step 2. If there is a nonzero vector $s \in R_{Z_n}$, then go to Step 3. Otherwise, stop since there is no solution to (1).
Step 3. Determine the set

$$\hat{Z}_n^c = \{ i \mid i \in Z_n^c \text{ and } B_i s = 0 \ \forall s \in R_{Z_n} \}.$$

If $\hat{Z}_n^c = \emptyset$, then stop since there is a solution to (1). Otherwise, set $Z_{n+1} = Z_n \cup \hat{Z}_n^c$, replace $n$ with $n + 1$, and go to Step 2.

Theorem 1. Algorithm A terminates after a finite number of iterations indicating whether or not system (1) has a solution.

Proof. In the worst case, the algorithm will require $m - m_1 - 1$ iterations to terminate in Step 3 with $\hat{Z}_n^c$ being the empty set. If termination is in Step 1, then for each $i \in I_{m \setminus m_1}$ there exists a vector $s_i$ such that $B_i s_i \ne 0$. Since each $B_i$ is positive semidefinite, we note that $B_i s \ne 0$ is equivalent to $s^T B_i s > 0$. It then follows from Lemma 1 that there exists a vector $s$ such that $B_i s \ne 0$ for all $i \in I_{m \setminus m_1}$, so that there is a solution to (1).

Now suppose that termination is in Step 2. We will show that there is no solution to (1). That is, we will show that for all $s \ne 0$ there is an $i \in I_{m \setminus m_1}$ such that $B_i s = 0$ and $a_i^T s > 0$. The proof is by contradiction. Let $\hat{s}$ be any nonzero vector and suppose that there is no index $i \in I_{m \setminus m_1}$ such that $B_i \hat{s} = 0$ and $a_i^T \hat{s} > 0$. Since $B_i \hat{s} = 0$ for all $i \in Z_0$ we must have that $a_i^T \hat{s} \le 0$ for all $i \in Z_0$, which implies that $\hat{s} \in R_{Z_0}$. Thus, $B_i \hat{s} = 0$ for all $i \in Z_1$, and it follows that $\hat{s} \in R_{Z_1}$. Using induction, we get that $B_i \hat{s} = 0$ for all $i \in Z_n$, which implies that $\hat{s} \in R_{Z_n}$. This contradicts the fact that $R_{Z_n}$ contains no nonzero vector. Therefore, (1) has no solution.

If termination is in Step 3, then $\hat{Z}_n^c = \emptyset$. Therefore, for each $i \in Z_n^c$ there exists a vector $s_i \in R_{Z_n}$ such that $B_i s_i \ne 0$. From Lemma 2 it follows that there exists an $s \in K_{Z_n^c}$ such that $B_i s \ne 0$ $\forall i \in Z_n^c$. Since each $s_i \in R_{Z_n}$ and $s \in K_{Z_n^c}$, we have $s \in R_{Z_n}$, that is, $a_i^T s \le 0$ $\forall i \in Z_n$. Finally, since $Z_n \cup Z_n^c = I_{m \setminus m_1}$, $s$ is a solution to (1). □

Some comments concerning Algorithm A are in order. In Step 2 of the algorithm it is necessary to determine whether or not there exists a nonzero vector $s \in R_{Z_n}$. In Step 3, it is necessary to determine whether or not the set $\hat{Z}_n^c$ is empty, which means that for each $i \in Z_n^c$ we must determine whether or not there is a vector $s \in R_{Z_n}$ such that $B_i s \ne 0$. These steps can be carried out using Lemma 3 given below. As the result is straightforward, no proof is given.

Lemma 3. Let $P = [p_1, \dots, p_d]$ be a positive semidefinite matrix, and let $R = \{ s \mid a_i^T s \le 0 \ \forall i \in I \}$ where $I$ is some index set. For $k = 1, \dots, d$, define

$$z_k^+ = \min\{ +p_k^T s \mid s \in R \} \quad \text{and} \quad z_k^- = \min\{ -p_k^T s \mid s \in R \}.$$

Then $Ps = 0$ for all $s \in R$ if and only if $z_k^+ = z_k^- = 0$ for each $k = 1, \dots, d$.

In order to implement Lemma 3, and, in fact, in order to implement all results in this paper, it is only necessary to solve LPs with the special form given in Lemma 3. That is, LPs where all the constraints are inequalities with zero right-hand sides. These LPs have a single extreme point at the origin. Thus, either the origin is a solution, or the LP is unbounded. It is interesting to note that solving such an LP is equivalent to determining whether or not there is a solution to System 1 of Farkas' theorem of the alternative [6].
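The LP-based checks in Steps 2 and 3 can be sketched in code. The following Python fragment is our illustration, not the authors' implementation: it applies the Lemma 3 test (a pair of LPs per column, solved here with `scipy.optimize.linprog`) both to detect whether $R_{Z_n}$ contains a nonzero vector (take $P = I$) and to build $\hat{Z}_n^c$; the function and variable names are ours, and the test data below is read off from Example 3 in Section 3.2.

```python
import numpy as np
from scipy.optimize import linprog

def vanishes_on_cone(P, A):
    """Lemma 3 test: is P s = 0 for every s in R = {s | A s <= 0}?
    Each LP min{ +/- p_k^T s | A s <= 0 } is either solved by s = 0
    or unbounded; unboundedness exhibits an s in R with P s != 0."""
    for col in P.T:
        for c in (col, -col):
            res = linprog(c, A_ub=A, b_ub=np.zeros(A.shape[0]),
                          bounds=(None, None))
            if res.status == 3:          # LP unbounded
                return False
    return True

def has_solution_system1(Bs, As):
    """Algorithm A sketch: decide whether some s != 0 satisfies
    B_i s != 0 or a_i^T s <= 0 for every i (system (1)).
    Bs: list of symmetric PSD (d, d) arrays; As: list of d-vectors."""
    m, d = len(Bs), len(As[0])
    Z = {i for i in range(m) if not Bs[i].any()}    # Step 1: B_i = 0
    if not Z:
        return True
    while True:
        A = np.array([As[i] for i in sorted(Z)])
        # Step 2: R_Z = {0} exactly when the identity "vanishes" on R_Z.
        if vanishes_on_cone(np.eye(d), A):
            return False
        # Step 3: indices outside Z whose B_i s = 0 for all s in R_Z.
        Zhat = {i for i in range(m)
                if i not in Z and vanishes_on_cone(Bs[i], A)}
        if not Zhat:
            return True
        Z |= Zhat                                    # Z_{n+1} = Z_n U Zhat
```

Every LP solved here has only inequality constraints with zero right-hand sides, exactly the special structure noted above.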

3. Unboundedness of $\mathcal{R}$

In Sections 3.1-3.3 we consider the regions $\mathcal{R}^v$, $\mathcal{R}^c$, and $\mathcal{R}$, respectively.

3.1. Unboundedness of $\mathcal{R}^v$

Theorem 2. The region $\mathcal{R}^v$ is unbounded if and only if it contains a half-line with direction vector $s$ satisfying

$$B_i s = 0 \quad \forall i \in I_{m_1}, \tag{3}$$

$$a_i^T s \le 0 \quad \forall i \in I_{m_1}. \tag{4}$$

Proof. It follows from Theorem 8.4 in [11] that $\mathcal{R}^v$ is unbounded if and only if it contains a half-line. We will now show that $\mathcal{R}^v$ contains a half-line if and only if there exists a vector $s$ satisfying (3) and (4). The backward implication is proved first. Let $p$ be any point in $\mathcal{R}^v$. For each $i \in I_{m_1}$ we have that $B_i s = 0$, so that

$$Q_i(p + ts) = a_i^T p + t a_i^T s + \tfrac{1}{2} p^T B_i p.$$

Condition (4) and $p \in \mathcal{R}^v$ then imply that $Q_i(p + ts) \le c_i$, $t \ge 0$. Thus, $x(p, s) \subset \mathcal{R}^v$. The forward implication is now proved. Let $p \in \mathcal{R}^v$ and let $s$ be any $d$-vector. Suppose that for each $i \in I_{m_1}$ the inequality $Q_i(p + ts) - c_i \le 0$ is satisfied for all $t \ge 0$. This implies that

$$\left[ a_i^T p + \tfrac{1}{2} p^T B_i p - c_i \right] + \left[ a_i^T s + p^T B_i s \right] t + \left[ \tfrac{1}{2} s^T B_i s \right] t^2 \le 0$$

is satisfied for all $t \ge 0$. If, for some $k$, $B_k s$ is nonzero, then, since $p \in \mathcal{R}^v$, $Q_k(p + ts) - c_k$ represents a convex parabola that is positive for $t > N_k$ where $N_k$ is a finite positive root of $Q_k(p + ts) - c_k = 0$. This contradiction proves that $B_i s = 0$ $\forall i \in I_{m_1}$. Therefore, the inequality reduces to

$$\left[ a_i^T p + \tfrac{1}{2} p^T B_i p - c_i \right] + \left[ a_i^T s \right] t \le 0.$$

Clearly, this is satisfied for all $i \in I_{m_1}$ and for all $t \ge 0$ if and only if $a_i^T s \le 0$ $\forall i \in I_{m_1}$, which is condition (4). □

Example 1. Consider the region $\mathcal{R}^v$ in Figure 1, which is represented by the following constraint set:

$$x^2 - y \le 1, \qquad x^2 - 2x - y \le 0.$$

The region is unbounded and the vector $s = (0, 1)^T$ satisfies conditions (3) and (4).

Figure 1. The feasible region for Example 1 is unbounded.

Example 2. Consider the region $\mathcal{R}^v$ in Figure 2, which is represented by the following constraint set:

$$x^2 + y^2 \le 1, \qquad x^2 - 2x - y \le 0.$$

The region is bounded and since the Hessian matrix for the first constraint is positive definite, there is no nonzero vector $s$ with $B_1 s = 0$. Thus, condition (3) cannot be satisfied.

Figure 2. The feasible region for Example 2 is bounded.
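As a quick numerical illustration (ours, not from the paper), the data of Example 1 can be checked against conditions (3)-(4) directly. Writing each constraint in the standard form $Q_i(x) = a_i^T x + \tfrac{1}{2} x^T B_i x \le c_i$ gives $B_1 = B_2 = \mathrm{diag}(2, 0)$, $a_1 = (0, -1)^T$, $a_2 = (-2, -1)^T$, $c_1 = 1$, $c_2 = 0$:

```python
import numpy as np

# Example 1 transcribed into the paper's standard form (our transcription):
# x^2 - y <= 1 and x^2 - 2x - y <= 0.
B = [np.diag([2.0, 0.0]), np.diag([2.0, 0.0])]
a = [np.array([0.0, -1.0]), np.array([-2.0, -1.0])]
c = [1.0, 0.0]

s = np.array([0.0, 1.0])        # candidate direction from Example 1

# Theorem 2 conditions (3)-(4): B_i s = 0 and a_i^T s <= 0 for all i.
ok = all(np.allclose(Bi @ s, 0) for Bi in B) and all(ai @ s <= 0 for ai in a)

# Sanity check: the half-line from the feasible point p = (0, 0)
# stays feasible as t grows.
feasible = all(ai @ (t * s) + 0.5 * (t * s) @ Bi @ (t * s) <= ci
               for t in (0.0, 10.0, 1e6)
               for ai, Bi, ci in zip(a, B, c))
```

Both checks succeed for $s = (0, 1)^T$, in agreement with the example.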

3.2. Unboundedness of $\mathcal{R}^c$

Lemma 4. A half-line with direction vector $s$ is contained in the region $\mathcal{R}^c$ if and only if $s \in R_{Z(s)}$, where $Z(s) = \{ j \in I_{m \setminus m_1} \mid B_j s = 0 \}$.

Proof. Suppose that $x(p, s)$ is a half-line in $\mathcal{R}^c$ and that $s \notin R_{Z(s)}$. Therefore, there exists an index $j \in Z(s)$ such that $a_j^T s > 0$. We have that

$$Q_j(p + ts) - c_j = Q_j(p) - c_j + t a_j^T s,$$

which is positive for $t$ sufficiently large. This contradicts $x(p, s) \subset \mathcal{R}^c$. Thus, $s \in R_{Z(s)}$. Let $p \in \mathcal{R}^c$ and let $s \ne 0$ belong to $R_{Z(s)}$. For $j \in Z(s)$, we have that

$$Q_j(p + ts) - c_j = Q_j(p) - c_j + t a_j^T s.$$

Since $a_j^T s \le 0$, it follows that $Q_j(p + ts) - c_j \le 0$ for all $t \ge 0$. For $i \in Z^c(s)$, we have that

$$Q_i(p + ts) - c_i = Q_i(p) - c_i + t \nabla Q_i(p)^T s - \tfrac{1}{2} t^2 s^T B_i s.$$

Since $s^T B_i s > 0$, we have that $Q_i(p + ts) - c_i \le 0$ for $t$ sufficiently large. Thus, there exists a scalar $T > 0$ such that $p + ts \in \mathcal{R}^c$ $\forall t > T$, and the half-line $x(\hat{p}, s) \subset \mathcal{R}^c$ where $\hat{p} = p + Ts$. □

In Corollary 1 we show that it is only possible for $\mathcal{R}^c$ to be bounded if there are linear constraints. In Corollary 2 we show that the linear constraints must provide the boundary of $\mathcal{R}^c$ in every direction $s$ that is in the intersection of the range spaces of the nonzero constraint Hessian matrices.

Corollary 1. If $B_i$, $i \in I_{m \setminus m_1}$, are all nonzero, then $\mathcal{R}^c$ is unbounded.

Proof. Clearly, for each $B_i$ there exists a $p_i$ such that $p_i^T B_i p_i > 0$. Set $Z = \emptyset$ so that $R_{Z(s)} = \mathbb{R}^d$ in Lemma 4. Then $\mathcal{R}^c$ contains a half-line and is therefore unbounded. □

Corollary 2. Let $R(B_i)$ denote the range space of the matrix $B_i$. If $\mathcal{R}^c$ is bounded, then for each nonzero

$$s \in \bigcap_{\substack{i \in I_{m \setminus m_1} \\ B_i \ne 0}} R(B_i) \tag{5}$$

there exists an $i \in I_{m \setminus m_1}$ with $B_i = 0$ such that $a_i^T s > 0$.

Proof. We prove the contrapositive. Suppose that $s$ satisfies (5) and that $a_i^T s \le 0$ for every $i \in I_{m \setminus m_1}$ with $B_i = 0$. Let $p \in \mathcal{R}^c$ and consider the half-line $x(p, s)$. If $i \in I_{m \setminus m_1}$ with $B_i = 0$, then $a_i^T s \le 0$ so that $Q_i(p + ts) = a_i^T (p + ts) \le c_i$ for all $t \ge 0$. If $i \in I_{m \setminus m_1}$ with $B_i \ne 0$, then $s^T B_i s > 0$ so that $Q_i(p + ts) \le c_i$ for all $t$ sufficiently large. Thus, there exists a scalar $T > 0$ such that $p + ts \in \mathcal{R}^c$ $\forall t > T$, and the half-line $x(\hat{p}, s) \subset \mathcal{R}^c$ where $\hat{p} = p + Ts$. □

The above results suggest the following theorem.

Theorem 3. The region $\mathcal{R}^c$ is unbounded if and only if it contains a half-line with a direction vector $s$ satisfying system (1).

The following notation will be used in the proof of Theorem 3. We define the sets $\hat{R}_{Z_K}$, $\hat{\mathcal{R}}^c_{Z_K}$, $K = 0, \dots, n$, recursively, using Algorithm B given below.

Algorithm B.
Purpose: To define notation for the proof of Theorem 3.
Step 0. Define $M_j = 0$ $\forall j \in Z_0$ and set $K = 0$.
Step 1. Set

$$\hat{R}_{Z_K} = \{ x \in \mathbb{R}^d \mid a_j^T x \le c_j + M_j \ \forall j \in Z_K \},$$

$$\hat{\mathcal{R}}^c_{Z_K} = \{ x \in \mathbb{R}^d \mid Q_j(x) \le c_j \ \forall j \in Z_K \}.$$

If $K = n$, then stop. Otherwise, let $p_t^K$, $t = 1, \dots, T_K$, denote the extreme points of $\hat{R}_{Z_K}$, and set

$$M_j = \tfrac{1}{2} \sum_{s=1}^{T_K} \sum_{t=1}^{T_K} \left| (p_s^K)^T B_j p_t^K \right| \quad \forall j \in \hat{Z}_K^c.$$

Replace $K$ with $K + 1$ and go to Step 1.

We note that since the sets $R_{Z_K}$, $K = 1, \dots, n-1$, are unbounded, then so too are the sets $\hat{R}_{Z_K}$, $K = 1, \dots, n-1$. Also, since $R_{Z_n}$ is bounded, so too is $\hat{R}_{Z_n}$. It is clear from the above algorithm that

$$\hat{R}_{Z_0} = \hat{\mathcal{R}}^c_{Z_0}, \tag{6}$$

and that

$$\mathcal{R}^c \subseteq \hat{\mathcal{R}}^c_{Z_n} \subseteq \hat{\mathcal{R}}^c_{Z_{n-1}} \subseteq \cdots \subseteq \hat{\mathcal{R}}^c_{Z_0}. \tag{7}$$

Proof of Theorem 3. Clearly, if $\mathcal{R}^c$ contains a half-line $x(p, s)$, then $\mathcal{R}^c$ is unbounded. We first note that if $s$ is a solution to (1), then Lemma 4 implies that $x(p, s)$ is a half-line in $\mathcal{R}^c$. We need only show that if $\mathcal{R}^c$ is unbounded, then there exists a vector $s$ satisfying (1). We will prove the contrapositive. Suppose that there is no solution to (1). It follows from Theorem 1 that Algorithm A would terminate in Step 2. Since termination is in Step 2, there is no nonzero vector $s \in R_{Z_n}$. It then follows from Theorem 2 that $\hat{R}_{Z_n}$ is bounded. Also, since we did not terminate earlier, it follows that $\hat{R}_{Z_K}$ is unbounded for $K = 0, \dots, n-1$. If it could be shown that $\hat{\mathcal{R}}^c_{Z_n} \subseteq \hat{R}_{Z_n}$, then it would follow from (7) that $\mathcal{R}^c \subseteq \hat{R}_{Z_n}$. Since $\hat{R}_{Z_n}$ is bounded, this would then imply that $\mathcal{R}^c$ is bounded.

We will prove, by induction, that $\hat{\mathcal{R}}^c_{Z_n} \subseteq \hat{R}_{Z_n}$. It follows from (6) that $\hat{\mathcal{R}}^c_{Z_0} \subseteq \hat{R}_{Z_0}$. Now suppose that $\hat{\mathcal{R}}^c_{Z_K} \subseteq \hat{R}_{Z_K}$ for $K = 0, \dots, n-1$. Let $x \in \hat{\mathcal{R}}^c_{Z_n}$. It then follows from (7) that $x \in \hat{\mathcal{R}}^c_{Z_{n-1}}$. From the induction hypothesis, we then have that $x \in \hat{R}_{Z_{n-1}}$. Since $\hat{R}_{Z_{n-1}}$ is unbounded, we can write $x = p + s$ where

$$p = \sum_{t=1}^{T_{n-1}} \lambda_t p_t^{n-1}, \qquad \sum_{t=1}^{T_{n-1}} \lambda_t = 1, \quad 0 \le \lambda_t \le 1, \ t = 1, \dots, T_{n-1}, \qquad s \in R_{Z_{n-1}}.$$

It then follows from the definition of $\hat{Z}_{n-1}^c$ that $B_j s = 0$ $\forall j \in \hat{Z}_{n-1}^c$. Therefore, we have that, for each $j \in \hat{Z}_{n-1}^c$,

$$\tfrac{1}{2} x^T B_j x = \tfrac{1}{2} \sum_{s=1}^{T_{n-1}} \sum_{t=1}^{T_{n-1}} \lambda_s \lambda_t (p_s^{n-1})^T B_j p_t^{n-1} \le \tfrac{1}{2} \sum_{s=1}^{T_{n-1}} \sum_{t=1}^{T_{n-1}} \left| (p_s^{n-1})^T B_j p_t^{n-1} \right| = M_j.$$

Therefore, if $Q_j(x) \le c_j$, $j \in \hat{Z}_{n-1}^c$, then $a_j^T x \le c_j + M_j$, so that $x \in \hat{R}_{Z_n}$. □

Figure 3. (a) The feasible region for Example 3 is bounded; (b) the region $\hat{R}_{Z_0}$ for Example 3 is unbounded; (c) the region $\hat{R}_{Z_1}$ for Example 3 is bounded and it contains $\mathcal{R}^c$.

Example 3. Consider the feasible region in Figure 3a, which is represented by the following constraint set:

$$x \le 2, \qquad -x \le 0, \qquad -y \le 0, \qquad y - x^2 \le 1.$$

We use Algorithm A. In Step 1 we set $Z_0 = \{1, 2, 3\}$. In Step 2 we note that the set $R_{Z_0} = \{ (x, y) \mid x \le 0, \ -x \le 0, \ -y \le 0 \}$ contains, for example, the nonzero vector $s = (0, 1)^T$. Since $B_4 s = 0$ for all $s \in R_{Z_0}$, we have $\hat{Z}_0^c = \{4\}$ and $Z_1 = \{1, 2, 3, 4\}$. In Step 2 we have that $R_{Z_1} = \{ (x, y) \mid x \le 0, \ -x \le 0, \ -y \le 0, \ y \le 0 \}$, which

contains only the zero vector. Thus, we terminate in Step 2 with the conclusion that there is no solution to (1) and that $\mathcal{R}^c$ is bounded. We now relate this example to the proof of Theorem 3. Figure 3b shows that $\hat{R}_{Z_0}$ is unbounded. To determine $M_4$, we note that the extreme points of $\hat{R}_{Z_0}$ are $(0, 0)^T$ and $(2, 0)^T$. Thus, $M_4 = 4$. The region $\hat{R}_{Z_1}$, which is shown in Figure 3c, is defined by the following constraint set:

$$x \le 2, \qquad -x \le 0, \qquad -y \le 0, \qquad y \le 5.$$

Figure 3c also shows that $\hat{R}_{Z_1}$ is bounded and that $\mathcal{R}^c \subseteq \hat{R}_{Z_1}$.

3.3. The unboundedness of $\mathcal{R}$

A simple combination of the results in the two previous subsections yields the following theorem (no proofs are necessary).

Theorem 4. The region $\mathcal{R}$ is unbounded if it contains a half-line, and it contains a half-line if and only if there is a direction vector $s$ satisfying

$$s \ne 0,$$
$$B_i s = 0 \quad \forall i \in I_{m_1},$$
$$a_i^T s \le 0 \quad \forall i \in I_{m_1},$$
$$B_i s \ne 0 \ \vee\ a_i^T s \le 0 \quad \forall i \in I_{m \setminus m_1}.$$

Example 4. Consider the feasible region in Figure 4, which is represented by the following constraint set:

$$-x^2 - y \le 1, \qquad -x^2 + y \le 1.$$

Figure 4. The feasible region for Example 4 is unbounded.

We note that the region $\mathcal{R}$ can be unbounded without containing a half-line. Consider the simple two-dimensional example where $\mathcal{R}$ is defined by the two constraints $y - x^2 \le 0$ and $y - x^2 \ge 0$. The region is unbounded along a quadratic curve and does not contain a half-line.

4. The unboundedness of $Q(x)$ over $\mathcal{R}$

It is a simple matter to show that the convex function $Q(x)$ is unbounded from below if and only if there is a vector $s$ satisfying

$$Bs = 0, \qquad a^T s < 0. \tag{8}$$

Also, it is clear that $Q(x)$ is always unbounded from above. However, the vector $s$ is a direction of unboundedness if and only if

$$Bs \ne 0 \ \vee\ a^T s > 0. \tag{9}$$

Theorems 5 and 6 below, which give necessary and sufficient conditions for the unboundedness of $Q(x)$ on $\mathcal{R}$ along a half-line, follow immediately from Theorem 4 and from conditions (8) and (9). As was observed earlier, it is possible for $\mathcal{R}$, and therefore for $Q(x)$ on $\mathcal{R}$, to be unbounded along a quadratic curve. Thus, Theorems 5 and 6 do not give necessary conditions for the unboundedness of $Q(x)$ on $\mathcal{R}$. However, if there are only concave constraints, then the conditions are both necessary and sufficient for the existence of upper bounds on $Q(x)$. The methods given below to check the conditions are easily adapted to check the conditions when only one type of constraint is present.

Theorem 5. The function $Q(x)$ is unbounded from below on $\mathcal{R}$ along a half-line if and only if there exists a vector $s$ satisfying the following conditions:

$$Bs = 0, \tag{10}$$
$$a^T s < 0, \tag{11}$$
$$B_i s = 0 \quad \forall i \in I_{m_1}, \tag{12}$$
$$a_i^T s \le 0 \quad \forall i \in I_{m_1}, \tag{13}$$
$$B_i s \ne 0 \ \vee\ a_i^T s \le 0 \quad \forall i \in I_{m \setminus m_1}. \tag{14}$$

Theorem 6. The function $Q(x)$ is unbounded from above on $\mathcal{R}$ along a half-line if and only if there exists a vector $s$ satisfying the following conditions:

$$Bs \ne 0 \ \vee\ a^T s > 0, \tag{15}$$
$$B_i s = 0 \quad \forall i \in I_{m_1}, \tag{16}$$
$$a_i^T s \le 0 \quad \forall i \in I_{m_1}, \tag{17}$$
$$B_i s \ne 0 \ \vee\ a_i^T s \le 0 \quad \forall i \in I_{m \setminus m_1}. \tag{18}$$

Although it is a simple matter to write down the sufficient conditions, it is not obvious how to check those conditions. In the next two subsections we discuss methods to do this. The methods are based on using Algorithm A to solve systems like those in (1).

4.1. The conditions of Theorem 5

We first make a change of variables to eliminate conditions (10) and (12). Let $N$ be a matrix whose columns are a basis for the intersection of the null spaces of the matrices $B, B_1, \dots, B_{m_1}$. (Such a matrix $N$ can be found, for example, using the method described in [3].) Note that if the intersection contains only the zero vector, which will happen, for example, if any of the matrices are positive definite, then it is impossible to satisfy (10)-(12) simultaneously. Otherwise, we set $s = N\underline{s}$, $\underline{a} = N^T a$, $\underline{a}_i = N^T a_i$ $\forall i \in I_m$, and $\underline{B}_i = N^T B_i N$ $\forall i \in I_{m \setminus m_1}$. Thus, conditions (10)-(14) are equivalent to

$$\underline{a}^T \underline{s} < 0, \tag{19}$$
$$\underline{a}_i^T \underline{s} \le 0 \quad \forall i \in I_{m_1}, \tag{20}$$
$$\underline{B}_i \underline{s} \ne 0 \ \vee\ \underline{a}_i^T \underline{s} \le 0 \quad \forall i \in I_{m \setminus m_1}. \tag{21}$$

Let $\underline{a}_0 = \underline{a}$ and $\underline{B}_0 = 0$, and $\underline{B}_i = 0$ $\forall i \in I_{m_1}$. Then any solution to conditions (19)-(21) is a solution to

$$\underline{s} \ne 0, \qquad \underline{B}_i \underline{s} \ne 0 \ \vee\ \underline{a}_i^T \underline{s} \le 0 \quad \forall i \in \{0\} \cup I_m. \tag{22}$$

We now apply Algorithm A to (22) starting with $\underline{Z}_0 = \{0\} \cup Z_0 \cup I_{m_1}$. If there is no solution to (22), then there is no solution to (19)-(21), which implies that there is no solution to (10)-(14). If there is a solution to (22), then, since $\underline{Z}_0 \ne \emptyset$, termination is in Step 3 of the algorithm with some set $\underline{Z} = \underline{Z}_n \supseteq \underline{Z}_0$. From Lemma 1, it follows that there exists a vector $\underline{s}^* \in R_{\underline{Z}}$ where $\underline{B}_i \underline{s}^* \ne 0$, $i \in \underline{Z}^c$. We now consider the LP

$$\min\{ \underline{a}^T \underline{s} \mid \underline{s} \in R_{\underline{Z}} \}.$$

If the LP has the solution $\underline{s} = 0$, then there is no $s$ satisfying (10)-(14). Otherwise, the LP is unbounded from below and there exists an $\underline{\tilde{s}} \in R_{\underline{Z}}$ with $\underline{a}^T \underline{\tilde{s}} < 0$. Clearly, there then exists some $\lambda > 0$ such that

$$\underline{a}^T (\underline{s}^* + \lambda \underline{\tilde{s}}) < 0, \qquad \underline{s}^* + \lambda \underline{\tilde{s}} \in R_{\underline{Z}}, \qquad \underline{B}_i (\underline{s}^* + \lambda \underline{\tilde{s}}) \ne 0 \quad \forall i \in \underline{Z}^c,$$

which implies that conditions (10)-(14) are satisfied by $s = N(\underline{s}^* + \lambda \underline{\tilde{s}})$.
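The change of variables hinges on computing a basis for the intersection of several null spaces. The following Python sketch is our construction, not the LINPACK-based method of [3]: it uses the fact that a vector lies in every null space exactly when it lies in the null space of the stacked matrix, which an SVD exposes. The matrices in the usage lines are read off from Example 5 below ($B = \mathrm{diag}(0, 2)$ for the objective, $B_1 = \mathrm{diag}(2, 0)$ for the convex constraint):

```python
import numpy as np

def common_nullspace_basis(mats, tol=1e-10):
    """Basis N for the intersection of the null spaces of the given
    symmetric PSD matrices: null(stack) equals the intersection."""
    S = np.vstack(mats)
    _, sing, Vt = np.linalg.svd(S)
    rank = int(np.sum(sing > tol))
    return Vt[rank:].T          # shape (d, d - rank); zero columns if trivial

# Example 5 data (our transcription): B = diag(0, 2), B_1 = diag(2, 0).
B, B1 = np.diag([0.0, 2.0]), np.diag([2.0, 0.0])
N5 = common_nullspace_basis([B, B1])   # Theorem 5 check: trivial intersection
N6 = common_nullspace_basis([B1])      # Theorem 6 check: N spans (0, 1)^T
```

The two outcomes match the example: for Theorem 5 the intersection is trivial, while for Theorem 6 the change-of-variable matrix is $N = (0, 1)^T$ up to sign.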

4.2. The conditions of Theorem 6

We first make a change of variables to eliminate condition (16). Let $N$ be a matrix whose columns are a basis for the intersection of the null spaces of the matrices $B_i$, $i \in I_{m_1}$. If the intersection contains only the zero vector, then it is impossible to satisfy (15)-(18) simultaneously. Otherwise, we set $s = N\underline{s}$, $\underline{a}_0 = -N^T a$, $\underline{a}_i = N^T a_i$ $\forall i \in I_m$, $\underline{B}_0 = N^T B N$, $\underline{B}_i = 0$ $\forall i \in I_{m_1}$, and $\underline{B}_i = N^T B_i N$ $\forall i \in I_{m \setminus m_1}$. Thus, any solution to (15)-(18) is a solution to

$$\underline{s} \ne 0, \qquad \underline{B}_i \underline{s} \ne 0 \ \vee\ \underline{a}_i^T \underline{s} \le 0 \quad \forall i \in \{0\} \cup I_m. \tag{23}$$

We now apply Algorithm A to (23) starting with $\underline{Z}_0 = Z_0 \cup I_{m_1}$. If there is no solution to (23), then there is no solution to (15)-(18). If there is a solution to (23), then Algorithm A terminates with some set $\underline{Z} = \underline{Z}_n \supseteq \underline{Z}_0$. From Lemma 1, it follows that there exists a vector $\underline{s}^* \in R_{\underline{Z}}$ where $\underline{B}_i \underline{s}^* \ne 0$, $i \in \underline{Z}^c$. If $0 \in \underline{Z}^c$, then $\underline{s}^*$ satisfies (15)-(18). If $0 \in \underline{Z}$, then we consider the LP

$$\min\{ \underline{a}_0^T \underline{s} \mid \underline{s} \in R_{\underline{Z}} \}.$$

If the LP has the solution $\underline{s} = 0$, then there is no $s$ satisfying (15)-(18). Otherwise, the LP is unbounded from below and there exists an $\underline{\tilde{s}} \in R_{\underline{Z}}$ with $\underline{a}_0^T \underline{\tilde{s}} < 0$. Clearly, there then exists some $\lambda > 0$ such that

$$\underline{a}_0^T (\underline{s}^* + \lambda \underline{\tilde{s}}) < 0, \qquad \underline{s}^* + \lambda \underline{\tilde{s}} \in R_{\underline{Z}}, \qquad \underline{B}_i (\underline{s}^* + \lambda \underline{\tilde{s}}) \ne 0 \quad \forall i \in \underline{Z}^c,$$

which implies that conditions (15)-(18) are satisfied by $s = N(\underline{s}^* + \lambda \underline{\tilde{s}})$.

Example 5. We consider the behaviour of the function $Q(x) = -36x + y^2$ over the unbounded feasible region in Figure 5, which is represented by the following constraint set:

$$x^2 - y \le -1, \qquad x - \tfrac{1}{25} y^2 \le 1.$$

We have that $I_{m_1} = \{1\}$ and $I_{m \setminus m_1} = \{2\}$. We first check the conditions of Theorem 5, that is, we first determine whether $Q(x)$ is unbounded from below. We have that the null space of $B$ is spanned by $(1, 0)^T$ and the null space of $B_1$ is spanned by $(0, 1)^T$, so that the intersection of the null spaces contains only the zero vector. Thus, we cannot satisfy (10)-(14) and we can make no conclusion about the lower-boundedness of $Q(x)$ over the feasible region. However, we can conclude that $Q(x)$ is not unbounded from below along a half-line in $\mathcal{R}$.

We now check the conditions of Theorem 6, that is, we determine whether $Q(x)$ is unbounded from above. The change of variable matrix is $N = (0, 1)^T$, so that $\underline{s} \in \mathbb{R}$, and (23) reduces to

$$\underline{s} \ne 0,$$
$$2\underline{s} \ne 0 \ \vee\ 0 \le 0,$$
$$0\underline{s} \ne 0 \ \vee\ -\underline{s} \le 0,$$
$$\tfrac{2}{25}\underline{s} \ne 0 \ \vee\ 0 \le 0.$$

We see that $\underline{s} = 1$ is a solution. It follows that $s = N\underline{s} = (0, 1)^T$ satisfies (15)-(18) so that $Q(x)$ is unbounded from above on $\mathcal{R}$. See Figure 5.

Figure 5. The function $Q(x)$ is bounded from below and is unbounded from above on $\mathcal{R}$.

5. Concluding remarks

We have presented sufficient conditions for a convex quadratic function to be unbounded from above, and to be unbounded from below, over a feasible region defined by both convex and concave quadratic constraints. We have also shown that the conditions are necessary for the existence of upper bounds if the constraints are concave. In addition, we have presented a technique for checking the conditions which only requires the solution of a finite number of LPs having a special structure. Finally, this paper presents a novel algorithm for determining a solution to system (1). Future research includes determining necessary conditions for the unboundedness of $\mathcal{R}$.

Acknowledgements

This work was supported by the Natural Sciences and Engineering Research Council of Canada under grant number A8807, and by the University of Windsor under the C.P. Crowley Scholarship.


References

[1] Baron, D.P., "Quadratic programming with quadratic constraints", Naval Research Logistics Quarterly 19 (1972) 105-119.
[2] Cole, F., Ecker, J.G., and Gochet, W., "A reduced gradient method for quadratic programming with quadratic constraints and l_p-approximation problems", European Journal of Operational Research 9 (1982) 194-203.
[3] Dongarra, J.J., Moler, C.B., Bunch, J.R., and Stewart, G.W., LINPACK Users' Guide, SIAM, Philadelphia, PA, 1979.
[4] Ecker, J.G., and Niemi, R.D., "A dual method for quadratic programs with quadratic constraints", SIAM Journal on Applied Mathematics 28 (1975) 568-576.
[5] Fang, S.C., and Rajasekera, J.R., "Controlled perturbations for quadratically constrained quadratic programs", Mathematical Programming 36 (1986) 276-289.
[6] Mangasarian, O.L., Nonlinear Programming, McGraw-Hill, New York, 1969.
[7] Mehrotra, S., and Sun, J., "A method of analytic centers for quadratically constrained convex quadratic programs", SIAM Journal on Numerical Analysis 28 (1991) 529-544.
[8] Murty, K.G., Linear Programming, Wiley, New York, 1983.
[9] Obuchowska, W., "Badanie ograniczonosci funkcji celu w zagadnieniu rozszerzonej optymalizacji kwadratowej", Research Papers of the Academy of Economics, Wrocław, Poland, 1985.
[10] Phan-huy-Hao, E., "Quadratically constrained quadratic programming: Some applications and a method for solution", Zeitschrift für Operations Research 8 (1982) 105-119.
[11] Rockafellar, R.T., Convex Analysis, Princeton University Press, Princeton, NJ, 1970.
[12] Vogel, C.R., "A constrained least squares method for nonlinear ill-posed problems", presented at the SIAM Conference on Optimization, Houston, TX, 1987.