Operations Research Letters 17 (1995) 209–214

Condition numbers for polyhedra with real number data

Stephen A. Vavasis^{a,*,1}, Yinyu Ye^{b,2}

^a Department of Computer Science, Cornell University, Ithaca, NY, USA
^b Department of Management Sciences, The University of Iowa, Iowa City, IA 52242, USA

Received 1 April 1994; revised 1 February 1995

* Corresponding author. E-mail: [email protected].
1 This work was supported in part by the National Science Foundation, the Air Force Office of Scientific Research, and the Office of Naval Research, through NSF grant DMS-8920550. Also supported in part by an NSF Presidential Young Investigator award with matching funds received from AT&T and Xerox Corp. Part of this work was done while the author was visiting Sandia National Laboratories, supported by the U.S. Department of Energy under contract DE-AC04-76DP00789.
2 Research was supported in part by NSF grant DDM-9207347. Part of this work was done while the author was on a sabbatical leave from the University of Iowa and visiting the Cornell Theory Center, Cornell University, Ithaca, NY 14853, supported in part by the Cornell Center for Applied Mathematics and by the Advanced Computing Research Institute, a unit of the Cornell Theory Center, which receives major funding from the National Science Foundation and IBM Corporation, with additional support from New York State and members of its Corporate Research Institute.
Abstract

We consider the complexity of finding a feasible point inside a polyhedron specified by homogeneous linear constraints. A primal-dual interior point method is used. The running time of the interior point method can be bounded in terms of a condition number of the coefficient matrix A that has been proposed by Ye. We demonstrate that Ye's condition number is bounded in terms of another condition number for weighted least squares discovered by Stewart and Todd. Thus, the Stewart-Todd condition number, which is defined for real-number data, also bounds the complexity of finding a feasible point in a polyhedron.

Keywords: Polyhedron; Interior point algorithms; Condition-based complexity
1. Introduction
Consider the following polyhedron:

ℱ = {x: Ax = 0, e^T x = 1, x ≥ 0},

where A ∈ R^{m×n} with rank m is given, e is the vector of all ones, and T denotes transpose. This is the homogeneous form proposed by Karmarkar [1]. ℱ is said to be feasible if and only if ℱ ≠ ∅. Given an A, there is a unique partition of the columns of A, A = (B, N), such that the set

ℱ_p = {x: Ax = 0, e^T x_B = 1, x_B ≥ 0, x_N = 0}

has a strictly feasible point, or an interior point in the positive orthant, i.e., an x ∈ ℱ_p such that x_B > 0, and the dual set

ℱ_d = {s: s = A^T y for some y, e^T s = 1, s_B = 0, s_N ≥ 0}
has a strictly feasible point, i.e., a point in ℱ_d such that s_N > 0. It is also known that x_N = 0 for any x ∈ ℱ. Thus, ℱ is infeasible if and only if A = N.

There is no loss of generality in the assumption that ℱ is in homogeneous form. Consider the standard nonhomogeneous linear feasibility system

𝒳 = {x: Āx = b, x ≥ 0}.

We can construct a related homogeneous system using A = (Ā, −b). Then 𝒳 is feasible if and only if the column corresponding to b lies in B, where (B, N) is the unique partition identified above for ℱ. This transformation from nonhomogeneous to homogeneous equations has an impact on our complexity analysis; see the concluding remarks. A short sketch of the reduction is given below.
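The reduction just described is easy to make concrete. The following sketch (our illustration only; it assumes numpy, and the function names are ours) forms A = (Ā, −b) and maps a feasible point of 𝒳 into ℱ with a positive last coordinate.

```python
import numpy as np

def homogenize(A_bar: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Return A = (A_bar, -b), so that A_bar @ x_bar = b with x_bar >= 0
    corresponds to a point of F = {x : A x = 0, e^T x = 1, x >= 0}
    whose last coordinate is positive."""
    return np.hstack([A_bar, -b.reshape(-1, 1)])

def lift(x_bar: np.ndarray) -> np.ndarray:
    """Map a feasible point of X = {x : A_bar x = b, x >= 0} into F."""
    x = np.append(x_bar, 1.0)   # append the homogenizing coordinate
    return x / x.sum()          # normalize so that e^T x = 1
```

Conversely, any point of ℱ whose last coordinate is positive can be divided through by that coordinate to recover a feasible point of 𝒳.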
Suppose we want to answer the following question using an interior point method:

(P) Is ℱ feasible?

In this note, we solve (P) with a primal-dual interior point method. The running time of this method is bounded in terms of a condition number of A. We then show how this condition number relates to another condition number of A defined in [6] and [7]. Other condition numbers of A have been used in error bound analysis [2, 3] and convergence analysis [5].

2. An interior-point algorithm for solving (P)

First, we propose to apply an interior-point algorithm to solve a related linear programming (LP) problem:

(LP)  minimize  x_0
      subject to  −(Ae) x_0 + Ax = 0,  e^T x ≤ 1,  (x_0, x) ≥ 0.

The dual of (LP) is

(LD)  maximize  y_0
      subject to  −e y_0 − A^T y ≥ 0,  −e^T A^T y ≤ 1,  y_0 ≤ 0.

We use s_0 = 1 + e^T A^T y and s = −e y_0 − A^T y to denote the slack variables of (LD). Obviously, x_0^0 = 1/(n + 1) and x^0 = e/(n + 1), y_0^0 = −1 and y^0 = 0 with slacks s_0^0 = 1 and s^0 = e are feasible points for (LP) and (LD), respectively. Moreover, they are on the central path, with initial duality gap

μ^0 = [x_0^0 s_0^0 + (x^0)^T s^0 + (1 − e^T x^0)(−y_0^0)] / (n + 2) = 1/(n + 1).

Starting from this point, an O(√n L) (primal-dual) interior-point algorithm will generate a sequence {(x^k, y^k)}, starting from (x^0, y^0), such that

min_{0 ≤ j ≤ n} x_j^k s_j^k ≥ α μ^k  and  μ^k ≤ (1 − β/√(n + 2)) μ^{k−1}     (1)

for some constants 0 < α, β < 1. Here μ^k represents the scaled duality gap at the kth iteration:

(n + 2) μ^k = x_0^k s_0^k + (x^k)^T s^k + (1 − e^T x^k)(−y_0^k).

All successive iterates satisfy strict feasibility: (x_0^k, x^k) > 0, (s_0^k, s^k) > 0, y_0^k < 0, and 1 − e^T x^k > 0.
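As a quick numerical sanity check of the starting point and the gap formula above (our illustration, not part of the paper; it assumes numpy), the function below confirms feasibility of (x_0^0, x^0) and (y_0^0, y^0) and returns μ^0, which should equal 1/(n + 1):

```python
import numpy as np

def initial_point_gap(A: np.ndarray) -> float:
    """Check the starting point x0 = 1/(n+1), x = e/(n+1), y0 = -1, y = 0
    and return its scaled duality gap mu^0 (which equals 1/(n+1))."""
    m, n = A.shape
    e = np.ones(n)
    x0, x = 1.0 / (n + 1), e / (n + 1)
    y0, y = -1.0, np.zeros(m)
    s0 = 1.0 + e @ (A.T @ y)     # slack of the constraint -e^T A^T y <= 1
    s = -e * y0 - A.T @ y        # slack vector of (LD)
    # primal feasibility: A x - (A e) x0 = 0 and e^T x <= 1
    assert np.allclose(A @ x - (A @ e) * x0, 0.0) and e @ x <= 1.0
    # dual feasibility: s0 >= 0, s >= 0, y0 <= 0
    assert s0 >= 0 and np.all(s >= 0) and y0 <= 0
    return (x0 * s0 + x @ s + (1.0 - e @ x) * (-y0)) / (n + 2)
```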
Consider a condition number of A, defined as

σ_p = min_{j∈B} {max x_j: x ∈ ℱ_p},  σ_d = min_{j∈N} {max s_j: s ∈ ℱ_d},     (2)

σ(A) = min(σ_p, σ_d).

(We assign σ_p (σ_d) to 1 if B (N) is null.) We note that σ(A) depends only on the nullspace of A (or, equivalently, the range space of A^T) and is invariant if A is premultiplied by a nonsingular m × m matrix. The same is true of the parameter χ̄(A) introduced below. Nonetheless, we continue to write the dependence in terms of A.

It has been shown that the interior-point algorithm generates a sequence of partitions (B^k, N^k) of A such that, after O(√n(|log σ(A)| + log n)) iterations, we have convergence to B^k = B and N^k = N (see [11]). It would seem that this result could be used to answer question (P) by determining whether A = N or not. However, this requires σ(A) as a priori information. Without knowledge of σ(A), we have to provide a witness for whether A = N or not.
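For small instances, the condition number in (2) can be evaluated directly from its definition by solving one linear program per coordinate. The brute-force sketch below (our illustration, using scipy.optimize.linprog; it is emphatically not the interior-point procedure analyzed in this note) recovers the partition (B, N) and σ(A):

```python
import numpy as np
from scipy.optimize import linprog

def sigma(A: np.ndarray, tol: float = 1e-9):
    """Brute-force sigma(A) = min(sigma_p, sigma_d) from definition (2),
    using one LP per coordinate.  Illustrative only."""
    m, n = A.shape
    e = np.ones(n)
    # v[j] = max{ x_j : A x = 0, e^T x = 1, x >= 0 } (0 if the LP is infeasible)
    Aeq = np.vstack([A, e]); beq = np.append(np.zeros(m), 1.0)
    v = np.zeros(n)
    for j in range(n):
        c = np.zeros(n); c[j] = -1.0
        res = linprog(c, A_eq=Aeq, b_eq=beq, bounds=(0, None))
        v[j] = -res.fun if res.status == 0 else 0.0
    # w[j] = max{ s_j : s = A^T y, e^T s = 1, s >= 0 }, optimized over y
    w = np.zeros(n)
    for j in range(n):
        res = linprog(-A[:, j], A_ub=-A.T, b_ub=np.zeros(n),
                      A_eq=(A @ e).reshape(1, -1), b_eq=[1.0],
                      bounds=(None, None))
        w[j] = -res.fun if res.status == 0 else 0.0
    B = v > tol                                   # the unique partition (B, N)
    sigma_p = v[B].min() if B.any() else 1.0      # sigma_p = 1 if B is null
    sigma_d = w[~B].min() if (~B).any() else 1.0  # sigma_d = 1 if N is null
    return min(sigma_p, sigma_d), B
```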
Here is an example of the analysis in [11]. It has been shown that if A = N, then for any k the sequence {(x^k, s^k)} satisfies

s_j^k ≥ α s_j^*/(n + 2),  1 ≤ j ≤ n,

for any s^* in ℱ_d (for example, see [11]). Thus,

s_j^k ≥ α σ(A)/(n + 2),  1 ≤ j ≤ n.     (3)

Consider a least-squares projection at the kth iteration of the algorithm:

minimize ‖d_s‖ subject to A^T d_y + S^k d_s = e y_0^k,

where S^k is diag(s^k). We have a closed form for d_s, namely

d_s = y_0^k P_{A(S^k)^{-1}} (S^k)^{-1} e,     (4)

where P_{A(S^k)^{-1}} is the projection matrix onto the null space of A(S^k)^{-1}. Let

y^+ = y^k + d_y  and  s^+ = s^k + S^k d_s = −A^T(y^k + d_y) = −A^T y^+.

Note that from (3) and (4),

‖d_s‖ ≤ |y_0^k| · ‖(S^k)^{-1}‖ · ‖e‖ ≤ (n + 2)^{1.5} |y_0^k| / (α σ(A)).

Thus, if

|y_0^k| = −y_0^k ≤ x_0^k − y_0^k = (n + 2) μ^k < α σ(A)/(n + 2)^{1.5},

then ‖d_s‖ < 1 and

s^+ = S^k(e + d_s) > 0.

That is, after a constant-factor scaling, s^+ is an interior point of ℱ_d with A = N, thereby proving that A = N. Thus, a witness that (P) is infeasible can also be found in O(√n(|log σ(A)| + log n)) iterations. Similarly, the case A = B can be certified in the same number of iterations. In fact, all other cases can be completed in about the same number of iterations, to obtain (B, N) and to generate feasible points in ℱ_p and ℱ_d, respectively. Thus, σ(A) represents a measure of the difficulty of solving (P): the smaller σ(A), the harder the problem.
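The witness step based on (4) is straightforward to implement. The sketch below (our illustration in numpy; the projection onto the null space of A(S^k)^{-1} is realized with a least-squares solve) computes d_s from the current dual slack s^k and y_0^k, tests ‖d_s‖ < 1, and, if the test passes, returns the scaled interior point of ℱ_d:

```python
import numpy as np

def infeasibility_witness(A: np.ndarray, s: np.ndarray, y0: float):
    """Compute d_s = y0 * P_{A S^{-1}} S^{-1} e (formula (4)) and, if
    ||d_s|| < 1, return the strictly positive witness S(e + d_s), scaled to
    lie in F_d.  Here s must be the current (LD) slack, s = -e*y0 - A^T y,
    so that the returned vector lies in the range of A^T."""
    n = s.size
    e = np.ones(n)
    M = A / s                      # A @ diag(1/s), i.e. A (S^k)^{-1}
    u = e / s                      # (S^k)^{-1} e
    # projection of u onto null(M): u - M^T z with z = argmin ||M^T z - u||
    z, *_ = np.linalg.lstsq(M.T, u, rcond=None)
    d_s = y0 * (u - M.T @ z)
    if np.linalg.norm(d_s) < 1.0:
        s_plus = s * (e + d_s)         # componentwise; strictly positive
        return s_plus / s_plus.sum()   # scale so that e^T s = 1
    return None                        # gap not yet small enough; keep iterating
```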
3. Relation to another condition number

Let A be an m × n matrix, and let ‖·‖ be some p-norm. Let 𝒟 be the set of all positive definite n × n diagonal matrices. Let

S = {s ∈ R^n: ‖s‖ = 1 and s = A^T y for some y ∈ R^m}

and let

X = {x ∈ R^n: ADx = 0 for some D ∈ 𝒟}.

Define

ρ_0(A) = inf{‖s − x‖: x ∈ X, s ∈ S}.     (5)

Theorem 1 (Stewart [6]). For any nonzero matrix A, ρ_0(A) > 0.

We now define χ̄(A) = 1/ρ_0(A). In the case that A is full rank, there is an alternative definition:

Theorem 2 (Stewart [6], O'Leary [4]). Let A be an m × n matrix of rank m. Then

χ̄(A) = sup{‖A^T(ADA^T)^{-1}AD‖: D ∈ 𝒟}.     (6)
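Theorem 2 also suggests a crude numerical probe of χ̄(A): sampling positive diagonal matrices D and recording the largest observed value of ‖A^T(ADA^T)^{-1}AD‖ gives a lower bound on the supremum (never an upper bound). The sketch below is our illustration; the random log-uniform scalings and the choice of the spectral norm are assumptions, not part of the theory, and A is assumed to have full row rank as in the theorem.

```python
import numpy as np

def chi_bar_lower_bound(A: np.ndarray, trials: int = 1000, seed: int = 0) -> float:
    """Sample positive diagonal scalings D and return the largest observed
    ||A^T (A D A^T)^{-1} A D||_2.  This only LOWER-bounds chi_bar(A),
    the supremum over all D in Theorem 2.  Requires rank(A) = m."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    best = 0.0
    for _ in range(trials):
        d = np.exp(rng.uniform(-8, 8, size=n))    # widely spread positive weights
        AD = A * d                                # A @ diag(d)
        P = A.T @ np.linalg.solve(AD @ A.T, AD)   # A^T (A D A^T)^{-1} A D
        best = max(best, np.linalg.norm(P, 2))
    return best
```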
This quantity has been independently analyzed by Todd [7]. Suppose Z_A is a basis for the nullspace of A, that is, AZ_A = 0 and any x such that Ax = 0 can be written as x = Z_A q. Then one checks that 1/χ̄(Z_A^T) is precisely equal to the infimum of the distance between

S' = {s ∈ R^n: s = DA^T w for some w ∈ R^m, D ∈ 𝒟}

and

X' = {x ∈ R^n: Ax = 0 and ‖x‖ = 1}.

Notice that this fact means that χ̄(Z_A^T) does not depend on which nullspace basis is chosen.

In this section, we explore the relation between σ(A) and χ̄(A). More specifically, we show
σ(A) ≥ 1/(χ̄(A) + 1).     (7)
Therefore, to solve problem (P) we need at most O(√n(log χ̄(A) + log n)) interior-point algorithm iterations.
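To see where this count comes from, combine the geometric decrease of μ^k in (1), the starting gap μ^0 = 1/(n + 1), and the witness threshold (n + 2)μ^k < ασ(A)/(n + 2)^{1.5} from Section 2, and then use σ(A) ≥ 1/(χ̄(A) + 1). The helper below (our back-of-the-envelope illustration; the values of α and β are placeholders, since the constants in (1) are not specified) evaluates the resulting bound, which grows like √n(|log σ(A)| + log n):

```python
import math

def iteration_bound(n: int, sigma: float, alpha: float = 0.1, beta: float = 0.1) -> int:
    """Iterations sufficient for the witness test, assuming
    mu^k <= (1 - beta/sqrt(n+2))^k * mu^0 with mu^0 = 1/(n+1) and the
    stopping threshold mu^k < alpha*sigma/(n+2)**2.5.
    alpha, beta are placeholder values for the constants in (1)."""
    target = alpha * sigma / (n + 2) ** 2.5
    mu0 = 1.0 / (n + 1)
    t = beta / math.sqrt(n + 2)
    # (1 - t)^k <= exp(-k t), so k >= ln(mu0/target)/t suffices
    return max(0, math.ceil(math.log(mu0 / target) / t))
```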
We first have the following lemma.

Lemma 1. Let A be an m × n nonzero matrix, and suppose the columns of A are partitioned arbitrarily as [B, N]. Then
1. χ̄(A) ≥ 1;
2. assuming A has rank m, |χ̄(A) − χ̄(Z_A^T)| ≤ 1;
3. χ̄(Z_B^T) ≤ χ̄(Z_A^T),
where Z_B, Z_A are bases for the nullspaces of B, A, respectively.

Proof. The first two inequalities are proved in Vavasis [9]. We now prove the third one. Let the size of B be m × p; for simplicity, assume B is composed of the initial p columns of A. Suppose u, v ∈ R^p are chosen so that u = DB^T w for some D ∈ 𝒟 and w ∈ R^m, and so that ‖v‖ = 1 and Bv = 0. We must prove that ‖u − v‖ ≥ 1/χ̄(Z_A^T). Let u' ∈ R^n be defined by u' = D'A^T w, where D' = diag(D, εI). Here I denotes the (n − p) × (n − p) identity and ε > 0 is a small parameter. Observe that the last n − p entries of u', namely εN^T w, can be made arbitrarily small as we let ε tend to zero. Let v' be the extension of v to an n-vector obtained by filling in zeros. Observe that ‖v'‖ = ‖v‖ = 1, and observe also that Av' = 0. Thus, by definition,

‖u' − v'‖ ≥ 1/χ̄(Z_A^T).

But this implies that the same inequality must hold for u and v, because the last n − p components of u' − v' are arbitrarily small. □

We now prove several relations between σ(·) and χ̄(·).

Theorem 3. Assume ℱ_p has an interior feasible point. Then

σ_p ≥ 1/χ̄(Z_B^T).

Proof. Assume B has n_1 columns. For each i ∈ B and any μ > 0, consider the optimization problem

maximize x_i + μ Σ_{j∈B} log(x_j) subject to x ∈ ℱ_p.

This problem has a unique solution satisfying

X_B(−e_i − B^T y − λe) = μe,

where X_B is diag(x_B) and e_i is the ith unit vector. Thus, upon taking the inner product with e,

−λ = n_1 μ + x_i

and

x_B − X_B B^T ȳ = (μ/(n_1 μ + x_i)) e + (1/(n_1 μ + x_i)) X_B e_i,

where ȳ = y/(n_1 μ + x_i). Note that as μ → 0, x_i approaches the maximizer x_i^* of

maximize x_i subject to x ∈ ℱ_p,

which is positive since ℱ_p has nonempty interior. Thus, x_B − X_B B^T ȳ tends to zero as μ → 0 except for the ith entry. Choose a diagonal matrix D such that D has μ in the ith diagonal position and 1's elsewhere. Then ‖x_B − D X_B B^T ȳ‖ → x_i as μ → 0. Hence

x_i ≥ 1/χ̄(Z_B^T),

since x_B is in the range of Z_B, ‖x_B‖_1 = 1, and (D X_B)^{-1} w, where w = D X_B B^T ȳ, is in the null space of Z_B^T. Thus,

x_i^* ≥ x_i ≥ 1/χ̄(Z_B^T)  for each i ∈ B,

i.e., σ_p = min(x_1^*, …, x_{n_1}^*) ≥ 1/χ̄(Z_B^T). □
We have a similar result for the dual.

Theorem 4. Assume ℱ_d has an interior feasible point. Then σ_d ≥ 1/χ̄(A).
Proof. Assume N has n_1 columns. For each i ∈ N and any μ > 0, consider the optimization problem

maximize s_i + μ Σ_{j∈N} log(s_j) subject to s ∈ ℱ_d.

This problem has a unique solution satisfying

S_N(−e_i − x_N − λe) = μe,  s_B = 0,  s_N + N^T y = 0,  B x_B + N x_N = 0,

where x_B, x_N are Lagrange multipliers, S_N is diag(s_N), and e_i is the ith unit vector. Since s_N^T x_N = s^T x = 0, we again have

−λ = n_1 μ + s_i

and

s_N + S_N x̄_N = (μ/(n_1 μ + s_i)) e + (1/(n_1 μ + s_i)) S_N e_i,

where x̄ = −x/(n_1 μ + s_i). Note also that as μ → 0, s_i approaches the maximizer s_i^* of

maximize s_i subject to s ∈ ℱ_d.

Choose a diagonal matrix D such that D_B = μI and D_N has μ in the ith diagonal position and 1's elsewhere. Then, just as in the proof of Theorem 3, the corresponding distance tends to s_i as μ → 0. Hence

s_i^* ≥ s_i ≥ 1/χ̄(A),

since Ax̄ = 0 and s = −A^T y with ‖s‖_1 = 1. This implies σ_d ≥ 1/χ̄(A). □
Finally, for any A, we have the following theorem.

Theorem 5. σ(A) ≥ 1/(χ̄(A) + 1).

Proof. For any partition A = (B, N), we have from Lemma 1 and Theorem 3

σ_p ≥ 1/χ̄(Z_B^T) ≥ 1/χ̄(Z_A^T) ≥ 1/(χ̄(A) + 1),

where Z_B is the null space basis for B. Moreover, from Theorem 4 we have

σ_d ≥ 1/χ̄(A).

Therefore, we have the desired result for σ(A) = min(σ_p, σ_d). □

4. A bound on χ̄(A) for polyhedra with rational data

Finally, if A is rational, we provide a bound for χ̄(A) in terms of the size of A. A similar result is due to Tuncel [8].

Theorem 6. Let A be rational and let L be its bit size. Then

χ̄(A) ≤ 2^{O(L)}.
Proof. Consider the least-squares problem

minimize over y:  ‖D(A^T y − c)‖.

For all D ∈ 𝒟, its minimizer is

ŷ = (AD²A^T)^{-1} AD²c,

and ŷ lies in a bounded polyhedron of the form

P(D) = {y: A^T y ≤, =, or ≥ c},

where the actual relation for each inequality depends on D [7]. Thus, ŷ can be written as a convex combination of m + 1 vertices of P(D); i.e.,

ŷ = Σ_{i=1}^{m+1} α_i (B_i^T)^{-1} c_i,  α_i ≥ 0,  i = 1, …, m + 1,  Σ_{i=1}^{m+1} α_i = 1,
where B_i is a basis of A and c_i is the subvector of c corresponding to B_i. Thus, ‖(B_i^T)^{-1}‖ ≤ 2^{O(L)} for i = 1, …, m + 1, and

‖ŷ‖ = ‖Σ_{i=1}^{m+1} α_i (B_i^T)^{-1} c_i‖
    ≤ Σ_{i=1}^{m+1} α_i ‖(B_i^T)^{-1} c_i‖
    ≤ Σ_{i=1}^{m+1} α_i ‖(B_i^T)^{-1}‖ · ‖c_i‖
    ≤ Σ_{i=1}^{m+1} α_i ‖(B_i^T)^{-1}‖ · ‖c‖
    ≤ Σ_{i=1}^{m+1} α_i 2^{O(L)} ‖c‖ = 2^{O(L)} ‖c‖.

Therefore,

‖A^T(AD²A^T)^{-1} AD²c‖ = ‖A^T ŷ‖ ≤ ‖A^T‖ · ‖ŷ‖ ≤ 2^{O(L)} ‖c‖,

which implies the theorem. □
5. Remarks
We have shown that the complexity of finding an interior point of a homogeneous polyhedron is bounded in terms of χ̄(A). As mentioned in the introduction, this result can be generalized to nonhomogeneous polyhedra. The difficulty with this generalization is that appending b as a column of A could increase the value of χ̄(A). This increase is particularly undesirable for problems like flow problems, in which the constraints have the form Āx = b with small integer entries in Ā but arbitrary real numbers in the right-hand side vector. It is not hard to construct nonhomogeneous problems with near-degeneracies that make σ_p and σ_d arbitrarily small, independent of Ā. A different approach to nonhomogeneous problems is to apply a new kind of interior point method that is insensitive to such near-degeneracies; this is the subject of a longer paper [10] by the authors.

An open question raised by this work is whether σ(A) has an upper bound in terms of χ̄(A). We conjecture that the answer is affirmative.
References

[1] N. Karmarkar, "A new polynomial-time algorithm for linear programming", Combinatorica 4, 373–395 (1984).
[2] Z.Q. Luo and P. Tseng, "Error bounds and convergence analysis of matrix splitting algorithms for the affine variational inequality problem", SIAM J. Optim. 2, 43–54 (1992).
[3] O.L. Mangasarian, "Simple computable bounds for solutions of linear complementarity problems and linear programs", Math. Programming Study 25, 1–12 (1985).
[4] D.P. O'Leary, "On bounds for scaled projections and pseudoinverses", Linear Algebra Appl. 132, 115–117 (1990).
[5] R. Polyak, "Modified barrier functions (theory and methods)", Math. Programming 54, 177–222 (1992).
[6] G.W. Stewart, "On scaled projections and pseudoinverses", Linear Algebra Appl. 112, 189–193 (1989).
[7] M.J. Todd, "A Dantzig-Wolfe-like variant of Karmarkar's interior-point linear programming algorithm", Oper. Res. 38, 1006–1018 (1990).
[8] L. Tuncel, "A pseudo-polynomial complexity analysis for interior-point algorithms", Technical Report CORR 9316, Department of Combinatorics and Optimization, University of Waterloo, Waterloo, Ont., Canada, 1993.
[9] S.A. Vavasis, "Stable numerical algorithms for equilibrium systems", SIAM J. Matrix Anal. Appl. 15, 1108–1131 (1994).
[10] S.A. Vavasis and Y. Ye, "An accelerated interior point method whose running time depends only on A", Technical Report 93-1391, Department of Computer Science, Cornell University, Ithaca, NY, 1993.
[11] Y. Ye, "Toward probabilistic analysis of interior-point algorithms for linear programming", Math. Oper. Res. 19, 38–52 (1994).