Operations Research Letters 10 (1991) 309-312, North-Holland, August 1991
On the convex programming approach to linear programming

J.R. Rajasekera, AT&T Bell Laboratories, Box 900, Princeton, NJ 08540, USA

S.C. Fang, North Carolina State University, Box 7913, Raleigh, NC 27695, USA
For a general linear program in Karmarkar's standard form, Fang recently proposed a new approach which finds an ε-optimal solution by solving an unconstrained convex dual program. The dual was constructed by applying generalized geometric programming theory to a linear programming problem. In this paper we show that Fang's results can be obtained directly using a simple geometric inequality. The new approach provides a better ε-optimal solution generation scheme in a simpler way.

Keywords: linear programming, entropy, convex programming, geometric programming, duality theory
1. Introduction

Karmarkar's algorithm [4] has generated much research in linear programming. Recent algorithms include the barrier function approach and the application of Newton's method [3,5]. A new method, recently proposed by Fang [2], showed that the geometric programming dual of a linear program in Karmarkar's standard form, associated with an entropy-based barrier function, is an unconstrained minimization problem with a convex objective function. Moreover, for any given tolerance ε > 0, the barrier parameter, which appears in the dual objective function, can be controlled so that an ε-optimal primal-dual solution pair of the linear program can be generated from the optimal solution of the geometric dual. This approach suggests a new way of looking at linear programs and allows already developed unconstrained optimization methods to be applied to their solution. In this paper we show that Fang's results can be obtained in a direct and simpler way. We use a simple arithmetic-geometric type inequality to naturally generate the entropy-based barrier function and, subsequently, obtain a more efficient ε-optimal solution generation scheme for computations.
2. Entropy-based barrier function

Consider a linear programming problem in Karmarkar's standard form [4]:

(P)  Minimize $c'x$ subject to

$$Ax = 0, \tag{1}$$

$$e'x = 1, \tag{2}$$

$$x \ge 0, \tag{3}$$

where $c$ and $x$ are $n$-dimensional column vectors, $A$ is an $m \times n$ matrix, $0$ is an $m$-dimensional zero vector, and $e$ is an $n$-dimensional vector of ones. We assume that $n > 1$ and that program (P) has a strictly positive feasible solution. Following Karmarkar, we also assume that constraint (3) can be modified to $x > 0$.
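To make the standing assumptions concrete, the following small sketch (ours, not from the paper; the function name is illustrative) constructs a random instance of (P) for which $x = e/n$ is a strictly positive feasible solution, by forcing $Ae = 0$:

```python
import numpy as np

def random_karmarkar_instance(m, n, seed=0):
    """Random instance of (P). Subtracting each row's mean forces
    A e = 0, so x = e/n satisfies Ax = 0, e'x = 1, x > 0."""
    rng = np.random.default_rng(seed)
    B = rng.standard_normal((m, n))
    A = B - B.mean(axis=1, keepdims=True)  # now every row of A sums to 0
    c = rng.standard_normal(n)
    return A, c

A, c = random_karmarkar_instance(m=3, n=8)
print(np.allclose(A @ (np.ones(8) / 8), 0.0))  # True: e/n is feasible
```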
The standard dual linear program of (P) is the following:

(D)  Maximize $w_{m+1}$ subject to

$$\sum_{i=1}^{m} a_{ij} w_i + w_{m+1} \le c_j, \qquad j = 1, 2, \ldots, n, \tag{4}$$

$$w_i \in \mathbb{R}, \qquad i = 1, 2, \ldots, m+1. \tag{5}$$
Note that, for any $w_i \in \mathbb{R}$, $i = 1, 2, \ldots, m$, setting $w_{m+1} = \min_{j=1,\ldots,n} \left( c_j - \sum_{i=1}^{m} a_{ij} w_i \right)$ always gives a dual feasible solution. To show that the entropy-based barrier function plays a natural role in solving problem (P), we consider the simple geometric inequality [1]

$$\sum_{j=1}^{n} e^{y_j} \ge \prod_{j=1}^{n} \left( e^{y_j} / x_j \right)^{x_j}, \tag{6}$$

which holds for arbitrary $y_j$, $x_j > 0$, $j = 1, 2, \ldots, n$, with $\sum_{j=1}^{n} x_j = 1$. The equality holds if and only if

$$x_j = \lambda e^{y_j}, \qquad j = 1, 2, \ldots, n, \tag{7}$$

where $\lambda > 0$ is a constant. For given $\mu > 0$, we substitute $y_j = \left( \sum_{i=1}^{m} a_{ij} w_i - c_j \right)/\mu$, for $j = 1, 2, \ldots, n$, in (6) and take the logarithm of both sides. Then

$$\sum_{j=1}^{n} \left( \sum_{i=1}^{m} a_{ij} w_i - c_j \right) x_j \le \mu \sum_{j=1}^{n} x_j \log x_j + \mu \log \sum_{j=1}^{n} \exp\!\left[ \left( \sum_{i=1}^{m} a_{ij} w_i - c_j \right) / \mu \right], \tag{8}$$

which holds for arbitrary $w$ and $x > 0$ with $\sum_{j=1}^{n} x_j = 1$. The equality holds if and only if

$$x_j = \lambda \exp\!\left[ \left( \sum_{i=1}^{m} a_{ij} w_i - c_j \right) / \mu \right], \qquad j = 1, 2, \ldots, n. \tag{9}$$
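As a quick numerical sanity check (our sketch, not part of the paper; the random data and the scipy helpers are our choice), one can verify inequality (8) at an arbitrary point of the simplex and confirm that the normalized choice (9) attains equality:

```python
import numpy as np
from scipy.special import logsumexp, softmax

rng = np.random.default_rng(1)
m, n, mu = 3, 8, 0.5
A, c, w = rng.standard_normal((m, n)), rng.standard_normal(n), rng.standard_normal(m)

u = A.T @ w - c                  # u_j = sum_i a_ij w_i - c_j
x = rng.random(n); x /= x.sum()  # arbitrary x > 0 with e'x = 1

lhs = u @ x                                          # left-hand side of (8)
rhs = mu * (x @ np.log(x)) + mu * logsumexp(u / mu)  # right-hand side of (8)
print(lhs <= rhs)                # True: inequality (8) holds

x_eq = softmax(u / mu)           # the choice (9), normalized to the simplex
print(np.isclose(u @ x_eq, mu * (x_eq @ np.log(x_eq)) + mu * logsumexp(u / mu)))
```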
Now, assume that $x$ also satisfies constraint (1) of problem (P). Then $\sum_{j=1}^{n} a_{ij} x_j = 0$, $i = 1, 2, \ldots, m$, and

$$\sum_{j=1}^{n} \left( \sum_{i=1}^{m} a_{ij} w_i - c_j \right) x_j = \sum_{i=1}^{m} w_i \left( \sum_{j=1}^{n} a_{ij} x_j \right) - \sum_{j=1}^{n} c_j x_j = -\sum_{j=1}^{n} c_j x_j.$$
Hence, after rearrangement, (8) reduces to

$$-\mu \log \sum_{j=1}^{n} \exp\!\left[ \left( \sum_{i=1}^{m} a_{ij} w_i - c_j \right) / \mu \right] \le c'x + \mu \sum_{j=1}^{n} x_j \log x_j, \tag{10}$$

which holds for arbitrary $w$ and $x > 0$ satisfying constraints (1)-(2). The equality holds if and only if (9) is true. In (10) we see that the entropy function, $\sum_{j=1}^{n} x_j \log x_j$, naturally becomes part of the objective function of problem (P) under the constraints (1)-(3). Also notice that the left-hand side of (10) is equivalent to the geometric programming dual obtained in Fang [2].
3. ε-Optimal solution

Minimizing the right-hand side of (10) under constraints (1)-(3) has been proposed in [3]. Here, we can simply consider the maximization of the left-hand side of (10), with unconstrained $w$. If we define

$$h(w; \mu) = -\mu \log \sum_{j=1}^{n} \exp\!\left[ \left( \sum_{i=1}^{m} a_{ij} w_i - c_j \right) / \mu \right], \tag{11}$$
then it is a concave function of $w$. Also, under the feasibility assumption of problem (P), inequality (10) implies that $h(w; \mu)$ is bounded from above. Hence a unique maximum solution $w^*$ exists. Taking derivatives at $w^*$, we have

$$\frac{\partial h(w^*; \mu)}{\partial w_i} = -\left[ \sum_{j=1}^{n} \exp\!\left( \left( \sum_{k=1}^{m} a_{kj} w_k^* - c_j \right) / \mu \right) \right]^{-1} \sum_{j=1}^{n} \exp\!\left( \left( \sum_{k=1}^{m} a_{kj} w_k^* - c_j \right) / \mu \right) a_{ij} = 0, \qquad i = 1, 2, \ldots, m. \tag{12}$$
Checking the second-order derivatives, we see that $w^*$ is the maximum of $h(w; \mu)$. Let us define $x^*$ as follows:

$$x_j^* = \frac{\exp\!\left[ \left( \sum_{k=1}^{m} a_{kj} w_k^* - c_j \right) / \mu \right]}{\sum_{l=1}^{n} \exp\!\left[ \left( \sum_{k=1}^{m} a_{kl} w_k^* - c_l \right) / \mu \right]}, \qquad j = 1, 2, \ldots, n. \tag{13}$$
Then, (12) implies that $x^*$ satisfies constraint (1), and (13) implies that $x^*$ satisfies constraints (2)-(3) of problem (P). Hence $x^*$ is feasible to problem (P). Moreover, each $x_j^*$ satisfies the condition specified in (9), and hence (10) becomes an equality with $x$ and $w$ replaced by $x^*$ and $w^*$, respectively. This leads to the following theorem:

Theorem 1. Given $\mu > 0$, let $w^*$ be the maximum of the concave function $h(w; \mu)$. If $x^*$ is defined by (13), then

$$h(w^*; \mu) = -\mu \log \sum_{j=1}^{n} \exp\!\left[ \left( \sum_{i=1}^{m} a_{ij} w_i^* - c_j \right) / \mu \right] = c'x^* + \mu \sum_{j=1}^{n} x_j^* \log x_j^*. \tag{14}$$

Now we further define

$$w_{m+1}^* = \min_{j=1,\ldots,n} \left( c_j - \sum_{i=1}^{m} a_{ij} w_i^* \right). \tag{15}$$
Then $w^* = (w_1^*, w_2^*, \ldots, w_{m+1}^*)$ is feasible to program (D). Without loss of generality, we assume the minimum of the right-hand side in (15) occurs at $j = 1$. Taking the logarithm in (13), and multiplying the result by $\mu$, we have

$$\mu \log x_1^* = \left( \sum_{i=1}^{m} a_{i1} w_i^* - c_1 \right) - \mu \log \sum_{j=1}^{n} \exp\!\left[ \left( \sum_{i=1}^{m} a_{ij} w_i^* - c_j \right) / \mu \right] = -w_{m+1}^* + h(w^*; \mu). \tag{16}$$

Hence, by (14),

$$\mu \log x_1^* = -w_{m+1}^* + c'x^* + \mu \sum_{j=1}^{n} x_j^* \log x_j^*. \tag{17}$$

The duality theory of linear programming implies that $0 \le c'x^* - w_{m+1}^*$, and (17) becomes

$$0 \le c'x^* - w_{m+1}^* = \mu \log x_1^* - \mu \sum_{j=1}^{n} x_j^* \log x_j^* \tag{18}$$

$$= \mu \sum_{j=1}^{n} \log \left( x_1^* / x_j^* \right)^{x_j^*}. \tag{19}$$
Table 1
Dependence of μ on the problem size (ε = 0.0001)

n         μ = ε/log n     μ = εe/n
100       0.0000217       0.00000271
1000      0.0000144       0.000000271
10000     0.0000108       0.0000000271
100000    0.0000086       0.00000000271
Now, applying the geometric inequality (6) with the fact that $x^* > 0$ and $\sum_{j=1}^{n} x_j^* = 1$, we have $\sum_{j=1}^{n} x_1^* \ge \prod_{j=1}^{n} \left( x_1^* / x_j^* \right)^{x_j^*}$. Since $1 \ge x_1^*$, we have $\sum_{j=1}^{n} x_1^* = n x_1^* \le n$, hence $n \ge \prod_{j=1}^{n} \left( x_1^* / x_j^* \right)^{x_j^*}$ and $\sum_{j=1}^{n} \log \left( x_1^* / x_j^* \right)^{x_j^*} \le \log n$. Hence, (19) reduces to

$$0 \le c'x^* - w_{m+1}^* \le \mu \log n. \tag{20}$$

For any tolerance $\varepsilon > 0$, we define $\mu = \varepsilon / \log n$ to get $0 \le c'x^* - w_{m+1}^* \le \varepsilon$. Therefore we have the following result:

Theorem 2. For given $\varepsilon > 0$, let $\mu = \varepsilon / \log n$ and let $w^*$ be the maximum of the concave function $h(w; \mu)$. If $x^*$ is defined by (13), then $0 \le c'x^* - w_{m+1}^* \le \varepsilon$, and hence $(x^*, w^*)$ is an ε-optimal primal-dual solution pair.

As pointed out in [2], once μ is known, various methods for unconstrained optimization can be applied to find the optimum $w^*$, and hence to find an ε-optimal solution to the linear programming problem.
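As one concrete (hypothetical) realization of this remark, the sketch below maximizes $h(w; \mu)$ with a quasi-Newton method and recovers the ε-optimal pair of Theorem 2. BFGS is simply one of the applicable unconstrained methods, not a prescription of the paper, and the gap bound holds up to the accuracy reached by the solver:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp, softmax

def eps_optimal_pair(A, c, eps):
    """Maximize h(w; mu) of (11) with mu = eps/log n, then recover the
    primal point via (13), the dual value via (15), and the gap of (20)."""
    m, n = A.shape
    mu = eps / np.log(n)

    def neg_h(w):                              # -h(w; mu), see (11)
        return mu * logsumexp((A.T @ w - c) / mu)

    def grad_neg_h(w):                         # gradient from (12): A x(w)
        return A @ softmax((A.T @ w - c) / mu)

    res = minimize(neg_h, np.zeros(m), jac=grad_neg_h, method="BFGS")
    w_star = res.x
    x_star = softmax((A.T @ w_star - c) / mu)  # (13): exactly a softmax
    w_m1 = np.min(c - A.T @ w_star)            # (15): dual objective value
    return x_star, w_star, w_m1, c @ x_star - w_m1

# Toy instance with A e = 0, so x = e/n is strictly feasible.
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 8))
A = B - B.mean(axis=1, keepdims=True)
c = rng.standard_normal(8)
x_star, w_star, w_m1, gap = eps_optimal_pair(A, c, eps=1e-2)
print(0.0 <= gap <= 1e-2)  # Theorem 2, up to solver tolerance
```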
4. Conclusion

We have shown the equivalence of linear programming and convex programming using simple algebra based on a geometric inequality. For solving a problem of $n$ variables with a given tolerance $\varepsilon > 0$, we can set $\mu = \varepsilon / \log n$, instead of $\mu = \varepsilon e / n$ ($e = 2.718\ldots$) as reported in [2], and find the maximum of $h(w; \mu)$. The way μ changes with the problem size is illustrated in Table 1. In computing the maximum of $h(w; \mu)$, the most difficult task is to avoid overflow errors caused by the exponential functions involved. Also, the optimal value $x_j^*$ given in (13) is extremely sensitive to the exact values obtained through the exponential functions. When a larger value of μ is allowed, these computational problems are less severe. Therefore, the new scheme is better suited to large linear programs.
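Regarding the overflow issue, a standard remedy (our note, not the paper's) is the max-shift identity $\log \sum_j e^{z_j} = \max_k z_k + \log \sum_j e^{z_j - \max_k z_k}$, which evaluates $h(w; \mu)$ in (11) without overflow even for tiny μ; scipy.special.logsumexp implements the same trick:

```python
import numpy as np

def h(w, A, c, mu):
    """Overflow-safe evaluation of h(w; mu) in (11) via the max-shift:
    log sum exp(z) = max(z) + log sum exp(z - max(z))."""
    z = (A.T @ w - c) / mu
    zmax = z.max()
    return -mu * (zmax + np.log(np.exp(z - zmax).sum()))
```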
References

[1] R.J. Duffin, E.L. Peterson and C. Zener, Geometric Programming - Theory and Applications, John Wiley, New York, 1967.
[2] S.C. Fang, "A new unconstrained convex programming approach to linear programming", North Carolina State University, OR Research Report 243, Feb. 25, 1990; to appear in Z. Oper. Res.
[3] P.E. Gill, W. Murray, M.A. Saunders, J.A. Tomlin and M.H. Wright, "On projected Newton barrier methods for linear programming and an equivalence to Karmarkar's projective method", Math. Programming 36, 183-209 (1986).
[4] N. Karmarkar, "A new polynomial-time algorithm for linear programming", Combinatorica 4, 373-395 (1984).
[5] J.C. Lagarias and D.A. Bayer, "Karmarkar's linear programming method and Newton's method", Bell Laboratories Technical Report 11218-870810-22TM, Aug. 10, 1987.
[6] D.G. Luenberger, Linear and Nonlinear Programming, 2nd ed., Addison-Wesley, Reading, MA, 1984.