European Journal of Operational Research 7 (1981) 196-202
North-Holland Publishing Company

Some aspects of integer programming duality

Stanislaw WALUKIEWICZ
Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland

Received May 1979; revised January 1981

Two approaches to duality in integer programming are briefly reviewed in this paper with emphasis on their applicability. We introduce the so-called integer Lagrangean relaxation for a given problem and show how it can be used in the construction of an appropriate dual problem for which strong duality holds. A branch-and-bound algorithm for solving both the primal and the dual problem is described. Finally, we stress the importance of a reformulation of a given integer problem in seeking its optimal dual variables and their interpretation.

The main part of this work was done during my stay at the Institute of Datalogy, University of Copenhagen. I am most grateful to Jakob Krarup for his comments and suggestions on the subject of this paper. Thanks are also due to the referees for their helpful comments in improving earlier versions of the paper.

1. Introduction

A typical situation in the application of mathematical programming is the following: data have been collected, a mathematical model is built and solved, and the results will be implemented in the future. Therefore one is interested not only in an optimal solution but also in the results of a sensitivity analysis of the model and, in particular, in the valuation of the constraints at an optimal point as a first step of such an analysis. Both tasks are handled for linear programming models thanks to linear programming duality, while the situation is much more complicated in the case of integer programming models.

The aim of this paper is to consider different approaches to duality in integer programming with emphasis on their applicability. More precisely, given the integer programming problem

(P)   v(P) = max{cx | Ax ≤ b, x ≥ 0, x ∈ Z^n},

we are interested in the construction of a dual problem (D) to (P) such that as many results as possible from linear programming duality remain valid in the integer case. By Z^n we denote the set of all n-dimensional integer vectors and by F(P) the set of all feasible solutions to (P). We assume that all data in (P) are integer.

We now recall the well-known facts of linear programming duality, since they will be used throughout the paper in a discussion of the similarities and differences between linear and integer programming duality. Given an m by n matrix A of reals and two vectors b ∈ R^m and c ∈ R^n, one can write the two linear programming problems

(P̄)   v(P̄) = max{cx | Ax ≤ b, x ≥ 0}

and

(D̄)   v(D̄) = min{ub | uA ≥ c, u ≥ 0}.

We will consider (P̄) as the linear programming relaxation of (P). Let v(P̄) = −∞ if F(P̄) = ∅ and v(D̄) = +∞ if F(D̄) = ∅, in accordance with standard conventions. We formulate the well-known results of linear programming duality in the form of three theorems.
Theorem 1 (Weak duality). cx ≤ ub for any x ∈ F(P̄), u ∈ F(D̄).

Theorem 2 (Strong duality). If either (P̄) or (D̄) has a finite optimal value, then there exists at least one optimal primal feasible solution x̄ to (P̄) and at least one optimal dual feasible solution ū to (D̄), and for any optimal x̄ and ū we have cx̄ = ūb.

Theorem 3 (Complementary slackness). Let x̄ and ū be a pair of optimal solutions to (P̄) and (D̄), respectively. Then
(i) x̄_j(c_j − ū a_j) = 0, j = 1,...,n,
(ii) ū_i(b_i − a^i x̄) = 0, i = 1,...,m,
where a_j (a^i) is a column (row) of A.
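As a quick numerical check of Theorems 1-3 (not part of the original paper), the sketch below solves a small, invented primal-dual pair with SciPy's linprog; the availability of NumPy and SciPy and the made-up data are assumptions.

```python
# Invented LP pair illustrating Theorems 1-3; requires NumPy and SciPy.
import numpy as np
from scipy.optimize import linprog

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([8.0, 9.0])
c = np.array([3.0, 2.0])

# Primal: max cx s.t. Ax <= b, x >= 0 (linprog minimizes, so negate c).
primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")
# Dual: min ub s.t. uA >= c, u >= 0 (rewritten as -A^T u <= -c).
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 2, method="highs")

x_bar, u_bar = primal.x, dual.x
print("primal optimum:", -primal.fun, " dual optimum:", dual.fun)   # equal (Theorem 2)
print("u_i(b_i - a^i x):", u_bar * (b - A @ x_bar))                 # ~ zeros (Theorem 3(ii))
```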
Following Koopmans [11] we give an economic interpretation of the above results, assuming that (P̄) is a problem of profit maximization, where b represents the available resources, x the activity levels, and Ax the resources required by the process if the activity levels are x. Then u represents the (shadow) prices of the resources, with the following properties:
(a) No process in the technology makes a positive net profit (since c − uA ≤ 0);
(b) Every process in use makes a zero profit (by (i));
(c) No resource has a negative price (since u ≥ 0);
(d) Every resource used below the limit of its availability has a zero price (by (ii)).

To demonstrate the importance of the economic aspects of integer programming duality we consider a simple capital budgeting problem.

Example 1. The problem concerns a firm confronted with a variety of possible investment projects and a fixed capital. The objective is to select the projects which lead to the highest value for the firm. Let c_j be the value of project j, j = 1,...,n, let a_ij be the amount of expenditure required for project j in period i, discounted to the present, i = 1,...,m, and let b_i be the fixed capital in period i. The capital budgeting problem can be modelled by the following integer programming problem:

max Σ_{j=1}^{n} c_j x_j,

s.t. Σ_{j=1}^{n} a_ij x_j ≤ b_i,   i = 1,...,m,                                  (1)

x_j = 1 if project j is selected, and x_j = 0 otherwise.
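The following sketch (invented data, not from the paper) solves a tiny instance of the model above by enumerating all 0-1 selections and compares the integer optimum with the bound given by the linear programming relaxation.

```python
# Tiny, invented instance of the capital budgeting model (1).
from itertools import product
import numpy as np
from scipy.optimize import linprog

c = np.array([10.0, 7.0, 6.0])            # project values
A = np.array([[5.0, 4.0, 3.0],            # discounted expenditures per period
              [4.0, 3.0, 5.0]])
b = np.array([8.0, 8.0])                  # fixed capital per period

# Integer optimum by enumeration of all 0-1 vectors x.
best_val, best_x = -np.inf, None
for x in product((0, 1), repeat=len(c)):
    x = np.array(x, dtype=float)
    if np.all(A @ x <= b) and c @ x > best_val:
        best_val, best_x = c @ x, x

# LP relaxation (0 <= x_j <= 1) for comparison.
lp = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, 1)] * len(c), method="highs")

print("integer optimum:", best_x, "value:", best_val)
print("LP relaxation bound:", -lp.fun)    # at least as large as the integer value
```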
It is easy to see that one has to use discrete variables to model the above situation. Consider for a moment the above problem without the integrality requirements. Then, by Theorem 3, the optimal prices ū_1,...,ū_m in this linear programming problem play the role of opportunity costs for the supply of capital. In the capital budgeting problem it is interesting to have a valuation of constraints (1) which is as close as possible to these opportunity costs. To avoid misunderstanding we will use the term 'multipliers' instead of 'prices', since in integer programming, generally speaking, there is no vector of multipliers which evaluates the constraints in the sense of linear programming duality. Therefore in this paper we consider different methods for computing optimal multipliers for a given integer programming problem and show that these multipliers have some economic interpretation.

In the next section we give a short description of the two most often used methods for computing multipliers for a given integer problem. Next we introduce the so-called integer Lagrangean relaxation of a given integer programming problem and show how it can be used in the construction of an appropriate dual problem such that strong duality holds. An algorithm for solving both the primal and the dual problem is given in Section 4. Finally, we emphasize the importance of reformulations of a given integer programming problem. Although such an approach by itself cannot guarantee finding the optimal prices, it has proved useful in many applications and it can be used in combination with the methods discussed in this paper. Throughout this paper we consider only pure integer programming problems. Some possible extensions are considered in the concluding remarks.
2. Methods

In integer programming a dual problem is not uniquely defined for a given primal problem; it depends on the method used for computing the (optimal) multipliers, and not all relations of linear programming duality hold for such a primal-dual pair. It is reasonable to measure the difference between the linear and the integer case by the duality gap, defined as

Δ = |v(P) − v(D)|,

where |x| denotes the absolute value of x. We say that the ith constraint represents a free good if it can be removed from (P) without changing an optimal solution to (P). In linear programming a free good always has a zero price.

We briefly review the two most commonly used methods for computing optimal prices in integer programming. One of the earliest attempts to find an economically meaningful dual for a given integer problem was made by Gomory and Baumol [5]. It is well known that if F(P) ≠ ∅ the Gomory fractional
cutting plane algorithm guarantees convergence to an optimal solution in a finite number of steps, say r. So at the last iteration the LP solution is integer and the last linear programming problem gives optimal multipliers both for the original constraints and for the added cutting planes. It is generally very difficult to give these extra constraints any physical meaning, and hence to make economic sense of their multipliers. Gomory and Baumol therefore suggest relating these new constraints back to the original ones and recomputing the multipliers. The recomputation process is based on the fact that any cutting plane can be represented as a linear combination with nonnegative coefficients of the earlier constraints, with the structural constraints, the nonnegativity constraints and the previously added cuts all included. Let u_i, i = 1,...,m+n+r, be the optimal multipliers obtained at the last iteration of Gomory's algorithm and let h_r x ≤ h_r0 be the last cut constructed by this algorithm. Then

h_r = Σ_{i=1}^{m+n+r−1} w_i a_i,    h_r0 = [Σ_{i=1}^{m+n+r−1} w_i b_i],

where [x] means the largest integer not exceeding x and all w_i ≥ 0. Having all the w_i, we recompute the multipliers in the following way:

u'_i = u_i + w_i u_r   for i = 1,...,m+n+r−1,    u'_r = u*_r = 0.

We set r = r − 1 and repeat the recomputation process for the (r−1)st cut, and continue until only the original inequalities remain. At the end of the recomputation process we have

u*_i = u'_i   for i = 1,...,m+n,    u*_i = 0   for i = m+n+1,...,m+n+r,

and we define the optimal value of the dual as

v(D) = Σ_{i=1}^{m} u*_i b_i.
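A sketch of this recomputation loop is given below (not from the paper). The multiplier vector u and the weight vectors w[s], each expressing cut s as a nonnegative combination of the rows preceding it, are assumed to be delivered by Gomory's algorithm; the data layout is purely hypothetical.

```python
# Hypothetical sketch of the Gomory/Baumol multiplier recomputation.
# u: multipliers for the m structural rows, the n nonnegativity rows and the
#    r cuts, as produced at the last iteration of the fractional algorithm.
# w: w[s][i] >= 0 expresses cut s as a combination of the rows preceding it.
def fold_back_cut_multipliers(u, w, m, n):
    r = len(w)                            # number of cuts
    u = list(u)
    for s in reversed(range(r)):          # fold the last cut back first
        cut_pos = m + n + s               # position of cut s in the row list
        for i in range(cut_pos):
            u[i] += w[s][i] * u[cut_pos]  # u'_i = u_i + w_i * u_cut
        u[cut_pos] = 0.0                  # the cut itself gets multiplier zero
    return u                              # v(D) = sum(u[i] * b[i] for i < m)
```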
The severe drawback of this method is that the valuations u* depend on the route by which optimality was achieved, and this route is in general not unique. It is easy to see that weak duality holds, but neither strong duality, i.e., Δ = 0, nor complementary slackness holds for such optimal prices. It can happen that even a free good has a nonzero multiplier. Alcaly and Klevorick [1] have shown that after some modifications of b in (P) strong duality holds (see also Weingartner [14]). On the other hand, recent computational experiments suggest that such an approach is useful if the integer programming optimum is fairly close to the LP optimum (see Williams [16]).

Optimal multipliers for a given problem (P) can also be established by Lagrangean relaxation in the following way. For any k = 1,2,...,m we define a Lagrangean relaxation of (P) as the integer programming problem (see Geoffrion [4])

(R(u_k))   g(u_k) = max(cx + u_k(b_k − a_k x)),
           s.t. a_i x ≤ b_i, i = 1,...,m, i ≠ k, x ≥ 0, x ∈ Z^n.

It is easy to see that F(P) ⊆ F(R(u_k)) and g(u_k) ≥ cx for any u_k ≥ 0, k = 1,...,m, x ∈ F(P). The optimal multiplier u*_k is defined as an optimal solution to the problem

(D_k)   g(u*_k) = min g(u_k),   s.t. u_k ≥ 0.

We define v(D) = cx* + u*(b − Ax*) and observe that in general Δ > 0 and that a free good receives a zero multiplier. In many applications such multipliers have an economic interpretation. Held and Karp's work [6,7] on the travelling salesman problem initiated a stream of successful applications of Lagrangean relaxation in solving integer programming problems. A general description of this approach is given in Geoffrion's paper [4].
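To make (R(u_k)) and (D_k) concrete, the sketch below (invented data, not from the paper) evaluates g(u_k) by brute force for a tiny two-constraint problem with the variables restricted to 0-1 for the enumeration, and approximates min g(u_k) by scanning u_k over a coarse grid rather than by a proper subgradient scheme.

```python
# Hypothetical two-constraint 0-1 problem; constraint k = 0 is relaxed.
from itertools import product
import numpy as np

c = np.array([5.0, 4.0, 3.0])
A = np.array([[4.0, 3.0, 2.0],
              [2.0, 3.0, 4.0]])
b = np.array([5.0, 5.0])
k = 0                                     # index of the relaxed constraint

def g(u_k):
    """g(u_k) = max cx + u_k (b_k - a_k x) over 0-1 x satisfying the other rows."""
    best = -np.inf
    for x in product((0, 1), repeat=len(c)):
        x = np.array(x, dtype=float)
        if all(A[i] @ x <= b[i] for i in range(len(b)) if i != k):
            best = max(best, c @ x + u_k * (b[k] - A[k] @ x))
    return best

# Crude search for u_k* = argmin g(u_k), u_k >= 0 (grid scan, not subgradients).
grid = np.linspace(0.0, 5.0, 101)
u_star = min(grid, key=g)
print("u_k* approx.", u_star, " g(u_k*) approx.", g(u_star))
```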
3. Integer Lagrangean relaxation

In this section we consider a bounded, feasible, equality-constrained integer programming problem with a nonnegative right-hand side. Obviously, any practical problem can be written in the following form:

(P)   v(P) = max{cx | Ax = b, 0 ≤ x ≤ d, x ∈ Z^n},

where A ∈ Z^{m×n}, b ∈ Z^m, c ∈ Z^n, d ∈ Z^n and b ≥ 0. It is well known that m > 1 equality constraints of (P) may be aggregated into a single constraint without changing the set of feasible solutions to (P) (see e.g. Garfinkel and Nemhauser [3, p. 242] and Rosenberg [12]). Therefore the problem

(P_a)   v(P_a) = max{cx | ūAx = ūb, 0 ≤ x ≤ d, x ∈ Z^n}
has the same optimal solution as (P), where ū ∈ Z^m is a vector of aggregation coefficients for the system Ax = b, 0 ≤ x ≤ d, x ∈ Z^n, and we have F(P_a) = F(P). Let Ū be the set of all aggregation coefficient vectors for the above system. Since we have assumed that (P) is bounded, this set is nonempty; e.g., it is sufficient to take ū = (1, M, M^2, ..., M^{m−1}), where M > max_i |a_i x − b_i| over 0 ≤ x ≤ d, x ∈ Z^n. Obviously, Ū is unbounded, since if ū ∈ Ū, then kū ∈ Ū for any integer k ≠ 0.

For any given finite m-dimensional integer vector u we define the integer Lagrangean relaxation of (P) as the integer programming problem

(R(u))   v(R(u)) = g(u) = max(cx + u(b − Ax)),   s.t. uAx = ub, 0 ≤ x ≤ d, x ∈ Z^n.

Problem (R(u)) is a relaxation of (P), as F(P) ⊆ F(R(u)) and cx = cx + u(b − Ax) for any x ∈ F(P) and any u ∈ Z^m. As in the case of the ordinary (continuous) Lagrangean relaxation, we try to solve a sequence of problems (R(u)), or their modifications, for different multiplier vectors u ∈ Z^m instead of solving problem (P). We show that this sequence is finite and that the last multiplier vector has interesting properties.
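Before analysing this iteration, a small numerical sketch (invented data, not from the paper) may help: it builds an aggregation vector ū = (1, M) for a two-constraint instance and solves (R(u)) by enumerating the box 0 ≤ x ≤ d, so that one can check that for u = ū the optimal solution of the relaxation is feasible for (P), while for an arbitrary u it need not be.

```python
# Hypothetical equality-constrained instance of (P): max cx, Ax = b, 0 <= x <= d.
from itertools import product
import numpy as np

c = np.array([2, 1, 1])
A = np.array([[1, 1, 1],
              [1, 2, 0]])
b = np.array([2, 2])
d = np.array([2, 2, 2])

def solve_R(u):
    """Solve (R(u)): max cx + u(b - Ax) s.t. uAx = ub, 0 <= x <= d, x integer."""
    best_val, best_x = None, None
    for x in product(*(range(di + 1) for di in d)):
        x = np.array(x)
        if u @ (A @ x) == u @ b:
            val = c @ x + u @ (b - A @ x)
            if best_val is None or val > best_val:
                best_val, best_x = val, x
    return best_x, best_val

# Aggregation vector u_bar = (1, M), with M exceeding max |a_i x - b_i| on the box.
M = int(max(abs(A[i] @ d - b[i]) for i in range(len(b)))) + int(b.max()) + 1
u_bar = np.array([1, M])

for u in (np.array([0, 1]), u_bar):
    x, val = solve_R(u)
    print("u =", u, " x' =", x, " g(u) =", val,
          " feasible for (P):", bool(np.all(A @ x == b)))
```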
Let x' be an optimal solution to (R(u')) for a given u' ∈ Z^m. Only two possibilities may occur: (1) x' ∉ F(P) and (2) x' ∈ F(P).

(1) If x' ∉ F(P), then (b − Ax') ≠ 0, although u'b − u'Ax' = 0 (here 0 is an m-dimensional vector of zeros). So for this u' ∈ Z^m the solution of the integer Lagrangean relaxation is not a solution to our initial problem (P). As g(u) ≥ cx for any x ∈ F(R(u')), the ideal choice would be to take u as an optimal solution to the program

min g(u),   s.t. u ∈ Z^m,

satisfying one more requirement, uAx' ≠ ub. Since all data in (P) are integer, this condition is equivalent to the alternative of two inequality constraints, and therefore to two problems, each with the addition of

u(b − Ax') ≥ 1                                                                  (2)

or

u(b − Ax') ≤ −1                                                                 (3)

respectively. So we find the first two constraints of the dual (D') to (P), which can be written in the form

(D')   v(D') = min g(u),   s.t. u ∈ F(D').

These constraints eliminate, at least from the optimal solutions to (D'), all integer vectors ku', where k ≠ 0. Let u'' be a finite optimal solution to one of the above problems. (It is easy to see that at least one of these problems is feasible.) Then in the next iteration we construct (R(u'')) and find its optimal solution. If this solution is infeasible to (P), then in the next iteration two new alternative constraints are added to (2) and (3), and so on.

(2) If x' ∈ F(P), then it is an optimal solution to (P). We call the corresponding multiplier vector u* an optimal multiplier vector for (P). Let U* be the set of all optimal multiplier vectors for (P). We observe that any aggregation coefficient vector is at the same time an optimal multiplier vector, therefore U* ≠ ∅. Moreover, in [13] we show by a numerical example that U* ≠ Ū.

Since (R(u)) is a bounded problem and no u can be repeated in the above iteration process, in a finite number of steps we arrive at the case x' ∈ F(P), i.e., we find an optimal solution to both problems (P) and (D'). We summarize the results obtained so far in the following

Theorem 4. For a bounded, feasible, equality-constrained problem (P), the dual is

(D')   v(D') = g(u) = min max(cx + u(b − Ax)),   s.t. u ∈ F(D'),

and for such a primal-dual pair the following results hold:
(a) (Weak duality) cx ≤ g(u) for any x ∈ F(R(u)) ⊇ F(P) and for any u ∈ F(D');
(b) (Strong duality) There exists an optimal solution u* to (D') such that for any optimal solution x* to (P), cx* = g(u*);
(c) Ū ⊆ U*, but Ū ≠ U*.

Part (ii) of complementary slackness is obviously satisfied, as we have an equality-constrained problem, i.e.,

u*_i(b_i − a_i x*) = 0,   i = 1,...,m.

Theorem 5. The ith constraint in (P) represents a free good if and only if there exists an optimal multiplier vector u* with u*_i = 0.

Proof. If the ith constraint represents a free good then, by definition, it can be removed from (P) without any change of v(P). But this is equivalent to considering (R(u)) such that u_i = 0 for any u ∈ Z^m. Conversely, if u_i = 0, then the ith constraint enters (R(u)) with a zero coefficient, which means that a_i x = b_i may be dropped from the formulation of (P).

For the dual defined in this way the set of all optimal solutions is unbounded, which is impractical; therefore we reformulate the dual to the form

(D)   v(D) = min ub,   s.t. u ∈ F(D), u ≥ 0, u ∈ Z^m.

Since b ≥ 0, problem (D) is bounded and the set of all its optimal solutions is bounded. In other words, any aggregation vector ū ≥ 0 is only a feasible solution to (D).

Following Forgo [2] we give one more interpretation of problems (P), (D) and the variables u. Suppose the profit maximization problem (P) models the activity of an economic unit. The equations Ax = b represent the physical constraints on raw materials, energy, labour, etc. Then the variables u ∈ Z^m can be interpreted as prices and the single constraint uAx = ub can be interpreted as a financial restriction. The prices u* assure that if the financial requirement is met, then the physical constraints are automatically satisfied provided the level of activity is optimal. Therefore an economic unit can be motivated to operate optimally through prices, and such prices u* always exist, as U* ≠ ∅; moreover, u* ∈ Z^m, i.e., the prices have the same nature as the data in (P). From Theorem 4 we have that

Δ = |v(P) − v(D)| = |cx* − u*b|.

From now on we will consider a slight modification of the integer Lagrangean relaxation assuring that the objective functions in (P) and (R(u)) are the same.

4. An algorithm

We describe now a branch-and-bound algorithm for solving both the primal problem (P) and the dual problem (D).

Step 1 (Initialization). Set k = 0 and u^0 = 0.

Step 2 (Solving the integer Lagrangean relaxation). Solve

(R(u^k))   v(R(u^k)) = max cx,   s.t. u^k Ax = u^k b, 0 ≤ x ≤ d, x ∈ Z^n.

Let x^k be an optimal solution to the above problem. If Ax^k = b, then x* = x^k and u* = u^k (i.e., x^k and u^k solve the primal and the dual problem, respectively) and stop. Otherwise set k = k + 1 and go to Step 3.

Step 3 (Solving the dual problem). Solve

(D^k)   v(D^k) = min ub,   s.t. uAx^r ≠ ub, r = 0,1,...,k−1, u ≥ 0, u ∈ Z^m.

Let u^k be an optimal solution to (D^k). Go to Step 2.

Remarks. (1) This algorithm is finite, since any aggregation vector ū, ū ≥ 0, is a feasible solution to (D) and u*b ≤ ūb. Therefore at most Π_{i=1}^{m}(ū_i + 1) iterations are needed to find optimal solutions to (P) and (D). To obtain a good estimate one may introduce an appropriate definition of the 'smallest' aggregation coefficient vector.
(2) At each iteration in Step 3 we may add one or all of the constraints associated with all optimal solutions to (R(u^k)).
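The sketch below (invented data, not the author's code) runs Steps 1-3 on a tiny instance, solving each (R(u^k)) by enumerating the box 0 ≤ x ≤ d and each (D^k) by enumerating integer u over a small box; the bound U_BOX on that box is a hypothetical choice assumed to be large enough to contain an optimal multiplier vector.

```python
# Hypothetical run of the Step 1-3 scheme; all subproblems solved by enumeration.
from itertools import product
import numpy as np

c = np.array([2, 1, 1])
A = np.array([[1, 1, 1],
              [1, 2, 0]])
b = np.array([2, 2])
d = np.array([2, 2, 2])
U_BOX = 5                                  # assumed bound on the dual search box

def solve_R(u):
    """(R(u^k)): max cx s.t. uAx = ub, 0 <= x <= d, x integer."""
    feas = [np.array(x) for x in product(*(range(di + 1) for di in d))
            if u @ (A @ np.array(x)) == u @ b]
    return max(feas, key=lambda x: c @ x)

def solve_D(history):
    """(D^k): min ub s.t. u A x^r != u b for every stored x^r, u >= 0 integer."""
    cand = []
    for u in product(range(U_BOX + 1), repeat=len(b)):
        u = np.array(u)
        if all(u @ (A @ xr) != u @ b for xr in history):
            cand.append(u)
    return min(cand, key=lambda u: u @ b)

u, history = np.zeros(len(b), dtype=int), []
while True:
    x = solve_R(u)                         # Step 2
    if np.all(A @ x == b):                 # primal feasible: stop
        break
    history.append(x)
    u = solve_D(history)                   # Step 3

print("x* =", x, " cx* =", c @ x, " u* =", u, " u*b =", u @ b)
```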
5. Reformulations of integer programming problems

Consider the two integer programming problems
(P1)   v(P1) = max Σ_{j=1}^{n1} c_j x_j,
       s.t. Σ_{j=1}^{n1} a_ij x_j ≤ b_i,   i = 1,...,m1,
       x_j = 0 or 1,   j = 1,...,n1,
and

(P2)   v(P2) = max Σ_{j=1}^{n2} c_j x_j,
       s.t. Σ_{j=1}^{n2} ā_ij x_j ≤ b̄_i,   i = 1,...,m2,
       x_j = 0 or 1,   j = 1,...,n2.
Problems (P1) and (P2) are equivalent if F(P1) = F(P2) or there exists a one-to-one correspondence between F(P1) and F(P2). Problem (P2) is a tighter equivalent formulation of (P1) if F(P1) = F(P2) and F(P̄2) ⊆ F(P̄1), where (P̄i) is the linear programming relaxation of (Pi) obtained by substituting 0 ≤ x_j ≤ 1 for x_j = 0 or 1. The convex hull of F(P1) is the tightest possible equivalent formulation, for which a linear programming optimum is integer and therefore all results of linear programming duality are valid. Unfortunately, the convex hull constraints usually have little physical meaning and we do not know how to find the convex hull efficiently for a given problem.

Williams has shown in [15] and [16] that some practical integer problems can be efficiently reformulated in such a way that the resulting problems have unimodular coefficient matrices, which guarantees that the linear programming optimum is integer. Moreover, in such a reformulation each constraint has some physical meaning. Generally speaking, in this nonalgorithmic approach we can only guarantee that F(P̄2) ⊆ F(P̄1), and this is obtained by increasing the number of constraints. So, generally, we have m2 ≥ m1 and n2 = n1.

In this section we briefly review the method for constructing a tighter equivalent formulation based on the idea of a rotation of an integer constraint without adding or eliminating any of its feasible solutions (see Kianfar [10], Kaliszewski and Walukiewicz [8,9]). In this approach we rotate each constraint separately using a dynamic-programming-type procedure. A constraint which cannot be rotated without adding to or eliminating from its set of feasible solutions is called a strongest cut (constraint), and an integer programming problem in which each constraint is a strongest one is called an almost linear (integer programming) problem. Our computational experiments [9] have shown that for some examples taken from the literature an almost linear programming problem is in fact a linear programming problem, so for such problems all results of linear programming duality are valid. If this is not the case, then, as F(P̄2) ⊆ F(P̄1), the linear programming optimum is closer to the integer solution and there is a good chance that the Gomory/Baumol method from Section 2 will produce, for such a formulation, valuations of the integer constraints which have some physical meaning, as has been demonstrated in [16]. It is worth noting that the reformulation takes on average 5% of the solution time of the Gomory fractional algorithm and never exceeds 30% of it. Our computational experiments also show that this method is well suited for capital budgeting and project selection problems, for which the method described by Williams [15], generally speaking, fails. In this approach we have n2 = n1 and m2 = m1.
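As a toy illustration of the definition of a tighter equivalent formulation (this is not the rotation procedure of [8-10], just an invented example), the constraint 2x1 + 2x2 ≤ 3 over 0-1 variables admits the same integer solutions as x1 + x2 ≤ 1, but the latter has a strictly smaller LP relaxation; the sketch below checks both claims.

```python
# Invented example: (P1) uses 2x1 + 2x2 <= 3, (P2) the tighter x1 + x2 <= 1.
from itertools import product
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 1.0])

# Same set of feasible 0-1 points for both formulations.
F1 = {x for x in product((0, 1), repeat=2) if 2 * x[0] + 2 * x[1] <= 3}
F2 = {x for x in product((0, 1), repeat=2) if x[0] + x[1] <= 1}
print("F(P1) == F(P2):", F1 == F2)

# LP relaxations: the tighter formulation already gives the integer value.
lp1 = linprog(-c, A_ub=[[2.0, 2.0]], b_ub=[3.0], bounds=[(0, 1)] * 2, method="highs")
lp2 = linprog(-c, A_ub=[[1.0, 1.0]], b_ub=[1.0], bounds=[(0, 1)] * 2, method="highs")
print("LP bound (P1):", -lp1.fun, " LP bound (P2):", -lp2.fun)   # 1.5 versus 1.0
```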
6. Concluding remarks

A major reason for the widespread use of linear programming models is the existence of simple procedures for performing sensitivity analysis. So far we have had no such procedure for integer programming, and providing one is a key objective of integer programming duality. It is easy to see that the Gomory/Baumol approach and the approach based on Lagrangean relaxation can be extended to the case of mixed integer programming problems, while the approach based on the integer Lagrangean relaxation cannot. On the other hand, the last approach can be used in the decomposition of integer programming problems and in computing optimal multipliers for nonlinear integer constraints.
References

[1] R.E. Alcaly and A.V. Klevorick, A note on dual prices of integer programs, Econometrica 34 (1966) 206-214.
[2] F. Forgo, Shadow prices and decomposition for integer programs, DM 74-6, Department of Mathematics, Karl Marx University of Economics, Budapest (1974).
[3] R.S. Garfinkel and G.L. Nemhauser, Integer Programming (Wiley, New York, 1972).
[4] A.M. Geoffrion, Lagrangean relaxation for integer programming, Math. Programming Study 2 (1974) 82-114.
[5] R.E. Gomory and W.J. Baumol, Integer programming and pricing, Econometrica 28 (1960) 521-550.
[6] M. Held and R.M. Karp, The traveling salesman problem and minimum spanning trees, Operations Res. 18 (1970) 1138-1162.
[7] M. Held and R.M. Karp, The traveling salesman problem and minimum spanning trees: Part II, Math. Programming 1 (1971) 6-25.
[8] I. Kaliszewski and S. Walukiewicz, Tighter equivalent formulations of integer programming problems, in: A. Prekopa, Ed., Survey of Mathematical Programming, Proceedings of the IX International Mathematical Programming Symposium, Budapest (August 1976).
[9] I. Kaliszewski and S. Walukiewicz, A computationally efficient transformation of integer programming problems, Mimeo, Systems Research Institute, Warsaw (1978).
[10] F. Kianfar, Stronger inequalities for 0-1 integer programming, Operations Res. 19 (1971) 1373-1392.
[11] T.C. Koopmans, Concepts of optimality and their uses, Nobel Memorial Lecture, 11 December 1975, Math. Programming 11 (1976) 212-228.
[12] I.G. Rosenberg, Aggregation of equations in integer programming, Discrete Math. 10 (1974) 325-341.
[13] S. Walukiewicz, On integer programming duality, in: J. Krarup and S. Walukiewicz, Eds., Proceedings of DAPS-79 (April 1980).
[14] H.M. Weingartner, Mathematical Programming and the Analysis of Capital Budgeting Problems (Academic Press, London, 1974).
[15] H.P. Williams, Experiments in the formulation of integer programming problems, Math. Programming Study 2 (1974) 180-197.
[16] H.P. Williams, The economic interpretation of duality for practical mixed integer programming problems, Mimeo, University of Edinburgh (1977).