U.S.S.R. Comput. Maths Math. Phys. Vol. 19, pp. 81-89. © Pergamon Press Ltd. 1980. Printed in Great Britain.

0041-5553/79/0201-0081$07.50/0

COMPUTER EXPERIMENTS ON SOLVING INTEGER LINEAR PROGRAMMING PROBLEMS BY LOCALLY STOCHASTIC ALGORITHMS*

V. A. PIR'YANOVICH

Minsk

(Received 6 February 1976)

A FAMILY of approximate algorithms for solving zero-one integer linear programming problems is described. A generator of conditions of problems with known optimal solutions is proposed for the experimental study of the algorithms. The results of computer experiments are outlined.

*Zh. vychisl. Mat. mat. Fiz., 19, 1, 79-87, 1979.
Introduction

It is well known [1, 2] that the scope of exact methods for solving large-scale applied problems of discrete optimization is limited. Many such problems occur in subsystems of operational-production planning and control, when it is required to obtain the optimal (or near-optimal) solution in a limited time. As a rule, such problems are solved by a variety of heuristic approximate methods (they are surveyed in [2]). The present paper describes a family of such methods for solving zero-one integer linear programming (i.l.p.) problems.
The idea underlying the methods, which are based on a combination of random search with local optimization, is simple, and amounts to the following. A random mechanism is used for selecting the initial solution of the problem (by a solution, here and below, we mean any (0,1)-vector X that satisfies the problem constraints). Next, a "neighbourhood" of this solution is defined in a natural way: it consists of a relatively small number of other solutions. Among the solutions belonging to the neighbourhood there is the locally optimal solution. The process of random selection of an initial solution, and of passage from this to the locally optimal solution, is repeated several times. Of the two locally optimal solutions, obtained at the previous and the current steps of the search, the better one, in the sense of the value of the target function, is stored each time. The best of the locally optimal solutions, obtained after a fixed number of search steps, is in fact taken as the optimal solution.

The scheme described, for the approximate solution of zero-one i.l.p. problems, was first realized in [3], and was then further developed in [4]. The algorithms of the present paper are modifications of the algorithms given in [4]: they employ a faster procedure for obtaining random permutations of numbers, and also more promising methods for adapting the search to the initial data of the problems.

Approximate methods were primarily devised for solving large-scale problems. Since no appropriate test problems are available for revealing the comparative efficiencies of the different methods, it is a matter of urgency to devise a generator of the conditions of problems with known optimal solutions. Such a generator is described below. The efficiency of our algorithms for solving i.l.p. problems, and the "quality" of the problems obtained with the aid of our generator, will be illustrated by the results of numerical computer experiments.
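The search scheme just described is, in essence, a multistart local search. A minimal sketch follows (an illustration only, not the author's FORTRAN implementation; the helper names `random_solution`, `local_improve` and `objective` are placeholders supplied by the caller):

```python
def multistart_search(random_solution, local_improve, objective, n_steps):
    """Repeatedly draw a random initial solution, reduce it to a locally
    optimal one, and keep the best locally optimal solution seen so far."""
    best_x, best_val = None, float("-inf")
    for _ in range(n_steps):
        x = local_improve(random_solution())  # random start -> local optimum
        val = objective(x)
        if val > best_val:                    # store the better of the two
            best_x, best_val = x, val
    return best_x, best_val
```

After the fixed number of steps, `best_x` is taken as the (approximate) optimal solution.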
1. Formulation of the problem

Consider problem Z:

L(X) = Σ_{j=1}^{n} c_j x_j → max,   (1.1)

Σ_{j=1}^{n} a_{ij} x_j ≤ b_i,  i = 1, 2, ..., m,   (1.2)

x_j = 0 ∨ 1,  j = 1, 2, ..., n,

where c_j ≥ 0, a_{ij} ≥ 0, b_i > 0.
We can assume without loss of generality that all the coefficients of the target function satisfy the condition c_1 ≥ c_2 ≥ ... ≥ c_n. We shall also assume that the constraints (1.2) are arranged in order of decreasing "severity", i.e. we have

h_1 ≥ h_2 ≥ ... ≥ h_m,   (1.3)

where h_i = (Σ_{j=1}^{n} a_{ij} − b_i) b_i^{−1}, i = 1, 2, ..., m. Obviously, we can easily arrange for this by a simple permutation of the constraints i ∈ {1, 2, ..., m}, and elimination of those of them for which h_i ≤ 0, i.e. those which are known in advance to be satisfied by any (0,1)-vector X = (x_1, ..., x_n). In any of these latter cases, if any are encountered, we put m := m − 1. The set of solutions of problem Z is denoted by G.
Given any two solutions X′, X″, the distance between them r(X′, X″) is defined as

r(X′, X″) = Σ_{j=1}^{n} |x_j′ − x_j″|.

We define the neighbourhood of the K-th order Q_K(X⁰) of the solution X⁰ as the set of all solutions that satisfy the inequality r(X⁰, X) ≤ K, where K ∈ {1, 2, ..., n}.
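In code, the distance r and the K-th order neighbourhood test are straightforward (an illustrative sketch; the function names are ours):

```python
def r(x1, x2):
    """Distance between two (0,1)-vectors: the number of differing components."""
    return sum(abs(a - b) for a, b in zip(x1, x2))

def in_Q_K(x0, x, K):
    """True if the solution x belongs to the K-th order neighbourhood Q_K(x0)."""
    return r(x0, x) <= K
```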
A solution X⁰ of problem Z is called a locally optimal solution of K-th order if L(X⁰) ≥ L(X) for all X ∈ Q_K(X⁰).

A solution X* of problem Z is called optimal if L(X*) ≥ L(X) for all X ∈ G.

2. Procedure for obtaining random permutations
The main element of the algorithms described below for solving problem Z is the procedure G_α (see [5]) for generating the random permutation J = (j_1, ..., j_n) of elements of the sequence J_0 = (j_1⁰, ..., j_n⁰).

Procedure G_α:

(1) form the initial sequence J_0 = (j_1⁰, ..., j_n⁰);

(2) put k = n;

(3) using a source of random numbers, uniformly distributed in the interval (0, 1), obtain a random number ξ and evaluate η = 1 − ξ^α;

(4) evaluate i = [η × k + 1], where [a] denotes the integral part of the number a;

(5) interchange the i-th and k-th elements of the sequence J_{n−k}, thereby obtaining the sequence J_{n−k+1};

(6) put k := k − 1, and if k > 1, return to (3); if k = 1, then the random permutation J := J_{n−1} is obtained.
It was shown in [5] that, with the parameter value α = 1, the generated random permutations are equiprobable (the probability of obtaining, at any access to procedure G_α, any of the n! permutations is P(j_1, ..., j_n) = 1/n!), while with α > 1, they have the property of "closeness" to the initial number sequence. Experimental data concerning procedure G_α are also to be found in [5].
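Procedure G_α translates directly into code. With α = 1, step (3) gives η uniform in (0, 1), so i is uniform in {1, ..., k} and the procedure reduces to the ordinary Fisher-Yates shuffle; larger α pushes i towards k, so elements tend to stay near their initial places. A sketch (1-based indices of the text mapped onto Python's 0-based lists):

```python
import random

def G_alpha(J0, alpha, rng=random):
    """Random permutation of J0: equiprobable for alpha = 1, increasingly
    'close' to the initial sequence J0 as alpha grows."""
    J = list(J0)
    for k in range(len(J), 1, -1):          # k = n, n-1, ..., 2
        xi = rng.random()                   # uniform in (0, 1)
        eta = 1.0 - xi ** alpha
        i = int(eta * k) + 1                # integral part of eta*k, plus 1
        J[i - 1], J[k - 1] = J[k - 1], J[i - 1]  # swap i-th and k-th elements
    return J
```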
3. An algorithm for solving problem Z

Let us first describe the general scheme of the algorithms. At the zero step of each algorithm, it is adapted to the initial parameters of the problem. For this, from one of the relations

π_j = c_j,   π_j = c_j / a_j⁺,   π_j = c_j / a_j⁻,   π_j = c_j (Σ_{i=1}^{m} a_{ij}/b_i)^{−1},

where

a_j⁺ = max_i (a_{ij}/b_i),   a_j⁻ = min_i (a_{ij}/b_i),

the so-called "weighting factors" π_j, j = 1, 2, ..., n, are evaluated, and the sequence J_0 = (j_1⁰, ..., j_n⁰) such that π_{j_1⁰} ≥ ... ≥ π_{j_n⁰} is stored. At the S-th search step (S ≥ 1), using procedure G_α, the random permutation J = (j_1, ..., j_n) of the elements of the sequence J_0 is generated. In the order determined by permutation J, the elements of the initial (random) solution of problem Z are evaluated; this solution is reduced by means of a simple procedure to a locally optimal solution of the 2nd order.

Each specific algorithm of the family L(α, β) described below is isolated by fixing the values of its parameters: α is the parameter of procedure G_α for obtaining the random permutations, and β is the order number of one of the four relations quoted above for calculating π_j.

Algorithm L(α, β)

Step 0. From the relation β, β ∈ {1, 2, 3, 4}, we calculate the values π_j, j = 1, 2, ..., n; we store the sequence J_0 = (j_1⁰, ..., j_n⁰) such that π_{j_1⁰} ≥ ... ≥ π_{j_n⁰}.
Step S, S ≥ 1.

1. With the aid of procedure G_α, α ∈ {1, 2, ...}, we generate the random permutation J = (j_1, ..., j_n) of elements of the sequence J_0.

2. We construct the solution X⁰ = (x_1⁰, ..., x_n⁰), whose components are given by the recurrence relation

x_{j_r}⁰ = 1, if Σ_{l<r} a_{i j_l} x_{j_l}⁰ + a_{i j_r} ≤ b_i, i = 1, 2, ..., m; x_{j_r}⁰ = 0 otherwise,   (3.1)

where r = 1, 2, ..., n. We store the remainders of the "resources" of each of the constraints,

Δ_i = b_i − Σ_{j=1}^{n} a_{ij} x_j⁰,  i = 1, 2, ..., m.
3. We put k = 1, γ = 0.

4. If x_k⁰ = 0, we put l = n and pass to 5; otherwise, we proceed to 9.

5. If x_l⁰ = 1, we proceed to 7; otherwise we proceed to 6.

6. We evaluate l := l − 1, and if l > k, we return to 5; otherwise we proceed to 9.

7. We calculate Δ_i′ = Δ_i − a_{ik} + a_{il}, i = 1, 2, ..., m. If for at least one i we have Δ_i′ < 0, we return to 6; otherwise we proceed to 8.

8. We put x_k⁰ = 1, x_l⁰ = 0, γ = 1, and store the new remainders Δ_i := Δ_i′, i = 1, 2, ..., m.

9. We put k := k + 1, and if k < n, we return to 4; otherwise we proceed to 10.

10. If γ = 1, we return to 3; otherwise we proceed to 11.

11. For each of the components x_j⁰ = 0, j ∈ {1, 2, ..., n}, we check the inequalities Δ_i − a_{ij} ≥ 0, i = 1, 2, ..., m. If all m inequalities hold, we put the component x_j⁰ = 1, and store the new remainders Δ_i := Δ_i − a_{ij}, i = 1, 2, ..., m.

It was shown in [4] that the initial solution X⁰, obtained from expression (3.1), is locally optimal of the 1st order, while the result of transforming it with the aid of paras. 3-11 is a locally optimal solution of the 2nd order. Hence every search step of algorithm L(α, β) guarantees that a locally optimal solution of the 2nd order of problem Z is found.
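The construction (3.1) amounts to a greedy pass over the variables in the order given by the random permutation, setting each component to 1 whenever the resource remainders allow it. A sketch (0-based indices; the interchange steps 3-11 are omitted for brevity):

```python
def construct_solution(order, a, b):
    """Greedy construction (3.1): scan variables in the given order, setting
    x_j = 1 whenever all m constraints remain satisfied; also returns the
    resource remainders Delta_i."""
    m, n = len(a), len(a[0])
    x = [0] * n
    delta = list(b)                       # remainders Delta_i of the resources
    for j in order:                       # order: a permutation of 0..n-1
        if all(delta[i] >= a[i][j] for i in range(m)):
            x[j] = 1
            for i in range(m):
                delta[i] -= a[i][j]
    return x, delta
```

Any solution produced this way is locally optimal of the 1st order: no single component can be switched from 0 to 1 without violating a constraint.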
4. Generator of test problems

In view of the absence of test problems of high dimensionality suitable for comparing the efficiencies of different methods, it is a matter of urgency to devise generators of conditions of problems with known optimal solutions. Below we consider a generator of conditions of problem Z, whose input parameters are the dimensionality of the required problem, and the desired percentage of unities in its optimal solution.

Generator Γ1

1. We specify the number m of constraints, the number n of variables in problem Z, and the desired percentage p of unities in its optimal solution.

2. We calculate k = [p × n/100]; we set the right-hand sides of the constraints b_i = k, i = 1, 2, ..., m, and the coefficients of the target function c_j = j, j = 1, 2, ..., n.

3. We obtain (e.g. by procedure G_1) a random equiprobable permutation J = (j_1, ..., j_n) of the numbers 1, 2, ..., n; we find l = min {j_1, ..., j_k}.

4. For each r = 1, 2, ..., n, we put a_{m j_r} = 1 if r ≤ k, and a_{m j_r} = min([j_r/l + 0.9999], k + 1) if r > k; we calculate a_{i j_r} = [ξ × (a_{m j_r} + 1)], i = 1, 2, ..., m − 1. Here, ξ ∈ (0, 1) is a random number, renewed each time we find the next element of the matrix of constraints.

Note. To obtain the conditions of problems with n > 10 000 variables, the number 0.9999 in para. 4 of the generator has to be replaced by another similar number, with the number of nines in its fractional part equal to the number of digits contained in the number n.

By virtue of the method of obtaining the parameters (1.2) of problem Z by means of the generator Γ1 (and in particular, starting from the structure of the last row of the matrix A = {a_{ij}}_{m×n}), we easily obtain:

Proposition. The optimal solution of problem Z, whose parameters are obtained by the generator Γ1, is the vector X* = (x_1*, ..., x_n*), in which the components x_j* = 1 for j = j_1, ..., j_k, and x_j* = 0 for j = j_{k+1}, ..., j_n. The optimal value of the target function is L(X*) = Σ_{r=1}^{k} c_{j_r}.
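A sketch of generator Γ1 as reconstructed from the partly garbled description above. The cap k + 1 in step 4 is an assumption from the damaged text, and [j_r/l + 0.9999] is rendered as ceil(j_r/l):

```python
import math
import random

def generator_G1(m, n, p, rng=random):
    """Build an instance of problem Z with a known optimal solution
    containing k = [p*n/100] ones (reconstruction of generator Gamma-1)."""
    k = (p * n) // 100
    b = [k] * m                              # b_i = k
    c = list(range(1, n + 1))                # c_j = j
    J = rng.sample(range(1, n + 1), n)       # equiprobable permutation
    l = min(J[:k])
    a = [[0] * n for _ in range(m)]
    for r, j in enumerate(J, start=1):
        # last row of A: 1 on the k optimal positions, larger elsewhere
        a[m - 1][j - 1] = 1 if r <= k else min(math.ceil(j / l), k + 1)
        for i in range(m - 1):               # other rows: random, dominated
            a[i][j - 1] = int(rng.random() * (a[m - 1][j - 1] + 1))
    x_opt = [0] * n
    for j in J[:k]:                          # ones on positions j_1, ..., j_k
        x_opt[j - 1] = 1
    return a, b, c, x_opt
```

By construction the last row is tight on x_opt (its entries on the k chosen positions sum exactly to k = b_m), and every other row is dominated by it, so x_opt is feasible; optimality is the content of the Proposition.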
5. Computer experiments and conclusions

To find the most efficient of the algorithms L(α, β), they were used, with parameter values α = 2, 3, and β = 1, 2, 3, 4, for solving on the Minsk-32 computer (in FORTRAN) Petersen's test problems [6], a series of problems obtained by means of generator Γ1, and problems whose parameters were obtained by means of the random mechanism of [4], while their optimal solutions were found by the exact method of [7]. The algorithms with parameter values α = 1 and α > 3 were not investigated.

TABLE 2
[The entries of Table 2 are illegible in the source scan.]
With α = 1 our search adaptation procedure becomes meaningless; and further, it was shown experimentally in [4] that the convergence of the methods is much worse in this case than with α = 2 or 3. It was also shown in [4] that parameter values α > 3 are undesirable; as a rule, solutions giving a large initial growth of the values of the target function are then found quite rapidly, but the later discovery of the precise optimum proves to be more difficult than with α = 2 or 3. The explanation is that, with α > 3, the generated random permutations are so close to the initial number sequence that, even with α = 4, they differ from it by only a few components, while with α > 4 they are often completely repeated [5].

The experimental comparison of the algorithms with α = 2 and 3 (β = const), continued in the present paper, confirmed the conclusion drawn in [4] that it is better to use the value α = 3. In this case, the computer time spent in seeking solutions with a given accuracy (including the optimal solutions) proved to be on average 5-10% less than with α = 2.

The experimental study of the algorithms with different values of β showed a clear advantage in using β = 2, 3, 4, rather than β = 1. The computer time in finding solutions with given accuracy in the case β = 1 was 1.5-3 times more than with β = 2, 3, 4. A slight but still perceptible advantage was revealed by algorithms using β = 3 or 4, rather than β = 2. It proved more difficult to decide which of L(3, 3) and L(3, 4) is the better. Each was used to solve 22 problems with from 6 to 100 variables, and the results were much the same. The total of "better" solutions (solutions with given accuracy, obtained in less time) was 10 for algorithm L(3, 3), and 12 for L(3, 4).
To assess the efficiency of algorithm L(3, 4), we show in Table 1 the results of using it to solve Petersen's test problems, in Table 2 the results for problems obtained by the random mechanism, and in Table 3 the results for problems obtained by generator Γ1. The notation in the tables is as follows: m is the number of constraints in the problems, n is the number of variables, K is the number of ones in the optimal solution, N(Δ%) is the number of the search step at which the solution is obtained with not more than Δ% deviation of the target function value from the known optimal value, and T_100 is the time taken to perform 100 search steps. A dash means that a solution of the indicated accuracy was not found after 1000 planned search steps.

TABLE 3
[The entries of Table 3 are illegible in the source scan.]
Let us now compare the data of Tables 1-3 with the results obtained by other authors [8-13] when solving similar problems (after introducing suitable corrections to take account of the capacities of different computers, and also increasing the solution time up to 4 times if the program was written in another programming language). In [8], results of solving Petersen's test problems [6] by different modifications of the method of constructing a sequence of plans [7], and by the Balas additive algorithm [6, 10], are given, together with the results of [9-11], where the Balas filter method was studied experimentally. Comparison of Table 1 with the results obtained in [8] shows that method L(3, 4) is comparable in efficiency with the method of constructing a sequence of plans, and is in most cases better than the various modifications of the Balas algorithm. This also follows from a comparison of the data of Table 2 with the results and conclusions of [12]. Problems similar to the problems of [12] are solved by L(3, 4) 3-4 times faster. Comparison of the data of Table 2 with the results of solving similar problems by the cut and branch method [13] again gave preference to L(3, 4) in many cases.
To end our analysis of algorithms L(α, β), notice the following. The efficiency of each algorithm L(α, β) depends on the number K of ones in the required optimal solution. The greater the difference Δ = |K − n/2|, 0 ≤ Δ ≤ n/2, the more efficient the algorithms. This conclusion is confirmed experimentally by the results of solving problems with 70, 80, and 100 variables, with different values of K (see Table 3). Its theoretical justification comes from the fact that the probability of obtaining the optimal solution by one-time application of expression (3.1), using the equiprobable random permutation j_1, ..., j_n of the numbers 1, 2, ..., n, is equal to P = K!(n−K)!/n! = (C_n^K)^{−1} (see [2], p. 97). The fact that equiprobable permutations are not used in algorithms L(α, β), but instead, special sequences that take account of the features of the particular problems to be solved, merely increases this probability P (due to the increase in the probability of the numbers of unity components of the optimal solution appearing among the first K elements of the random permutation, and of zero components appearing among its n − K subsequent elements). Operations 3-11 of the algorithm are designed for possible improvement of each random solution, obtained in accordance with expression (3.1).
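The quoted probability is easy to check numerically: the further K lies from n/2, the smaller the binomial coefficient C(n, K) and hence the larger P (an illustrative sketch):

```python
from math import comb, factorial

def p_one_shot(n, K):
    """Probability that a single equiprobable permutation, fed through
    expression (3.1), yields the optimal solution with K ones:
    P = K!(n-K)!/n! = 1/C(n,K)."""
    return factorial(K) * factorial(n - K) / factorial(n)
```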
The practical utility of the basic ideas of algorithms L(α, β) is confirmed by the results of solving two practical problems (on the Minsk-32, in FORTRAN): compiling the graph of equipment repairs in an energy system (with 804 variables, a satisfactory solution was obtained in 1 hour), and optimization of ore shipment from mining areas to a potash pit [14] (with 288 variables, a solution providing 91% loading of the main conveyor system was obtained in 13 min).
6. Main results

1. The efficiency of each of the algorithms L(α, β) depends on the number K of ones in the required optimal solution. The greater the difference Δ = |K − n/2|, the greater the efficiency of the algorithm.

2. Algorithm L(3, 4) proved to be the most efficient of the family L(α, β).

3. From the point of view of the time for solving exactly problems with up to 70 variables, algorithm L(3, 4) is comparable with the method of constructing a sequence of plans, and in most cases is better than the various modifications of the Balas algorithm or the cut and branch method.
4. By a preliminary re-ordering of the constraints (1.2) in order of decreasing "rigidity" (1.3), the problem-solving time can be reduced by 35-75%.

5. From the point of view of difficulty of solution, the problems obtained by the generator Γ1 are comparable with Petersen's test problems [6], and demand more effort than the problems all of whose parameters (1.2) are obtained by the random mechanism.

6. By using generator Γ1, we can obtain test problems of any dimensionality with the required percentage of ones in the optimal solution. The computer time needed to obtain a test problem by generator Γ1 is negligible. The time depends solely on the time T_n required to obtain the random permutation of the numbers 1, 2, ..., n (for example, T_4000 = 6 sec, see [5]), and the time taken to perform not more than 3(m + 1) × (n + 1) arithmetic and logical operations, required to evaluate the parameters of problem Z.

In conclusion, the author sincerely thanks V. A. Emelichev for useful discussions.

Translated by D. E. Brown.
REFERENCES

1. KOVALEV, M. M., Discrete optimization (Diskretnaya optimizatsiya), Izd-vo BGU, Minsk, 1977.

2. FINKEL'SHTEIN, Yu. Yu., Approximate methods and applied problems of discrete programming (Priblizhennye metody i prikladnye zadachi diskretnogo programmirovaniya), Nauka, Moscow, 1976.

3. EMELICHEV, V. A., KOVALEV, M. M., and KONONENKO, A. M., Two approximate methods for solving integer linear programming problems with Boolean variables, in: Automatic control systems (Avtomatizirovannye sistemy upravleniya), No. 5, 180-184, TsNIITU, Minsk, 1971.

4. PIR'YANOVICH, V. A., Locally stochastic algorithms for solving integer linear programming problems, Izv. Akad. Nauk BSSR, Ser. fiz.-matem. nauk, No. 4, 131-132, 1977.

5. PIR'YANOVICH, V. A., On an efficient method for obtaining random permutations, in: Automatic systems for planning computations in Republic planning organs, Mathematical aspects, No. 9, 71-76, NIIEMP, Minsk, 1977.

6. PETERSEN, C. C., Computational experience with variants of the Balas algorithm applied to the selection of R and D projects, Manag. Sci., 13, No. 9, 736-750, 1967.

7. EMELICHEV, V. A., On the theory of discrete optimization, Dokl. Akad. Nauk SSSR, 198, No. 2, 273-276, 1971.

8. EMELICHEV, V. A. and KRAVERSKII, I. M., Computer experiment on solving integer linear programming problems by constructing a sequence of plans, Zh. vychisl. Mat. mat. Fiz., 13, No. 2, 467-471, 1973.

9. BALINSKI, M. L. and SPIELBERG, K., Methods for integer programming: algebraic, combinatorial and enumerative, Progr. Operat. Res., 3, 195-292, 1969.

10. GEOFFRION, A. M., An improved implicit enumeration approach for integer programming, Operat. Res., 17, No. 3, 437-454, 1969.

11. BYRNE, J. L., and PROLL, L. G., Initializing Geoffrion's implicit enumeration algorithm for the zero-one linear programming problem, Comput. J., 12, No. 4, 381-384, 1969.

12. PANCHENKO, A. A., Some algorithms for solving particular problems of linear programming with Boolean variables, Kibernetika, No. 2, 117-119, 1976.

13. VENGEROVA, I. V., and FINKEL'SHTEIN, Yu. Yu., Experimental study of the cut and branch method, Izv. Akad. Nauk SSSR, Tekhn. kibernetika, No. 3, p. 85, 1974.

14. PIR'YANOVICH, V. A., et al., Optimization of ore shipment from mining areas to a potash pit, Izv. vuzov, Gornyi zh., No. 5, 45-48, 1977.