Discrete Applied Mathematics 18 (1987) 235-245
North-Holland
A NEW ENUMERATION SCHEME FOR THE KNAPSACK PROBLEM

Horacio Hideki YANASSE
Instituto de Pesquisas Espaciais, INPE/MCT, C.P. 515, São José dos Campos, SP, Brazil

Nei Yoshihiro SOMA
Instituto Tecnológico de Aeronáutica, ITA/CTA, 12200 São José dos Campos, SP, Brazil

Received 6 March 1987

This paper presents a new enumeration scheme to solve the one-dimensional knapsack problem, motivated by some observations on number theory, more specifically on the determination of the number of solutions of linear diophantine equations. The new algorithm is pseudopolynomial, and its special features provide a reduction in running time and in the computational memory requirements as compared with other exact (dynamic programming) methods.
1. Introduction

The Knapsack Problem (KP) considered here is of the form

$$\text{Maximize } z = \sum_{j=1}^{n} c_j x_j$$

subject to

$$\sum_{j=1}^{n} a_j x_j = b, \qquad x_j \in \mathbb{N}, \quad j = 1, 2, \ldots, n.$$
We assume that b, c_j and a_j, j = 1, ..., n, are all nonnegative integers. This is not a major assumption, as in most practical situations such values are nonnegative integers (see [21], [3]). Without loss of generality, we also assume that the a_j's are ordered in increasing order.
The KP appears in several situations. It can appear directly as the formulation of a problem, as seen in [5], [23], [4], [14]. It can appear as a subproblem of a more general problem, as in [11], [17], or it can be the result of some transformation (using, for instance, aggregation) of other mathematical programming problems (see [29], [20]).
The KP has been the focus of attention of many researchers. There is a long list of exact and heuristic methods proposed for its solution. For the exact methods we can cite the works of [12], [7], [13], [18], [15], [26], [33], [6], only to list a few. For the heuristic methods we can cite the works of [32], [27], [22], [31] and [19].
The exact methods are based mainly on traditional techniques of Integer Programming (see [26]) and Dynamic Programming (see [12]). The KP is NP-hard ([10]). The best exact methods known are all pseudopolynomial algorithms. To the best of our knowledge, the best algorithm proposed for the KP is of O(nb) ([13]) and requires O(b) computer memory.
Here we develop a new enumeration scheme motivated by some observations on Partition in Number Theory. We present in Section 2 the idea that motivates the new scheme; in Section 3 we formalize the algorithm, consider some computational issues and present a small example. In Section 4 we make some additional comments about the new scheme.
2. Motivation

In [28] it is stated that the number of natural solutions to the linear diophantine equation

$$\sum_{j=1}^{n} a_j x_j = b, \qquad a_j \in \mathbb{N}-\{0\},\ j = 1, \ldots, n, \quad b \in \mathbb{N}-\{0\}, \quad n > 0,$$

is given by the coefficient of u^b (see also [16]) in the development of the product

$$\prod_{i=1}^{n} \Bigl( \sum_{j=0}^{\infty} u^{j a_i} \Bigr). \tag{1}$$

We have

$$\prod_{i=1}^{n} \Bigl( \sum_{j=0}^{\infty} u^{j a_i} \Bigr) = (1 + u^{a_1} + u^{2a_1} + \cdots)(1 + u^{a_2} + u^{2a_2} + \cdots)\cdots(1 + u^{a_n} + u^{2a_n} + \cdots) = \sum_{i=0}^{\infty} C_i u^i. \tag{2}$$

If we develop this product we see that i can be put as a nonnegative integer combination of a_1, a_2, ..., a_n. By this we mean

$$i = \sum_{j=1}^{n} l_j a_j \qquad \text{for some } l_j \in \mathbb{N},\ j = 1, \ldots, n.$$

We are interested in the case where i = b (a specific right-hand side). All the possible nonnegative integer combinations of a_1, a_2, ..., a_n that give b are collected in Expression (2), and C_b is the number of such combinations. To solve the KP we must consider all these feasible solutions and choose the best one. We need, then, to enumerate all these feasible solutions. Looking at Expression (2) it came to our attention that we will be able to enumerate all the feasible solutions if we use the slide ruler principle. Consider the scheme below.
[Figure: two identical rulers, Ruler 1 and Ruler 2, each carrying marks at 0, a_1, a_2, a_3, ..., a_n.]
Initially, the only values of interest in rulers 1 and 2 are 0, a_1, a_2, ..., a_n. If we slide ruler 2 to the right such that we match the mark 0 of ruler 2 with a_1 in ruler 1, we obtain indications (new marks in ruler 1) of 2a_1, a_1 + a_2, ..., a_1 + a_n in ruler 1, just below the marks a_1, a_2, ..., a_n in ruler 2.
[Figure: ruler 2 shifted so that its mark 0 lies over a_1 of ruler 1; the marks a_1, a_2, ..., a_n of ruler 2 now indicate the new marks 2a_1, a_1 + a_2, ..., a_1 + a_n on ruler 1.]
By continuing to match the mark 0 of ruler 2 with the very next mark in ruler 1 (including the new marks that were identified), say, in the scheme above, 2a_1, we obtain new marks corresponding to 3a_1, 2a_1 + a_2, 2a_1 + a_3, ..., 2a_1 + a_n in ruler 1, corresponding to the marks a_1, a_2, ..., a_n, respectively, in ruler 2. Following this procedure we will be able to reach b (or not reach it at all if the problem is infeasible); a small sketch of this mark-generation process is given below. Using the previous idea we formulate, in the next section, an algorithm to solve the KP.
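To make the mark-generation process concrete, the following is a minimal sketch in Python (a language and a function name, reachable_marks, chosen here for illustration only) of the slide-ruler enumeration of all right-hand sides expressible as nonnegative integer combinations of the a_j, i.e., the exponents i with C_i > 0 in Expression (2).

```python
def reachable_marks(a, b):
    # Slide-ruler principle of Section 2: visit the marks of ruler 1 in
    # increasing order; aligning the 0 of ruler 2 with a reached mark i
    # creates the new marks i + a_1, ..., i + a_n (kept only up to b).
    marks = {0}
    for i in range(b + 1):
        if i in marks:
            for aj in a:
                if i + aj <= b:
                    marks.add(i + aj)
    return marks

# With a = (2, 5, 6) and b = 8 (the data of the example in Section 3.2),
# reachable_marks([2, 5, 6], 8) == {0, 2, 4, 5, 6, 7, 8}, so b is reachable.
```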
3. The algorithm

The idea proposed in the last section is used by noting that the KP has an additive objective function. So, instead of carrying a_1, a_2, ..., a_n in the ruler, we might carry c_1, c_2, ..., c_n in their places, so that we now carry the objective values corresponding to each mark. Since our problem is one of maximization, we always carry the best objective function value encountered so far, corresponding to each mark.

3.1. Formulation
Our algorithm may be formulated as:

Algorithm 1

Step 0 (Initialization):
  Z(b) = -1; i = a_1; POINTER = a_1;
  for j = 1 to n do
  begin
    Z(a_j) = c_j;
    SOLIN(a_j) = j;
  end;
  while a_1 ≤ i ≤ b - a_1 do
  begin
    if Z(i) = 0 then Z(i) = -1;
    i = i + 1;
  end;
Comment. Z(k) carries the best objective value encountered so far for the right-hand side equal to k. If Z(k) is negative at the end of the algorithm for some k, this implies that the problem is infeasible for this right-hand side. SOLIN(k) keeps the index of the last variable incorporated to compose the feasible solution with right-hand side equal to k. POINTER is a pointer that keeps the position of the mark 0 of ruler 2 relative to ruler 1.
Step 1:
  j = 1;
  while (j ≤ n) and (POINTER + a_j ≤ b) do
  begin
    if (POINTER + a_j ≤ b - a_1) or (POINTER + a_j = b)
    then begin
      ZLIN(POINTER + a_j) = Z(POINTER) + c_j;
      if ZLIN(POINTER + a_j) > Z(POINTER + a_j)
      then begin
        Z(POINTER + a_j) = ZLIN(POINTER + a_j);
        SOLIN(POINTER + a_j) = j;
      end;
    end;
    j = j + 1;
  end;
Comment. ZLIN(k) keeps the objective value relative to the feasible solution obtained at this step for the right-hand side equal to k.

Step 2:
  POINTER = POINTER + 1;
  if POINTER > b - a_1 then goto Step 3;
  if Z(POINTER) < 0 then goto Step 2 else goto Step 1;
Step 3:
  if Z(b) < 0 then STOP: "the problem is infeasible"; else Z(b) is "the optimal value".

Comment. The optimal solution can be obtained with the following steps:

Step 4:
  for i = 1 to n do x_i = 0;
  POINTER = b;

Step 5:
  while POINTER > 0 do
  begin
    x_SOLIN(POINTER) = x_SOLIN(POINTER) + 1;
    POINTER = POINTER - a_SOLIN(POINTER);
  end;
Comment. The optimal solution can be obtained by printing x_i, i = 1, ..., n.

Next, we sketch an implementation of Algorithm 1 and then illustrate it with a numerical example.
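The following is a compact sketch of Algorithm 1 in Python (a translation made for this presentation, not the ALGOL code mentioned in Section 3.4; the names Z, SOLIN and POINTER mirror the pseudocode, and the input list a is assumed sorted increasingly, as in Section 1).

```python
def solve_kp_equality(a, c, b):
    # Sketch of Algorithm 1: maximize sum(c_j x_j) subject to sum(a_j x_j) = b,
    # with x_j nonnegative integers; a is assumed sorted in increasing order.
    # Returns (optimal value, x) or None if the problem is infeasible.
    n = len(a)
    Z = [0] * (b + 1)       # best objective value found for each right-hand side
    SOLIN = [0] * (b + 1)   # index (1-based) of the last variable used for that mark
    # Step 0: initialization
    Z[b] = -1
    for j in range(n):
        if a[j] <= b and (SOLIN[a[j]] == 0 or c[j] > Z[a[j]]):
            Z[a[j]] = c[j]
            SOLIN[a[j]] = j + 1
    for i in range(a[0], b - a[0] + 1):
        if SOLIN[i] == 0:
            Z[i] = -1       # mark i as not yet reached
    # Steps 1 and 2: slide ruler 2 along the reached marks of ruler 1
    pointer = a[0]
    while pointer <= b - a[0]:
        if Z[pointer] >= 0:
            for j in range(n):
                k = pointer + a[j]
                if k > b:
                    break                  # a is sorted, so larger j only overshoot
                if k <= b - a[0] or k == b:
                    zlin = Z[pointer] + c[j]
                    if zlin > Z[k]:
                        Z[k] = zlin
                        SOLIN[k] = j + 1
        pointer += 1
    # Step 3: feasibility test
    if Z[b] < 0:
        return None
    # Steps 4 and 5: backtrack through SOLIN to recover a solution
    x = [0] * n
    pointer = b
    while pointer > 0:
        j = SOLIN[pointer] - 1
        x[j] += 1
        pointer -= a[j]
    return Z[b], x
```

For the data of the numerical example below, solve_kp_equality([2, 5, 6], [5, 10, 6], 8) returns (20, [4, 0, 0]), in agreement with the hand computation that follows.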
3.2. Numerical example
Consider the problem:

Maximize 5x_1 + 10x_2 + 6x_3
subject to 2x_1 + 5x_2 + 6x_3 = 8,
x_1 ∈ ℕ, x_2 ∈ ℕ, x_3 ∈ ℕ.

When we apply the algorithm to this problem we obtain:

Step 0: Z(2) = 5, Z(3) = Z(4) = -1, Z(5) = 10, Z(6) = 6, Z(8) = -1.
  SOLIN(2) = 1, SOLIN(3) = SOLIN(4) = 0, SOLIN(5) = 2, SOLIN(6) = 3, SOLIN(8) = 0.
  POINTER = 2.

Step 1: ZLIN(2 + 2) = Z(2) + c_1 → ZLIN(4) = 5 + 5 → ZLIN(4) = 10,
  ZLIN(4) > Z(2 + 2) → Z(4) = ZLIN(4), SOLIN(4) = 1.
  POINTER + a_2 > b - a_1 and POINTER + a_2 ≠ b.
  ZLIN(2 + 6) = Z(2) + c_3 → ZLIN(8) = 11,
  ZLIN(8) > Z(8) → Z(8) = ZLIN(8), SOLIN(8) = 3.

Step 2: POINTER = 3, POINTER ≤ 6, Z(POINTER) = -1.
  POINTER = 4, POINTER ≤ 6, Z(POINTER) = 10 > 0.

Step 1: ZLIN(4 + 2) = Z(4) + c_1 → ZLIN(6) = 10 + 5 → ZLIN(6) = 15,
  ZLIN(6) > Z(6) → Z(6) = ZLIN(6) = 15, SOLIN(6) = 1.
  POINTER + a_2 > b - a_1 and POINTER + a_2 > b.

Step 2: POINTER = 5, POINTER ≤ 6, Z(POINTER) > 0.

Step 1: POINTER + a_1 > b - a_1 and POINTER + a_1 ≠ b.

Step 2: POINTER = 6, POINTER ≤ 6, Z(POINTER) > 0.

Step 1: POINTER + a_1 = b,
  ZLIN(POINTER + a_1) = Z(6) + c_1 → ZLIN(8) = 15 + 5 = 20,
  ZLIN(8) > Z(8) → Z(8) = 20, SOLIN(8) = 1.

Step 2: POINTER = 7, POINTER > b - a_1.

Step 3: Z(b) = 20.

Step 4: x_1 = x_2 = x_3 = 0, POINTER = 8.

Step 5: x_SOLIN(8) = x_SOLIN(8) + 1 → x_1 = 1, POINTER = 6.
  x_SOLIN(6) = x_SOLIN(6) + 1 → x_1 = 2, POINTER = 4.
  x_SOLIN(4) = x_SOLIN(4) + 1 → x_1 = 3, POINTER = 2.
  x_SOLIN(2) = x_SOLIN(2) + 1 → x_1 = 4, POINTER = 0.

Therefore, the optimal solution obtained is: x_1 = 4, x_2 = x_3 = 0, Z(b) = 20.
3.3. Analysis of the algorithm
To prove that Algorithm 1 works, we have to show its finiteness and that the optimal solution is achieved whenever it exists. The following two propositions give an answer to these questions.

Proposition 1. Algorithm 1 provides an optimal solution whenever it exists.

Proof. The procedure reaches an optimal solution since every feasible solution with right-hand side equal to b is considered. For instance, assume, without loss of generality, that

$$\sum_{j=1}^{K} l_j a_j = b, \qquad K \le n,$$

is a feasible solution to our KP. Will this solution be analysed by the enumeration scheme? The answer is yes. Note that to reach b with this combination we must have reached the position

$$\sum_{j=1}^{K-1} l_j a_j + (l_K - 1) a_K = b - a_K$$

in some intermediate step of our algorithm, and with our slide ruler we added a_K to this point. This new point b - a_K, in turn, would be reached if there were a position

$$\sum_{j=1}^{K-1} l_j a_j + (l_K - 2) a_K = b - 2a_K \qquad (\text{if } l_K \ge 2)$$

or

$$\sum_{j=1}^{K-2} l_j a_j + (l_{K-1} - 1) a_{K-1} = b - a_K - a_{K-1} \qquad (\text{if } l_K = 1)$$

that was reached before by our algorithm. We could go on with this reasoning and finally conclude that we would reach b if position a_1 is reached. But this position is certainly reached by our algorithm. Since the feasible solution considered was arbitrary, and we always carry the best objective value, we reach the optimal solution. □

The algorithm is finite since our ruler 2 moves at most to b positions and in each position it makes at most n additions and/or comparisons. The complexity is given in the following proposition.

Proposition 2. Algorithm 1 is of order $O((b - a_1)n - \sum_{j=1}^{n} a_j)$.

Proof. Consider the scheme below.

[Figure: ruler 2 placed over ruler 1 with its mark 0 near the right end of ruler 1.]

The number of additions and comparisons is of order

$$O\bigl((b - a_n)n + (a_n - a_{n-1})(n - 1) + \cdots + 2(a_3 - a_2) + (a_2 - a_1) - n a_1\bigr) = O\bigl(n(b - a_1) - a_n - a_{n-1} - \cdots - a_2 - a_1\bigr) = O\Bigl(n(b - a_1) - \sum_{j=1}^{n} a_j\Bigr),$$
since in the worst case the pointer will assume all the values between a_1 and b - a_1, sliding ruler 2 over ruler 1 as in the previous figure. Observe that for each value of POINTER greater than b - a_n, the number of additions and comparisons drops to O(n-1), O(n-2), ..., since POINTER + a_n > b, ..., POINTER + a_2 > b, respectively. □

It can easily be seen that Algorithm 1 requires a memory size of O(b - 2a_1). (Recall that b - 2a_1 is greater than n, in nontrivial cases.)
3.4. Computational experiments

Algorithm 1 was coded in ALGOL. Some limited computational experiments were done on a Burroughs/6800 computer. The results obtained are summarized in Table 1. The a_j's and c_j's were generated randomly in the interval [1, 700]. By 'correlated data' we mean that c_j/a_j = K, for all j.
Table 1. Computational results of Algorithm 1

  Number of variables, n   Correlated data   b                        Average running time (seconds)
  150                      Yes               (0.1) Σ_{j=1}^{n} a_j    8.7
  150                      No                (0.1) Σ_{j=1}^{n} a_j    8.2
  350                      Yes               (0.1) Σ_{j=1}^{n} a_j    9.4
  350                      No                (0.5) Σ_{j=1}^{n} a_j    43.2
There are many codes in the literature for the 0-1 knapsack problem with an inequality constraint (see, for instance, [25], [35], [9]), but they are not suited for the specific problem we deal with here. For the traditional methods based on dynamic programming ([12], [13], [7]), one can easily verify that the number of operations using Algorithm 1 will be smaller. Hence, no computational performance comparisons were done.
4. Final comments

This new enumeration scheme rests on a very simple idea that saves computations and memory in solving the KP.
As can be seen, one could think that there is some similarity between this scheme and the dynamic programming approach to solving the KP (see, for instance, [12],
[13]). What are the differences between our approach and theirs? Basically, in dynamic programming we obtain at intermediate stages the optimal solution at each state prior to moving to the next stages. In our approach, as it is presented, this does not happen. We compute the best solutions achieved so far at different stages, and at the end we obtain the optimal solution over all stages. We could obtain, if we wished, the optimal solution at each stage prior to moving to other stages, as in dynamic programming, without problems.
The natural question that arises is: "Why the savings in computations, then, as compared to dynamic programming?" Note that in the previous dynamic programming methods all right-hand sides from 1 to b are checked, but in this new scheme (similar to all recent 0-1 dynamic procedures) we check only the values from 1 to b that are 'relevant', and that makes the difference (a sketch of the classical recursion is given at the end of this section for comparison). In fact, one can see this scheme as a branching procedure with a completely different implementation than the traditional ones encountered in the literature.
Another important observation here is that many researchers have raised the issue that the KP with an equality constraint is in general harder to solve than with an inequality (see, for instance, [20]). There are algorithms that perform better for the inequality constraint than for the equality constraint. Our algorithm does not seem to be affected by this. Like the 0-1 pure dynamic procedures, it handles each kind of problem equally easily.
Other researchers present algorithms whose performance is very dependent on the ratios c_j/a_j of the problem's parameters (see [33], [1], [8], [9], [18], [24]). Our algorithm is not affected at all in these cases, but it is still highly dependent on the data of the problem.
We should point out that the assumptions on the c_j's can be dropped completely. For the scheme proposed, one can consider any additive objective function.
It is important to recall that the algorithms known to be quite efficient for the 0-1 knapsack problem, of [2], [9], [35], [30], [25], among others, cannot be compared directly with Algorithm 1, since they consider 0-1 variables and they assume an inequality constraint. In our problem there are no explicit bounds on the integer variables and we require that the constraint be met with equality.
Based on the basic idea used to elaborate this new scheme, algorithms for the linear diophantine problem, that is: "Is there a nonnegative integer solution to Σ_{j=1}^{n} a_j x_j = b?", can be suggested. In [34] one such algorithm is presented.
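For comparison, the sketch below (again ours, not the implementation of [12] or [13]) gives the textbook dynamic programming recursion f(y) = max{ c_j + f(y - a_j) : a_j ≤ y } for the same equality-constrained problem; it visits every right-hand side from 1 to b, whereas Algorithm 1 expands only the reachable marks.

```python
def knapsack_dp(a, c, b):
    # Classical pseudopolynomial recursion for the equality-constrained KP:
    # f(0) = 0, f(y) = max over j with a_j <= y of c_j + f(y - a_j),
    # f(y) = -infinity if y is not a nonnegative integer combination of the a_j.
    NEG = float("-inf")
    f = [NEG] * (b + 1)
    f[0] = 0
    for y in range(1, b + 1):          # every right-hand side from 1 to b is examined
        for aj, cj in zip(a, c):
            if aj <= y and f[y - aj] != NEG:
                f[y] = max(f[y], f[y - aj] + cj)
    return f[b]                        # -inf signals infeasibility

# knapsack_dp([2, 5, 6], [5, 10, 6], 8) returns 20, matching Section 3.2.
```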
References

[1] J.H. Ahrens and G. Finke, Merging and sorting applied to the zero-one knapsack problem, Operations Research 23 (1975) 1099-1109.
[2] E. Balas and E. Zemel, An algorithm for large zero-one knapsack problems, Operations Research 28 (1980) 1130-1154.
[3] R. Bulfin, R. Parker and C. Shetty, Computational results with a branch-and-bound algorithm for the general knapsack problem, Naval Res. Logist. Quart. 26 (1979) 41-46.
[4] V. Chvátal, Hard knapsack problems, Operations Research 28 (1980) 1402-1411.
[5] G.B. Dantzig, Discrete-variable extremum problems, Operations Research 5 (1957) 266-277.
[6] E. Denardo and B. Fox, Shortest-route methods: 2. Group knapsacks, expanded networks, and branch-and-bound, Operations Research 27 (1979) 548-566.
[7] B. Faaland, Solution of the value-independent knapsack problem by partitioning, Operations Research 21 (1973) 332-336.
[8] D. Fayard and G. Plateau, Resolution of the 0-1 knapsack problem: comparison of methods, Math. Programming 8 (1975) 272-307.
[9] D. Fayard and G. Plateau, An algorithm for the solution of the 0-1 knapsack problem, Computing 28 (1982) 269-287.
[10] M.R. Garey and D.S. Johnson, Computers and Intractability - A Guide to the Theory of NP-Completeness (Freeman, San Francisco, 1979).
[11] P. Gilmore and R. Gomory, A linear programming approach to the cutting stock problem II, Operations Research 11 (1963) 863-888.
[12] P. Gilmore and R. Gomory, Multistage cutting stock problems of two and more dimensions, Operations Research 13 (1965) 94-120.
[13] P. Gilmore and R. Gomory, The theory and computation of knapsack functions, Operations Research 14 (1966) 1045-1074.
[14] R.L. Graham, Bounds on multiprocessing timing anomalies, SIAM J. Appl. Math. 17 (1969) 416-429.
[15] H. Greenberg, An algorithm for the computation of knapsack functions, J. Math. Anal. Appl. 26 (1969) 159-162.
[16] H. Griffin, Elementary Theory of Numbers (McGraw-Hill, New York, 1954).
[17] C.M. Harvey, Operations Research - An Introduction to Linear Optimization and Decision Analysis (North-Holland, Amsterdam, 1979).
[18] E. Horowitz and S. Sahni, Computing partitions with applications to the knapsack problem, J. ACM 21 (1974) 277-292.
[19] O.H. Ibarra and C.E. Kim, Fast approximation algorithms for the knapsack and sum of subset problems, J. ACM 22 (1975) 463-468.
[20] R. Kannan, Polynomial-time aggregation of integer programming problems, J. ACM 30 (1983) 133-145.
[21] C.A. Kluyver and H.M. Salkin, The knapsack problem: a survey, Naval Res. Logist. Quart. 22 (1975) 127-144.
[22] E.L. Lawler, Fast approximation algorithms for knapsack problems, Math. Oper. Res. 4 (1979) 339-356.
[23] J. Lorie and L. Savage, Three problems in capital rationing, J. Business 28 (1955) 229-239.
[24] S. Martello and P. Toth, An upper bound for the zero-one knapsack problem and a branch-and-bound algorithm, Europ. J. Oper. Res. 1 (1977) 169-175.
[25] S. Martello and P. Toth, Algorithm for the solution of the 0-1 single knapsack problem, Computing 21 (1978) 81-86.
[26] S. Martello and P. Toth, The 0-1 knapsack problem, in: N. Christofides, A. Mingozzi, P. Toth and C. Sandi, eds., Combinatorial Optimization (Wiley, New York, 1979).
[27] S. Martello and P. Toth, Worst-case analysis of greedy algorithms for the subset-sum problem, Math. Programming 28 (1984) 198-205.
[28] G. Mignosi, Sulla equazione lineare indeterminata, Periodico di Matematica 23 (1908) 173-176.
[29] D.C. Onyekwelu, Computational viability of a constraint aggregation scheme for integer linear programming problems, Operations Research 31 (1983) 795-861.
[30] G. Plateau and M. Elkihel, A hybrid method for the 0-1 knapsack problem, Methods of Operations Research 49 (1985) 277-293.
[31] S. Sahni, Approximate algorithms for the 0-1 knapsack problem, J. ACM 22 (1975) 115-124.
[32] S. Senju and Y. Toyoda, An approach to linear programming with 0-1 variables, Management Sci. 15 (1968) 196-207.
[33] J. Shapiro and H.M. Wagner, A finite renewal algorithm for the knapsack and turnpike models, Operations Research 15 (1967) 319-341.
[34] N.Y. Soma and H.H. Yanasse, A pseudopolynomial time algorithm for linear diophantine equations, INPE-4133-PRE/1044 (INPE, São José dos Campos, 1987).
[35] P. Toth, Dynamic programming algorithms for the zero-one knapsack problem, Computing 25 (1980) 29-45.