A penalty algorithm for solving convex separable knapsack problems


R.S.V. Hoto (PGMAC, Universidade Estadual de Londrina, Londrina, Brazil), L.C. Matioli (DM, Universidade Federal do Paraná, Curitiba, Brazil), P.S.M. Santos (CMRV, Universidade Federal do Piauí, Parnaíba, Brazil; corresponding author)

E-mail addresses: [email protected] (L.C. Matioli), [email protected] (P.S.M. Santos)

Abstract

In this paper, we propose a penalized gradient projection algorithm for solving the continuous convex separable knapsack problem, which is simpler than existing methods and competitive in practice. The algorithm performs only function and gradient evaluations, sums, and parameter updates. Its most complex task, the minimization of a function over a compact set, is given by a closed formula. Convergence of the algorithm is established. Moreover, to demonstrate its efficiency, illustrative computational results are presented for medium-sized problems.

MSC: 65K05, 90C25, 90C51, 90C30


Keywords: separable knapsack problem; exterior projections; gradient method; Bregman distances

1. Introduction

We are interested in solving the continuous convex separable knapsack problem:

$$(\mathrm{CSKP}):\qquad
\begin{cases}
\text{Minimize} & f(z) := \displaystyle\sum_{i=1}^{n} f_i(z_i)\\[4pt]
\text{s.t.} & \displaystyle\sum_{i=1}^{n} b_i z_i = c,\\[4pt]
& l_i \le z_i \le u_i,\quad i = 1, 2, \dots, n,
\end{cases}\tag{1}$$

where $f_i : \mathbb{R} \to \mathbb{R}$ are differentiable and convex functions and $b_i > 0$, $l_i < u_i$ for all $i = 1, 2, \dots, n$, with $c > 0$ such that $\langle b, l\rangle \le c \le \langle b, u\rangle$; see, for example, [1]. In our approach, we are interested in problems where $c - \langle b, l\rangle > 0$.

The knapsack problem has been thoroughly studied in the literature. A survey of algorithms and applications for the nonlinear knapsack problem, also known as the nonlinear resource allocation problem, is presented in [1]. The problem considered by Bretthauer and Shetty [1] is somewhat more general than problem (1) and, as indicated by the authors and references therein, the knapsack problem has a wide variety of applications, among them


financial models, production and inventory management, stratified sampling, optimal design of queuing network models in manufacturing, computer systems, subgradient optimization, and health care.

The paper [2] surveys the history and applications of the problem of minimizing a separable, convex and differentiable function over a convex set defined by bounds on the variables and an explicit constraint described by a separable convex function, as well as algorithmic approaches to its solution. The author found that the most common techniques are based on finding the optimal value of the Lagrange multiplier for the explicit constraint, most often through a type of line search procedure. The most relevant references are analyzed, especially regarding their originality and numerical findings, with closing remarks on possible extensions and future research. More recently, [3] provides an up-to-date extension of the survey of the field, complementing [2] with more than 20 books and articles and totaling over one hundred references methodically analyzed. In addition, the authors contribute an improvement of the pegging (that is, variable fixing) process in the relaxation algorithm and an improved means to evaluate sub-solutions. Finally, they provide a rigorous numerical evaluation of several relaxation (primal) and breakpoint (dual) algorithms, incorporating a variety of pegging strategies, as well as a quasi-Newton method.

With regard to the continuous knapsack problem, which is the object of study of this paper, certain well-known methods deserve significant attention. With few variations in formulation, the best known are multiplier search methods and variable pegging methods, the latter also called variable fixing techniques. A substantial portion of the literature focuses on a particular case of problem (1) in which the objective function is quadratic with continuous or integer variables. This formulation is called the quadratic knapsack problem and includes, among other applications, the training of support vector machines. In this case, the primary techniques used are quasi-Newton and Newton methods, the projected gradient method, branch and bound, and relaxation techniques, among others; see, for example, [3-14]. For the non-quadratic case there are relatively fewer papers than for the quadratic case; some of those found in the literature are [1,3,15-17].

In this paper, we present an algorithm for the non-quadratic continuous convex separable knapsack problem, which nevertheless covers a particular case of the quadratic problem. We start by reformulating problem (1) as a penalized problem, with the upper bounds removed, restricted to the unit simplex. Then, as proposed by Beck and Teboulle [18,19], we use a Bregman distance to define a penalized algorithm with explicit projections. Convergence analysis and numerical results are also presented.

The paper is organized as follows. In Section 2, we present the theoretical basis of our work, i.e., a change of variables and explicit projections on the unit simplex. Section 3 is devoted to introducing our algorithm and its convergence analysis. In Section 4, we report our numerical experiments. Finally, we draw overall conclusions in Section 5.

2. Preliminaries

This section provides the resource material necessary for the presentation of the paper. We start by proposing a change of variables to transform problem (1) into an equivalent problem whose constraints are suitable for our approach, specifically

$$x_i = \frac{b_i (z_i - l_i)}{c - \langle b, l\rangle} \;\Longleftrightarrow\; z_i = \frac{c - \langle b, l\rangle}{b_i}\, x_i + l_i,\quad i = 1, \dots, n,\tag{2}$$

hence, we obtain:

$$\sum_{i=1}^{n} b_i z_i - c = \sum_{i=1}^{n} \bigl[(c - \langle b, l\rangle)\, x_i + b_i l_i\bigr] - c = (c - \langle b, l\rangle) \sum_{i=1}^{n} x_i + \langle b, l\rangle - c = (c - \langle b, l\rangle)\left(\sum_{i=1}^{n} x_i - 1\right),\tag{3}$$

and, for each $i = 1, 2, \dots, n$, we have

$$l_i \le z_i \le u_i \;\Longleftrightarrow\; 0 \le x_i \le \bar u_i,\quad \text{where } \bar u_i = \frac{b_i (u_i - l_i)}{c - \langle b, l\rangle}.\tag{4}$$

Hence, using (2)–(4), we can rewrite (1) as follows:

$$\begin{cases}
\text{Minimize} & f(x) = \displaystyle\sum_{i=1}^{n} f_i\!\left(\frac{c - \langle b, l\rangle}{b_i}\, x_i + l_i\right)\\[6pt]
\text{s.t.} & \displaystyle\sum_{i=1}^{n} x_i = 1,\\[4pt]
& 0 \le x_i \le \bar u_i,\quad i = 1, 2, \dots, n.
\end{cases}\tag{5}$$
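In terms of implementation, the transformation (2) and (4) amounts to a single pass over the problem data. The following C sketch (ours, not part of the paper's code; the array layout and identifier names are assumptions) precomputes, for each coordinate, the scale factor $(c - \langle b, l\rangle)/b_i$ and the bound $\bar u_i$ of (4):

```c
/* Change of variables (2) and bounds (4): after this call,
   z_i = scale[i] * x_i + l[i] recovers the original variables.
   A minimal sketch under the paper's assumptions b_i > 0 and
   c - <b,l> > 0; names are ours. */
static void to_simplex_form(int n, const double *b, const double *l,
                            const double *u, double c,
                            double *scale, double *ubar)
{
    double bl = 0.0;                          /* <b, l> */
    for (int i = 0; i < n; i++)
        bl += b[i] * l[i];
    double r = c - bl;                        /* assumed positive */
    for (int i = 0; i < n; i++) {
        scale[i] = r / b[i];                  /* (c - <b,l>)/b_i */
        ubar[i]  = b[i] * (u[i] - l[i]) / r;  /* \bar u_i in (4) */
    }
}
```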


Example 1. As an example, we present a quadratic knapsack problem, i.e., $f_i(x_i) = \frac{1}{2} p_i x_i^2 - a_i x_i$, $x_i \in \mathbb{R}$, where $a_i, p_i \ge 0$, $i = 1, \dots, n$. Under the change of variables (2) we have

$$\bar f_i(x_i) = \frac{1}{2}\,\bar p_i x_i^2 - \bar a_i x_i + \bar d_i,$$

where

$$\bar p_i = p_i \left(\frac{c - \langle b, l\rangle}{b_i}\right)^{2},\qquad \bar a_i = \frac{(a_i - p_i l_i)(c - \langle b, l\rangle)}{b_i} \qquad\text{and}\qquad \bar d_i = \frac{1}{2}\, p_i l_i^2 - a_i l_i,\quad i = 1, 2, \dots, n.$$
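For the quadratic case of Example 1, the coefficient transform can be coded directly. The sketch below is our illustration (the struct and function names are assumptions, not the authors' code); `r` stands for $c - \langle b, l\rangle$:

```c
/* Coefficient transform of Example 1 for f_i(x) = 0.5*p*x^2 - a*x. */
typedef struct { double p, a, d; } QuadCoef;

static QuadCoef quad_transform(double p, double a, double l, double b,
                               double r /* c - <b,l> */)
{
    double t = r / b;                  /* (c - <b,l>)/b_i */
    QuadCoef q;
    q.p = p * t * t;                   /* \bar p_i */
    q.a = (a - p * l) * t;             /* \bar a_i */
    q.d = 0.5 * p * l * l - a * l;     /* \bar d_i */
    return q;
}
```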

Following [18,19] and references therein, we consider a non-empty open convex set $C \subset \mathbb{R}^n$ with closure $\overline{C}$, a linear manifold $V \subset \mathbb{R}^n$, and the projection-like map $\pi(\cdot,\cdot)$ defined as follows. For any $g \in \mathbb{R}^n$ and any $x \in C \cap V$, let

$$\pi(g, x) := \arg\min_{y \in V}\,\{\langle g, y\rangle + d(y, x)\},\tag{6}$$

where $d(\cdot,\cdot)$ is a suitable proximal distance, which extends the usual quadratic Euclidean distance and allows control over the problem constraints. Therefore, let $d : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ be a proximal distance that for each $y \in C \cap V$ satisfies the following properties:

(D1) $d(\cdot, y)$ is proper, lsc, convex, and $C^1$ on $C \cap V$, with $d(y, y) = 0$ and $\nabla_1 d(y, y) = 0$.

(D2) $\operatorname{dom} d(\cdot, y) \subset \overline{C}$, and $\operatorname{dom} \partial_1 d(\cdot, y) = C$, where $\partial_1 d(\cdot, y)$ denotes the subgradient map of the function $d(\cdot, y)$ with respect to the first variable.

(D3) $d(\cdot, y)$ is $\sigma$-strongly convex over $C \cap V$, i.e., there exists $\sigma > 0$ such that

$$\langle \nabla_1 d(x_1, y) - \nabla_1 d(x_2, y),\, x_1 - x_2\rangle \ge \sigma \|x_1 - x_2\|^2,\quad \forall\, x_1, x_2 \in C \cap V,$$

for any norm $\|\cdot\|$ in $\mathbb{R}^n$.

We present an important property regarding the mapping $\pi$.

Proposition 2.1. For any $x \in C \cap V$, any $g \in \mathbb{R}^n$ and $\lambda > 0$, the unique solution $\pi(\lambda g, x)$ defined by (6) satisfies $\pi(0, x) = x$ and the following properties hold:

(i) $\sigma \|x - \pi(\lambda g, x)\|^2 \le \lambda \langle x - \pi(\lambda g, x), g\rangle$;

(ii) $\|\pi(\lambda g, x) - x\| \le (\lambda/\sigma)\|g\|$,

where $\sigma > 0$ is the modulus of strong convexity of $d(\cdot, y)$.

Proof. See Proposition 1 of [19]. □

As presented by Auslender and Teboulle [19], we extend $\pi$ to $X := \overline{C \cap V}$. The resulting non-interior projection-like map $\pi$ is defined by

$$\pi(g, x) := \arg\min_{y \in X}\,\{\langle g, y\rangle + d(y, x)\}.\tag{7}$$

Since $d(\cdot, x)$ is convex, $h(y) = \langle g, y\rangle + d(y, x)$ is also convex, and problem (7) is equivalent to the following variational inequality problem:

$$\langle \nabla h(\pi(g, x)),\, y - \pi(g, x)\rangle \ge 0,\quad \forall\, y \in X,$$

or equivalently

$$\langle g + \nabla_1 d(\pi(g, x), x),\, y - \pi(g, x)\rangle \ge 0,\quad \forall\, y \in X.$$

Expanding the inner product in the last inequality gives a characterization of problem (7):

$$\langle \pi(g, x) - y,\, g\rangle \le \langle y - \pi(g, x),\, \nabla_1 d(\pi(g, x), x)\rangle,\quad \forall\, y \in X.\tag{8}$$

Example 2. Let $X \subset \mathbb{R}^n$ be a non-empty closed convex set. If $d(x, y) = \frac{1}{2}\|x - y\|^2$, then

$$\pi(g, x) = P_X[x - g],$$

where $P_X[\cdot]$ is the usual Euclidean projection operator.

We recall an important result regarding the Bregman distance.

Lemma 2.1 [20]. Let $C \cap V \subset \mathbb{R}^n$ be an open set with closure $X = \overline{C \cap V}$ and let $\psi : X \to \mathbb{R}$ be continuously differentiable on $C \cap V$. Then, for any three points $a, b \in C \cap V$ and $c \in X$, the following identity holds:

$$d_\psi(c, a) + d_\psi(a, b) - d_\psi(c, b) = \langle \nabla_1 d_\psi(b, a),\, c - a\rangle,$$

where $d_\psi(x, y) = \psi(x) - \psi(y) - \langle x - y, \nabla\psi(y)\rangle$ and $\nabla_1 d_\psi(x, y) = \nabla\psi(x) - \nabla\psi(y)$.

In the case of $C = \mathbb{R}^n_{++}$ and $V = \{x \in \mathbb{R}^n : \sum_{i=1}^{n} x_i = 1\}$, the problem given by

$$f^{\ast} = \inf\{f(x) \mid x \in X\},\quad \text{where } X = \overline{C \cap V},\tag{9}$$

reduces to a convex minimization problem over the unit simplex $\Delta = \{x \in \mathbb{R}^n : \sum_{i=1}^{n} x_i = 1,\ x \ge 0\}$.


As presented in the appendix of [19], if $d$ is given by

$$d(x, y) = \sum_{i=1}^{n} x_i \log(x_i / y_i) + y_i - x_i,\quad \forall\, x \in \mathbb{R}^n_{+},\ \forall\, y \in \mathbb{R}^n_{++},$$

which is known as the Kullback-Leibler distance, then $d$ is 1-strongly convex on the simplex $\Delta$ with respect to the $l_1$ norm, so that assumptions (D1), (D2), and (D3) hold, and $\pi(g, x)$ can be computed analytically, with components of the form

$$\pi_j(g, x) = \frac{x_j \exp(-g_j)}{\sum_{i=1}^{n} x_i \exp(-g_i)},\quad j = 1, 2, \dots, n.\tag{10}$$
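To make the closed formula concrete, the following C sketch evaluates (10) in place. It is our illustration, not the authors' code; shifting the exponents by the smallest $g_i$ multiplies numerator and denominator by the same constant, so it is algebraically neutral and only guards against overflow:

```c
#include <math.h>

/* Entropic (Kullback-Leibler) projection, formula (10):
   pi_j(g, x) = x_j exp(-g_j) / sum_i x_i exp(-g_i).
   Overwrites x with pi(g, x); x is assumed to lie on the simplex. */
static void kl_projection(int n, const double *g, double *x)
{
    double gmin = g[0], s = 0.0;
    for (int i = 1; i < n; i++)
        if (g[i] < gmin) gmin = g[i];
    for (int i = 0; i < n; i++) {
        x[i] *= exp(-(g[i] - gmin));  /* x_i exp(-g_i), up to a common factor */
        s += x[i];
    }
    for (int i = 0; i < n; i++)
        x[i] /= s;                    /* normalize back onto the unit simplex */
}
```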

Since we will use the projection (10) in the algorithm, it is necessary to transform the constraints of problem (5) into a simplex set. To this end, we penalize part of the box constraints, obtaining the following problem:

$$\begin{cases}
\text{Minimize} & \displaystyle\sum_{i=1}^{n} f_i\!\left(\frac{c - \langle b, l\rangle}{b_i}\, x_i + l_i\right) + \frac{\rho}{2} \sum_{i=1}^{n} \left(\max\{0,\, x_i - \bar u_i\}\right)^2\\[6pt]
\text{s.t.} & \displaystyle\sum_{i=1}^{n} x_i = 1,\quad x \ge 0,
\end{cases}\tag{11}$$

where $\rho > 0$ is the penalty parameter and $\frac{\rho}{2}\left(\max\{0,\, x_i - \bar u_i\}\right)^2$ is the term that penalizes the constraint $x_i - \bar u_i \le 0$. For the sake of simplicity, henceforth we denote the objective function of problem (11) by $f_k(x) = f(x) + \frac{\rho_k}{2} \sum_{i=1}^{n} \left(\max\{0,\, x_i - \bar u_i\}\right)^2$.
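The gradient of the penalized objective separates into the gradient of $f$ plus the penalty term, $\nabla f_k(x) = \nabla f(x) + \rho_k \max\{0, x - \bar u\}$ componentwise (this identity is used in the proof of Proposition 3.1 below). A minimal C sketch, assuming a caller-supplied callback for $\nabla f$ (the signature and names are ours):

```c
/* Gradient of f_k in (11): g <- grad f(x) + rho * max{0, x - ubar}. */
static void grad_fk(int n, const double *x, const double *ubar, double rho,
                    void (*grad_f)(int, const double *, double *),
                    double *g)
{
    grad_f(n, x, g);                 /* gradient of the transformed f in (5) */
    for (int i = 0; i < n; i++) {
        double v = x[i] - ubar[i];
        if (v > 0.0)
            g[i] += rho * v;         /* derivative of (rho/2)*max{0, v}^2 */
    }
}
```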

3. Statement of the algorithm and convergence results

This section is divided into two subsections. The first describes the proposed algorithm, and the second discusses its convergence, which is based on the Bregman distance. Consider $\rho > 0$ and two positive sequences $\{\rho_k\}$ and $\{\beta_k\}$ satisfying the following conditions:

$$\sum_{k=1}^{+\infty} \beta_k = +\infty,\qquad \sum_{k=1}^{+\infty} \beta_k^2 < \infty,\qquad 0 < \rho < \rho_k\ \ \forall\, k \in \mathbb{N},\qquad \lim_{k\to\infty} (\rho_k \beta_k) = 0.\tag{12}$$
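For instance, a constant penalty sequence $\rho_k \equiv \hat\rho > \rho$ together with $\beta_k = \beta/(k+1)$, $\beta > 0$, satisfies (12): $\sum_{k} 1/(k+1)$ diverges, $\sum_{k} 1/(k+1)^2$ converges, and $\rho_k \beta_k = \hat\rho\,\beta/(k+1) \to 0$. This is precisely the pattern adopted in the experiments of Section 4 (e.g., $\rho_k = 100$ and $\beta_k = 20/(k+1)$ in Example 3).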

3.1. Projected penalty algorithm (PPA)

The algorithm starts with an initial point $x^0$ belonging to the set $X$ and uses two exogenous sequences and the Bregman distance to define a penalized function that provides, in the primary step, an explicit projection formula.

Step 0. Given sequences $\{\rho_k\}$ and $\{\beta_k\}$ satisfying (12), select $x^0 \in X$. Set $k = 0$.

Step 1. Define

$$\alpha_k := \frac{\beta_k}{\gamma_k},\quad \text{where}\quad \gamma_k := \max\{1,\, \|\nabla f_k(x^k)\|\},\tag{13}$$

and $f_k(x) = f(x) + \frac{\rho_k}{2} \sum_{i=1}^{n} \left(\max\{0,\, x_i - \bar u_i\}\right)^2$.

Step 2. Compute the projection

$$x^{k+1}_j = \frac{x^k_j \exp\!\left(-\alpha_k\, [\nabla f_k(x^k)]_j\right)}{\sum_{i=1}^{n} x^k_i \exp\!\left(-\alpha_k\, [\nabla f_k(x^k)]_i\right)},\quad j = 1, \dots, n.\tag{14}$$

If $x^{k+1} = x^k$ and $x^{k+1} \le \bar u$, then stop. Otherwise, set $k = k + 1$ and return to Step 1.
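For concreteness, one possible C driver for Steps 0-2 follows, reusing the `kl_projection` and `grad_fk` sketches above. It is our illustration, not the authors' implementation: the constant $\rho_k$, the choice $\beta_k = \beta_0/(k+1)$, the $l_1$ norm in $\gamma_k$, and the practical stopping test (a componentwise version of the tolerances of Section 4) are all assumptions.

```c
#include <math.h>

/* PPA main loop. x holds x^0 on entry and the final iterate on exit;
   g and xold are caller-provided work arrays of length n. */
static void ppa(int n, double *x, const double *ubar,
                void (*grad_f)(int, const double *, double *),
                double rho, double beta0, int kmax,
                double eps1, double eps2,
                double *g, double *xold)
{
    for (int k = 0; k < kmax; k++) {
        grad_fk(n, x, ubar, rho, grad_f, g);        /* grad f_k(x^k) */
        double gnorm = 0.0;
        for (int i = 0; i < n; i++)
            gnorm += fabs(g[i]);                    /* l1 norm of the gradient */
        double beta  = beta0 / (k + 1);             /* one admissible beta_k */
        double alpha = beta / (gnorm > 1.0 ? gnorm : 1.0);  /* (13) */
        for (int i = 0; i < n; i++) {
            xold[i] = x[i];
            g[i] *= alpha;                          /* alpha_k * grad f_k(x^k) */
        }
        kl_projection(n, g, x);                     /* Step 2, formula (14) */
        double feas = 0.0, move = 0.0;              /* practical stop test */
        for (int i = 0; i < n; i++) {
            double v = x[i] - ubar[i];
            if (v > feas) feas = v;                 /* largest bound violation */
            move += fabs(x[i] - xold[i]);           /* l1 step length */
        }
        if (feas < eps1 && move < eps2) break;
    }
}
```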

We highlight certain features that account for the simplicity of our method. First, we consider an objective function $f_k$ obtained from the original objective $f$ by adding a quadratic penalty term for the upper bound constraints; this makes it possible to deal with a problem that has no upper bounds. Moreover, since we select the Kullback-Leibler distance, the exterior projection operator (14) is given by a closed formula.

3.2. Convergence analysis of PPA

The first result concerns the stopping criterion of the algorithm. Subsequently, we prove a technical lemma and a proposition that are needed to obtain our primary result, the convergence of the algorithm.


Proposition 3.1. If the algorithm PPA generates a finite sequence, then the last point is a solution of problem (5).

Proof. Since $\nabla f_k(x^k) = \nabla f(x^k) + \rho_k \max\{0,\, x^k - \bar u\}$, if the algorithm stops at iteration $k$, we have $x^k = x^{k+1} \le \bar u$, and it results that $\nabla f_k(x^k) = \nabla f(x^k)$. Now, using (8) and (14), we obtain

$$\langle x^{k+1} - y,\, \alpha_k \nabla f_k(x^k)\rangle \le \langle y - x^{k+1},\, \nabla_1 d(x^{k+1}, x^k)\rangle,\quad \forall\, y \in X.$$

Since $\alpha_k > 0$ and $\nabla_1 d(x^{k+1}, x^k) = 0$, we have

$$\langle x^k - y,\, \nabla f(x^k)\rangle \le 0,\quad \forall\, y \in X.$$

Therefore, since $f$ is a convex function, we obtain

$$f(y) \ge f(x^k),\quad \forall\, y \in X.$$

The proof is complete. □

From now on, we assume that the algorithm PPA generates an infinite sequence, denoted by $\{x^k\}$. Moreover, without loss of generality, we take $\|\cdot\|$ to be the $l_1$ norm, with respect to which the Kullback-Leibler distance $d$ is 1-strongly convex, i.e., $\sigma = 1$; see, for example, Proposition 5.1 of [18]. Now, we derive an important property.

Lemma 3.1. For each $k$, the following inequalities hold: (i) $\alpha_k \|\nabla f_k(x^k)\| \le \beta_k$; (ii) $\|x^{k+1} - x^k\| \le \beta_k$.

Proof. (i) From (13) it follows that

$$\alpha_k \|\nabla f_k(x^k)\| = \frac{\beta_k\, \|\nabla f_k(x^k)\|}{\max\{1,\, \|\nabla f_k(x^k)\|\}} \le \beta_k.\tag{15}$$

(ii) By taking $x = x^k$ in Proposition 2.1 and using $\sigma = 1$, it results that

$$\|x^{k+1} - x^k\| = \|\pi(\alpha_k \nabla f_k(x^k), x^k) - x^k\| \le \alpha_k \|\nabla f_k(x^k)\| \le \beta_k,\tag{16}$$

where the last inequality follows from (15). □

Remark 3.1. Since the objective function $f$ is continuous and, in view of (4), the constraint set of problem (5) is compact, its solution set, denoted by $S^*$, is non-empty.

Proposition 3.2. For every solution $x^*$ of problem (5) and for each $k \in \mathbb{N}$, the following inequalities hold:

(i) $\alpha_k \left(f(x^k) - f(x^*)\right) \le d(x^*, x^k) - d(x^*, x^{k+1}) + \beta_k^2$, $k = 0, 1, \dots$;

(ii) $d(x^*, x^{k+1}) \le d(x^*, x^k) - d(x^{k+1}, x^k) + \beta_k^2$, $k = 0, 1, \dots$.

Furthermore, $\{x^k\}$ is bounded.

Proof. Let $x^* \in S^*$ be an optimal solution of (5). Since $x^{k+1} = \pi(\alpha_k \nabla f_k(x^k), x^k)$, using (8) we have

$$\langle x^{k+1} - x,\, \alpha_k \nabla f_k(x^k)\rangle \le \langle x - x^{k+1},\, \nabla_1 d(x^{k+1}, x^k)\rangle\quad \forall\, x \in X,$$

i.e.,

$$\langle x^{k+1} - x,\, \alpha_k \nabla f_k(x^k) + \nabla_1 d(x^{k+1}, x^k)\rangle \le 0\quad \forall\, x \in X.\tag{17}$$

Following [18] and using the convexity of $f_k$, we obtain

$$\begin{aligned}
0 &\le \alpha_k\left(f(x^k) - f(x^*)\right)\\
&\le \alpha_k\left(f(x^k) - f(x^*)\right) + \frac{\alpha_k \rho_k}{2}\sum_{i=1}^{n}\left(\max\{0,\, x^k_i - \bar u_i\}\right)^2\\
&= \alpha_k\left(f_k(x^k) - f_k(x^*)\right)\\
&\le \langle \alpha_k \nabla f_k(x^k),\, x^k - x^*\rangle\\
&= \langle \alpha_k \nabla f_k(x^k),\, x^k - x^{k+1}\rangle + \langle \alpha_k \nabla f_k(x^k),\, x^{k+1} - x^*\rangle\\
&= \langle \alpha_k \nabla f_k(x^k),\, x^k - x^{k+1}\rangle + \langle \alpha_k \nabla f_k(x^k) + \nabla_1 d(x^{k+1}, x^k),\, x^{k+1} - x^*\rangle - \langle \nabla_1 d(x^{k+1}, x^k),\, x^{k+1} - x^*\rangle\\
&\le \langle \alpha_k \nabla f_k(x^k),\, x^k - x^{k+1}\rangle - \langle \nabla_1 d(x^{k+1}, x^k),\, x^{k+1} - x^*\rangle,
\end{aligned}\tag{18}$$

where the last inequality follows from (17) with $x = x^*$.


Using (18) and the fact that $\langle u, v\rangle \le \frac{1}{2}\left(\|u\|^2 + \|v\|^2\right)$ for all $u, v \in \mathbb{R}^n$, we obtain

$$\begin{aligned}
0 &\le \alpha_k\left(f(x^k) - f(x^*)\right)\\
&\le \alpha_k\left(f_k(x^k) - f_k(x^*)\right)\\
&\le \langle \alpha_k \nabla f_k(x^k),\, x^k - x^{k+1}\rangle + \langle \nabla_1 d(x^{k+1}, x^k),\, x^* - x^{k+1}\rangle\\
&\le \frac{1}{2}\left(\|\alpha_k \nabla f_k(x^k)\|^2 + \|x^k - x^{k+1}\|^2\right) + \langle \nabla_1 d(x^{k+1}, x^k),\, x^* - x^{k+1}\rangle\\
&= \frac{1}{2}\left(\|\alpha_k \nabla f_k(x^k)\|^2 + \|x^k - x^{k+1}\|^2\right) + d(x^*, x^k) - d(x^*, x^{k+1}) - d(x^{k+1}, x^k),
\end{aligned}\tag{19}$$

where the last equality follows from Lemma 2.1. So, from (19) and Lemma 3.1, we get

$$0 \le \alpha_k\left(f(x^k) - f(x^*)\right) \le \frac{1}{2}\left(\beta_k^2 + \beta_k^2\right) + d(x^*, x^k) - d(x^*, x^{k+1}) - d(x^{k+1}, x^k) = d(x^*, x^k) - d(x^*, x^{k+1}) - d(x^{k+1}, x^k) + \beta_k^2.\tag{20}$$

We conclude the proof using (20):

$$d(x^*, x^{k+1}) \le d(x^*, x^k) - d(x^{k+1}, x^k) + \beta_k^2. \qquad \square$$

An important consequence of Proposition 3.2 is the boundedness of the sequence $\{x^k\}$; in particular, it implies the boundedness of $\{\nabla f_k(x^k)\}$, which will be useful for obtaining an efficiency estimate.

Theorem 3.1. Under the assumptions of Proposition 3.2, assume that there exists $L > 1$ such that $\|\nabla f_k(x^k)\| \le L$ for all $k \in \mathbb{N}$. Then,

$$\min_{j=0,\dots,s}\left(f(x^j) - f(x^*)\right) \le \frac{L\left(d(x^*, x^0) + \sum_{k=0}^{s} \beta_k^2\right)}{\sum_{k=0}^{s} \beta_k}.$$

Furthermore, $\lim_{s\to+\infty}\, \min_{j=0,\dots,s}\left(f(x^j) - f(x^*)\right) = 0$.

Proof. For each $k \in \mathbb{N}$ we have

$$\beta_k = \gamma_k \alpha_k \le \alpha_k L,$$

and from Proposition 3.2(i) it follows that

$$0 \le \frac{\beta_k}{L}\left(f(x^k) - f(x^*)\right) \le \alpha_k\left(f(x^k) - f(x^*)\right) \le \beta_k^2 + d(x^*, x^k) - d(x^*, x^{k+1}).\tag{21}$$

Now, summing (21) over $k = 0, 1, \dots, s$, one obtains

$$L^{-1} \sum_{k=0}^{s} \beta_k\left(f(x^k) - f(x^*)\right) \le d(x^*, x^0) - d(x^*, x^{s+1}) + \sum_{k=0}^{s} \beta_k^2,\tag{22}$$

and using

$$0 \le L^{-1}\left(\sum_{k=0}^{s} \beta_k\right) \min_{j=0,\dots,s}\left(f(x^j) - f(x^*)\right) \le L^{-1} \sum_{k=0}^{s} \beta_k\left(f(x^k) - f(x^*)\right),$$

it results in

$$L^{-1}\left(\sum_{k=0}^{s} \beta_k\right) \min_{j=0,\dots,s}\left(f(x^j) - f(x^*)\right) \le d(x^*, x^0) + \sum_{k=0}^{s} \beta_k^2,$$

that is,

$$\min_{j=0,\dots,s}\left(f(x^j) - f(x^*)\right) \le \frac{L\left(d(x^*, x^0) + \sum_{k=0}^{s} \beta_k^2\right)}{\sum_{k=0}^{s} \beta_k}.$$

Finally, from (12), namely $\sum_{k=0}^{\infty} \beta_k = +\infty$ and $\sum_{k=0}^{\infty} \beta_k^2 < \infty$, the result follows. □


Table 1. Resource renewal problem.

n        Iter. (k)   CPU (s)
5000     4924.36     9.4e-01
10,000   5871.46     2.3e+00
50,000   9473.62     1.8e+01
100,000  11630.38    4.4e+01
200,000  13930.01    1.0e+02

Table 2. Step size parameter $\beta_k$ for each dimension $n$.

n          5000         10,000        50,000        100,000       200,000
$\beta_k$  n/(4(k+1))   n/(14(k+1))   n/(24(k+1))   n/(34(k+1))   n/(44(k+1))

4. Numerical experiments

In this section, we illustrate the performance of the proposed algorithm with three numerical tests. The algorithm is written in C and run on an Intel Core i7-920 (2.67 GHz) desktop with 12 GB of RAM, running Ubuntu 18.04 (64-bit). For all tests, the stopping criterion is

$$\|\max\{x^k - \bar u,\, 0\}\| < \epsilon_1,\qquad \|x^{k+1} - x^k\| < \epsilon_2,$$

where $\max\{v, 0\} = (\max\{v_1, 0\}, \dots, \max\{v_n, 0\})^{T}$ and $\epsilon_1 = 10^{-3}$, $\epsilon_2 = 10^{-6}$. We considered a set of 100 randomly generated problems in each of 5 different dimensions (5000, 10,000, 50,000, 100,000 and 200,000), totaling 500 problems. Each problem was run 100 times to obtain a reliable estimate of the running time. For all problems we chose $x^0 = (1/n, \dots, 1/n)^{T} \in \mathbb{R}^n$.

Example 3. In the first example we solve a nonlinear problem studied in [15]. This class of problems has $f_i(z_i) = a_i z_i\left(e^{-1/z_i} - 1\right)$ for $z_i > 0$ and $f_i(z_i) = -a_i z_i$ for $z_i \le 0$, with constraint functions $h_i(z_i) = b_i z_i$. Instances were generated as follows:

• $a_i, b_i \in [0.001, 1000]$;
• $c = 1.1 \sum b_i \xi_i$, where $\gamma = \min_j\{a_j / b_j\}$ and

$$\xi_i = \begin{cases} 0, & \text{if } a_i / b_i > \gamma,\\[2pt] \arg\min_{z_i}\, f_i(z_i) + \gamma\, h_i(z_i), & \text{if } a_i / b_i \le \gamma; \end{cases}$$

• $l_i = 0$, $u_i = c / b_i$.

Table 1 shows the average number of iterations and CPU time for different dimensions $n$, with exogenous parameters $\rho_k = 100$ and $\beta_k = 20/(k+1)$, where

$$[\nabla f_k(x^k)]_j = a_j\left[e^{-b_j/(c\, x^k_j)}\left(1 + \frac{b_j}{c\, x^k_j}\right) - 1\right]\frac{c}{b_j} + \rho_k \max\{0,\, x^k_j - 1\}.$$

(Recall from (4) that $\bar u_j = 1$ in this instance, since $u_j = c/b_j$ and $l_j = 0$.)

Table 1 shows that the PPA algorithm can solve the resource renewal problem, but at the cost of a high number of iterations and CPU time. For the next examples, the sequence of exogenous parameters $\{\beta_k\}$, related to the step size of the PPA algorithm, was selected depending on the dimension of the tested problem, as presented in Table 2.

Example 4. In the second example we solve a polynomial problem over a simplex, based on a test problem given in [17]. This class of problems has $f_i(x_i) = s_i x_i^4 + t_i x_i^3 + v_i x_i^2 + w_i x_i$ and $h_i(x_i) = x_i$. Instances were generated as follows:

• $s_i, v_i, w_i \in [1, 10]$;
• $t_i \in \left[0, \sqrt{8 s_i v_i}/3\right)$, $i = 1, \dots, n$.

The extremes of the box were taken as $l_i = 0$ and $u_i \in (0, 10]$, $i = 1, \dots, n$. Finally, the right-hand side constant $c$ of the knapsack constraint was chosen such that $\langle b, l\rangle < c \le \langle b, u\rangle$. Table 3 shows the average number of iterations and CPU time for different dimensions $n$ and the exogenous parameter $\rho_k = n$, where

$$[\nabla f_k(x^k)]_j = \left[4 s_j \left(c\, x^k_j\right)^{3} + 3 t_j \left(c\, x^k_j\right)^{2} + 2 v_j\, c\, x^k_j + w_j\right] c + \rho_k \max\{0,\, x^k_j - \bar u_j\}.$$

In Table 3, it is interesting to observe that the number of iterations decreases as the dimension $n$ grows.

Table 3. Convex quartic over a simplex.

n        Iter. (k)   CPU (s)
5000     25.96       1.3e-03
10,000   11.02       1.1e-03
50,000   11.75       6.2e-03
100,000  10.05       1.1e-02
200,000  9.73        2.4e-02

Table 4. Uncorrelated test.

         PPA              NM               VF               VS
n        It.(k)  cpu(s)   It.(k)  cpu(s)   It.(k)  cpu(s)   It.(k)  cpu(s)
5000     8.6     0.4      4.3     0.3      7.3     0.3      13.4    0.6
10,000   4.2     0.5      4.2     0.5      7.6     0.6      14.3    1.2
50,000   4.1     2.6      3.9     2.6      7.8     3.3      16.7    5.6
100,000  3.8     5.0      4.2     5.4      8.2     6.7      17.7    11.2
200,000  3.8     10.7     4.3     11.2     8.5     14.8     18.7    23.7

Table 5. Weakly correlated test.

         PPA              NM               VF               VS
n        It.(k)  cpu(s)   It.(k)  cpu(s)   It.(k)  cpu(s)   It.(k)  cpu(s)
5000     9.6     0.5      4.1     0.3      7.1     0.3      13.3    0.6
10,000   4.2     0.5      4.0     0.5      7.4     0.6      14.3    1.2
50,000   4.5     2.7      3.8     2.5      7.7     3.3      16.7    5.6
100,000  4.1     5.2      4.1     5.1      7.9     6.6      17.7    11.0
200,000  4.1     11.3     4.1     10.8     8.0     14.4     18.6    24.8

Table 6. Strongly correlated test.

         PPA              NM               VF               VS
n        It.(k)  cpu(s)   It.(k)  cpu(s)   It.(k)  cpu(s)   It.(k)  cpu(s)
5000     9.1     0.5      4.0     0.2      6.9     0.3      13.5    0.6
10,000   4.3     0.6      3.8     0.5      7.1     0.6      14.3    1.1
50,000   4.5     2.8      3.9     2.5      7.6     3.2      16.7    5.5
100,000  4.2     5.2      3.8     5.0      7.8     6.7      17.7    11.1
200,000  4.3     11.6     3.7     10.3     7.8     14.4     18.6    23.7

Example 5. In this convex quadratic knapsack problem, where $f_i(x_i) = \frac{1}{2} p_i x_i^2 - a_i x_i$, we generated the tests with varying degrees of correlation between the constraint and objective function coefficients, in the same way as described in [8]. All parameters were randomly generated from the following intervals:

• Uncorrelated: $p_i$, $a_i$ and $b_i \in [10, 25]$, $i = 1, \dots, n$;
• Weakly correlated: $b_i \in [10, 25]$, $p_i \in [b_i - 5, b_i + 5]$ and $a_i \in [b_i - 5, b_i + 5]$, $i = 1, \dots, n$;
• Strongly correlated: $b_i \in [10, 25]$, $p_i = b_i + 5$ and $a_i = b_i + 5$, $i = 1, \dots, n$.

For all three problem classes, the extremes of the box, $l_i$ and $u_i$, were taken in the interval $[10, 15]$ for all $i = 1, \dots, n$. Finally, the right-hand side constant $c$ of the knapsack constraint was chosen such that $\langle b, l\rangle < c \le \langle b, u\rangle$. The exogenous parameters were selected as $\rho_k = \|\bar p\|_2$, with $\beta_k$ given in Table 2, and $[\nabla f_k(x^k)]_j = \bar p_j x^k_j - \bar a_j + \rho_k \max\{0,\, x^k_j - \bar u_j\}$, where

$$\bar p_i = p_i \left(\frac{c - \langle b, l\rangle}{b_i}\right)^{2} \qquad\text{and}\qquad \bar a_i = \frac{(a_i - p_i l_i)(c - \langle b, l\rangle)}{b_i}.$$


Tables 4-6 show the average number of iterations and CPU time (in seconds) for the four tested schemes: our scheme (PPA), the Newton method of [8] (NM), the variable fixing method of [13] (VF), and the variable search method of [12] (VS). Following [8], we coded all algorithms in C (ANSI C99). Tables 4-6 report the results for Example 5 with uncorrelated, weakly correlated and strongly correlated data, respectively. As can be seen, PPA and the Newton method have similar results and, for this example, both performed better than the other compared methods.

5. Conclusions

In this paper, we have developed an algorithm based on penalized gradient projection for solving the continuous convex separable knapsack problem. The most challenging step of the algorithm, obtaining a solution of a minimization problem over a compact set, is available through a closed formula. Moreover, at each step of the algorithm, only basic operations involving function evaluations and sums are performed. This provides us with a fast and promising algorithm. Based on the methods introduced in [18,19] and using a Bregman distance, namely the Kullback-Leibler distance, we proved the convergence of the proposed algorithm. We implemented our algorithms in C (ANSI C99) on a personal desktop and tested them on several known medium-size problems from the literature. Tables 1 and 3 showed that PPA performed better on Example 4 than on Example 3. Tables 4-6 demonstrated that the proposed algorithm is promising and competitive with other existing algorithms. A study including other performance indicators, rules for choosing parameters, and tests with large problems is ongoing research.

Acknowledgments

We would like to thank two anonymous referees whose comments and suggestions greatly improved this work. The third author was partially supported by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.

References

[1] K.M. Bretthauer, B. Shetty, The nonlinear knapsack problem - algorithms and applications, Eur. J. Oper. Res. 138 (3) (2002) 459-472.
[2] M. Patriksson, A survey on the continuous nonlinear resource allocation problem, Eur. J. Oper. Res. 185 (1) (2008) 1-46.
[3] M. Patriksson, C. Strömberg, Algorithms for the continuous nonlinear resource allocation problem - new implementations and numerical studies, Eur. J. Oper. Res. 243 (3) (2015) 703-722.
[4] G.R. Bitran, A.C. Hax, Disaggregation and resource allocation using convex knapsack problems with bounded variables, Manag. Sci. 27 (4) (1981) 431-441.
[5] K.M. Bretthauer, B. Shetty, Quadratic resource allocation with generalized upper bounds, Oper. Res. Lett. 20 (2) (1997) 51-57.
[6] K.M. Bretthauer, B. Shetty, S. Syam, A branch and bound algorithm for integer quadratic knapsack problems, ORSA J. Comput. 7 (1) (1995) 109-116.
[7] P. Brucker, An O(n) algorithm for quadratic knapsack problems, Oper. Res. Lett. 3 (3) (1984) 163-166.
[8] R. Cominetti, W.F. Mascarenhas, P.J.S. Silva, A Newton's method for the continuous quadratic knapsack problem, Math. Program. Comput. 6 (2) (2014) 151-169.
[9] Y.-H. Dai, R. Fletcher, New algorithms for singly linearly constrained quadratic programs subject to lower and upper bounds, Math. Program. 106 (3) (2006) 403-421.
[10] T.A. Davis, W.W. Hager, J.T. Hungerford, An efficient hybrid algorithm for the separable convex quadratic knapsack problem, ACM Trans. Math. Softw. (TOMS) 42 (3) (2016) 22.
[11] K.C. Kiwiel, On linear-time algorithms for the continuous quadratic knapsack problem, J. Optim. Theory Appl. 134 (3) (2007) 549-554.
[12] K.C. Kiwiel, Breakpoint searching algorithms for the continuous quadratic knapsack problem, Math. Program. 112 (2) (2008) 473-491.
[13] K.C. Kiwiel, Variable fixing algorithms for the continuous quadratic knapsack problem, J. Optim. Theory Appl. 136 (3) (2008) 445-458.
[14] A.G. Robinson, N. Jiang, C.S. Lerme, On the continuous quadratic knapsack problem, Math. Program. 55 (1-3) (1992) 99-108.
[15] A. Melman, G. Rabinowitz, An efficient method for a class of continuous nonlinear knapsack problems, SIAM Rev. 42 (3) (2000) 440-448.
[16] H. Suzuki, A generalized knapsack problem with variable coefficients, Math. Program. 15 (1) (1978) 162-176.
[17] S.E. Wright, J.J. Rohal, Solving the continuous nonlinear resource allocation problem with an interior point method, Oper. Res. Lett. 42 (6-7) (2014) 404-408.
[18] A. Beck, M. Teboulle, Mirror descent and nonlinear projected subgradient methods for convex optimization, Oper. Res. Lett. 31 (3) (2003) 167-175.
[19] A. Auslender, M. Teboulle, Projected subgradient methods with non-Euclidean distances for non-differentiable convex minimization and variational inequalities, Math. Program. 120 (1-2) (2009) 27-48.
[20] G. Chen, M. Teboulle, Convergence analysis of a proximal-like minimization algorithm using Bregman functions, SIAM J. Optim. 3 (3) (1993) 538-543.
