Computers & Operations Research 36 (2009) 2723–2728

Discrete dynamic convexized method for nonlinearly constrained nonlinear integer programming

Wenxing Zhu a,∗,1, M.M. Ali b

a Center for Discrete Mathematics and Theoretical Computer Science, Fuzhou University, Fuzhou 350002, China
b School of Computational and Applied Mathematics, University of the Witwatersrand, Wits 2050, South Africa

ARTICLE INFO

Available online 9 December 2008

Keywords: Constrained nonlinear integer programming; Convexized method; Constrained discrete local minimizer

ABSTRACT

This paper considers the nonlinearly constrained nonlinear integer programming problem over a bounded box. An auxiliary function is constructed based on a penalty function. By increasing the value of a parameter, minimization of this function by a discrete local search method can escape successfully from a previously converged discrete local minimizer. An algorithm is designed that minimizes the auxiliary function with increasing values of the parameter. Numerical experiments show that the algorithm is robust and efficient. © 2008 Elsevier Ltd. All rights reserved.

1. Introduction

Nonlinearly constrained nonlinear integer programming problems have many applications in science and engineering. A number of research papers dealing with reliability optimization problems are reported in the literature; these are integer programming problems with a nonlinear separable objective function and nonlinear multichoice constraints [1,2]. Also, the problem of determining an optimal batch size for a product and a purchasing policy for the associated raw materials of a manufacturing firm can be formulated as a constrained nonlinear integer programming problem [3].

There are two main classes of solution methods for constrained nonlinear integer programming: exhaustive methods and approximation methods. The exhaustive methods, which include the branch and bound method [4–6], are only applicable to problems with certain analytical properties. On the other hand, the approximation or heuristic methods can be applied to almost all discrete optimization problems. However, only a limited number of approximation or heuristic methods have been developed for constrained nonlinear integer programming. Generally, they can be divided into two classes.

∗ Corresponding author. E-mail address: [email protected] (W. Zhu).
1 Research supported by the National Natural Science Foundation of China under Grant 60773126, the Program for NCET, and the Natural Science Foundation of Fujian Province under Grant 2006J0030.
doi:10.1016/j.cor.2008.12.002

The first class of methods is based on Monte-Carlo techniques. Bertocchi et al. [7] presented a two-phase Monte-Carlo approach for 0–1 programming problems with separable objective and constraint functions. Litinetski and Abramzon [8] used a multi-start adaptive random search method for discrete global constrained optimization in engineering applications. The second class of approximation methods is based on greedy search or local search. Vassilev and Genova [9] presented a feasible integer directions algorithm to solve integer constrained convex polynomial programming problems.

In nonlinear integer programming, local search methods often get stuck at a local minimizer. To escape from a local minimizer, Mohan and Nguyen [10] used the technique of simulated annealing in controlled random search; they tested their algorithm on a small number of test problems. In recent years, some authors [11–14] presented a new scheme for nonlinear integer programming, which tries to escape from a discrete local minimizer by minimizing a filled function. However, for a constrained nonlinear integer programming problem, these methods must first use a penalty function [15] to convert the problem into an unconstrained one, and it is difficult to determine an exact penalty parameter for the penalty function. Ng et al. [16] presented a discrete global descent method for nonlinear integer programming, which tries to escape from a feasible discrete local minimizer to a better feasible one by minimizing an auxiliary function. The method begins with a feasible solution of the problem, but it is well known that finding a feasible solution of a constrained nonlinear integer programming problem is NP-hard.

In this paper, we extend the dynamic convexized method for the box-constrained nonlinear integer programming problem [17] to the nonlinearly constrained case. The method uses neither an exact penalty function nor an initial feasible solution of the problem. The rest of the paper is organized as follows. In Section 2, we give definitions of constrained discrete local and global minimizers, and present a discrete local search method for constrained nonlinear integer programming problems. We construct an auxiliary function in Section 3, and design an algorithm based on it in Section 4. Numerical experiments in Section 5 show the efficiency and robustness of the algorithm. Conclusions are drawn in Section 6.

2. Definitions of discrete local and global minimizers

We consider the following nonlinearly constrained nonlinear integer programming problem over a bounded box:

(P)  min  f(x)
     s.t. gi(x) ≤ 0,  i ∈ K,
          hj(x) = 0,  j ∈ J,
          x ∈ X ∩ Z^n,

where K and J are finite index sets, Z^n is the set of integer points in R^n, X is a bounded closed box in R^n, i.e., X = {x ∈ R^n : a ≤ x ≤ b} with a, b integer points in R^n, and f(x), gi(x), i ∈ K, hj(x), j ∈ J: R^n → R. Assume that problem (P) is feasible.

Definition 1 (Zhu and Fan [17]). For any x ∈ Z^n, a set of integer points N(x) ⊆ Z^n is called a neighborhood of the integer point x if {x, x + ei, x − ei, i = 1, ..., n} ⊆ N(x), where ei is the n-dimensional vector whose i-th component is 1 and whose other components are 0.

Many neighborhoods of an integer point satisfy the above definition, but only one is used in this paper: a neighborhood satisfying Definition 1 must be specified first, and it is kept unchanged in the sequel.

Definition 2 (Zhu and Fan [17]). An integer point x0 ∈ X ∩ Z^n is called a discrete local minimizer of f(x) over X ∩ Z^n if f(x) ≥ f(x0) for all x ∈ N(x0) ∩ X.

Definition 3 (Zhu and Fan [17]). An integer point x0 ∈ X ∩ Z^n is called a discrete global minimizer of f(x) over X ∩ Z^n if f(x) ≥ f(x0) for all x ∈ X ∩ Z^n.

Next, we present a discrete local search method for finding a discrete local minimizer of f(x) over X ∩ Z^n.

Algorithm 1 (Local search, Zhu [14]).
Step 1: Take an initial integer point x0 ∈ X ∩ Z^n.
Step 2: If x0 is a discrete local minimizer of f(x) over X ∩ Z^n, then stop; else take an integer point x ∈ N(x0) ∩ X such that f(x) < f(x0).
Step 3: Let x0 := x, and go to Step 2.

Let S be the set of feasible integer points of problem (P), i.e., S = {x ∈ X ∩ Z^n : gi(x) ≤ 0, i ∈ K, hj(x) = 0, j ∈ J}.

Definition 4. An integer point x0 ∈ S is called a constrained discrete local minimizer of problem (P) if f(x) ≥ f(x0) for all x ∈ N(x0) ∩ S.

Definition 5. An integer point x0 ∈ S is called a constrained discrete global minimizer of problem (P) if f(x) ≥ f(x0) for all x ∈ S.

Obviously, a constrained discrete global minimizer of problem (P) is also a constrained discrete local minimizer of problem (P). We present in the following a constrained local search method for problem (P).

Algorithm 2 (Constrained local search).
Step 1: Given an initial feasible integer point x0 of problem (P).
Step 2: If x0 is a constrained discrete local minimizer of problem (P), then stop; else take an integer point x ∈ N(x0) ∩ S such that f(x) < f(x0).
Step 3: Let x0 := x, and go to Step 2.

Generally, it is difficult to find a constrained discrete local minimizer of problem (P). Note that a constrained discrete local minimizer of problem (P) is a feasible solution, which satisfies gi(x) ≤ 0, i ∈ K, hj(x) = 0, j ∈ J, x ∈ X ∩ Z^n; but it is NP-hard to find a feasible solution of this inequality system.
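To make the two local searches concrete, the following is a minimal Python sketch, under the assumption that the neighborhood is the smallest one allowed by Definition 1 and that the objective and feasibility test are supplied by the caller; the names neighborhood and local_search are ours, not the paper's.

```python
def neighborhood(x, a, b):
    # Smallest neighborhood of Definition 1: x and its unit coordinate steps inside the box [a, b].
    yield x
    for i in range(len(x)):
        for step in (1, -1):
            y = list(x)
            y[i] += step
            if a[i] <= y[i] <= b[i]:
                yield tuple(y)

def local_search(x0, obj, a, b, feasible=lambda x: True):
    # Algorithms 1 and 2: move to any strictly better neighbor until none exists.
    # With the trivial feasibility test this is Algorithm 1; with the test
    # g_i(x) <= 0, h_j(x) = 0 and a feasible starting point x0 it is Algorithm 2.
    x = tuple(x0)
    improved = True
    while improved:
        improved = False
        for y in neighborhood(x, a, b):
            if feasible(y) and obj(y) < obj(x):
                x, improved = y, True
                break
    return x
```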

3. Auxiliary function and its properties

Let x1* be the current best solution of problem (P), and let f1* be a finite number such that, if x1* is a feasible integer point of problem (P), then f1* = f(x1*); otherwise f1* is an upper bound on the global minimal value of problem (P). Take

p(x) = μ · [ max{0, gi(x), i ∈ K} + Σj∈J |hj(x)| ],   (1)

where μ is any positive number. Obviously, x ∈ X ∩ Z^n is a feasible solution of problem (P) if and only if p(x) = 0. Construct the following auxiliary function:

T(x, k) = max{0, f(x) − f1*} + p(x) + k‖x − x1*‖   if f(x) ≥ f1* or p(x) > 0,
T(x, k) = f(x) − f1*                               otherwise,   (2)

where k is a nonnegative parameter and ‖·‖ denotes the p-norm, p = 1, 2, or ∞. Obviously, for all x ∈ X ∩ Z^n, if f(x) ≥ f1* or p(x) > 0, then T(x, k) ≥ 0; otherwise T(x, k) < 0. Construct the following auxiliary nonlinear integer programming problem:

(AP)  min  T(x, k)
      s.t. x ∈ X ∩ Z^n.

The objective of our method is to find a constrained discrete local minimal value of problem (P) less than f1* by solving problem (AP). First, we analyze the properties of the function T(x, k) on X ∩ Z^n.
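The penalty (1) and the auxiliary function (2) translate directly into code. In this sketch, gs and hs are lists of callables for the g_i and h_j, the 1-norm is used for ‖·‖, and the default mu = 10.0 merely mirrors the value used later in the experiments; all of these are assumptions of the sketch.

```python
def p(x, gs, hs, mu=10.0):
    # Penalty (1): zero exactly at the feasible integer points.
    return mu * (max([0.0] + [g(x) for g in gs]) + sum(abs(h(x)) for h in hs))

def T(x, k, f, gs, hs, x1, f1, mu=10.0):
    # Auxiliary function (2), with the 1-norm as the distance to the incumbent x1.
    px = p(x, gs, hs, mu)
    if f(x) >= f1 or px > 0:
        dist = sum(abs(u - v) for u, v in zip(x, x1))
        return max(0.0, f(x) - f1) + px + k * dist
    return f(x) - f1
```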

3.1. Discrete local and global minimizers of T(x, k)

Theorem 1. For x1*, we have the following results.
(1) If x1* is a constrained discrete local minimizer of problem (P), then x1* is a discrete local minimizer of T(x, k) over X ∩ Z^n.
(2) If p(x1*) > 0, and x1* is a discrete local minimizer of max{0, f(x) − f1*} + p(x) over X ∩ Z^n, then x1* is a discrete local minimizer of T(x, k) over X ∩ Z^n.

Proof. (1) In this case, p(x1*) = 0 and f1* = f(x1*). Then T(x1*, k) = 0, and

f(x) ≥ f(x1*) for all x ∈ N(x1*) ∩ S.   (3)

Now consider the set N(x1*) ∩ X. For x ∈ N(x1*) ∩ X, if x ∈ S, then by (3), f(x) ≥ f(x1*), and by (2), T(x, k) ≥ 0, so T(x, k) ≥ T(x1*, k); otherwise if x ∉ S, then p(x) > 0, and by (2), T(x, k) ≥ 0, so again T(x, k) ≥ T(x1*, k). Hence x1* is a discrete local minimizer of T(x, k) over X ∩ Z^n.

(2) In this case, for all x ∈ N(x1*) ∩ X,

max{0, f(x) − f1*} + p(x) ≥ max{0, f(x1*) − f1*} + p(x1*) > 0.

Then by (2), for all x ∈ N(x1*) ∩ X, T(x, k) = max{0, f(x) − f1*} + p(x) + k‖x − x1*‖ and T(x1*, k) = max{0, f(x1*) − f1*} + p(x1*) + k‖x1* − x1*‖, so T(x, k) ≥ T(x1*, k). Hence x1* is a discrete local minimizer of T(x, k) over X ∩ Z^n. □

To find x1* such that one of the two conditions in Theorem 1 holds, we take randomly an initial point x0 ∈ X ∩ Z^n and minimize max{0, f(x) − f1*} + p(x) over X ∩ Z^n using Algorithm 1. Suppose that x′ is an obtained discrete local minimizer. If p(x′) > 0, then let x1* = x′; otherwise use Algorithm 2 to minimize f(x) over S from x′ to get a constrained discrete local minimizer of problem (P), and denote it as x1*.

Theorem 2. For all x ∈ S1 = {x ∈ X ∩ Z^n : f(x) < f1* and p(x) = 0} and all y ∈ S2 = {x ∈ X ∩ Z^n : f(x) ≥ f1* or p(x) > 0}, it holds that T(x, k) < T(y, k).

Proof. By (2), for all x ∈ S1, T(x, k) = f(x) − f1* < 0, and for all y ∈ S2, T(y, k) = max{0, f(y) − f1*} + p(y) + k‖y − x1*‖ ≥ 0. So Theorem 2 holds. □

Theorem 2 implies the following corollary.

Corollary 1. If f1* is not the global minimal value of problem (P), then S1 = {x ∈ X ∩ Z^n : f(x) < f1* and p(x) = 0} ≠ ∅, and all discrete global minimizers of problem (AP) are in the set S1.

Furthermore, we have the following result.

Theorem 3. Suppose that f1* is not the global minimal value of problem (P). For y ∈ S1 = {x ∈ X ∩ Z^n : f(x) < f1* and p(x) = 0}, y is a discrete local minimizer of problem (AP) if and only if y is a constrained discrete local minimizer of problem (P).

Proof. If f1* is not the global minimal value of problem (P), then S1 ≠ ∅, and by (2), for any y ∈ S1,

T(y, k) = f(y) − f1* < 0.   (4)

Thus if y is a discrete local minimizer of problem (AP), then

T(y, k) = f(y) − f1* ≤ T(x, k) for all x ∈ N(y) ∩ X.   (5)

For any x ∈ N(y) ∩ S, we have p(x) = 0. If f(x) < f1*, then by (2), T(x, k) = f(x) − f1*, and by (5) it holds that f(y) ≤ f(x); otherwise if f(x) ≥ f1*, then by the assumption that y ∈ S1, i.e., f(y) < f1*, we have f(y) < f(x). So for all x ∈ N(y) ∩ S, f(x) ≥ f(y). That is, y is a constrained discrete local minimizer of problem (P).

Conversely, if y is a constrained discrete local minimizer of problem (P), then

f(y) ≤ f(x) for all x ∈ N(y) ∩ S.   (6)

For any x ∈ N(y) ∩ X, if f(x) < f1* and p(x) = 0, then by (2), T(x, k) = f(x) − f1*, and by (4) and (6), it holds that T(y, k) ≤ T(x, k); otherwise if f(x) ≥ f1* or p(x) > 0, then by (2), T(x, k) ≥ 0, so it also holds that T(y, k) ≤ T(x, k), since y ∈ S1 and T(y, k) < 0. Hence T(y, k) ≤ T(x, k) for all x ∈ N(y) ∩ X, which means that y is a discrete local minimizer of problem (AP). □

By Corollary 1 and Theorem 3, we have:

Corollary 2. If f1* is not the global minimal value of problem (P), then problems (P) and (AP) have the same global minimizers and global minimal values.

3.2. Properties of T(x, k) dependent on k

Lemma 1. For any x ∈ X ∩ Z^n, if x ≠ x1* ∈ X ∩ Z^n, then there exists y ∈ N(x) ∩ X such that ‖y − x1*‖ < ‖x − x1*‖.

Proof. Similar to the proof of [17, Lemma 9]. □

Theorem 4. For the function T(x, k), we have the following results.
1. For any x ∈ S2 = {x ∈ X ∩ Z^n : f(x) ≥ f1* or p(x) > 0}, if there exists y ∈ N(x) ∩ X such that f(y) < f1* and p(y) = 0, then x is not a discrete local minimizer of problem (AP).
2. For any x ∈ S2, x ≠ x1*, let

L(x) = min { ([max{0, f(z) − f1*} + p(z)] − [max{0, f(x) − f1*} + p(x)]) / (‖x − x1*‖ − ‖z − x1*‖) : z ∈ N(x) ∩ X, ‖z − x1*‖ < ‖x − x1*‖ }.   (7)

If k > L(x), then x is not a discrete local minimizer of problem (AP).
3. Especially, if

k > max{L(x) : x ∈ X ∩ Z^n},   (8)

then for all x ∈ S2, x ≠ x1*, x is not a discrete local minimizer of problem (AP).

Proof. 1. For any x ∈ S2, if there exists y ∈ N(x) ∩ X such that f(y) < f1* and p(y) = 0, then by (2), we have T(y, k) = f(y) − f1* < 0 and T(x, k) = max{0, f(x) − f1*} + p(x) + k‖x − x1*‖ ≥ 0. So T(x, k) > T(y, k), i.e., x is not a discrete local minimizer of problem (AP).
2. For any x ∈ S2, x ≠ x1*, by Lemma 1, there exists z ∈ N(x) ∩ X such that ‖z − x1*‖ < ‖x − x1*‖. So there exists y ∈ N(x) ∩ X such that ‖y − x1*‖ < ‖x − x1*‖ and

L(x) = ([max{0, f(y) − f1*} + p(y)] − [max{0, f(x) − f1*} + p(x)]) / (‖x − x1*‖ − ‖y − x1*‖).

Thus if k > L(x), then

max{0, f(x) − f1*} + p(x) + k‖x − x1*‖ = T(x, k) > max{0, f(y) − f1*} + p(y) + k‖y − x1*‖ ≥ T(y, k).

Hence x is not a discrete local minimizer of problem (AP), and assertion 2 holds.
3. Assertion 3 follows directly from assertion 2. □

Theorem 4 suggests that if minimization of T(x, k) over X ∩ Z^n using the local search method (Algorithm 1) gets stuck at a discrete local minimizer in the set S2 = {x ∈ X ∩ Z^n : f(x) ≥ f1* or p(x) > 0}, then by increasing the value of k sufficiently, the minimization of T(x, k) can escape from that discrete local minimizer. Moreover, by Theorems 1 and 4, if k is large enough, then while minimizing T(x, k) over X ∩ Z^n from any initial point in X ∩ Z^n, the minimization sequence will converge either to the prefixed discrete local minimizer x1*, or to a discrete local minimizer in the set {x ∈ X ∩ Z^n : f(x) < f1* and p(x) = 0}.

Let H(x) = max{0, f(x) − f1*} + p(x). Note that the objective of minimizing T(x, k) over X ∩ Z^n is to find a point x in the set S1 = {x ∈ X ∩ Z^n : f(x) < f1* and p(x) = 0}, which satisfies H(x) = 0. However, if the set S1 is small, then it is difficult to find such a point, and most of the effort is spent searching the set S2 = {x ∈ X ∩ Z^n : f(x) ≥ f1* or p(x) > 0}.
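For intuition, the escape threshold L(x) of (7) can be computed by direct enumeration. The sketch below assumes H implements max{0, f(x) − f1*} + p(x), neighbors enumerates N(x) ∩ X, and norm is the chosen p-norm; it presupposes x ≠ x1*, so the candidate set is nonempty by Lemma 1.

```python
def L_threshold(x, x1, H, neighbors, norm):
    # Eq. (7): any k > L(x) rules x out as a discrete local minimizer of (AP).
    diff = lambda u, v: tuple(ui - vi for ui, vi in zip(u, v))
    nx = norm(diff(x, x1))
    ratios = [(H(z) - H(x)) / (nx - norm(diff(z, x1)))
              for z in neighbors(x) if norm(diff(z, x1)) < nx]
    return min(ratios)  # nonempty by Lemma 1 since x != x1
```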


One can imagine that while minimizing T(x, k) over X ∩ Z^n, for two points x and y in S2 with x ∈ N(y), if H(x) < H(y), then it is desirable that T(x, k) < T(y, k). However, by the proof of assertion 2 in Theorem 4, if the value of k is so large that inequality (8) holds, then T(x, k) will be larger than T(y, k) whenever ‖x − x1*‖ > ‖y − x1*‖, and this will misguide the search for a good integer point. To make T(x, k) < T(y, k) while H(x) < H(y), we need the following result.

Theorem 5. Let H(x) = max{0, f(x) − f1*} + p(x). Suppose that x ∈ S2, z ∈ N(x) ∩ X, z ∈ S2, and H(z) < H(x). Then T(z, k) < T(x, k) if and only if one of the following conditions holds:
1. k = 0;
2. k > 0 and ‖z − x1*‖ ≤ ‖x − x1*‖;
3. k > 0, ‖z − x1*‖ > ‖x − x1*‖, and k < (H(x) − H(z)) / (‖z − x1*‖ − ‖x − x1*‖).

Proof. Under the assumptions of Theorem 5, by (2), T(z, k) < T(x, k) is equivalent to

H(z) + k‖z − x1*‖ < H(x) + k‖x − x1*‖.   (9)

If k = 0, then by the assumption that H(z) < H(x), it is obvious that (9) holds. If ‖z − x1*‖ ≤ ‖x − x1*‖, then inequality (9) holds for all nonnegative k, since H(z) < H(x). Furthermore, if condition 3 holds, then k(‖z − x1*‖ − ‖x − x1*‖) < H(x) − H(z), and inequality (9) holds. □

Theorem 5 suggests that T(x, k) may fail to preserve the descent structure of H(x) in the set S2 if k is too large. So while minimizing T(x, k) on X ∩ Z^n, for the sake of finding a constrained discrete local minimal value of problem (P) less than f1*, k should not be too large. But by Theorem 4, to bypass a previously converged discrete local minimizer while minimizing T(x, k) on X ∩ Z^n, k should be large enough, which conflicts with the requirement of Theorem 5. Hence, in the algorithm presented in the next section, while minimizing T(x, k) on X ∩ Z^n, we take k = 0 initially and increase the value of k gradually.

4. Dynamic convexized method

Now we present an algorithm for problem (P) based on solving problem (AP). The basic idea of the algorithm is as follows. We take k = 0 initially, and take randomly a starting point in X ∩ Z^n to minimize T(x, k) on X ∩ Z^n using Algorithm 1. If the minimization sequence converges to a point x′ ≠ x1* with f(x′) ≥ f1* or p(x′) > 0, then we increase the value of k and minimize T(x, k) on X ∩ Z^n from x′. If the minimization sequence again converges to a point x′ ≠ x1* with f(x′) ≥ f1* or p(x′) > 0, then by Theorem 4 the value of k is too small; we increase k and minimize T(x, k) on X ∩ Z^n from x′ again, until the minimization sequence converges either to x1* or to a point in {x ∈ X ∩ Z^n : f(x) < f1* and p(x) = 0}. If the minimization sequence converges to x1*, then we repeat the above process. If it converges to a point in {x ∈ X ∩ Z^n : f(x) < f1* and p(x) = 0}, then by Theorem 3 we have found a constrained discrete local minimizer of problem (P) better than x1*; we let x1* be this better constrained discrete local minimizer, and repeat the above process again.

We now present the step by step algorithm of the dynamic convexized method. A sketch in code is given after the convergence discussion below.

Algorithm 3 (Discrete dynamic convexized method).
Step 1: Let f1* be a large number such that it is an upper bound on the global minimal value of problem (P). Select randomly a point x ∈ X ∩ Z^n, and minimize max{0, f(x) − f1*} + p(x) from x over X ∩ Z^n using Algorithm 1 to get a discrete local minimizer x′. If p(x′) > 0, then let x1* = x′; otherwise use Algorithm 2 to minimize f(x) over S from x′ to get a constrained discrete local minimizer of problem (P), denote it as x1*, and let f1* = f(x1*). Let NL be a sufficiently large integer, and let Δk be a positive number. Set N = 0.
Step 2: Set k = 0 and N = N + 1. If N > NL, then go to Step 5; otherwise draw uniformly at random an initial point y in X ∩ Z^n and go to Step 3.
Step 3: Minimize T(x, k) over X ∩ Z^n from y using Algorithm 1. Suppose that x′ is the obtained discrete local minimizer. If x′ ≠ x1* and f(x′) ≥ f1* or p(x′) > 0, then set k = k + Δk, y = x′, and repeat Step 3. If x′ = x1*, then go to Step 2. If f(x′) < f1* and p(x′) = 0, then go to Step 4.
Step 4: Let x1* = x′, and go to Step 2.
Step 5: Stop the algorithm; if p(x1*) = 0, then output x1* and f(x1*) as an approximate global minimal solution and global minimal value of problem (P), respectively.

In the above algorithm, NL is the maximal number of random initial points drawn at Step 2 from which T(x, k) is minimized over X ∩ Z^n in Steps 3 and 4.

Next we discuss the convergence of Algorithm 3. Suppose that problem (P) is feasible, and without loss of generality, suppose that f1* is not the global minimal value of problem (P). Let S* be the set of constrained discrete global minimizers of problem (P); by Corollary 2, S* is also the set of discrete global minimizers of problem (AP). Let |S*| be the number of integer points in S*. Obviously, |S*| > 0. Let xk be the k-th random point drawn uniformly in X ∩ Z^n at Step 2. Obviously, the probability that xk ∈ S* satisfies Pr{xk ∈ S*} > 0. Thus, Pr{∃k s.t. xk ∈ S*} = 1.

Suppose that f* is the global minimal value of problem (P). Define fk+1* as follows: if, in Algorithm 3 at Step 3, the minimization of T(x, k) on X ∩ Z^n from xk converges finally to x1*, then set fk+1* = fk*; otherwise let fk+1* be the discrete local minimal value of problem (P) found at Step 3, which satisfies fk+1* < fk* and fk+1* ≥ f*. By Corollary 2, if xk ∈ S*, then fk+1* = f*. So Pr{fk+1* = f*} ≥ Pr{xk ∈ S*} > 0. Hence fk* converges to f* with probability 1.
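As a concrete illustration, the following Python sketch assembles Algorithm 3 from the earlier sketches (local_search, p, and T). The bound 1e12 for the initial f1*, the default values of NL, dk (our Δk), and mu, and the helper rand_point are illustrative assumptions, not values prescribed by the paper.

```python
import random

def dynamic_convexized(f, gs, hs, a, b, NL=100, dk=1.0, mu=10.0):
    # A sketch of Algorithm 3, built on the local_search, p and T sketches above.
    n = len(a)
    rand_point = lambda: tuple(random.randint(a[i], b[i]) for i in range(n))
    feasible = lambda x: p(x, gs, hs, mu) == 0

    # Step 1: obtain the first incumbent x1* (and f1* if it is feasible).
    f1 = 1e12                                      # assumed upper bound on the global minimum
    x1 = local_search(rand_point(),
                      lambda x: max(0.0, f(x) - f1) + p(x, gs, hs, mu), a, b)
    if feasible(x1):
        x1 = local_search(x1, f, a, b, feasible)   # Algorithm 2 from x1
        f1 = f(x1)

    for _ in range(NL):                            # Step 2: at most NL random restarts
        k, y = 0.0, rand_point()
        while True:
            # Step 3: minimize T(., k) from y by Algorithm 1.
            xp = local_search(y, lambda x: T(x, k, f, gs, hs, x1, f1, mu), a, b)
            if xp != x1 and (f(xp) >= f1 or not feasible(xp)):
                k, y = k + dk, xp                  # k too small: raise it, continue from xp
            elif xp == x1:
                break                              # back at the incumbent: draw a new start
            else:
                x1, f1 = xp, f(xp)                 # Step 4: strictly better feasible minimizer
                break

    # Step 5: x1 approximates a global minimizer when p(x1) = 0.
    return x1, f1
```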

5. Numerical experiments

Now we analyze the performance of Algorithm 3 on a set of test problems taken from [17], namely Problems 1, 3, 7, 8, 9, 11, and 15. For the neighborhood N(x) of an integer point x, we take N(x) = {x, x + ei, x − ei, i = 1, 2, ..., n}. The algorithm was tested on a personal computer with a 1.7 GHz Pentium CPU and 128 MB RAM.

The function p(x) in (1) has a parameter μ. To study how the value of μ affects the performance of Algorithm 3, we test Algorithm 3 on Problem 9 with Δk = 10 fixed. We take μ = 0.1, 1.0, 5.0, 10.0, 50.0, 100.0, 500.0, 1000.0, and 10000.0, respectively, and run the algorithm 25 times for each value. In these runs, if the algorithm cannot find a discrete global minimizer of the problem within 5.0 × 10^5 function calls, it is stopped. We record the number of function calls needed to reach a global minimizer. The results are given in Table 1, where each number in the column 'min' is the minimal number of function calls to reach a discrete global minimizer among the 25 runs; each number in the column 'max' is the maximal number of function calls among successful runs; each number in the column 'mean' is the average number of function calls of successful runs; and each number in the column 'fail' is the number of runs, out of 25, in which the optimum was not reached.

Table 1. Performance of Algorithm 3 on Problem 9 for different μ.

μ         Min     Max      Mean     Fail
0.1       1545    499804   127239   5
1.0       1539    119716   32950    12
5.0       1688    203768   53431    0
10.0      878     110491   39300    0
50.0      948     107046   61290    0
100.0     1032    338777   84354    0
500.0     1704    408783   149986   5
1000.0    2558    492056   194646   7
10000.0   17888   340643   213788   15

From the 'fail' column of Table 1, it can be seen that, for too small and too large values of μ, our algorithm sometimes fails to solve Problem 9 within 5.0 × 10^5 function calls. Moreover, from the 'mean' column, the average number of function calls generally decreases as μ increases from 0.1 to 10, and increases as μ increases from 10 to 10000. The reason may be that a too small value of μ makes the search concentrate on minimizing the function f(x), while a too large value of μ makes the search concentrate on minimizing the function p(x).

Next, to study the sensitivity of Δk, we test the algorithm on Problem 9 with μ = 10 fixed in the function p(x). We take Δk = 0.01, 0.1, 0.5, 1.0, 5.0, 10.0, 100.0, 500.0, and 1000.0, respectively, and run the algorithm 25 times for each value, again stopping a run if no discrete global minimizer is found within 5.0 × 10^5 function calls. The results are given in Table 2.

Table 2. Performance of Algorithm 3 on Problem 9 for different Δk.

Δk       Min    Max      Mean     Fail
0.01     *      *        *        25
0.1      2558   351408   99876    10
0.5      1200   234096   126255   0
1.0      1032   156443   77401    0
5.0      892    94450    38315    0
10.0     878    110491   39300    0
100.0    1537   209890   66553    0
1000.0   1542   209019   88421    0

* means that a discrete global minimizer was not found within 5.0 × 10^5 function calls.

From the 'fail' column of Table 2, it can be seen that our algorithm cannot solve Problem 9 within 5.0 × 10^5 function calls for Δk = 0.01 and 0.1, but solves it successfully for the other values of Δk. So as the value of Δk increases, the probability of failing to find a discrete global minimizer of Problem 9 decreases. Moreover, from the 'mean' column, the average number of function calls decreases as Δk increases from 0.5 to 5.0, and increases as Δk increases from 5.0 to 1000.0. The reason may be that a too small value of Δk makes Algorithm 3 spend more effort escaping from a discrete local minimizer, while a too large value of Δk misguides the local search too often and thus wastes function calls. It is therefore important to choose a problem-dependent value of Δk.

Next we discuss how to choose the value of Δk. In our implementation, we fix μ = 10 and take ‖·‖1 as the norm in T(x, k). Note that for any z ∈ N(x) ∩ X, there exists i ∈ {1, 2, ..., n} such that z = x + ei or z = x − ei, and we have

z − x1* = (x − x1*) + ei   or   z − x1* = (x − x1*) − ei.   (10)

So ‖x − x1*‖1 − ‖z − x1*‖1 = 1 for every z ∈ N(x) ∩ X with ‖z − x1*‖1 < ‖x − x1*‖1, and Eq. (7) yields

L(x) ≤ min { [max{0, f(z) − f1*} + p(z)] − [max{0, f(x) − f1*} + p(x)] : z ∈ N(x) ∩ X, ‖z − x1*‖1 = ‖x − x1*‖1 − 1 } ≜ DIF(x).

Thus, L(x) ≤ DIF(x) ≤ max{DIF(x) : x ∈ X ∩ Z^n}.

We use max{DIF(x) : x ∈ X ∩ Z^n} to estimate the value of Δk. However, it is difficult to calculate this maximum exactly. Instead, we approximate DIF(x) by

APP(x) = min { [f(z) + p(z)] − [f(x) + p(x)] : z ∈ N(x) ∩ X, ‖z − x1*‖1 = ‖x − x1*‖1 − 1 },

and before running the algorithm we draw randomly 100 points x1, x2, ..., x100 in X ∩ Z^n and use max{APP(x1), APP(x2), ..., APP(x100)} as a rough estimate of max{DIF(x) : x ∈ X ∩ Z^n}. If this estimate is positive, then we let

Δk = (1/200) max{APP(x1), APP(x2), ..., APP(x100)};   (11)

otherwise we let Δk = 10^−6.

To see the accuracy of the Δk provided by (11), we calculated the value of Δk 1000 times independently on Problem 9. These values lie between 5.845 and 7.42, with an average of 6.823. In comparison with Table 2, it is clearly acceptable to calculate the value of Δk in this way.

We then run the algorithm 25 times independently on every test problem and record the number of function calls needed to reach a global minimizer; a run is stopped if the algorithm cannot find a discrete global minimizer of the test problem within 2.0 × 10^7 function calls. The results are given in Table 3, where every result of our algorithm includes the cost of estimating Δk by (11). Each number in the column 'time' is the average time (in seconds) used by Algorithm 3 to reach a global minimizer of the problem among the 25 runs, and each number in the column 'av-ni' is the average number of random initial points drawn at Step 2 of Algorithm 3. To compare the performance of Algorithm 3 with some other methods, Table 3 also lists the test results of the discrete filled function method [11] and of the discrete dynamic convexized method with the exact penalty function method [17].
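The rule (11) is easy to reproduce. The following sketch reuses the neighborhood and p sketches above and draws the 100 sample points uniformly from the box; returning 0.0 from APP at points with no qualifying neighbor is our simplification for the corner case x = x1*.

```python
import random

def estimate_dk(f, gs, hs, a, b, x1, mu=10.0, samples=100):
    # Rule (11): Δk = max of APP over 100 random points, divided by 200 (10^-6 as fallback).
    n = len(a)
    d1 = lambda u, v: sum(abs(ui - vi) for ui, vi in zip(u, v))  # 1-norm distance

    def app(x):
        m = d1(x, x1)
        vals = [f(z) + p(z, gs, hs, mu) - f(x) - p(x, gs, hs, mu)
                for z in neighborhood(x, a, b) if d1(z, x1) == m - 1]
        return min(vals) if vals else 0.0

    pts = [tuple(random.randint(a[i], b[i]) for i in range(n)) for _ in range(samples)]
    best = max(app(x) for x in pts)
    return best / 200.0 if best > 0 else 1e-6
```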


Table 3. Performances and comparisons of algorithms.

              Our algorithm's results                                  Results by [11]  Results by [17]
Prob.         Min      Max       Mean        av-ni  Time   Fail        Mean             Mean       Time
1             7690     211393    60360.6     29.3   0.046  0           4474             24679.2    0.039
3             405      1152      599.2       3.2    0.004  0           104.9            658.1      0.006
7             294      48641     11568.7     20.2   0.016  0           –                7055.2     0.038
8             1170     160795    51186.3     63.1   0.023  0           –                77122.8    0.049
9             478      199656    31162.1     56.2   0.048  0           –                52000.9    0.055
11            3051376  18355528  10982038.6  72.9   7.35   0           1608067.3        12780839   11.22
15 (n = 10)   600      4114      2107.4      4.6    0.007  0           –                –          –
15 (n = 20)   7872     49036     13973.9     8.4    0.07   0           –                –          –
15 (n = 25)   9592     57074     25553.9     10.0   0.086  0           45158.5          28089.6    0.093
15 (n = 30)   22660    90820     41047.8     12.0   0.283  0           –                –          –
15 (n = 40)   66116    182261    99428.5     16.5   0.87   0           –                –          –
15 (n = 50)   102312   453557    204070.5    21.1   2.17   0           323156.5         180368.3   1.499
15 (n = 100)  809626   2055377   1249832.2   34.0   21.52  0           2734844.5        1559704    24.31

– means results are not available.

From the 'fail' column of Table 3, it can be seen that Algorithm 3 finds discrete global minimizers of all test problems successfully in all 25 runs. From the 'mean' column, Algorithm 3 does not use a very large number of function calls, and from the 'av-ni' column, it needs only a rather small number of initial points on all test problems. To see the effect of dimensionality on the number of function calls, we conducted test runs using Problem 15; from its 'mean' entries, the average number of function calls of Algorithm 3 increases moderately as the dimension of the test problem increases. So the algorithm is robust on these problems.

Although the discrete filled function method [11] was tested using given initial points, while our algorithm uses random initial points, we still compare the average numbers of function calls of the two methods. From the 'mean' columns of Table 3, our algorithm uses fewer function calls than the discrete filled function method on Problem 15 for all dimensions with available results, but the discrete filled function method uses fewer function calls than our algorithm on Problems 1, 3, and 11.

Next we compare the test results of our algorithm with the results from [17]. From the 'time' columns of Table 3, our algorithm uses less time than the discrete dynamic convexized method with the exact penalty function method on Problems 3–11, and on Problem 15 with dimensions 25 and 100. From the 'mean' columns, our algorithm likewise uses fewer function calls than that method on Problems 3–11, and on Problem 15 with dimensions 25 and 100.

6. Conclusions

In this paper, we have proposed an algorithm for constrained nonlinear integer programming problems. The algorithm is based on an auxiliary function: we have proved that, within the feasible region, the auxiliary function has the same discrete local and global minimizers as the original problem, and the algorithm therefore proceeds by minimizing the auxiliary function. We have studied the parameters of the auxiliary function both theoretically and numerically, and in both cases we have shown that a global minimizer can be obtained with suitable choices of the parameter values. Comparisons of the obtained numerical results with those of some recent algorithms demonstrate the superiority of the auxiliary function based algorithm.

Acknowledgments

The authors are grateful for the insightful and valuable comments provided by the anonymous referees, which have improved the clarity and quality of this paper.

References

[1] Chern M, Jon R. Reliability optimization problems with multiple constraints. IEEE Transactions on Reliability 1986;R-35(4):431–6.

[2] Misra KB, Sharma U. An efficient algorithm to solve integer programming problems arising in system reliability design. IEEE Transactions on Reliability 1991;R-40(1):81–91.
[3] Sarker R, Runarsson T, Newton C. Genetic algorithms for solving a class of constrained nonlinear integer programs. In: Proceedings of the 15th Australian society for operational research conference, 1999. p. 1122–37.
[4] Benson HP, Erenguc SS, Horst R. A note on adapting methods for continuous global optimization to the discrete case. Annals of Operations Research 1990;25(1–4):243–52.
[5] Erenguc SS, Benson HP. An algorithm for indefinite integer quadratic programming. Computers and Mathematics with Applications 1991;21(6/7):99–106.
[6] Lee WJ, Cabot AV, Venkataramanan MA. A branch and bound algorithm for solving separable convex integer programming problems. Computers and Operations Research 1994;21(9):1011–24.
[7] Bertocchi M, Brandolini L, Slominski L, Sobczynska J. A Monte-Carlo approach for 0–1 programming problems. Computing 1992;48:259–74.
[8] Litinetski VV, Abramzon BM. MARS—a multistart adaptive random search method for global constrained optimization in engineering applications. Engineering Optimization 1998;30:125–54.
[9] Vassilev V, Genova K. An approximate algorithm for nonlinear integer programming. European Journal of Operational Research 1994;74:170–8.
[10] Mohan C, Nguyen HT. A controlled random search technique incorporating the simulated annealing concept for solving integer and mixed integer global optimization problems. Computational Optimization and Applications 1999;14:103–32.
[11] Ng C, Zhang L, Li D, Tian WW. Discrete filled function method for discrete global optimization. Computational Optimization and Applications 2005;31:87–115.
[12] Tian W, Zhang L. An algorithm for finding global minimum of nonlinear integer programming. Journal of Computational Mathematics 2004;22(1):69–78.
[13] Zhu WX. An approximate algorithm for nonlinear integer programming. Applied Mathematics and Computation 1998;93:183–93.
[14] Zhu WX. A filled function method for integer programming. Acta Mathematicae Applicatae Sinica 2000;23(4):481–7 [in Chinese].
[15] Sinclair M. An exact penalty function approach for nonlinear integer programming problems. European Journal of Operational Research 1986;27:50–6.
[16] Ng C, Li D, Zhang L. Discrete global descent method for discrete global optimization and nonlinear integer programming. Journal of Global Optimization 2007;37(3):357–79.
[17] Zhu WX, Fan H. A discrete dynamic convexized method for nonlinear integer programming. Journal of Computational and Applied Mathematics 2009;223(1):356–73.