European Journal of Operational Research 180 (2007) 1011–1027 www.elsevier.com/locate/ejor
Discrete Optimization
A method for finding well-dispersed subsets of non-dominated vectors for multiple objective mixed integer linear programs

John Sylva a, Alejandro Crema b,*

a Departamento de Matemáticas Aplicadas, Facultad de Ingeniería, Universidad Central de Venezuela, Venezuela
b Facultad de Ciencias, Escuela de Computación, Universidad Central de Venezuela, Apdo. 47002, Caracas 1041-A, Venezuela

* Corresponding author. Fax: +58 2 60 52 168. E-mail addresses: [email protected] (J. Sylva), [email protected] (A. Crema).

Received 16 September 2004; accepted 16 February 2006; available online 7 July 2006
doi:10.1016/j.ejor.2006.02.049
Abstract

We present an algorithm for generating a subset of the non-dominated vectors of a multiple objective mixed integer linear program. Starting from an initial non-dominated vector, the procedure finds at each iteration a new one that maximizes the infinity-norm distance from the set dominated by the previously found solutions. When all variables are integer, it can generate the whole set of non-dominated vectors.

© 2006 Elsevier B.V. All rights reserved.

Keywords: Integer programming; Multiple objective programming; Parametric programming
1. Introduction

Generating non-dominated vectors and efficient solutions in multiple objective mathematical programming has been a field of interest both as a theoretical problem and as a part of multiple criteria decision making procedures. Several procedures have been developed for generating the non-dominated set of multiple objective linear programming (MOLP) [4,5,7], multiple objective integer linear programming (MOILP) [3,11,16,17], and multiple objective mixed integer linear programming (MOMILP) [10] problems. However, these methods have a considerable computational cost and, because of the large number of solutions, their results cannot be used without relying on filtering procedures. There are also interactive procedures for these problems [1,8,14] that try to overcome the computational cost, but they too may need filtering in order to get well-dispersed results.

The method we present in this paper was originally conceived as a variant of the procedure we proposed in [15]. That method, based on [9], could only generate the whole set of non-dominated vectors for MOILP
problems with integer entries in the cost matrix. Our original goal was to overcome this limitation; however, we found a more useful feature of the new procedure, namely, that it generates at each iteration the vector that maximizes the infinity-norm distance to the set dominated by all the previously found vectors. This characteristic allows us to obtain uniform representations (as defined in [12]) of the set of non-dominated vectors using partial runs of the procedure, without relying on filtering techniques. In contrast, in our first procedure the solutions are generated in decreasing order of some fixed weighted objective function. This difference is important if we consider that a full enumeration of the set of non-dominated vectors is impossible in practice with these methods, as the complexity of the integer linear problems grows with the number of solutions generated. Partial runs of the procedures are more viable, and therefore the order of generation of the solutions is a critical aspect to take into account when comparing both approaches. The first method allows us to get better solutions first if we have a good estimate of the decision maker's preferences, but when there is little information about these, a representative subset of non-dominated vectors can only be achieved by introducing problem-dependent step parameters, and there is no guarantee of uniformity. In contrast, our new proposal can generate uniform representations by specifying more natural parameters such as the number of elements in the subset or the minimal distance between its elements. The procedure can be applied to problems with continuous variables, such as MOLP and MOMILP problems, allowing us to produce well-dispersed samples of non-dominated vectors for these classes of problems.

2. Theoretical basis

The MOMILP problem can be stated as
\ max "fCx : Ax 6 b; x P 0; xj 2 Z for j 2 J g; pn
where C 2 R , A 2 Rmn , b 2 Rm and J {1, . . . , n}. Cx represents p objective functions, Ax 6 b represents m linear constraints and x represents n decision variables. J is an index set of the integer variables; two important cases considered are the MOLP problem, where J = ; and the MOILP problem, where J = {1, . . . , n}. The feasible set of problem (P) will be denoted by F(P) and is assumed to be a non-empty bounded set. Because of conflicting objectives, there is not usually a maximum solution but maximal or non-dominated solutions. Definition 2.1. A feasible solution x* to problem (P) is an efficient solution iff there is not another feasible x such that Cx P Cx* with at least one strict inequality. The resulting criterion vector Cx* is said to be nondominated. A well-known result connecting multiple objective programming and parametric programming is the following [14]: Theorem 2.2. If x* is an optimal solution to the (single objective) problem maxfkt Cx : x 2 Sg for some k 2 Rp , k > 0, then x* is an efficient solution to problem \ max "fCx : x 2 Sg: Efficient solutions that are optimal to the parametric problem in Theorem 2.2 are said to be supported efficient solutions. Unlike MOLP, the reciprocal of this theorem does not hold for MOMILP or MOILP [2] as some efficient solutions (known as unsupported efficient solutions) may not be optimal for any k > 0. The following results allow us to find a solution (supported or unsupported) such that its corresponding objective vector is at a maximal distance, accordingly to the infinity-norm, from the region dominated by a finite set of known non-dominated vectors. In the next two lemmas, we will prove that, given a set {x1, . . . , xl} of efficient solutions and an ^x, candidate to be efficient, it is possible to calculate the infinity-norm distance from vector C^x to the set {zjz 6 Cxs, for some s = 1, . . . , l} by solving a mixed integer linear programming (MILP) problem.
Lemma 2.3. Let x^1, ..., x^l be efficient solutions to (P), let x̂ ∈ F(P) \ ∪_{s=1}^{l} D_s, where D_s = {x ∈ F(P) : Cx ≤ Cx^s}, and let

  d = min_{s=1,...,l} max_{k=1,...,p} {(Cx̂)_k − (Cx^s)_k}.

Then d = min{‖Cx̂ − z‖_∞ | z ∈ ∪_{s=1}^{l} Z_s}, where Z_s = {z ∈ R^p | z ≤ Cx^s}.

Proof. Let s̄ and k̄ be such that d = (Cx̂)_k̄ − (Cx^s̄)_k̄ and let z̄ be such that z̄_k = min((Cx̂)_k, (Cx^s̄)_k) for k = 1, ..., p. From the definition of z̄ we have that z̄ ≤ Cx^s̄ and ‖Cx̂ − z̄‖_∞ = d. Now for each s = 1, ..., l we have that if z ≤ Cx^s then

  ‖Cx̂ − z‖_∞ = max_{k=1,...,p} |(Cx̂)_k − z_k| ≥ max_{k=1,...,p} {(Cx̂)_k − z_k} ≥ max_{k=1,...,p} {(Cx̂)_k − (Cx^s)_k} ≥ d.  □
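To make the quantity d concrete, the closed form of Lemma 2.3 is easy to evaluate; the small Python sketch below (ours, not part of the paper) computes it for objective vectors that will reappear in the MOILP example of Section 4.1.

    def lemma_2_3_distance(z_hat, z_list):
        """d = min over s of max over k of (z_hat_k - z^s_k), as in Lemma 2.3."""
        return min(max(zh - zs for zh, zs in zip(z_hat, z)) for z in z_list)

    # Infinity-norm distance of the candidate vector (-4, 6) from the region
    # dominated by the two non-dominated vectors (-2, 4) and (0, 1).
    print(lemma_2_3_distance((-4, 6), [(-2, 4), (0, 1)]))   # prints 2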
Lemma 2.4. Let x^1, ..., x^l be efficient solutions to (P) and let x̂ ∈ F(P) \ ∪_{s=1}^{l} D_s. Let us consider the MILP problem¹

  (P_l(x̂)):  max   d
              s.t.  x = x̂,
                    (Cx)_k ≥ (Cx^s)_k y_sk + d − (R − M_k)(1 − y_sk)   for s = 1, ..., l, k = 1, ..., p,
                    Σ_{k=1}^{p} y_sk = 1                               for s = 1, ..., l,
                    y_sk ∈ {0, 1}                                      for s = 1, ..., l, k = 1, ..., p,
                    d ≥ 0,

where M_k is a lower bound on (Cx)_k for all x ∈ F(P), k = 1, ..., p, and R is an upper bound on ‖Cx − Cx′‖_∞ for x, x′ ∈ F(P). Then (P_l(x̂)) has an optimal value d̂ and

  d̂ = min{‖Cx̂ − z‖_∞ | z ∈ ∪_{s=1}^{l} Z_s}.

Proof. First we will show that there is a feasible solution with objective value d̂. For each s, let us choose k_s such that (Cx̂)_{k_s} − (Cx^s)_{k_s} has a maximum value, let ŷ_sk = 0 if k ≠ k_s, ŷ_sk = 1 if k = k_s and d̂ = min_{s=1,...,l} {(Cx̂)_{k_s} − (Cx^s)_{k_s}}. According to Lemma 2.3,

  d̂ = min{‖Cx̂ − z‖_∞ | z ∈ ∪_{s=1}^{l} Z_s}.

Then (x̂, ŷ, d̂) is feasible, as (Cx̂)_{k_s} = (Cx^s)_{k_s} + ((Cx̂)_{k_s} − (Cx^s)_{k_s}) ≥ (Cx^s)_{k_s} + d̂, and for k ≠ k_s we have (Cx̂)_k ≥ M_k ≥ M_k + d̂ − R, because d̂ = min{‖Cx̂ − z‖_∞ | z ∈ ∪_{s=1}^{l} Z_s} ≤ R.

Now we will prove that this solution is optimal. Let (x̂, y, d) be a feasible solution to (P_l(x̂)). For each s = 1, ..., l there is some k_s such that y_sk = 0 if k ≠ k_s and y_sk = 1 if k = k_s. Therefore (Cx̂)_{k_s} ≥ (Cx^s)_{k_s} + d and

  d ≤ (Cx̂)_{k_s} − (Cx^s)_{k_s} ≤ max_{k=1,...,p} {(Cx̂)_k − (Cx^s)_k}.

As this inequality holds for each s = 1, ..., l, we have that

  d ≤ min_{s=1,...,l} max_{k=1,...,p} {(Cx̂)_k − (Cx^s)_k} = d̂.  □
¹ This verbose formulation is used to generalize this result easily to Proposition 2.5.
The main result comes when we relax the constraint x = x̂ to x ∈ F(P).

Proposition 2.5. Let x^1, ..., x^l be efficient solutions to problem (P). Let us consider the MILP problem

  (P_l):  max   d
          s.t.  x ∈ F(P),
                (Cx)_k ≥ (Cx^s)_k y_sk + d − (R − M_k)(1 − y_sk)   for s = 1, ..., l, k = 1, ..., p,
                Σ_{k=1}^{p} y_sk = 1                               for s = 1, ..., l,
                y_sk ∈ {0, 1}                                      for s = 1, ..., l, k = 1, ..., p,
                d ≥ 0,

where M_k is a lower bound on (Cx)_k for all x ∈ F(P), k = 1, ..., p, and R is an upper bound on ‖Cx − Cx′‖_∞ for x, x′ ∈ F(P). Let d* be the optimal value of (P_l). If d* > 0 then

  d* = max_{x ∈ F(P) \ ∪_{s=1}^{l} D_s} min{‖Cx − z‖_∞ | z ∈ ∪_{s=1}^{l} Z_s}.
Otherwise, if d* = 0 then {Cx^1, ..., Cx^l} is the entire set of non-dominated objective vectors for problem (P).

Proof. First we prove that the problem is feasible. In effect, it has a feasible solution for x = x^1, y_s = e_1 = (1, 0, ..., 0)^T for all s = 1, ..., l and d = 0. As F(P) is a bounded set, d is also bounded, because d ≤ (Cx)_k − (Cx^1)_k y_1k + (R − M_k)(1 − y_1k). Therefore the problem has an optimum.

Let (x*, y*, d*) be an optimal solution to this problem. Setting x̂ = x*, we have that (x*, y*, d*) is optimal to the problem in Lemma 2.4 and therefore d* = min{‖Cx* − z‖_∞ | z ∈ ∪_{s=1}^{l} Z_s}. For any other x ∈ F(P) \ ∪_{s=1}^{l} D_s we will now prove that min{‖Cx − z‖_∞ | z ∈ ∪_{s=1}^{l} Z_s} ≤ d*. Proceeding by contradiction, assume there is an x̄ in that set such that min{‖Cx̄ − z‖_∞ | z ∈ ∪_{s=1}^{l} Z_s} > d*. Setting x̂ = x̄ and solving the problem in Lemma 2.4 we would obtain a feasible solution to (P_l) with an objective value greater than d*, contradicting that d* is a maximum of this problem. We must now show that all x ∈ F(P) \ ∪_{s=1}^{l} D_s are feasible. The constraint x ∈ F(P) obviously holds; on the other hand, x ∉ ∪_{s=1}^{l} D_s implies that for each s = 1, ..., l there is a k_s such that (Cx)_{k_s} > (Cx^s)_{k_s} and, defining y_sk = 0 if k ≠ k_s and y_sk = 1 if k = k_s, we have that

  (Cx)_k ≥ (Cx^s)_k y_sk − (R − M_k)(1 − y_sk).

To prove the last statement, we note that the above inequality holds strictly for k = k_s. This implies that if F(P) \ ∪_{s=1}^{l} D_s is non-empty, there are feasible solutions with d > 0; equivalently, d* = 0 implies that F(P) \ ∪_{s=1}^{l} D_s = ∅. Therefore, if d* = 0, then for any x ∈ F(P) there exists an x^s such that Cx ≤ Cx^s, and either Cx = Cx^s (so Cx ∈ {Cx^s}_{s=1}^{l}) or Cx ≤ Cx^s with at least one strict inequality (so Cx is a dominated vector).  □

It is important to point out that the optimal x* to this problem need not be efficient. However, its corresponding objective vector can be used to reduce the search region.

Proposition 2.6. Let z ∈ R^p be a fixed objective vector, λ ∈ R^p a fixed positive weight vector and x* an optimal solution to the problem

  max{λ^t Cx : Cx ≥ z, x ∈ F(P)}.

Then x* is an efficient solution to problem (P).

Proof. Let us suppose that x* is not efficient. Then there exists a feasible x̃ such that Cx̃ ≥ Cx* with at least one strict inequality. As Cx̃ ≥ Cx* ≥ z and λ^t Cx̃ > λ^t Cx*, x* is not optimal. This contradicts the hypothesis that x* is an optimal solution (see Fig. 1).  □
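The MILP (P_l) can be assembled mechanically on top of any model of F(P). The sketch below is a hedged illustration using the PuLP modelling library (our choice of tool; the authors' implementation, described in Section 5, uses OSL), and the function build_P_l and all variable names are ours, not the paper's.

    import pulp

    def build_P_l(model, x, C, z_list, M, R):
        """Add to `model` (a pulp.LpProblem whose constraints already encode
        x in F(P)) the variables and big-M constraints of problem (P_l) in
        Proposition 2.5, and set the objective to maximize d.  C is a list of
        objective-coefficient rows, z_list the objective vectors Cx^1, ..., Cx^l
        of the efficient solutions found so far, M the lower bounds M_k and R
        the range bound of Proposition 2.5."""
        p = len(C)
        d = pulp.LpVariable("d", lowBound=0)
        y = pulp.LpVariable.dicts("y", (range(len(z_list)), range(p)), cat="Binary")
        for s, zs in enumerate(z_list):
            model += pulp.lpSum(y[s][k] for k in range(p)) == 1
            for k in range(p):
                ck_x = pulp.lpSum(C[k][j] * x[j] for j in range(len(x)))
                model += ck_x >= zs[k] * y[s][k] + d - (R - M[k]) * (1 - y[s][k])
        model += 1 * d                 # objective: maximize d
        return d, y

    # Example: problem (P_1) of the MOILP instance in Section 4.1.
    model = pulp.LpProblem("P_1", pulp.LpMaximize)
    x = [pulp.LpVariable(f"x{j}", lowBound=0, upBound=2, cat="Integer") for j in (1, 2)]
    model += x[0] - 2 * x[1] <= 0                          # the feasible set F(P)
    C = [[1, -2], [-1, 3]]                                 # rows of the matrix C
    d, y = build_P_l(model, x, C, [(-2, 4)], M=[-4, -2], R=6)
    model.solve()
    print(pulp.value(d))                                   # expected optimal value: 2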
Fig. 1. Non-dominated vectors greater than a fixed z are non-dominated for (P).
3. Algorithm

Using the results in the previous section, we can state the following algorithm:

(1) (Initialization step.) Choose λ > 0 and a termination condition, and solve (P_λ^0): max{λ^t Cx : x ∈ F(P)}. If no solution exists then stop; otherwise, let x^1 be an optimal solution to this problem. Store x^1 as an efficient solution and set l = 0. Choose M_k and R as indicated in Proposition 2.5.
(2) Let l = l + 1 and solve (P_l) as stated in Proposition 2.5. Let (x*, y*, d*) be an optimal solution to this problem.
(3) If the termination condition holds then stop; otherwise, solve

  (P_λ^l):  max{λ^t Cx : x ∈ F(P), Cx ≥ Cx*}.

Let x^{l+1} be an optimal solution to (P_λ^l). Store x^{l+1} as an efficient solution and go to step 2.

Possible termination conditions are: d* less than a fixed d_end, the number of solutions found reaching a given amount, the running time reaching a given bound, etc. For the following propositions we consider the infinite sequence generated when the termination condition is always false.

Proposition 3.1. If problem (P) is feasible then all problems (P_l) are feasible and their respective optimal values d_l form a non-increasing sequence.

Proof. If problem (P) is feasible then (P_λ^0) has an optimal solution x^1. Let d̄ = 0 and, for each s = 1, ..., l, let ȳ_{s k_s} = 1 for some k_s such that (Cx^1)_{k_s} ≥ (Cx^s)_{k_s} (such an index exists because Cx^1 is non-dominated) and ȳ_sk = 0 for k ≠ k_s. We can verify that (x^1, ȳ, d̄) is feasible for (P_l). Now let (x̂, ŷ, d_{l+1}) be optimal for (P_{l+1}). This solution (deleting the ŷ_{(l+1)k} variables) is feasible for (P_l); therefore d_{l+1} ≤ d_l, as d_l is the maximum value of (P_l).  □

The following proposition refers to the infinite sequence we obtain if we remove the stop command in step 3.

Proposition 3.2. lim_{l→∞} d_l = 0.

Proof. Let us define z̄ such that z̄_k = min_{x∈F(P)} {(Cx)_k}, W = {z | z̄ ≤ z ≤ Cx for some x ∈ F(P)}, W_s = {z | z̄ ≤ z ≤ Cx^s} for s = 0, 1, 2, 3, ... and D̄_s = {z | Cx^s − d_s e < z ≤ Cx^s} for s = 1, 2, 3, ... (e being the vector of all ones). We have that D̄_s ⊆ W_s ⊆ W but (Proposition 2.5) D̄_s ∩ W_{s−1} = ∅. As F(P) is bounded, W has a finite measure μ(W) and Σ_{s=1}^{l} (d_s)^p = Σ_{s=1}^{l} μ(D̄_s) ≤ μ(W) for all l; therefore the series Σ_{s=1}^{∞} (d_s)^p converges and lim_{l→∞} d_l = 0.  □

Corollary 3.3. If the termination condition is d* < d_end, for any fixed d_end > 0, the algorithm ends in a finite number of steps.

Corollary 3.4. If problem (P) is a MOILP and the termination condition is d* = 0, the algorithm ends in a finite number of steps and generates all non-dominated vectors.
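The loop structure is simple enough to sketch in a few lines. The following Python skeleton (our own illustration; the two solver routines are placeholders for whatever MIP code is available, not functions defined in the paper) mirrors steps (1)-(3):

    def dispersed_nondominated(solve_weighted, solve_P_l, d_end=0.0, max_solutions=None):
        """Skeleton of the algorithm of Section 3 (interfaces are our own).

        solve_weighted(bounds) -> (x, Cx) maximizing lambda^t Cx over x in F(P)
            subject to Cx >= bounds componentwise (bounds=None means no extra
            constraint), or None if that problem is infeasible.
        solve_P_l(z_list) -> (z_star, d_star): the objective vector C x* and the
            optimal value of problem (P_l) of Proposition 2.5 built from z_list.
        """
        first = solve_weighted(None)              # step 1: initialization, (P_lambda^0)
        if first is None:
            return [], []                         # (P) itself is infeasible
        efficient, z_list = [first[0]], [first[1]]
        while max_solutions is None or len(efficient) < max_solutions:
            z_star, d_star = solve_P_l(z_list)    # step 2: most distant vector
            if d_star <= d_end:                   # termination (d_end = 0 enumerates
                break                             # everything when (P) is a MOILP)
            x, z = solve_weighted(z_star)         # step 3: Proposition 2.6 guarantees
            efficient.append(x)                   # that x is efficient
            z_list.append(z)
        return efficient, z_list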
4. Numerical examples

4.1. MOILP example

Let us consider the MOILP problem

  (P):  "max"  ( x1 − 2x2,  −x1 + 3x2 )
        s.t.   x1 − 2x2 ≤ 0,
               x1, x2 ∈ {0, 1, 2}.

Feasible solutions and feasible objective vectors of this problem are shown in Figs. 2 and 3. For this example we use M1 = −4, M2 = −2, λ = (4, 3)^T and R = 6. Initially, we must solve
  (P_λ^0):  max   x1 + x2
            s.t.  x1 − 2x2 ≤ 0,
                  x1, x2 ∈ {0, 1, 2}.

An optimal solution for this problem is x^1 = (2, 2)^T. This is an efficient solution for problem (P), corresponding to a non-dominated objective vector equal to (−2, 4)^T.
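Since the instance is tiny, the whole walk-through below can be confirmed by brute force; the following sketch (our own check, not part of the paper) enumerates the seven feasible points and filters out the dominated objective vectors.

    from itertools import product

    # Objective vectors (z1, z2) = (x1 - 2*x2, -x1 + 3*x2) of the feasible points.
    feasible = [(x1, x2) for x1, x2 in product(range(3), repeat=2) if x1 - 2 * x2 <= 0]
    vectors = {(x1 - 2 * x2, -x1 + 3 * x2) for x1, x2 in feasible}

    def dominated(z, others):
        # z is dominated if some other vector is >= z componentwise and differs from z
        return any(w != z and all(wk >= zk for wk, zk in zip(w, z)) for w in others)

    print(sorted(z for z in vectors if not dominated(z, vectors)))
    # [(-4, 6), (-3, 5), (-2, 4), (-1, 2), (0, 1)]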
Fig. 2. Feasible solutions—MOILP example.
Fig. 3. Feasible objective vectors—MOILP example.
After getting this solution, we must solve

  (P_1):  max   d
          s.t.  x1 − 2x2 ≤ 0,
                x1 − 2x2 ≥ −2y11 + d − 10(1 − y11),
                −x1 + 3x2 ≥ 4y12 + d − 8(1 − y12),
                y11 + y12 = 1,
                x1, x2 ∈ {0, 1, 2},
                y11, y12 ∈ {0, 1}.

This problem is equivalent to finding a solution whose objective vector is at maximum distance, measured according to the infinity-norm, from the region dominated by the vector (−2, 4)^T (Fig. 4). From the three solutions shown in the figure, let us suppose we choose the one corresponding to the objective vector (0, 0)^T, that is, x* = (0, 0)^T, y*_1 = (1, 0)^T and d* = 2. This need not be an efficient solution, as shown after solving
  (P_λ^1):  max   x1 + x2
            s.t.  x1 − 2x2 ≤ 0,
                  x1 − 2x2 ≥ 0,
                  −x1 + 3x2 ≥ 0,
                  x1, x2 ∈ {0, 1, 2}.
The optimal solution to this problem is x^2 = (2, 1)^T, corresponding to the non-dominated objective vector (0, 1)^T. In the next step, we solve

  (P_2):  max   d
          s.t.  x1 − 2x2 ≤ 0,
                x1 − 2x2 ≥ −2y11 + d − 10(1 − y11),
                −x1 + 3x2 ≥ 4y12 + d − 8(1 − y12),
                x1 − 2x2 ≥ d − 10(1 − y21),
                −x1 + 3x2 ≥ y22 + d − 8(1 − y22),
                y11 + y12 = 1,
                y21 + y22 = 1,
                x1, x2 ∈ {0, 1, 2},
                y_1, y_2 ∈ {0, 1}^2.
Fig. 4. Most distant vectors to dominated region.
Fig. 5. Most distant vector to dominated region.
The solution to this problem corresponds to the objective vector (−4, 6), which maximizes the distance to the region dominated by the two objective vectors previously found (Fig. 5). This solution is x* = (0, 2)^T, y*_1 = (0, 1)^T, y*_2 = (0, 1)^T and d* = 2. In the next step we solve

  (P_λ^2):  max   x1 + x2
            s.t.  x1 − 2x2 ≤ 0,
                  x1 − 2x2 ≥ −4,
                  −x1 + 3x2 ≥ 6,
                  x1, x2 ∈ {0, 1, 2}.
Here, we verify that x^3 = x* = (0, 2)^T is an efficient solution. Its corresponding objective vector is (−4, 6)^T. In the next step, we solve

  (P_3):  max   d
          s.t.  x1 − 2x2 ≤ 0,
                x1 − 2x2 ≥ −2y11 + d − 10(1 − y11),
                −x1 + 3x2 ≥ 4y12 + d − 8(1 − y12),
                x1 − 2x2 ≥ d − 10(1 − y21),
                −x1 + 3x2 ≥ y22 + d − 8(1 − y22),
                x1 − 2x2 ≥ −4y31 + d − 10(1 − y31),
                −x1 + 3x2 ≥ 6y32 + d − 8(1 − y32),
                ys1 + ys2 = 1,   s = 1, 2, 3,
                x1, x2 ∈ {0, 1, 2},
                y_s ∈ {0, 1}^2,   s = 1, 2, 3.
An optimal solution² to this problem is x* = (1, 1)^T, y*_1 = (1, 0)^T, y*_2 = (0, 1)^T, y*_3 = (1, 0)^T and d* = 1. The next problem is

  (P_λ^3):  max   x1 + x2
            s.t.  x1 − 2x2 ≤ 0,
                  x1 − 2x2 ≥ −1,
                  −x1 + 3x2 ≥ 2,
                  x1, x2 ∈ {0, 1, 2}.
² There is another optimal solution, x* = (1, 2)^T.
The optimal solution is the efficient point x^4 = (1, 1)^T, with an objective vector equal to (−1, 2)^T. Now we define problem (P_4):

  (P_4):  max   d
          s.t.  x1 − 2x2 ≤ 0,
                x1 − 2x2 ≥ −2y11 + d − 10(1 − y11),
                −x1 + 3x2 ≥ 4y12 + d − 8(1 − y12),
                x1 − 2x2 ≥ d − 10(1 − y21),
                −x1 + 3x2 ≥ y22 + d − 8(1 − y22),
                x1 − 2x2 ≥ −4y31 + d − 10(1 − y31),
                −x1 + 3x2 ≥ 6y32 + d − 8(1 − y32),
                x1 − 2x2 ≥ −y41 + d − 10(1 − y41),
                −x1 + 3x2 ≥ 2y42 + d − 8(1 − y42),
                ys1 + ys2 = 1,   s = 1, ..., 4,
                x1, x2 ∈ {0, 1, 2},
                y_s ∈ {0, 1}^2,   s = 1, ..., 4.
This problem has a maximum at x* = (1, 2)^T, y*_1 = y*_2 = y*_4 = (0, 1)^T, y*_3 = (1, 0)^T and d* = 1. In order to get an efficient solution we solve

  (P_λ^4):  max   x1 + x2
            s.t.  x1 − 2x2 ≤ 0,
                  x1 − 2x2 ≥ −3,
                  −x1 + 3x2 ≥ 5,
                  x1, x2 ∈ {0, 1, 2}.
We verify here that x^5 = (1, 2)^T is optimal to this problem and therefore efficient for (P). The corresponding objective vector is (−3, 5)^T. Finally, we consider the problem

  (P_5):  max   d
          s.t.  x1 − 2x2 ≤ 0,
                x1 − 2x2 ≥ −2y11 + d − 10(1 − y11),
                −x1 + 3x2 ≥ 4y12 + d − 8(1 − y12),
                x1 − 2x2 ≥ d − 10(1 − y21),
                −x1 + 3x2 ≥ y22 + d − 8(1 − y22),
                x1 − 2x2 ≥ −4y31 + d − 10(1 − y31),
                −x1 + 3x2 ≥ 6y32 + d − 8(1 − y32),
                x1 − 2x2 ≥ −y41 + d − 10(1 − y41),
                −x1 + 3x2 ≥ 2y42 + d − 8(1 − y42),
                x1 − 2x2 ≥ −3y51 + d − 10(1 − y51),
                −x1 + 3x2 ≥ 5y52 + d − 8(1 − y52),
                ys1 + ys2 = 1,   s = 1, ..., 5,
                x1, x2 ∈ {0, 1, 2},
                y_s ∈ {0, 1}^2,   s = 1, ..., 5.
This problem has an optimal value d* = 0, indicating that the algorithm has already generated the whole set of non-dominated vectors.

4.2. MOLP example

The following example shows how the method can be used to generate a dispersed subset of non-dominated vectors of a MOLP, including non-extreme solutions in different efficient faces, without finding these faces explicitly.³

  (P):  "max"  ( x1 + x3,  x2 )
        s.t.   2 ≤ x1 + x2 + x3 ≤ 3,
               x1 + x2 ≤ 2,
               x2 + x3 ≤ 2,
               x ≥ 0.

This example was solved in [5] and its set of non-dominated vectors is H({z^1, z^2}) ∪ H({z^2, z^3}), where H(X) is the convex hull of the set X and z^1 = (3, 0), z^2 = (2, 1), z^3 = (0, 2), as shown in Fig. 6. For this example we will set M1 = −4, M2 = −3, λ = (1, 1)^T and R = 4. Initially we solve the linear programming problem
  (P_λ^0):  max   x1 + x2 + x3
            s.t.  2 ≤ x1 + x2 + x3 ≤ 3,
                  x1 + x2 ≤ 2,
                  x2 + x3 ≤ 2,
                  x ≥ 0.

An optimal solution to this problem is x^1 = (2, 0, 1)^T, corresponding to the non-dominated vector (3, 0)^T. In the next step we must solve
  (P_1):  max   d
          s.t.  2 ≤ x1 + x2 + x3 ≤ 3,
                x1 + x2 ≤ 2,
                x2 + x3 ≤ 2,
                x1 + x3 ≥ 3y11 + d − 8(1 − y11),
                x2 ≥ d − 7(1 − y12),
                y11 + y12 = 1,
                y11, y12 ∈ {0, 1},
                x ≥ 0.
³ Sayin [13] has developed a method for finding discrete representations of the efficient set for MOLP problems, but her procedure requires finding the efficient faces.
Fig. 6. Feasible non-dominated vectors for MOLP example.
Here we have an optimum at x* = (0, 2, 0)^T, d* = 2 and y*_1 = (0, 1)^T. Now we check whether x* is efficient:

  (P_λ^1):  max   x1 + x2 + x3
            s.t.  2 ≤ x1 + x2 + x3 ≤ 3,
                  x1 + x2 ≤ 2,
                  x2 + x3 ≤ 2,
                  x1 + x3 ≥ 0,
                  x2 ≥ 2,
                  x ≥ 0.
Here we have that x^2 = (0, 2, 0)^T is optimal to this problem and therefore (0, 2)^T is non-dominated. Now, to obtain a new solution, we first solve

  (P_2):  max   d
          s.t.  2 ≤ x1 + x2 + x3 ≤ 3,
                x1 + x2 ≤ 2,
                x2 + x3 ≤ 2,
                x1 + x3 ≥ 3y11 + d − 8(1 − y11),
                x2 ≥ d − 7(1 − y12),
                x1 + x3 ≥ d − 8(1 − y21),
                x2 ≥ 2y22 + d − 7(1 − y22),
                ys1 + ys2 = 1,   s = 1, 2,
                y_s ∈ {0, 1}^2,   s = 1, 2,
                x ≥ 0.

We have an optimal solution to this problem at x* = (2/3, 4/3, 2/3)^T, d* = 4/3, y*_1 = (1, 0)^T, y*_2 = (0, 1)^T. Continuing with problem (P_λ^2) we can verify that this solution is efficient. Notice that its corresponding non-dominated vector is (4/3, 4/3)^T, which is not extreme, as it belongs to the (efficient) segment H({z^2, z^3}). The procedure can be continued as long as desired; clearly it cannot generate the whole set of non-dominated vectors, but it is useful for generating a set of well-dispersed solutions (Fig. 7).
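The value d* = 4/3 can be double-checked against the closed form of Lemma 2.3 (a small sanity check of ours, not part of the paper):

    from fractions import Fraction as F

    # Distance (Lemma 2.3) of the candidate vector (4/3, 4/3) from the region
    # dominated by the two vectors found so far, (3, 0) and (0, 2).
    z_hat = (F(4, 3), F(4, 3))
    found = [(F(3), F(0)), (F(0), F(2))]
    print(min(max(zh - zs for zh, zs in zip(z_hat, z)) for z in found))   # 4/3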
Fig. 7. Partial solution of MOLP example.
4.3. MOMILP example

We will now show a very simple example with two continuous variables and a binary one (Fig. 8):

  (P):  "max"  ( x1,  x2 )
        s.t.   x1 + 2x2 + 2w ≤ 4,
               2x1 + x2 − 2w ≤ 2,
               x ≥ 0,  w ∈ {0, 1}.

Here we can choose M1 = M2 = 0, R = 2 and λ = (1, 1)^T. Let us suppose we want to find only four non-dominated vectors. To find an initial solution we solve

  (P_λ^0):  max   x1 + x2
            s.t.  x1 + 2x2 + 2w ≤ 4,
                  2x1 + x2 − 2w ≤ 2,
                  x ≥ 0,  w ∈ {0, 1}.
This problem has an optimal solution at x^1 = (0, 2)^T, w = 0 (there is another possibility), and we have an initial non-dominated vector (0, 2)^T. In the next step we must solve
Fig. 8. Feasible vectors for MOMILP example.
  (P_1):  max   d
          s.t.  x1 + 2x2 + 2w ≤ 4,
                2x1 + x2 − 2w ≤ 2,
                x1 ≥ d − 2(1 − y11),
                x2 ≥ 2y12 + d − 2(1 − y12),
                y11 + y12 = 1,
                y11, y12, w ∈ {0, 1},
                x ≥ 0.

This problem has an optimal solution at x* = (2, 0)^T, w* = 1, y*_1 = (1, 0)^T and d* = 2. To check whether this solution is efficient, we solve
  (P_λ^1):  max   x1 + x2
            s.t.  x1 + 2x2 + 2w ≤ 4,
                  2x1 + x2 − 2w ≤ 2,
                  x1 ≥ 2,
                  x2 ≥ 0,
                  x ≥ 0,  w ∈ {0, 1}.

After solving this problem, we verify that we have an efficient solution x^2 = (2, 0)^T. In this example, all (P_λ^l) problems are trivial (their feasible sets are singletons). For the sake of brevity, we will omit the formulation of these problems in the rest of the steps. To find another solution we solve
  (P_2):  max   d
          s.t.  x1 + 2x2 + 2w ≤ 4,
                2x1 + x2 − 2w ≤ 2,
                x1 ≥ d − 2(1 − y11),
                x2 ≥ 2y12 + d − 2(1 − y12),
                x1 ≥ 2y21 + d − 2(1 − y21),
                x2 ≥ d − 2(1 − y22),
                ys1 + ys2 = 1,   s = 1, 2,
                y_s ∈ {0, 1}^2,   s = 1, 2,
                x ≥ 0,  w ∈ {0, 1}.
Fig. 9. First four non-dominated vectors for MOMILP example.
This problem has an optimal solution at x* = (2/3, 2/3)^T, w* = 1, y*_1 = (1, 0)^T, y*_2 = (0, 1)^T and d* = 2/3. After solving (P_λ^2) we can check that x^3 = (2/3, 2/3)^T is another efficient solution. In order to get another solution we must solve

  (P_3):  max   d
          s.t.  x1 + 2x2 + 2w ≤ 4,
                2x1 + x2 − 2w ≤ 2,
                x1 ≥ d − 2(1 − y11),
                x2 ≥ 2y12 + d − 2(1 − y12),
                x1 ≥ 2y21 + d − 2(1 − y21),
                x2 ≥ d − 2(1 − y22),
                x1 ≥ (2/3)y31 + d − 2(1 − y31),
                x2 ≥ (2/3)y32 + d − 2(1 − y32),
                ys1 + ys2 = 1,   s = 1, 2, 3,
                y_s ∈ {0, 1}^2,   s = 1, 2, 3,
                x ≥ 0,  w ∈ {0, 1}.
An optimal solution to this problem is x* = (10/9, 4/9)^T, w* = 1, y*_1 = (1, 0)^T, y*_2 = (0, 1)^T, y*_3 = (1, 0)^T and d* = 4/9. By solving (P_λ^3) we can verify that we have a new efficient solution x^4 = (10/9, 4/9)^T. Fig. 9 shows how the non-dominated vectors found are distributed.

5. Computational results

The algorithm was implemented in a C++ program using OSL Version 3 [6]. For the branching process, the artificial variables y_sk had a higher priority than the original x_j variables. The program was run on a Pentium III under Windows XP.
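For readers who prefer to reproduce the small examples with an off-the-shelf library rather than OSL, the subproblems are easy to state. The following hedged PuLP sketch (our own code, not the authors') formulates problem (P_1) of the MOMILP example of Section 4.3.

    import pulp

    # (P_1) of the MOMILP example: maximize the infinity-norm distance d of the
    # objective vector (x1, x2) from the region dominated by (0, 2).
    # Here M1 = M2 = 0 and R = 2, so the big-M coefficient R - M_k equals 2.
    m = pulp.LpProblem("P_1_momilp", pulp.LpMaximize)
    x1 = pulp.LpVariable("x1", lowBound=0)
    x2 = pulp.LpVariable("x2", lowBound=0)
    w = pulp.LpVariable("w", cat="Binary")
    y11 = pulp.LpVariable("y11", cat="Binary")
    y12 = pulp.LpVariable("y12", cat="Binary")
    d = pulp.LpVariable("d", lowBound=0)

    m += 1 * d                                  # objective: maximize d
    m += x1 + 2 * x2 + 2 * w <= 4
    m += 2 * x1 + x2 - 2 * w <= 2
    m += x1 >= d - 2 * (1 - y11)                # s = 1, k = 1; (Cx^1)_1 = 0
    m += x2 >= 2 * y12 + d - 2 * (1 - y12)      # s = 1, k = 2; (Cx^1)_2 = 2
    m += y11 + y12 == 1

    m.solve()
    print(pulp.value(d), pulp.value(x1), pulp.value(x2))   # expect d = 2 at (2, 0)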
Table 1
Knapsack problems with two objective functions

              Solutions      Time (seconds)               Nodes                  Simplex iter.          % Time
 m   n        Mean   Max     Mean    Max                  Mean       Max         Mean        Max
 5   15       7.2    13      3.8     19.0      (P_l)      2012.9     8836        3967.8      17,380     81
                                               (P_λ^l)    501.2      1548        1031.5      3226       12
     20       12.8   22      38.7    170.8     (P_l)      11,039.2   37,046      22,792.4    75,362     94
                                               (P_λ^l)    1866.6     5465        3923.9      11,617     5
     25       14.5   25      81.5    767.3     (P_l)      22,794.7   141,950     50,473.7    331,203    95
                                               (P_λ^l)    3569.1     18,367      7992.8      43,268     4
     30       20.8   36      335.1   1261.0    (P_l)      59,182.6   163,321     133,027.0   361,996    97
                                               (P_λ^l)    7621.4     19,378      17,107.2    43,268     2
 10  15       7.3    19      8.7     100.3     (P_l)      3305.1     25,753      8130.0      63,587     88
                                               (P_λ^l)    729.5      3427        1852.9      8587       9
     20       13.3   30      128.6   1570.6    (P_l)      23,742.8   195,797     61,946.3    490,333    96
                                               (P_λ^l)    3187.0     16,230      8665.5      45,183     3
     25       16.6   28      128.1   573.9     (P_l)      30,294.3   92,762      82,626.6    239,928    96
                                               (P_λ^l)    4239.5     9598        11,748.2    26,042     4
     30       21.0   33      591.3   3101.0    (P_l)      95,998.1   404,811     274,127.4   1,184,705  98
                                               (P_λ^l)    10,721.0   41,822      30,999.3    125,860    2
The method was tested on the same problems used in [15]. In that work, we developed a procedure for MOILP in which the results are generated in order of a weighted sum of the objective function values. For each set of problems we show the number of solutions found, CPU times, the number of nodes and simplex iterations for the (P_l) and (P_λ^l) subproblems, and the percentage of total time spent solving these problems.

The first set of problems consists of randomly generated multiconstrained 0-1 knapsack problems with two objective functions. Objective function and constraint coefficients are uncorrelated integers uniformly distributed between 1 and 99. For each constraint, the right-hand side value is set to a (truncated) 50% of the sum of its coefficients. For all these problems, the complete set of non-dominated vectors was generated. Table 1 shows cumulative results for batches of 30 problems of different dimensions. For both subproblem classes in the procedure, we report the mean values of the number of solutions, CPU time, number of nodes and number of simplex iterations for each set of 30 problems, as well as the same quantities in the worst case (max). We also report the mean percentage of time spent solving each type of subproblem.

Next, the algorithm was tested on randomly generated multiconstrained 0-1-2 knapsack problems with two objective functions. Objective function and constraint coefficients are uncorrelated integers uniformly distributed between 1 and 99. For each constraint, the right-hand side value is set to the sum of its coefficients. The complete set of non-dominated vectors was generated and the results are shown in Table 2.

Table 2
0-1-2 Integer linear problems with two objective functions

              Solutions      Time (seconds)               Nodes                  Simplex iter.          % Time
 m   n        Mean   Max     Mean    Max                  Mean       Max         Mean        Max
 5   12       12.4   31      44.2    932.7     (P_l)      10,096.8   145,300     15,931.7    221,498    95
                                               (P_λ^l)    1331.5     12,939      2228.9      20,582     2
     15       18.2   35      217.4   1044.5    (P_l)      33,532.2   104,700     67,390.7    199,707    97
                                               (P_λ^l)    3776.0     8829        8062.2      18,734     2
     18       19.7   41      248.0   1797.5    (P_l)      35,741.1   166,763     62,765.3    302,965    97
                                               (P_λ^l)    3994.7     13,461      7335.7      25,806     1
 10  12       17.5   37      135.0   1072.6    (P_l)      23,272.7   114,177     40,048.0    189,247    97
                                               (P_λ^l)    2623.5     11,892      4792.0      21,217     2
     15       19.7   41      248.0   1797.5    (P_l)      35,741.1   166,763     62,765.3    302,965    97
                                               (P_λ^l)    3994.7     13,461      7335.7      25,806     1
     18       22.8   48      722.4   9364.8    (P_l)      81,384.0   572,577     168,596.5   1,143,445  98
                                               (P_λ^l)    7360.3     36,183      16,514.8    77,189     1
Table 3
Multi-knapsack problems with three objective functions

                     Solutions     Time (seconds)              Nodes                Simplex iter.       % Time
 m   n    d_end      Mean   Max    Mean   Max                  Mean      Max        Mean      Max
 5   15   525        4.2    9      1.1    2.5       (P_l)      551.3     1361       1508.0    3756      63
                                                    (P_λ^l)    249.6     681        510.0     1306      20
     20   700        4.0    7      1.8    5.3       (P_l)      990.3     2751       2842.1    8081      71
                                                    (P_λ^l)    427.8     1139       966.6     2695      20
     25   825        4.5    9      2.9    13.6      (P_l)      1457.8    5975       4258.3    18,497    71
                                                    (P_λ^l)    729.6     2869       1727.5    6965      23
     30   1050       4.9    7      4.9    15.5      (P_l)      2541.0    9062       7695.2    27,368    76
                                                    (P_λ^l)    1071.9    2655       2674.7    7377      21
 10  15   525        4.4    9      2.3    10.3      (P_l)      1093.0    4583       3882.4    17,732    74
                                                    (P_λ^l)    395.2     1191       1109.9    3426      18
     20   700        4.3    7      3.0    7.9       (P_l)      1444.5    3587       5227.7    15,118    71
                                                    (P_λ^l)    671.8     1446       2023.7    4357      23
     25   825        4.9    8      5.6    16.0      (P_l)      2572.3    6807       9879.2    27,726    74
                                                    (P_λ^l)    1189.6    3125       3818.1    10,297    23
     30   1050       7.6    11     81.2   362.2     (P_l)      26,735.9  104,348    99,843.9  362,583   70
                                                    (P_λ^l)    16,851.6  73,864     70,171.0  357,711   30
A third batch of test problems consists of multiconstrained 0-1 knapsack problems with three objective functions. In this case, only a small subset of well-dispersed solutions was generated; this was accomplished by stipulating a d_end value proportional to the number of variables, about 10% of the order of magnitude of the objective functions. Objective function and constraint coefficients are randomly generated integers between 1 and 999, and the right-hand side value of each constraint is set to one half of the sum of its coefficients. Results are shown in Table 3.

The algorithm was also tested on generalized assignment problems (GAP) with two objective functions. The GAP deals with the optimal allocation of s agents to a group of t tasks in such a way that each task j is assigned to exactly one agent i, incurring a cost c_ij (a vector in the multiple objective case) and consuming r_ij units of a single resource, subject to an availability of b_i for each agent. This results in the formulation
  (GAP):  min   Σ_{i=1}^{s} Σ_{j=1}^{t} c_ij x_ij
          s.t.  Σ_{j=1}^{t} r_ij x_ij ≤ b_i,   i = 1, ..., s,
                Σ_{i=1}^{s} x_ij = 1,          j = 1, ..., t,
                x_ij ∈ {0, 1},                 i = 1, ..., s, j = 1, ..., t.

Table 4
GAP with two objective functions

                     Solutions     Time (seconds)             Nodes               Simplex iter.        % Time
 s   t    d_end      Mean   Max    Mean    Max                Mean      Max       Mean        Max
 5   50   500        23.6   32     43.4    99.1     (P_l)     3154.7    5515      17,967.7    31,132   84
                                                    (P_λ^l)   517.1     1524      1447.0      3470     3
     60   600        23.2   33     58.3    165.9    (P_l)     3538.7    7976      21,443.0    31,984   85
                                                    (P_λ^l)   596.1     1211      1695.3      3662     3
     70   700        21.9   30     58.5    132.5    (P_l)     3498.0    6442      23,345.0    36,792   86
                                                    (P_λ^l)   604.0     1241      1826.6      3718     4
     80   800        23.2   29     86.8    143.6    (P_l)     4379.1    6771      30,617.9    53,695   87
                                                    (P_λ^l)   788.2     1455      2402.6      4380     4
     90   900        22.7   29     97.0    176.1    (P_l)     4726.8    7605      33,504.9    56,204   88
                                                    (P_λ^l)   740.5     1694      2323.9      5437     3
     100  1000       24.0   33     145.6   335.1    (P_l)     5910.3    10,185    46,395.5    79,400   89
                                                    (P_λ^l)   1083.4    1757      3300.3      5873     3
 10  50   500        24.9   31     103.0   218.8    (P_l)     3794.7    6137      35,114.5    56,902   84
                                                    (P_λ^l)   642.8     1191      2237.0      3976     3
     60   600        25.7   31     149.6   271.9    (P_l)     5027.8    8288      49,374.1    72,424   86
                                                    (P_λ^l)   943.2     1978      3517.5      6839     3
     70   700        25.8   34     194.6   321.0    (P_l)     5471.9    7980      60,426.8    91,120   86
                                                    (P_λ^l)   1055.0    2433      4128.3      8173     3
     80   800        25.5   31     285.9   819.0    (P_l)     6135.0    9215      65,705.7    97,079   87
                                                    (P_λ^l)   1188.4    2651      4532.1      11,130   3
     90   900        25.9   31     299.6   460.7    (P_l)     6594.6    9941      76,325.7    102,270  87
                                                    (P_λ^l)   1284.0    1998      5040.0      8119     3
     100  1000       25.3   30     381.2   559.4    (P_l)     7604.7    10,987    91,496.9    141,838  88
                                                    (P_λ^l)   1565.0    3313      5987.7      11,627   3
 15  50   500        24.9   31     173.2   408.4    (P_l)     4413.0    7087      43,948.6    80,702   84
                                                    (P_λ^l)   827.4     1383      3168.4      5450     3
     60   600        24.8   32     218.5   370.3    (P_l)     4784.8    7146      55,391.9    96,317   85
                                                    (P_λ^l)   893.8     1842      3637.3      6535     3
     70   700        25.2   29     316.1   508.9    (P_l)     5658.6    9348      68,304.4    107,762  87
                                                    (P_λ^l)   968.5     1784      3573.3      6899     2
     80   800        24.7   30     426.6   695.0    (P_l)     6565.0    11,633    77,644.7    126,838  87
                                                    (P_λ^l)   1245.3    2533      4630.4      6754     3
     90   900        24.8   30     494.0   806.7    (P_l)     6988.8    9741      87,802.3    144,618  88
                                                    (P_λ^l)   1542.8    3086      6142.2      12,888   3
     100  1000       24.9   31     632.4   1045.8   (P_l)     7726.6    11,114    101,491.8   179,300  88
                                                    (P_λ^l)   1476.0    2671      5821.9      14,000   3
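To close the loop with the GAP experiments, the sketch below generates a random bi-objective instance following the scheme described in the next paragraph and solves the initial λ-weighted subproblem with λ = (1, 1). It is a hedged illustration in Python/PuLP (the generator, the instance sizes and every name are ours, not the authors' C++/OSL code); since the GAP is a minimization problem, the weighted subproblem minimizes λ^t Cx.

    import random
    import pulp

    random.seed(0)
    s_agents, t_tasks, lam = 5, 20, (1, 1)    # the experiments use t between 50 and 100
    c = [[[random.randint(1, 999) for _ in range(t_tasks)] for _ in range(s_agents)]
         for _ in range(2)]                   # two cost matrices, one per objective
    r = [[random.randint(1, 999) for _ in range(t_tasks)] for _ in range(s_agents)]
    b = [sum(row) // 2 for row in r]          # availability: 50% of each row sum

    m = pulp.LpProblem("gap_weighted", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (range(s_agents), range(t_tasks)), cat="Binary")
    m += pulp.lpSum(lam[k] * c[k][i][j] * x[i][j]
                    for k in range(2) for i in range(s_agents) for j in range(t_tasks))
    for i in range(s_agents):                 # resource availability of each agent
        m += pulp.lpSum(r[i][j] * x[i][j] for j in range(t_tasks)) <= b[i]
    for j in range(t_tasks):                  # each task assigned to exactly one agent
        m += pulp.lpSum(x[i][j] for i in range(s_agents)) == 1
    m.solve()
    print(pulp.LpStatus[m.status])            # instances of this kind are usually feasible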
In these experiments, the components of c_ij and the r_ij are randomly selected integers between 1 and 999, and each b_i is fixed at 50% of the sum of the corresponding r_ij. In this case, the value of d_end is proportional to the number of tasks, about 1% of the order of magnitude of the objective functions. Results are shown in Table 4.

6. Conclusions

According to our results, the algorithm can generate the whole set of non-dominated vectors for MOILP problems, although in practice using the method to enumerate the whole set of non-dominated vectors would be expensive except for small problems. The procedure is better suited to obtaining partial sets of non-dominated vectors, in which case it is not limited to integer variables. The algorithm produces well-dispersed subsets of non-dominated vectors without relying on filtering techniques. This can make the procedure a useful tool in decision making, especially at early stages when there is little information on the decision maker's preferences. The algorithm can be easily implemented using available integer programming libraries such as IBM OSL.

References

[1] M.J. Alves, J. Clímaco, An interactive reference point approach for multiobjective mixed-integer programming using branch and bound, European Journal of Operational Research 124 (3) (2000) 478–494.
[2] G.R. Bitran, Linear multiple objective programs with zero–one variables, Mathematical Programming 13 (1) (1977) 121–139.
[3] J. Climaco, C. Ferreira, M.E. Captivo, Multicriteria integer programming: An overview of different algorithmic approaches, in: J. Climaco (Ed.), Multicriteria Analysis, Springer-Verlag, Berlin, 1997, pp. 248–258.
[4] J.G. Ecker, N.S. Hegner, I.A. Kouada, Generating all maximal efficient faces for multiple objective linear programs, Journal of Optimization Theory and Applications 30 (1980) 353–381.
[5] T. Gal, A general method for determining the set of all efficient solutions to a linear vector maximum problem, European Journal of Operational Research 1 (5) (1977) 307–322.
[6] IBM, Optimization Subroutine Library Release 2.1, 1992.
[7] H. Isermann, The enumeration of the set of all efficient solutions for a linear multiple objective program, Operational Research Quarterly 28 (1977) 711–725.
[8] J. Karaivanova, S. Narula, V. Vassilev, An interactive algorithm for integer programming, European Journal of Operational Research 68 (3) (1993) 344–351.
[9] D. Klein, E. Hannan, An algorithm for the multiple objective integer linear programming problem, European Journal of Operational Research 9 (4) (1982) 378–385.
[10] G. Mavrotas, D. Diakoulaki, A branch and bound algorithm for mixed zero–one multiple objective linear programming, European Journal of Operational Research 107 (3) (1998) 530–541.
[11] L.M. Rasmussen, Zero–one programming with multiple criteria, European Journal of Operational Research 26 (1) (1986) 83–95.
[12] S. Sayin, Measuring the quality of discrete representations of efficient sets in multiple objective mathematical programming, Mathematical Programming 87 (3) (2000) 543–560.
[13] S. Sayin, A procedure to find discrete representations of the efficient set with specified coverage errors, Operations Research 51 (3) (2003) 427–436.
[14] R.E. Steuer, Multiple Criteria Optimization—Theory, Computation and Application, John Wiley and Sons, 1986.
[15] J. Sylva, A. Crema, A method for finding the set of nondominated vectors for multiple objective integer linear programs, European Journal of Operational Research 158 (1) (2004) 46–55.
[16] J. Teghem, P.L.J. Kunsch, A survey of techniques for finding efficient solutions to multi-objective integer linear programming, Asia-Pacific Journal of Operational Research 3 (1986) 95–108.
[17] E.L. Ulungu, J. Teghem, Multi-objective combinatorial optimization problems: A survey, Journal of Multi-Criteria Decision Analysis 3 (1994) 83–104.