
Operations Research Letters 36 (2008) 393–396 www.elsevier.com/locate/orl

Additive and multiplicative tolerance in multiobjective linear programming

Milan Hladík, Charles University, Department of Applied Mathematics, Malostranské nám. 25, 118 00 Prague, Czech Republic

Received 18 May 2007; accepted 2 October 2007. Available online 23 December 2007.

Abstract

We consider a multiobjective linear program. We propose a procedure for computing an additive and a multiplicative (percentage) tolerance within which all the objective function coefficients may simultaneously and independently vary while preserving the efficiency of a given solution. For a nondegenerate basic solution, the procedure runs in polynomial time. © 2007 Elsevier B.V. All rights reserved.

Keywords: Multiobjective linear programming; Efficient point; Sensitivity analysis; Tolerance analysis; Generalized fractional programming

1. Introduction

Sensitivity analysis is an important way to study uncertainties occurring in real-life problems. It gives us basic information on stability, robustness and the impact of errors on a model. The tolerance approach was developed in [11,12] for dealing with simultaneous and independent changes of linear program coefficients. It provides a decision maker with an easy-to-use method for tackling perturbations of selected parameters. However, this approach is, in essence, restricted to considering only parameters in one line: the right-hand side, the objective function coefficients, or a row/column of the coefficient matrix. Multiobjective linear programming (MOLP) problems are often solved using a weighted sum of objective functions. This is why tolerance analysis in MOLP was mainly concerned with sensitivity of these weights [1,2,5,6]. Considering all the objective function coefficients seemed to be unrealistic. Nevertheless, we propose a simple procedure calculating a quite large (but not maximal in general) tolerance in which all the objective function coefficients may vary while retaining efficiency of a given point. We use the following notation:

A_B       the submatrix of A consisting of the rows indexed by B;
A_i•      the i-th row of a matrix A;
sgn(z)    the sign of a real z, i.e., sgn(z) = 0 if z = 0, sgn(z) = 1 if z > 0, and sgn(z) = −1 if z < 0;
e         a vector of ones (of convenient dimension);
‖x‖∞      the Chebyshev norm of a vector, i.e., ‖x‖∞ = max_i |x_i|;
‖A‖∞      the Chebyshev norm of a matrix, i.e., ‖A‖∞ = max_{i,j} |A_ij|;
|A|       the absolute value of a matrix A, i.e., |A| = (|A_ij|);
cl S      the closure of a set S;
int S     the interior of a set S.

E-mail address: [email protected]. doi:10.1016/j.orl.2007.10.002
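A caveat worth making explicit: the matrix norm ‖A‖∞ used here is the entrywise Chebyshev norm, which differs from the induced ∞-norm (maximum absolute row sum). A two-line check (an illustration of the definitions above, not from the paper):

```python
# The paper's ||A||_inf is the entrywise maximum max_{i,j} |A_ij|,
# not the induced infinity norm (maximum absolute row sum).
A = [[-1, 3], [4, -3]]

cheb = max(abs(a) for row in A for a in row)          # entrywise maximum
induced = max(sum(abs(a) for a in row) for row in A)  # max absolute row sum

print(cheb, induced)   # 4 7
```

For this matrix the two norms disagree (4 versus 7), so the distinction matters when reading the bounds below.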

Let us consider the multiobjective linear program

max Cx subject to x ∈ M,   (1)

where the feasible set M := {x ∈ R^n : Ax ≤ b}, and C ∈ R^{s×n}, A ∈ R^{m×n}, b ∈ R^m. A feasible solution x* to (1) is called efficient if there is no x ∈ M such that Cx ≥ Cx* and Cx ≠ Cx*. Let us introduce a general δ-neighborhood of a matrix C as

O_{δ,G}(C) := {Ĉ : |C_ij − Ĉ_ij| < δ|G|_ij if |G|_ij > 0, C_ij = Ĉ_ij if |G|_ij = 0},

where G is a given matrix. Although in classical tolerance approaches in linear programming a closed neighborhood is naturally used, here it is much more useful to use an open one because of the possible noncompactness of the efficient set.
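To make the neighborhood definition concrete, here is a small membership test for O_{δ,G}(C); the function name is ours, not the paper's. Note the strict inequality, reflecting the open neighborhood, and the requirement that entries with G_ij = 0 stay fixed.

```python
# Illustrative membership test for the neighborhood O_{delta,G}(C):
# C_hat belongs iff each entry deviates from C by strictly less than
# delta*|G_ij| when |G_ij| > 0, and matches C exactly when G_ij = 0.
def in_neighborhood(C, C_hat, delta, G):
    for c_row, ch_row, g_row in zip(C, C_hat, G):
        for cij, chij, gij in zip(c_row, ch_row, g_row):
            if abs(gij) > 0:
                if not abs(cij - chij) < delta * abs(gij):
                    return False
            elif cij != chij:
                return False
    return True

C = [[2.5, 2.0], [3.5, 0.65]]
C_hat = [[2.6, 1.9], [3.5, 0.65]]

# Percentage deviation (G = C): each entry may move by < 10% of itself.
print(in_neighborhood(C, C_hat, 0.10, C))       # True
# Additive deviation (G all ones): each entry may move by < 0.1,
# and the 0.1 perturbations are not strictly below the bound.
ones = [[1, 1], [1, 1]]
print(in_neighborhood(C, C_hat, 0.10, ones))    # False
```

The contrast in the two calls shows why the percentage case is scale-aware: the same perturbation of 0.1 is well inside a 10% band around 2.5 but sits exactly on the boundary of an additive band of radius 0.1.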


For G = C, this concept corresponds to the most frequently used percentage deviation. When G consists of all ones, it represents an additive perturbation. For the purposes of this paper, an additive δ-neighborhood of a matrix Z is denoted separately by

O_δ(Z) := {Ẑ : ‖Ẑ − Z‖∞ < δ} = {Ẑ : |Ẑ_ij − Z_ij| < δ for all i, j}.

An additive tolerance for an efficient solution x* is any real δ > 0 such that x* remains efficient to

max Ĉx subject to x ∈ M   (2)

for every Ĉ ∈ O_δ(C). A multiplicative tolerance for an efficient x* and given G is any δ > 0 such that x* remains efficient to (2) for every Ĉ ∈ O_{δ,G}(C). From a practical viewpoint, the multiplicative tolerance is much more useful than the additive deviation, since it takes into account the diverse magnitudes of the coefficients. But the most common percentage case of the multiplicative approach has one drawback: a percentage tolerance for zero values gives us no information; the zeros have to remain zeros.

2. Additive tolerance

Let x* be a vertex of M that is efficient to (1). First we compute the normal (polar) cone N(x*) of M at the point x* [8,10] by means of the inequalities

N(x*) = {x ∈ R^n : Dx ≥ 0}.   (3)

If x* is a nondegenerate basic solution with the optimal basis B, then D = (A_B^{−1})^T. That is, the normal cone is described by (A_B^{−1})^T x ≥ 0. Otherwise, let h_i, i ∈ I, be the directions of all edges emanating from x*. Then the normal cone N(x*) is characterized by the system −h_i^T x ≥ 0 for all i ∈ I.

Let us recall the well-known [3] characterization of efficiency in multiobjective linear programming.

Theorem 1. The feasible solution x* is efficient if and only if there exists λ > 0 such that DC^T λ ≥ 0.

Since x* is efficient by our assumption, there is some λ > 0 such that DC^T λ ≥ 0. That is, x* is an optimal solution to max_{x∈M} c(λ)^T x, where c(λ) = C^T λ. The main idea is to determine the maximum tolerance δ for the vector c(λ) with fixed λ, and subsequently also such a vector λ for which the corresponding maximum tolerance of c(λ) is as large as possible.

Lemma 2. Let λ ≥ 0 with e^T λ = 1, and let δ > 0 be given. If O_δ(C^T λ) ⊆ int N(x*), then Ĉ^T λ ∈ int N(x*) for every Ĉ ∈ O_δ(C).

Proof. Let Ĉ be an arbitrary matrix with ‖Ĉ − C‖∞ < δ. It is sufficient to prove that Ĉ^T λ ∈ O_δ(C^T λ), i.e., ‖Ĉ^T λ − C^T λ‖∞ < δ. We have ‖Ĉ^T λ − C^T λ‖∞ = ‖(Ĉ^T − C^T)λ‖∞ ≤ ‖Ĉ^T − C^T‖∞ e^T λ < δ, which completes the proof. □

Lemma 3. Let λ ≥ 0 and suppose that C^T λ ∈ int N(x*). Then x* is an efficient solution to (1).

Proof. If λ > 0, then the statement follows from Theorem 1. Otherwise, define c_ε := C^T(λ + εe), where ε > 0 is sufficiently small. Then c_ε ∈ N(x*) and, hence, x* is an efficient solution. □

The following theorem says how to effectively compute an additive tolerance δ*, in which all coefficients of C can independently vary. As we see in Example 5, the tolerance δ* is not maximal in general. This is a drawback when dealing with perturbation of all entries of C.

Theorem 4 (Additive Tolerance). Let (λ*, δ*) be an optimal solution to the linear program

max δ subject to DC^T λ − |D|eδ ≥ 0, λ, δ ≥ 0, e^T λ = 1.   (4)

Then for every Ĉ ∈ O_{δ*}(C) the point x* is an efficient solution to the multiobjective linear program (2).

Proof. Let λ ≥ 0, e^T λ = 1. Suppose that c := C^T λ ∈ N(x*) and that for a certain δ > 0 we have cl O_δ(c) ⊆ N(x*), which is equivalent to O_δ(c) ⊆ int N(x*). By Lemma 2, Ĉ^T λ ∈ int N(x*) for all Ĉ ∈ O_δ(C). By Lemma 3, x* is an efficient solution of (2) for every Ĉ ∈ O_δ(C). It remains to determine the maximum tolerance δ satisfying our assumptions. We use the optimization problem

max δ subject to cl O_δ(c) ⊆ N(x*),

which we rewrite as

max δ subject to Dĉ ≥ 0 for all ĉ ∈ O_δ(c),

or

max δ subject to Dc + Dêδ ≥ 0 for all ê : |ê| ≤ e.

The k-th inequality D_k•c + D_k•êδ ≥ 0 is true for all ê such that |ê| ≤ e if and only if D_k•c − |D|_k•eδ ≥ 0. Hence, the optimization problem

max δ subject to Dc − |D|eδ ≥ 0

yields the same or a smaller tolerance. So far, the weights λ have been fixed. To compute the largest tolerance we introduce λ as new variables of the optimization problem. We get the linear program (4). □

Let us note that the linear program (4) is infeasible if and only if x* is not an efficient solution of (1). It is also easy to see that for each index i ∈ {1, …, s} with λ*_i = 0 the tolerance of the i-th row of C is infinite. That is, for such indices the elements of C_i• can vary absolutely arbitrarily and independently while the other elements vary only within the δ-neighborhood.

Example 5. Consider (inspired by [9])

C = ( 2.5   2
      3.5   0.65 ),

M = {x ∈ R^2 : 3x1 + 4x2 ≤ 42, 3x1 + x2 ≤ 24, x2 ≤ 9, x ≥ 0},
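The additive-tolerance computation of Example 5 can be checked numerically. The sketch below is ours, not the paper's: substituting λ2 = 1 − λ1 turns LP (4) into a one-dimensional max-min problem over λ1 ∈ [0, 1], and since the minimum of affine functions is concave, a ternary search suffices.

```python
# Numeric check of Example 5 (illustrative sketch, not the paper's method).
# With e^T lam = 1, i.e. lam2 = 1 - lam1, LP (4) maximizes
#   min_k (D C^T lam)_k / (|D| e)_k   over lam1 in [0, 1].
def delta_of(lam1):
    lam2 = 1.0 - lam1
    row1 = 3.5 * lam1 - 1.55 * lam2    # (D C^T lam)_1
    row2 = 4.0 * lam1 + 12.05 * lam2   # (D C^T lam)_2
    return min(row1 / 4.0, row2 / 7.0) # |D| e = (4, 7)

lo, hi = 0.0, 1.0
for _ in range(200):                   # ternary search on a concave function
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if delta_of(m1) < delta_of(m2):
        lo = m1
    else:
        hi = m2
lam1 = (lo + hi) / 2
print(round(lam1, 4), round(delta_of(lam1), 4))   # 0.8742 0.7161
```

The search recovers the optimum reported in Example 5, λ ≈ (0.8742, 0.1258)^T with δ* ≈ 0.7161.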


and the (nondegenerate) basic solution x* = (6, 6)^T. Then B = (1, 2) and the normal cone at x* is described as follows:

N(x*) = {x ∈ R^2 : (A_B^{−1})^T x ≥ 0} = {x ∈ R^2 : −x1 + 3x2 ≥ 0, 4x1 − 3x2 ≥ 0}.

Hence

D = ( −1   3
       4  −3 ).

According to Theorem 4 we solve the linear program

max δ subject to
3.5λ1 − 1.55λ2 − 4δ ≥ 0,
4λ1 + 12.05λ2 − 7δ ≥ 0,
λ1 + λ2 = 1, λ, δ ≥ 0.

The optimal solution is λ ≈ (0.8742, 0.1258)^T, δ* ≈ 0.7161. Therefore, we obtain the additive tolerance 0.7161. However, this tolerance is not maximal. In this simple example it is easy to find that the maximum tolerance is 7/8.

3. Multiplicative tolerance

In this section we generalize the additive tolerance approach to the multiplicative one. Although a multiplicative tolerance will not be calculated using a simple linear program, the computation can be done in polynomial time under the same assumptions.

Let x* be a vertex of M that is efficient to (1). First we proceed as in the previous section and compute the normal cone N(x*) of M at the point x* by means of the inequalities (3). By Theorem 1, the efficiency of x* implies that there is some λ > 0 such that x* is an optimal solution to the linear program max_{x∈M} c(λ)^T x, where c(λ) = C^T λ. Again, we determine a maximum tolerance δ for the vector c(λ) with fixed λ. Afterwards we relax λ and compute for which λ the corresponding maximum tolerance of c(λ) is as large as possible.

Lemma 6. Let λ ≥ 0, λ ≠ 0, and δ > 0 be given. Suppose that ĉ ∈ int N(x*) for every ĉ such that |ĉ − C^T λ| ≤ δ|G|^T λ and |ĉ − C^T λ|_i < δ(|G|^T λ)_i if (|G|^T λ)_i > 0. Then Ĉ^T λ ∈ int N(x*) for every Ĉ ∈ O_{δ,G}(C).

Proof. Let Ĉ ∈ O_{δ,G}(C). We have |Ĉ^T λ − C^T λ| = |(Ĉ^T − C^T)λ| ≤ |Ĉ − C|^T λ ≤ δ|G|^T λ, and the right inequality is strict for each component i with (|G|^T λ)_i > 0. According to the assumption of Lemma 6 we have ĉ := Ĉ^T λ ∈ int N(x*), which completes the proof. □

The following theorem says how to compute a multiplicative tolerance δ*. Such a tolerance δ* may not be the best possible; see Example 9.

Theorem 7 (Multiplicative Tolerance). Let (λ*, δ*) be an optimal solution to the mathematical program

max δ subject to DC^T λ − δ|D||G|^T λ ≥ 0, DC^T λ ≥ e, λ, δ ≥ 0.   (5)

Then for every Ĉ ∈ O_{δ*,G}(C) the point x* is an efficient solution to the multiobjective linear program (2).

Proof. Let λ ≥ 0 and define c := C^T λ. Suppose that c ∈ int N(x*), that is, Dc > 0. Furthermore, suppose that for a certain δ > 0 we have ĉ ∈ int N(x*) for every ĉ such that |ĉ − c| ≤ δ|G|^T λ and |ĉ − c|_i < δ(|G|^T λ)_i if (|G|^T λ)_i > 0. By Lemma 6, Ĉ^T λ ∈ int N(x*) for all Ĉ ∈ O_{δ,G}(C). By Lemma 3, x* is an efficient solution of (2) for every Ĉ ∈ O_{δ,G}(C). It remains to determine the maximum tolerance δ satisfying our assumptions. The equivalent formulation for the second one is that ĉ ∈ N(x*) for every ĉ such that |ĉ − c| ≤ δ|G|^T λ (since c ∈ int N(x*)). To obtain the maximal δ we use the optimization problem

max δ subject to Dĉ ≥ 0 for all ĉ : |ĉ − c| ≤ δ|G|^T λ, Dc > 0.   (6)

The k-th inequality D_k•ĉ ≥ 0 is true for all ĉ such that |ĉ − c| ≤ δ|G|^T λ if and only if D_k•c − δ|D|_k•|G|^T λ ≥ 0. This is because the worst case is when ĉ_i = c_i − δ sgn(D_ki)(|G|^T λ)_i, and then D_k•ĉ = D_k•c − δ|D|_k•|G|^T λ. Putting all the inequalities together we get a sufficient condition. Hence, the optimization problem

max δ subject to Dc − δ|D||G|^T λ ≥ 0, Dc > 0,

or

max δ subject to DC^T λ − δ|D||G|^T λ ≥ 0, DC^T λ ≥ e,

yields an optimal value that is equal to or less than that of (6). So far, the weights λ have been fixed. To compute the largest possible tolerance we introduce λ as new variables of the optimization problem. We get the optimization problem (5). □

Remark 8. The nonlinear program (5) is a particular case of a so-called generalized linear fractional program

max min_i (P_i•x / Q_i•x) subject to Qx > 0, Rx ≥ r,

or, equivalently,

max α subject to Px − αQx ≥ 0, Qx ≥ 0, Rx ≥ r,

which is polynomially solvable using an interior point method [4,7].

Example 9. Reconsider the problem from Example 5 and set G = C. Recall that

C = ( 2.5   2          D = ( −1   3
      3.5   0.65 ),           4  −3 ).

According to Theorem 7 we solve the generalized linear fractional problem

max δ subject to
3.5λ1 − 1.55λ2 − δ(8.5λ1 + 5.45λ2) ≥ 0,
4λ1 + 12.05λ2 − δ(16λ1 + 15.95λ2) ≥ 0,
3.5λ1 − 1.55λ2 ≥ 1,
4λ1 + 12.05λ2 ≥ 1,
λ, δ ≥ 0.


The optimal solution is λ* = (101/121, 20/121)^T, δ* = 1/3. Therefore, we obtain the percentage tolerance 33.33%. However, this tolerance is not maximal. In this simple example it is easy to find that the maximal tolerance is 7/17 ≈ 41.18%.
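Example 9 can also be verified numerically. The sketch below is ours, not the paper's: with λ2 = 1 − λ1, program (5) maximizes the smaller of the two ratios (DC^T λ)_k / (|D||G|^T λ)_k, and for this data the first ratio is increasing in λ1 while the second is decreasing, so their minimum is unimodal and a ternary search finds the crossing point.

```python
# Numeric check of Example 9 (illustrative sketch, assuming unimodality,
# which holds for this data). With lam2 = 1 - lam1, maximize
#   min_k (D C^T lam)_k / (|D||G|^T lam)_k.
def ratio_min(lam1):
    lam2 = 1.0 - lam1
    n1 = 3.5 * lam1 - 1.55 * lam2    # (D C^T lam)_1
    n2 = 4.0 * lam1 + 12.05 * lam2   # (D C^T lam)_2
    d1 = 8.5 * lam1 + 5.45 * lam2    # (|D||G|^T lam)_1
    d2 = 16.0 * lam1 + 15.95 * lam2  # (|D||G|^T lam)_2
    return min(n1 / d1, n2 / d2)

lo, hi = 0.31, 1.0                   # n1 > 0 requires lam1 > 1.55/5.05
for _ in range(200):                 # ternary search on the unimodal minimum
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if ratio_min(m1) < ratio_min(m2):
        lo = m1
    else:
        hi = m2
lam1 = (lo + hi) / 2
print(round(lam1 * 121), round(ratio_min(lam1), 4))   # 101 0.3333
```

The search recovers λ1* = 101/121 and δ* = 1/3, matching the stated optimum.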

References

[1] A.R.P. Borges, C.H. Antunes, A visual interactive tolerance approach to sensitivity analysis in MOLP, Eur. J. Oper. Res. 142 (2) (2002) 357–381.
[2] A.R.P. Borges, C.H. Antunes, Stability of efficient solutions against weight changes in multi-objective linear programming models, in: Proceedings of MOPGP'06, 7th International Conference on Multi-Objective Programming and Goal Programming, 2006.
[3] M. Ehrgott, Multicriteria Optimization, 2nd ed., Springer, Berlin, 2005.
[4] R.W. Freund, F. Jarre, An interior-point method for multifractional programs with convex constraints, J. Optim. Theory Appl. 85 (1) (1995) 125–161.
[5] P. Hansen, M. Labbé, R.E. Wendell, Sensitivity analysis in multiple objective linear programming: The tolerance approach, Eur. J. Oper. Res. 38 (1) (1989) 63–69.
[6] A.M. Mármol, J. Puerto, Special cases of the tolerance approach in multiobjective linear programming, Eur. J. Oper. Res. 98 (3) (1997) 610–616.
[7] Y.E. Nesterov, A.S. Nemirovskij, An interior-point method for generalized linear-fractional programming, Math. Programming 69 (1B) (1995) 177–204.
[8] F. Nožička, L. Grygarová, K. Lommatzsch, Geometrie konvexer Mengen und konvexe Analysis, Akademie-Verlag, Berlin, 1988.
[9] C. Oliveira, C.H. Antunes, Multiple objective linear programming models with interval coefficients – an illustrated overview, Eur. J. Oper. Res. 181 (3) (2007) 1434–1463.
[10] R.T. Rockafellar, R.J.-B. Wets, Variational Analysis, 2nd corrected printing, Springer, Berlin, 2004.
[11] J.E. Ward, R.E. Wendell, Approaches to sensitivity analysis in linear programming, Ann. Oper. Res. 27 (1990) 3–38.
[12] R.E. Wendell, Linear programming III: The tolerance approach, in: T. Gal, et al. (Eds.), Advances in Sensitivity Analysis and Parametric Programming, Int. Ser. Oper. Res. Manag. Sci., vol. 6, Kluwer Academic Publishers, Dordrecht, 1997, pp. 1–21 (Chapter 5).