Applied Mathematics and Computation 225 (2013) 487–502
A new multi-section based technique for constrained optimization problems with interval-valued objective function

Samiran Karmakar a,*, Asoke Kumar Bhunia b

a Department of Business Mathematics and Statistics, St. Xavier's College (Autonomous), 30, Mother Teresa Sarani (Park Street), Kolkata 700016, India
b Department of Mathematics, The University of Burdwan, Burdwan 713104, India
Keywords: Interval optimization; Interval objective function; Interval mathematics; Interval order relations; Decision theory
Abstract: In this paper, an efficient optimization technique is proposed for constrained optimization problems with interval-valued objective functions. At first, the significance of the interval-valued objective function and the meaning of the interval-valued solution of the proposed problem are explained with a graphical interpretation. Generally, this type of problem has infinitely many compromise solutions; the aim of this approach is to obtain one such solution with higher accuracy and lower computational cost. The proposed technique is mainly based on a splitting criterion for the accepted/prescribed search region, the calculation of interval inclusion functions, and the selection of subregions by modified interval order relations reflecting the decision maker's point of view. Novel interval oriented constraint satisfaction rules are used for non-interval-valued equality and inequality constraints. The proposed technique is essentially a variant of the well-known interval analysis based branch and bound (B&B) optimization approach; the modified multi-section division criterion, the new constraint satisfaction rules, and the new interval order relations together increase the efficiency of the proposed algorithm. Finally, the technique is applied to some test problems and the results are compared with those obtained from existing methods. © 2013 Elsevier Inc. All rights reserved.
1. Introduction

Since the Second World War, Optimization Theory and Operations Research (O.R.) have developed both theoretically and in their applications to diverse fields. In most situations, these subjects were developed with respect to some ideal environment. Usually, Optimization and O.R. models are constructed with deterministic parameters, and the corresponding solutions are also precise. However, in reality, these models cannot be formulated in such a straightforward way, as real-life situations are full of uncertainty and risk, so decisions have to be taken under changeable conditions. We illustrate this situation with the following example.

Example 1.1. A company produces two grades of a particular product, viz. A1 and A2. Owing to raw material restrictions, it cannot produce more than 400 tons of grade A1 and 300 tons of grade A2 in a week. There are 160 production hours in a week, and it requires 0.2 and 0.4 h to produce a ton of products A1 and A2, respectively. The expected profit of A1 lies between $200 and $250 per ton, and that of A2 between $500 and $600 per ton. Formulate the problem of maximizing the profit.
Corresponding author e-mail: [email protected] (S. Karmakar). http://dx.doi.org/10.1016/j.amc.2013.09.042
Mathematical formulation: Clearly, the individual profits (per ton) of the products are not fixed, and hence it is not possible to formulate an appropriate profit function using only fixed (real) coefficients. In this case, it is necessary to introduce a linear programming problem with an interval-valued objective function. Let x1 and x2 be the numbers of tons produced of the two grades of products. The expected profits of the products A1 and A2 are represented by the intervals U1 = [200, 250] and U2 = [500, 600], respectively. Hence, to maximize the total profit, the corresponding optimization problem is as follows:
Maximize f(x, U) = U1 x1 + U2 x2,

subject to the constraints

x1 ≤ 400, x2 ≤ 300, 0.2 x1 + 0.4 x2 ≤ 160 and x1, x2 ≥ 0.

Here, the objective function f is defined as f: R² → I, where R and I represent the set of real numbers and the set of intervals, respectively. In the above example the profit is an interval. This concept is obviously more realistic than the earlier concept of profit as a fixed amount such as $220 or $450, because factors like market value, labor cost, etc. differ from time to time, and the actual profit cannot be predicted in advance. To deal with such situations, most researchers commonly use fuzzy or stochastic approaches, or both. In fuzzy programming techniques, the imprecise parameters are regarded as fuzzy sets and their membership functions are assumed to be known, whereas in stochastic programming approaches they are viewed as random variables whose probability distributions also need to be known. In these techniques, the membership functions and the probability distribution functions play important roles in formulating the problems and converting them into crisp problems. However, in reality, it is very difficult to choose the proper membership function or to select the appropriate probability distribution in advance. In recent years, O.R. scientists have been motivated to treat model parameters as intervals, and extensive applications of interval oriented methods of this type can be found in the field of optimization theory [1,4,6–9,13,14,16–19,24,26]. In many of these works, interval arithmetic is used to obtain guaranteed enclosures of functions so as to derive a deterministic optimization problem. The main advantage of using interval oriented techniques is that one has to calculate only the bounds of the intervals that specify the limits of uncertainty.
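The interval-valued objective of Example 1.1 can be evaluated at any candidate production plan with elementary interval arithmetic. A minimal sketch (not from the paper; all function names are ours) assuming non-negative decision variables, so that scaling an interval preserves the order of its end points:

```python
# Sketch: interval-valued profit f(x, U) = U1*x1 + U2*x2 of Example 1.1.
# For a non-negative real x, [a, b]*x = [a*x, b*x]; intervals add end point-wise.

def interval_scale(iv, x):
    """Multiply the interval iv = (lo, hi) by a non-negative real x."""
    lo, hi = iv
    return (lo * x, hi * x)

def interval_add(p, q):
    return (p[0] + q[0], p[1] + q[1])

U1, U2 = (200.0, 250.0), (500.0, 600.0)   # profit intervals in $/ton

def profit(x1, x2):
    return interval_add(interval_scale(U1, x1), interval_scale(U2, x2))

def feasible(x1, x2):
    return (x1 <= 400 and x2 <= 300
            and 0.2 * x1 + 0.4 * x2 <= 160
            and x1 >= 0 and x2 >= 0)

# e.g. the plan x1 = 400, x2 = 200 uses all 160 production hours and
# yields the profit interval (180000.0, 220000.0)
```

Note that the result of `profit` is itself an interval, which is exactly why an order relation between intervals (Section 3) is needed before "maximize" has a definite meaning.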
Interval oriented methods are deterministic in nature [5]; nevertheless, the uncertainties of real world problems can easily be represented by intervals and handled by interval arithmetic and interval oriented processes. Several studies have developed efficient and effective optimization techniques for solving interval-valued objective optimization problems. There exist various approaches to solving interval oriented optimization problems: some guarantee an enclosure of the set of all optimal solutions covering all possibilities [4,13,14,26], while in others the aim is to approximate a compromise solution [1,7,24]. Ishibuchi and Tanaka [7] considered the linear programming problem with interval coefficients in the objective function; they solved it by converting it into a bi-criteria optimization problem using a preference relation between two intervals introduced by them. Two solution techniques for linear programming models with imprecise coefficients were proposed by Tong [24]: in the first, interval-valued numbers represent the coefficients of the objective function and constraints of a linear programming problem, while in the second, fuzzy numbers are used for the same purpose. Introducing a generalization of [6] via the t0,t1-cut of an interval, Chanas and Kuchta [19] solved the interval objective linear optimization problem. Another approach for solving interval linear programming problems was proposed by Sengupta et al. [1]. Very recently, Suprajitno and Bin Mohd [9] have used a modified simplex method for interval linear programming problems. However, most of these techniques are restricted to linear programming problems with inequality constraints, whereas nonlinearity in the model formulation is inevitable for most engineering and management science problems.
In solving constrained optimization problems with an interval objective function, the order relation between intervals plays an important role in taking the appropriate decision regarding the choice of the optimal interval. Moore [17] first suggested two transitive order relations; however, these relations are not sufficient for partially or fully overlapping intervals. After Moore, different types of interval order relations have been proposed by several researchers in different contexts; in this area, one may refer to the works of Ishibuchi and Tanaka [7], Chanas and Kuchta [19], Sengupta and Pal [2], Levin [29] and others. Recently, some new definitions of order relations between two intervals have been proposed by Mahato and Bhunia [25] to overcome the incompleteness of the earlier definitions. They defined the order relations ≤omin and ≥omax for optimistic decision making, and ≤pmin and >pmax for pessimistic decision making. A detailed survey of the existing definitions of order relations of intervals, with a comparative discussion, has been given by Karmakar and Bhunia [20]. The solution methodology is a very important factor in finding the global optimum of a linear/nonlinear, convex/nonconvex, real or interval-valued continuous optimization problem. In the last few decades, several researchers have developed a number of techniques for solving optimization problems. Markót et al. [15] introduced the multi-section method, replacing the previously used bisection division criterion, for global optimization, while Csallner et al. [3] presented the theoretical background and convergence properties of the multi-section method. According to their point
of view, the multi-section method was indispensable for solving robust optimization problems satisfactorily. Introducing the rejection index pf*, Casado et al. [12] also investigated heuristic variants of simple branch and bound (B&B) algorithms. In their procedure they assumed that the global optimal value, or an approximation to it, was known in advance. A more general indicator, given by pf(fk, X) = (fk − FL(X)) / (FU(X) − FL(X)), where fk is the kth iterate approximating the global optimum and F(X) = [FL(X), FU(X)] is the interval inclusion of f on the argument interval X, was introduced by Csendes [27] for the selection of better subintervals. Using this pf indicator, a more general solution technique was provided by Csendes [27] for global optimization problems. However, this process also needs an initial approximation of the global optimal solution, which is a complicated task. The efficiency of a B&B algorithm depends largely on the 'subdivision direction selection' rule used. Previously, many researchers have used different types of subdivision direction selection rules [6,28]. According to Csendes [28], if the interval inclusion function [17,18] is the only available information about the interval objective function, then the traditional approach of choosing the direction of subdivision in which the actual box has the largest width is the best rule. Two different multi-splitting techniques for the global solution of nonlinear bound-constrained optimization problems were recently introduced by Karmakar et al. [23] using the concepts of Markót et al. [15] and Csallner et al. [3]; they suggested that the multi-section division technique is the more fruitful of the two.
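The pf indicator of Csendes [27] described above is a one-line computation; a minimal sketch (the function name is ours):

```python
# Sketch of the pf indicator: pf(fk, X) = (fk - FL(X)) / (FU(X) - FL(X)),
# where [FL, FU] is an interval inclusion of f over the box X and fk is the
# current approximation of the global minimum.

def pf(fk, FL, FU):
    """Relative position of fk inside the inclusion [FL, FU]; a smaller
    value marks a more promising subinterval for a minimization B&B."""
    assert FU > FL, "inclusion must have positive width"
    return (fk - FL) / (FU - FL)

# e.g. pf(1.0, 0.0, 4.0) -> 0.25
```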
Later on, Karmakar and Bhunia [21,22] applied the multi-section technique successfully to solve non-interval (or degenerate-interval) constrained global optimization problems and bound constrained problems with uncertain coefficients in interval form. If the coefficients of the constraints are all degenerate intervals, the proposed constraint handling technique can be used very easily, as done by Karmakar and Bhunia [22]. In this work, an interval oriented optimization technique based on the division criteria of the prescribed/accepted search region developed by Karmakar et al. [23] is proposed to solve constrained optimization problems involving interval-valued objective functions. This is a different kind of B&B algorithm. In this regard, first of all, the significance of the interval objective function and the solutions of the prescribed interval-valued optimization problems are explained. Generally, this type of problem has infinitely many compromise solutions; the aim of this approach is to obtain one such solution with higher accuracy and lower computational cost. The solution procedure requires interval arithmetic and interval order relations defined in the context of the decision maker's point of view. In the proposed technique, the accepted search domain (initially the prescribed search region of the problem) is split into several equal and disjoint subregions. The given constraints are tested for satisfaction in each subregion. If the constraints are satisfied in a subregion, the objective function is computed there in the form of an interval. Then, comparing these interval objective values with each other by the definitions of the interval order relations, the subregion containing the better objective value is accepted. The process is repeated until the interval width for each variable in the reduced subregion is negligible, and the optimal solution in interval form is obtained.
Finally, the proposed technique has been applied to some test problems to examine the efficiency of the algorithm. In the next section, we discuss the relevant parts of interval mathematics, including interval inclusion functions and the fundamental theorem of interval analysis. In Section 3, a brief comparative discussion of some interval ordering definitions is given. Section 4 provides the statement of the problem, the concept of interval-valued optimal solutions, and the geometrical interpretation of the interval objective function and the optimal solutions in terms of the decision maker's choice. In Section 5, we give the details of the proposed solution technique, and Section 6 includes the numerical experiments and comparative discussions.

2. Interval mathematics

An interval A can be defined as A = [aL, aR] = {x : aL ≤ x ≤ aR, x ∈ R} with width (aR − aL). Alternatively, an interval can also be expressed in centre-width form as A = ⟨aC, aW⟩ = {x : aC − aW ≤ x ≤ aC + aW, x ∈ R}, where aC = (aL + aR)/2 is the centre and aW = (aR − aL)/2 is the half-width (in the rest of the article, 'half-width' is referred to simply as the 'width' of the interval). Here, all lower case italic letters and upper case italic letters denote real numbers and intervals, respectively, on the real line R. In recent years, the concept of intervals has been extended to semi-open and open intervals. The details of interval mathematics are available in [6,8,10,16,23,25]. Here, some formulations regarding intervals are discussed.

Definition 2.1. Let A = [aL, aR] be an interval, and n be any non-negative integer. Then the nth power of A is defined by
A^n = [1, 1] if n = 0;
A^n = [aL^n, aR^n] if aL ≥ 0 or if n is odd;
A^n = [aR^n, aL^n] if aR ≤ 0 and n is even;
A^n = [0, max(aL^n, aR^n)] if aL ≤ 0 ≤ aR and n (> 0) is even.
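The case analysis of Definition 2.1 translates directly into code; a sketch (not the authors' implementation), using end point pairs for intervals:

```python
def interval_power(a_l, a_r, n):
    """n-th power of the interval [a_l, a_r], n a non-negative integer,
    following the four cases of Definition 2.1."""
    if n == 0:
        return (1.0, 1.0)
    if a_l >= 0 or n % 2 == 1:          # monotone increasing on the interval
        return (a_l ** n, a_r ** n)
    if a_r <= 0:                        # n even, interval non-positive
        return (a_r ** n, a_l ** n)
    return (0.0, max(a_l ** n, a_r ** n))   # n even, 0 inside [a_l, a_r]

# e.g. interval_power(-2, 3, 2) -> (0.0, 9.0), since 0 lies in [-2, 3]
```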
The above definition is given in [6]. An extensive development of the interval arithmetic is also given in [8,10,16]. Karmakar et al. [23] defined the formulae for nth root, rational power and modulus of the intervals in the following ways.
490
S. Karmakar, A.K. Bhunia / Applied Mathematics and Computation 225 (2013) 487–502
Definition 2.2. The nth root of an interval A = [aL, aR], n being a positive integer, is defined as
A^{1/n} = [aL, aR]^{1/n} = [aL^{1/n}, aR^{1/n}] if aL ≥ 0 or if n is odd;
A^{1/n} = [0, aR^{1/n}] if aL ≤ 0, aR ≥ 0 and n is even;
A^{1/n} = ∅ if aR < 0 and n is even,

where ∅ is the empty interval. Again, by applying Definitions 2.1 and 2.2, the rational power of an interval A = [aL, aR] is defined as A^{p/q} = (A^p)^{1/q}.
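Definition 2.2 can be sketched as follows (not the authors' code; for odd n and a negative end point, the real n-th root is taken with the sign restored, and `None` stands for the empty interval ∅):

```python
def interval_root(a_l, a_r, n):
    """n-th root of [a_l, a_r] per Definition 2.2; n a positive integer.
    Returns None for the empty interval."""
    if a_l >= 0 or n % 2 == 1:
        # for odd n, the real root of a negative number exists
        def root(v):
            return (abs(v) ** (1.0 / n)) * (1 if v >= 0 else -1)
        return (root(a_l), root(a_r))
    if a_r >= 0:                    # a_l < 0 <= a_r, n even
        return (0.0, a_r ** (1.0 / n))
    return None                     # a_r < 0 and n even: empty interval
```

A rational power A^{p/q} can then be formed by composing this with the nth-power rule of Definition 2.1, as the text states.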
Definition 2.3. The modulus of an interval can be defined as follows
|A| = |[aL, aR]| = [aL, aR] if aL ≥ 0;
|A| = [|aR|, |aL|] if aR ≤ 0;
|A| = [0, max{|aL|, |aR|}] if aL < 0 < aR.

Recently, Sahoo et al. [11] defined the interval power of an interval as follows:

Definition 2.4. For two arbitrary intervals A = [aL, aR] and B = [bL, bR],

A^B = [aL, aR]^{[bL, bR]} = [exp(min(bL log aL, bL log aR, bR log aL, bR log aR)), exp(max(bL log aL, bL log aR, bR log aL, bR log aR))] if aL ≥ 0; and A^B is a complex interval if aL < 0.

According to Moore [17], an interval function is an interval-valued function of one or more interval arguments. Let f: Rⁿ → R be a real valued function of real variables x1, x2, ..., xn and F: Iⁿ → I be an interval-valued function of interval variables X1, X2, ..., Xn. The function F is said to be an interval extension of f if F(x1, x2, ..., xn) = f(x1, x2, ..., xn) for all xi (i = 1, 2, ..., n). Again, an interval function F is said to be inclusion monotonic if Xi ⊆ Yi (i = 1, 2, ..., n) implies F(X1, X2, ..., Xn) ⊆ F(Y1, Y2, ..., Yn). Now we state the fundamental theorem of interval analysis, which is the most important one for any interval oriented method [6].

Theorem 2.1. Let F(X1, X2, ..., Xn) be an inclusion monotonic interval extension of a real function f(x1, x2, ..., xn). Then F(X1, X2, ..., Xn) contains the range of values of f(x1, x2, ..., xn) for all xi ∈ Xi (i = 1, 2, ..., n).

Different types of well known functions with interval arguments are defined here. For a monotonically increasing function f(x) on the interval A = [aL, aR], where x ∈ R,
f(A) = f([aL, aR]) = [f(aL), f(aR)].

Similarly, if f(x) is a monotonically decreasing function on the interval A = [aL, aR], then

f(A) = f([aL, aR]) = [f(aR), f(aL)].

By the above definitions, the exponential, logarithmic, etc. functions can be expressed for interval arguments, as they are strictly monotonic. A comprehensive discussion of functions with interval arguments is given in [6,8,10]. Bounded periodic functions, e.g., the trigonometric functions sin(A) or cos(A), can be defined as follows:
sin([aL, aR]) = [bL, bR], where

bL = −1 if there exists k ∈ Z such that 2kπ − π/2 ∈ [aL, aR], and bL = min{sin(aL), sin(aR)} otherwise;

bR = 1 if there exists k ∈ Z such that 2kπ + π/2 ∈ [aL, aR], and bR = max{sin(aL), sin(aR)} otherwise.
In a similar way, cos([aL, aR]) can be defined. Other trigonometric functions can also be defined:

tan([aL, aR]) = [tan(aL), tan(aR)] if (2k + 1)π/2 ∉ [aL, aR] for any k ∈ Z;
cosec([aL, aR]) = [min{cosec(aL), cosec(aR)}, max{cosec(aL), cosec(aR)}] if kπ ∉ [aL, aR] for any k ∈ Z;
sec([aL, aR]) = [min{sec(aL), sec(aR)}, max{sec(aL), sec(aR)}] if (2k + 1)π/2 ∉ [aL, aR] for any k ∈ Z; and
cot([aL, aR]) = [cot(aR), cot(aL)] if kπ ∉ [aL, aR] for any k ∈ Z.
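The rule for sin over an interval stated above can be implemented directly; a sketch (names ours), where the bound is ±1 exactly when a critical point 2kπ ∓ π/2 falls inside the argument interval and is attained at an end point otherwise:

```python
import math

def interval_sin(a_l, a_r):
    """Inclusion of sin over [a_l, a_r] following the end point/critical
    point rule of Section 2."""
    def contains_crit(offset):
        # is 2*k*pi + offset in [a_l, a_r] for some integer k?
        k = math.ceil((a_l - offset) / (2 * math.pi))   # smallest candidate k
        return 2 * k * math.pi + offset <= a_r
    lo = -1.0 if contains_crit(-math.pi / 2) else min(math.sin(a_l), math.sin(a_r))
    hi = 1.0 if contains_crit(math.pi / 2) else max(math.sin(a_l), math.sin(a_r))
    return (lo, hi)
```

For example, on [0, π] the maximum point π/2 lies inside, so the upper bound is 1, while the lower bound is attained at the end points.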
3. Order relations of intervals

In solving interval-valued optimization problems, the decision regarding the order relation between two arbitrary intervals is an important question. In this section, we discuss previous developments of order relations of intervals. Any two closed intervals A = [aL, aR] and B = [bL, bR] may be of the following types:

Type I: Non-overlapping intervals, i.e., aL > bR or bL > aR.
Type II: Partially overlapping intervals, i.e., bL ≤ aL ≤ bR < aR or aL ≤ bL ≤ aR < bR.
Type III: Fully overlapping intervals, i.e., aL ≤ bL < bR ≤ aR or bL ≤ aL < aR ≤ bR.
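The three-way classification can be sketched as a small helper (ours, not the authors'), treating a pair where one interval is nested in the other, boundary cases included, as Type III:

```python
def interval_type(a, b):
    """Classify the pair a = (aL, aR), b = (bL, bR) into Type I/II/III."""
    aL, aR = a
    bL, bR = b
    if aL > bR or bL > aR:
        return "I"    # non-overlapping
    if (aL <= bL and bR <= aR) or (bL <= aL and aR <= bR):
        return "III"  # fully overlapping: one interval nested in the other
    return "II"       # partially overlapping

# e.g. interval_type((10, 50), (25, 45)) -> "III"
```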
Till now, several researchers have proposed different definitions of interval order relations. Some of them are discussed briefly, with emphasis on the definitions of Mahato and Bhunia [25]. For any two intervals A = [aL, aR] and B = [bL, bR], Moore [17] gave the first transitive order relation '<' as

Definition 3.1
A < B iff aR < bL.

Another transitive order relation for intervals is

Definition 3.2

A ⊆ B iff bL ≤ aL and aR ≤ bR.

These two order relations cannot order two partially or fully overlapping intervals. The second one, which is the extension of the set inclusion property, cannot order A and B in terms of value; it describes only the condition that the interval A is nested in B [2]. Ishibuchi and Tanaka [7] defined the order relations for minimization problems of two closed intervals A = [aL, aR] = ⟨aC, aW⟩ and B = [bL, bR] = ⟨bC, bW⟩ in the following way:

Definition 3.3

(i) A ≤LR B iff aL ≤ bL and aR ≤ bR; A <LR B iff A ≤LR B and A ≠ B.

Definition 3.4

(ii) A ≤CW B iff aC ≤ bC and aW ≤ bW; A <CW B iff A ≤CW B and A ≠ B.

Definition 3.5. The t0,t1-cut of an interval A is defined as

A/[t0,t1] = [aL + t0(aR − aL), aL + t1(aR − aL)].

Using this definition of the t0,t1-cut, Chanas and Kuchta [19] modified the interval ranking definitions of [7]. For minimization problems, they considered the previous Definitions 3.3 and 3.4 introduced in [7] and redefined them as follows:

Definition 3.6

(i) A ≤LR/[t0,t1] B ⟺ A/[t0,t1] ≤LR B/[t0,t1]; A <LR/[t0,t1] B ⟺ A/[t0,t1] <LR B/[t0,t1].
Definition 3.7
(ii) A ≤CW/[t0,t1] B ⟺ A/[t0,t1] ≤CW B/[t0,t1]; A <CW/[t0,t1] B ⟺ A/[t0,t1] <CW B/[t0,t1].

Definition 3.8. For ranking two intervals, Sengupta and Pal [2] proposed the acceptability index

A(A, B) = (bC − aC) / (bW + aW), where bW + aW ≠ 0.
A(A, B) may be regarded as the grade of acceptability of the 'first interval to be inferior to the second'. If A(A, B) = 0 then, for a minimization problem, the interval A cannot be accepted. If 0 < A(A, B) < 1, then A can be accepted with the grade of acceptability (bC − aC)/(bW + aW). Finally, if A(A, B) ≥ 1, then A is accepted with full satisfaction. According to them, the acceptability index is only a value based ranking index, and it can be applied only partially to select the best alternative from the pessimistic point of view of the decision maker; only the optimistic decision maker can use it completely. For a pessimistic decision maker, Sengupta and Pal [2] introduced a fuzzy preference ordering for ranking a pair of intervals on the real line. The fuzzy preference method was actually described for maximizing the profit interval, although it is equally applicable to minimizing cost/time intervals. Therefore, they first assumed that A and B are two interval profits and the problem is to choose the maximum profit interval from among them. Thereafter they considered the fuzzy set "Rejection of an interval A in comparison to the interval B", or "Acceptance of B in comparison to A". The membership function of this fuzzy set is given by
μ(B, A) = 1 if bC ≥ aC;
μ(B, A) = max{0, (bC − aL − bW)/(aC − aL − bW)} if aL + bW ≤ bC ≤ aC;
μ(B, A) = 0 otherwise.

This nonlinear membership function takes values in the interval [0, 1]. When the values of this membership function lie within the interval [0.333, 0.666], this definition fails to find the order relations.

3.1. Mahato and Bhunia's interval ranking approach

Mahato and Bhunia [25] proposed another class of definitions of interval order relations which place more emphasis on the decision maker's preference. There are different types of decision making conditions; however, they emphasized optimistic and pessimistic decision making. In optimistic decision making, the decision maker selects the best alternative ignoring the uncertainty, whereas in pessimistic decision making the decision maker selects the alternative with less uncertainty. Naturally, the optimistic decision maker is more confident of getting the best alternative under uncertain conditions, and the pessimistic decision maker is less so. Mahato and Bhunia [25] first pointed out the incompleteness of the aforementioned interval ranking definitions with respect to the decision maker's point of view. To clarify, let us consider an example with a pair of intervals of Type III:

Example 3.1. Let A = [10, 50] = ⟨30, 20⟩ and B = [25, 45] = ⟨35, 10⟩ be two intervals representing profits in the case of maximization problems and time/cost in the case of minimization problems. It is obvious that an optimistic decision maker will always prefer the interval A to B for both maximization and minimization problems. However, the job is not so easy for a pessimistic decision maker. For maximization problems, pessimists may choose the interval B as the most profitable interval, and for minimization problems they select the lower cost/time interval A.

3.1.1. Optimistic decision-making

As an optimistic decision maker takes decisions ignoring the uncertainty, Mahato and Bhunia [25] proposed the following definitions:
Definition 3.9. For minimization problems, the order relation ≤omin between the intervals A = [aL, aR] and B = [bL, bR] is defined as follows:

A ≤omin B iff aL ≤ bL; A <omin B iff A ≤omin B and A ≠ B.

Definition 3.10. For maximization problems, the order relation ≥omax between the intervals A = [aL, aR] and B = [bL, bR] is defined as follows:

A ≥omax B iff aR ≥ bR; A >omax B iff A ≥omax B and A ≠ B.

This implies that A is superior to B, and the optimistic decision maker accepts the profit interval A. Here also the order relation ≥omax is not symmetric.

3.1.2. Pessimistic decision-making

In this case, the decision maker chooses the most preferable intervals according to the principle "Less uncertainty is better than more uncertainty". The proposed definitions are:

Definition 3.11. For minimization problems, the order relation <pmin between the intervals A = [aL, aR] = ⟨aC, aW⟩ and B = [bL, bR] = ⟨bC, bW⟩ is
(i) A <pmin B iff aC < bC for Type I and Type II intervals;
(ii) A <pmin B iff aC ≤ bC and aW < bW for Type III intervals.

However, for Type III intervals with aC < bC and aW > bW, a pessimistic decision cannot be taken. In this case the optimistic decision is considered.

Definition 3.12. For maximization problems, the order relation >pmax between the intervals A = [aL, aR] = ⟨aC, aW⟩ and B = [bL, bR] = ⟨bC, bW⟩ is
(i) A >pmax B iff aC > bC for Type I and Type II intervals;
(ii) A >pmax B iff aC ≥ bC and aW < bW for Type III intervals.

However, for Type III intervals with aC > bC and aW > bW, a pessimistic decision cannot be taken. In this case the optimistic decision is taken. Let us consider a few illustrative examples to show the applicability of Mahato and Bhunia's [25] definitions for profit intervals.

Example 3.2. A = [10, 20] = ⟨15, 5⟩, B = [21, 23] = ⟨22, 1⟩. Clearly, B is the more profitable interval for both optimistic and pessimistic decision makers.

Example 3.3. A = [10, 20] = ⟨15, 5⟩, B = [19, 21] = ⟨20, 1⟩. In this case also, B is the more profitable interval for both optimistic and pessimistic decision makers.

Example 3.4. A = [10, 20] = ⟨15, 5⟩, B = [17, 19] = ⟨18, 1⟩. Clearly, B >pmax A, so B is accepted, i.e., B is the more profitable interval for a pessimistic decision maker; A is the accepted interval for optimists.

Example 3.5. A = [10, 20] = ⟨15, 5⟩, B = [12, 16] = ⟨14, 2⟩. Here, the order relation >pmax fails to rank the intervals for pessimistic decision making. However, from the optimistic point of view, A >omax B holds and A is accepted as the most profitable interval.
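The optimistic and pessimistic rules for profit (maximization) intervals can be sketched as follows (our names; `None` signals that the pessimistic rule fails for a Type III pair, in which case the optimistic rule is used, as stated above):

```python
# Sketch of Mahato and Bhunia's [25] ranking of profit intervals,
# written with end point pairs (lo, hi).

def centre_width(iv):
    lo, hi = iv
    return ((lo + hi) / 2.0, (hi - lo) / 2.0)

def optimistic_max(a, b):
    """>=omax: prefer the larger right end point, ignoring uncertainty."""
    return a if a[1] >= b[1] else b

def pessimistic_max(a, b):
    """>pmax: compare centres for Type I/II pairs; for nested (Type III)
    pairs also require strictly smaller width, else give up (None)."""
    (ac, aw), (bc, bw) = centre_width(a), centre_width(b)
    nested = (a[0] <= b[0] and b[1] <= a[1]) or (b[0] <= a[0] and a[1] <= b[1])
    if not nested:
        return a if ac > bc else b
    if ac >= bc and aw < bw:
        return a
    if bc >= ac and bw < aw:
        return b
    return None

# Examples 3.4 and 3.5 above:
# pessimistic_max((10, 20), (17, 19)) -> (17, 19)
# pessimistic_max((10, 20), (12, 16)) -> None; optimistic_max picks (10, 20)
```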
4. Present optimization problem and treatment

4.1. Statement of the problem

Let f: Rⁿ → I be an interval valued function, where Rⁿ is the set of ordered n-tuples of real numbers and I is the set of intervals; let x = (x1, x2, ..., xn) be an n-dimensional decision vector and U = (U1, U2, ..., Uq) a q-dimensional interval vector whose components are all intervals.
Hence, a general constrained optimization problem with interval valued objective function can be written as follows:
Minimize Z = f(x, U),

subject to gi(x) ≤ 0 (or = 0), i = 1, 2, ..., k, and x ∈ D ⊆ Rⁿ,     (4.1)
where D is the n-dimensional interval (or box) given by D = {x ∈ Rⁿ : l ≤ x ≤ u}. Here l, u ∈ Rⁿ are two vectors given by l = (l1, l2, ..., ln) and u = (u1, u2, ..., un) such that lj ≤ xj ≤ uj (j = 1, 2, ..., n); gi(x) ≤ 0 (or gi(x) = 0) is the ith inequality (or equality) constraint, and k is the total number of inequality and equality constraints.

4.2. Optimal solutions

The interval objective function is defined as f: Rⁿ → I and is expressed as f(x, U) = ⟨fC(x), fW(x)⟩, where fC(x) and fW(x) are the centre and width of the interval function.

Definition 4.1. A decision vector x* ∈ D is a minimum point if fC(x*) ≤ fC(x) (maximum if fC(x*) ≥ fC(x) for a maximization problem) and fW(x*) ≤ fW(x) for any x ∈ D. In this case, the minimum value is denoted by f* and the minimizer point by x*, i.e., f* = min_{x ∈ D} f(x, U) = f(x*, U).
4.3. Interpretation of the solution of the problem with interval-valued objective function

The considered interval valued objective function is defined as f: Rⁿ → I. Let us denote the optimizer point by x* ∈ Rⁿ and the optimized value of the objective function by f* ∈ I, i.e., we want to find the point of the search region at which the interval valued objective function is optimum. For this type of problem, the optimum interval means the interval having optimum centre (expected value of the interval) with minimum width (uncertainty). Let us consider the following example to visualize the situation.

Example 4.1 (Function of a single variable).

f1(x, U) = U1 x³ + U2 (x + x cos x) + U3 (x³ + sin x) + U4 (x + x³ + x⁵),

where U1 = [2, 4], U2 = [1.5, 4.5], U3 = [1, 2], U4 = [−1, 3]. Now, we discuss the optimizer point (or points) and the optimum value of the interval valued function for different search regions with the help of graphs. To plot the interval valued function f1(x, U) of one real variable, we first compute the bounds of the function over the prescribed domain of the variable. The graph consists of two curves, as the corresponding function is a single-variable interval valued function: one curve represents the upper bound of f1(x, U) and the other the lower bound. Clearly, the gap between the two curves represents the uncertainty of the interval-valued function. Then we can easily find the upper and lower limits of the optimum interval of the given interval valued function and the optimizer point. The graphs have been plotted with the MATHEMATICA 7.0 software. Two different search regions are considered in this discussion.

(i) When the search region is {x : 0 ≤ x ≤ 2.5}
Fig. 1. Graph of f1(x, U) for {x : 0 ≤ x ≤ 2.5}.
The optimizer point x* ∈ [0, 2.5] is to be found so that the interval valued objective function at x = x* is the optimum interval, i.e., f1* = f1(x*, U) is an optimum interval for the search region {x : 0 ≤ x ≤ 2.5}. The solution is obtained by the graphical method; the graph is presented in Fig. 1. Clearly, the minimizer x* of f1(x, U) is obtained at x = 0, as the uncertainty at that point is minimum. However, in the case of optimistic decision-making, one can take the minimizer of f1(x, U) as x* = 2.5, ignoring the uncertainty. A similar ambiguity arises in finding the maximum value of the objective function. Here, we have considered only two decision-making situations, the optimistic and the pessimistic; however, in real life a rational decision maker faces complex situations in which some compromise solution must be considered.

(ii) When the search region is {x : −1 ≤ x ≤ 1}

In this case, it is clear that at x = 0 the uncertainty of the interval function is minimal, and at x = 1 and x = −1 the uncertainty is highest. For the maximum value of f1(x, U), x = 1 can be taken as the maximizer point, ignoring the uncertainty (optimistic decision making). A similar dilemma arises in finding the minimum value of f1(x, U) at x = −1. The graph is shown in Fig. 2. In this connection, some questions arise: what will be the maximum or minimum value of f1(x, U)? Will the maximizer or minimizer points be unique? If not, what will be the acceptable maximum or minimum value of f1(x, U) for a rational decision maker? It is clear that the graphical method is highly complicated even for two-variable problems. In addition, if we consider constrained optimization problems instead of simple bound constrained problems, the task becomes more difficult, and for functions of more than two variables the graphical method is not applicable at all.
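The two curves discussed above are the pointwise lower and upper bounds of f1(x, U); they can be computed without plotting software. A sketch (ours, not the authors' code), assuming the reconstructed first term U1 x³ of Example 4.1; for a real coefficient value t, [a, b]·t = [a t, b t] when t ≥ 0 and [b t, a t] otherwise:

```python
import math

U = [(2, 4), (1.5, 4.5), (1, 2), (-1, 3)]   # U1, U2, U3, U4 of Example 4.1

def scale(iv, t):
    """Interval times a (possibly negative) real scalar t."""
    a, b = iv
    return (a * t, b * t) if t >= 0 else (b * t, a * t)

def add(p, q):
    return (p[0] + q[0], p[1] + q[1])

def f1_bounds(x):
    """Lower/upper bound of f1(x, U) at the real point x."""
    terms = [x ** 3, x + x * math.cos(x), x ** 3 + math.sin(x),
             x + x ** 3 + x ** 5]
    box = (0.0, 0.0)
    for iv, t in zip(U, terms):
        box = add(box, scale(iv, t))
    return box

# At x = 0 every term vanishes, so the interval degenerates to [0, 0]:
# zero width, i.e., no uncertainty, matching the discussion of Fig. 2.
```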
In this work, an interval oriented algorithm is prescribed to locate the optimizer point and obtain the optimum interval with the help of interval mathematics and the ranking of intervals.

5. Solution procedure

The search region of the problem (4.1) is as follows:
D = {x : lj ≤ xj ≤ uj, j = 1, 2, ..., n}.

Our objective is to split the accepted region (initially, either the prescribed search region or an assumed one if none is prescribed) into a number of distinct equal subregions R1, R2, ..., Rk (k = m^n, with each direction of the variables xj of the hyper-rectangular search region divided into m sections simultaneously). First, we check whether the given set of constraints gi(x) ≤ 0 (or = 0) is satisfied in each subregion. If so, the corresponding subregion is called feasible and the value of the objective function is calculated there; otherwise, the subregion is discarded. Let us explain the constraint satisfaction rule (here the constraints are non-interval-valued) by the interval method.

Constraint satisfaction rules

The concept of the interval extension of real-valued functions due to Moore [17] is used in these rules. Calculate the interval inclusion function Gi(Rc) = [g̲i, ḡi] of the real-valued function gi(x) in each subregion Rc (c = 1, 2, ..., k). An inequality constraint gi(x) ≤ 0 is satisfied if g̲i ≤ 0, whereas an equality constraint gi(x) = 0 is satisfied when g̲i ≤ 0 and ḡi ≥ 0. According to the fundamental theorem of interval analysis [6] and the theory of interval extensions and inclusion functions, a constraint (inequality or equality) satisfying these conditions in a subregion does not necessarily hold at every real point of that subregion; nevertheless, the
Fig. 2. Graph of f1(x, U) for {x: −1 ≤ x ≤ 1}.
subregion is called feasible. The case is quite different for infeasible subregions: no real point of an infeasible subregion satisfies the constraints. This is quite compatible with our intention. For illustration, let us consider an inequality constraint of the form 2x − 2y ≥ 3, i.e., −2x + 2y + 3 ≤ 0. We want to check whether this constraint is satisfied in the subregions created by dividing the box D* = {(x, y): 0 ≤ x ≤ 5 and 0 ≤ y ≤ 5} for m = 5. This is shown in Fig. 3. The constraint is satisfied in the region ABC of D*. According to the stated constraint satisfaction rule, a subregion is feasible if any part of the satisfied region is included in it. Here, the marked subregions of Fig. 3 are not feasible and clearly the rest are feasible. Verifications:
R1 = {(x, y): 1 ≤ x ≤ 2 and 0 ≤ y ≤ 1} ⟹ −2[1, 2] + 2[0, 1] + 3 = [−1, 3] ⟹ R1 is feasible.
R2 = {(x, y): 4 ≤ x ≤ 5 and 0 ≤ y ≤ 1} ⟹ −2[4, 5] + 2[0, 1] + 3 = [−7, −3] ⟹ R2 is feasible.
R3 = {(x, y): 3 ≤ x ≤ 4 and 2 ≤ y ≤ 3} ⟹ −2[3, 4] + 2[2, 3] + 3 = [−1, 3] ⟹ R3 is feasible.
R4 = {(x, y): 0 ≤ x ≤ 1 and 0 ≤ y ≤ 1} ⟹ −2[0, 1] + 2[0, 1] + 3 = [1, 5] ⟹ R4 is infeasible.
R5 = {(x, y): 0 ≤ x ≤ 1 and 4 ≤ y ≤ 5} ⟹ −2[0, 1] + 2[4, 5] + 3 = [9, 13] ⟹ R5 is infeasible.

Similarly, we can verify the rule for the other subregions. The rule is equally applicable to a nonlinear constraint, say, g(x, y) ≤ 0. The curve g(x, y) = 0 and the corresponding region satisfied by the constraint g(x, y) ≤ 0 are shown in Fig. 4. In this case, the subregions indicated by the symbol 'o' are feasible. For an equality constraint, the feasible subregions are those through which the real curve passes. The equality constraint case is shown graphically in Fig. 5.

Now, in each feasible subregion, the interval inclusion value of the objective function is calculated with the help of the basic interval arithmetic operations. We know from Moore's [17] discussion that the interval inclusion function F: Iⁿ → I of f(x) is a function having the property f(x) ∈ F(X) whenever x ∈ X, X ∈ Iⁿ. Let F(Rc) = [f̲c, f̄c] be the interval inclusion of the objective function f(x, U) in the cth subregion Rc, where f̲c and f̄c denote the lower and upper bounds of f(x, U) in Rc, computed by applying interval arithmetic. Comparing the objective function values calculated in the feasible subregions with the help of the interval order relations, the subregion containing the best objective function value is accepted. This accepted subregion is again subdivided into smaller disjoint subregions R′c (c = 1, 2, ..., m^n) by the aforesaid process. Then, applying the same constraint satisfaction procedure and subregion acceptance criteria, we obtain a further reduced subregion.
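The rule above is easy to mechanize. The following sketch (in Python, not the authors' C implementation; the names `linear_extension` and `feasible_ineq` are ours) evaluates the natural interval extension of a linear constraint over a sub-box by endpoint arithmetic and reproduces the five verifications for −2x + 2y + 3 ≤ 0:

```python
def linear_extension(coeffs, const, box):
    """Interval inclusion [g_lower, g_upper] of sum(a_j * x_j) + const
    over box = [(l_1, u_1), ..., (l_n, u_n)], by endpoint arithmetic."""
    lo = hi = const
    for a, (l, u) in zip(coeffs, box):
        p, q = sorted((a * l, a * u))   # a*[l, u] = [min, max] of endpoint products
        lo, hi = lo + p, hi + q
    return lo, hi

def feasible_ineq(coeffs, const, box):
    # g(x) <= 0 is accepted on the box iff the lower bound of G is <= 0
    return linear_extension(coeffs, const, box)[0] <= 0

# Verifications for -2x + 2y + 3 <= 0: coefficients (-2, 2), constant 3
boxes = {
    "R1": [(1, 2), (0, 1)], "R2": [(4, 5), (0, 1)], "R3": [(3, 4), (2, 3)],
    "R4": [(0, 1), (0, 1)], "R5": [(0, 1), (4, 5)],
}
status = {name: feasible_ineq((-2, 2), 3, b) for name, b in boxes.items()}
# R1, R2, R3 are feasible; R4, R5 are infeasible
```

An equality constraint would instead test that the computed inclusion contains zero, i.e., lo ≤ 0 ≤ hi.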
This process is terminated after reaching the desired degree of accuracy and, finally, we obtain the best value of the objective function in interval form, along with the corresponding value of each decision variable as a closed interval of negligible width. The above procedure applies the multi-section division criterion of the accepted region developed recently by Karmakar et al. [23]. Since in our proposed technique no other vital information, such as the inclusion of the gradient or of the Hessian of the interval inclusion function, is available [28], and such information is very important in some cases for fixing a subdivision direction selection rule, the multi-section technique is the most suitable choice here. In this technique, all the directions of the decision variables are multi-sectioned simultaneously. The idea of multi-section stems from the concept of multiple bisection, in which several bisections are performed in a single iteration cycle. In the three-dimensional case, the accepted region is a rectangular parallelepiped (3D box) that can be multi-sectioned into 2^3 = 8 sub-boxes (in the case of triple bisection). A pictorial representation is given in Fig. 6 for m = 2, n = 3. The solution procedure is presented by the following algorithm:
Fig. 3. Linear inequality constraint satisfaction.
Fig. 4. Nonlinear inequality constraint satisfaction.
Fig. 5. Nonlinear equality constraint satisfaction.
5.1. Algorithm

Step-1: Initialize m and n.
Step-2: Initialize the lower and upper bounds lj and uj (j = 1, 2, ..., n) of all the decision variables. Set R^f = D.
Step-3: Divide the accepted region (initially the prescribed region of the problem, or an assumed one if it is not given) into m^n equal disjoint subregions Rc (c = 1, 2, ..., m^n) such that ∪ Rc = R^f, c = 1, 2, ..., m^n.
Step-4: Calculate only the lower bounds g̲c for all the constraints of the form gc(x) ≤ 0, and both the lower and upper bounds g̲c and ḡc for all the equality constraints, and check whether the constraints are satisfied according to the constraint handling technique mentioned in the previous section.
Step-5: Applying interval arithmetic, compute the interval inclusion value F(Rc) = [f̲c, f̄c] of the objective function in each feasible subregion Rc (c = 1, 2, ..., m^n). If no feasible subregion exists, go to Step-10.
Step-6: Select the feasible subregion R^f among the Rc (c = 1, 2, ..., m^n) with the best objective function value by comparing the interval inclusions F(Rc), c = 1, 2, ..., m^n, with one another with the help of the pessimistic order relations between interval numbers.
Step-7: Compute the widths of R^f using wj = uj − lj, j = 1, 2, ..., n.
Step-8: If wj < ε, a pre-assigned very small positive number, for all j = 1, 2, ..., n, go to Step-9; otherwise, go to Step-3.
Step-9: Print the value of the optimal objective function in interval form and the decision variables in the form of closed intervals with negligible width.
Step-10: Print that the problem has no feasible solution.
Step-11: Stop.
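A hedged single-variable (n = 1) sketch of Steps 1 to 11 is given below, minimizing f(x) = x^2 subject to g(x) = 1 − x ≤ 0 on [−2, 2]. The interval comparison of Step-6 is simplified to lexicographic order on the (lower, upper) bounds, a stand-in for the paper's pessimistic order relations; the names `minimize`, `F` and `g_lower` are ours, not the authors'.

```python
def F(l, u):
    """Interval inclusion of x^2 over [l, u]."""
    if l <= 0.0 <= u:
        return (0.0, max(l * l, u * u))
    return (min(l * l, u * u), max(l * l, u * u))

def g_lower(l, u):
    """Lower bound of the inclusion of g(x) = 1 - x over [l, u]."""
    return 1.0 - u

def minimize(l, u, m=10, eps=1e-6):
    while u - l >= eps:                                           # Step-8: width test
        w = (u - l) / m
        subs = [(l + i * w, l + (i + 1) * w) for i in range(m)]   # Step-3: m sections
        feas = [s for s in subs if g_lower(*s) <= 0.0]            # Step-4: keep feasible
        if not feas:                                              # Step-10: infeasible
            return None
        l, u = min(feas, key=lambda s: F(*s))                     # Steps 5-6: best box
    return (l + u) / 2.0, F(l, u)                                 # Step-9: report

x_star, f_box = minimize(-2.0, 2.0)
# x_star should be close to the true constrained minimizer x = 1
```

With the exact pessimistic order relations of [20] substituted for the tuple comparison in Step-6, the loop structure is unchanged.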
Fig. 6. Multi-section method.
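The simultaneous division of every coordinate direction into m sections, as pictured in Fig. 6 for m = 2, n = 3, can be sketched in a few lines (Python, not the authors' C code; the function name `multisect` is ours):

```python
from itertools import product

def multisect(box, m):
    """Split an n-dimensional box [(l_1, u_1), ..., (l_n, u_n)] into m**n
    equal sub-boxes by cutting every coordinate direction into m sections."""
    cuts = []
    for (l, u) in box:
        w = (u - l) / m
        cuts.append([(l + i * w, l + (i + 1) * w) for i in range(m)])
    # Cartesian product of the per-direction sections gives the m**n sub-boxes
    return [list(sub) for sub in product(*cuts)]
```

For m = 2 and n = 3 this yields the 2^3 = 8 sub-boxes of the triple bisection shown in Fig. 6.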
6. Numerical examples and comparative discussion

To test the performance of the proposed optimization technique for constrained optimization problems with interval-valued objective function, we have carried out numerical experiments on 18 test problems, F1–F18, listed in the Appendix. Among these, F3 and F4 have been taken from Chanas and Kuchta [19] and Ishibuchi and Tanaka [7], respectively; all the other problems are newly introduced. The problems F10 and F13 have equality constraints, and the remaining problems have linear or nonlinear inequality constraints. The problems F3, F4, F5 and F7 are originally maximization problems. Each problem has been solved by the proposed technique, taking suitable values of m and error tolerance ε = 10⁻⁶. Tables 1–3 summarize the execution results, including the chosen values of m, the optimum objective function values, the number of function evaluations and the computational (CPU) time in seconds. The approach for computing the best-found value in each subregion of a given search region with equality or inequality constraints has been coded in the C programming language and implemented on a 3.0 GHz Pentium IV PC with 1 GB RAM under the Linux OS. The computational results of the selected test problems have been obtained using the proposed interval oriented algorithm. Most of the problems are nonlinear; the test set also contains some non-convex, multi-modal and higher dimensional problems. Generally, better solutions of the robust optimization problems can be obtained by the proposed algorithm for higher values of m, and the stability of the solution sometimes also depends on higher values of m. From Tables 1–3, however, it is clear that the obtained solutions are stable over different values of m for almost all the problems. The problems F1–F5 are simple linear programming problems (LPPs) with interval-valued objective function.
The computational results show that the interval-objective LPPs are solved very easily and with high accuracy, at the cost of a small number of function evaluations and very little CPU time. The nonlinear problems F10 and F13 have equality constraints, whereas F11 and F12 are fractional programming problems (FPPs) with interval-valued objective functions. The results show that the proposed algorithm solves these problems very efficiently. The test problem F3 was previously solved by Chanas and Kuchta [19] and by Suprajitno and Bin Mohd [9]. Chanas and Kuchta [19] used a generalized approach based on the t0,t1-cut of intervals; the resulting set of Pareto optimal solutions and the corresponding optimal objective function values are displayed in Table 4. The Pareto optimal set obtained by Chanas and Kuchta [19] depends on the values of the parameters t0 and t1 lying in the interval [0, 1]. Suprajitno and Bin Mohd [9] used a modified simplex method for interval linear programming problems in which the cost coefficients as well as the decision variables are considered as intervals. The solutions obtained by them are x1(1) = [9.99999999999997, 10.0000000000001], x2(1) = [11.9999999999999, 12.0000000000001] and
Table 1
Computational results of the problems F1–F6.

Test problem   n    m    Objective function value         Function evaluations   CPU time (s)
F1             2    5    [-8.000000, -5.275000]           96                     0.0015
F1             2    10   [-8.000000, -5.275000]           102                    0.0023
F2             2    5    [155.000000, 160.800000]         628                    0.007
F2             2    10   [155.000000, 160.800000]         2120                   0.0095
F3             2    10   [200.000000, 680.000000]         52                     0.0015
F3             2    20   [180.000000, 675.000000]         256                    0.003
F4             3    10   [53.111111, 123.133334]          170                    0.007
F4             3    20   [51.422097, 126.151367]          2182                   0.01
F5             4    5    [168.895696, 180.978288]         1258                   0.0035
F5             4    10   [168.895696, 180.978288]         16496                  0.03
F6             2    5    [-2309.569920, -2112.880000]     696                    0.007
F6             2    10   [-2309.569920, -2112.880000]     1608                   0.0095
Table 2
Computational results of the problems F7–F12.

Test problem   n    m    Objective function value         Function evaluations   CPU time (s)
F7             2    5    [10.640000, 20.640000]           38                     0.001
F7             2    10   [10.640000, 20.640000]           58                     0.005
F8             2    5    [0.000000, 39.240000]            596                    0.007
F8             2    10   [0.000000, 39.240000]            1768                   0.0095
F9             2    5    [-1.223525, -0.936622]           72                     0.0015
F9             2    10   [-1.223525, -0.936622]           274                    0.0035
F10            2    5    [7887.955905, 65366.140518]      140                    0.001
F10            2    10   [7857.602864, 65394.838797]      88                     0.005
F11            2    5    [0.690537, 1.326992]             42                     0.001
F11            2    10   [0.690537, 1.326992]             72                     0.005
F12            3    5    [96.852157, 102.677778]          136                    0.009
F12            3    10   [96.852157, 102.677778]          734                    0.02
Table 3
Computational results of the problems F13–F18.

Test problem   n    m    Objective function value         Function evaluations   CPU time (s)
F13            2    5    [0.079341, 0.101266]             114                    0.0015
F13            2    10   [0.079341, 0.101266]             218                    0.0022
F14            6    5    [-317.174933, -312.025380]       9856                   0.1
F14            6    10   [-317.174933, -312.025380]       658464                 6.15
F15            2    5    [0.000000, 0.000000]             1312                   0.01
F15            2    10   [0.000000, 0.000000]             3716                   0.055
F16            2    5    [-1189.420302, -11.743231]       876                    0.0055
F16            2    10   [-1189.445612, -11.731950]       2516                   0.065
F17            5    10   [-3.500000, -2.000000]           1571512                3.22
F17            5    15   [-3.500000, -2.000000]           10410328               21.31
F18            13   2    [-20.200000, -10.150000]         313344                 0.2
F18            13   3    [-20.200000, -10.150000]         23678244               17.19
Table 4
Set of Pareto optimal solutions of F3.

Range of t          Pareto optimal values of x   Optimal objective function value
t ∈ (0.0, 0.29]     x(1) = (0, 18)               [0.0, 180.0] = ⟨90.0, 90.0⟩
t = 0.3             x(2) = (6, 17)               [120.0, 470.0] = ⟨175.0, 295.0⟩
t ∈ [0.3, 0.33]     x(3) = (8, 16)               [160.0, 560.0] = ⟨200.0, 360.0⟩
t ∈ [0.33, 0.49]    x(4) = (9, 15)               [180.0, 600.0] = ⟨210.0, 390.0⟩
t ∈ [0.49, 0.66]    x(5) = (10, 12)              [200.0, 620.0] = ⟨210.0, 410.0⟩
t ∈ [0.66, 1.0)     x(6) = (13, 0)               [260.0, 650.0] = ⟨195.0, 455.0⟩
x1(2) = [12.99999999999999, 13.0000000000001], x2(2) = [0.00000000000000, 0.00000000000000]. Clearly, the values of the decision variables are intervals of negligible width. According to Suprajitno and Bin Mohd [9], the respective approximate optimal objective function values are F3(x(1)) = [200.0, 620.0] = ⟨210.0, 410.0⟩ and F3(x(2)) = [260.0, 650.0] = ⟨195.0, 455.0⟩. The solutions obtained by us are f1* = [200, 680] = ⟨240.0, 440.0⟩ for m = 10 and f2* = [180, 675] = ⟨247.5, 427.5⟩ for m = 20. For this maximization problem, our solutions are therefore much better than the previous ones. The problem F4 was solved by Ishibuchi and Tanaka [7] by converting it into an equivalent bi-objective optimization problem, to which they applied the weighting method. They obtained a set of three Pareto optimal solutions, {xa = (0, 1.13, 3.45), xb = (3.48, 0, 1.39), xc = (4.57, 0, 0)}, by choosing suitable weights varying from 0 to 1. The corresponding objective function values are as follows:
F(xa) = [51.4, 126.2] = ⟨88.8, 37.4⟩, F(xb) = [66.1, 100.8] = ⟨83.4, 17.3⟩, F(xc) = [68.5, 77.6] = ⟨73, 4.57⟩.

Our multi-section method gives the solutions f1 = [53.111111, 123.133334] = ⟨88.122222, 35.011111⟩ for m = 10 and f2 = [51.422097, 126.151367] = ⟨88.786732, 37.364635⟩ for m = 20. As far as the problem's minimizing objective is concerned, our solutions give better results with respect to the decision makers' point of view.
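The ⟨·, ·⟩ values quoted for F4 above are the centre and half-width of the corresponding interval. A quick check of the quoted figures (a sketch in Python; the helper name `centre_halfwidth` is ours):

```python
def centre_halfwidth(a, b):
    """Centre m = (a + b) / 2 and half-width w = (b - a) / 2 of [a, b]."""
    return ((a + b) / 2.0, (b - a) / 2.0)

m, w = centre_halfwidth(53.111111, 123.133334)
# m ≈ 88.122222 and w ≈ 35.011111, matching <88.122222, 35.011111> for m = 10
```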
7. Conclusion

The aim of this paper is to present a new optimization technique for constrained optimization problems with an uncertain objective function in which the uncertainty is represented by intervals. The technique neither requires any type of derivative information nor involves any stochastic or heuristic/meta-heuristic method. The proposed algorithm depends mainly on the multi-section division criterion of the search region and on recent interval order relations reflecting the decision maker's point of view. Further, the proposed technique converges quickly, as the feasible search region is reduced exponentially in each iteration and tends rapidly to the solution point of the problem, which is evident from the summary of the numerical experimental results. However, for higher dimensional problems, the multi-section algorithm may sometimes not work at its expected level of efficiency. To obtain better solutions, the number of multi-sectioned sub-boxes (i.e., the value of m) must be increased in some cases; as a result, for higher dimensional problems the computational time increases and the algorithm becomes less efficient. Also, a theoretical proof of convergence of the algorithm is not provided, owing to the unavailability of a complete interval ordering definition and some limitations of existing interval mathematics. For future research, the technique can be used extensively to tackle uncertainty in different fields of Operations Research. Moreover, the level of efficiency of the technique can be enhanced by overcoming the said limitations.

Acknowledgements

We would like to thank Mr. Sibdas Karmakar (Retired Associate Professor of Applied Mathematics) for his constructive comments and fruitful suggestions on the previous version of the article. We also gratefully acknowledge the comments and suggestions made by the anonymous referees, which have helped us to improve the quality of the article.

Appendix A.
List of Test Functions

F1: Minimize f(x, U) = U1x1 + U2x2
subject to g1(x) = x1 + 2x2 ≤ 5
           g2(x) = x1 + x2 ≤ 4
where U1 = [0.2, 1.11], U2 = [−3.2, −2.11]
Search region: xi ∈ [0, 10], i = 1, 2.

F2: Minimize f(x, U) = U1x1 + U2x2
subject to g1(x) = 36x1 + 6x2 ≥ 108
           g2(x) = 3x1 + 12x2 ≥ 36
           g3(x) = 20x1 + 10x2 ≥ 100
where U1 = [19.55, 20.42], U2 = [38.4, 39.56]
Search region: xi ∈ [0, 10], i = 1, 2.

F3: Maximize f(x, U) = U1x1 + U2x2
subject to g1(x) = 10x1 + 60x2 ≤ 1080
           g2(x) = 10x1 + 20x2 ≤ 400
           g3(x) = 10x1 + 10x2 ≤ 240
           g4(x) = 30x1 + 10x2 ≤ 420
           g5(x) = 40x1 + 10x2 ≤ 520
where U1 = [−20, 50], U2 = [0, 10]
Search region: xi ∈ [0, 20], i = 1, 2.

F4: Maximize f(x, U) = U1x1 + U2x2 + U3x3
subject to g1(x) = 4.6x1 + 7.6x2 + 3.6x3 ≤ 21
           g2(x) = 5.8x1 + 3.6x2 + 7.8x3 ≤ 31
           g3(x) = 7.5x1 + 6.5x2 + 6.8x3 ≤ 41
where U1 = [15, 17], U2 = [15, 20], U3 = [10, 30]
Search region: xi ∈ [0, 50], i = 1, 2, 3.

F5: Maximize f(x, U) = U1x1 + U2x2 + U3x3 + U4x4
subject to g1(x) = x1 + 2x2 + 2x3 + 4x4 ≤ 80
           g2(x) = 3x1 + 3x2 + x3 + x4 ≤ 80
           g3(x) = 2x1 + 2x3 + x4 ≤ 60
where U1 = [3.88, 4.1], U2 = [2.5, 3.5], U3 = [3.7, 4.02], U4 = [5.8, 6.3]
Search region: x1 ∈ [0, 27], xi ∈ [0, 5], i = 2, 3, x4 ∈ [0, 18].

F6: Minimize f(x, U) = U1x1^2 + U2x1x2 − U3x2^2
subject to g1(x) = 2x1 + 5x2 ≤ 98
where U1 = [1.92, 2.2], U2 = [9.98, 10.2], U3 = [5.5, 6.012]
Search region: xi ∈ [0, 50], i = 1, 2.

F7: Maximize f(x, U) = U1x1 + U2x2
subject to g1(x) = x1x2 ≤ 8
           g2(x) = x1^2 + x2^2 ≤ 20
where U1 = [1, 2.8], U2 = [2.16, 3.76]
Search region: xi ∈ [0, 10], i = 1, 2.

F8: Minimize f(x, U) = (U1x1 − U2)^2 + (U1x2 − U3)^2
subject to g1(x) = x1^2 + x2 ≤ 4
           g2(x) = (x1 − 2)^2 + x2 ≤ 3
where U1 = [0, 2], U2 = [0.9, 1.8], U3 = [3, 6]
Search region: xi ∈ [0, 10], i = 1, 2.

F9: Minimize f(x, U) = U1 log(U2x1) + U3 log(U2x2)
subject to g1(x) = x1 + x2 ≤ 2
where U1 = [−2, −0.5], U2 = [0.5, 1.5], U3 = [−2.3, −1]
Search region: xi ∈ (0, 8], i = 1, 2.

F10: Minimize f(x, U) = U1 exp(U2x1 + 1) + U2 exp(x2 + U3)
subject to g1(x): x1 + x2 = 7
where U1 = [2.8, 3.8], U2 = [1.6, 2.4], U3 = [4.5, 6]
Search region: xi ∈ [0, 8], i = 1, 2.

F11: Minimize f(x, U) = (U1x1 + U2x2)/(U3(x1 + x2) + U4)
subject to g1(x) = 3x1 + 5x2 ≤ 15
           g2(x) = 4x1 + 3x2 ≤ 12
where U1 = [1.82, 2.5], U2 = [2.7, 3.72], U3 = [0.5, 1.5], U4 = [6.91, 7.23]
Search region: xi ∈ [0, 5], i = 1, 2.

F12: Minimize f(x, U) = U1/(U2 x1 √x2 x3) + U3x1x3 + U4x1x2x3
subject to g1(x) = (1/3)x1^−2 x2^−2 + (4/3)x2^−1 x3^−2 ≤ 1
where U1 = [39.56, 40.1], U2 = [0.9, 1.02], U3 = [19.5, 20.2], U4 = [19.23, 20]
Search region: xi ∈ [0, 10], i = 1, 2, 3.

F13: Minimize f(x, U) = exp(U1x1 − U2x2)
subject to g1(x) = sin(−x1 + x2 − 1) = 0
where U1 = [0.98, 1.03], U2 = [1.93, 2.09]
Search region: x1 ∈ [−2, 2], x2 ∈ [−1.5, 1.5].

F14: Minimize f(x, U) = −U1(x1 − U2)^2 − (x2 − U3)^2 − (x3 − U4)^2 − (x4 − U5)^2 − (x5 − U6)^2 − (x6 − U7)^2
subject to g1(x) = −(x3 − 3)^2 − x4 + 4 ≤ 0
           g2(x) = −(x5 − 3)^2 − x6 + 4 ≤ 0
           g3(x) = x1 − 3x2 + 2 ≤ 0
           g4(x) = −x1 + x2 − 2 ≤ 0
           g5(x) = x1 + x2 − 6 ≤ 0
           g6(x) = −x1 − x2 + 2 ≤ 0
where U1 = [24.8, 25.37], U2 = [1.97, 2.01], U3 = [1.9, 2.15], U4 = [0.79, 1.03], U5 = [3.92, 4.1], U6 = [0.7, 1.2], U7 = [3.8, 4.05]
Search region: x1 ∈ [0, 6], x2 ∈ [0, 8], x3 ∈ [1, 5], x4 ∈ [0, 6], x5 ∈ [1, 5], x6 ∈ [0, 10].

F15: Minimize f(x, U) = (x1^2 + x2^2)^U1 {sin^2(U2(x1^2 + x2^2)^U3) + 1}
subject to g1(x) = x1^3 + x2 ≤ 50
where U1 = [0.2, 0.3], U2 = [48.0, 52.0], U3 = [0.05, 0.2]
Search region: xi ∈ [−100, 100], i = 1, 2.

F16: Minimize f(x, U) = U1 sin x1 exp{(U2 − cos x2)^2} + U3 cos x2 exp{(U2 − sin x1)^2} + U4(x1 − x2)^2
subject to g1(x) = 3x1 + 4x2^2 ≥ 1
           g2(x) = 4x1^2 + 3x2 ≤ 100
where U1 = [0.9, 1.1], U2 = [0.5, 1.5], U3 = [0.75, 1.2], U4 = [0.8, 1.6]
Search region: xi ∈ [−2π, 2π], i = 1, 2.

F17: Minimize f(x, U) = Σ (i = 1 to 5) Ui xi^2 − U6 Σ (j = 1 to 5) cos(U(j+6) xj)
subject to g1(x) = Σ (i = 1 to 5) xi ≤ 1
           g2(x) = Σ (i = 1 to 5) xi^2 ≤ 1
where U1 = U2 = 1, U3 = [0.9, 1.1], U4 = [0.75, 1.25], U5 = [0, 2], U6 = [0.4, 0.7], U7 = π, U8 = [π, 2π], U9 = [3π, 4π], U10 = [4π, 5π], U11 = 5π
Search region: xi ∈ [−1, 1], i = 1, 2, ..., 5.

F18: Minimize f(x, U) = U1 Σ (i = 1 to 4) xi − U2 Σ (i = 1 to 4) xi^2 − U3 Σ (i = 5 to 13) xi
subject to g1(x) = 2x1 + 2x2 + x10 + x11 − 10 ≤ 0
           g2(x) = 2x1 + 2x3 + x10 + x12 − 10 ≤ 0
           g3(x) = 2x2 + 2x3 + x11 + x12 − 10 ≤ 0
           g4(x) = −8x1 + x10 ≤ 0
           g5(x) = −8x2 + x11 ≤ 0
           g6(x) = −8x3 + x12 ≤ 0
           g7(x) = −2x4 − x5 + x10 ≤ 0
           g8(x) = −2x6 − x7 + x11 ≤ 0
           g9(x) = −2x8 − x9 + x12 ≤ 0
where U1 = [4.9, 5.15], U2 = [4.5, 5.75], U3 = [0.85, 1.12]
Search region: xi ∈ [0, 1], i = 1, 2, ..., 9, 13 and xj ∈ [0, 100], j = 10, 11, 12.
References

[1] A. Sengupta, T.K. Pal, D. Chakraborty, Interpretation of inequality constraints involving interval coefficients and a solution to interval linear programming, Fuzzy Sets and Systems 119 (1) (2001) 129–138.
[2] A. Sengupta, T.K. Pal, On comparing interval numbers, European Journal of Operational Research 127 (1) (2000) 28–43.
[3] A.E. Csallner, T. Csendes, M.C. Markót, Multisection in interval branch-and-bound methods for global optimization I. Theoretical results, Journal of Global Optimization 16 (4) (2000) 371–392.
[4] C. Jansson, S.M. Rump, Rigorous solution of linear programming problems with uncertain data, ZOR – Methods and Models of Operations Research 35 (1991) 87–111.
[5] C.S. Pedamallu, New Interval Partitioning Algorithms for Global Optimization Problems, Ph.D. Dissertation, School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore, 2007.
[6] E.R. Hansen, G.W. Walster, Global Optimization Using Interval Analysis, Marcel Dekker Inc., New York, 2004.
[7] H. Ishibuchi, H. Tanaka, Multiobjective programming in optimization of the interval objective function, European Journal of Operational Research 48 (2) (1990) 219–225.
[8] H. Ratschek, J. Rokne, New Computer Methods for Global Optimization, Ellis Horwood Ltd., Chichester, 1988.
[9] H. Suprajitno, I. Bin Mohd, Linear programming with interval arithmetic, International Journal of Contemporary Mathematical Sciences 5 (7) (2010) 323–332.
[10] L. Jaulin, M. Kieffer, O. Didrit, É. Walter, Applied Interval Analysis with Examples in Parameter and State Estimation, Robust Control and Robotics, Springer-Verlag, London, 2001.
[11] L. Sahoo, A.K. Bhunia, P.K. Kapur, Genetic algorithm based multi-objective reliability optimization in interval environment, Computers & Industrial Engineering 62 (1) (2012) 152–160.
[12] L.G. Casado, I. García, T. Csendes, V.G. Ruíz, Heuristic rejection in interval global optimization, Journal of Optimization Theory and Applications 118 (1) (2003) 27–43.
[13] M. Hladík, Interval linear programming: a survey, in: Z.A. Mann (Ed.), Linear Programming – New Frontiers in Theory and Applications, Nova Science Publishers, New York, 2011.
[14] M. Hladík, Optimal value bounds in nonlinear programming with interval data, TOP 19 (1) (2011) 93–106.
[15] M.C. Markót, T. Csendes, A.E. Csallner, Multisection in interval branch-and-bound methods for global optimization II. Numerical tests, Journal of Global Optimization 16 (3) (1999) 219–228.
[16] R.B. Kearfott, Rigorous Global Search: Continuous Problems, Kluwer Academic Publishers, Dordrecht, 1996.
[17] R.E. Moore, Methods and Applications of Interval Analysis, SIAM, Philadelphia, 1979.
[18] R.E. Moore, R.B. Kearfott, M.J. Cloud, Introduction to Interval Analysis, SIAM, Philadelphia, 2009.
[19] S. Chanas, D. Kuchta, Multiobjective programming in optimization of interval objective functions – a generalized approach, European Journal of Operational Research 94 (3) (1996) 594–598.
[20] S. Karmakar, A.K. Bhunia, A comparative study of different order relations of intervals, Reliable Computing 16 (2012) 38–72.
[21] S. Karmakar, A.K. Bhunia, An efficient interval computing technique for bound constrained uncertain optimization problems, Optimization: A Journal of Mathematical Programming and Operations Research iFirst (2012) 1–22, http://dx.doi.org/10.1080/02331934.2012.724684.
[22] S. Karmakar, A.K. Bhunia, On constrained optimization by interval arithmetic and interval order relations, OPSEARCH 49 (1) (2012) 22–38.
[23] S. Karmakar, S.K. Mahato, A.K. Bhunia, Interval oriented multi-section techniques for global optimization, Journal of Computational and Applied Mathematics 224 (2) (2009) 476–491.
[24] S. Tong, Interval number and fuzzy number linear programmings, Fuzzy Sets and Systems 66 (3) (1994) 301–306.
[25] S.K. Mahato, A.K. Bhunia, Interval-arithmetic-oriented interval computing technique for global optimization, Applied Mathematics Research Express 2006 (2006) 1–19.
[26] S.T. Liu, R.T. Wang, A numerical solution method to interval quadratic programming, Applied Mathematics and Computation 189 (2) (2007) 1274–1281.
[27] T. Csendes, Generalized subinterval selection criteria for interval global optimization, Numerical Algorithms 37 (1–4) (2004) 93–100.
[28] T. Csendes, Interval analysis: subdivision directions in interval branch and bound methods, Springer, New York, 2009, pp. 1717–1721.
[29] V.I. Levin, Ordering of intervals and optimization problems with interval parameters, Cybernetics and Systems Analysis 40 (3) (2004) 316–323.