Applied Mathematics and Computation 134 (2003) 345–361 www.elsevier.com/locate/amc
Linearly constrained global optimization: a general solution algorithm with applications

Hossein Arsham a,*, Miro Gradisar b, Mojca Indihar Stemberger b

a Department of Information and Quantitative Sciences, Business Centre, University of Baltimore, 1420 North Charles Street, Baltimore, MD 21201-5779, USA
b University of Ljubljana, Faculty of Economics, 1000 Ljubljana, Slovenia
Abstract

This paper presents an efficient enumerative approach to solving general linearly constrained optimization problems. This class of optimization problems includes fractional, nonlinear network, quadratic, convex, and non-convex programs. The unified approach is accomplished by converting the constrained optimization problem to an unconstrained one through a parametric representation of its feasible region. The proposed solution algorithm consists of three phases. Phase 1 finds all interior critical points. In phase 2 the parametric representation of the feasible region is constructed to identify any critical points on the edges and faces of the feasible region; this is done by a modified version of an algorithm for finding the V-representation of the polyhedron. Then, in phase 3, the global optimal value of the objective function is found by evaluating the objective function at the critical points as well as at the vertices. Small numerical examples are presented to illustrate the algorithm and to compare it with existing methods. © 2002 Elsevier Science Inc. All rights reserved.

Keywords: Global optimization; Linearly constrained optimization; Polyhedra; Nonlinear programming
* Corresponding author.
E-mail addresses: [email protected] (H. Arsham), [email protected] (M. Gradisar), [email protected] (M. Indihar Stemberger).
1. Introduction

Linearly constrained optimization problems are extremely varied. They differ in the functional form of the objective function, in the constraints, and in the number of variables. Although the structure of this problem is simple, finding a global solution, and even detecting a local one, is known to be difficult. The simplest form of the problem arises when the objective function is linear; the resulting model is a linear program (LP). Other problems include fractional [4], nonlinear network models [3], quadratic, separable, convex, and non-convex programs.

There are well over 400 different solution algorithms for solving different kinds of linearly constrained optimization problems, but no single one is superior to the others in all cases. For example, in applying the Karush–Kuhn–Tucker (KKT) conditions, it may be difficult, if not essentially impossible, to derive an optimal solution directly [27]. The most promising numerical solution algorithm is the feasible direction method; however, if the objective function is non-convex, then the best one can hope for is convergence to a local optimal point.

Linear dependence among the linear constraints is commonplace in practical problems. Linear dependence among linear constraints that are binding at a point is commonly known as degeneracy, and such a point is called a degenerate point. The resolution of degeneracy at a vertex is essentially a combinatorial problem whose solution may require a significant amount of computation; see, e.g., [6,11,14,25]. Since our approach is an explicit enumeration technique, as opposed to a path-following method, it is free from the computational problems that degeneracy may cause.

This paper develops an effective alternative approach to solve general continuous optimization problems with linear constraints.
The unified approach is accomplished by converting the constrained optimization problem to an unconstrained one through a parametric representation of its feasible region. A similar idea is presented in [30], but the approach therein is limited to linearly constrained positive definite quadratic programming problems. The remainder of this paper is organized as follows. Section 2 presents the main phases of the algorithm together with some definitions. To illustrate the steps of the algorithm, Section 3 presents applications in the context of published numerical problems solved by other methods, which also serve for comparative study purposes. The proposed solution algorithm always finds the optimal solution, while in most cases the other methods fail. The last section contains the conclusions with some useful remarks.
2. New solution algorithm

We want to solve the following problem with a linear feasible region:

Problem P:  Maximize $f(x)$ subject to $Ax \le b$,

where some variables $x_i$ have explicit upper and/or lower bounds, some are unrestricted in sign, $A$ is an $m \times n$ matrix, $b$ is an $m$-vector, and $f$ is a continuous function. Problem P is a subset of a larger class of problems known as continuous global optimization [33], with diverse areas of application, for example [2,5,15–18,20–22,26,28,29,32,35]. The feasible region of problem P is obviously the set of points that defines a polyhedron. In the proposed solution we need to find all the critical points of the objective function $f$ inside and on the boundary of the polyhedron. A polyhedron with a finite number of vertices can be represented in two equivalent ways [1,7–9,19,36,37]: the H-representation and the V-representation.

Definition 1. An H-representation of the polyhedron is given by an $m \times n$ matrix $A = (a_{i,j})$ and an $m$-vector $b = (b_i)$:
$$S = \{x \in \mathbb{R}^n : Ax \le b\}.$$

Definition 2. A vertex $v \in \mathbb{R}^n$ is a point of $S$ that satisfies an affinely independent set of $n$ inequalities as equations.

Definition 3. An extreme ray $w \in \mathbb{R}^n$ is a direction such that for some vertex $v$ and any positive scalar $\mu$, $v + \mu w$ is in $S$ and satisfies some set of $n - 1$ affinely independent inequalities as equations.

Definition 4. A V-representation of $S$ is given by a minimal set of $M$ vertices $v_1, v_2, \ldots, v_M$ and $N$ extreme rays $w_1, w_2, \ldots, w_N$:
$$S = \Big\{x \in \mathbb{R}^n : x = \sum_{i=1}^{M} \lambda_i v_i + \sum_{j=1}^{N} \mu_j w_j,\ \ \lambda_i, \mu_j \ge 0,\ \ \sum_{i=1}^{M} \lambda_i = 1\Big\}.$$
Definition 5. A face of the polyhedron $S$ is a boundary set of $S$ containing points on a line, plane, or hyperplane. An edge is the line segment between two adjacent vertices. If the feasible region is bounded, the corresponding polyhedron is called a polytope; it has no extreme rays, and its V-representation is given by convex combinations of the vertices alone.
Example 6. The polyhedron in Fig. 1, defined by
$$x_1 - x_2 \le 1,\quad 2x_1 - x_2 \le 3,\quad x_1 \le 3,\quad x_1, x_2 \ge 0,$$
has four vertices and one extreme ray:
$$v_1 = \begin{pmatrix}0\\0\end{pmatrix},\ v_2 = \begin{pmatrix}1\\0\end{pmatrix},\ v_3 = \begin{pmatrix}2\\1\end{pmatrix},\ v_4 = \begin{pmatrix}3\\3\end{pmatrix},\ w_1 = \begin{pmatrix}0\\1\end{pmatrix},$$
which indicates that its parametric representation is given by
$$\{(x_1, x_2) : (x_1, x_2) = (\lambda_2 + 2\lambda_3 + 3\lambda_4,\ \lambda_3 + 3\lambda_4 + \mu),\ \lambda_1, \lambda_2, \lambda_3, \lambda_4, \mu \ge 0,\ \lambda_1 + \lambda_2 + \lambda_3 + \lambda_4 = 1\}.$$
For example, the edge connecting vertices $v_2$ and $v_3$ can be expressed as
$$\{(x_1, x_2) : (x_1, x_2) = (\lambda_2 + 2\lambda_3,\ \lambda_3),\ \lambda_2, \lambda_3 \ge 0,\ \lambda_2 + \lambda_3 = 1\}$$
and the unbounded edge defined by the vertex $v_4$ and the extreme ray $w_1$ as
$$\{(x_1, x_2) : (x_1, x_2) = (3,\ 3 + \mu),\ \mu \ge 0\}.$$

Definition 7. The parametric representation of the objective function $f$ is given by $f(x) = f(x(\lambda, \mu)) = f(\lambda, \mu)$.

Definition 8. A critical point of a continuous function is a point where the first partial derivatives are zero or undefined.
Fig. 1. Unbounded polyhedron.
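The V-representation in Example 6 can be sanity-checked numerically: every convex combination of the vertices plus a nonnegative multiple of the extreme ray must satisfy the H-representation. The following Python sketch (our own scaffolding; the inequality signs are as we read them from Example 6) samples random parameter values and verifies $Ax \le b$.

```python
import random

# H-representation of the Example 6 polyhedron (signs as we read them):
#   x1 - x2 <= 1,   2*x1 - x2 <= 3,   x1 <= 3,   -x1 <= 0,   -x2 <= 0
A = [(1, -1), (2, -1), (1, 0), (-1, 0), (0, -1)]
b = [1, 3, 3, 0, 0]

# V-representation: four vertices and one extreme ray.
vertices = [(0, 0), (1, 0), (2, 1), (3, 3)]
ray = (0, 1)

random.seed(0)
violations = 0
for _ in range(1000):
    raw = [random.random() for _ in vertices]
    total = sum(raw)
    lam = [r / total for r in raw]        # lambda_i >= 0, sum(lambda) = 1
    mu = random.uniform(0, 5)             # mu >= 0 (weight of the extreme ray)
    x1 = sum(l * v[0] for l, v in zip(lam, vertices)) + mu * ray[0]
    x2 = sum(l * v[1] for l, v in zip(lam, vertices)) + mu * ray[1]
    if any(a1 * x1 + a2 * x2 > bi + 1e-9 for (a1, a2), bi in zip(A, b)):
        violations += 1
print("violations:", violations)
```

Every sampled point lies in the polyhedron, as Definition 4 requires.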
Theorem 9. Let the objective function $f(x) \in C^1$ and suppose that the feasible region is given in the V-representation defined above. If we are looking for the critical points in the open domain defined by a subset of vertices $v_1, \ldots, v_s$ and a subset of extreme rays $w_1, \ldots, w_t$, then we have to solve the system
$$\frac{\partial f}{\partial \lambda_1} = \frac{\partial f}{\partial \lambda_2} = \cdots = \frac{\partial f}{\partial \lambda_s},$$
$$\frac{\partial f}{\partial \mu_1} = \frac{\partial f}{\partial \mu_2} = \cdots = \frac{\partial f}{\partial \mu_t} = 0,$$
$$\lambda_1 + \lambda_2 + \cdots + \lambda_s = 1,\qquad \lambda_i, \mu_j > 0.$$

Proof. Suppose that the feasible region is defined by $M$ vertices and $N$ extreme rays. We need the partial derivatives of the objective function $f$ with respect to each $\lambda_i$, $1 \le i \le M$, and each $\mu_j$, $1 \le j \le N$. By the chain rule,
$$\frac{\partial f}{\partial \lambda_i} = \sum_{k=1}^{n} \frac{\partial f}{\partial x_k}\,\frac{\partial x_k}{\partial \lambda_i},\qquad \frac{\partial f}{\partial \mu_j} = \sum_{k=1}^{n} \frac{\partial f}{\partial x_k}\,\frac{\partial x_k}{\partial \mu_j}.$$
To find the critical points on the open domain defined by the vertices $v_1, \ldots, v_s$ and extreme rays $w_1, \ldots, w_t$ we have to find the critical points of the parametric objective function over the domain
$$\lambda_1 + \cdots + \lambda_s = 1,\quad 0 < \lambda_1, \ldots, \lambda_s < 1,\quad \mu_1, \ldots, \mu_t > 0,$$
$$\lambda_{s+1} = \cdots = \lambda_M = \mu_{t+1} = \cdots = \mu_N = 0.$$
We construct the Lagrangian
$$L(\lambda_1, \ldots, \lambda_s, \mu_1, \ldots, \mu_t, \gamma) = f(\lambda_1, \ldots, \lambda_s, \mu_1, \ldots, \mu_t) + \gamma(1 - \lambda_1 - \cdots - \lambda_s).$$
In order to find the critical points we have to solve the system
$$\frac{\partial L}{\partial \lambda_1} = \frac{\partial f}{\partial \lambda_1} - \gamma = 0,\quad \ldots,\quad \frac{\partial L}{\partial \lambda_s} = \frac{\partial f}{\partial \lambda_s} - \gamma = 0,$$
$$\frac{\partial L}{\partial \mu_1} = \frac{\partial f}{\partial \mu_1} = 0,\quad \ldots,\quad \frac{\partial L}{\partial \mu_t} = \frac{\partial f}{\partial \mu_t} = 0,$$
$$\frac{\partial L}{\partial \gamma} = 1 - \lambda_1 - \cdots - \lambda_s = 0.$$
Eliminating $\gamma$ from the system gives
$$\frac{\partial f}{\partial \lambda_1} = \frac{\partial f}{\partial \lambda_2} = \cdots = \frac{\partial f}{\partial \lambda_s},\qquad \frac{\partial f}{\partial \mu_1} = \cdots = \frac{\partial f}{\partial \mu_t} = 0,\qquad \lambda_1 + \lambda_2 + \cdots + \lambda_s = 1.\quad\square$$
Clearly we can use the partial derivatives that were already found; they can be reused for the other open domains as well.

In the proposed solution algorithm we need to find critical points. The domain must be an open set for the definition of a derivative, which is a two-sided limit. Therefore, we solve unconstrained problems over the relevant open sub-domains of the feasible region. First, we find critical points in the interior of the feasible region. Next, we find critical points in the interiors of the faces of the feasible region, and then in the interiors of the edges (i.e., line segments). Finally, by evaluating the objective function at the critical points and at the vertices of the feasible region, the global optimal solution is found. Therefore, in solving an n-dimensional problem, we solve unconstrained optimization problems in n, n−1, ..., 1 dimensions. Thus, removing the constraints by the proposed algorithm reduces the constrained optimization to unconstrained problems, which can be dealt with more easily. Fig. 2 provides an overview of the algorithm's process strategy.

Fig. 2. Linearly constrained global optimization process.

The second phase of the algorithm can be implemented by one of the algorithms for finding the vertices, extreme rays, edges, and faces of a polyhedron. There are essentially two main approaches to the problem of generating all the vertices of a polyhedron, both with origins in the 1950s. The double description method [31] involves building the polytope sequentially by adding the defining inequalities one at a time. Recent algorithms and practical implementations of this method have been developed by Fukuda and co-workers [12,23,24]. The second method for finding all the vertices and extreme rays of the polyhedron involves pivoting around the skeleton of the polytope. An efficient method using this approach is the reverse search method by Avis and Fukuda [10] and its revised version [9].

In phase 3 of the algorithm we have to find the critical points over the open domains. We could construct the parametric version of the objective function over each domain and look for its critical points, but since there may be many such domains it is more efficient to use the theorem stated earlier. Not all problems go through all three phases. For example, if we are looking for a maximum, the objective function is concave, and an interior critical point has been found, then we know that this is the optimal point. Similarly, if we are looking for a minimum, the objective function is convex, and an interior critical point has been found, then we can be sure that this is the optimal point.
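The chain rule used in the proof of Theorem 9 is easy to check numerically. The sketch below (our own scaffolding, reusing Example 6's vertices and, purely as a sample smooth objective, the function from Example 12) compares the chain-rule derivative $\partial f/\partial \lambda_i$ with a finite-difference quotient of the parametric map.

```python
# Numeric sanity check of the chain rule behind Theorem 9, on the
# V-representation of Example 6 with a sample smooth objective.
vertices = [(0, 0), (1, 0), (2, 1), (3, 3)]
ray = (0, 1)

def f(x1, x2):
    return x1 - x2 ** 2 + 4 * x2     # sample objective (Example 12's)

def x_of(lam, mu):
    # Parametric map x(lambda, mu) from Definition 4.
    x1 = sum(l * v[0] for l, v in zip(lam, vertices)) + mu * ray[0]
    x2 = sum(l * v[1] for l, v in zip(lam, vertices)) + mu * ray[1]
    return x1, x2

def df_dlam(i, lam, mu):
    # Chain rule: df/dlam_i = sum_k (df/dx_k)(dx_k/dlam_i), where
    # dx_k/dlam_i is simply the k-th coordinate of vertex v_i.
    x1, x2 = x_of(lam, mu)
    grad = (1.0, -2 * x2 + 4)        # gradient of f in x
    return grad[0] * vertices[i][0] + grad[1] * vertices[i][1]

lam, mu, h = [0.1, 0.2, 0.3, 0.4], 0.5, 1e-6
max_err = 0.0
for i in range(len(vertices)):
    bumped = list(lam)
    bumped[i] += h   # differentiate the parametric map itself; the simplex
    # constraint sum(lam) = 1 is handled by the Lagrangian, not here
    fd = (f(*x_of(bumped, mu)) - f(*x_of(lam, mu))) / h
    max_err = max(max_err, abs(fd - df_dlam(i, lam, mu)))
print("max chain-rule error:", max_err)
```

The finite-difference quotients agree with the chain-rule values to first order in the step size.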
The next section contains some diverse applications that illustrate the usefulness of the proposed method.
3. Numerical examples

In the following examples we demonstrate how the proposed solution algorithm can be used to solve general optimization problems over a polyhedron $S$. For all the examples our method finds the optimal solution, which in many cases was not found by other methods, although the problems are rather simple.

Example 10. The following is a "test" problem:
$$\text{Min } f(x_1, x_2) = -4x_1^3 + 3x_1 - 6x_2$$
subject to:
$$x_1 + x_2 \le 4,\quad 2x_1 + x_2 \le 5,\quad -x_1 + 4x_2 \ge 2,\quad x_1, x_2 \ge 0.$$
This test problem is a non-convex program from [34, pp. 133–136], chosen there as an example to demonstrate the difficulty of finding the global solution for this type of problem by any existing method. In [34] the behavior of the objective function over the feasible region, drawn in Fig. 3, is investigated by plotting some iso-value curves of the objective function to illustrate the difficulties involved in solving general optimization problems, even for this small test problem.
Fig. 3. Feasible region for Example 10.
In [34] an excellent and detailed discussion of the serious difficulties is provided, such as:
• non-convex programs are more difficult to solve than convex programs,
• even specialized algorithms such as separable programming methods can guarantee no more than a local optimum for such problems, and
• although such problems may be handled by interior point methods such as the penalty function algorithm, this is computationally expensive.
Accordingly, [34] detects the optimal solution only by graphical observation, not by any algorithmic approach. We solve this test problem with the proposed algorithm, without referring to any graphical method. First we look for interior critical points. The partial derivatives are
$$\frac{\partial f}{\partial x_1} = -12x_1^2 + 3,\qquad \frac{\partial f}{\partial x_2} = -6,$$
which means that the function has no interior critical points, since $\partial f/\partial x_2$ never vanishes. Applying the results of Section 2, the vertices of the feasible region are
$$v_1 = \begin{pmatrix}2\\1\end{pmatrix},\quad v_2 = \begin{pmatrix}1\\3\end{pmatrix},\quad v_3 = \begin{pmatrix}0\\4\end{pmatrix},\quad v_4 = \begin{pmatrix}0\\1/2\end{pmatrix}$$
and the edges are $e_1 = (v_1, v_2)$, $e_2 = (v_1, v_4)$, $e_3 = (v_2, v_3)$, $e_4 = (v_3, v_4)$. The values of the objective function at the vertices are presented in Table 1. The parametric representation of the feasible region is
$$x_1 = 2\lambda_1 + \lambda_2,\qquad x_2 = \lambda_1 + 3\lambda_2 + 4\lambda_3 + \tfrac{1}{2}\lambda_4,$$
where $0 \le \lambda_1, \lambda_2, \lambda_3, \lambda_4 \le 1$ and $\lambda_1 + \lambda_2 + \lambda_3 + \lambda_4 = 1$.

Table 1
Function values at the vertices for Example 10

Vertex   Coordinates   f(x)
v1       (2, 1)        -32
v2       (1, 3)        -19
v3       (0, 4)        -24
v4       (0, 1/2)      -3
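A brute-force cross-check of Table 1 and of the global minimum can be run in a few lines of Python (the signs of $f$ and of the constraints are our reconstruction from the vertex values, since the original print lost them):

```python
# Brute-force cross-check of Example 10 (signs are our reconstruction).
def f(x1, x2):
    return -4 * x1 ** 3 + 3 * x1 - 6 * x2

def feasible(x1, x2, eps=1e-9):
    return (x1 + x2 <= 4 + eps and 2 * x1 + x2 <= 5 + eps
            and -x1 + 4 * x2 >= 2 - eps and x1 >= -eps and x2 >= -eps)

# Vertex values should reproduce Table 1.
table = {(2, 1): -32, (1, 3): -19, (0, 4): -24, (0, 0.5): -3}
mismatch = [v for v in table if f(*v) != table[v]]

# Coarse grid search over the feasible region for the minimum.
best = min(f(i / 200, j / 200)
           for i in range(0, 501) for j in range(0, 801)
           if feasible(i / 200, j / 200))
print(mismatch, best)
```

The grid minimum agrees with the value $-32$ found at the vertex $(2, 1)$.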
The next step is to find the critical points on the four edges. We need the partial derivatives of the objective function with respect to $\lambda_1, \lambda_2, \lambda_3, \lambda_4$. By the chain rule we get
$$\frac{\partial f}{\partial \lambda_1} = 2\frac{\partial f}{\partial x_1} + \frac{\partial f}{\partial x_2} = -24x_1^2,$$
$$\frac{\partial f}{\partial \lambda_2} = \frac{\partial f}{\partial x_1} + 3\frac{\partial f}{\partial x_2} = -12x_1^2 - 15,$$
$$\frac{\partial f}{\partial \lambda_3} = 4\frac{\partial f}{\partial x_2} = -24,$$
$$\frac{\partial f}{\partial \lambda_4} = \frac{1}{2}\frac{\partial f}{\partial x_2} = -3.$$
To find the critical points on the edge $e_1 = (v_1, v_2)$ we solve the system $\partial f/\partial \lambda_1 = \partial f/\partial \lambda_2$ over the domain $\lambda_1 + \lambda_2 = 1$, $0 < \lambda_1, \lambda_2 < 1$, $\lambda_3 = \lambda_4 = 0$. We get $-24x_1^2 = -12x_1^2 - 15$. Since $x_1 = \lambda_1 + 1$ and $x_2 = -2\lambda_1 + 3$, the critical point is $x_1 = \sqrt{5}/2$, $x_2 = 5 - \sqrt{5}$, with objective function value $-18.82$.

To find the critical points on the edge $e_2 = (v_1, v_4)$ we solve the system $\partial f/\partial \lambda_1 = \partial f/\partial \lambda_4$ over the domain $\lambda_1 + \lambda_4 = 1$, $0 < \lambda_1, \lambda_4 < 1$, $\lambda_2 = \lambda_3 = 0$. We get $-24x_1^2 = -3$. Since $x_1 = 2\lambda_1$ and $x_2 = \tfrac{1}{2}\lambda_1 + \tfrac{1}{2}$, the critical point is $x_1 = 1/(2\sqrt{2})$, $x_2 = 1/(8\sqrt{2}) + 1/2$, with objective function value $-2.65$.

Similarly, on the edge $e_3$ joining the vertices $v_2$ and $v_3$ we get the critical point $x_1 = \sqrt{3}/2$, $x_2 = 4 - \sqrt{3}/2$, with $f(x_1, x_2) = -18.80$. On the remaining edge $e_4$ there is no critical point. Now, comparing the values of $f$ at all the critical points and at the vertices, the global optimal solution $x_1 = 2$, $x_2 = 1$ yields the global optimal value $-32$.

Example 11. Another interesting example to which the proposed algorithm can be efficiently applied can be found in [13]. The author proposes an algorithm for solving a linear fractional functionals program when some of its constraints are homogeneous:
$$\text{Max } f(x_1, x_2) = \frac{2x_1 + 6x_2}{x_1 + x_2 + 1}$$
subject to:
$$x_1 + x_2 \le 4,\quad 3x_1 + x_2 \ge 6,\quad x_1 - x_2 = 0,\quad x_1, x_2 \ge 0.$$
The partial derivatives of the objective function are
$$\frac{\partial f}{\partial x_1} = \frac{-4x_2 + 2}{(x_1 + x_2 + 1)^2},\qquad \frac{\partial f}{\partial x_2} = \frac{4x_1 + 6}{(x_1 + x_2 + 1)^2}.$$
The gradient vanishes at $x_1 = -3/2$, $x_2 = 1/2$, but this point is not feasible. Another candidate for a critical point is any point on the line $x_1 + x_2 + 1 = 0$, where the gradient is not defined, but none of these points is feasible either. The vertices of the feasible region and the corresponding objective function values are listed in Table 2.
Table 2
Vertices and function values at the vertices for Example 11

Vertex   Coordinates    f(x)
v1       (3/2, 3/2)     3
v2       (2, 2)         3.2
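Because the objective is a ratio of linear functions, the vertex values in Table 2 can be confirmed in exact rational arithmetic. The check below (our own scaffolding) also verifies that $f$ increases along the feasible segment $x_1 = x_2$, so the maximum must occur at the vertex $(2, 2)$.

```python
from fractions import Fraction as F

# Exact-arithmetic check of Table 2 and of monotonicity of the
# fractional objective along the feasible segment x1 = x2.
def f(x1, x2):
    return (2 * x1 + 6 * x2) / (x1 + x2 + 1)

vals = (f(F(3, 2), F(3, 2)), f(F(2), F(2)))      # Table 2 values

# On the segment x1 = x2 = t with 3/2 <= t <= 2, f(t, t) = 8t/(2t + 1),
# which is strictly increasing, so the maximum sits at the vertex t = 2.
seg = [f(F(t, 100), F(t, 100)) for t in range(150, 201)]
increasing = all(a < b for a, b in zip(seg, seg[1:]))
print(vals, increasing)
```

Fractions avoid the rounding that a floating-point check of 3.2 = 16/5 would introduce.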
The feasible region for this example consists only of the line segment between the two vertices. Since we already know that there is no interior critical point, we do not have to look for critical points on the line segment $(v_1, v_2)$. Therefore, the optimal solution is $x_1 = 2$, $x_2 = 2$, $f(x_1, x_2) = 3.2$, which is superior to the one found by Chandra, who reported the solution $x_1 = 3/2$, $x_2 = 3/2$, $f(x_1, x_2) = 3$.

Example 12. This example shows the use of the proposed algorithm in the case of an unbounded feasible region. Suppose we wish to maximize
$$f(x_1, x_2) = x_1 - x_2^2 + 4x_2$$
subject to the same feasible region as in Example 6. The gradient of the objective function is $(1, -2x_2 + 4)$, which is never zero. Therefore, there is no critical point in the interior of the feasible region. The vertices of the feasible region and the corresponding objective function values are listed in Table 3. The parametric representation of the feasible region is given by
$$x_1 = \lambda_2 + 2\lambda_3 + 3\lambda_4,\qquad x_2 = \lambda_3 + 3\lambda_4 + \mu,$$
where $\lambda_1 + \lambda_2 + \lambda_3 + \lambda_4 = 1$, $0 \le \lambda_1, \lambda_2, \lambda_3, \lambda_4 \le 1$, $\mu \ge 0$. The partial derivatives of the objective function with respect to each $\lambda$ and $\mu$ are
$$\frac{\partial f}{\partial \lambda_1} = 0,\quad \frac{\partial f}{\partial \lambda_2} = 1,\quad \frac{\partial f}{\partial \lambda_3} = -2x_2 + 6,\quad \frac{\partial f}{\partial \lambda_4} = -6x_2 + 15,$$

Table 3
Vertices and function values at the vertices for Example 12

Vertex   Coordinates   f(x)
v1       (0, 0)        0
v2       (1, 0)        1
v3       (2, 1)        5
v4       (3, 3)        6
$$\frac{\partial f}{\partial \mu} = -2x_2 + 4.$$
To find the critical points on the interior of the unbounded edge $e_2 = (v_1, w_1)$, where $w_1 = (0, 1)$ is the extreme ray, we solve
$$\frac{\partial f}{\partial \mu} = -2x_2 + 4 = 0$$
over the domain $\mu > 0$. Since $x_1 = 0$ and $x_2 = \mu$, the critical point is $x_1 = 0$, $x_2 = 2$, with objective function value 4. There are no other critical points on the remaining edges. Since we are looking for a maximum, the global optimal solution is $x_1 = 3$ and $x_2 = 3$, with optimal value 6.
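A short script (our own scaffolding, with the objective as we read it) reproduces Table 3 and the edge critical point $\mu = 2$:

```python
# Cross-check of Example 12: Table 3 values and the critical point
# mu = 2 on the unbounded edge from v1 = (0, 0) in direction (0, 1).
def f(x1, x2):
    return x1 - x2 ** 2 + 4 * x2

table3 = [f(0, 0), f(1, 0), f(2, 1), f(3, 3)]    # expect 0, 1, 5, 6

# On the edge {(0, mu) : mu >= 0}, df/dmu = -2*mu + 4 vanishes at mu = 2.
edge_peak = f(0, 2)
samples = [f(0, m / 10) for m in range(0, 101)]  # mu sampled in [0, 10]
print(table3, edge_peak, max(samples))
```

The sampled values confirm that 4 is the largest value on that edge and that $f$ decreases without bound as $\mu$ grows, so the overall maximum 6 is attained at the vertex $(3, 3)$.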
Example 13. The last example is from [30], where the authors sought the optimal solution of the following problem:
$$\text{Max } f(x_1, x_2) = -x_1^2 - x_2^2 + 16(x_1 + x_2)$$
subject to:
$$x_1 - 3x_2 \le 0,\quad 3x_1 - 2x_2 \le 6,\quad 6x_1 + 7x_2 \le 42,$$
$$-3x_1 + 5x_2 \ge 15,\quad 6x_1 + x_2 \le 6,\quad 11x_1 + 4x_2 \le 44,\quad x_1, x_2 \ge 0.$$
They reported the optimal solution $x_1 = 2.64$, $x_2 = 3.74$ with maximum value 81.10. When testing the proposed method on this example we found that the solution reported by the authors is not feasible and that the objective function value at this point is $-20.96$. We think the authors made a mistake in the formulation of the problem. The correct formulation should probably be:
Fig. 4. Feasible region for Example 13.
$$\text{Max } f(x_1, x_2) = -x_1^2 - x_2^2 + 16(x_1 + x_2)$$
subject to:
$$x_1 - 3x_2 \le 0,\quad 3x_1 - 2x_2 \le 6,\quad 6x_1 + 7x_2 \le 42,$$
$$-3x_1 + 5x_2 \le 15,\quad 6x_1 + x_2 \ge 6,\quad 11x_1 + 4x_2 \le 44,\quad x_1, x_2 \ge 0.$$
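As a sanity check on the sign correction (the constraint signs are our best reconstruction of the two formulations), a few lines of Python confirm that the reported point $(2.64, 3.74)$ violates the constraints as first printed but satisfies the corrected set. A small tolerance absorbs the two-decimal rounding of the reported point.

```python
# Feasibility check of the point reported in [30]; constraint signs are our
# reconstruction, and TOL absorbs the two-decimal rounding of the point.
TOL = 0.05

def feasible(x1, x2, fourth_le, fifth_ge):
    checks = [
        x1 - 3 * x2 <= 0 + TOL,
        3 * x1 - 2 * x2 <= 6 + TOL,
        6 * x1 + 7 * x2 <= 42 + TOL,
        (-3 * x1 + 5 * x2 <= 15 + TOL) if fourth_le
        else (-3 * x1 + 5 * x2 >= 15 - TOL),
        (6 * x1 + x2 >= 6 - TOL) if fifth_ge
        else (6 * x1 + x2 <= 6 + TOL),
        11 * x1 + 4 * x2 <= 44 + TOL,
        x1 >= 0 and x2 >= 0,
    ]
    return all(checks)

p = (2.64, 3.74)                       # solution reported in [30]
as_printed = feasible(*p, fourth_le=False, fifth_ge=False)
corrected = feasible(*p, fourth_le=True, fifth_ge=True)
print(as_printed, corrected)
```

With the corrected signs, the point is the vertex at which the third and sixth constraints are binding.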
Note that the inequality signs of the fourth and fifth constraints are reversed. The feasible region is shown in Fig. 4. With the corrected formulation, the proposed method finds the same solution as [30]; it is one of the vertices of the feasible region.

4. Conclusion

We have presented a new solution algorithm for general linearly constrained optimization problems with a continuous objective function. For a polyhedron specified by a set of linear equalities and/or inequalities, the proposed solution algorithm utilizes its parametric representation. This parametric
representation of the feasible region enables us to solve a large class of optimization problems, including linear fractional and quadratic programs. The key to this generalized solution algorithm is that the constrained optimization problem is converted to an unconstrained one through a parametric representation of the feasible region. It compares favorably with other methods for this type of problem. Unlike other general-purpose solution methods, such as the generalized reduced gradient method, the proposed algorithm guarantees a global optimal solution; it is simple, has potential for wide adaptation, and deals with all cases. However, this does not imply that all distinctions among problems should be ignored: one can incorporate the special characteristics of a problem to tailor the proposed algorithm to it. While the Lagrange and KKT (penalty-based) methods "appear" to remove the constraints by using a linear (or nonlinear) combination of the constraints in a penalty function, the proposed solution algorithm removes the constraints by using the convex combination of the vertices. The main drawback of the proposed algorithm is that all the vertices of the feasible region have to be found. Vertex enumeration is a hard problem even with recent progress, so we are not able to solve large-scale problems; however, problems with up to 20 dimensions can be solved in reasonable time. The main advantages of the presented algorithm are that
• it covers all linearly constrained optimization problems, and
• it always finds the optimal solution.
There are many problems in the literature for which the proposed algorithm finds the optimal solution and others do not. Areas for future research include the development of possible refinements; immediate work is the development of an efficient computer code implementing the approach, together with a comparative computational study.
Acknowledgements

We appreciate the referees' comments and suggestions. This work is supported by The National Science Foundation Grant CCR-9505732.

References

[1] W. Altherr, An algorithm for enumerating the vertices of a convex polyhedron, Computing 15 (1975) 181–193.
[2] A. Arbel, S. Shmuel, Using approximate gradients in developing an interactive interior primal-dual multiobjective linear programming algorithm, Eur. J. Oper. Res. 1 (89) (1996) 202–211.
[3] H. Arsham, A comprehensive simplex-like algorithm for network optimization and perturbation analysis, Optimization 32 (1995) 211–267.
[4] H. Arsham, A complete algorithm for linear fractional programs, Comput. Math. Appl. 20 (2) (1990) 11–23.
[5] H. Arsham, Affine geometric method for linear programs, J. Sci. Comput. 12 (1997) 289–303.
[6] H. Arsham, Initialization of the simplex algorithm: An artificial-free approach, SIAM Rev. 39 (1997) 736–744.
[7] H. Arsham, Computational geometry and linear programs, Int. J. Math. Algorithms 1 (1999) 251–266.
[8] H. Arsham, M. Gradisar, M. Stemberger, Continuous global optimization over polyhedrons: A new introduction to optimization, Int. Math. J. 1 (1) (2002) 37–52.
[9] D. Avis, A revised implementation of the reverse search vertex enumeration algorithm, in: G. Kalai, G. Ziegler (Eds.), Polytopes – Combinatorics and Computation, DMV-Seminar 29, Birkhäuser, Basel, 2000.
[10] D. Avis, K. Fukuda, A pivoting algorithm for convex hulls and vertex enumeration of arrangements and polyhedra, Discrete Comput. Geom. 8 (1992) 295–313.
[11] M. Best, R. Caron, The simplex method and unrestricted variables, J. Optim. Theory Appl. 45 (1985) 33–39.
[12] D. Bremner, K. Fukuda, A. Marzetta, Primal-dual methods for vertex and facet enumeration, in: Proceedings of the 13th Annual ACM Symposium on Computational Geometry, 1997, pp. 49–56.
[13] S.S. Chandra, A linear fractional program with homogeneous constraints, Opsearch 36 (4) (1999) 390–398.
[14] V. Chvatal, Linear Programming, Freeman, New York, 1983.
[15] L. Cooper, Applied Nonlinear Programming for Engineers and Scientists, Aloray, Englewood, NJ, 1974.
[16] L. Cooper, U. Bhat, L. LeBlanc, Introduction to Operations Research, Saunders, Philadelphia, 1977.
[17] L. Cooper, D. Steinberg, Introduction to Methods of Optimization, Saunders, Philadelphia, 1970.
[18] E. Dravnieks, J. Chinneck, Formulation assistance for global optimization problems, Comput. Oper. Res. 24 (1997) 1157–1168.
[19] M.E. Dyer, L.G. Proll, An algorithm for determining all extreme points of a convex polytope, Math. Prog. 12 (1977) 81–96.
[20] A. Fiacco, G. McCormick, Nonlinear Programming: Sequential Unconstrained Minimization Techniques, Classics in Applied Mathematics, SIAM, Philadelphia, PA, 1990.
[21] R. Fletcher, Practical Methods of Optimization, Wiley, New York, 1987.
[22] C.A. Floudas, P.M. Pardalos, A Collection of Test Problems for Constrained Global Optimization Algorithms, Lecture Notes in Computer Science 455, Springer, Berlin, 1990.
[23] K. Fukuda, V. Rosta, Combinatorial face enumeration in convex polytopes, Comput. Geom. 4 (1994) 191–198.
[24] K. Fukuda, T. Liebling, F. Margot, Analysis of backtrack algorithms for listing all vertices and all faces of a convex polyhedron, Comput. Geom.: Theory Appl. 8 (1997) 1–12.
[25] P. Gill, W. Murray, M. Saunders, M. Wright, A practical anti-cycling procedure for linearly constrained optimization, Math. Prog.: Ser. B 45 (3) (1989) 437–474.
[26] L. Heeseok, Simultaneous determination of capacities and load in parallel M/M/1 queues, Eur. J. Oper. Res. 73 (1) (1994) 95–102.
[27] F. Hillier, G. Lieberman, Introduction to Operations Research, sixth ed., McGraw-Hill, New York, 1995.
[28] L. Kin-nam, L. Pui-lam, T. Ka-kit, A mathematical programming approach to clusterwise regression model and its extensions, Eur. J. Oper. Res. 116 (3) (1999) 640–652.
[29] C. Kelley, Iterative Methods for Linear and Nonlinear Equations, SIAM, Philadelphia, PA, 1995.
[30] I.B. Mohd, Y. Dasril, Constraint exploration method for quadratic programming problem, Appl. Math. Comput. 112 (2000) 161–170.
[31] T.S. Motzkin, H. Raiffa, G.L. Thompson, R.M. Thrall, The double description method, Ann. Math. Stud. 8 (1953) 51–73.
[32] C. Nilotpal, Some results concerning post-infeasibility analysis, Eur. J. Oper. Res. 73 (1) (1994) 139–143.
[33] J. Pinter, Continuous global optimization: An introduction to models, solution approaches, tests and applications, ITORMS 2, 1999.
[34] H. Williams, Model Building in Mathematical Programming, Wiley, Chichester, UK, 1999.
[35] W. Winston, Introduction to Mathematical Programming: Applications and Algorithms, Wadsworth, New York, 1997.
[36] T. Yamada, J. Yoruzuya, K. Kataoka, Enumerating extreme points of a highly degenerate polytope, Comput. Oper. Res. 21 (1994) 397–410.
[37] G. Ziegler, Lectures on Polytopes, Springer, New York, 1995.