Computers chem. Engng Vol. 21, No. 4, pp. 351-369, 1997
Copyright © 1996 Elsevier Science Ltd. Printed in Great Britain. All rights reserved.
PII: S0098-1354(96)00282-7    0098-1354/97 $17.00+0.00
GLOBAL OPTIMIZATION IN GENERALIZED GEOMETRIC PROGRAMMING

COSTAS D. MARANAS and CHRISTODOULOS A. FLOUDAS

Department of Chemical Engineering, Princeton University, Princeton, NJ 08544-5263, U.S.A.

(Received 17 July 1995; revised 24 January 1996)

Abstract: A deterministic global optimization algorithm is proposed for locating the global minimum of generalized geometric (signomial) problems (GGP). By utilizing an exponential variable transformation the initial nonconvex problem (GGP) is reduced to a (DC) programming problem in which both the constraints and the objective are decomposed into the difference of two convex functions. A convex relaxation of problem (DC) is then obtained based on linear lower bounding of the concave parts of the objective function and constraints inside some box region. The proposed branch and bound type algorithm attains finite ε-convergence to the global minimum through the successive refinement of a convex relaxation of the feasible region and/or of the objective function and the subsequent solution of a series of nonlinear convex optimization problems. The efficiency of the proposed approach is enhanced by eliminating variables through monotonicity analysis, by maintaining tightly bound variables through rescaling, by further improving the supplied variable bounds through convex minimization, and finally by transforming each inequality constraint so that the lower bounding of the concave parts is as tight as possible. The proposed approach is illustrated with a large number of test examples and robust stability analysis problems. Copyright © 1996 Elsevier Science Ltd

Keywords: global optimization; generalized geometric programming; signomials; robust stability analysis
Department of Chemical Engineering, The Pennsylvania State University, University Park, PA 16802, U.S.A.

1. INTRODUCTION

Generalized geometric or signomial programming (GGP) problems are characterized by an objective function and constraints which are the difference of two posynomials. A posynomial G(x) is simply the sum of a number of posynomial terms or monomials g_k(x), k = 1,…,K, multiplied by some positive real constants c_k, k = 1,…,K. Each monomial g_k(x) is in turn the product of a number of positive variables, each of them raised to some real power, g_k(x) = x_1^{d_1,k} x_2^{d_2,k} ··· x_N^{d_N,k}, k = 1,…,K, where d_1,k, d_2,k, …, d_N,k ∈ R and are not necessarily integers. The term geometric programming was adopted because of the key role that the well known arithmetic-geometric inequality played in the initial developments. Generalized geometric problems were first introduced and studied by Passy and Wilde (1967a) and Blau and Wilde (1969) when existing (posynomial) geometric programming (GP) formulations failed to account for the presence of negatively signed monomials in models for important engineering applications. These applications are extensively reviewed in Rijckaert and Martens (1978a) and Ecker and Wiebking (1978). Chemical engineering applications include heat exchanger network design (Duffin and Peterson, 1966), chemical reactor design (Blau and Wilde, 1969, 1971), optimal condenser design (Avriel and Wilde, 1967), oxygen production (Hellinckx and Rijckaert, 1972), membrane separation process design (Dembo, 1978b), optimal design of cooling towers (Ecker and Wiebking, 1978), chemical equilibrium problems (Passy and Wilde, 1967b), optimal control (Jefferson and Scott, 1978), batch plant modeling (Salomone and Iribarren, 1992; Hellinckx and Rijckaert, 1971), optimal location of hydrogen supply centers (Avriel and Gurovich, 1967) and many more. By grouping together monomials with identical sign, the generalized geometric (GGP) problem can be formulated as the following nonlinear optimization problem:
min_t  G_0(t) = G_0^+(t) − G_0^−(t)

subject to  G_j(t) = G_j^+(t) − G_j^−(t) ≤ 0,  j = 1,…,M     (GGP)
            t_i > 0,  i = 1,…,N

where

G_j^+(t) = Σ_{k∈K_j^+} c_jk Π_{i=1}^N t_i^{α_ijk},  j = 0,…,M,

G_j^−(t) = Σ_{k∈K_j^−} c_jk Π_{i=1}^N t_i^{α_ijk},  j = 0,…,M,

and t = (t_1,…,t_N) is the positive variable vector; G_j^+, G_j^−, j = 0,…,M are positive posynomial functions in t; the α_ijk are arbitrary real constant exponents; and the c_jk are positive coefficients. Also, the sets K_j^+, K_j^− count how many positively/negatively signed monomials form the posynomials G_j^+, G_j^−, respectively. In general, formulation (GGP) corresponds to a nonlinear optimization problem
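The split of a signomial into its positively and negatively signed posynomial parts can be sketched as follows; the coefficient and exponent data below are hypothetical, chosen only for illustration and not taken from the paper:

```python
# Evaluate a signomial G(t) = G_plus(t) - G_minus(t), where each part is a
# posynomial: a sum of monomials c_k * prod_i t_i**a_ik with c_k > 0.

def posynomial(t, terms):
    """terms: list of (c, exponents) with c > 0; t: positive variable values."""
    total = 0.0
    for c, exps in terms:
        mono = c
        for ti, a in zip(t, exps):
            mono *= ti ** a
        total += mono
    return total

def signomial(t, plus_terms, minus_terms):
    """G(t) = (sum of positively signed monomials) - (sum of negative ones)."""
    return posynomial(t, plus_terms) - posynomial(t, minus_terms)

# Hypothetical example: G(t1, t2) = 2*t1*t2**0.5 - 3*t1**-1*t2
plus_terms = [(2.0, (1.0, 0.5))]
minus_terms = [(3.0, (-1.0, 1.0))]
print(signomial([4.0, 1.0], plus_terms, minus_terms))  # 2*4*1 - 3*(1/4)*1 = 7.25
```

Each part is a posynomial exactly as defined above; only the relative sign of the two groups makes the overall function a signomial.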
with nonconvex objective function and/or constraint set. Note that if we set G_j^− = 0 for all j = 0,…,M then the mathematical model for (GGP) reduces to the (posynomial) geometric programming (GP) formulation which laid the foundation for the theory of generalized geometric (GGP) problems. Unlike (posynomial) (GP) problems, (GGP) problems remain nonconvex in both their primal and dual representations and no known transformation can convexify them. They may involve multiple local minima and/or nonconvex feasible regions and therefore are much more difficult problems to solve. Local optimization approaches for solving GGP problems include bounding procedures based on posynomial condensation (Duffin, 1970; Avriel and Williams, 1970, 1971; Duffin and Peterson, 1972; Avriel et al., 1975a,b; Jefferson and Scott, 1978); iterative solution of KKT conditions (Blau and Wilde, 1971; Kuester and Mize, 1973; Rijckaert and Martens, 1978b); and adaptations of general purpose nonlinear programming methods (Ratner et al., 1978; Rijckaert and Martens, 1978a; Kochenberger et al., 1973; Abrams and Wu, 1978; Bradley, 1973; Beck and Ecker, 1975; Haarhoff and Buys, 1970). A computational comparison of available codes for signomial programming is given in (Dembo, 1978a,b; Rijckaert and Martens, 1978b). While local optimization methods for solving GGP problems are ubiquitous, application of specialized global optimization algorithms to GGP problems is scarce. Existing global optimization algorithms such as GOP (Floudas and Visweswaran, 1990; Floudas and Visweswaran, 1993; Visweswaran and Floudas, 1993; Liu and Floudas, 1994) and αBB (Maranas and Floudas, 1992, 1993, 1994a,b) are not currently designed so as to handle efficiently the special structure of signomial terms. Falk (1973) proposed such a global optimization algorithm based on the exponential variable transformation of (GGP) and convex relaxation and branch and bounding in the space of exponents of the negative monomials (j = 1,…,M and k ∈ K_j^−). In this paper, (i) an alternative partitioning in the typically smaller space of variables i = 1,…,N is investigated; (ii) a number of features aimed at improving the efficiency of the branch and bound procedure are discussed; (iii) the convergence properties of the proposed approach are analyzed; and (iv) a number of example problems in the areas of engineering design and stability analysis are considered.
2. ANALYSIS

2.1. Difference of two convex functions transformation

The objective function and constraints in the original formulation (GGP) are in general nonconvex functions. Based on an eigenvalue analysis, it is quite straightforward to show that the Hessian matrices of these nonlinear functions may involve eigenvalues of either sign, implying that in general they are neither convex nor concave. However, by applying the transformation (Duffin, 1970) t_i = exp(z_i), i = 1,…,N, to the original formulation (GGP) we obtain the following programming problem (DC):

min_z  G_0(z) = G_0^+(z) − G_0^−(z)

subject to  G_j(z) = G_j^+(z) − G_j^−(z) ≤ 0,  j = 1,…,M     (DC)
            z_i^L ≤ z_i ≤ z_i^U,  i = 1,…,N

where

G_j^+(z) = Σ_{k∈K_j^+} c_jk exp(Σ_{i=1}^N α_ijk z_i),  j = 0,…,M,

G_j^−(z) = Σ_{k∈K_j^−} c_jk exp(Σ_{i=1}^N α_ijk z_i),  j = 0,…,M.

Because the exponential of a linear expression is convex, both the objective function and the constraints of formulation (DC) are the difference of two convex functions. Also, since z_i = log(t_i), the lower bounds t_i^L of all t_i must be strictly positive for such a reformulation to exist. This problem can be overcome by appropriately shifting all problematic original variables t_i so that t_i' = t_i + max{0, −t_i^L + ε}, ε > 0. Note that if G_j^−(z) = 0 for every j = 0,…,M then problem (DC) becomes a convex programming problem.

2.2. Lower bounding

A lower bound on the solution of problem (DC) can be obtained by solving a convex relaxation of (DC). Such a convex relaxation can be realized by underestimating every concave function −G_j^−(z) with a linear function −L_j^−(z) for every j = 0,…,M. This linear function is constructed by underestimating every implicitly separable term −exp(Σ_{i=1}^N α_ijk z_i) with a linear function. This defines the following relaxed convex programming problem (R), whose solution provides a lower bound on the solution of (DC):

min_z  G_0^conv(z) = G_0^+(z) − L_0^−(z)

subject to  G_j^conv(z) = G_j^+(z) − L_j^−(z) ≤ 0,  j = 1,…,M     (R)
            z_i^L ≤ z_i ≤ z_i^U,  i = 1,…,N

where

L_j^−(z) = Σ_{k∈K_j^−} c_jk [exp(Y_jk^L) + B_jk (Y_jk − Y_jk^L)],  j = 0,…,M,

Y_jk = Σ_{i=1}^N α_ijk z_i,

B_jk = [exp(Y_jk^U) − exp(Y_jk^L)] / (Y_jk^U − Y_jk^L),

Y_jk^L = Σ_{i=1}^N min(α_ijk z_i^L, α_ijk z_i^U),

Y_jk^U = Σ_{i=1}^N max(α_ijk z_i^L, α_ijk z_i^U).
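The exponential substitution underlying (DC) can be sanity-checked numerically: under t_i = exp(z_i) every monomial becomes an exponential of a linear form, which is convex in z. A minimal sketch with hypothetical coefficient and exponent data:

```python
import math, random

# Check on random points that t_i = exp(z_i) turns a monomial into an
# exp-of-linear term, and that each such term is midpoint-convex in z.

def monomial_t(t, c, exps):
    """c * prod_i t_i**a_i in the original t-space."""
    v = c
    for ti, a in zip(t, exps):
        v *= ti ** a
    return v

def term_z(z, c, exps):
    """Same monomial after the substitution: c * exp(sum_i a_i * z_i)."""
    return c * math.exp(sum(a * zi for a, zi in zip(exps, z)))

random.seed(0)
c, exps = 2.5, (1.3, -0.7)  # hypothetical coefficient and exponents
for _ in range(100):
    z = [random.uniform(-1, 1) for _ in range(2)]
    t = [math.exp(zi) for zi in z]
    assert abs(monomial_t(t, c, exps) - term_z(z, c, exps)) < 1e-9

# Midpoint convexity of exp-of-linear terms: f((x+y)/2) <= (f(x)+f(y))/2.
for _ in range(100):
    x = [random.uniform(-1, 1) for _ in range(2)]
    y = [random.uniform(-1, 1) for _ in range(2)]
    m = [(a + b) / 2 for a, b in zip(x, y)]
    assert term_z(m, c, exps) <= (term_z(x, c, exps) + term_z(y, c, exps)) / 2 + 1e-12
print("t = exp(z) maps monomials to convex exponential terms")
```

This is exactly why the positive parts G_j^+ need no treatment in (R): they are already convex in z, and only the negated parts −G_j^− require linear underestimation.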
Note that the linear underestimator −L_j^−(z) of −G_j^−(z) is composed of the sum of a number of linear functions, each one of which is a lower bound on an implicitly univariate function of the form −exp(Y). Clearly, the smaller the difference between the original functions G_j^−(z) and the linearizations L_j^−(z), the closer the solution of (R) will be to the solution of (DC). The quality of this lower bounding can be analyzed by examining the tightness of underestimation of every concave term of the form −exp(Y) with a linear function inside some interval [Y^L, Y^U]. Let Δ(Y) be the separation between the concave function −exp(Y) and its linear underestimator inside the interval [Y^L, Y^U]. Δ(Y) is concave in Y and it reaches its single maximum at

Y* = log{[exp(Y^U) − exp(Y^L)] / (Y^U − Y^L)}

with a value

Δ_max = exp(Y^L) (1 − Z + Z log Z),  where  Z = [exp(δ) − 1] / δ,  δ = Y^U − Y^L.

Note that as the interval width δ = Y^U − Y^L goes to zero, Z approaches one and therefore the maximum separation goes to zero as well: δ → 0, Z → 1, and Δ_max → 0. The rate at which this maximum separation goes to zero can be determined by the Taylor expansion of Δ_max(δ) at δ = 0:

Δ_max = exp(Y^L) [1 − (exp(δ) − 1)/δ + ((exp(δ) − 1)/δ) log((exp(δ) − 1)/δ)]
      = exp(Y^L) [δ²/8 + δ³/16 + 11δ⁴/576 + O(δ⁵)].

By considering only the first leading term of the positive series expansion we deduce that the rate at which Δ_max approaches zero as δ goes to zero is Δ_max ~ O(δ²). On the other hand, as δ goes to infinity, Δ_max goes to infinity as O(exp(δ)). The scaled maximum separation Δ_max/exp(Y^L) for different values of δ is shown in Table 1. Note that for small values of δ < ~0.5 the maximum separation Δ_max goes as O(δ²), following the theoretically derived predictions. Therefore, if for example a maximum separation of 10^-4 is required, an interval width of just 10^-2 suffices. On the other hand, for larger values of δ ≥ ~2 the maximum separation grows exponentially with δ, rendering this underestimating scheme inefficient.

Table 1. Maximum separation as a function of interval width

δ:                 10^-2       10^-1       1         2         5         10
Δ_max/exp(Y^L):    1.26×10^-5  1.31×10^-3  0.21187   1.51572   71.2807   14752.3

2.3. Variable scaling

To maintain a tight convex underestimation of the concave terms it is important to keep the ranges of the participating linear terms as narrow as possible. To this end, a variable rescaling procedure is proposed for the original formulation (GGP). This variable scaling can be accomplished if all α_ijk's are integers or rational numbers that can be reduced to integers. Note that, by preserving all constants in the transformed objective function and constraints, the same convergence and feasibility tolerances can be maintained. The proposed linear rescaling is the following:

t_i ← t_i^L + [(t_i^U − t_i^L) / (t_i^{U,new} − t_i^{L,new})] (t_i^{new} − t_i^{L,new}),  i = 1,…,N,

where t_i^{L,new}, t_i^{U,new} are selected so that log(t_i^{U,new}) − log(t_i^{L,new}) is small. The effect of this scaling on the maximum separation can be realized by considering the following simple example:

min  −t  subject to  0 < t^L = exp(Y^L) ≤ t ≤ t^U = exp(Y^U).

It was shown earlier that the maximum separation between the transformed concave objective function and the linear underestimation is

Δ_max = exp(Y^L) (1 − Z + Z log Z),  Z = [exp(δ) − 1] / δ,  δ = Y^U − Y^L.

The single variable t of the problem can be scaled as follows:

t ← t^L + [(t^U − t^L) / (t^{U,new} − t^{L,new})] (t^{new} − t^{L,new}).

Without loss of generality we select t^{L,new} = exp(Y^{L,new}) = 1, which means that Y^{L,new} = 0 and therefore Y^{U,new} = δ^{new}. The original minimization problem now becomes

min  −[exp(Y^L) + ((exp(Y^U) − exp(Y^L)) / (exp(δ^{new}) − 1)) (t^{new} − 1)]
subject to  1 ≤ t^{new} ≤ exp(δ^{new}).

The maximum separation after rescaling is now

Δ_max^{new} = exp(Y^L) [(exp(δ) − 1) / (exp(δ^{new}) − 1)] [1 − Z^{new} + Z^{new} log Z^{new}],

where Z^{new} = [exp(δ^{new}) − 1] / δ^{new}. The ratio of the maximum separation for the scaled example over the maximum separation for the original one is therefore

R(δ, δ^{new}) = Δ_max^{new} / Δ_max = {(exp(δ) − 1) [1 − Z^{new} + Z^{new} log Z^{new}]} / {(exp(δ^{new}) − 1) [1 − Z + Z log Z]}.

It must be emphasized, however, that the proposed variable scaling does not affect the scaling of the constraints. This implies that the same feasibility tolerance ε_f can still be utilized before and after the variable scaling. The effect of this rescaling procedure on the global optimization algorithm is illustrated in the motivating example of subsection 4.1.

2.4. Transformation of inequalities

The feasible region of the original problem (GGP) is defined by the part of the intersection of the nonlinear inequality constraints

G_j(t) = G_j^+(t) − G_j^−(t) ≤ 0,  j = 1,…,M,

which lies within the hyperrectangle defined by the box constraints

0 < t_i^L ≤ t_i ≤ t_i^U,  i = 1,…,N.

The convex relaxation (R) of (GGP) approximates the original feasible region with a relaxation of its convex hull. The closer this relaxation is to the convex hull, the tighter the lower bounding of the objective function will be. This motivates the need to preprocess the original nonlinear constraints so that the employed relaxation will be as tight as possible. This can be accomplished by taking advantage of the mathematical structure of each original constraint G_j(t) and the fact that all variables t are strictly positive. The positivity of t implies that for every nonlinear inequality constraint G_j(t) there exists a family of nonlinear constraints 𝒢_j whose elements are completely interchangeable with G_j(t) and are derived by multiplying the original constraint by t_1^{d_1j} t_2^{d_2j} ··· t_N^{d_Nj}, where the d_ij, i = 1,…,N, j = 1,…,M are arbitrarily selected real parameters. Note that even though the values of the elements of 𝒢_j might not be identical for every t, because t_1,…,t_N > 0 they always maintain the same sign as G_j(t). This implies that every element of 𝒢_j,

G_j(t, d_j) = Σ_{k∈K_j^+} c_jk Π_{i=1}^N t_i^{α_ijk + d_ij} − Σ_{k∈K_j^−} c_jk Π_{i=1}^N t_i^{α_ijk + d_ij},

is an equivalent representation of the original constraint G_j(t) and therefore can replace it in formulation (GGP). Clearly, unlike the variable scaling presented in the previous subsection, this multiplicative transformation affects the scaling of the problem. Therefore, appropriate changes have to be made to the feasibility as well as convergence tolerances to account for the changed scaling of the objective function and constraints. After applying the exponential transformation to the constraint G_j(t, d_j) we obtain

G_j(z, d_j) = Σ_{k∈K_j^+} c_jk exp[Σ_{i=1}^N (α_ijk + d_ij) z_i] − Σ_{k∈K_j^−} c_jk exp[Σ_{i=1}^N (α_ijk + d_ij) z_i],

which can replace the constraint G_j(z) in problem (DC). Even though all constraints G_j(z, d_j) ∈ 𝒢_j are interchangeable, this is not the case for their convex relaxations. By linearizing the concave terms G_j^−(z, d_j), different elements of 𝒢_j give rise to distinctly different convex relaxed constraints G_j^{conv}(z, d_j) whose domain of feasibility depends on the selection of d_j. Because the tightest lower bounding of G_j(z) is sought, for a given z the best selection of d_j is the one for which G_j^{conv}(z, d_j) is maximized:

d_j*(z) = arg max_{d_j} G_j^{conv}(z, d_j).

Note that it is not possible to consider the convex relaxations of all elements of 𝒢_j because their number is infinite. Unfortunately, the maximization problem is not only parametric in z but also, for a given z, as difficult to solve as the original problem (DC). Therefore, the following two criteria are utilized instead to guide the selection of a good value of d_j for every constraint j. The first criterion is to minimize the number of variables z_i participating in every concave function G_j^−(z, d_j). This requirement reflects the primary concern of any deterministic global optimization algorithm, which is to keep the number of variables over which branching is required at a minimum. The second criterion is associated with the observation that, for a given constraint j, a very important consideration when selecting d_j is to keep Y_jk^U − Y_jk^L as small as possible for every k ∈ K_j^−. Although the rescaling of variables aims at this, it is sometimes necessary to complement this rescaling with an appropriate selection of d_j. Because the primary concern is not to allow any interval Y_jk^U − Y_jk^L to become larger than ~2-5, rather than minimizing a weighted sum of the intervals Y_jk^U − Y_jk^L it appears more meaningful to minimize the maximum of Y_jk^U − Y_jk^L over all k ∈ K_j^− (worst case analysis). Furthermore, in practice, more than one convex relaxed function G_j^{conv}(z, d_j) per constraint j can be incorporated in formulation (R) to approximate the convex hull of constraint G_j(z) more tightly. Although no rigorous way is provided for selecting the optimal set of d_j's per constraint, computational experience reveals that by utilizing the proposed criteria the convex underestimation of every constraint j is improved.
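The practical cutoff cited above (keeping the widths Y^U − Y^L below about 2-5) can be checked against the closed-form separation of subsection 2.2. The sketch below verifies that the secant of exp dominates exp pointwise (so its negative underestimates −exp), and evaluates the scaled maximum separation for several interval widths; the interval endpoints are hypothetical:

```python
import math

def secant(yl, yu, y):
    """Chord of exp on [yl, yu]; by convexity of exp, secant(y) >= exp(y) there,
    so -secant is a valid linear underestimator of the concave term -exp."""
    b = (math.exp(yu) - math.exp(yl)) / (yu - yl)  # slope B_jk of the paper
    return math.exp(yl) + b * (y - yl)

def scaled_dmax(delta):
    """Maximum separation Delta_max / exp(Y^L) on an interval of width delta:
    1 - Z + Z*log(Z) with Z = (exp(delta) - 1)/delta."""
    z = (math.exp(delta) - 1.0) / delta
    return 1.0 - z + z * math.log(z)

# The secant never falls below exp on a hypothetical interval [-1, 2]:
for i in range(101):
    y = -1.0 + 3.0 * i / 100
    assert secant(-1.0, 2.0, y) >= math.exp(y) - 1e-12

print(round(scaled_dmax(2.0), 5))           # 1.51572, the Table 1 entry for delta = 2
print(scaled_dmax(5.0))                     # ~71.2807 (Table 1)
print(scaled_dmax(10.0))                    # ~14752.3 (Table 1)
print(scaled_dmax(0.01) / (0.01 ** 2 / 8))  # ~1: the O(delta^2/8) regime
```

The last line confirms the quadratic small-width regime, while the δ = 5 and δ = 10 values show the exponential blow-up that motivates both the variable rescaling and the d_j selection criteria.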
2.5. Reduction of rectangular partitions
As discussed in an earlier subsection, the convex relaxation (R) of the problem (DC) is aimed at deriving a lower bound on the solution of (DC). This lower bound is calculated by solving the convex programming problem (R) inside some hyperrectangle defined by

z_i^L ≤ z_i ≤ z_i^U,  i = 1,…,N.

Clearly, the smaller this hyperrectangle, the tighter the convex lower bounding of all G_j(z), j = 0,…,M, and therefore the closer the solution of (R) will be to the solution of (DC). Deterministic global optimization algorithms improve the quality of the obtained lower bounds by partitioning the initial hyperrectangle into smaller ones, and repeatedly solving (R) inside each one of them. A number of branching rules are utilized to make the subdivision process as efficient as possible. However, the employed branch and bound procedure is exponential in nature and therefore potentially very computationally expensive. This means that one would like to fathom all or part of a hyperrectangle region
without having to resort to the CPU demanding branch and bound whenever possible. To meet this objective, one can attempt to locate the minimum/maximum value of every variable z_i for which the nonlinear constraint set of (DC) remains feasible and the value of the objective function is at least as good as the best current upper bound U:

min/max  z_i',  i' = 1,…,N
subject to  G_j(z) ≤ 0,  j = 1,…,M
            G_0(z) ≤ U
            z_i^L ≤ z_i ≤ z_i^U,  i = 1,…,N.

However, this is a nonconvex problem of equivalent difficulty to the initial problem (DC). Instead, one can solve a relaxation of this problem by replacing all nonlinear constraints and the objective function with their respective convex relaxations:

min/max  z_i',  i' = 1,…,N     (BR)
subject to  G_j^{conv}(z) ≤ 0,  j = 1,…,M
            G_0^{conv}(z) ≤ U
            z_i^L ≤ z_i ≤ z_i^U,  i = 1,…,N.
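In one dimension the (BR) idea reduces to finding the largest (or smallest) variable value at which the relaxed constraints still hold. For a single increasing convex constraint this can be sketched with a bisection search; the constraint exp(z) ≤ U and all numbers below are hypothetical stand-ins for a (BR) subproblem:

```python
import math

def tighten_upper(g, bound, zl, zu, tol=1e-10):
    """Largest z in [zl, zu] with g(z) <= bound, for g increasing in z
    (a 1-D stand-in for solving max z_i subject to the convex relaxation)."""
    if g(zu) <= bound:
        return zu  # constraint never binds: no tightening possible
    lo, hi = zl, zu
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) <= bound:
            lo = mid
        else:
            hi = mid
    return lo

# Hypothetical relaxed constraint exp(z) <= 5 on the box z in [0, 3]:
z_new_upper = tighten_upper(math.exp, 5.0, 0.0, 3.0)
print(z_new_upper, math.log(5.0))  # both ~1.6094: the box shrinks to [0, ln 5]
```

In the full method each of the 2N (BR) problems is a convex NLP over all variables; the bisection above only illustrates the effect of one such solve on one bound.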
The solution of these convex bound reducing (BR) problems provides valid lower/upper bounds on all z_i such that all relaxed nonlinear constraints are feasible and the relaxed objective function is less than some upper bound U. Note that if (BR) is infeasible, this implies that the intersection of the feasible region of problem (DC) with the domain where the objective function is at least as good as some prespecified bound U is the empty set. In this case, the corresponding rectangular partition can be eliminated because the global solution is guaranteed not to be situated inside it. As shown in Fig. 1, by minimizing/maximizing all z_i the feasible region of (R)
Fig. 1. Initial box constraints, convex relaxation of the nonconvex feasible region, and improved box constraints.
is inscribed inside the smallest possible rectangle, defining a new refined set of bounds on z. Because the reduction of the set of box constraints results in tighter underestimating functions G_j^{conv}(z) and G_0^{conv}(z), the minimization/maximization of all z_i can be repeated, dynamically updating the bounds z^L, z^U, for more than one iteration until no further significant improvement in the bounds is achieved. This defines the following bound improving procedure:

STEP 0. Set i' ← 1.

STEP 1. Solve

min  z_i'     (BRmin)
subject to  G_j^{conv}(z) ≤ 0,  j = 1,…,M
            G_0^{conv}(z) ≤ U
            z_i^L ≤ z_i ≤ z_i^U,  i = 1,…,N.

STEP 2. Set z_i'^{L,old} ← z_i'^L and z_i'^L ← z_i'*.

STEP 3. Solve

max  z_i'     (BRmax)
subject to  G_j^{conv}(z) ≤ 0,  j = 1,…,M
            G_0^{conv}(z) ≤ U
            z_i^L ≤ z_i ≤ z_i^U,  i = 1,…,N.

STEP 4. Set z_i'^{U,old} ← z_i'^U and z_i'^U ← z_i'*.

STEP 5. If i' < N, set i' ← i' + 1 and go to STEP 1.

STEP 6. If

Π_{i=1}^N (z_i^U − z_i^L) / Π_{i=1}^N (z_i^{U,old} − z_i^{L,old}) ≤ r

then go to STEP 0. Otherwise END.

The parameter r < 1, typically r ≈ 0.6-0.9, is simply the minimum hyperrectangle volume reduction per iteration which is required to further continue the bound improving procedure. It must be noted, however, that with this procedure 2N convex programming problems must be solved per iteration. In this work, this procedure is applied only to the initial partition. For intermediate rectangular partitions, a relaxation of this procedure is utilized for detecting whether the current partition element can be fathomed. This alternative check is based on first deriving a lower bound for the objective function and every constraint of formulation (R) using monotonicity analysis principles. If the lower bound of the objective function is greater than U, or if the lower bound of any of the constraints is greater than zero, then the corresponding partition element can be eliminated, since it is guaranteed either not to include the global minimum solution or to be infeasible. Note that the form of the objective and constraints in formulation (R) is

G_j^{conv}(z) = Σ_{k∈K_j^+} c_jk exp(Σ_{i=1}^N α_ijk z_i) − Σ_{k∈K_j^−} c_jk [exp(Y_jk^L) + B_jk (Σ_{i=1}^N α_ijk z_i − Y_jk^L)],  j = 0,…,M.

Note that because the function exp(x) is monotonically increasing, a lower bound G_j^{conv,L} of G_j^{conv}(z) can be obtained by fixing all Y_jk, k ∈ K_j^+, at their lower bounds Y_jk^L, and all Y_jk, k ∈ K_j^−, at their respective upper bounds Y_jk^U, since B_jk ≥ 0:

G_j^{conv,L} = Σ_{k∈K_j^+} c_jk exp(Y_jk^L) − Σ_{k∈K_j^−} c_jk [exp(Y_jk^L) + B_jk (Y_jk^U − Y_jk^L)],  j = 0,…,M.

The corresponding partition element can be eliminated if

G_0^{conv,L} ≥ U,  or  ∃ j = 1,…,M such that G_j^{conv,L} > 0.

These feasibility checks can be applied to every subdivision element before the convex minimization of problem (R) is performed, and the associated computational effort is negligible. Note that this relaxation of the bounding procedure is a computationally efficient check for detecting infeasibility of the constraint set, especially at earlier stages of the algorithm. Computational experience has shown that it results in a significant reduction in the number of required convex minimizations (10-50%).
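The monotonicity-based constant G_j^{conv,L} can be checked numerically: fixing the positive terms at Y^L and the linearized negative terms at Y^U yields a value that never exceeds G_j^{conv}(z) anywhere in the box. A sketch with hypothetical one-term-per-sign data and unit coefficients:

```python
import math, random

# One positive monomial term (exponents a_plus) and one negative term (a_minus),
# both with unit coefficients; hypothetical data on a 2-variable box.
a_plus, a_minus = [1.0, 0.5], [-1.0, 2.0]
zl, zu = [-1.0, 0.0], [1.0, 1.5]

def y_bounds(a):
    """Interval [Y^L, Y^U] of Y = sum_i a_i z_i over the box."""
    ylo = sum(min(ai * l, ai * u) for ai, l, u in zip(a, zl, zu))
    yup = sum(max(ai * l, ai * u) for ai, l, u in zip(a, zl, zu))
    return ylo, yup

def g_conv(z):
    """Convex relaxed constraint: exp(Y+) - [exp(YL-) + B*(Y- - YL-)]."""
    yl, yu = y_bounds(a_minus)
    b = (math.exp(yu) - math.exp(yl)) / (yu - yl)
    y_plus = sum(ai * zi for ai, zi in zip(a_plus, z))
    y_minus = sum(ai * zi for ai, zi in zip(a_minus, z))
    return math.exp(y_plus) - (math.exp(yl) + b * (y_minus - yl))

# Constant lower bound: positive part at YL+, linearized negative part at YU-.
ylp, _ = y_bounds(a_plus)
ylm, yum = y_bounds(a_minus)
b = (math.exp(yum) - math.exp(ylm)) / (yum - ylm)
g_conv_low = math.exp(ylp) - (math.exp(ylm) + b * (yum - ylm))

random.seed(0)
for _ in range(1000):
    z = [random.uniform(l, u) for l, u in zip(zl, zu)]
    assert g_conv(z) >= g_conv_low - 1e-9
print("G_conv,L underestimates G_conv over the whole box")
```

Because the bound is a closed-form constant per box, the fathoming test costs essentially nothing compared with one convex minimization of (R).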
2.6. Monotonicity analysis

Monotonicity analysis principles can be utilized to improve the convex relaxation of (DC) by dictating that some nonlinear inequality constraints must be satisfied as equality constraints at the global minimum solution:

min_z  G_0(z)
subject to  G_j(z) ≤ 0,  j = 1,…,M.     (DC)
Hansen et al. (1989) pointed out that if there exists a global minimum solution for (DC) and the objective function G_0(z) is monotonically increasing (decreasing) in some z_i, then at least one constraint which is monotonically decreasing (increasing) with respect to z_i will be active at the global minimum solution. Note also that if z_i is not present in the objective function then at least one constraint involving z_i will be active at the global minimum solution. The conclusions from the
monotonicity analysis can be more conveniently expressed by associating a boolean variable a_j with every constraint j = 1,…,M. If constraint j is active at the global minimum solution then a_j = 1; otherwise a_j = 0. Let also J_i^−, J_i^+, J_i^0 denote the sets of indices of constraints j involving the variable z_i which are monotonically decreasing, monotonically increasing, and not monotonic or of unknown monotonicity in z_i, respectively. The monotonicity principles can then be expressed mathematically as follows:

Σ_{j ∈ J_i^− ∪ J_i^0} a_j ≥ 1, if G_0(z) is monotonically increasing in z_i;

Σ_{j ∈ J_i^+ ∪ J_i^0} a_j ≥ 1, if G_0(z) is monotonically decreasing in z_i;

Σ_{j ∈ J_i^+ ∪ J_i^− ∪ J_i^0} a_j ≥ 1, if G_0(z) is independent of z_i.

Clearly, the smaller the sets J_i^0, the smaller the number of a_j's that participate in the inequality relations, and therefore the more likely it is that some a_j's are fixed at one. Note that if a_j = 1, then the corresponding constraint j is satisfied as an equality. This can lead to the elimination of a variable or the addition of an extra constraint. A constraint G_j(z) of formulation (DC),

G_j(z) = Σ_{k∈K_j^+} c_jk exp(Σ_{i=1}^N α_ijk z_i) − Σ_{k∈K_j^−} c_jk exp(Σ_{i=1}^N α_ijk z_i),

is guaranteed to be monotonic in z_i if

α_ijk ≥ 0, ∀k ∈ K_j^+  and  α_ijk ≤ 0, ∀k ∈ K_j^−,

or

α_ijk ≤ 0, ∀k ∈ K_j^+  and  α_ijk ≥ 0, ∀k ∈ K_j^−.

However, by appropriately replacing the initial constraint G_j(z) with one from the family 𝒢_j,

G_j(z, d_j) = Σ_{k∈K_j^+} c_jk exp[Σ_{i=1}^N (α_ijk + d_ij) z_i] − Σ_{k∈K_j^−} c_jk exp[Σ_{i=1}^N (α_ijk + d_ij) z_i],

one can come up with less restrictive conditions for monotonicity. More specifically, G_j(z, d_j) is monotonically increasing in z_i if

max_{k∈K_j^−} α_ijk ≤ min_{k∈K_j^+} α_ijk  and  d_ij ∈ [−min_{k∈K_j^+} α_ijk, −max_{k∈K_j^−} α_ijk],

and G_j(z, d_j) is monotonically decreasing in z_i if

max_{k∈K_j^+} α_ijk ≤ min_{k∈K_j^−} α_ijk  and  d_ij ∈ [−min_{k∈K_j^−} α_ijk, −max_{k∈K_j^+} α_ijk].

These monotonicity analysis principles can be utilized to eliminate variables or to further constrain formulation (DC). In the next section, a branch and bound type global optimization algorithm is introduced for solving problem (DC).
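The relaxed monotonicity test is easy to automate: for each variable, compare the extreme exponents over the positive and negative term sets and, when the condition holds, return an admissible interval for d_ij. A sketch with hypothetical exponent data:

```python
def d_interval_increasing(alpha_plus, alpha_minus):
    """Interval of d_ij making the transformed constraint monotonically
    increasing in z_i: we need alpha + d >= 0 on K+ and alpha + d <= 0 on K-.
    alpha_plus / alpha_minus: exponents of z_i in the K+ / K- terms."""
    lo = -min(alpha_plus)   # d >= -alpha for every k in K+
    hi = -max(alpha_minus)  # d <= -alpha for every k in K-
    return (lo, hi) if lo <= hi else None  # None: the condition fails

# Hypothetical exponents of one variable across a constraint's terms:
print(d_interval_increasing([2.0, 3.0], [-1.0, 0.5]))  # (-2.0, -0.5)
print(d_interval_increasing([0.5], [2.0]))             # None: max over K- exceeds min over K+
```

Any d_ij inside the returned interval simultaneously makes every positive term nondecreasing and every negative term nonincreasing in z_i, which is exactly the relaxed condition stated above; the decreasing case is symmetric with the roles of K+ and K- exchanged.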
3. GLOBAL OPTIMIZATION

3.1. Description

A global optimization algorithm is proposed for locating the global minimum solution of (DC) based on the refinement of converging lower and upper bounds through the solution of convex programming problems (R). Clearly, the value of the objective function G_0(z) at any feasible point z provides an upper bound on the global minimum G_0*. Lower bounds on G_0* within some box constraints are derived by solving the convex lower bounding optimization problem (R). Since (R) is convex, its single global minimum within some box constraints can be routinely found with a commercially available optimization algorithm (e.g. MINOS 5.4, Murtagh and Saunders, 1987) and will always underestimate the global minimum of (DC) within the same box constraints. Assuming that this global minimum solution of (R) is a feasible point for (DC), an upper bound on G_0* can then be obtained by simply calculating G_0(z) at the global minimum point of (R). Based on the analysis performed in subsection 2.2, the gap between the upper bound and the lower bound on G_0* will be at most

Δ_0,max = Σ_{k∈K_0^−} c_0k exp(Y_0k^L) [1 − (exp(δ_0k) − 1)/δ_0k + ((exp(δ_0k) − 1)/δ_0k) log((exp(δ_0k) − 1)/δ_0k)]

where

δ_0k = Y_0k^U − Y_0k^L,

Y_0k^L = Σ_{i=1}^N min(α_i0k z_i^L, α_i0k z_i^U),

Y_0k^U = Σ_{i=1}^N max(α_i0k z_i^L, α_i0k z_i^U).

Note that Δ_0,max goes to zero as all δ_0k, k ∈ K_0^−, approach zero. Furthermore, because

δ_0k → 0, ∀k ∈ K_0^−,  as  δ_i = z_i^U − z_i^L → 0+, ∀i = 1,…,N,

we have

Δ_0,max → 0  as  δ_i = z_i^U − z_i^L → 0+, ∀i = 1,…,N.

This implies that as the current box constraints [z^L, z^U] collapse into a point, the maximum separation Δ_0,max between the original objective function of (DC) and its convex relaxation in (R) becomes zero. Because Δ_0,max is a continuous function of the δ_0k ≥ 0, k ∈ K_0^−, and thus of the δ_i ≥ 0, i = 1,…,N, for every positive number ε_c there will always be a positive number δ* such that if all δ_0k ≤ δ* then Δ_0,max ≤ ε_c. Similarly, the maximum separations for the constraints,

Δ_j,max = Σ_{k∈K_j^−} c_jk exp(Y_jk^L) [1 − (exp(δ_jk) − 1)/δ_jk + ((exp(δ_jk) − 1)/δ_jk) log((exp(δ_jk) − 1)/δ_jk)],  j = 1,…,M,

where

δ_jk = Y_jk^U − Y_jk^L,

Y_jk^L = Σ_{i=1}^N min(α_ijk z_i^L, α_ijk z_i^U),

Y_jk^U = Σ_{i=1}^N max(α_ijk z_i^L, α_ijk z_i^U),

go to zero as all δ_jk, ∀k ∈ K_j^−, or alternatively all δ_i = z_i^U − z_i^L → 0+, ∀i = 1,…,N, approach zero. Furthermore, the Δ_j,max are continuous functions of the δ_jk, k ∈ K_j^−, and also of the δ_i, i = 1,…,N. This implies that for every positive number ε_f there will always be a positive number δ* such that by selecting δ_jk ≤ δ* we have max_{j=1,…,M} Δ_j,max ≤ ε_f.

The interpretation of this result is that by reducing the bounds [z^L, z^U] on z, the differences between the feasible region of the original problem (DC) and its convex relaxation (R) become arbitrarily small. Therefore, any feasible point z' of problem (R) (even the global minimum solution) becomes at least ε_f-feasible for problem (DC) by sufficiently tightening the bounds on z around this point.

After establishing upper and lower bounds on the global minimum inside some rectangular domain, the next step is to tighten them. This is accomplished by restricting the rectangle size. Tighter box constraints can be realized by partitioning the rectangle that the initial box constraints define into a number of smaller rectangles. The minimum number of variables along which subdivision is required is equal to the number of linearly independent Y_jk's, j = 0,…,M, k ∈ K_j^−. Typically this number is equal to the number of z_i participating in any of the terms G_j^−(z). This defines the following set of variables where branching is required:

Z = {z_i : ∃ α_ijk ≠ 0, j = 0,…,M, and k ∈ K_j^−}.

In the great majority of cases the number of z_i ∈ Z is much smaller than the total number of terms Y_jk, k ∈ K_j^−, in formulation (DC). Therefore, in this work subdivision is performed on the set of variables z_i belonging to Z. One way of partitioning is to successively divide the current rectangle in two subrectangles by halving on the middle point of the longest side of the initial rectangle (bisection). At each iteration the lower bound of G_0* is simply the minimum over all the minima of problem (R) in every subrectangle composing the initial rectangle. Therefore, a straightforward (bound improving) way of tightening the lower bound is to halve at each iteration only the subrectangle responsible for the infimum of the minima of (R) over all subrectangles, according to the rules discussed earlier. This procedure generates a nondecreasing sequence for the lower bound of G_0*. Furthermore, a nonincreasing sequence for the upper bound is constructed by selecting it to be the infimum over all the previously recorded upper bounds. Clearly, if the single minimum of (R) in any subrectangle is greater than the current upper bound, we can safely ignore this subrectangle because the global minimum of G_0(z) cannot be situated inside it (fathoming step).

The next question is how small these subrectangles must become before the upper and lower bounds of G_0(z) are within ε_c and the feasible region of (DC) is ε_f-close to the feasible region of (R). Because ε_c and ε_f are very small numbers, the maximum separations Δ_j,max will be proportional to the square of the diagonal δ of the current subrectangle (~O(δ²)). This means that the value of δ required for convergence is proportional to the square root of ε. Therefore, if for example ε_c, ε_f are set to be 0.0001, it suffices for δ to be proportional to 0.01. The basic steps of the proposed global optimization algorithm are summarized in the following subsection.
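The claimed quadratic decay of the separation with subrectangle size can be verified numerically: halving the interval width should shrink the per-term separation by a factor approaching four. The starting width below is hypothetical:

```python
import math

def sep(delta):
    """Per-term maximum separation, scaled by exp(Y^L), for interval width delta."""
    z = (math.exp(delta) - 1.0) / delta
    return 1.0 - z + z * math.log(z)

# Successive bisections of a small interval: ratios approach 4, i.e. O(delta^2).
widths = [0.4 / 2 ** k for k in range(5)]
ratios = [sep(w) / sep(w / 2) for w in widths]
print([round(r, 2) for r in ratios])  # decreasing towards 4
```

This is the numerical counterpart of the statement that a convergence tolerance of 10^-4 calls for subrectangle widths on the order of 10^-2.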
3.2. Steps of the global optimization algorithm

3.2.1. STEP 1. Initialization. A convergence tolerance ε_c and a feasibility tolerance ε_f are selected and the iteration counter Iter is set to one. Appropriate global bounds z^{L,glo}, z^{U,glo} on the z_i's are calculated based on the bound improving procedure discussed in subsection 2.5, and local bounds z^{L,Iter}, z^{U,Iter} for the first iteration are set equal to the global ones. The initial constraints G_j(z), j = 1,…,M are replaced by an appropriately selected (based on the analysis of subsection 2.4) set of constraints G_j(z, d_j) ∈ 𝒢_j. In this description of the algorithm, the parameters d_j are
omitted from the constraints for the sake of simplicity. Finally, lower and upper bounds G_0^{LBD}, G_0^{UBD} on the global minimum G_0* are initialized and an initial current point z^{c,Iter} is selected.

3.2.2. STEP 2--Feasibility check and update of upper bound G_0^{UBD}
If the maximum over all constraints G_j calculated at the current point z^{c,Iter} is less than ε_f,

max_{j=1,...,M} G_j(z^{c,Iter}) ≤ ε_f,
then the constraint set is ε_f-feasible at the current point. If so, the objective function G_0 is calculated at the current point z^{c,Iter} and the upper bound G_0^{UBD} is updated as follows:

G_0^{UBD} = min(G_0^{UBD}, G_0(z^{c,Iter}))
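In code, STEP 2 amounts to the following small helper (an illustrative fragment; the callables G0 and Gj_list stand for whatever objective and constraint functions the caller supplies):

```python
def update_upper_bound(G0, Gj_list, z_current, ubd, eps_f):
    """STEP 2: if the current point is eps_f-feasible, the objective value
    there becomes a candidate upper bound on the global minimum."""
    if max(gj(z_current) for gj in Gj_list) <= eps_f:
        return min(ubd, G0(z_current))
    return ubd

# toy usage: one constraint z - 1 <= 0, feasible point z = 0.5 tightens the bound
ubd = update_upper_bound(lambda z: z * z, [lambda z: z - 1.0], 0.5, float("inf"), 1e-4)
```

Here `ubd` becomes 0.25; an infeasible point leaves the incumbent unchanged.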
3.2.3. STEP 3--Partitioning of the current rectangle
The current rectangle [z_i^{L,Iter}, z_i^{U,Iter}], i = 1,...,N is partitioned into the following two subrectangles (r = 1,2):

[z^{L,Iter,1}, z^{U,Iter,1}] = [z_1^{L,Iter}, z_1^{U,Iter}] × ... × [z_l^{L,Iter}, (z_l^{L,Iter} + z_l^{U,Iter})/2] × ... × [z_N^{L,Iter}, z_N^{U,Iter}]

[z^{L,Iter,2}, z^{U,Iter,2}] = [z_1^{L,Iter}, z_1^{U,Iter}] × ... × [(z_l^{L,Iter} + z_l^{U,Iter})/2, z_l^{U,Iter}] × ... × [z_N^{L,Iter}, z_N^{U,Iter}]

where l = l^{Iter} corresponds to the variable with the longest side of the current rectangle,

l^{Iter} = arg max_i (z_i^{U,Iter} − z_i^{L,Iter}).

3.2.4. STEP 4--Feasibility checks of the convex relaxation (R)
Update in both subrectangles (r = 1,2) the parameters Y_jk^L, Y_jk^U, A_jk, B_jk as follows:

Y_jk^L = Σ_{i=1}^{N} min(α_ijk z_i^L, α_ijk z_i^U),

Y_jk^U = Σ_{i=1}^{N} max(α_ijk z_i^L, α_ijk z_i^U),

A_jk = [Y_jk^L exp(Y_jk^U) − Y_jk^U exp(Y_jk^L)] / [exp(Y_jk^U) − exp(Y_jk^L)].

If there exists j = 0,...,M such that one of the resulting lower bounds satisfies G_0^{r,L} ≥ G_0^{UBD}, or G_j^{r,L} > 0 for some j = 1,...,M, then the corresponding subrectangle is eliminated (fathoming); go to STEP 6. Otherwise, proceed with STEP 5.

3.2.5. STEP 5--Solution of convex problems inside subrectangles
Solve the following convex optimization problem (R) in both subrectangles (r = 1,2) by using any convex nonlinear solver (e.g. MINOS 5.4; Murtagh and Saunders, 1987):

min_z G_0^{conv}(z)
subject to G_j^{conv}(z) ≤ 0, j = 1,...,M,
z^L ≤ z ≤ z^U.

If a solution G_{0,sol}^{r,Iter} is less than the current upper bound G_0^{UBD}, then it is stored along with the values of the variables z at the solution point z_{sol}^{r,Iter}.

3.2.6. STEP 6--Update iteration counter Iter and lower bound G_0^{LBD}
The iteration counter is increased by one, Iter ← Iter + 1, and the lower bound G_0^{LBD} is updated to the minimum solution over those stored from previous iterations; the selected solution is then erased from the stored set:

G_0^{LBD} = G_{0,sol}^{r',Iter'}, where G_{0,sol}^{r',Iter'} = min_{r,l} G_{0,sol}^{r,l}, r = 1,2, l = 1,...,Iter − 1.

3.2.7. STEP 7--Update current point z^{c,Iter} and current bounds z^{L,Iter}, z^{U,Iter} on z
The current point is selected to be the solution point of the minimum solution found in STEP 6,

z^{c,Iter} ← z_{sol}^{r',Iter'},
and the current rectangle becomes the subrectangle containing the previously found solution:

[z^{L,Iter}, z^{U,Iter}] = [z_1^{L,Iter'}, z_1^{U,Iter'}] × ... × [z_l^{L,Iter'}, (z_l^{L,Iter'} + z_l^{U,Iter'})/2] × ... × [z_N^{L,Iter'}, z_N^{U,Iter'}], if r' = 1,

[z^{L,Iter}, z^{U,Iter}] = [z_1^{L,Iter'}, z_1^{U,Iter'}] × ... × [(z_l^{L,Iter'} + z_l^{U,Iter'})/2, z_l^{U,Iter'}] × ... × [z_N^{L,Iter'}, z_N^{U,Iter'}], if r' = 2,

where l = l^{Iter'}.

3.2.8. STEP 8--Check for convergence
If (G_0^{UBD} − G_0^{LBD}) > ε_c, then return to STEP 2. Otherwise, ε_c-convergence has been reached and the global minimum solution and solution point are

G_0* ← G_0^{UBD} = G_0(z^{c,Iter''}),
z* ← z^{c,Iter''},

where Iter'' = arg { G_0(z^{c,l}) = G_0^{UBD}, l = 1,...,Iter }.

In the following section, a mathematical proof that the proposed global optimization algorithm converges to the global minimum is given, based on the analysis of a standard deterministic global optimization algorithm presented in Horst and Tuy (1990).

3.3. Proof of convergence to the global minimum
Convergence properties of a global optimization algorithm depend on: (i) the limit behavior of the difference G_j^{UBD} − G_j^{LBD}, j = 0,...,M, for unfathomed successively refined partitions; (ii) the subdivision process of the current partitions; and (iii) the employed selection process of the partition(s) that have to be further refined.

In the employed global optimization algorithm, a lower bound G_{0,L}^{r,Iter} is obtained as the solution of the convex minimization of (R) for every partition element (r,Iter). Also, an upper bound G_{0,U}^{r,Iter} is obtained as the value of G_0 at the single solution of problem (R), assuming that it is an ε_f-feasible point for (DC). The partition element involving the minimum lower bound G_{0,L}^{r,Iter} is selected for further refining according to the bisection rule, and partition elements whose lower bound G_{0,L}^{r,Iter} is greater than the current upper bound G_0^{UBD} are fathomed. A sufficient condition for a global optimization algorithm to be convergent to the global minimum, stated in Horst and Tuy (1990), requires that the bounding operation be consistent and the selection operation bound improving. A bounding operation is called consistent if at every step any unfathomed partition can be further refined, and if any infinitely decreasing sequence of successively refined partition elements satisfies

lim_{Iter→∞} (G_{0,U}^{Iter} − G_{0,L}^{r,Iter}) = 0,    (58)

where G_{0,L}^{r,Iter} is a lower bound of G_0* inside the (r,Iter) partition element and G_{0,U}^{Iter} is the best upper bound at iteration Iter, not necessarily occurring inside the (r,Iter) partition element. In practice, the requirement

lim_{Iter→∞} (G_{0,U}^{r,Iter} − G_{0,L}^{r,Iter}) = 0

for any infinitely decreasing sequence of successively refined partition elements is difficult to verify, because G_{0,U}^{r,Iter} is not necessarily attained at the partition element (r,Iter). Therefore, in view of the inequality G_{0,U}^{Iter} ≥ G_{0,U}^{r,Iter} ≥ G_{0,L}^{r,Iter}, the following, at least equally strong, condition suffices to be shown:

lim_{Iter→∞} (G_{0,U}^{r,Iter} − G_{0,L}^{r,Iter}) = 0.

In subsection 3.1 we have shown that by reducing the size of a partition element [z^L, z^U] in which (R) is feasible, (DC) becomes ε_f-feasible and the objective function G_0(z) becomes arbitrarily close to its convex relaxation G_0^{conv}(z). This means that

lim_{δ_i→0, i=1,...,N} (G_{0,U}^{r,Iter} − G_{0,L}^{r,Iter}) = 0, where δ_i = z_i^{U,Iter} − z_i^{L,Iter}.

Therefore, it suffices to show that

lim_{Iter→∞} max_i δ_i = 0.
This is equivalent to requiring that any unfathomed infinitely decreasing sequence of successively refined partition elements be exhaustive (Horst and Tuy, 1990). The employed subdivision process is bisection, where every partition element (r,Iter) is subdivided into two subrectangles r = 1,2 of equal volume by halving at the midpoint of the longest side. By scaling all sides of any partition element with the sides of the rectangle that the initial global bounds define, the scaled sides of the initial rectangle are all equal to one. Therefore, the condition of always subdividing along the longest side can be satisfied by simply subdividing first along the side k = 1, then along the side k = 2, and so on, until the last side k = K is encountered, after which the subdivision starts again from k = 1. By partitioning in this orderly manner, each side of every successively refined partition element is halved exactly once every K subdivisions. Consequently, after K subdivisions the diagonal of the resulting partition elements is one half the diagonal of the original partition element. Therefore, as the number of successive subdivisions of a partition element goes to infinity, the diagonals of the resulting partition elements go to zero. This implies that the employed subdivision process is
exhaustive. Consequently, for any infinitely decreasing sequence of successively refined partition elements,

lim_{Iter→∞} (G_{0,U}^{r,Iter} − G_{0,L}^{r,Iter}) = 0,
meaning that the employed bounding operation is consistent. A selection operation is called bound improving if at least one partition element where the actual lower bound is attained is selected for further partition after a finite number of refinements. Clearly, the employed selection operation is bound improving because the partition element where the actual lower bound is attained is selected for further partition in the immediately following iteration. In summary, we have shown that the bounding operation is consistent and that the selection operation is bound improving; therefore, according to Theorem IV.3 in Horst and Tuy (1990), the employed global optimization algorithm is convergent to the global minimum. In the next section the proposed global optimization algorithm is applied to a number of example problems.

4. COMPUTATIONAL RESULTS

First, a prototype generalized geometric problem is addressed, and the effect of rescaling on the total number of iterations required for convergence is examined and shown to follow the theoretical predictions. Next, a number of engineering design problems are addressed, two of them in detail, and ε-convergence to the global minimum is achieved in all cases. Finally, in the last subsection a model for checking the stability of given control strategies is presented, which requires the solution of (GGP) problems, and a number of stability analysis examples are considered. The proposed deterministic global optimization algorithm has been implemented in GAMS, and computational times are reported for all examples on an HP-730 workstation. More efficient implementations in C are expected to result in significant CPU reductions.
4.1. Motivating example
The following motivating example, taken from Avriel et al. (1975b), involves the minimization of a single variable subject to two nonlinear constraints.
min x1

subject to (1/4)x1 + (1/2)x2 − (1/16)x1² − (1/16)x2² − 1 ≤ 0    (P)

 (1/14)x1² + (1/14)x2² − (3/7)x1 − (3/7)x2 + 1 ≤ 0

 1 ≤ x1 ≤ 5.5, 1 ≤ x2 ≤ 5.5

These two nonlinear constraints define a nonconvex crescent-shaped feasible region, shown in Fig. 2, and even though the objective function is linear there still exist three distinct KKT points A:(3.823, 4.823), B:(4.0, 4.0) and C:(1.178, 2.178). Therefore, local optimization approaches can at best converge to any one of them, depending on which basin of attraction contains the initial point.

Fig. 2. [Crescent-shaped feasible region of (P) with the KKT points A, B and C.]

The following linear variable scaling is then applied to both x1, x2, and its effect on the total number of iterations for convergence is measured:

x1 = 1 + [4.5/(U − 1)](x1^new − 1), x2 = 1 + [4.5/(U − 1)](x2^new − 1).

This transformation defines a one-to-one mapping of the original variables x1, x2 onto the new variables x1^new, x2^new, whose upper bound U is a manipulated parameter:

x1^new = 1 + [(U − 1)/4.5](x1 − 1), x2^new = 1 + [(U − 1)/4.5](x2 − 1).

After substituting the expressions for x1, x2 into (P), we obtain the following transformed problem, parametric in U:

min 1 + [4.5/(U − 1)](x1^new − 1)    (P_U)

subject to [(63 + 18U)/(32(U − 1)²)]x1^new + [(27 + 54U)/(32(U − 1)²)]x2^new − [81/(64(U − 1)²)](x1^new)² − [81/(64(U − 1)²)](x2^new)² − (21 + 48U + 12U²)/(32(U − 1)²) ≤ 0

 [81/(56(U − 1)²)](x1^new)² + [81/(56(U − 1)²)](x2^new)² − [(45 + 36U)/(28(U − 1)²)]x1^new − [(45 + 36U)/(28(U − 1)²)]x2^new + (17 + 56U + 8U²)/(28(U − 1)²) ≤ 0

 1 ≤ x1^new ≤ U, 1 ≤ x2^new ≤ U
Clearly, for any value of the new upper bound U > 1, problem (P_U) has the same solution as (P). Furthermore, the performed variable scaling does not affect the scaling of the constraints. Note that even though the scaling of the new variables x1^new, x2^new depends on U, the values of the objective function and constraints remain unchanged in (P_U). However, the solution of the convex relaxation (R_U) of (P_U) is a function of U. This implies that different values of U yield different numbers of iterations required for convergence for (P_U). After selecting the convergence and feasibility tolerances to be ε_c = ε_f = 10⁻⁴, the proposed global optimization procedure is applied, and convergence to the global minimum point (x1* = 1.178, x2* = 2.178) is achieved from all starting points for different values of U. The average numbers of iterations as a function of U are summarized in Table 2. It appears that the total number of iterations for solving (P_U) is a monotonically increasing function of U. This implies that the tighter the rescaling, the better the underestimation will be. This is consistent with the theoretical results presented in subsection 2.2. Note that as U approaches one, numerical stability problems offset the benefits of marginally better underestimators.

4.2. Alkylation process design
This example involves the design of an alkylation unit (Dembo, 1978). The objective is to improve the octane number of some olefin feed by reacting it with isobutane in the presence of acid. The product of the reaction is distilled and the unreacted isobutane is recycled back to the reactor. All equality constraints are eliminated, and the problem has been formulated as a signomial optimization problem:
min c1x1 + c2x1x6 + c3x3 + c4x2 + c5 − c6x3x5

subject to
c7x6² + c8x1⁻¹x3 − c9x6 ≤ 1
c10x1x3⁻¹ + c11x1x3⁻¹x6 − c12x1x3⁻¹x6² ≤ 1
c13x6² + c14x5 − c15x4 − c16x6 ≤ 1
c17x5⁻¹ + c18x5⁻¹x6 + c19x4x5⁻¹ − c20x5⁻¹x6² ≤ 1
c21x7 + c22x2x3⁻¹x4⁻¹ − c23x2x3⁻¹ ≤ 1
c24x7⁻¹ + c25x2x3⁻¹x7⁻¹ − c26x2x3⁻¹x4⁻¹x7⁻¹ ≤ 1
c27x5⁻¹ + c28x5⁻¹x7 ≤ 1
c29x5 − c30x7 ≤ 1
c31x3 − c32x1 ≤ 1
c33x1x3⁻¹ + c34x3⁻¹ ≤ 1
c35x2x3⁻¹x4⁻¹ − c36x2x3⁻¹ ≤ 1
c37x4 + c38x2⁻¹x3x4 ≤ 1
c39x1x6 + c40x1 − c41x3 ≤ 1
c42x1⁻¹x3 + c43x1⁻¹ − c44x6 ≤ 1

1500 ≤ x1 ≤ 2000
1 ≤ x2 ≤ 120
3000 ≤ x3 ≤ 3500
85 ≤ x4 ≤ 93
90 ≤ x5 ≤ 95
92.18 ≤ x5 ≤ 95.00
3.78 ≤ x6 ≤ 10.68
145.00 ≤ x7 ≤ 153.53
Table 2. Number of iterations as a function of U

U           1.1  1.5   2  5.5  10  100  1000
Iterations   17   18  19   20  21   27    58
Convergence to the global minimum point

x1* = 1698.18, x2* = 53.66, x3* = 3031.30, x4* = 90.11, x5* = 95, x6* = 10.50, x7* = 153.53,

having an objective function value of f = 1227.00, is reached from every initial point in about 200 iterations requiring 30 s of CPU time.

Table 3. Coefficients for alkylation example

 i   c_i              i   c_i              i   c_i
 1   1.715           16   0.19120592E-1   31   0.00061000
 2   0.035           17   0.56850750E+2   32   0.0005
 3   4.0565          18   1.08702000      33   0.81967200
 4   10.000          19   0.32175000      34   0.81967200
 5   3000.0          20   0.03762000      35   24500.0
 6   0.063           21   0.00619800      36   250.0
 7   0.59553571E-2   22   0.24623121E+4   37   0.10204082E-1
 8   0.88392857      23   0.25125634E+2   38   0.12244898E-4
 9   0.11756250      24   0.16118996E+3   39   0.00006250
10   1.10880000      25   5000.0          40   0.00006250
11   0.13035330      26   0.48951000E+6   41   0.00007625
12   0.00660330      27   0.44333333E+2   42   1.22
13   0.66173269E-3   28   0.33000000      43   1.0
14   0.17239878E-1   29   0.02255600      44   1.0
15   0.56595559E-2   30   0.00759500

4.3. CSTR sequence design with capital cost constraints

This example was first proposed in Manousiouthakis and Sourlas (1992) and involves the design of a sequence of two CSTR reactors where the consecutive reaction A → B → C takes place. The concentration cb2 of B in the outlet stream of the second reactor needs to be maximized subject to an upper bound on the investment cost. First order kinetics are assumed for both reactions, and the reaction constants for the first (a) and second (b) reaction in reactors (1) and (2) are
ka1 = 9.6540 × 10⁻² s⁻¹, ka2 = 9.7515 × 10⁻² s⁻¹,
kb1 = 3.5272 × 10⁻² s⁻¹, kb2 = 3.9191 × 10⁻² s⁻¹.

The inlet concentration of A is ca0 = 1.0 mol/l, and zero for both B and C. If V1, V2 are the residence times for the two reactors and ca1, cb1, ca2, cb2 are the corresponding exit concentrations, then the reactor design problem can be formulated as the following nonconvex optimization problem:

min −cb2

subject to (ca1 − ca0) + ka1 ca1 V1 = 0
 (ca2 − ca1) + ka2 ca2 V2 = 0
 (cb1 + ca1 − ca0) + kb1 cb1 V1 = 0
 (cb2 − cb1 + ca2 − ca1) + kb2 cb2 V2 = 0
 V1^{1/2} + V2^{1/2} ≤ 4    (2)
 0 ≤ ca1, cb1, ca2, cb2 ≤ 1    (3)
 1 ≤ V1, V2 ≤ 100    (4)

Assuming that the capital cost of a reactor is proportional to the square root of its residence time, constraint (2) provides an upper bound on the total investment cost. Constraints (3) and (4) define initial bounds on the variables. This problem has been reported to involve two local minima (Manousiouthakis and Sourlas, 1992). First, by applying the bound improving procedure described in subsection 2.5, the initial bounds are refined to:

0.4090 ≤ ca1 ≤ 0.9119, 0.2128 ≤ ca2 ≤ 0.8309,
0.0845 ≤ cb1 ≤ 0.4501, 0.0571 ≤ cb2 ≤ 0.7099,
1.0000 ≤ V1 ≤ 9.0000, 1.0000 ≤ V2 ≤ 9.0000.

Application of the proposed global optimization procedure yields the following global optimum solution from every initial point in about 400 iterations and 20 s of CPU time, with convergence and feasibility tolerances of 10⁻³:

ca1* = 0.6587, ca2* = 0.5161, cb1* = 0.2869, cb2* = 0.3866, V1* = 5.3666, V2* = 2.8338.

Next, the bounds of the reactor volumes are expanded to

10⁻⁶ ≤ V1, V2 ≤ 16

and all concentration variables ca1, ca2, cb1 and cb2 are eliminated from the initial formulation of the reactor design problem. The resulting compact formulation involves only the two variables V1, V2:

min −cb2 = −ca0 [ka2 V2 (1 + kb1 V1) + ka1 V1 (1 + ka2 V2)] / [(1 + ka1 V1)(1 + kb1 V1)(1 + ka2 V2)(1 + kb2 V2)]
subject to V1^{1/2} + V2^{1/2} ≤ 4,
10⁻⁶ ≤ V1, V2 ≤ 16.
Note that, unlike methods based on the bilinearization of the original problem (Manousiouthakis and Sourlas, 1992; Ryoo and Sahinidis, 1995), the proposed approach can readily handle ratios of polynomial expressions and thus take advantage of the reduction of the total number of variables from six to only two. After transforming the fractional objective into a signomial form, it takes only about 282 iterations and 17 s of CPU time to locate the global optimum solution (V1* = 15.992 ≈ 16, V2* = 10⁻⁶ ≈ 0). This corresponds to over an order of magnitude reduction in the computational requirements over those reported in Manousiouthakis and Sourlas (1992) (7,950 iterations and 7,950 seconds on an Apollo DN10000). This is attributed to the fact that not only are no additional variables introduced, but elimination of existing variables is also possible. Consequently, the obtained lower bounds are much tighter and the branch and bound procedure is performed on a smaller set of variables. Recently, Ryoo and Sahinidis (1995) solved the same problem globally, by convex lower bounding the bilinear terms, for a different set of reaction constants. A number of variations on the main algorithm were considered, and the best one yielded a CPU time of 23 s on a SPARC 2.
ka1 = 9.755988 × 10⁻² s⁻¹, kb1 = 3.919080 × 10⁻² s⁻¹, ka2 = 9.658428 × 10⁻² s⁻¹,
kb2 = 3.527172 × 10⁻² s⁻¹. The approach proposed here converged to the global optimum (V1* = 3.037, V2* = 5.096) after 299 iterations and about 20 s of CPU time. This is competitive with the CPU requirements in Ryoo and Sahinidis (1995). Computational results on a number of additional examples, corresponding to optimal reactor design, heat exchanger design, as well as standard test examples, are shown in Table 4.
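The compact objective can be sanity-checked numerically at the reported optimum of the first data set (a sketch; rate constants and solution values are taken from the text):

```python
# First data set of rate constants and the reported optimal residence times.
ka1, ka2 = 9.6540e-2, 9.7515e-2
kb1, kb2 = 3.5272e-2, 3.9191e-2
ca0 = 1.0

def cb2(V1, V2):
    """Exit concentration of B after eliminating ca1, cb1, ca2."""
    num = ka2 * V2 * (1.0 + kb1 * V1) + ka1 * V1 * (1.0 + ka2 * V2)
    den = ((1.0 + ka1 * V1) * (1.0 + kb1 * V1)
           * (1.0 + ka2 * V2) * (1.0 + kb2 * V2))
    return ca0 * num / den

V1, V2 = 5.3666, 2.8338
# cb2(V1, V2) reproduces the reported cb2* = 0.3866, and the capital cost
# constraint V1**0.5 + V2**0.5 <= 4 is active at this solution.
```

The value agrees with cb2* = 0.3866, and the square-root cost constraint is active (V1^{1/2} + V2^{1/2} ≈ 4) at this solution.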
4.4. Robust stability analysis of nonlinear systems Robust stability analysis of nonlinear systems involves the identification of the largest possible region in the uncertain model parameter space for which the controller manages to attenuate any disturbances in the
system. The stability of a feedback structure is determined by the roots of the closed loop characteristic equation:

det(I + P(s,q)C(s,q)) = 0,

where q is the vector of the uncertain model parameters, and P(s), C(s) are the transfer functions of the plant and controller, respectively. After expanding the determinant we have:

P(s,q) = a_n(q)s^n + a_{n−1}(q)s^{n−1} + ... + a_1(q)s + a_0(q) = 0,

where the coefficients a_i(q), i = 0,...,n are typically multivariable polynomial functions. The zero exclusion condition (ZEC) implies that a system with characteristic equation P(s,q) = 0 is stable only if it does not have any roots on the imaginary axis for any realization of the q's in the uncertain model parameter space Q:

0 ∉ P(jω,q), ∀ q ∈ Q and ∀ ω ∈ [0,∞].

A stability margin k_m can then be defined as follows:

k_m(jω) = inf{k : P(jω,q(k)) = 0, q ∈ Q}.

Robust stability for this model is then guaranteed if and only if:
k_m ≥ 1.

Geometrically, k_m expands the initial uncertain parameter region Q as much as possible without losing stability (see Fig. 3). Note that, typically, real parameter uncertainty is expressed as bounds on the real parameters of the model. Checking the stability of a particular system with characteristic equation P(jω,q) = 0 involves the solution of the following nonconvex optimization problem:
min_{q_i, k ≥ 0, ω ≥ 0} k    (S)

subject to Re[P(jω,q)] = 0,
 Im[P(jω,q)] = 0,
 q_i^N − Δq_i⁻ k ≤ q_i ≤ q_i^N + Δq_i⁺ k, i = 1,...,n,

where q^N is a stable nominal point for the uncertain parameters and Δq⁺, Δq⁻ are estimated bounds. Note that it is important to be able to always locate the global minimum of (S); otherwise the stability margin might be overestimated. This overestimation can sometimes lead
Table 4. Computational results on additional examples

Example                                                  #Var.  #Neg.Mon.  Solution   Iter.  CPU (s)
Heat exchanger design (Avriel and Williams, 1971)           8       5       7049.24    1600    100
Optimal reactor design (Dembo, 1976)                        8       2        3.9511      71    6.8
Colville's test problem (Rijckaert and Martens, 1978b)      5       4        1.1436      30    2.0
Problem 10 of Rijckaert and Martens, 1978b                  3       2      -83.254       50     17
Problem 11 of Rijckaert and Martens, 1978b                  4       1       -5.7398       7    0.4
Problem 12 of Rijckaert and Martens, 1978b                  8       2       -6.0482      82     12
Problem 14 of Rijckaert and Martens, 1978b                 10       2        1.1436     290     22
Problem 17 of Rijckaert and Martens, 1978b                 11       5        0.1406    2950    427
q1* = 1073.4, q2* = 3.318, q3* = 4.975
Computational requirements were 15 iterations and 0.5 s of CPU time. Problem 2
This example examines the l∞ stability margin for the closed-loop system described in Sideris and Sanchez Pena (1988). The characteristic polynomial is: P(s,q1,q2,q3) = s⁴ + (q1q2)s³ + (q1q2q3)s² + (q1q2q3)s + q1²
The nominal parameter values are
Fig. 3.

to the erroneous conclusion that a system is stable when it is not. Because for most problems without time delays a_i(q), i = 0,...,n are multivariable polynomial functions, formulation (S) corresponds to a generalized geometric problem. In the following, a number of stability analysis examples are considered.
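The constraints Re[P(jω,q)] = 0 and Im[P(jω,q)] = 0 follow mechanically from the coefficient vector once q is fixed; a small generic helper (illustrative, not from the paper):

```python
def re_im(a, w):
    """Real and imaginary parts of P(jw) = sum_i a[i] * (jw)**i,
    for numeric coefficients a[i] evaluated at a fixed q."""
    re = sum(a[i] * (-1.0) ** (i // 2) * w ** i for i in range(0, len(a), 2))
    im = sum(a[i] * (-1.0) ** (i // 2) * w ** i for i in range(1, len(a), 2))
    return re, im

# check against direct complex evaluation for P(s) = s**2 + 2s + 5 at w = 1
re, im = re_im([5.0, 2.0, 1.0], 1.0)
p = sum(c * (1j * 1.0) ** i for i, c in enumerate([5.0, 2.0, 1.0]))
assert abs(re - p.real) < 1e-12 and abs(im - p.imag) < 1e-12
```

Substituting such expressions for a given characteristic polynomial yields the two equality constraints of (S).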
q1^N = 1.4, q2^N = 1.5, q3^N = 0.8, and the bounded perturbations are Δq1⁺ = Δq1⁻ = 0.25, Δq2⁺ = Δq2⁻ = 0.20, Δq3⁺ = Δq3⁻ = 0.20.
Again, after eliminating ω, the zero exclusion formulation becomes: min k
4.4.1. Stability analysis examples Problem 1
This example was studied in Daston and Safonov (1988) and Psarris and Floudas (1994). The plant has three uncertain parameters and the characteristic equation is: P(s,q1,q2,q3) = s⁴ + (10 + q2 + q3)s³ + (q2q3 + 10q2 + 10q3)s² + (1 − q2q3 + q1)s + 2q1
4 4 4 subject to qlq2 - ql4 - q2q3 =0
1.4 − 0.25k ≤ q1 ≤ 1.4 + 0.25k
1.5 − 0.20k ≤ q2 ≤ 1.5 + 0.20k
0.8 − 0.20k ≤ q3 ≤ 0.8 + 0.20k

The stability margin is found to be equal to k_m = 1.089, implying that the system is stable. Furthermore, the unstable point closest to the nominal point is:
q1* = 1.1275, q2* = 1.2820, q3* = 1.0179.

The nominal values of the parameters of the system are q1^N = 800, q2^N = 4, q3^N = 6, and the bounded perturbations are Δq1⁺ = Δq1⁻ = 800, Δq2⁺ = Δq2⁻ = 2, Δq3⁺ = Δq3⁻ = 3.
After eliminating ω, the zero exclusion formulation becomes: min k
subject to lOq~q~+10q~q~+200q~q~=1004q~ + 100q2q~ + q tq2q ] + q,q2q3 + 1000q2q 2 + 8q,q23 + 1000q~q3 + 8q~q 2+ 6q~q2qa +60q~q3 +60q~q2 - q~ - 200qt <-- 0
800 − 800k ≤ q1 ≤ 800 + 800k
4 − 2k ≤ q2 ≤ 4 + 2k
6 − 3k ≤ q3 ≤ 6 + 3k
Computational requirements were 26 iterations and 1.4 s of CPU time.

Problem 3
This example involves affine coefficient functions and was first proposed by Ackermann et al. (1991). The closed-loop characteristic polynomial is:

P(s,q1,q2,q3) = 16s⁴ + (16q1 − 24)s³ + (−9.625q1 − 16q2 + 78)s² + (19q1 + 8q2 + q3 − 44)s + (q3 − q2 − 4q1 + 12)

The nominal parameter values are q1^N = 2.25, q2^N = 1.5, q3^N = 1.5, and the bounded perturbations are Δq1⁺ = Δq1⁻ = 0.25, Δq2⁺ = Δq2⁻ = 0.50, Δq3⁺ = Δq3⁻ = 1.50.
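For this affine example the reported critical point can be verified directly: setting Im[P(jω)] = 0 fixes w = ω² = a1/a3, at which Re[P(jω)] should vanish (a numerical check; the critical parameter values are those reported for this problem):

```python
# Critical parameter values reported for this example.
q1, q2, q3 = 2.4544, 1.9088, 2.7263

a4 = 16.0
a3 = 16.0 * q1 - 24.0
a2 = -9.625 * q1 - 16.0 * q2 + 78.0
a1 = 19.0 * q1 + 8.0 * q2 + q3 - 44.0
a0 = q3 - q2 - 4.0 * q1 + 12.0

w = a1 / a3                          # Im[P(jw)] = 0  =>  w = omega**2 = a1/a3
residual = a4 * w * w - a2 * w + a0  # Re[P(jw)] written in terms of w
# w matches the reported 1.3510 and the residual is ~0: the critical point
# places a characteristic root on the imaginary axis.
```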
In this example ω is not eliminated and therefore the formulation becomes:
Δq5⁺ = Δq5⁻ = 0.005
The zero exclusion formulation is:
min k
subject to
min k
subject to a4(q)ω⁴ − a2(q)ω² + a0(q) = 0
16ω² + 9.625q1ω + 16q2ω + q3 + 12 − 4q1 − q2 − 78ω = 0
a3(q)ω² − a1(q) = 0
16q1ω + 44 − 19q1 − 8q2 − q3 − 24ω = 0
10.0 − 1.0k ≤ q1 ≤ 10.0 + 1.0k
2.25 − 0.25k ≤ q1 ≤ 2.25 + 0.25k
1.0 − 0.1k ≤ q2 ≤ 1.0 + 0.1k
1.5 − 0.50k ≤ q2 ≤ 1.5 + 0.50k
1.0 − 0.01k ≤ q3 ≤ 1.0 + 0.01k
1.5 − 1.50k ≤ q3 ≤ 1.5 + 1.50k

The stability margin is found to be equal to k_m = 0.8175, which means that the system is stable. Note that application of a local solver (Murtagh and Saunders, 1987) consistently overestimated the stability margin as k_m = 0.8765. The critical parameter values for instability are: q1* = 2.4544,
0.2 − 0.01k ≤ q4 ≤ 0.2 + 0.01k
0.05 − 0.005k ≤ q5 ≤ 0.05 + 0.005k

Computational requirements were 3,076 iterations and 485.75 s of CPU time.

Problem 5
This example addresses the stability of a wire-guided Daimler-Benz O305 bus (Ackermann et al., 1991). The transfer function of the linearized system is
q2* = 1.9088, q3* = 2.7263,
G(s,q1,q2) = q2(48280q2 + 388600s + 609.8q1q2s²) / [(16.8q1²q2 + 270000 + 1077q1q2s + q1²q2²s²)s³]
ω* = 1.3510. Computational requirements were 51 iterations and 5 s
The transfer function of the controller is
of CPU time. Problem 4 This example studies the stability of the mechanical system proposed in Vicino et al. (1990). The corresponding closed-loop characteristic polynomial is:
C(s) = (9375 + 10938s + 2344s²) / (15625 + 1250s + 50s² + s³)
The closed-loop characteristic polynomial is

P(s,q1,q2) = Σ_{i=0}^{8} a_i(q1,q2) s^i

where

a0(q1,q2) = 453 × 10⁶ q2²
a1(q1,q2) = 528 × 10⁶ q2² + 3640 × 10⁶ q2
a2(q1,q2) = 5.72 × 10⁶ q1q2² + 113 × 10⁶ q2² + 4250 × 10⁶ q2
a3(q1,q2) = 6.93 × 10⁶ q1q2² + 911 × 10⁶ q2 + 4220 × 10⁶
a4(q1,q2) = 1.45 × 10⁶ q1q2² + 16.8 × 10⁶ q1q2 + 338 × 10⁶
a5(q1,q2) = 15.6 × 10³ q1²q2² + 840 q1²q2 + 1.35 × 10⁶ q1q2 + 13.5 × 10⁶
a6(q1,q2) = 1.25 × 10³ q1²q2² + 16.8 q1²q2 + 53.9 × 10³ q1q2 + 270 × 10³
a7(q1,q2) = 50 q1²q2² + 1080 q1q2
a8(q1,q2) = q1²q2²

For Problem 4, the closed-loop characteristic polynomial is

P(s,q) = a4(q)s⁴ + a3(q)s³ + a2(q)s² + a1(q)s + a0(q)

where

a4(q) = q4q2(4q2 + 7q1)
a3(q) = (7/3)q4q1q2 − 64.918q3q2 + 380.067q3q1 + 3q5q2 + 3q5q1
a2(q) = (1/3)(−9.81q3q2² − 9.81q3q1q2 − 4.312q1q2 + 264.896q3q2 + q4q5 − 9.274q4)
a1(q) = (1/3)(−147.15q4q3q2 + 1364.67q3q2 − 27.72q5)
a0(q) = 54.387q3q2

The nominal parameter values are q1^N = 10.0, q2^N = 1.0, q3^N = 1.0, q4^N = 0.2, q5^N = 0.05, and the parameter perturbations are Δq1⁺ = Δq1⁻ = 1.0, Δq2⁺ = Δq2⁻ = 0.1, Δq3⁺ = Δq3⁻ = 0.1, Δq4⁺ = Δq4⁻ = 0.01,
The nominal parameter values are q1^N = 17.5, q2^N = 20.0, and the parameter perturbations are Δq1⁺ = Δq1⁻ = 14.5, Δq2⁺ = Δq2⁻ = 15.
The zero exclusion formulation is:

min k
-
-
O.0848q,q3q6q 7+ 0.0552q~
+0.3652q2qsqr +O.O378954qlq.~q7 + 0.2208q,qsq7
subject to a8(q)ω⁸ − a6(q)ω⁶ + a4(q)ω⁴ − a2(q)ω² + a0(q) = 0
a3(q) = 0.0189477qlq~ + 0.1 104q2q~
a7(q)ω⁶ − a5(q)ω⁴ + a3(q)ω² − a1(q) = 0
+5.16120× 10 4qsqr+q~q~
17.5 − 14.5k ≤ q1 ≤ 17.5 + 14.5k
+7.61760 × lO-4q~q~+q~q4
20.0 − 15.0k ≤ q2 ≤ 20.0 + 15.0k
+O.1586qtq2q~ +4.02141 × lO-4qlq2q~
The system was found to be stable, k_m > 1.0, for the given range of uncertain parameters. Computational requirements were 109 iterations and 22 s of CPU time.

Problem 6
In this example, an analysis of the stability margin of the spark ignition engine Fiat Dedra (Barmish, 1994; Abate et al., 1994) is carried out for a continuous range of operating conditions involving seven uncertain parameters. The closed-loop characteristic polynomial of seventh degree is:
+0.0872qlq3q]+0.034862qlq4q5
+ O.O0336706qlqaq7 + 0.00287416qlqsq6 +6.28987 × 10-SqlqrqT+O.OOlO3224q2q6q7
+ O. 1104q3q4q5 + O.0237398q3q4q6 + 0.00152352q3q4q7 -- O.O0234048q3qsq6 + O. 1826q~qsq~ + O. 1104q~q,sq7 +O.0237398q~qrq7 -- O.0848q~qaq6 + 0.872qlq2q4q5 + O.034862qlq2q4q7
P(s,q) = Σ_{i=0}^{7} a_i(q) s^i
+0.0215658qlq2qsq6+ O.0378954qlq2qsq7
where the characteristic polynomial coefficients a_i(q), i = 0,...,7 are polynomial functions of q_i, i = 1,...,7.
+ 0.00287416qlq2qrq7
aT(q)=q~
+ 0.0189477qlq3q4q7 + 2q,_q3q4q5
am(q) = 0.1586q~q~ + 2q~,q~+ 2qsq7 + 0.1826qrq7 + 0.0552q7 as(q) = 0.0189477qLq7 + 0.1104q2q~ + 0.1826qsq6
+0.1586qlq~q4q5 + 0.0215658qlq3q4q6
+ O. 1826q2q3q4q6 + O. 1104q2q3q4q7 - O.0848q,.q3qsq6 - O.O0234048q2q3qrq7
+ 0. I 104qsq7 + 0.0237398qrq7
+7.61760 × lO-4q~+O.O474795q,_q,sq6
+ q~q~+ O. 1586qlq2q~ + O.O872qlq4q7
+8.04282 x lO-4qlq.sq7
+ 0.0215658q~qrq7 +0.3652q2q6q7
+O.O0304704q,_qsq7
+ 2q3q4q: - O.0848q3qrq7 + q~
a2(q)=4.02141 x lO-4qlq~+O.OO152352qzq~
+7.61760 x lO-4q~+O.3172qlqsq7
+ 0.0552q~q~ + 0.0552q]q]
+4q2q.sq7
+0.0189477qlqzq~+0.034862qLq3q4
a4(q)=O.1586q~q~+4.02141 × lO-4q~q~+ 2q2q~
+O.O0336706qlq4q5
+0.00152352q2q~+0.0237398qsq6
+6.82079 × lO- Sqlq4q7 +6.28987 × lO-'Sqlqsq6
+0.00152352qsqT+5.16t20 × 10-4qrq7
+ 0.00152352q.~q4q~
+0.0552q~qT:+0.0189477q~q2q~
+ 5.16120 X 10- 4q3q4q6 -- O.O0234048q~q4q6
+ O.0872q~q4q~ + O.034862q~q4q7
+ O.034862qlq,_q4q.~
+ O.0215658q~qsqr +O.OO287416q~qrq7
+O.0237398q~qsqr +O.OO152352q~qsq7
+ O.0474795qzqrq7 + 2q3q4q5 + O. i 826q3q4q6
+ 5.16120 X 10- 4q~qrq7
+ O. I 104q3q4q7 - O.0848q.~q~q6
+ O.O0336706qlq2q4q7 + 0.00287416qlq2q.sq6
- O.O0234048q3qrq7 + 2q~q,sq7+ O. 1826q~qrq7
+ 8.04282 × I 0 -4qlq2qsq7
+O.0872q~q,q4qT +0.3172q~q,qsq 7
+6.28987 × lO-'Sqlq2qrqT+O.O189477qlq3q4q.~
+ O.0215658q ~q2q6q7+ O.1586q~q3q~qT + 2q,.q3q4q7
+ 0.00287416qlq3q4q6
367
368
C. D. MARANASand C. A. FLOUDAS + 4.02141 x I 0 -
4ql q3q4q7 + O. 1 104q2q3q4q5
+ O.0237398q2q.~q4q6 + 0.00152352q2q3q4q7 - O.O0234048q2q~qsq6 + 0.00103224q2qsq6 a l ( q ) = 7 . 6 1 7 6 0 x 10 -4 q.;q~+7.61760x , , ~, 10 -4 +q~q~ +4.02141 x lO-4qlq2q~
+O.O0336706qlq3q~+6.82079 × 10-Sqlq4q5 +5.16120X lO-4q~qsq6
+O.O0336706qlq2q4qs+6.82079 × 10-Sqlqzq4q7 +6.28987 x lO-Sq~q2qsq6 +4.02141 X lO-4qrq3q4qs+6.28987 x l0 5q q3q4q6+O.OO152352q2q3q4q5 + 5.16120 x l 0 - 4q2q3q4q6 a0(q)=6.82079 x 10 -~qlq3q~ +6.82079 x lO- Sqlq2q4q5 The nominal parameter values and perturbations are +_
q1^N = 3.4329, Δq1⁺ = 0, Δq1⁻ = 1.2721
q2^N = 0.1627, Δq2⁺ = 0, Δq2⁻ = 0.06
q3^N = 0.1139, Δq3⁺ = 0, Δq3⁻ = 0.0782
generalized geometric (signomial) problems (GGP). This class of optimization problems has been extensively used to model a host of different engineering design and robust stability problems. An exponential variable transformation was applied to the initial nonconvex problem (GGP) to reduce it to a (DC) programming problem. A convex relaxation (R) of problem (DC) is then obtained, based on the linear lower bounding of the concave parts of the objective function and constraints. The proposed algorithm was shown to attain finite ε-convergence to the global minimum through the successive refinement of a convex relaxation of the feasible region and/or of the objective function and the subsequent solution of a series of nonlinear convex optimization problems. A number of procedures aimed at improving the efficiency of the proposed approach were also discussed. The proposed approach was applied to a number of small to medium size engineering design problems and a number of test as well as real-size robust stability analysis problems. In all cases, convergence to the global minimum was achieved. Convergence was particularly expedient for problems with a small number of variables participating in negative monomial terms and, preferably, a small number of negative monomial terms.
q4^N = 0.2539, Δq4⁺ = 0.3068, Δq4⁻ = 0
q5^N = 0.0208, Δq5⁺ = 0, Δq5⁻ = 0.0108
q6^N = 2.0247, Δq6⁺ = 2.4715, Δq6⁻ = 0
q7^N = 1.0000, Δq7⁺ = 9.0000, Δq7⁻ = 0

The zero exclusion formulation yields:
Acknowledgements -- Financial support from the National Science Foundation under Grants CBT-8857013 and CTS-9221411, the Air Force Office of Scientific Research, the Binational Science Foundation, as well as Exxon Co. and Mobil Corporation is gratefully acknowledged.
min k

subject to −a6(q)ω⁶ + a4(q)ω⁴ − a2(q)ω² + a0(q) = 0
 a7(q)ω⁶ − a5(q)ω⁴ + a3(q)ω² − a1(q) = 0
 3.4329 − 1.2721k ≤ q1 ≤ 3.4329
 0.1627 − 0.06k ≤ q2 ≤ 0.1627
 0.1139 − 0.0782k ≤ q3 ≤ 0.1139
 0.2539 ≤ q4 ≤ 0.2539 + 0.3068k
 0.0208 − 0.0108k ≤ q5 ≤ 0.0208
 2.0247 ≤ q6 ≤ 2.0247 + 2.4715k
 1.0000 ≤ q7 ≤ 1.0000 + 9.0000k

The system was found to be stable, k_m > 1.0, throughout the entire range of the uncertain parameters. Computational requirements were 4896 iterations and about 10,000 s of CPU time.

5. SUMMARY AND CONCLUSIONS
In this paper a deterministic branch and bound type global optimization algorithm was proposed for solving
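The core relaxation step described above can be illustrated with a small sketch (this is not the authors' implementation; the term coefficients and box below are hypothetical). Under the exponential transformation x = exp(y), a signomial term c·x^a becomes c·exp(a·y), which is convex in y when c > 0 and concave when c < 0; each concave term can therefore be lower-bounded over a box by the secant line through its endpoint values:

```python
# Illustrative sketch (not the paper's code).  Under x = exp(y) a
# signomial term c * x**a becomes c * exp(a * y): convex for c > 0,
# concave for c < 0.  The convex relaxation replaces each concave term
# by its secant (linear) lower bound over the current box [yL, yU].

import math

def term(y, c, a):
    # Transformed signomial term c * exp(a * y).
    return c * math.exp(a * y)

def secant_lower_bound(c, a, yL, yU):
    # Linear function matching c*exp(a*y) at yL and yU; for c < 0
    # (a concave term) the secant lies below the term on [yL, yU].
    fL, fU = term(yL, c, a), term(yU, c, a)
    slope = (fU - fL) / (yU - yL)
    return slope, fL - slope * yL

if __name__ == "__main__":
    # One concave term with hypothetical data: c = -2, a = 1, box [0, 1].
    c, a, yL, yU = -2.0, 1.0, 0.0, 1.0
    slope, icpt = secant_lower_bound(c, a, yL, yU)
    # The secant never exceeds the concave term inside the box.
    for i in range(11):
        y = yL + (yU - yL) * i / 10
        assert slope * y + icpt <= term(y, c, a) + 1e-12
```

Halving the box and rebuilding the secants tightens the relaxation, which is the successive-refinement mechanism the branch and bound scheme exploits.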
REFERENCES
Abate M., Barmish B.R., Murillo-Sanchez C. and Tempo R., Application of Some New Tools to Robust Stability Analysis of Spark Ignition Engines: A Case Study, IEEE Trans. Contr. Syst. Tech., 2, 1 (1994) 22
Abrams R.A. and Wu C.T., Projection and restriction methods in geometric and related problems, JOTA, 26, (1978) 59
Ackermann J., Kaesbauer D. and Muench R., Robust Gamma-Stability Analysis in a Plant Parameter Space, Automatica, 27, (1991) 75
Avriel M. and Wilde D.J., Optimal Condenser Design by Geometric Programming, I&EC Proc. Des. Dev., 6, (1967) 256
Avriel M. and Gurovich V., Optimal Condenser Design by Geometric Programming, I&EC Proc. Des. Dev., 6, (1967) 256
Avriel M. and Williams A.C., Complementary geometric programming, SIAM J. Appl. Math., 19, (1970) 125
Avriel M., Dembo R. and Passy U., Solution of Generalized Geometric Programs, International Journal for Numerical Methods in Engineering, 9, (1975) 149
Avriel M. and Williams A.C., An Extension of Geometric Programming with Applications in Engineering Optimization, Journal of Engineering Mathematics, 5, 3 (1971) 187
Barmish B.R., New Tools for Robustness of Linear Systems. Macmillan Publishing Company, New York (1994).
Beck P.A. and Ecker J.G., A modified concave simplex algorithm for geometric programming, JOTA, 15, (1975) 189
Blau G.E. and Wilde D.J., Generalized Polynomial Programming, Can. J. Chem. Eng., 47, (1969) 317
Blau G.E. and Wilde D.J., A lagrangian algorithm for equality constrained generalized polynomial optimization, AIChE J., 17, (1971) 235
Bradley J., An algorithm for the numerical solution of prototype geometric programs, Institute of Industrial Research and Standards, Dublin, Ireland (1973).
de Gaston R.R.E. and Safonov M.G., Exact Calculation of the Multi-Loop Stability Margin, IEEE Trans. Automat. Contr., 33, (1988) 68
Duffin R.J. and Peterson E.L., Duality theory for geometric programming, SIAM J. Appl. Math., 14, (1966) 1307
Duffin R.J. and Peterson E.L., Reversed geometric programming treated by harmonic means, Indiana Univ. Math. J., 22, (1972) 531
Dembo R.S., Optimal design of a membrane separation process using signomial programming, Math. Prog., 15, (1978) 12
Dembo R.S., Current state of the art of algorithms and computer software for geometric programming, JOTA, 26, (1978) 149
Duffin R.J., Linearizing geometric programs, SIAM Review, 12, (1970) 211
Dembo R.S., A Set of Geometric Programming Test Problems and their Solutions, Math. Prog., 10, (1976) 192
Ecker J.G. and Wiebking R.D., Optimal design of a dry-type natural-draft cooling tower by geometric programming, JOTA, 26, (1978) 305
Falk J.E., Global Solutions of Signomial Programs, Report, The George Washington University, Program in Logistics (1973).
Floudas C.A. and Visweswaran V., A global optimization algorithm (GOP) for certain classes of nonconvex NLP's: I. Theory, Comput. chem. Engng, 14, 12 (1990) 1397
Floudas C.A. and Visweswaran V., A primal-relaxed dual global optimization approach, Journal of Optimization Theory and Applications, 78, 2 (1993) 187
Hansen P., Jaumard B. and Lu S.H., Some Further Results on Monotonicity in Globally Optimal Design, Journal of Mechanisms, Transmissions, and Automation in Design, 111, (1989) 345
Haarhoff P.C. and Buys J.D., A new method for the optimization of a nonlinear function subject to nonlinear constraints, Comp. J., 13, (1970) 178
Hellinckx L.J. and Rijckaert M.J., Optimal Capacities of Production Facilities: An Application of Geometric Programming, Can. J. Chem. Eng., 50, (1972) 148
Hellinckx L.J. and Rijckaert M.J., Minimization of capital investment for batch processes, I&EC Proc. Des. Dev., 10, (1971) 422
Horst R. and Tuy H., Global Optimization, Deterministic Approaches. Springer-Verlag, Berlin (1990).
Jefferson T.R. and Scott C.H., Generalized geometric programming applied to problems of optimal control: I. Theory, JOTA, 26, (1978) 117
Kochenberger A., Woolsey R.E.D. and McCarl B.A., On the solution of geometric programming via separable programming, Oper. Res. Quart., 24, (1973) 285
Kuester J.L. and Mize J.H., Optimization Techniques with Fortran. McGraw-Hill Book Company, New York, NY (1973).
Liu W.B. and Floudas C.A., A remark on the GOP algorithm for global optimization, Journal of Global Optimization, 3, (1994) 519
Manousiouthakis V. and Sourlas D., A Global Optimization Approach to Rationally Constrained Rational Programming, Chem. Eng. Comm., 115, (1992) 127
Maranas C.D. and Floudas C.A., A Global Optimization Approach for Lennard-Jones Microclusters, J. Chem. Phys., 97, 10 (1992) 7667
Maranas C.D. and Floudas C.A., Global Optimization for Molecular Conformation Problems, Annals of Operations Research, 42, (1993) 85
Maranas C.D. and Floudas C.A., Global Minimum Potential Energy Conformations of Small Molecules, Journal of Global Optimization, 4, (1994) 135
Maranas C.D. and Floudas C.A., A Deterministic Global Optimization Approach for Molecular Structure Determination, J. Chem. Phys., 100, 2 (1994) 1247
Murtagh B.A. and Saunders M.A., MINOS 5.3 User's Guide, Systems Optimization Laboratory, Department of Operations Research, Stanford University (1987).
Passy U. and Wilde D.J., Generalized polynomial optimizations, SIAM J. Appl. Math., 15, (1967) 1344
Passy U. and Wilde D.J., A geometric programming algorithm for solving chemical equilibrium problems, SIAM Review, 11, (1967) 89
Psarris P. and Floudas C.A., Robust Stability Analysis of Linear and Nonlinear Systems with Real Parameter Uncertainty, submitted (1994).
Ratner M., Lasdon L.S. and Jain A., Solving geometric programs using GRG: Results and comparisons, JOTA, 26, (1978) 253
Rijckaert M.J. and Martens X.M., Bibliographical note on geometric programming, JOTA, 26, (1978) 325
Rijckaert M.J. and Martens X.M., Comparison of generalized geometric programming algorithms, JOTA, 26, (1978) 205
Ryoo H.S. and Sahinidis N.V., Global Optimization of Nonconvex NLPs and MINLPs with Applications in Process Design, Computers chem. Engng, 19, 5 (1995) 551
Sideris A. and Sánchez Peña R.S., Fast Computation of the Multivariable Stability Margin for Real Interrelated Uncertain Parameters. In Proc. ACC, Atlanta, GA (1988).
Salomone H.E. and Iribarren O.A., Posynomial modeling of batch plants: A procedure to include process decision variables, Computers chem. Engng, 16, (1992) 173
Vicino A., Tesi A. and Milanese M., Computation of Nonconservative Stability Perturbation Bounds for Systems with Nonlinearly Correlated Uncertainties, IEEE Trans. Autom. Contr., 35, (1990) 835
Visweswaran V. and Floudas C.A., New properties and computational improvement of the GOP algorithm for problems with quadratic objective functions and constraints, Journal of Global Optimization, 3, (1993) 439