Refinement strategies for piecewise linear functions utilized by reformulation-based techniques for global optimization


Andrzej Kraslawski and Ilkka Turunen (Editors), Proceedings of the 23rd European Symposium on Computer Aided Process Engineering - ESCAPE 23, June 9-12, 2013, Lappeenranta, Finland. © 2013 Elsevier B.V. All rights reserved.

Andreas Lundell and Tapio Westerlund∗

Center of Excellence in Optimization and Systems Engineering, Åbo Akademi University, Biskopsgatan 8, 20500 Åbo, Finland

Abstract

The signomial global optimization algorithm is a method for solving nonconvex mixed-integer signomial problems to global optimality. A convex underestimation is produced by replacing nonconvex signomial terms with convex underestimators obtained through single-variable power and exponential transformations in combination with linearization techniques. The piecewise linear functions used in the linearizations are iteratively refined by adding breakpoints until the termination criteria are met. Depending on the strategy used for adding the breakpoints, the complexity of the reformulated problems, as well as their solution times, varies. One possibility is to add several breakpoints initially, thus obtaining a tight convex underestimation in the first iteration at the cost of a more complex reformulated problem. This breakpoint strategy is compared to the normal strategies of iteratively adding more breakpoints through illustrative examples and test problems.

Keywords: global optimization, reformulation techniques, convex underestimators, MINLP, signomial functions, piecewise linear functions, SGO algorithm

1. Introduction

The signomial global optimization (SGO) algorithm is a technique for solving nonconvex mixed-integer nonlinear programming (MINLP) problems containing signomial or posynomial functions to global optimality, cf. Lundell et al. (2009, 2012). In the algorithm, reformulated MINLP problems are iteratively solved and refined until the global solution is found. For nonconvex signomial terms, the reformulations are based on single-variable exponential and power transformations. In addition, a technique called the α-reformulation (αR) was introduced in Lundell et al. (2012). The αR can also be used to reformulate general twice-differentiable nonconvex functions in a way combinable with the transformation technique in the SGO algorithm.

Both the transformation schemes for signomials and the αR utilize piecewise linear functions (PLFs) to approximate the nonconvex equality constraints obtained as part of the reformulation. Due to the linearizations, the result is a convex overestimated problem in an extended variable space consisting of the original variables in the problem, as well as the added transformation variables and the variables used in the PLF formulation. If the solution of the reformulated convex MINLP problem fulfills the constraints in the original nonconvex problem, it is the global solution, since the feasible region of the nonconvex problem is a subset of the feasible region of the reformulation. If the solution is not valid

∗ andreas.lundell@abo.fi, tapio.westerlund@abo.fi




in the nonconvex problem, the PLFs are updated by including more breakpoints, and the feasible region with respect to the original variable space is reduced. Several different breakpoint strategies for refining the PLFs exist, e.g., adding the solution point of the previous iteration or adding the midpoint of the interval the solution point belongs to. The choice of strategy has a large impact on the rate of convergence, i.e., on the solution time and the number of iterations required for reaching the global optimum. Another option, investigated in this paper, is to include more breakpoints already in the initial iteration. The result is a tighter underestimator, at the cost of a combinatorially more difficult reformulated MINLP problem.
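The solve, check and refine loop described above can be sketched in a few lines of Python. This is only an illustrative skeleton of the iteration logic: `solve_convex_minlp`, `feasible_in_original` and `update_breakpoints` are hypothetical stand-ins (our names, not the SGO implementation's) for the convex MINLP subsolver, the feasibility check against the original nonconvex constraints, and the chosen breakpoint strategy.

```python
def sgo(breakpoints, solve_convex_minlp, feasible_in_original,
        update_breakpoints, max_iter=100):
    """Skeleton of the SGO iteration loop: solve the convex relaxation,
    accept if feasible in the original problem, otherwise refine the PLFs."""
    for iteration in range(1, max_iter + 1):
        x_sol, lower_bound = solve_convex_minlp(breakpoints)
        if feasible_in_original(x_sol):
            # Feasible in the nonconvex problem => global optimum found
            return x_sol, lower_bound, iteration
        # Infeasible: add breakpoints so the overestimated region shrinks
        breakpoints = update_breakpoints(breakpoints, x_sol)
    raise RuntimeError("iteration limit reached without convergence")
```

Each iteration tightens the relaxation only where the previous subproblem solution violated the original constraints, which is the source of the strategy dependence studied in this paper.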

2. The Reformulation Technique

The SGO algorithm is applicable to the following type of nonconvex NLP or MINLP problem:
\[
\begin{aligned}
&\text{minimize} && f(\mathbf{x}), \\
&\text{subject to} && A\mathbf{x} = \mathbf{a}, \quad B\mathbf{x} \leq \mathbf{b}, \\
&&& h(\mathbf{x}) = s(\mathbf{x}) + q(\mathbf{x}) \leq 0, \\
&&& \mathbf{x} = [x_1, x_2, \ldots, x_N]^T, \quad \underline{\mathbf{x}} \leq \mathbf{x} \leq \overline{\mathbf{x}}.
\end{aligned} \tag{1}
\]

The objective function f is assumed to be convex; however, a nonconvex objective function f can easily be rewritten as a constraint by replacing the objective with a variable μ and including the constraint f(x) − μ ≤ 0. In the linear equality and inequality constraints, A and B are N × N matrices and a and b are N × 1 vectors. The nonlinear inequality constraints h_k(x) ≤ 0 consist of a sum of a signomial part s_k(x) and a convex part q_k(x). If the αR is used, the constraints may also contain a general nonconvex part g(x); for the scope of this paper, however, it is assumed that the only nonconvexities are of signomial type. The variables x_i in x are allowed to be integer (resulting in a MINLP problem) or real, and are bounded by the lower and upper bounds \underline{x}_i and \overline{x}_i. An equality constraint is reformulated as a pair of positive and negative inequality constraints. A signomial function is a multivariate polynomial with real powers, i.e.,
\[
s_k(\mathbf{x}) = \sum_{j=1}^{J} c_j \prod_{i=1}^{N} x_i^{p_{ji}}, \tag{2}
\]

where the c's are real coefficients and the p's real powers. For variables involved in nonconvex signomial terms, strict positivity of the lower bound is generally assumed. If all terms are positive, the function is a posynomial, and in the case when all signomial functions are posynomials, the problem is called a geometric programming (GP) problem. The reformulation techniques summarized next are based on utilizing single-variable transformations to rewrite the nonconvex signomial terms in a convex form. For more details on the transformations, see Lundell (2009) or Lundell and Westerlund (2012).

2.1. Transformation Schemes for Positive Signomial Terms

For positive signomial terms, two transformation schemes are possible. The first transformation type is based on the fact that a positive signomial term, in which all powers are negative except for one and the sum of the powers is greater than or equal to one, is convex.
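This convexity condition can be checked directly on the exponent vector of a single positive term. The sketch below is our own illustration (the function name and the list representation of the term's nonzero powers are not from the paper):

```python
def positive_term_is_convex(powers):
    """Check the convexity condition stated above for a positive signomial
    term c * x_1^p_1 * ... * x_N^p_N with c > 0: all powers negative except
    exactly one, and the sum of the powers at least one."""
    positive = [p for p in powers if p > 0]
    return len(positive) == 1 and sum(powers) >= 1

positive_term_is_convex([2.0, -0.5])  # True: one positive power, sum 1.5 >= 1
positive_term_is_convex([0.5, 0.8])   # False: two positive powers
```

Note that the two conditions together imply the single positive power is at least one, which matches the requirement Q_k ≥ 1 in the PPT definition below.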




Definition 1. Positive power transformation (PPT). The PPT convex underestimator for positive signomial terms is obtained by applying the single-variable transformation x_i = T_i(X_i) = X_i^{Q_i} to all variables with positive powers in the term. The transformation powers Q_i are negative for all indices i, except for one (i = k), for which Q_k ≥ 1. Furthermore, the sum of the powers in the transformed term must be greater than or equal to one for the term to be convex. Also, for the term to be underestimated, the inverse transformation X_i = T_i^{-1}(x_i) = x_i^{1/Q_i} is to be approximated by a PLF X̂_i.

The other option for reformulating positive terms applies exponential and power transformations to the variables with positive powers in the nonconvex term.

Definition 2. Mixed power and exponential transformation (MPET). The MPET convex underestimator for positive signomial terms is obtained by applying either of the transformations x_i = T_i(X_i) = X_i^{Q_i}, where Q_i < 0, or x_i = T_i(X_i) = exp(X_i) to any variables x_i with positive powers in the term, as long as the inverse transformations X_i = T_i^{-1}(x_i) = x_i^{1/Q_i} and X_i = T_i^{-1}(x_i) = ln(x_i), respectively, are approximated by PLFs X̂_i.

2.2. Transformation Schemes for Negative Signomial Terms

For negative signomial terms, there is only one choice of transformation type. It is valid since a negative signomial term is convex if all powers are positive, less than or equal to one, and their sum is less than or equal to one (Lundell, 2009).

Definition 3. Power transformation for negative terms (PT). The PT convex underestimator for a negative signomial term is obtained by applying the transformation x_i = T_i(X_i) = X_i^{Q_i} to the individual variables in the term. The transformation powers used are 0 < Q_i ≤ 1 for all variables with positive powers and Q_i < 0 for all variables with negative powers. Furthermore, the sum of the powers in the transformed term must be greater than zero and less than or equal to one, and the inverse transformations X_i = T_i^{-1}(x_i) = x_i^{1/Q_i} are approximated by PLFs X̂_i.
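The underestimation property behind these definitions can be illustrated numerically: for a concave inverse transformation such as X = x^{1/Q} with Q = 2 (i.e., X = sqrt(x)), the linear interpolant through the breakpoints lies below the function, so replacing X by its PLF underestimates the transformed term. The helper below is a minimal sketch of ours; the actual PLF model in the solver is formulated with binary or SOS variables instead.

```python
def plf(breakpoints, values, x):
    """Evaluate the piecewise linear interpolant through the given
    (breakpoint, value) pairs at the point x."""
    for (x0, x1), (y0, y1) in zip(zip(breakpoints, breakpoints[1:]),
                                  zip(values, values[1:])):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return (1 - t) * y0 + t * y1
    raise ValueError("x outside the breakpoint range")

Q = 2  # inverse transformation X = x**(1/Q) = sqrt(x), concave for x > 0
bps = [1.0, 2.5, 4.0]
vals = [b ** (1.0 / Q) for b in bps]
x = 1.75
assert plf(bps, vals, x) <= x ** (1.0 / Q)  # the PLF underestimates sqrt(x)
```

Adding breakpoints tightens this underestimation, which is exactly what the refinement strategies of the next section control.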

3. The SGO Algorithm

By applying the transformation techniques presented in the previous section, any nonconvex signomial problem of type (1) can be reformulated into a convex form overestimating the feasible region of the nonconvex problem. This fact is utilized in the SGO algorithm, in which such reformulated problems are iteratively solved to optimality. If the solution of the subproblem fulfills all the constraints in the original nonconvex problem, the global solution has been found, since the reformulated problem fully contains the feasible set of the nonconvex problem. If the solution does not fulfill all the constraints, it does not belong to the feasible set of the nonconvex problem, and the PLFs must be updated with more breakpoints before the next subproblem is solved.

The strategy selected for adding the breakpoints has a large impact on the solution process, since it determines which part of the overestimated feasible region is refined. The most common strategies are to add the solution point from the previous iteration as a new breakpoint, or the midpoint of the breakpoint interval the solution point belongs to. A positive feature of these two strategies is that the same point is added regardless of the transformation used. This is not the case when adding the point where the maximal approximation error of the PLF occurs (in the breakpoint interval where the solution lies), since that point depends on the transformation type (power or exponential) and the transformation power Q. Therefore, if the latter strategy is used, different binary variable or SOS sets must be used in the PLF approximation of each individual transformation, increasing the combinatorial complexity of the reformulated problem. More details on the different breakpoint strategies can be found in, e.g., Lundell (2009) or Lundell and Westerlund (2012).

Figure 1. The nonconvex function s(x) = −0.22x^4 + 3.20x^3 − 16.4x^2 + 34.0x as well as the resulting piecewise convex underestimators with equidistant breakpoints (left), breakpoints iteratively added using the midpoint strategy (middle) and the solution point strategy (right).
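The three refinement strategies discussed in this section can be sketched for a single PLF over a sorted breakpoint list as follows. Function names and signatures are ours, purely illustrative; note how the max-error strategy, unlike the other two, needs the transformed function f, which is why it requires separate PLF variable sets per transformation.

```python
import bisect

def add_solution_point(bps, x_sol):
    """Solution point strategy: insert the previous iterate itself."""
    if x_sol not in bps:
        bisect.insort(bps, x_sol)
    return bps

def add_midpoint(bps, x_sol):
    """Midpoint strategy: insert the midpoint of the interval containing x_sol."""
    i = min(max(bisect.bisect_right(bps, x_sol), 1), len(bps) - 1)
    mid = 0.5 * (bps[i - 1] + bps[i])
    if mid not in bps:
        bisect.insort(bps, mid)
    return bps

def add_max_error_point(bps, x_sol, f, n=1000):
    """Max error point strategy: insert the point of maximal |f - chord| in
    the interval containing x_sol (dense-grid search for illustration)."""
    i = min(max(bisect.bisect_right(bps, x_sol), 1), len(bps) - 1)
    a, b = bps[i - 1], bps[i]
    slope = (f(b) - f(a)) / (b - a)
    xs = [a + (b - a) * k / n for k in range(n + 1)]
    x_star = max(xs, key=lambda x: abs(f(x) - (f(a) + slope * (x - a))))
    if x_star not in bps:
        bisect.insort(bps, x_star)
    return bps

add_midpoint([1.5, 5.0], 2.0)        # -> [1.5, 3.25, 5.0]
add_solution_point([1.5, 5.0], 2.0)  # -> [1.5, 2.0, 5.0]
```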

4. Computational Results

In this section, the SGO algorithm is applied to some test functions and problems to illustrate the breakpoint refinement strategies. The solver implementation based on the SGO algorithm, as described in Lundell and Westerlund (2009), interfaces with the General Algebraic Modeling System (GAMS) and was used for the numerical tests. Any of the convex MINLP solvers in GAMS can be utilized for solving the subproblems; here, the convex MINLP solver SBB was used if not otherwise stated. The tests were performed on a computer with a quad-core Intel i7 2.8 GHz processor.

To exemplify how the underestimator differs when breakpoints are added initially or iteratively, the nonconvex polynomial s(x) = −0.22x^4 + 3.20x^3 − 16.4x^2 + 34.0x is considered. The goal is to find the global minimum of the function on the interval [1.5, 5], assuming that the underestimation error in the solution point must be less than ε = 0.01. To reach this goal using initial partitioning, 148 breakpoints are required (including the endpoints). When applying the SGO algorithm with the solution point, midpoint or maximal error point strategies, 46, 52 and 45 iterations are required respectively, corresponding to 47, 53 and 46 breakpoints. The combinatorial complexity of the reformulated problem is larger when using initial partitioning, but in this simple example the solution times are longer when iteratively adding breakpoints, since only one SGO iteration is required with initial partitioning. The final underestimators in the different cases are shown in Figure 1, and comparisons of the three breakpoint strategies are given in Table 1.

Breakpoint strategy   Iterations   LB1       Breakpoints in final iteration   Time (s)
Solution point        46           -141.29   47                               11.3
Midpoint              52           -141.29   53                               12.6
Max error point       45           -141.29   46                               11.4

Initial equidistant breakpoints   Iterations   LB1     Time (s)
3                                 51           -31.1   12.2
4                                 44           -3.0    10.5
8                                 42           17.5    10.3
16                                34           21.1    9.1
32                                27           21.9    9.1
64                                17           22.0    9.0
128                               7            22.1    7.4
147                               2            22.1    2.4
148                               1            22.1    1.3
256                               1            22.1    2.7
512                               1            22.1    8.8
1024                              1            22.1    37.2

Table 1. Results from minimizing the function s(x) in Section 4. The upper part shows the iterative breakpoint strategies and the lower part the initial partitioning strategy. LB1 corresponds to the lower bound of the first SGO iteration, and the time is the total time of solving the reformulated MINLP subproblems in GAMS. When using the initial partitioning strategy, the midpoint strategy is used in subsequent iterations if more breakpoints are required.

To compare the breakpoint strategies, the SGO algorithm is now applied to some test problems. Problems 1–4 are from Floudas and Pardalos (1999) and Problem 5 is from Westerlund and Papageorgiou (2004). When solving the last problem, αECP was used instead of SBB for solving the convex MINLP subproblems in GAMS. The characteristics of the problems are given in Table 2 and the results in Table 3. The results indicate that the optimal breakpoint strategy is highly problem dependent; however, the midpoint strategy seems to be the most robust one, albeit not always the fastest.
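The gap between initial equidistant partitioning and adaptive refinement seen above can be reproduced qualitatively on a single transformation. The sketch below is our own illustration (not the paper's experiment): it counts the breakpoints needed to approximate the MPET inverse transformation ln(x) on [1.5, 5] to a chord-error tolerance, once with equidistant points and once with greedy insertion at the current max-error point.

```python
import math

def chord_errors(f, bps, n=400):
    """For each breakpoint interval, return (max |f - chord|, argmax)."""
    out = []
    for a, b in zip(bps, bps[1:]):
        slope = (f(b) - f(a)) / (b - a)
        best = (0.0, a)
        for k in range(n + 1):
            x = a + (b - a) * k / n
            err = abs(f(x) - (f(a) + slope * (x - a)))
            if err > best[0]:
                best = (err, x)
        out.append(best)
    return out

def equidistant_count(f, lo, hi, eps):
    """Smallest number of equidistant breakpoints with max chord error <= eps."""
    m = 2
    while max(e for e, _ in chord_errors(
            f, [lo + (hi - lo) * i / (m - 1) for i in range(m)])) > eps:
        m += 1
    return m

def greedy_count(f, lo, hi, eps):
    """Breakpoints used when repeatedly inserting the current max-error point."""
    bps = [lo, hi]
    while True:
        err, x = max(chord_errors(f, bps))
        if err <= eps:
            return len(bps)
        bps = sorted(bps + [x])

n_equi = equidistant_count(math.log, 1.5, 5.0, 1e-3)
n_greedy = greedy_count(math.log, 1.5, 5.0, 1e-3)
# Greedy refinement needs fewer breakpoints than equidistant partitioning here,
# mirroring the 148-vs-47 gap reported for s(x) above (different function, so
# the counts themselves differ).
```

The adaptive scheme concentrates breakpoints where the curvature of ln(x) is largest (near the lower bound), which is why it reaches the tolerance with fewer points.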

5. Conclusions

In this paper, some strategies for refining the PLFs in the SGO algorithm were described and compared through illustrative figures and by applying them to some test problems. The strategy of adding many breakpoints to the initial PLF approximation may, for simple problems with few required transformations, be a good idea. For larger problems, however, this leads to an overly complex transformed problem. Therefore, it is often beneficial to add breakpoints iteratively, since new points are then only added in regions where they are required for reducing the feasible region of the overestimated problem.

Acknowledgments Support from the Foundation of Åbo Akademi University, as part of the grant for the Center of Excellence in Optimization and Systems Engineering, is gratefully acknowledged.




Problem     Real variables   Binary variables   Integer variables   Nonconvex inequalities   Transformations required
1 (7.2.1)   7                -                  -                   13                       15
2 (7.2.2)   6                -                  -                   9                        12
3 (7.2.4)   9                -                  -                   5                        9
4 (7.2.6)   8                -                  -                   2                        4
5           58               20                 31                  1                        3

Table 2. Characteristics of the test problems in Section 4. The designation in parentheses corresponds to the number of the problem in the source.

        Midpoint strategy           Solution point strategy     Max error point strategy
Prob.   Iters   Bpts   Time (s)    Iters   Bpts   Time (s)     Iters   Bpts   Time (s)
1       9       70     77          34      166    3930         >=9     >=150  >43200
2       12      78     35          12      77     40           12      156    2020
3       10      88     5400        11      95     7620         8       81     2400
4       9       30     2.6         8       23     2.4          6       28     1.7
5       20      45     210         7       15     61           19      43     190

Table 3. Results of solving the test problems in Section 4. The columns correspond to the number of SGO iterations, the breakpoints in the final iteration and the total CPU time for solving the reformulated subproblems. Problem 1 was not solved to optimality within the time limit of 12 hours using the max error point strategy.

References

Floudas, C., Pardalos, P., 1999. Handbook of Test Problems in Local and Global Optimization. Nonconvex Optimization and Its Applications. Kluwer Academic Publishers.

Lundell, A., 2009. Transformation techniques for signomial functions in global optimization. Ph.D. thesis, Åbo Akademi University.

Lundell, A., Skjäl, A., Westerlund, T., 2012. A reformulation framework for global optimization. Journal of Global Optimization (available online).

Lundell, A., Westerlund, J., Westerlund, T., 2009. Some transformation techniques with applications in global optimization. Journal of Global Optimization 43 (2), 391–405.

Lundell, A., Westerlund, T., 2009. Implementation of a convexification technique for signomial functions. In: Jezowski, J., Thullie, J. (Eds.), 19th European Symposium on Computer Aided Process Engineering. Vol. 26 of Computer Aided Chemical Engineering. Elsevier, pp. 579–583.

Lundell, A., Westerlund, T., 2012. Global optimization of mixed-integer signomial programming problems. In: Lee, J., Leyffer, S. (Eds.), Mixed Integer Nonlinear Programming. Vol. 154 of The IMA Volumes in Mathematics and its Applications. Springer New York, pp. 349–369.

Westerlund, J., Papageorgiou, L. G., 2004. Improved performance in process plant layout problems using symmetry breaking constraints. In: Discovery through Product and Process Design, Proceedings of FOCAPD 2004.