Chance-constrained optimization for nonconvex programs using scenario-based methods


Yu Yang a,∗, Christie Sutanto a,b

a Chemical Engineering Department, California State University Long Beach, CA 90840, USA
b Department of Chemical Engineering and Materials Science, University of California, Irvine, CA, USA

∗ Corresponding author. E-mail address: [email protected] (Y. Yang).

Highlights

• A scenario-based approach is presented for nonconvex chance-constrained programs.
• A sequential method is proposed to solve nonconvex programs to a global optimum.
• A branching-and-sampling method is proposed for 0–1 chance-constrained programs.
• A branching-and-discarding method is proposed for 0–1 chance-constrained programs.

Article info

Article history: Received 18 August 2018; Received in revised form 6 December 2018; Accepted 12 January 2019; Available online xxxx.

Keywords: Nonconvex program; Chance constraints; Scenario method

Abstract

This paper presents a scenario-based method to solve chance-constrained optimization for nonconvex programs. The sample complexity is first developed to guarantee probabilistic feasibility. Then, by sampling the uncertain parameters, many scenarios are generated to form a large-scale deterministic approximation of the original chance-constrained program. Solving the resulting scenario-based nonconvex optimization is usually time-consuming. To overcome this challenge, we propose a sequential approach that finds the global optimum more efficiently. Moreover, two novel schemes, branching-and-sampling and branching-and-discarding, are developed for the chance-constrained 0–1 program; they refine the scenario set in order to find a less conservative solution. Finally, model predictive control and a process scheduling problem are taken as examples to evaluate the effectiveness of the proposed optimization approaches.

© 2019 Published by Elsevier Ltd on behalf of ISA.

1. Introduction

Optimization under uncertainty has become an essential topic in process control and scheduling, because uncertainties widely exist in practice [1]. An optimal solution based on the nominal model may not achieve the desired outcomes or meet state/output constraints due to model uncertainties. Therefore, the development of a robust optimal solution that accounts for model uncertainties is desirable. Early works on optimization under uncertainty focus on feasibility under the worst case among all admissible scenarios. Even though such an approach is able to reject any uncertainty, its solution is very conservative for practical implementation [2]. Besides, obtaining the worst case is non-trivial, especially when the uncertain parameters are randomly distributed with infinite support. Instead of worst-case analysis, the probabilistic approach is more attractive. The fundamental idea is to introduce probabilistic feasibility, which requires a solution to satisfy the chance

constraints with probability at least 1 − ϵ. This relaxation significantly reduces the conservatism of robust optimization and leads to more practical formulations. In [3], the concept of the price of robustness is introduced to limit the probability of constraint violation under bounded uncertainties. In [4], robust counterpart optimization for the linear program (LP) and mixed-integer linear program (MILP) on different uncertainty sets is discussed. Bounds on the probabilistic feasibility of LP and MILP are given in [5]. The quality of the robust solution is improved in [6] through a posterior bound. Ref. [7] considers transforming an unbounded distribution into a bounded one to enable robust counterpart optimization. Robust reformulations and probabilistic feasibility for quadratic programming, semidefinite programming, and linear matrix inequalities (LMIs) are presented in [8].

Chance-constrained programming is another type of method to obtain a robust solution when the uncertain parameters follow a known distribution. This method has been applied to solve the stochastic linear program with normally distributed parameters [9]. For a stochastic convex program with an ambiguous distribution, the Bernstein approximation is employed to obtain a computationally tractable reformulation [10]. In [11], a gradient-based randomized algorithm is


proposed to generate a qualified solution with high confidence. In optimization-based controller design, probabilistic constraints can be integrated with the optimization formulation to bound the output with high probability. In [12], a simulation-based method is proposed to calculate the probability and gradient of the constraint for chance-constrained linear system control. Ref. [13] handles chance constraints in MPC by enforcing an augmented state within an invariant set. In [14], chance constraints in the linear quadratic regulator are converted to deterministic ones through the Chebyshev bound. In [15], chance constraints in a linear system are recast as LMIs. However, all these methods require elaborate analysis of the model structure and a known distribution of the uncertainties to obtain a tractable reformulation.

Different from the probabilistic approaches mentioned above, this study focuses on the sampling-based method, which is distribution-free and does not require any analytical reformulation. It is applicable to programs in which uncertain parameters appear in the constraints. By generating many samples of the uncertain parameters to build scenarios, a deterministic approximation is established to address the chance-constrained optimization. This idea has been accepted in the automatic control community. For instance, in [16], scenario-based MPC is devised for general linear systems with additive and multiplicative disturbances. Ref. [17] presents an offline sampling method for MPC to address the computational demand and recursive feasibility. The modulating-robustness approach is proposed to trade off the constraint and cost violation probabilities based on the sampling method [18]. In [19], robust optimization and the scenario approach are combined to provide less conservative solutions.

Overall, if the number of samples is large enough, then solving the scenario-based deterministic approximation may lead to a probabilistically feasible solution. On the other hand, if there are too many scenarios, the solution will be very conservative. Thus, an essential research objective is to determine the necessary number of scenarios (the sample complexity). For the robust convex program, a lower bound on the sample complexity can be found; if the number of scenarios is greater than this lower bound, then probabilistic feasibility is achieved with confidence 1 − β [20]. The parameters ϵ and β should be small enough, and their proper ranges can be found in [21]. Following this direction, [22] provides a tighter bound on the sample complexity for the robust convex program. The significance of reducing the sample complexity is twofold. First, a large number of scenarios may render the resulting optimization intractable. Second, more scenarios may lead to a more conservative solution. In order to further decrease the computational time and pursue optimality, both [23] and [24] develop constraint removal rules using different bounds on the sample complexity. The methodology of sequential validation (SPV) is proposed in [22]. Given a candidate solution, SPV generates a validation set iteratively to evaluate its probabilistic feasibility and further refine it. SPV is not restricted to the convex program, but it can only be used to check the probabilistic feasibility of a solution. In fact, nonconvex problems are more common in process control and scheduling, but only a few recent papers show promising results.
In [25], the support scenarios for a nonconvex program are estimated through a posterior evaluation. In [26], a particular class of mixed-integer convex programs is considered and its sample complexity is developed. In [27], the robust program with a nonconvex objective function is addressed and its sample complexity is developed based on a convex-hull set. In [28], the sample complexity of the mixed-integer convex program is determined by finding its Helly dimension.

The first contribution of this paper is the establishment of a lower bound on the sample complexity for the nonconvex chance-constrained program that only contains polynomial functions. The key idea is to introduce auxiliary variables for the polynomial

functions and build a convex/affine relaxation. It is worthwhile to note that the convex relaxation is only used to determine the necessary number of scenarios, N. Then, a nonconvex deterministic approximation is created and solved based on the N sampled scenarios.

The second contribution is the development of a fast global optimization algorithm for the scenario-based nonconvex program. Directly using state-of-the-art optimization solvers, such as BARON [29] and ANTIGONE [30], may take a long time to obtain the global optimum for such a large-scale problem. A sequential optimization scheme is more efficient for handling the scenario-based formulation. The proposed algorithm starts by solving a deterministic optimization without incorporating any sampled scenarios and continuously evaluates its solution on all constraints in the sampling set. The support scenarios/constraints are identified and appended to the formulation. This procedure is repeated until a globally optimal solution is reached, at which point all constraints in the sampling set are satisfied. The proposed method restricts the size of the optimization formulation and thereby reduces its computational time.

The third contribution is the development of two special schemes for the chance-constrained 0–1 program to obtain less conservative solutions. Notice that the bound on the sample complexity depends on the number of decision variables in the optimization formulation. For the 0–1 program, binary variable tightening and branching can reduce the number of decision variables, so we adopt these approaches for scenario reduction. The first scheme is branching-and-sampling, which regenerates a much smaller scenario set by branching on binary variables. The second scheme is branching-and-discarding, based on [24], which removes some restrictive scenarios from the initial sampling set. Both schemes can improve optimality while preserving the probabilistic feasibility of the solution of a 0–1 program under uncertainties.

The rest of the paper is organized as follows. The problem statement is given in Section 2. A preliminary discussion of sample complexity is introduced in Section 3. The main methods, including new sampling bounds for the nonconvex program, variable tightening, branching-and-sampling, and branching-and-discarding, are proposed in Section 4. Case studies are presented in Section 5 to show the optimality and probabilistic feasibility of the solutions obtained through the proposed methods. Finally, conclusions are drawn in Section 6.

Notation. Throughout this paper, vectors (matrices) are denoted by lowercase (uppercase) boldface letters.

2. Problem statement

Let us consider a special nonconvex optimization formulation with polynomial functions f1, f2 and g:

min_{x∈[x̲,x̄]⊂R^{Dx}, y∈{0,1}^{Dy}}  c1^T x + c2^T y                    (P)
s.t.  f1(x, y) = 0,
      f2(x, y) ⩽ 0,
      g(x, y, θ) ⩽ 0,

where upper and lower bars represent the upper and lower bounds on variables, respectively; f1 : R^{Dx+Dy} → R^{Df1} and f2 : R^{Dx+Dy} → R^{Df2} are nonconvex deterministic functions; g = [g1 g2 ... gDg]^T : R^{Dx+Dy+Dθ} → R^{Dg} is nonconvex for any fixed value of θ ∈ Θ ⊆ R^{Dθ}. The decision variables are x ∈ [x̲, x̄] ⊂ R^{Dx} and y ∈ {0,1}^{Dy}, and θ ∈ Θ represents uncertain parameters that may follow any distribution. A solution that meets the constraint g(x, y, θ) ⩽ 0 for all θ ∈ Θ can be very conservative or may not even exist. Therefore, this constraint is softened to yield a general chance-constrained optimization formulation:

min_{x∈[x̲,x̄], y∈{0,1}^{Dy}}  c1^T x + c2^T y                    (CCP)


s.t.  f1(x, y) = 0,
      f2(x, y) ⩽ 0,
      P(g(x, y, θ) ⩽ 0) ⩾ 1 − ϵ,                    (1)

where ϵ is a positive value that represents the risk level. Here we assume that the solution set of (CCP) is nonempty. A scenario-based deterministic approximation of (CCP) is shown below:

min_{x∈[x̲,x̄], y∈{0,1}^{Dy}}  c1^T x + c2^T y                    (SP)
s.t.  f1(x, y) = 0,
      f2(x, y) ⩽ 0,
      g(x, y, θj) ⩽ 0,  ∀j ∈ {1, 2, ..., N},

where θj, ∀j ∈ {1, 2, ..., N}, are independent and identically distributed (i.i.d.) samples from the set Θ and N is the total number of samples. The program (SP) is a large-scale optimization formulation with Dg × N + Df1 + Df2 constraints. The objective of this research is to identify a proper N that guarantees that the probabilistic constraint (1) is satisfied with high confidence 1 − β. In addition, an efficient algorithm for (SP) will be developed.
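Because the approach is distribution-free, any sampler of θ can be used both to generate the scenarios for (SP) and, afterwards, to check a candidate solution empirically, as is done with fresh samples in Section 5. The following minimal Python sketch illustrates this bookkeeping; the sampler, the constraint function g, and the candidate solution {x, y} are placeholders to be supplied by the user, not part of the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_scenarios(n, d_theta, sampler=rng.standard_normal):
    # i.i.d. samples theta_1, ..., theta_n used to build (SP); the standard
    # normal is only a placeholder, since the scenario method itself is
    # distribution-free and any sampler of theta can be plugged in
    return sampler((n, d_theta))

def empirical_violation(g, x, y, scenarios):
    # fraction of scenarios with max_i g_i(x, y, theta_j) > 0, i.e. an
    # empirical estimate of 1 - P(g(x, y, theta) <= 0) for a fixed {x, y}
    return float(np.mean([np.max(g(x, y, th)) > 0.0 for th in scenarios]))

# after (SP) has been solved on N training scenarios, constraint (1) can be
# checked on a large fresh validation set, e.g.
# viol = empirical_violation(g, x_opt, y_opt, draw_scenarios(10_000, d_theta))
```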

Table 1
Sample complexity N for different Dx.

Dx:   30     40     50     60     70     80     90
N:    1344   1633   1911   2155   2447   2708   2964

3. Preliminary results: bounds on sample complexity for the convex program

As a preliminary, we present the sample complexity for the following chance-constrained convex program:

min_{x∈[x̲,x̄]}  c1^T x                    (COP)
s.t.  w(x) ⩽ 0,
      P(h(x, θ) ⩽ 0) ⩾ 1 − ϵ,

where w : R^{Dx} → R^{Dw} is an affine function and h : R^{Dx} → R is a convex function for every fixed θ ∈ Θ. The N-scenario-based approximation of (COP) is:

min_{x∈[x̲,x̄]}  c1^T x                    (COSP)
s.t.  w(x) ⩽ 0,
      h(x, θj) ⩽ 0,  ∀j ∈ {1, 2, ..., N}.

We first introduce the bound derived from [22] to determine N:

N ⩾ inf_{a>1} (1/ϵ) (a/(a−1)) (ln(1/β) + (Dx − 1) ln a),                    (2)

where β is the confidence parameter given as a prior. It can be shown that the optimal solution x∗ of the N-scenario-based convex program satisfies the probabilistic constraint P(h(x∗, θ) ⩽ 0) ⩾ 1 − ϵ with confidence 1 − β. The right-hand side of (2) can be minimized over a by using a global optimization solver, such as BARON. Given ϵ = 0.05 and β = 10^−6, the minimal integer value of N for different Dx is shown in Table 1. We also remark that as β decreases to 10^−10 or even 10^−20, the required number of scenarios does not increase significantly [21].

The second bound on the sample complexity of a convex program is from [24]. Given N sampled scenarios, when k of them are eliminated by any rule, the probabilistic feasibility of the resulting optimal solution x∗ is preserved with confidence 1 − β as long as N and k satisfy the following inequality:

(k+Dx−1 choose k) Σ_{i=0}^{k+Dx−1} (N choose i) ϵ^i (1−ϵ)^{N−i} ⩽ β.                    (3)

Thus, the maximum k for a given N can be determined via (3). A larger k implies that more scenarios are eliminated, and thereby a better solution of (COP) can be found. Finally, it is also worthwhile to note that both bounds (2) and (3) are distribution-free and depend on the number of decision variables, rather than on the number of uncertain parameters.

4. Main result

4.1. Nonconvex chance-constrained optimization

The bound on the sample complexity for a special nonconvex program is presented in this section. First, the following lemma establishes the probabilistic feasibility of a feasible solution of the scenario-based convex program.

Lemma 1. Given a feasible solution x′ of the scenario-based convex program (COSP), if one of the following conditions holds:
1. N satisfies Eq. (2);
2. N and k satisfy (3);
then P(h(x′, θ) ⩽ 0) ⩾ 1 − ϵ with confidence 1 − β.

Proof. Let Ω denote the feasible region of (COSP), which is a closed convex set. We first consider a feasible solution on the boundary of Ω: x′ ∈ ∂Ω. A new scenario-based convex program can then be constructed by changing the objective function in (COSP):

min_{x∈[x̲,x̄]}  ĉ1^T x                    (NCOSP1)
s.t.  w(x) ⩽ 0,
      h(x, θj) ⩽ 0,  ∀j ∈ {1, 2, ..., N},

where ĉ1 is a vector such that x′ is the unique optimal solution of (NCOSP1). Here θj, ∀j ∈ {1, 2, ..., N}, is the same scenario set as in (COSP). If N satisfies condition (2), then according to Ref. [22], x′ satisfies the probabilistic constraint P(h(x′, θ) ⩽ 0) ⩾ 1 − ϵ with confidence 1 − β. Similarly, given (COSP) containing N − k scenarios, a new (NCOSP1) can be constructed from the same set of scenarios such that x′ is its optimal solution. If N and k satisfy condition (3), then according to Ref. [24], x′ also satisfies the probabilistic constraint P(h(x′, θ) ⩽ 0) ⩾ 1 − ϵ with confidence 1 − β.

Now we show that an interior point of Ω has the same property. Let us perturb the stochastic constraint h(x, θ) ⩽ 0 and consider the following scenario-based convex program:

min_{x∈[x̲,x̄]}  c̃1^T x                    (NCOSP2)
s.t.  w(x) ⩽ 0,
      h(x, θj) ⩽ δ,  ∀j ∈ {1, 2, ..., N},

where θj, ∀j ∈ {1, 2, ..., N}, is the same scenario set as in (COSP), and the parameter δ < 0 should ensure the feasibility of (NCOSP2). Then there exist c̃1 ∈ R^{Dx} and j′ ∈ {1, 2, ..., N} such that the optimal solution x∗ of (NCOSP2) satisfies h(x∗, θj′) = δ. Because δ < 0, x∗ ∈ Ω° is an interior point of Ω. Note that x∗ lies on the boundary of the feasible region of (NCOSP2), and thereby P(h(x∗, θ) ⩽ δ) ⩾ 1 − ϵ with confidence 1 − β.


This further implies that P(h(x∗, θ) ⩽ 0) > 1 − ϵ with confidence 1 − β. By varying δ and c̃1, x∗ can be any interior point of the closed convex set Ω, and thus the conclusion of this lemma is confirmed. □

For polynomial functions, affine (convex) relaxations are computable through the auxiliary variable method [31]. Hence, the scenario-based convex relaxation of (SP) exists and is shown below:

min_{x∈[x̲,x̄], y∈[0,1]^{Dy}, z1∈R^{Dz1}, z2∈R^{Dz2}, zg∈R^{Dzg}, zq∈R^{Dzq}}  c1^T x + c2^T y                    (CRSP)
s.t.  f̃1(z1, x, y) = 0,
      f̃2(z2, x, y) ⩽ 0,
      g̃(zg, x, y, θj) ⩽ 0,  ∀j ∈ {1, 2, ..., N},
      q(x, y, z1, z2, zg, zq) ⩽ 0,
      y ∈ [0, 1]^{Dy},

where f̃1 is obtained by replacing any nonlinear terms in f1, denoted by r1(x, y), with the variables z1; similarly, f̃2 and g̃ are obtained by replacing the nonconvex terms in constraints f2 and g, denoted by r2(x, y) and rg(x, y), with z2 and zg, respectively. The auxiliary constraints q and variables zq then have to be introduced to construct a convex relaxation. Here f̃1 should be a vector of affine functions, while f̃2, g̃, and q are convex functions.

For example, to relax the bilinear term x1x2, an intermediate variable z1,2 = x1x2 is introduced. Its tightest convex relaxation is derived in [32], in which the auxiliary constraint set q(x, y, z1, z2, zg, zq) ⩽ 0 consists of the following inequalities:

z1,2 ⩾ x̲1 x2 + x1 x̲2 − x̲1 x̲2,    z1,2 ⩾ x̄1 x2 + x1 x̄2 − x̄1 x̄2,
z1,2 ⩽ x̄1 x2 + x1 x̲2 − x̄1 x̲2,    z1,2 ⩽ x1 x̄2 + x̲1 x2 − x̲1 x̄2.

Another example is the trilinear term z1,2,3 = x1x2x3. A recursive construction can be used by further introducing the auxiliary variable z1,2 = x1x2; here z1,2 plays the role of zq in q(x, y, z1, z2, zg, zq). The auxiliary constraint set then consists of the inequalities above for z1,2 together with

z1,2,3 ⩾ z̲1,2 x3 + z1,2 x̲3 − z̲1,2 x̲3,    z1,2,3 ⩾ z̄1,2 x3 + z1,2 x̄3 − z̄1,2 x̄3,
z1,2,3 ⩽ z̄1,2 x3 + z1,2 x̲3 − z̄1,2 x̲3,    z1,2,3 ⩽ z1,2 x̄3 + z̲1,2 x3 − z̲1,2 x̄3.

Alternatively, the method in [33] is recommended to relax more complex polynomial terms with fewer auxiliary variables. Hereafter, we use the notation ẑ to represent the vector [z1; z2; zg; zq] ∈ R^{Dẑ}. The binary variable y is relaxed by extending its domain to [0, 1].

Remark. The convex relaxation extends and lifts the feasible region of the nonconvex program (SP) to a closed convex set. It does not eliminate any feasible solution of the original problem. Thus, given a feasible solution {x′, y′} of (SP), there always exists ẑ′ satisfying all constraints in (CRSP). Hence, any feasible solution of (SP) (in the lower dimensional space) is linked to one or many feasible solutions of (CRSP) (in the higher dimensional space) constructed from the same scenario set.

The main theorem is proposed based on this convex relaxation.

Theorem 1. Given a globally optimal solution {x∗, y∗} of the nonconvex scenario-based program (SP), P(g(x∗, y∗, θ) ⩽ 0) ⩾ 1 − ϵ with confidence 1 − β if one of the following conditions is satisfied:

(a) N ⩾ inf_{a>1} (1/ϵ) (a/(a−1)) (ln(1/β) + (Dx + Dy + Dẑ − Df1 − 1) ln a);
(b) (k+Dx+Dy+Dẑ−Df1−1 choose k) Σ_{i=0}^{k+Dx+Dy+Dẑ−Df1−1} (N choose i) ϵ^i (1−ϵ)^{N−i} ⩽ β,

where k is the number of scenarios removed from N samples.
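Both conditions of Theorem 1 are easy to evaluate numerically once d = Dx + Dy + Dẑ − Df1 is known. The sketch below is one possible implementation, assuming SciPy is available; condition (b) is checked in log space because the leading binomial coefficient can be very large. For d = 40, ϵ = 0.05 and β = 10⁻⁶, the sample size returned by condition (a) is close to the value 1633 listed for Dx = 40 in Table 1.

```python
import math
from scipy.optimize import minimize_scalar
from scipy.stats import binom

def sample_size(d, eps, beta):
    # condition (a): N >= inf_{a>1} (1/eps)*(a/(a-1))*(ln(1/beta) + (d-1) ln a)
    bound = lambda a: (a / (eps * (a - 1.0))) * (math.log(1.0 / beta) + (d - 1) * math.log(a))
    res = minimize_scalar(bound, bounds=(1.0 + 1e-9, 1e4), method="bounded")
    return math.ceil(res.fun)

def condition_b(N, k, d, eps, beta):
    # condition (b): C(k+d-1, k) * sum_{i=0}^{k+d-1} C(N, i) eps^i (1-eps)^(N-i) <= beta
    m = k + d - 1
    log_lhs = (math.lgamma(m + 1) - math.lgamma(k + 1) - math.lgamma(d)
               + binom.logcdf(m, N, eps))
    return log_lhs <= math.log(beta)

def max_removable(N, d, eps, beta):
    # largest k allowed by condition (b) for a given N
    k = 0
    while condition_b(N, k + 1, d, eps, beta):
        k += 1
    return k
```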

Proof. The globally optimal solution {x∗, y∗} of (SP) is associated with a feasible solution of (CRSP) constructed from the same scenario set. This feasible solution always exists and is denoted by {x∗, y∗, ẑ∗}, where

ẑ∗ = [z1∗, z2∗, zg∗, zq∗] = [r1(x∗, y∗), r2(x∗, y∗), rg(x∗, y∗), zq∗].                    (4)

There exists a zq∗ such that the constraints q(x∗, y∗, z1∗, z2∗, zg∗, zq∗) ⩽ 0 hold. Given the equalities f̃1(z1∗, x∗, y∗) = 0 in (CRSP), Df1 variables can be eliminated in total, so the number of independent decision variables in (CRSP) is Dx + Dy + Dẑ − Df1. According to Lemma 1, if N satisfies condition (a), or N and k satisfy condition (b), then this feasible solution of (CRSP) achieves probabilistic feasibility, i.e., P(g̃(zg∗, x∗, y∗, θ) ⩽ 0) ⩾ 1 − ϵ with confidence 1 − β. Since zg∗ = rg(x∗, y∗), it follows that g̃(zg∗, x∗, y∗, θ) = g̃(rg(x∗, y∗), x∗, y∗, θ) = g(x∗, y∗, θ). We can therefore conclude that P(g(x∗, y∗, θ) ⩽ 0) ⩾ 1 − ϵ with confidence 1 − β. □

It is worthwhile to note that the convex relaxation (CRSP) is never actually solved. Instead, it is used to derive the number of required scenarios (the sample complexity). Theorem 1 exploits the fact that once x, y and ẑ are determined, the probabilistic constraint depends on the uncertain parameters only. Thus, if a solution of the convex relaxation (in the higher dimensional space), {x∗, y∗, ẑ∗} determined by (4), possesses probabilistic feasibility, then the solution of the nonconvex program (in the lower dimensional space), {x∗, y∗}, also possesses probabilistic feasibility, because the nonconvex terms in g can be replaced by rg(x∗, y∗). Notice that, apart from the decision variables, the number of nonconvex terms affects N and k as well. Hence, a nonconvex program with uncertainties requires more scenarios to guarantee its probabilistic feasibility.

The major difference between the proposed sample complexity and existing works lies in the convex relaxation. A recent result [34] on the sample complexity of general nonconvex programs shows that the number of support constraints may exceed the dimension of the decision variables, and thus the necessary number of scenarios cannot be determined a priori. In this paper, however, we consider a special case in which an affine/convex relaxation can be constructed for a nonconvex program containing only polynomial functions. Thus, the sample complexity can be determined from the number of nonconvex terms and the dimension of the decision variables.

4.2. Sequential optimization method

In this section, a fast algorithm for solving (SP) is proposed. Global optimization of the scenario-based nonconvex program is very challenging, especially when a large number of constraints exist. We note that, among all N × Dg constraints introduced by the sampled scenarios, a few of them dominate the others. If these support constraints in the sampling set are selected efficiently, then the size of the optimization formulation can be substantially reduced. To this end, an algorithm is developed based on the following optimization:

min_{x∈[x̲,x̄], y∈{0,1}^{Dy}}  c1^T x + c2^T y                    (PNC)


s.t.  f1(x, y) = 0,
      f2(x, y) ⩽ 0,
      gi(x, y, θj) ⩽ 0,  ∀j ∈ ∆i, ∀i ∈ {1, 2, ..., Dg},

where the index set ∆i ⊆ {1, 2, ..., N} contains all scenarios considered for constraint gi. If the cardinality of each set ∆i, i ∈ {1, 2, ..., Dg}, is small, then (PNC) is small-scale and can be solved rapidly to global optimality by BARON. The procedure for building ∆i and searching for the globally optimal solution of (SP) is shown below:

output: solution x∗, y∗ and the objective value of (SP)
Initialize: ∆i = ∅, ∀i ∈ {1, 2, ..., Dg}; vj = 1, ∀j ∈ {1, 2, ..., N};
while ∃{i, j ∉ ∆i} such that max_{i,j} {vj,i} > 0 do
    Solve problem (PNC) and obtain its optimal solution x∗ and y∗;
    for i ← 1 to Dg do
        for j ← 1 to N and j ∉ ∆i do
            vj,i = gi(x∗, y∗, θj);
        end
        if max_{j∉∆i} {vj,i} > 0 then
            j′ = arg max_{j∉∆i} {vj,i};
            ∆i ← ∆i ∪ {j′};
        end
    end
end
Algorithm 1: Sequential optimization method for the scenario-based program.

Here vj = [vj,1 vj,2 ... vj,Dg]^T represents the constraint violations under scenario j. Given the solution {x∗, y∗} of (PNC), Algorithm 1 evaluates the constraint violations across all scenarios. The most violated constraint is then incorporated into the set ∆i. At each iteration, the previous optimal solution {x∗, y∗} is eliminated by the newly appended constraints. Since (PNC) is a relaxation of (SP), if the globally optimal solution {x∗, y∗} of (PNC) satisfies all constraints in (SP), then the global optimum of (SP) has been reached and Algorithm 1 can be terminated.

In practice, the global optimization of a nonconvex program is terminated when the relative gap reaches a threshold γ. Here the relative gap for a minimization problem is defined by (5):

Relative gap = (Upper bound − Lower bound) / |Lower bound|.                    (5)

The lower and upper bounds enclose the true globally optimal objective value. In the sequential method, (PNC) is also solved until it reaches the threshold γ, and we can claim the following proposition.

Proposition 1. If (1) the relative gap of (PNC) in Algorithm 1 is γ, and (2) the lower and upper bound values have the same sign, then the relative gap of this solution in (SP) is no larger than γ.

Proof. Let us denote the objective value of the final solution of Algorithm 1 by UB∗_PNC. According to the condition, there exists a lower bound value of (PNC), LB∗_PNC, such that (UB∗_PNC − LB∗_PNC)/|LB∗_PNC| = γ. Since (PNC) is a relaxation of (SP), we can find a lower bound value of (SP), LB∗_SP, such that LB∗_PNC ⩽ LB∗_SP and LB∗_PNC LB∗_SP > 0. The final solution of Algorithm 1 satisfies all constraints in (SP), and thereby UB∗_PNC = UB∗_SP. To compare (UB∗_SP − LB∗_SP)/|LB∗_SP| and (UB∗_PNC − LB∗_PNC)/|LB∗_PNC|, we discuss two cases.

If LB∗_PNC > 0, then |LB∗_SP| ⩾ |LB∗_PNC| and UB∗_PNC > 0. This implies UB∗_SP/|LB∗_SP| ⩽ UB∗_PNC/|LB∗_PNC|, and thus (UB∗_SP − LB∗_SP)/|LB∗_SP| ⩽ (UB∗_PNC − LB∗_PNC)/|LB∗_PNC| = γ.

If LB∗_PNC < 0, then |LB∗_SP| ⩽ |LB∗_PNC|. However, since UB∗_SP < 0 according to the condition, we still have UB∗_SP/|LB∗_SP| ⩽ UB∗_PNC/|LB∗_PNC|, and thus (UB∗_SP − LB∗_SP)/|LB∗_SP| ⩽ (UB∗_PNC − LB∗_PNC)/|LB∗_PNC| = γ. □

Remark. BARON is a benchmark global optimization package that can solve most small/medium-scale nonconvex programs to global optimality within a reasonable time. The proposed sequential method still relies on BARON to solve (PNC); alternatively, other deterministic global optimization solvers may be used. If the problem is too difficult for its global optimum to be found, we recommend heuristic approaches that have been used for robust controller design, such as multi-objective extremal optimization [35] or constrained population extremal optimization [36].
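The bookkeeping in Algorithm 1 is straightforward to prototype. The sketch below only illustrates the loop on a toy convex instance, with SciPy's linprog standing in for the global solver of (PNC) (the paper itself uses BARON on the nonconvex subproblems); the toy constraint g below is an assumption made purely for illustration.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N = 500                                   # sampled scenarios theta_j
theta = rng.normal(size=(N, 2))           # toy uncertainty; any sampler works

c = np.array([1.0, 1.0])                  # minimize c^T x over the box [0, 10]^2
bounds = [(0.0, 10.0), (0.0, 10.0)]

def g(x, th):
    # single toy scenario constraint g(x, theta_j) <= 0 (affine here)
    return 1.0 + th[0] + 0.5 * th[1] - x[0] - x[1]

delta = []                                # scenarios kept in (PNC)
while True:
    # "solve (PNC)" with only the constraints indexed by delta
    A = np.array([[-1.0, -1.0]] * len(delta)) if delta else None
    b = np.array([-(1.0 + theta[j, 0] + 0.5 * theta[j, 1]) for j in delta]) if delta else None
    x = linprog(c, A_ub=A, b_ub=b, bounds=bounds).x
    # evaluate all sampled constraints and append the most violated scenario
    v = np.array([g(x, theta[j]) for j in range(N)])
    v[delta] = -np.inf
    j_worst = int(np.argmax(v))
    if v[j_worst] <= 0.0:
        break                             # every scenario constraint holds: done
    delta.append(j_worst)

print(f"support scenarios: {len(delta)}, solution: {x}")
```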

4.3. Bound tightening for binary variables

Binary variables representing logic decisions commonly exist in process design and scheduling problems. Hereafter, this paper focuses on the scenario-based method for solving the chance-constrained 0–1 program. The aim is to find a less conservative solution by reducing the number of required scenarios in (SP). The sample complexity depends on the number of decision variables. This fact motivates us to reduce the dimension of the decision variables in a 0–1 program and then reconstruct (SP) with fewer scenarios.

Bound tightening is usually utilized to speed up nonconvex optimization, and it is also applicable to reducing the number of binary decision variables. A simple method of bound tightening is interval analysis, but it usually yields very loose bounds. In order to obtain tighter bounds on the binary variables, the following optimization formulation is solved:

max / min_{x∈[x̲,x̄], y∈{0,1}^{Dy}}  yl                    (BT)
s.t.  f1(x, y) = 0,
      f2(x, y) ⩽ 0,
      c1^T x + c2^T y ⩽ UB,                    (6)

where UB is the optimal objective value of (SP) constructed from N scenarios. Formulation (BT) is always feasible, and its solution is the upper or lower bound on yl, l ∈ {1, 2, ..., Dy}.

Several comments on (BT) are in order. First, constraint (6) plays the most essential role in this bound tightening scheme: if the objective value c1^T x + c2^T y associated with the upper/lower bound of yl is higher than UB, then this upper/lower bound can be further tightened. Second, constraints with uncertain parameters are not incorporated in (BT); it is therefore a small-scale deterministic nonconvex optimization that can be solved by BARON. Third, optimization-based bound tightening is computationally more expensive than interval analysis, but it is beneficial to global optimization if its computational cost is kept under control [37]; its efficiency in improving convergence has been demonstrated in a decomposition-based global optimization algorithm [38]. If solving such a small-scale nonconvex optimization is still time-consuming, its convex relaxation can be constructed to obtain lower and upper bounds on the variables. Fourth, we only examine the elements of the vector y and find their upper and lower bounds. For each binary variable yl, if its upper bound is less than 1 or its lower bound is greater than 0, then its value is determined, and the number of decision variables is thus reduced.
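In the paper, (BT) is solved with BARON for each yl; the toy sketch below only illustrates the resulting fixing rule (an upper bound below 1 fixes yl = 0, a lower bound above 0 fixes yl = 1) by brute-force enumeration on a hypothetical three-variable instance.

```python
import itertools
import numpy as np

def tighten_binaries(feasible, objective, UB, n_y):
    # for each binary y_l, compute its minimum and maximum over all feasible
    # assignments whose objective does not exceed the incumbent value UB
    lo = np.ones(n_y, dtype=int)
    hi = np.zeros(n_y, dtype=int)
    for y in itertools.product((0, 1), repeat=n_y):
        y = np.array(y)
        if feasible(y) and objective(y) <= UB:
            lo = np.minimum(lo, y)
            hi = np.maximum(hi, y)
    # variables whose bounds collapse are fixed; they form y_hat_1, while the
    # remaining indices form y_hat_2 and are candidates for branching
    return {l: int(lo[l]) for l in range(n_y) if lo[l] == hi[l]}

# hypothetical toy instance: minimize sum(y) subject to y1 + y2 >= 1, incumbent UB = 1
fixed = tighten_binaries(lambda y: y[0] + y[1] >= 1, lambda y: int(y.sum()), 1, 3)
print(fixed)   # {2: 0}: y3's upper bound collapses to 0, so it joins y_hat_1
```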

4.4. Branching-and-sampling

In the 0–1 program, branching on binary variables to break a complex problem into several smaller subproblems is an efficient method for finding the globally optimal solution.


Motivated by this idea, we branch on the binary variables y to reduce the number of unknown decision variables so that the required number of samples N can be decreased according to Theorem 1. Then, for each subproblem, a new and smaller set of scenarios is regenerated for (SP) with partially fixed y. With fewer scenarios, we have a better chance of obtaining a less conservative solution.

In detail, Algorithm 1 is employed to solve the (SP) constructed from N scenarios, whose optimal objective value is denoted by UB. This UB is used in (BT) to tighten the bounds on y. Some binary variables can thus be determined; they make up the vector ŷ1. The other, undetermined binary variables form a vector ŷ2 that will be branched. The dimensions of ŷ1 and ŷ2 are denoted by Dŷ1 and Dŷ2, respectively. The elements of ŷ2 are then sequentially enumerated and fixed at 1 or 0, which divides the original problem into lower dimensional subproblems. For each subproblem, the number of required scenarios becomes much smaller and its optimal solution can be found faster. The procedure is outlined below:

output: solution x∗, y∗ and the minimal value of the objective function
Initialize: Determine N; solve (BT) to determine ŷ1 and ŷ2; fix the value of ŷ1;
    push the two root branches into the queue Π = {[[1, 0, ..., 0], [1, 1, ..., 1], 1], [[0, 0, ..., 0], [0, 1, ..., 1], 1]}; n = 0;
while Πn ≠ ∅ do
    n ← n + 1; [lower bound of ŷ2, upper bound of ŷ2, Mn] ← Πn; Dy = Dŷ2 − Mn;
    Determine N according to Theorem 1;
    for i ← 1 to R do
        Generate N samples;
        Use Algorithm 1 to solve problem (PNC)i and obtain x∗ and y∗;
        If (PNC)i is feasible and its objective value improves the incumbent, update the incumbent solution;
    end
    if Mn < Dŷ2 then
        append the two child branches [ŷ2,1, ..., ŷ2,Mn, 1, 0, ..., 0]-[ŷ2,1, ..., ŷ2,Mn, 1, 1, ..., 1] and
        [ŷ2,1, ..., ŷ2,Mn, 0, 0, ..., 0]-[ŷ2,1, ..., ŷ2,Mn, 0, 1, ..., 1], each with index Mn + 1, to Π;
    end
end
Algorithm 2: Branching-and-Sampling.

Here R is a pre-specified parameter representing the number of re-samplings; (PNC)i is the formulation of (PNC) constructed from the ith re-sampling; Π is a queue storing triples of the form [lower bound of ŷ2, upper bound of ŷ2, index of the branched component of ŷ2]; and Πn is the nth element of the queue Π. During branching, the queue Π continuously incorporates new elements, and the algorithm does not terminate until all elements of Π have been enumerated. Mn represents the highest index of the branched components of ŷ2, and Dy is updated at each iteration to represent the dimension of the undetermined y.

It is worthwhile to note that as more binary variables of ŷ2 are fixed at 1 or 0, the required number of scenarios according to Theorem 1 becomes smaller while probabilistic feasibility is still preserved. However, this does not imply that Algorithm 2 always yields a better solution, because the re-sampling of θ is fully randomized. Branching ŷ2 leads to several subproblems, and at least one of them contains the optimal solution of (CCP). Some subproblems may not be feasible for any re-sampled scenario set, and such a subproblem is not branched further.

Although bound tightening is only used to determine ŷ1 and ŷ2 at the beginning of the algorithm, it can be triggered during the solution procedure if a much better solution (a new UB) is found. Once the queue Π becomes empty, all subproblems have been enumerated and the algorithm terminates.

4.5. Branching-and-discarding

In this section, a branching-and-discarding algorithm for the chance-constrained 0–1 program is developed based on Theorem 1. The first inequality in Theorem 1 is used to decide the required number of scenarios N, and the resulting formulation (SP) is solved by Algorithm 1. (BT) is still used to determine the fixed vector ŷ1 and the undetermined vector ŷ2. By branching on the binary variables in ŷ2, the number of unknown decision variables becomes smaller and many scenarios can be discarded while probabilistic feasibility remains intact. The second inequality is thereby used to determine the number k of removed scenarios. Solving the 0–1 program with N − k scenarios always yields a better or equal solution compared with the original N-scenario (SP). The challenge is how to select the k scenarios among the N. Since we consider scenarios rather than individual constraints in this scheme, (PNC) is rewritten in the following form:

min_{x∈[x̲,x̄], y∈{0,1}^{Dy}}  c1^T x + c2^T y                    (PNS)
s.t.  f1(x, y) = 0,
      f2(x, y) ⩽ 0,
      g(x, y, θj) ⩽ 0,  ∀j ∈ ∆,

where ∆ ⊆ {1, 2, ..., N}. We propose a new algorithm that uses the sequential optimization method to find the support scenarios and the optimal solution of (PNS). All these support scenarios are then eliminated from the set ∆. This procedure is repeated until the number of removed scenarios reaches the threshold k determined by Theorem 1. By removing scenarios together with branching on the binary variables, we can continuously improve the solution. The algorithm is shown below (Algorithm 3). In the algorithm, Υ represents the set of removed scenarios, and |Υ| is its cardinality.

Through the sequential optimization method, a set of support constraints is identified that determines the optimal solution. Removing the scenarios associated with those constraints leads to a better solution. However, the allowable number of removed scenarios is restricted by the second inequality in Theorem 1, which shows that as the number of decision variables decreases, k can be increased. This fact implies that branching on y enables us to eliminate more scenarios and thereby achieve a better solution. Algorithm 3 terminates when all elements of ŷ2 have been branched and the associated subproblems solved. Here sampling, solving and removing are interleaved procedures that relax highly restrictive scenarios; hence, the algorithm obtains a better solution than merely sampling N − k scenarios. Algorithm 3 is heuristic, and it may not provide an optimal choice of the k scenarios. Ref. [39] develops a scheme for finding the globally optimal solution of chance-constrained programming with finitely many scenarios by introducing a binary variable for each scenario; however, that method may introduce too much computational work for a nonconvex program with a large number of scenarios. Finally, Algorithm 3 can also be used for a general optimization program by discarding k scenarios without branching, and the solution is still improved.

5. Computational studies

In this section, the proposed algorithms are compared with the state-of-the-art global optimization solver BARON. The problems are solved on an Intel Core i7-7500U CPU at 2.70 GHz with 8 GB memory. All algorithms are implemented in GAMS 24.9.1 with BARON 17.8.7.


output: solution x∗, y∗ and the minimal value of the objective function
Initialize: Determine N; solve (BT) to determine ŷ1 and ŷ2; fix the value of ŷ1;
    push the two root branches into the queue Π = {[[1, 0, ..., 0], [1, 1, ..., 1], 1], [[0, 0, ..., 0], [0, 1, ..., 1], 1]}; n = 0;
while Πn ≠ ∅ do
    n ← n + 1; [lower bound of ŷ2, upper bound of ŷ2, Mn] ← Πn; Dy = Dŷ2 − Mn;
    Determine k according to Theorem 1; Υ = ∅;
    while |Υ| < k do
        ∆ = ∅;
        do
            if (PNS) is solved successfully with solution x∗, y∗ then
                for j ← 1 to N and j ∉ Υ do
                    vj = g(x∗, y∗, θj);
                end
                if max_{i, j∉Υ} {vj,i} > 0 then
                    j′ = arg max_{j∉Υ} {vj,i}; ∆ = ∆ ∪ j′;
                end
            end
        while max_{i, j∉Υ} {vj,i} > 0 and (PNS) is feasible;
        if (PNS) is feasible then Υ ← Υ ∪ ∆; else break;
    end
    if Mn < Dŷ2 and (PNS) is feasible then
        append the two child branches [ŷ2,1, ..., ŷ2,Mn, 1, 0, ..., 0]-[ŷ2,1, ..., ŷ2,Mn, 1, 1, ..., 1] and
        [ŷ2,1, ..., ŷ2,Mn, 0, 0, ..., 0]-[ŷ2,1, ..., ŷ2,Mn, 0, 1, ..., 1], each with index Mn + 1, to Π;
    end
end
Algorithm 3: Branching-and-Discarding Algorithm.

Table 2
Upper/lower bounds of the output in example 1: the bounds O̅(t) and O̲(t), t = 1, ..., 10, form a symmetric tube that starts at ±9 at t = 1 and tightens toward the setpoint.

Table 3
Optimization results in example 1.

Method                                Solving time (s)    Obj      Violation chance
BARON                                 17.5                26.70    0.26%
Sequential method                     5.8                 26.70    0.26%
Discard k-scenarios (no branching)    241.1               26.16    3.03%

5.1. Example 1

In the first example, a model predictive controller (MPC) is designed for a nonlinear process to track a setpoint under uncertainties. MPC solves a constrained optimization problem online to find the optimal inputs along the control horizon. However, the control law based on the nominal model parameters may not achieve the desired performance when the output is subject to uncertainties. Hence we can use the scenario-based method to restrict the output within a well-designed tube with high probability. Similar work has been done for the constrained linear stochastic system [40]. Different from the tube-based robust nonlinear MPC [41], reachable set analysis for MPC [42], min–max MPC for state/input dependent uncertainties [43], and LMI-based MPC [44], the proposed nonlinear MPC is applicable when the model parameters are subject to unbounded uncertainties. Note that polynomial functions are widely used to approximate nonlinear processes [45]. We consider the following discrete model derived from finite differences:

x1(t + 1) = x1(t) + 0.5(x2(t) + 0.5x1(t)^3 − 1.5x1(t)^2),                    (7)
x2(t + 1) = x2(t) + 0.5u(t),                    (8)
O(t) = θ1 x1(t) + θ2 x2(t)^3,                    (9)

where θ1 ∈ N(2, 0.1) and θ2 ∈ N(1.5, 0.05) are normally distributed random parameters affecting the output O(t). The upper/lower bounds on O(t) shown in Table 2 form a tube to ensure convergence:

O̲(t) ⩽ O(t) ⩽ O̅(t).                    (10)

Due to the unbounded uncertainties, the hard constraint (10) cannot be satisfied. Instead, a probabilistic constraint P(O̲(t) ⩽ O(t) ⩽ O̅(t), t = 1, 2, ..., 10) ⩾ 1 − ϵ with ϵ = 5% is considered. At time instant h, the resulting chance-constrained MPC is:

min_{u(t+h|h), x1(t+h|h), x2(t+h|h), O(t+h|h)}  Σ_{t=1}^{10} [x1(t + h|h)^2 + x2(t + h|h)^2] + Σ_{t=0}^{9} 0.2u(t + h|h)^2                    (MPC)
s.t.  Eqs. (7), (8), (9),
      P(O̲(t + h|h) ⩽ O(t + h|h) ⩽ O̅(t + h|h), t = 1, 2, ..., 10) ⩾ 1 − ϵ,
      u(t + h|h) ∈ [−5, 5],  t = 0, 1, 2, ..., 9.

Here the setpoint is determined by the desired equilibrium {x1(t) = 0, x2(t) = 0, u(t) = 0}. The control and prediction horizons are both set to 10. Even though the objective function in (MPC) is nonlinear, an auxiliary variable ν can be introduced to replace it, which yields the convex constraint (11):

ν ⩾ Σ_{t=1}^{10} [x1(t + h|h)^2 + x2(t + h|h)^2] + Σ_{t=0}^{9} 0.2u(t + h|h)^2.                    (11)

Then the MPC can be rewritten as:

min_{u(t+h|h), x1(t+h|h), x2(t+h|h), ν}  ν                    (LMPC)
s.t.  Eqs. (7), (8), (11),                    (12)
      P(O̲(t + h|h) ⩽ θ1 x1(t + h|h) + θ2 x2(t + h|h)^3 ⩽ O̅(t + h|h), t = 1, ..., 10) ⩾ 1 − ϵ,                    (13)
      u(t + h|h) ∈ [−5, 5],  t = 0, 1, 2, ..., 9,                    (14)

where the output function has been substituted using Eq. (9). There are 31 continuous decision variables (u(t + h|h), x1(t + h|h), x2(t + h|h) and ν), 30 nonconvex/nonlinear structures (x1(t + h|h)^3, x2(t + h|h)^3 and x2(t + h|h)^2), and 20 equalities (Eqs. (7), (8)). Thus Dx + Dy + Dz − Df1 = 41 and the required number of scenarios is N = 1661.


Both the sequential optimization method and BARON with the full-scenario formulation are used to compute the open-loop control sequence u(h|h), u(1 + h|h), ..., u(9 + h|h) at time instant h. The solution procedure terminates when the relative gap reaches the threshold γ = 0.01%. Table 3 reports the computational times and objective values. Both methods obtain the same solution, but the proposed sequential optimization method is much faster than BARON with the full-scenario formulation; in fact, the globally optimal solution is found after three iterations of the sequential method. This advantage is very important for MPC, since the online computational time is limited, especially for fast processes. The solution is empirically verified via 10000 new scenarios sampled from the same distribution, and its violation chance is far less than ϵ = 5%, see Fig. 1.

Fig. 1. The output of all 10000 new tests based on the solution of sampling N scenarios (Algorithm 1). Dashed line is the upper constraint.

Even though there is no binary variable, so the branching method cannot be used, Theorem 1 is still applicable to compute the number of removed scenarios k. Given 40 variables and N = 1661, k is 64. Algorithm 3 is then executed, but without branching. The solution at each iteration is shown in Fig. 2. The sequential method is used to find the support scenarios/constraints and the associated optimal solution; these scenarios/constraints are then discarded, and the sequential method restarts on the refined scenario set until the total number of discarded scenarios reaches 64. During this process, the optimal objective value (minimization) decreases and finally reaches 26.16, as shown in Table 3. This solution is empirically verified with 10000 new scenarios, and the violation chance is shown in Fig. 3. The violation probability increases but is still less than 5%. Finally, the resulting state evolutions along the control horizon are shown in Fig. 4.

Remark. Although the computational time of Algorithm 3 is much longer than that of Algorithm 1, it can find a less conservative solution by refining the scenarios and re-optimizing the problem. This feature is advantageous when performance is more important than computational speed. The purpose of this example is to show the application of the proposed optimization method in process control; how to determine a proper output tube and design a robust MPC that guarantees probabilistic stability will be explored in future work.

Fig. 2. Optimal solution vs. iteration (Algorithm 3, example 1). Each curve represents solutions using the sequential optimization method (Algorithm 1) based on a refined scenario set. The last circle of each curve is the optimal solution associated with that refined scenario set.

Fig. 3. The output of all 10000 new tests based on the solution of sampling N and discarding k scenarios (Algorithm 3). Dashed line is the upper constraint.
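The violation chances in Table 3 and Figs. 1 and 3 can be estimated with a simple Monte Carlo check of constraint (13). The sketch below assumes the predicted trajectories x1(t + h|h), x2(t + h|h) and the tube bounds from Table 2 are supplied as arrays, and it reads the second argument of each normal distribution as a standard deviation.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 10_000                                  # fresh validation scenarios
theta1 = rng.normal(2.0, 0.10, size=M)      # theta_1 ~ N(2, 0.1)
theta2 = rng.normal(1.5, 0.05, size=M)      # theta_2 ~ N(1.5, 0.05)

def tube_violation_chance(x1, x2, o_lo, o_hi):
    # empirical probability that O(t) = theta1*x1(t) + theta2*x2(t)**3 leaves
    # the tube [o_lo(t), o_hi(t)] for at least one t in the horizon
    x1, x2, o_lo, o_hi = map(np.asarray, (x1, x2, o_lo, o_hi))
    O = theta1[:, None] * x1[None, :] + theta2[:, None] * x2[None, :] ** 3
    inside = (O >= o_lo[None, :]) & (O <= o_hi[None, :])
    return 1.0 - inside.all(axis=1).mean()
```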

5.2. Example 2

The second example is a capacity planning problem for multiple products and multiple processes, which has been studied in Refs. [46] and [47] to demonstrate various robust optimization algorithms. There are 5 raw materials and 11 candidate processes to produce 5 products. The superstructure of the process network is shown in Fig. 5. The properties of the inflows to processes 4, 7, 8 and 11 are subject to uncertainty. An important modification of the model is the introduction of bilinear terms to describe the pooling process (blending and splitting), which affects the calculation of quality. The selection of a process is represented by the binary variable y, and thus the two branching-based algorithms can be adopted to improve the solution. The optimization formulation is shown below:

max  Σ_{i=1}^{5} ηi Pi − Σ_{j=1}^{5} αj RMj − Σ_{m=1}^{11} (OCm ISm + DCm Qm + FCm ym)                    (15)
s.t.  OSm = PCm ISm,  ∀m,                    (16)
      Pi ⩽ Di,  ∀i,                    (17)


Table 5
Mean/standard deviation of θ in example 2.

Index                 1      2      3      4       5      6       7    8    9      10
Mean                  15     22     19     20.5    15     19.5    0    0    17     21
Standard deviation    0.45   0.66   0.57   0.615   0.45   0.585   0    0    0.51   0.63

Fig. 4. The states evolution of process (x1 vs. x2 ).

      RMj ⩽ R̅M̅j,  ∀j,                    (18)
      ISm = MIm Qm,  ∀m,                    (19)
      ym Q̲m ⩽ Qm ⩽ ym Q̅m,  ∀m,                    (20)
      RM1 = IS1 + IS2 + IS3,                    (21)
      RM2 = IS5 + IS6,                    (22)
      RM3 + OS1 + OS2 + OS3 + f2 = IS4,                    (23)
      RM4 + OS5 + OS6 = f4 + IS8,                    (24)
      f3 + f4 = IS7,                    (25)
      RM5 = IS9 + IS10,                    (26)
      OS9 + f5 = f1,                    (27)
      OS10 = f5 + P1,                    (28)
      f1 = f2 + IS11,                    (29)
      OS4 = f3 + P3,                    (30)
      P2 = OS11,                    (31)
      P4 = OS7,                    (32)
      P5 = OS8,                    (33)
      y1 + y2 + y3 ⩾ 1,                    (34)
      y5 + y6 ⩾ 1,                    (35)
      θ9 OS9 + θ10 f5 ⩽ Spec11 (OS9 + f5),                    (36)
      (θ1 OS1 + θ2 OS2 + θ3 OS3 + U3 RM3)(OS9 + f5) + f2 (θ9 OS9 + θ10 f5) ⩽ Spec4 (RM3 + OS1 + OS2 + OS3 + f2)(OS9 + f5),                    (37)
      θ5 OS5 + θ6 OS6 + U4 RM4 ⩽ Spec8 (OS5 + OS6 + RM4),                    (38)
      (θ5 OS5 + θ6 OS6 + U4 RM4) f4 + θ4 f3 (OS5 + OS6 + RM4) ⩽ Spec7 (OS5 + OS6 + RM4 + f3),                    (39)
      yi ∈ {0, 1},  ∀i ∈ {1, 2, ..., 11},                    (40)

where OS, IS, f, P, RM, Q, y are decision variables representing outflow, inflow, intermediate material flow, product, raw material, volume, and process selection, and η, α, OC, DC, FC, PC, MI, D, U, Spec are deterministic parameters representing sale price, raw material price, operational cost, capacity cost, process initiation cost, yield constant, flow-to-volume relation, maximum demand, the property of raw materials, and the specification of inflow. R̅M̅ and Q̅ are upper bounds on the available raw materials and capacity, and Q̲ is the minimum inflow volume if a process is selected. Eqs. (36)–(39) are quality constraints, which have to be satisfied with 95% confidence. The values of the deterministic parameters are shown in Table 4. θ1–θ10 are normally distributed uncertain parameters with the means and standard deviations shown in Table 5. Eq. (34) requires that at least one of processes 1, 2, 3 be selected; similarly, Eq. (35) requires that at least one of processes 5 and 6 be chosen.

There are 59 decision variables, 35 equalities, and 16 bilinear terms. Thus Dx + Dy + Dz − Df = 40 and the required number of scenarios is N = 1633. In total, 6582 constraints are generated from the N-scenario-based formulation (SP). The resulting optimization problem is solved by the proposed sequential method and by BARON with the full-scenario formulation to reach the globally optimal solution with a relative gap of γ = 0.01%. The solution times and objective values are presented in Table 6 for comparison. To verify probabilistic feasibility, we draw 50000 samples from the same normal distributions and empirically estimate the chance of constraint violation, also shown in Table 6. The sequential optimization method identifies 8 support constraints within 3 iterations to reach the same globally optimal solution, and is much more efficient than BARON using all scenarios.

From Table 6, the chance of constraint violation for the solution derived from the scenario-based method is much less than ϵ = 5%, which implies that there is still some room to improve it. Thus, the bound tightening scheme is used to determine ŷ1. The results of (BT) show that 5 binary variables can be fixed at 1 or 0. The vector ŷ2, consisting of y1, y2, y3, y5, y6, y9, is then branched. For Algorithm 2, we set R = 5, which means that 5 scenario sets are re-sampled when binary variables are branched. Fig. 6 is a snapshot of the objective value versus iterations when 5 binary variables are fixed as constants; it shows that the re-sampling method cannot monotonically increase the optimal solution for the maximization problem. For Algorithm 3, when the number of scenarios is N = 1633, the maximum k is 98 if all binary variables are fixed, according to Theorem 1. Fig. 7 is a snapshot of the objective value versus iterations when all binary variables are fixed as constants; the scenario set is continuously refined until an optimal solution is found, and therefore the last curve attains the highest optimal value.

Table 7 shows the solution times, chances of violation and optimal results found by the branching-and-sampling and branching-and-discarding approaches. The branching-and-discarding method identifies key scenarios and removes them to lift the objective value deterministically, resulting in a less conservative solution. The branching-and-sampling method, although it requires a much smaller scenario set and less computational time, cannot guarantee its performance due to its randomized nature; if R increases, a better solution may be found, but the computational time will also grow. Both methods improve the objective value, while the associated probabilities of constraint violation also increase. In addition, both Algorithms 2 and 3 are much slower than Algorithm 1, because they need to branch the binary variables and solve new scenario sets repeatedly.

Lastly, Tables 8 and 9 show the outflows OS and inflows IS obtained by the sequential method, branching-and-sampling, and branching-and-discarding. Although all three methods choose the same recipe, including processes 2, 4, 6, 7, 8, 9, 10, 11, to yield the products, their designed flows are slightly different.
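The violation chances reported in Tables 6 and 7 are obtained in the same spirit, by drawing fresh θ samples from the distributions in Table 5 and evaluating the quality constraints (36)–(39) at the candidate flows. The sketch below does this for constraint (36) alone, with Spec11 = 20 taken from Table 4 and OS9 from Table 8; the flow f5 is not tabulated in the paper, so the value used here is a hypothetical placeholder.

```python
import numpy as np

rng = np.random.default_rng(2)
M = 50_000                                   # validation samples, as in the text
theta9 = rng.normal(17.0, 0.51, size=M)      # Table 5: mean 17, std 0.51
theta10 = rng.normal(21.0, 0.63, size=M)     # Table 5: mean 21, std 0.63

Spec11 = 20.0                                # Table 4
OS9 = 2.463                                  # Table 8, sequential method
f5 = 1.0                                     # hypothetical value (not tabulated)

# empirical chance that quality constraint (36) is violated
lhs = theta9 * OS9 + theta10 * f5
violation_36 = float(np.mean(lhs > Spec11 * (OS9 + f5)))
print(f"estimated violation chance of (36): {violation_36:.4%}")
```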


Table 4
Deterministic parameters in example 2 (Spec = 0 means that the specification of the inflow for that process is not considered).

Index    1      2      3      4      5      6      7      8      9      10     11
PC       13     15     17     14     10     15     16     11     13     15     17
MI       18     20     15     20     20     21     15     15     25     15     20
OC       450    375    400    395    415    405    400    400    450    365    400
DC       2500   2500   2500   2500   2500   2500   2500   2500   2500   2500   2500
FC       4000   2500   3500   3000   4500   2500   3000   2200   2800   2700   2500
Spec     0      0      0      20     0      0      20     20     0      0      20
D        28     27     29     28     30     -      -      -      -      -      -
α        200    320    445    455    300    -      -      -      -      -      -
η        600    650    500    400    700    -      -      -      -      -      -
Q̅        3      3      3      3      3      3      3      3      3      3      3
Q̲        0.001  0.001  0.001  0.001  0.001  0.001  0.001  0.001  0.001  0.001  0.001
R̅M̅       33     34     32     35     35     -      -      -      -      -      -

Fig. 5. Superstructure of capacity planning problem.

Table 6
Optimization results in example 2.

Method               Solving time (s)    Obj        Violation chance
BARON                259.7               51978.7    0.086%
Sequential method    2.7                 51978.7    0.086%

Table 7
Optimization results by branching on ŷ2 in example 2.

Method                      Solving time (s)    Obj        Violation chance
Branching-and-sampling      208.6               51993.0    0.2%
Branching-and-discarding    546.0               52092.4    4.02%

Table 8
Outflow OS in example 2 (OSi not listed is zero).

        Sequential method    Branching-and-sampling    Branching-and-discarding
OS2     0.300                0.300                     0.371
OS4     30.134               30.202                    30.247
OS6     0.315                0.315                     0.315
OS7     28.000               28.000                    28.000
OS8     30.000               30.000                    30.000
OS9     2.463                2.252                     1.882
OS10    28.977               29.193                    29.496
OS11    27.000               27.000                    27.000

Fig. 6. Optimal solution vs. iteration (Algorithm 2, example 2). y4 , y7 , y8 , y10 , y11 are fixed. Each curve represents solutions using the sequential optimization method (Algorithm 1) based on a new sampled scenario set. The last circle of each curve is the optimal solution associated with that scenario set.


Fig. 7. Optimal solution vs. iteration (Algorithm 3, example 2). All binary variables are fixed. Each curve represents solutions using the sequential optimization method (Algorithm 1) based on a refined scenario set. The last circle of each curve is the optimal solution associated with that refined scenario set. As more scenarios are removed, the optimal solution becomes higher for a maximization problem.

Table 9
Inflow IS in example 2 (ISi not listed is zero).

        Sequential method    Branching-and-sampling    Branching-and-discarding
IS2     0.020                0.020                     0.025
IS4     2.152                2.157                     2.161
IS6     0.021                0.021                     0.021
IS7     1.750                1.750                     1.750
IS8     2.727                2.727                     2.727
IS9     0.189                0.173                     0.145
IS10    1.932                1.946                     1.966
IS11    1.588                1.588                     1.588

6. Conclusion

This paper presents a scenario-based approach for solving a special type of nonconvex program whose constraint parameters are subject to uncertainties. Bounds on the sample complexity are derived for building a scenario-based approximation to the nonconvex program. A sequential optimization method is proposed to identify the support constraints of the program and find the globally optimal solution efficiently. Moreover, we further develop the branching-and-sampling and branching-and-discarding approaches to improve the objective value for 0–1 programs. Numerical studies show three advantages of the proposed methods. First, the sequential optimization obtains an optimal solution much faster than state-of-the-art software. Second, the proposed sample complexity indeed guarantees the probabilistic feasibility of the solution of the nonconvex program. Third, both branching-and-discarding and branching-and-sampling require fewer samples and thereby improve the optimal solution associated with the scenario-based approximation.

Acknowledgment

The author is grateful for the financial support of the Faculty Startup Fund from California State University Long Beach.

Conflict of interest

There is no conflict of interest.



