Direct zigzag search for discrete multi-objective optimization


Computers & Operations Research 61 (2015) 100–109

Honggang Wang
Department of Industrial & Systems Engineering, Rutgers University, United States

Available online 11 March 2015

Abstract

Multiple objective optimization (MOO) models and solution methods are commonly used for multi-criteria decision making in real-life engineering and management applications. Much research has been conducted for continuous MOO problems, but MOO problems with discrete or mixed integer variables and black-box objective functions arise frequently in practice. For example, in the energy industry, optimal development problems for oil and gas fields, shale gas hydraulic fracturing, and carbon dioxide geologic storage and enhanced oil recovery may involve integer variables (number of wells, well drilling blocks) and continuous variables (e.g. bottom hole pressures, production rates), and the field performance is typically evaluated by black-box reservoir simulation. These discrete or mixed integer MOO (DMOO) problems with black-box objective functions are more challenging and require new MOO solution techniques. We develop a direct zigzag (DZZ) search method by effectively integrating gradient-free direct search and zigzag search for such DMOO problems. Based on three numerical example problems, including a mixed integer MOO problem associated with the optimal development of a carbon dioxide capture and storage (CCS) project, DZZ is demonstrated to be computationally efficient. The numerical results also suggest that DZZ significantly outperforms NSGA-II, a widely used genetic algorithm (GA) method. © 2015 Elsevier Ltd. All rights reserved.

Keywords: Multiple criteria decision making; Pareto optimum; Gradient-free direct search; Black-box simulation; Numerical optimization algorithms; Integer and mixed integer programming

1. Introduction

Decision making problems with multiple criteria arise in various fields of engineering and science [1–6]. It is often not possible for a decision (e.g. an inventory control policy or a production system design plan) to simultaneously optimize all the considered criteria. There is, instead, a set of alternatives that trade off the decision criteria. Multi-criteria decision making problems can typically be analyzed with multi-objective optimization (MOO) models. Researchers and practitioners in mathematics, operations research, engineering, and management science have conducted a tremendous amount of work to develop theories, methodologies, and applications of MOO. Many real-world MOO problems include integer-ordered or discrete decision variables. Such discrete multi-objective optimization (DMOO) is more challenging due to the discontinuity of the objective functions and the discreteness of the optimal solution set. In this work, DMOO problems with discrete decision variables and black-box objective functions are of particular interest. Most existing research in multi-criteria decision making addresses deterministic and continuous MOO problems with closed-form objective functions [7–14]. Current methods for continuous MOO problems fall into two broad categories: (i) population-based stochastic search methods and (ii) iterative pointwise solution search methods. In category (i), many population-based methods for MOO problems are variants of evolutionary algorithms, e.g.

genetic algorithms (GA) [15–20] and particle swarm optimization (PSO) [21,22]. Evolutionary algorithms are widely used, particularly as comparison methods, because they are general and relatively easy to implement. Their slow convergence, when they converge at all, makes them impractical for many real-world MOO problems, where the objective function evaluations may entail intense computing. Much MOO research has gone into the development of category (ii), iterative pointwise solution search algorithms that typically solve a sequence of parameterized single-objective optimization subproblems for a sequence of Pareto optimal solutions [23–25], including the popular weighted sum method [13] and the normal boundary intersection method [11]. Continuation-based methods [26,27,14,28–31] that are capable of tracing a Pareto front (characterized by a continuous manifold) have been developed recently. Other important developments for continuous MOO in category (ii) include zigzag search [32] and other local search based algorithms [33,34]. Refer to [35,36,1] and recent books [9,12,37] for detailed discussions of existing methods, analysis, and applications of continuous MOO. DMOO problems commonly arise in real-life engineering and management projects, but efficient algorithms for discrete and black-box MOO problems are still missing. To fill this important gap and develop practical solution methods, we propose a new DMOO method based on a direct zigzag (DZZ) search approach. The DZZ method searches for Pareto optimal solutions along a zigzag path close to the Pareto front. The local zigzag path is identified based


on a direct search approach (e.g., the Hooke–Jeeves method [38] or pattern search [39]) in which the search procedure only compares function values without computing gradients of the objective functions. Thus, DZZ is general and can be applied to black-box DMOO problems, where the objective functions are evaluated through numerical or simulation processes. DZZ also guarantees local Pareto optimality (discussed in Section 3.4), due to the neighborhood search employed within the direct search procedure. We demonstrate the efficiency of DZZ using three DMOO example problems, including an optimal development case of carbon subsurface sequestration. The optimization performance of DZZ is compared to that of the non-dominated sorting genetic algorithm (NSGA-II) [40], a state-of-the-art method in the family of genetic algorithms. The promising optimization results suggest that DZZ outperforms NSGA-II in terms of optimization efficiency and solution quality. The rest of the paper is organized as follows. Section 2 provides a formal statement of DMOO problems and introduces relevant MOO concepts. Section 3 discusses the details of the DZZ search method and presents an algorithmic implementation of DZZ, specifically for bi-objective optimization problems. In Section 4, we test the proposed DZZ algorithm on three example problems, including two taken from the recent literature and one example case from a carbon geologic sequestration project. Conclusions and future research are discussed in the last section.

2. Problem statement

General mixed integer MOO problems with both integer-valued and continuous variables are formulated as follows:

(P)  min f(x),  x ∈ Z^{d1} × R^{d2},
subject to h_i(x) ≤ 0, i = 1, 2, …, l.

Let X denote the feasible region of (P) and d = d1 + d2. The feasible region X is a set in the d-dimensional mixed integer space that can in general be specified by l inequality constraints h_i(x) ≤ 0, i = 1, 2, …, l; for unconstrained MOO problems, l = 0. These constraint functions are assumed to be deterministic with closed functional forms. Within (P) the objective function f : Z^{d1} × R^{d2} → R^k is defined by an analytic function or implicitly defined by computer simulation processes. In the context of optimal development of a carbon geological sequestration project, for example, a bi-objective f could be the total quantity of carbon dioxide injected into the storage field and the amount of mobile carbon at risk of leakage after a long storage time, say 1000 years. The goal of (P) is to seek local Pareto optimal solutions of the objective function f on X. The definitions of Pareto optimality are provided below.

Definition 1. For solutions x, y ∈ X, x is said to weakly dominate y, denoted x ≽ y, if f_i(x) ≤ f_i(y) for i = 1, 2, …, k. A solution x ∈ X strictly dominates (or simply dominates) y ∈ X, denoted x ≻ y, if x ≽ y and there exists at least one i ∈ {1, 2, …, k} such that f_i(x) < f_i(y).

Definition 2. A solution x ∈ X is globally Pareto optimal (or simply Pareto optimal) if there is no y ∈ X such that y ≻ x. A solution x ∈ X is said to be locally Pareto optimal for a neighborhood N_δ(x) associated with x if there is no y ∈ N_δ(x) ∩ X satisfying y ≻ x. The function values f(X*) for (local) Pareto solutions X* displayed in the objective space form the (local) Pareto front. A useful definition of N_δ is discussed below.

Definition 3. Let x = (x1, x2), x1 ∈ Z^{d1}, x2 ∈ R^{d2}, be a feasible solution in X. The δ-neighborhood N_δ(x) of x includes the feasible points y = (y1, y2) ∈ X such that ‖x1 − y1‖ ≤ 1 and ‖x2 − y2‖ ≤ δ, where y1 ∈ Z^{d1} and y2 ∈ R^{d2}. For general mixed integer MOO problems, i.e. d2 ≠ 0 in (P), Definition 3 defines a neighborhood at x that includes a number of (2d1 or fewer) separate points that are one unit distance away from x and a set of points within a hypercube centered at x of side length δ in R^{d2}. In numerical algorithm design, verifying a local Pareto optimum with the neighborhood N_δ is computationally expensive and error prone. We therefore consider δ-approximate Pareto optimality based on a discrete (finite) neighborhood as stated in Definition 4.

Definition 4. Given δ > 0, a feasible solution x* ∈ X is a δ-approximate local Pareto minimum to (P) if x* ≽ y for any y ∈ S_δ(x*), where S_δ(x*) = {y = (y1, y2) ∈ X : ‖x1* − y1‖ ≤ 1, ‖x2* − y2‖ ≤ δ} consists of 2d or fewer feasible points near x*.
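The dominance relations in Definitions 1 and 2 are straightforward to check computationally. A minimal Python sketch (the function names are ours, not from the paper):

```python
def weakly_dominates(fx, fy):
    """x weakly dominates y (x >= y in the paper's notation): f_i(x) <= f_i(y) for every i."""
    return all(a <= b for a, b in zip(fx, fy))

def dominates(fx, fy):
    """x strictly dominates y: weak dominance plus strict improvement in some objective."""
    return weakly_dominates(fx, fy) and any(a < b for a, b in zip(fx, fy))
```

These predicates operate on objective vectors f(x) and f(y), so they apply equally to closed-form and black-box objectives.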

We develop a new DMOO method, termed DZZ, to quickly identify a set of well-distributed approximate local Pareto optima of (P) based on the above δ-approximate optimum definition. If d2 ≠ 0 in (P), we consider a small enough δ > 0 so that DZZ returns a set of δ-approximate local Pareto optimal solutions to (P). If d2 = 0, then (P) is a DMOO problem with integer decision variables; in this case, a neighborhood definition with δ = 1 is N_1(x) = {y ∈ X : ‖x − y‖ = 1}. For general DMOO problems with the feasible set X ⊂ R^d containing a set of (finite or countably infinite) discrete points, we may define a generic neighborhood structure N(x) that contains the 2d or fewer feasible points closest to x coordinate-wise. Specifically,

N(x) = {(y1, …, yi, …, yd) ∈ X : yi = argmin_{yi} (xi − yi) if xi ≥ yi; xj = yj for all j ≠ i; i = 1, 2, …, d}
     ∪ {(y1, …, yi, …, yd) ∈ X : yi = argmin_{yi} (yi − xi) if xi < yi; xj = yj for all j ≠ i; i = 1, 2, …, d},

where xi and yi are the ith coordinates of x and y, respectively. An alternative neighborhood for DMOO is based on Delaunay

Fig. 1. Example neighborhood structures for mixed integer, 2-dimensional integer, and discrete solution spaces.


triangulation [41], where the close points connected to a given point x via triangulation edges form the neighborhood of x. For example, Fig. 1 (right) shows a neighborhood of 7 points (hollow circles) for a considered point x (black filled circle). Note that any neighborhood definition can be chosen by practitioners based on their application problems. In this paper, we use the neighborhood structures of Fig. 1 for integer, mixed integer, or discrete MOO problems of the form (P); the proposed DZZ is expected to find local Pareto solutions to (P) based on the definition of δ-approximate local Pareto optimum. DZZ is applicable to discrete or mixed integer MOO problems whose Pareto solutions are connected by a user-chosen neighborhood structure.
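As an illustration, the unit-step neighborhood N_1 for the pure integer case and the generic coordinate-wise N(x) for a finite discrete set can be sketched as follows (a simplified sketch; the function names and the feasibility-predicate interface are our assumptions):

```python
def n1_neighbors(x, feasible):
    """N_1(x): feasible points one coordinate step away from x (integer DMOO case)."""
    out = []
    for i in range(len(x)):
        for step in (-1, 1):
            y = list(x)
            y[i] += step
            y = tuple(y)
            if feasible(y):
                out.append(y)
    return out

def generic_neighbors(x, X):
    """Generic N(x) (2d or fewer points): per coordinate and direction, the feasible
    point closest to x that differs from x only in that coordinate."""
    d = len(x)
    out = set()
    for i in range(d):
        same_others = [y for y in X
                       if y != x and all(y[j] == x[j] for j in range(d) if j != i)]
        below = [y for y in same_others if y[i] < x[i]]
        above = [y for y in same_others if y[i] > x[i]]
        if below:
            out.add(max(below, key=lambda y: y[i]))  # closest from below
        if above:
            out.add(min(above, key=lambda y: y[i]))  # closest from above
    return out
```

For example, with the feasibility test `lambda y: all(0 <= c <= 100 for c in y)`, a corner point such as (0, 0) has only two N_1 neighbors, while an interior point has 2d = 4.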

3. DMOO method

3.1. Motivations for developing DMOO algorithms

Many MOO algorithms, including weighted sum [13], continuation methods [42,29], and evolutionary algorithms [28,16], are designed for solving continuous MOO problems. Few general algorithms can be applied directly to DMOO problems. Some evolutionary algorithms, such as GA or particle swarm optimization, can be tailored to DMOO problems by rounding solutions to the closest feasible points in X, but these methods can be slow and may converge to suboptimal solutions, if they converge at all. Even for continuous MOO problems, the expense of computing gradients or Hessian matrices of the objective functions makes many gradient-based MOO algorithms (e.g., continuation methods [30,31]) inefficient, particularly for real-life applications. Therefore, general DMOO methods that do not require computing gradients or Hessians of objective functions are highly desirable for practical DMOO or continuous MOO problems where gradients of the objective functions are computationally expensive or unavailable. The proposed DZZ search method is particularly designed for DMOO or mixed integer MOO problems. DZZ searches for Pareto optimal solutions around the Pareto front using a gradient-free direct search, and is thus suitable for black-box or simulation-based MOO problems. As shown in the numerical studies (Section 4), by combining zigzag and direct search approaches, DZZ provides both the generality and the efficiency required for practical DMOO problems.
We stress that the proposed DZZ fundamentally differs from the continuous zigzag search algorithm developed in [32] for at least two major reasons: (i) DZZ is designed for DMOO problems, requires no gradient evaluations, and its zigzag search routine is completely different from the gradient-based one in [32]; and (ii) DZZ guarantees local Pareto optimality for integer DMOO problems given a neighborhood definition, while the approach in [32] does not guarantee Pareto optimality in general.

3.2. DZZ methodology

The key idea of the DZZ method for DMOO is to search around the Pareto front efficiently by employing local neighborhood search procedures. The efficiency of a zigzag search rests on the expectation that a solution path exists that passes effectively along the Pareto front; identifying such an efficient and tight solution path close to the Pareto front is crucial for the performance of DZZ. Fig. 2(a) illustrates a DZZ search performed on a DMOO minimization problem with two objective functions f1 and f2. DZZ first conducts a direct search (based on Hooke–Jeeves search and neighborhood search) to find the first Pareto optimum point x1.

DZZ then iteratively performs zig and zag searches to find a sequence of Pareto solutions. As shown in Fig. 2(a), for a bi-objective optimization with two discrete variables, a zig search is simply a neighborhood search around the current Pareto solution, say x1, which returns a solution y1 from which a zag search is applied next. In Fig. 2(a), a zag starts from the point y1 obtained from the previous zig search and searches y1's neighborhood points in some order (coordinate-wise or randomly) to find an improving solution with lower f2. In this case, the zag finds the solution y2 with a smaller f2 (improved in f2 with respect to y1) after evaluating the neighborhood solutions. The second zag step searches within the neighborhood of y2 to find the next improving (with respect to y2) solution x2, which is indeed a Pareto solution for this case, as verified by another neighborhood search at x2. Fig. 2(a) shows a DMOO case where the Pareto front consists of nine Pareto optimal points scattered along the Pareto boundary (filled red circles at the lower left of the plot); nevertheless, the DZZ search is able to find all of them through a single optimization procedure. The success of DZZ for this illustrative case is due to the fact that the optimal solutions can be passed by a zigzag solution path identifiable by a sequential neighborhood search procedure. The zigzag iterations in DZZ tend to identify solutions close to the Pareto front and to return Pareto solutions iteratively along the solution path. Next we discuss the DZZ algorithm design and provide a DZZ implementation with algorithm listings.

3.3. DZZ algorithm

A direct local search based DZZ algorithm, specifically for bi-objective optimization problems, is described in Algorithm 1. The algorithm is designed for general mixed integer MOO problems by assigning a tolerance δ > 0 to the continuous variables.
Note that in this work we set the same tolerance δ for all the continuous variables; in general, one may select a vector of tolerances {δ1, δ2, …, δ_{d2}} for the corresponding d2 continuous variables in (P). For DMOO with d2 = 0 in (P), the tolerance parameter is δ = 1.

Algorithm 1. DZZ algorithm.
Require: initial solution x0; tolerance δ; objective function evaluation procedure f = (f1, f2)

Ensure: a set of δ-approximate Pareto optimal solutions {x_k}, k = 1, 2, …
1: Set k = 0
2: Set x_{k+1} = FPS(x_k, f, δ)
3: Set k = k + 1
4: Set x = zig(x_k, f, δ)  {zig search}
5: if f1(x) ≤ f1(x_k) then
6:   Go to 15
7: end if
8: Set x′ = zag(x_k, x, f, δ)  {zag search}
9: if x_k ≻ x′ then
10:   Go to 15
11: else
12:   Set x_{k+1} = x′
13:   Go to 3
14: end if
15: return x1, x2, x3, …

This implementation (Algorithm 1) of the proposed DZZ search method consists of two major parts: a routine FPS (search for First Pareto Solution, line 2) and a loop of alternating zig and zag search


Fig. 2. Direct zigzag search for DMOO problems. (a) DZZ for a DMOO problem and (b) Zig and zag search.

iterations (lines 3 through 14). The FPS module (line 2) is the optimization solver used to find the first Pareto optimum x1 that minimizes f1 and meanwhile attains the smallest possible value of f2 at that minimal f1; see e.g. Fig. 2(a), where FPS finds a solution (say p) that minimizes f1, but there is another solution within p's neighborhood that dominates p, with the same f1 value but a smaller f2. Therefore, after a Hooke–Jeeves search returns a local minimizer p for f1, a subsequent neighborhood search for possibly dominating solutions is still needed. Algorithm 2 presents a gradient-free FPS routine that combines Hooke–Jeeves direct search (HJDS) and a loop of local neighborhood searches for the first Pareto optimum. Note that we use S_δ to denote the discrete neighborhood (Definition 4) for δ-approximate Pareto optimality (defined in Section 2).

Algorithm 2. FPS: find the first Pareto solution.
Require: initial solution x0; objective functions f = (f1, f2); tolerance δ
Ensure: a δ-approximate Pareto solution x1 that minimizes f1 in X
1: Set x1 = HJDS(x0, f1, δ)
2: Set bCont = 1
3: while bCont do
4:   Set bCont = 0
5:   for x2 ∈ S_δ(x1) do
6:     if x2 ≻ x1 then
7:       Set x1 = x2
8:       Set bCont = 1
9:       Go to 3
10:    end if
11:  end for
12: end while
13: return x1
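The FPS logic can be sketched in Python for a two-variable integer problem. This is our simplified sketch, not the paper's implementation: a greedy neighborhood descent on f1 stands in for the Hooke–Jeeves search, followed by the dominance-polishing loop of Algorithm 2; `f_case1` is a toy objective modeled on Case 1 of Section 4, and all names are ours.

```python
import math

def weakly_dominates(fx, fy):
    return all(a <= b for a, b in zip(fx, fy))

def dominates(fx, fy):
    return weakly_dominates(fx, fy) and any(a < b for a, b in zip(fx, fy))

def neighbors(x, lo=0, hi=100):
    """Unit-step integer neighborhood N_1(x) within box bounds."""
    out = []
    for i in range(len(x)):
        for s in (-1, 1):
            y = list(x)
            y[i] += s
            if lo <= y[i] <= hi:
                out.append(tuple(y))
    return out

def fps(x0, f):
    """Find the first Pareto solution: descend on f1 (stand-in for HJDS),
    then replace the incumbent whenever a neighbor dominates it (Algorithm 2 loop)."""
    x = x0
    improved = True
    while improved:                      # greedy descent on f1
        improved = False
        for y in neighbors(x):
            if f(y)[0] < f(x)[0]:
                x, improved = y, True
                break
    improved = True
    while improved:                      # dominance polishing
        improved = False
        for y in neighbors(x):
            if dominates(f(y), f(x)):
                x, improved = y, True
                break
    return x

def f_case1(x):
    """Toy bi-objective modeled on Case 1; its Pareto set is the diagonal x1 == x2."""
    f1 = x[0] / 100
    g = 1 + 9 * ((x[1] - x[0]) / 100) ** 2
    return (f1, g * (1 - math.sqrt(f1 / g)))
```

Starting from (50, 80), the f1 descent drives x1 to 0 and the polishing loop then slides x2 down to the first Pareto solution (0, 0) with (f1, f2) = (0, 1), matching the FPS result reported for Case 1 in Section 4.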

The zig search in Algorithm 1 is a straightforward neighborhood search for a solution y ∈ S_δ(x) such that f1(y) > f1(x); if there is no such y ∈ S_δ(x), DZZ terminates. Whenever a zig search finds a solution y, a subsequent zag search starts at y. The zag search is also based on neighborhood search but may run multiple neighborhood-search iterations until a non-dominated solution is found. The number of iterations in a zag search depends on how far the candidate solution is from the Pareto front. A typical zag search has few neighborhood iterations because the zig-returned solution y lies within the current Pareto solution x's neighborhood, and thus is expectedly close to the Pareto front and likely close to the next Pareto optimum. Algorithm 3 presents the logic of the zag search procedure called in Algorithm 1. To illustrate the basic idea of zag search, Fig. 2(b) shows a bi-objective optimization problem with two integer variables, i.e. d1 = 2 and d2 = 0. As seen in Fig. 2(b), a zag search starts at x0, the solution obtained from the most recent zig search. The first neighborhood search in this case evaluates 2 more solutions (x0^1, x0^2), where the superscript denotes the order in which the points are evaluated. This zag step returns x0^2 (setting x1 = x0^2) for the next neighborhood search, because x0^2 ≻ x0. As listed at line 7 of Algorithm 3, zag checks Pareto dominance of the solutions within the neighborhood with respect to the current candidate solution, in this case x0. Line 7 also ensures that f1(x0^2) > f1(x); this prevents the zag search from a potential infinite repetition. The second neighborhood search of this zag then iteratively evaluates the solutions within x1's neighborhood. Here zag cannot find any solution that dominates x1 and thus evaluates all four neighboring solutions (including the already evaluated solution x0). In this case, zag finds two promising (lower f2) solutions x1^2 and x1^3, and returns x1^2 simply because f1(x1^2) < f1(x1^3); this treatment likely avoids skipping Pareto solutions along the zigzag solution path. A neighborhood search continues at x2 = x1^2, and the search cannot find any more locally non-dominated solutions with respect to x1 (not shown in Fig. 2(b)); thus this zag search returns a new Pareto solution x2. As zag search employs neighborhood search, which may evaluate up to 2d solutions, this strategy might be inefficient if the dimension d of the DMOO problem is large. But in many cases zag search is still efficient because the solution path is likely to stay close to the Pareto front, so the number of neighborhood searches remains small. The numerical experiments demonstrate that the zag search is indeed efficient for DMOO problems with moderately large d (see the carbon sequestration example in Section 4, where d = 25).

Algorithm 3. Zag search.

Require: current Pareto solution x; initial guess x0; tolerance δ; objective functions f = (f1, f2)
Ensure: δ-approximate Pareto optimum x*
1: Set k = 0
2: Set bCont = 1
3: while bCont do
4:   Set bCont = 0
5:   Set xList = {}  {empty solution list}
6:   for x_{k+1} ∈ S_δ(x_k) do
7:     if x_{k+1} ≽ x_k and f1(x_{k+1}) > f1(x) then
8:       Set bCont = 1
9:       Set k = k + 1
10:      Go to 3
11:    else if f2(x_{k+1}) < f2(x_k) and f1(x_{k+1}) > f1(x) then
12:      Append x_{k+1} to xList
13:      Set bCont = 1
14:    end if
15:  end for
16:  if bCont then
17:    Set x_{k+1} = arg min_{x′ ∈ xList} f1(x′)
18:    Set k = k + 1
19:  end if
20: end while
21: return x* = x_k
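To make the zig–zag interplay concrete, the following is a simplified, self-contained Python sketch (ours, not the paper's exact listing) for a Case-1-style toy problem: starting from a known first Pareto solution, a zig step moves to a neighbor with larger f1, and a zag step then descends by strict dominance while keeping f1 above that of the last Pareto solution. All names (`dzz_sweep`, `f_case1`, the 0–100 bounds) are our assumptions.

```python
import math

def dominates(fx, fy):
    """Strict Pareto dominance (Definition 1)."""
    return all(a <= b for a, b in zip(fx, fy)) and any(a < b for a, b in zip(fx, fy))

def neighbors(x, lo=0, hi=100):
    """Unit-step integer neighborhood N_1(x) within box bounds."""
    out = []
    for i in range(len(x)):
        for s in (-1, 1):
            y = list(x)
            y[i] += s
            if lo <= y[i] <= hi:
                out.append(tuple(y))
    return out

def dzz_sweep(x1, f):
    """Zigzag sweep from a known first Pareto solution x1 (found by FPS in the paper)."""
    front = [x1]
    x = x1
    while True:
        zig = [y for y in neighbors(x) if f(y)[0] > f(x)[0]]   # zig: move up in f1
        if not zig:
            return front                                       # no way forward: stop
        y = min(zig, key=lambda z: f(z)[1])                    # most promising zig point
        moved = True
        while moved:                                           # zag: dominance descent,
            moved = False                                      # keeping f1 above f1(x)
            for z in neighbors(y):
                if f(z)[0] > f(x)[0] and dominates(f(z), f(y)):
                    y, moved = z, True
                    break
        if dominates(f(x), f(y)):
            return front                                       # zag fell back: stop
        x = y
        front.append(x)

def f_case1(x):
    """Toy bi-objective modeled on Case 1; its Pareto set is the diagonal x1 == x2."""
    f1 = x[0] / 100
    g = 1 + 9 * ((x[1] - x[0]) / 100) ** 2
    return (f1, g * (1 - math.sqrt(f1 / g)))
```

On this toy problem, `dzz_sweep((0, 0), f_case1)` walks the diagonal and recovers all 101 Pareto points, mirroring the Case 1 behavior reported in Section 4.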

3.4. Convergence of DZZ algorithm

We briefly discuss the convergence of the DZZ algorithm here; a rigorous and detailed convergence analysis of DZZ is left for future work. From the listing of Algorithm 1, if a zig search at x cannot find a solution x′ in x's neighborhood such that f1(x′) > f1(x), DZZ terminates. Based on Algorithm 3, if a zag search at x′ cannot find an improving solution x″ such that f1(x″) > f1(x) and f2(x″) < f2(x′), DZZ terminates as well. Therefore, the success of DZZ in returning all the Pareto solutions relies on the neighborhood connectedness of solutions. If, for a DMOO problem, the Pareto solutions are connected by a unique neighborhood-linked solution path, then DZZ is guaranteed to find all the Pareto solutions. If the Pareto solutions are connected through multiple neighborhood-connected paths, then some or all of the Pareto solutions can be found by DZZ, depending on which solution path DZZ follows. For continuous MOO, there is much work based on continuation theory [43] studying the continuity of the Pareto front; for DMOO, studying the connectedness of the discrete Pareto front is much more challenging and will be one focus of our future research. For (P) with d2 = 0, DZZ finds local Pareto solutions based on the N_1 neighborhood structure. For d2 ≠ 0, the proposed implementation of DZZ (Algorithm 1) returns a set of δ-approximate Pareto solutions, given a predefined tolerance δ > 0. If we apply DZZ repeatedly with a sequence δ_k → 0, we expect to obtain a set of true local Pareto solutions in X* asymptotically as k → ∞. Specifically, at each iteration k, DZZ finds the δ_k-approximate solution set X*_k, and lim_{k→∞} X*_k ⊆ X* as δ_k → 0. How to choose such a tolerance sequence δ_k, and how to use X*_k to solve for X*_{k+1} at the (k+1)th DZZ iteration, are important research questions for our future work.

4. Numerical experiments

Three bi-objective optimization problems are used to demonstrate the efficiency of the DZZ algorithm described in Section 3. The first two are integer MOO problems with closed-form objective functions; their Pareto solutions lie on a convex and a concave curve, respectively, in the objective function space. In particular, the second example DMOO problem has its Pareto solutions lying in a deep valley (in the objective function space), and the continuous counterpart of this case is considered difficult for evolutionary algorithms [15,32]. The third example is a mixed integer MOO problem associated with the optimal development of a carbon subsurface sequestration project: we need to find the best injection sites and injection rates so that a maximal

amount of carbon dioxide will be stored and the probability of carbon leakage is minimized. This problem is challenging because the objective functions (storage volume and leakage risk) can only be evaluated through black-box reservoir simulation, which is often computationally expensive (in practice, hours or even days for a single solution evaluation). Such objective functions have no analytical form and are highly nonlinear. There are two major goals for MOO: (1) correct convergence to the Pareto optima and (2) large coverage and even distribution over the Pareto front [16]. For correct convergence, an average deviation metric for convergence error (CE, called generational distance (GD) in [16]) can be used to represent how far the returned solutions are from the true Pareto front: CE = sqrt(Σ_{i=1}^{n} Δ_i² / n), where n is the total number of solutions returned by the optimization algorithm and Δ_i is the distance of the ith solution to the Pareto optimal set in the image domain. If x_i is the ith numerical solution obtained from an MOO algorithm and Y* is the Pareto front, then Δ_i = inf_{y ∈ Y*} ‖f(x_i) − y‖. A larger CE thus indicates worse convergence of the applied algorithm; CE = 0 verifies that all the returned solutions are in the Pareto optimal set X*. A zero CE does not necessarily guarantee the best performance of an MOO algorithm: an algorithm may return only one Pareto solution in X*, giving CE = 0, yet the result is not acceptable if X* contains many Pareto solutions. Therefore, we also need to consider the coverage of the obtained solutions over X*. For solution coverage, if the optimal solution set X* of (P) is finite, i.e. |X*| < ∞, where |·| denotes the cardinality of a set, we can define the coverage metric SC = |X ∩ X*| / |X*|, where X is the solution set obtained from an optimization run.
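The CE and SC measures can be computed with a few lines of Python (a sketch; the function names are ours, and for simplicity the true front is given as a finite list of objective-space points):

```python
import math

def convergence_error(solutions, pareto_front, f):
    """CE (generational distance): RMS distance of the returned solutions' images
    to the true Pareto front in objective space."""
    def dist(x):
        fx = f(x)
        return min(math.dist(fx, y) for y in pareto_front)
    n = len(solutions)
    return math.sqrt(sum(dist(x) ** 2 for x in solutions) / n)

def solution_coverage(solutions, pareto_set):
    """SC = |X ∩ X*| / |X*| for a finite Pareto optimal set."""
    return len(set(solutions) & set(pareto_set)) / len(set(pareto_set))
```

A run that returns every point of a finite X* scores CE = 0 and SC = 1; returning a single true Pareto point still gives CE = 0 but a small SC, which is why both measures are needed.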
If |X*| = ∞, we may use a metric to represent the solution distribution [1,25]: SD = sqrt((1/(n−2)) Σ_{i=1}^{n−1} ‖s̄ − s_i‖²), where n is the total number of solutions returned from the optimization, s_i = f(x_i*) is the ith returned optimal solution value, and s̄ is the mean of all returned objective function values. Notice that the objective function values s_i are sorted with respect to f1, i.e. s_1(1) ≤ s_2(1) ≤ … ≤ s_n(1). A smaller SD indicates that the returned solutions are more evenly distributed in the image domain of f. We use the two measures CE and SC described above to quantify the performance of DZZ on the selected problems, where |X*| < ∞ for all three example problems. The comparison of DZZ with NSGA-II in this work is made based on these measures to quantify solution quality and optimization efficiency. We choose NSGA-II for comparison for two main reasons: NSGA-II is general and widely used in real applications, and there are only very few methods suitable for DMOO problems. In [32], a continuous zigzag search algorithm is compared with several other MOO methods including PAWS [25] and the normal-boundary intersection method [11]; the results there show that the zigzag search approach is more efficient than those recently developed methods for continuous MOO problems.

4.1. Case 1: convex type Pareto front

This example is a DMOO modification, with 2 integer variables, of test problem 1 in [19]. The two objective functions are f1 = x1/100 and f2 = g·(1 − sqrt(f1/g)), where g = 1 + 9·((x2 − x1)/100)². The feasible solution set consists of the integer points within the square [0, 100] × [0, 100], i.e., X = {0, 1, 2, …, 100} × {0, 1, 2, …, 100}.
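For reference, the Case 1 objectives can be coded directly (a sketch; the function name is ours):

```python
import math

def f_case1(x):
    """Case 1 objectives: f1 = x1/100, f2 = g*(1 - sqrt(f1/g)),
    with g = 1 + 9*((x2 - x1)/100)**2."""
    f1 = x[0] / 100
    g = 1 + 9 * ((x[1] - x[0]) / 100) ** 2
    return (f1, g * (1 - math.sqrt(f1 / g)))
```

On the diagonal x1 = x2 we have g = 1 and f2 = 1 − sqrt(f1), which traces the convex Pareto front; moving x2 away from x1 increases g and lifts the point above the front.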
For this case, the Pareto optimal solutions are X* = {(x1, x2) : x1 = x2, x2 = 0, 1, 2, …, 100} (see Fig. 4). We apply DZZ search to this example problem. Fig. 3 shows the progression of the DZZ search displayed in the 2-dimensional objective function space. The initial solution x0 is chosen randomly from the feasible region X. The FPS procedure in DZZ evaluates about 200 solutions to find the first Pareto solution x1 = (0, 0) with (f1, f2) = (0, 1), where f1 is minimized to 0. DZZ continues the zigzag direct search from x1. For this case, the zigzag search evaluates about 400 solutions in X to identify all 101 Pareto solutions in X*. As viewed in the objective function space in Fig. 3, the zigzag solution path is very close to the Pareto front. Fig. 4 plots the 599 solutions evaluated in the course of optimization in the 2-dimensional solution space. The solution path verifies that DZZ searches solutions within neighborhoods iteratively, and DZZ returns the entire Pareto solution set X* correctly. Therefore, DZZ solves this DMOO problem with convergence error CE = 0 and solution coverage SC = 100%.

Fig. 3. Performance of DZZ for Case 1 with convex type Pareto front.

Fig. 4. DZZ identified solution path for Case 1.

To compare, we apply NSGA-II to this case. The population size of NSGA-II is 20 and 70 generations are evaluated. The crossover probability is 90% and the mutation probability is 10%. Fig. 5 shows the performance of the NSGA-II optimization run. In the objective function space, we see that many evaluated solutions are dominated and the returned solutions do not cover the whole Pareto front. Based on the results from this NSGA-II run, 80 non-dominated solutions with respect to all 1400 evaluated solutions are obtained, and some of them are not truly Pareto optimal. Compared to DZZ, the results from this case clearly suggest that NSGA-II finds Pareto solutions more slowly, with lower quality and less coverage. Therefore, DZZ is more efficient for this example problem.

Fig. 5. Performance of NSGA-II for Case 1 with convex type Pareto front.

4.2. Case 2: "deep valley" concave type Pareto front

The second test problem is prepared based on the example problem studied in [15,32]. This is a bi-objective optimization problem with two integer decision variables. The two objective functions are f1 = x1/100 and f2 = g·[1 − (f1/g)²], where g(x) = 1 + (x2/100)^{1/4}. The feasible set X is again the set of integer points within the square [0, 100] × [0, 100], i.e., X = {0, 1, 2, …, 100} × {0, 1, 2, …, 100}, containing 10,201 integer solutions. For this case, the Pareto optimum set X* = {(x1, x2) : x2 = 0, x1 = 0, 1, 2, …, 100} has 101 Pareto solutions, and the Pareto front is located at the bottom of a deep valley below the dominated regions. As a result, the dominated solution region is much "denser" than the Pareto optimal region in the function space: if we randomly sample and evaluate solutions in X, the points in the objective function space will be mostly concentrated in regions far from the Pareto front. As shown in [13,15], population-based algorithms, such as the GA-based MOO algorithm of [15], are not efficient for this type of problem. Fig. 6 shows the progress of a DZZ optimization run for this example problem. The initial solution is randomly picked in X, and FPS finds the first Pareto optimum x1 = (0, 0) after about 250 solution evaluations. From Fig. 6, we see that the zig search finds solutions that are in the neighborhood (one unit distance away from the current Pareto solution) but jump away from the Pareto front in the objective function space; this reflects the lower density of solutions close to the Pareto front. The zigzag search in DZZ evaluates about 250 solutions and returns all 101 Pareto optimal solutions in X*, which are located on the concave-shaped Pareto front in the objective space. Fig. 7 shows the DZZ-generated solution path viewed in the solution space. We see that FPS quickly finds the first Pareto optimizer after a Hooke–Jeeves search and a number of neighborhood searches.
Then zigzag search returns Pareto solutions iteratively from the optimal set X n ¼ fð0; x0 Þ : x0 ¼ 0; 1; 2; …; 100g in an increasing order of x0 .
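Because the objectives of this test problem have closed forms, the claimed Pareto set can be verified by brute-force enumeration of all 10,201 feasible points. The sketch below (our own verification code, not part of DZZ) uses a standard sorted sweep for two minimized objectives:

```python
# Brute-force verification of the Pareto set for the second test problem.

def f(x1, x2):
    """Objective vector (f1, f2), both minimized."""
    f1 = x1 / 100.0
    g = 1.0 + (x2 / 100.0) ** 0.25
    f2 = g * (1.0 - (f1 / g) ** 2)
    return f1, f2

# Evaluate every feasible integer point and sort by (f1, f2).
points = sorted((f(x1, x2), (x1, x2))
                for x1 in range(101) for x2 in range(101))

# Sweep: a point is Pareto optimal iff its f2 is strictly below every
# f2 seen so far (all earlier points have smaller-or-equal f1).
pareto, best_f2 = [], float("inf")
for (f1, f2), x in points:
    if f2 < best_f2:
        pareto.append(x)
        best_f2 = f2

print(len(pareto))                        # 101
print(all(x2 == 0 for _, x2 in pareto))   # True
```

The enumeration confirms exactly the 101 solutions on the x2 = 0 edge; any point with x2 > 0 has g > 1 and hence a strictly larger f2 at the same f1, which is the "deep valley" structure noted above.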


H. Wang / Computers & Operations Research 61 (2015) 100–109

Fig. 6. Performance of DZZ for Case 2 with jumping Pareto front.

Fig. 8. Performance of NSGA-II for Case 2 with jumping concave Pareto front.

Fig. 7. DZZ identified solution path for Case 2.

Therefore, for this DMOO case, DZZ efficiently solves the problem with zero convergence error and 100% Pareto optimum coverage.

We apply NSGA-II to this case for comparison. The population size is 30 and 60 generations are evaluated. The crossover and mutation probabilities are 90% and 10%, respectively. Fig. 8 shows the results of this NSGA-II run. NSGA-II evaluates 1800 solutions that remain noticeably far from the true Pareto front. None of the 282 returned non-dominated solutions is truly Pareto optimal. Thus, NSGA-II fails to find any solutions that are truly in X* or good approximate solutions close to X*. Based on this performance comparison, DZZ outperforms NSGA-II for this case.

4.3. Case 3: carbon dioxide sequestration in a subsurface saline reservoir

We test the efficiency of DZZ on the DMOO problem associated with a carbon capture and storage (CCS) project. The selected storage reservoir is a deep (8300 ft) saline aquifer with dimensions 33,000 × 33,000 × 320 ft (length × width × depth). The discrete reservoir model for this case contains 23 × 23 × 8 blocks, thus a total of 4232 blocks, with each block of dimensions about 1435 × 1435 × 40 ft. The reservoir permeability and porosity, together with other properties, are heterogeneous across the formation (Fig. 9). The average permeability of the reservoir is

Fig. 9. The permeability field (log10 mD) of the storage aquifer.

estimated to be 67 millidarcy (mD), and the vertical permeability anisotropy (the ratio of vertical to horizontal permeability) is assumed to be 0.1. The average rock porosity is estimated to be 0.3 for the storage reservoir. The average rock compressibility is approximately 3 × 10⁻⁶ /psi. The initial conditions for the reservoir are specified as follows. The field initially has an average pressure of 2500 psi and a temperature of 103.7 °F. We consider three major components in the reservoir: CO2, H2O, and NaCl. The initial concentrations of the components are 0% CO2, 99% H2O, and 1% NaCl. After injection begins, the concentration of CO2 increases steadily until the end of injection.

This new CCS project plans to inject 2–8 million tonnes (MT) of CO2 per year, which is about the annual emissions from a 300–1200 megawatt (MW) power plant. The injection continues for the first 30 years, resulting in a total of 60–240 MT of CO2 sequestered into this reservoir. The total injected CO2 under subsurface conditions corresponds to 1–4% of the pore volume of the reservoir, which is within the appropriate range suggested by the Intergovernmental Panel on Climate Change [44].

For this CCS project, it is determined to drill four horizontal injection wells. For the "optimal" (in terms of storage volume and security, discussed later) field development plan, we seek the best locations of these four wells and the optimal pumping rates


over the first 30 years. It is practical to drill and complete the wells in the bottom layers so that the carbon can be stored more securely. For this case, we assume all injection wells are completed in the bottom layer and the orientation of well drilling is horizontal, in either the x- or y-axis direction. Because wells are prescribed to start and end at the centers of blocks in reservoir simulation, well locations must be specified by integer-valued variables representing drilling blocks. Accordingly, the well length can also be specified with an integer value representing the number of blocks that a well crosses. Therefore, the integer decision variables associated with each well include two block location variables (x, y) for the starting block and the well length variable (l) with a predefined orientation (x- or y-axis direction, see Fig. 11).

We consider the annual injection quantity q as a variable in this study. Once it is determined, q is constant through the injection years. The continuous well control variables (in this case the CO2 injection rates) associated with each well are defined over four time intervals; that is, for this problem the pumping rates change every 7.5 years. Given a fixed q, there are three independent control variables for each control period (7.5 years). For example, suppose q = 5 MT: if wells 1, 2, and 3 inject 20%, 30%, and 40% of the total 5 MT of CO2 annually during the first 7.5 years, respectively, then well 4 injects the remaining 10% of 5 MT each year during that interval. Thus this problem has 13 continuous variables, including the annual injection quantity q and 12 injection rate variables. With this setting, the optimization problem associated with the CCS project is a mixed integer simulation optimization problem with a total of 25 mixed-type decision variables.
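To make the variable count concrete, the 25-variable mixed-integer decision vector can be sketched as follows (the class and field names are our own illustration; the paper does not specify an encoding):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class CCSDecision:
    """Mixed-integer decision vector for the four-well CCS problem.

    Integer part:    4 wells x (x, y, l)           -> 12 variables
    Continuous part: q + 4 periods x 3 free rates  -> 13 variables
    (the fourth well's rate in each period is implied as 1 minus the
    sum of the other three, so it is not a free variable)
    """
    wells: List[Tuple[int, int, int]]  # (x, y, l) per well
    q: float                           # annual injection quantity (MT/year)
    rates: List[List[float]]           # rates[j][i]: 4 periods x 3 free wells

    def n_variables(self) -> int:
        """Total number of decision variables (25 for this case)."""
        return 3 * len(self.wells) + 1 + sum(len(r) for r in self.rates)
```

Keeping the implied fourth rate out of the vector is what reduces the continuous part from 17 to 13 variables, matching the count stated above.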
For this project, two objectives are considered: maximizing the total volume of CO2 injected and minimizing the proportion of injected CO2 with the potential to leak (the percentage of CO2 that remains in the mobile phase). The first objective f1 = 30q (MT) is easy to compute, but the second objective function f2(x, y, l, q, r) is computationally expensive and entails reservoir simulation. For this example case, we consider a number of constraints on well placement and controls. For well location, the injection wells must be drilled within the field's central region, labeled by the integer lattice {1, 2, …, 15} × {1, 2, …, 15}; that is, the well location variables satisfy x, y ∈ {1, 2, …, 15}, and the well length variable is bounded by l ∈ {1, 2, 3}. In practice, wells cannot be too close, so we require any two wells to be at least two blocks apart, i.e., dist(t(x1, y1, l1), t(x2, y2, l2)) ≥ 2 between well 1 and well 2, where t(·) denotes the well trajectory. For the control variables, the rates

Fig. 10. Performance of DZZ for optimal CCS development.


must satisfy 0 < r_ij < 0.4 for well i and year j, and 0.6 < ∑_{i=1}^{3} r_ij < 1.0 for j = 1, 2, …, 30 (so that the implied rate of well 4 also lies between 0 and 0.4). The geological complexity of the storage formation makes finding optimal injection locations difficult. Fig. 9 shows the high heterogeneity of the permeability field (with orders-of-magnitude differences across the field) for the storage aquifer. Reservoir simulation is the commonly used technology to evaluate the storage volume and the risk measure of leakage. The optimal CCS development problem is now formulated as a simulation-based bi-objective optimization problem with integer well location variables and continuous well control variables. The objective functions (in this case the risk measure of leakage) do not have a closed form and entail expensive reservoir simulation. We use Schlumberger's ECLIPSE simulator in our optimization process. For this synthetic case, on average, it takes ECLIPSE about 5–10 min to simulate a single solution. For practical CCS cases, it is not uncommon for the simulator to take hours or longer to evaluate one solution.

We compare the optimization performance of the DZZ and NSGA-II algorithms on this black-box mixed integer MOO problem. We select the initial solution with wells drilled in the center blocks, each well one block long (Fig. 11, left), an annual CO2 injection of q = 5 MT, and the same injection rates r_ij = 25% for all wells i and years j. For the continuous variables we set the tolerance δ = 0.05 for both q (MT/year) and r (percentage). Fig. 10 shows one DZZ run applied to this case. The FPS procedure evaluates about 150 solutions to find the first Pareto solution with (f1, f2) = (60, 4%). Then the zigzag search iteratively performs neighborhood search to return more Pareto solutions with larger carbon injection f1 and larger risk f2. For this case, DZZ evaluates a total of 2454 solutions, of which 61 are approximate Pareto solutions. The approximate Pareto front as shown in Fig.
10 is quite "smooth", though it lies on neither a convex nor a concave curve. For this case, we do not know the true Pareto front, so neither the convergence error CE nor the coverage SC can be computed. However, the approximate Pareto solutions obtained by DZZ are evenly distributed, and they are better-quality solutions compared to those returned by the NSGA-II optimization run.

The DZZ-optimized well locations and control rates for q = 5 MT are given in Figs. 11 and 12, respectively. The DZZ optimization tends to place wells towards the boundary of the constrained reservoir region, and the well lengths are optimized to be longer. These results are practically meaningful because injecting CO2 over a wider range inside the reservoir can spread the CO2 plume into a larger pore space and enhance secure trapping. The DZZ optimization also suggests that varying the injection rates can improve the security of the storage process. Fig. 13 shows the dynamic migration of CO2 via different trapping mechanisms through the field, comparing the initial solution and the optimized solution. The results clearly show that the optimization finds a solution that improves secure trapping by dissolving and absorbing more mobile CO2 in the formation, and thus leads to a lower risk of carbon leakage.

For comparison, we apply NSGA-II to this case. The population size is 50 and the number of generations is set to 40, so 2000 solution evaluations are performed for this optimization run. The crossover and mutation probability values are again 90% and 10%, respectively. Among the 2000 evaluated solutions, 36 are identified as non-dominated with respect to the others. Fig. 14 plots the solutions evaluated through this NSGA-II optimization run in the objective function space. Based on the comparison results (Fig. 15), we see that the majority of the solutions evaluated by NSGA-II are far from the approximate Pareto region identified by DZZ.
Moreover, the 36 returned solutions are poorly distributed in the space. The numerical results indicate that DZZ significantly outperforms NSGA-II for this problem, because DZZ returns more



Fig. 11. Optimization of well placement (horizontal wells in the bottom layer); given q = 5 MT, initial placement of 4 wells (left) versus DZZ optimized placement of 4 wells (right).

Fig. 12. Optimization of injection rates (proportion of fixed annual injection); given q = 5 MT, initial solution (top) versus the DZZ optimized rates for 4 wells (bottom).

Fig. 14. NSGA-II optimization performance for optimal CCS development.

Fig. 13. CO2 trapping process (proportions of the total injection) for q = 5 MT annual injection; initial solution (top) versus the DZZ optimized solution performance (bottom).

evenly distributed and higher quality (with relatively larger f1 and smaller f2) Pareto solutions than NSGA-II does.
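The evenness comparison above can be quantified with a spacing-type measure over a returned front. Below is a sketch of Schott's spacing metric, shown here as one common choice for bi-objective fronts rather than the measure used in this paper:

```python
import math

def spacing(front):
    """Schott's spacing metric for a set of bi-objective points.

    Smaller values mean the points are more evenly spaced along the
    front; 0 means every point's nearest neighbor is equally distant.
    Uses Manhattan distance to each point's nearest neighbor.
    """
    d = []
    for i, p in enumerate(front):
        d.append(min(abs(p[0] - q[0]) + abs(p[1] - q[1])
                     for j, q in enumerate(front) if j != i))
    mean = sum(d) / len(d)
    return math.sqrt(sum((x - mean) ** 2 for x in d) / (len(d) - 1))
```

Under such a metric, a well-spread 61-point DZZ front would score lower (better) than a clustered 36-point NSGA-II front of similar extent.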

5. Concluding remarks

We develop a new zigzag search algorithm using a gradient-free direct search approach (called DZZ) for DMOO problems. DZZ provides a general solution method that can be applied to DMOO

Fig. 15. Comparison of DZZ and NSGA-II optimization for CCS development case.


or mixed integer MOO problems, particularly those with black-box or simulation-based objective functions. Because it requires neither gradients nor Hessians, DZZ is well suited to practical MOO problems. We use two benchmark MOO problems from the literature and one mixed integer MOO problem associated with a carbon geologic sequestration field case to test the optimization efficiency of the DZZ algorithm. Based on the comparison results with NSGA-II, DZZ is demonstrated to be an efficient algorithm for DMOO problems.

Our future work in DMOO includes a rigorous convergence analysis of DZZ and the development of new zigzag search algorithms for DMOO problems under system uncertainty. For instance, we may integrate the algorithmic ideas of DZZ with the retrospective optimization framework [45–47] for stochastic DMOO problems. We will apply DZZ to more practical DMOO problems to further test its efficiency, e.g., in energy system development and management considering multiple economic and environmental objectives, such as optimal design of fracturing networks for shale gas fields, optimal development of large-scale wind farms, and optimal planning and control of smart grid systems.

Acknowledgements

We are grateful to the Rutgers Research Council for partial funding of this work. Cluster computing for reservoir simulation and optimization has been provided by the high-performance computing center at the Norwegian University of Science and Technology (NTNU). We also thank Schlumberger for granting ECLIPSE software licenses.

References

[1] Abraham A, Jain LC, Goldberg R. Evolutionary multiobjective optimization: theoretical advances and applications. London: Springer-Verlag; 2005.
[2] Tappeta R, Renaud J, Rodríguez J. An interactive multiobjective optimization design strategy for decision based multidisciplinary design. Eng Optim 2002;34(5):523–44.
[3] Murata T, Ishibuchi H, Tanaka H. Multi-objective genetic algorithm and its applications to flowshop scheduling. Comput Ind Eng 1996;30:957–68.
[4] Willson B, Cappelleri D, Simpson T, Frecker M. Efficient Pareto frontier exploration using surrogate approximations. Optim Eng 2001;2(1):31–50.
[5] Shan S, Wang GG. An efficient Pareto set identification approach for multiobjective optimization on black-box functions. J Mech Des 2005;127(5):866–74.
[6] Markowitz H. Portfolio selection: efficient diversification of investments. Malden, Massachusetts: Blackwell; 1991.
[7] Pareto V. Manual of political economy. New York: A.M. Kelley; 1906.
[8] Edgeworth FY. Mathematical psychics. London, England: P. Keagan; 1881.
[9] Chinchuluun A, Migdalas A, Pardalos PM, Pitsoulis L. Pareto optimality, game theory and equilibria. New York, USA: Springer; 2008.
[10] Cohon JL. Multiobjective programming and planning. New York: Academic Press; 1978.
[11] Das I, Dennis J. Normal-boundary intersection: a new method for generating the Pareto surface in nonlinear multicriteria optimization problems. SIAM J Optim 1998;8:631–57.
[12] Zopounidis C, Pardalos PM. Handbook of multicriteria analysis. Berlin, Heidelberg: Springer; 2010.
[13] Audet C, Savard G, Zghal W. Multiobjective optimization through a series of single-objective formulations. SIAM J Optim 2008;19(1):188–210.
[14] Schütze O. Set oriented methods for global optimization [Ph.D. thesis]. Paderborn University; 2004.
[15] Deb K. Multi-objective genetic algorithms: problem difficulties and construction of test problems. Evol Comput 1999;7:205–30.
[16] Deb K, Pratap A, Agarwal S, Meyarivan T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Comput 2002;6(2):182–97.
[17] Deb K. Multi-objective optimization using evolutionary algorithms. Chichester, England: Wiley; 2009.
[18] Tamaki H, Kita H, Kobayashi S. Multi-objective optimization by genetic algorithms: a review. In: Proceedings of the IEEE International Conference on Evolutionary Computation; 1996. p. 517–22.


[19] Zhou A, Zhang Q, Jin Y, Tsang E. A model-based evolutionary algorithm for bi-objective optimization. In: Proceedings of the Congress on Evolutionary Computation; 2005. p. 2568–75.
[20] Zitzler E, Laumanns M, Thiele L. SPEA2: improving the strength Pareto evolutionary algorithm. Technical Report. Zürich: Eidgenössische Technische Hochschule Zürich (ETH); 2001.
[21] Liang JJ, Qin AK, Suganthan PN, Baskar S. Dynamic multi-swarm particle swarm optimizer with a novel constraint-handling mechanism. IEEE Trans Evol Comput 2006;10(3):281–94.
[22] Seo JH, Im CH, Heo CG, Kim JK, Jung HK, Lee CG. Multimodal function optimization based on particle swarm optimization. IEEE Trans Magn 2006;42(4):1095–8.
[23] Audet C, Dennis J. Mesh adaptive direct search algorithms for constrained optimization. SIAM J Optim 2007;17:188–217.
[24] Grandoni F, Ravi R, Singh M. Iterative rounding for multi-objective optimization problems. In: Proceedings of the 17th Annual European Symposium on Algorithms; 2009. p. 95–106.
[25] Ryu JH, Kim S, Wan H. Pareto front approximation with adaptive weighted sum method in multiobjective simulation optimization. In: Proceedings of the 2009 Winter Simulation Conference, Washington, DC; 2009. p. 623–33.
[26] Recchioni MC. A path following method for box-constrained multiobjective optimization with applications to goal programming problems. Math Methods Oper Res 2003;58:58–69.
[27] Hillermeier C. Generalized homotopy approach to multiobjective optimization. J Optim Theory Appl 2001;110:557–83. http://dx.doi.org/10.1023/A:1017536311488.
[28] Schütze O, Lara A, Coello Coello C. The directed search method for unconstrained multi-objective optimization problems. In: Proceedings of EVOLVE – A Bridge between Probability, Set Oriented Numerics, and Evolutionary Computation.
[29] Schütze O, Coello Coello CA, Mostaghim S, Talbi E-G, Dellnitz M. Hybridizing evolutionary strategies with continuation methods for solving multi-objective problems.
Eng Optim 2008;40(5):383–402.
[30] Hernández C, Naranjani Y, Sardahi Y, Liang W, Schütze O, Sun J-Q. Simple cell mapping method for multi-objective optimal feedback control design. Int J Dyn Control 2013;1(3):231–8.
[31] Ringkamp M, Ober-Blöbaum S, Dellnitz M, Schütze O. Handling high-dimensional problems with multi-objective continuation methods via successive approximation of the tangent space. Eng Optim 2012;44(9):1117–46.
[32] Wang HG. Zigzag search for continuous multi-objective optimization. INFORMS J Comput 2013;25(4):654–65. http://dx.doi.org/10.1287/ijoc.1120.0528.
[33] Adriana LL, Gustavo S, Carlos ACC. HCS: a new local search strategy for memetic multiobjective evolutionary algorithms. IEEE Trans Evol Comput 2010;14:112–32. http://dx.doi.org/10.1109/TEVC.2009.2024143.
[34] Adriana LL, Carlos ACC, Schütze O. A painless gradient-assisted multi-objective memetic mechanism for solving continuous bi-objective optimization problems. In: IEEE Congress on Evolutionary Computation; 2010. p. 1–8.
[35] Marler RT, Arora JS. Survey of multi-objective optimization methods for engineering. Struct Multidiscip Optim 2004;26(6):369–95. http://dx.doi.org/10.1007/s00158-003-0368-6.
[36] Chinchuluun A, Pardalos PM. A survey of recent developments in multiobjective optimization. Ann Oper Res 2007;154(1):29–50.
[37] Pardalos PM, Siskos Y, Zopounidis C. Advances in multicriteria analysis. Kluwer Academic; 1995.
[38] Hooke R, Jeeves TA. Direct search solution of numerical and statistical problems. J ACM 1961;8(2):212–29.
[39] Lewis RM, Torczon V. Pattern search algorithms for bound constrained minimization. SIAM J Optim 1999;9(4):1082–99.
[40] Deb K, Pratap A, Agarwal S, Meyarivan T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Comput 2002;6(2):182–97.
[41] Lee D-T, Schachter BJ. Two algorithms for constructing a Delaunay triangulation. Int J Comput Inf Sci 1980;9(3):219–42.
[42] Schütze O, Vasile M, Junge O, Dellnitz M, Izzo D.
Designing optimal low-thrust gravity-assist trajectories using space pruning and a multi-objective approach. Eng Optim 2009;41(2):155–81.
[43] Allgower EL, Georg K. Numerical continuation methods, vol. 13. Berlin: Springer-Verlag; 1990.
[44] Solomon S, et al. Climate change 2007: the physical science basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change; 2007.
[45] Wang HG. Retrospective optimization of mixed stochastic systems using dynamic simplex interpolation. Eur J Oper Res 2012;217(1):141–8.
[46] Wang HG, Echeverria D, Durlofsky LJ, Cominelli A. Optimal well placement under uncertainty using a retrospective optimization framework. Soc Pet Eng J 2012;17(1):112–21.
[47] Wang HG, Pasupathy R, Schmeiser BW. Integer-ordered simulation optimization using R-SPLINE: retrospective search with piecewise-linear interpolation and neighborhood enumeration. ACM Trans Model Comput Simul 2013;23(3):17.