An efficient simulation optimization method for the generalized redundancy allocation problem


European Journal of Operational Research 265 (2018) 1094–1101

Contents lists available at ScienceDirect

European Journal of Operational Research journal homepage: www.elsevier.com/locate/ejor

Innovative Applications of O.R.

An efficient simulation optimization method for the generalized redundancy allocation problem Kuo-Hao Chang∗, Po-Yi Kuo Department of Industrial Engineering and Engineering Management, National Tsing Hua University, Hsinchu, Taiwan

Article history: Received 17 November 2016; Accepted 25 August 2017; Available online 1 September 2017

Keywords: Reliability; Generalized redundancy allocation problem; Simulation optimization; Importance sampling

Abstract

The redundancy allocation problem (RAP), which seeks the redundancy allocation that maximizes the system reliability subject to constraints on system cost, or minimizes the system cost subject to constraints on the system reliability, has been an active research area in recent decades. In this paper, we consider the generalized redundancy allocation problem (GRAP), which extends the traditional RAP to a more realistic situation where the system under consideration has a generalized (typically complex) network structure; for example, the components are connected with each other neither in series nor in parallel but in some logical relationship. Special attention is given to the case where the objective function, e.g., the system reliability, is not analytically available but has to be estimated through simulation. We propose a partitioning-based simulation optimization method to solve GRAP. Owing to several specially designed mechanisms, the proposed method solves GRAP both effectively and efficiently. Regarding efficacy, we prove that the proposed method converges to the truly optimal solution with probability one (w.p.1). Regarding efficiency, an extensive numerical experiment shows that the proposed method can find the optimal or a nearly optimal solution of GRAP under a reasonable computational budget and outperforms the other existing methods on the created scenarios. © 2017 Elsevier B.V. All rights reserved.

1. Introduction

Reliability optimization (RO) has been an active research area attracting a great deal of attention in recent decades due to its extensive real-world applications, such as power systems, electronic systems, telecommunication systems and manufacturing systems. An excellent overview of system reliability optimization is provided in Kuo and Prasad (2000). The redundancy allocation problem (RAP), an important problem in the area of reliability optimization, seeks the redundancy allocation that either maximizes the system reliability subject to constraints on system cost or minimizes the system cost subject to constraints on the system reliability. RAP is especially important during the crucial initial stages of designing a new system. Unfortunately, RAP is fundamentally an NP-hard combinatorial optimization problem (Chern, 1992), and the computational requirement grows exponentially as the number of nodes and links in the network increases. As such, many heuristics have been developed in the literature for solving RAP efficiently, e.g., Coit and Smith (1996a), Dai, Xie, and Wang (2007) and Zou, Gao, Wu, Li, and Li (2010). In



Corresponding author. E-mail addresses: [email protected], [email protected] (K.-H. Chang).

http://dx.doi.org/10.1016/j.ejor.2017.08.049 0377-2217/© 2017 Elsevier B.V. All rights reserved.

addition, Coit and Smith (1996b) proposed an approach that combines neural networks and the genetic algorithm to solve RAP, where the former serves as a tool for estimating the system reliability and the latter is applied to search for the components of a series-parallel system that has minimum cost, subject to a constraint on the minimum system reliability. Ouzineb, Nourelfath, and Gendreau (2008) proposed a tabu search heuristic to determine the minimal-cost system configuration under availability constraints for multi-state series-parallel systems. Sahoo, Bhunia, and Kapur (2012) formulated a multi-objective reliability optimization problem in an interval environment and investigated the performance of the proposed techniques through sensitivity analyses. Tekiner-Mogulkoc and Coit (2011) further considered the problem where the reliability of most components is not known with certainty and proposed algorithms to minimize the coefficient of variation of the system reliability estimate. Zia and Coit (2010) proposed an optimization method based on column generation decomposition to maximize system reliability. In a typical RAP, the problem is often assumed to have a series-parallel or k-out-of-n type network structure, making the objective function analytically available. In practice, however, many redundancy systems, such as telecommunication systems, can have a complex network topology whose components are connected with each other neither in series nor in parallel, but in some logical


relationship. In this paper, we consider the generalized redundancy allocation problem (GRAP), which extends the traditional RAP to a more realistic situation where the system under consideration has a generalized (typically complex) network structure and thus the objective function, e.g., the system reliability, cannot be derived analytically but has to be estimated through simulation. Zhao and Liu (2003) proposed an approach that combines stochastic simulation, neural networks and the genetic algorithm to solve both parallel and standby redundancy optimization problems. Yeh, Lin, Chung, and Chih (2010) proposed a Monte Carlo simulation (MCS)-based particle swarm optimization to solve complex network reliability optimization problems. One drawback of the use of simulation is that it may require extensive (sometimes even unaffordable) computational time to produce statistically valid results, especially when the feasible region is very large. Moreover, the existing simulation-based methods are essentially heuristics that do not provide any convergence guarantee; that is, the optimal solution found by the algorithms is not necessarily the truly optimal solution of the problem. In this research, we propose a simulation optimization (SO) method, called the partitioning-based simulation optimization method for reliability optimization (PSORO), to solve GRAP. SO has been a rapidly expanding field over the past decade due to its ability to solve many practical problems. It represents a class of systematic methodologies for solving problems where the objective function is too complex to be analytically available, often due to profound randomness and/or the complicated interaction between randomness and decision variables, and thus has to be estimated through stochastic simulation. An excellent survey of the methods and applications of SO is given in Amaran, Sahinidis, Sharda, and Bury (2016).
In the literature, some efficient simulation optimization methods have been developed; for example, Shi and Ólafsson (2000) proposed a new randomized method, called the Nested Partitions (NP) method, for solving global discrete optimization problems. Chang, Hong, and Wan (2013) proposed a new Response-Surface-Method (RSM)-based framework, called the Stochastic Trust-Region Response-Surface Method (STRONG), to solve continuous simulation optimization problems. Chang (2015) provided an improved version of STRONG that has better computational efficiency and can handle generally-distributed response variables. Chang, Li, and Wan (2014) further combined STRONG with efficient screening designs for handling large-scale SO problems. Some metaheuristics have also been developed, including the genetic algorithm, tabu search, the Nelder-Mead simplex method (NM) and scatter search, etc. (Spall, 2003). Chang (2012) proposed a convergent NM-based framework, called the Stochastic Nelder-Mead simplex method (SNM), to handle gradient-free SO problems. PSORO is a partitioning-based SO method specially designed for solving GRAP. Its framework is outlined as follows. To begin with, PSORO partitions the feasible region into several subregions, samples a few solutions in each subregion, evaluates the promising index of each subregion, and identifies the most promising region. In later iterations, the whole process is repeated except that the feasible region is replaced by the most promising region. Throughout the whole process, the optimal solution is estimated by the "best" of all sampled solutions accumulated through the current iteration. In fact, the partitioning structure of PSORO allows the algorithm to identify and approach the region where the truly optimal solution is located, thereby reducing the search time and computations needed for locating the truly optimal solution.
The concept of partitioning is not new and has been used in some methods; for example, branch-and-bound methods (Horst & Tuy, 2003), nested partitions (NP) method (Shi, Meyer, Bozbay, & Miller, 2004; Shi & Ólafsson, 2000), adaptive partitioned random search (APRS) (Tang, 1994), and adaptive global and local search (AGLS)


Table 1
Notations.

R(t; x)     System reliability at time t, depending on x
x           (x11, . . . , x1,m1, x21, . . . , x2,m2, . . . , xs,ms)
xij         Quantity of the jth available component used in subsystem i
mi          Number of available components for subsystem i
s           Number of subsystems
C, W        System-level constraint limits for cost and weight
cij, wij    Cost and weight of the jth available component of subsystem i
t0          Mission time (fixed)
λij         Parameter of the exponential distribution, fij(t) = λij exp(−λij t)
ni          Total number of components used in subsystem i, ni = Σ_{j=1}^{mi} xij
nmax,i      Upper bound for ni (ni ≤ nmax,i for all i)
Nk          Number of replications of each sampled solution in the kth iteration
n(k)        Number of solutions sampled by PSORO at iteration k

(Chang, 2016). However, several specially developed mechanisms are incorporated in PSORO to enable it to solve GRAP both effectively and efficiently. We prove that PSORO converges to the truly optimal solution with probability one (w.p.1) and, moreover, finds the optimal or a nearly optimal solution under a reasonable computational budget. Also, as will be shown later, PSORO significantly outperforms the other existing methods in terms of efficiency and efficacy on the selected test problems and is therefore worth further investigation. More details will be presented in later sections.

The remainder of this article is organized as follows. Section 2 describes the mathematical model of GRAP. Section 3 presents the IS-based MCS, followed by Section 4 where PSORO is introduced. Section 5 conducts the convergence analysis. Section 6 presents the results of the numerical experiments and evaluates the efficacy and efficiency of PSORO. The article concludes with general remarks and directions for future research.

2. Generalized redundancy allocation problem

The optimal solution to GRAP is the redundancy allocation that maximizes the system reliability, subject to overall restrictions on the system cost, C, the system weight, W, and the maximum number of components for each subsystem, nmax,i. The notations are defined in Table 1 and the mathematical model of GRAP is given as follows:

max R(t0; x)

subject to

Σ_{i=1}^{s} Σ_{j=1}^{mi} cij xij ≤ C,

Σ_{i=1}^{s} Σ_{j=1}^{mi} wij xij ≤ W,

Σ_{j=1}^{mi} xij ≤ nmax,i, for i = 1, . . . , s,

xij ∈ {0, 1, 2, . . .}.   (1)
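For concreteness, checking whether a candidate allocation satisfies the constraints of model (1) can be sketched as follows (illustrative code, not from the paper; the function name and the small data set are our own assumptions):

```python
# Illustrative sketch: feasibility check for model (1).
# x[i][j] = quantity of available component j used in subsystem i.

def is_feasible(x, c, w, n_max, C, W):
    """Return True if allocation x satisfies all constraints of model (1)."""
    total_cost = sum(c[i][j] * x[i][j]
                     for i in range(len(x)) for j in range(len(x[i])))
    total_weight = sum(w[i][j] * x[i][j]
                       for i in range(len(x)) for j in range(len(x[i])))
    per_subsystem_ok = all(sum(x[i]) <= n_max[i] for i in range(len(x)))
    nonneg = all(x[i][j] >= 0
                 for i in range(len(x)) for j in range(len(x[i])))
    return total_cost <= C and total_weight <= W and per_subsystem_ok and nonneg

# Hypothetical example: 2 subsystems with 2 component types each.
x = [[1, 2], [0, 1]]
c = [[20, 30], [20, 20]]
w = [[40, 50], [50, 40]]
print(is_feasible(x, c, w, n_max=[3, 2], C=600, W=700))  # True
```

The same check would return False if, say, nmax,1 were reduced to 2, since subsystem 1 uses three components in total.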

GRAP assumes no particular structure in the network; i.e., the network structure is not restricted to the series-parallel type. To generalize the methodology, we assume the system reliability is not analytically available, typically due to the complexity of the network structure, and instead has to be estimated by simulation. Unfortunately, the estimates provided by simulation inevitably contain noise, which makes it difficult to evaluate whether one redundancy allocation is superior to another. Clearly, the identification of the optimal solution to GRAP is even more challenging. One way to address this issue is to increase the sample size, i.e., the number of replications, of each redundancy allocation to


mitigate the effect of noise. However, this approach may be impractical when the problem is of large scale. For example, consider Problem (1) when xij is 20, s = 10, and ms = 5. There are a total of 50^20 solutions. Suppose each solution is replicated 20 times. It would then require at least 20 × 50^20 simulation runs to select the optimal redundancy allocation, which is clearly infeasible in practice. In what follows, we present the proposed PSORO, which embeds a partitioning structure to enable GRAP to be solved more efficiently.

3. IS-based MCS

Accurate reliability estimation of GRAP may require a large number of simulation observations, especially when a rare event is involved, such as when the engineer is estimating the reliability of a highly reliable system. We propose the use of the importance sampling (IS) technique for estimating system reliability, since it allows the outputs to be observed more frequently in the region of interest due to a change in the input distribution. As a result, the variance of the system reliability estimates is greatly reduced, resulting in a substantial reduction in the number of simulation observations required to attain a given level of precision. This is important for rare event simulation, since it typically suffers from large variances in the estimates. The use of IS can alleviate the computational burden of estimating the reliability of GRAP.

Specifically, given x, let I(x, ξ) be an indicator function, defined by the simulation model, that is 1 if the system is functional and 0 otherwise. A system is called functional if not all minimal cut sets fail. Let ξ = (ξ11, ξ12, . . . , ξs,ms) be a Bernoulli random vector where each ξij, i = 1, . . . , s, j = 1, . . . , ms, denotes a Bernoulli random variable with parameter Pij. Specifically, ξij = 1 if the jth component of subsystem i is functional (with probability Pij) and 0 otherwise (with probability 1 − Pij). The indicator function I(x, ξ) is thus a function of the decision vector x and the random vector ξ that represents whether or not the overall system is functional. For a fixed t0, the performance measure of interest is the system reliability, which can be defined as R(t0; x) = E[I(x, ξ)].

The crude Monte Carlo (CMC) method is one of the simplest methods to estimate the system reliability. In CMC, given x, the Bernoulli random vector ξ is generated N times from the joint distribution function f and the system reliability is estimated by

R̂CMC(t0; x) = (1/N) Σ_{i=1}^{N} I(x, ξi).   (2)

For any iteration k, the proposed IS-based MCS, on the other hand, generates Nk i.i.d. replications of the random vector ξ from a target density q that has an increased rate of occurrence of failure. In contrast with CMC, where the sample size is fixed at N, the proposed IS-based MCS employs an increasing sample size schedule {Nk} in order to reduce the noise effect when the algorithm is close to convergence and different solutions may differ only slightly in their objective function values. The system reliability estimated by IS-based MCS is given by

R̂IS(t0; x) = (1/Nk) Σ_{i=1}^{Nk} I(x, ξi) f(ξi)/q(ξi).   (3)

R̂CMC(t0; x) and R̂IS(t0; x) are both unbiased estimators of R(t0; x), but the variance of R̂IS(t0; x) is much smaller than that of R̂CMC(t0; x). Let D be the collection of all possible outcomes of ξ. Then

E[R̂CMC(t0; x)] = (1/Nk) Σ_{i=1}^{Nk} Σ_{ξ∈D} I(x, ξ) f(ξ) = (1/Nk) Σ_{i=1}^{Nk} Σ_{ξ∈D} I(x, ξ) (f(ξ)/q(ξ)) q(ξ) = E[R̂IS(t0; x)].

Moreover, let E[R̂CMC(t0; x)] = E[R̂IS(t0; x)] = μ. Then, using I(x, ξ)² = I(x, ξ),

Var[R̂CMC(t0; x)] = E[((1/Nk) Σ_{i=1}^{Nk} I(x, ξi))²] − μ² = (1/Nk²) Σ_{i=1}^{Nk} Σ_{ξ∈D} I(x, ξ) f(ξ) − μ²,   (4)

Var[R̂IS(t0; x)] = E[((1/Nk) Σ_{i=1}^{Nk} I(x, ξi) f(ξi)/q(ξi))²] − μ² = (1/Nk²) Σ_{i=1}^{Nk} Σ_{ξ∈D} I(x, ξ) (f²(ξ)/q²(ξ)) q(ξ) − μ².   (5)

So

Var[R̂CMC(t0; x)] − Var[R̂IS(t0; x)] = (1/Nk²) Σ_{i=1}^{Nk} Σ_{ξ∈D} I(x, ξ) (1 − f(ξ)/q(ξ)) f(ξ) > 0   (6)
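To make the contrast between estimators (2) and (3) concrete, the following sketch compares crude Monte Carlo with a likelihood-ratio-weighted estimate on a toy two-component series system. This is our own illustration, not the paper's code: the sampling densities, function names, and component probabilities are assumptions, and the example demonstrates only the unbiasedness of the weighted estimator under a change of distribution.

```python
# Toy comparison of crude Monte Carlo and an IS-weighted estimator
# for a 2-component series system with independent Bernoulli components.
import random

def lr(xi, p_f, p_q):
    """Likelihood ratio f(xi)/q(xi) for independent Bernoulli components."""
    r = 1.0
    for b, pf, pq in zip(xi, p_f, p_q):
        r *= (pf if b else 1 - pf) / (pq if b else 1 - pq)
    return r

def estimate(p_f, p_q, structure, n, rng):
    """Estimate E_f[structure(xi)] by sampling xi ~ q and weighting by f/q;
    with p_q == p_f the weights are all 1 and this reduces to CMC, eq. (2)."""
    total = 0.0
    for _ in range(n):
        xi = [rng.random() < pq for pq in p_q]
        total += structure(xi) * lr(xi, p_f, p_q)
    return total / n

series = lambda xi: 1.0 if all(xi) else 0.0  # functional iff all components work
p = [0.9, 0.8]                               # true functioning probabilities
rng = random.Random(1)
r_cmc = estimate(p, p, series, 100_000, rng)           # crude Monte Carlo
r_is = estimate(p, [0.7, 0.6], series, 100_000, rng)   # sample with more failures
# both estimates should be close to the true reliability 0.9 * 0.8 = 0.72
```

In a real GRAP setting the payoff of IS shows up when the event of interest is rare; here the toy event is common, so the sketch only verifies that the reweighting leaves the estimate unbiased.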

The difference in (6) is positive if 1 − f(ξ)/q(ξ) > 0 or, equivalently, f(ξ) < q(ξ). We summarize the results in Theorem 1.

Theorem 1. R̂CMC(t0; x) and R̂IS(t0; x) are both unbiased estimators of R(t0; x). However, if f(ξ) < q(ξ), then Var[R̂IS(t0; x)] < Var[R̂CMC(t0; x)], i.e., variance reduction can be achieved.

4. Partitioning-based simulation optimization method for reliability optimization (PSORO)

In this section, we first present the main framework of PSORO, followed by a discussion of each step of the algorithm.

4.1. Main framework

In PSORO, all feasible solutions are first encoded, and the feasible region is iteratively partitioned into several subregions to accelerate convergence to the optimal solution. Let n(k) denote the number of solutions sampled by PSORO and Ω_k denote the most promising region at iteration k. The main framework of PSORO is outlined as follows:

Step 1. Initialization (Section 4.2). Encode all feasible solutions using the encoding approach.

Step 2. Partitioning (Section 4.3). Partition the most promising region Ω_k into M subregions Ω_k^1, . . . , Ω_k^M. Aggregate the complementary region X\Ω_k into one

region Ω_k^{M+1}.

Step 3. Sampling solutions using LHS (Section 4.3). Apply LHS to randomly select n(k) solutions in each of the regions Ω_k^j, j = 1, 2, . . . , M + 1:

x_1^j, . . . , x_{n(k)}^j, j = 1, . . . , M + 1.

Step 4. Evaluating the promising index of each subregion. For each region Ω_k^j, j = 1, 2, . . . , M + 1, calculate the promising index as follows:

I(Ω_k^j) = [R̂IS(t0; x_1^j) + · · · + R̂IS(t0; x_{n(k)}^j)] / n(k).

Step 5. Identifying the most promising region. Identify the region that has the best promising index as the most promising region, i.e., let

j_k* ∈ arg max_{j=1,...,M+1} I(Ω_k^j).


If the most promising region is a subregion of Ω_k, that is, j_k* ≤ M, then let it be the most promising region in the next iteration, namely, let Ω_{k+1} = Ω_k^{j_k*}. Otherwise, backtrack to the previous most promising region.

Step 6. Retaining the sampled solutions and estimating the optimal solution (Section 4.4). Use the retaining strategy to keep the sampled solutions in the retaining set Λ_k. Estimate the optimal solution by

x̂_k* ∈ arg max_{x∈Λ_k} R̂IS(t0; x).

Step 7. Termination/returning the estimated optimal solution. Stop if a prespecified terminating criterion is fulfilled or if the maximum number of function evaluations has been reached, and return x̂_k*. Otherwise, go to Step 1.

Three remarks about PSORO deserve to be made. First, the "best" promising index in Step 5 depends on whether the problem has a maximization- or minimization-based objective function. In particular, if the objective function is maximization-based, the region that has the best promising index refers to the one that has the maximal objective function value, and vice versa. Second, comparing PSORO with other partitioning-based methods, Step 6 represents a major difference in that PSORO retains the sampled solutions, while the other existing partitioning-based methods do not. By retaining the sampled solutions, the estimated optimal solution is updated whenever a new solution has a better objective function estimate. Consequently, PSORO can continue to improve the quality of the estimated optimal solutions throughout the process and, as will be proved in Section 5, finally achieve convergence. Third, in Step 5, if more than one region is equally promising, it does not matter which region is identified as the most promising one.

Fig. 1 provides an illustrative example showing how the process works. The entire feasible region is first partitioned into 4 subregions and 3 solutions are sampled from each of them. After evaluating the potential of each subregion, one is considered the most promising region, while the others are aggregated into the complementary region. The most promising region is further partitioned into 3 subregions. Again, 3 solutions are sampled within each of those subregions as well as the complementary region. If one of the subregions is identified as the most promising region, it is further partitioned as was done in the previous iterations.
On the other hand, if the complementary region is identified as the most promising region, the region that was identified as the most promising region in a previous iteration becomes the most promising region again; that is, the algorithm backtracks and Ω_{k+1} = Ω_{k−1}. The new most promising region is then partitioned and sampled in the same fashion. This process continues until the optimal solution is identified.

Fig. 1. An illustration of PSORO.

4.2. The encoding approach

The encoding method ensures that solutions having close objective function values are clustered in a neighborhood, which in turn facilitates the partitioning process. As an example, suppose Problem (1) has the constraint Σ_{j=1}^{4} x1j ≤ 6 for subsystem 1, i.e., nmax,1 = 6. Each of the variables x11, . . . , x14 corresponds to the quantity of the jth component in subsystem 1 for j = 1, . . . , 4. PSORO uses the encoded variables (y11, y12, y13, y14, y15, y16), where y1m, m = 1, . . . , nmax,1, represents the type of component that is chosen for subsystem 1. As shown in Fig. 2, suppose one solution is y11 = 1, y12 = 2, y13 = 3, y14 = 2, y15 = 2, y16 = 3. This can be converted back into the original variables x11 = 1, x12 = 3, x13 = 2, x14 = 0, which signifies that one component of type 1, three components of type 2, two components of type 3, and zero components of type 4 are chosen for subsystem 1. Note that different orderings of (yim), m = 1, . . . , nmax,i, represent the same solution (xij), i = 1, . . . , s, j = 1, . . . , ms.

Fig. 2. The proposed encoding approach.

4.3. The partitioning approach and Latin hypercube sampling

The proposed partitioning approach partitions the feasible region in the following manner. For each subsystem i, let τi be the number of minimal paths (MPs) that pass through subsystem i. The importance index ξi of subsystem i is defined as follows:

ξi = τi / nmax,i.   (7)
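As a quick check of (7), the following sketch computes the importance indices and the resulting partitioning order. The values of τi and nmax,i are those of the complex-network example discussed in the text; the code itself is our own illustration:

```python
# Importance index (7): xi_i = tau_i / n_max,i, then order subsystems by it.
tau = [3, 2, 2, 1, 2, 5]     # number of minimal paths through each subsystem
n_max = [1, 1, 2, 2, 2, 3]   # upper bounds n_max,i

xi = [t / n for t, n in zip(tau, n_max)]  # [3.0, 2.0, 1.0, 0.5, 1.0, 1.666...]

# Subsystems sorted by decreasing importance index (stable sort keeps
# the tied subsystems 3 and 5 in their original order).
order = sorted(range(1, 7), key=lambda i: -xi[i - 1])
print(order)  # [1, 2, 6, 3, 5, 4]
```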

Consider the network in Fig. 5 as an example. There are 15 original decision variables, x11 , . . . , x63 . Suppose nmax,1 = 1, nmax,2 = 1, nmax,3 = 2, nmax,4 = 2, nmax,5 = 2, nmax,6 = 3. Therefore, the encoded variables are y11 , y21 , y31 , y32 , y41 , y42 , y51 , y52 , y61 ,

1098

K.-H. Chang, P.-Y. Kuo / European Journal of Operational Research 265 (2018) 1094–1101

Decision variable 2

Decision variable 1 Fig. 3. Latin hypercube sampling.

y62, y63. The system is functional if at least one of the following MPs goes through: subsystems 1 → 3 → 6; 1 → 4 → 6; 1 → 5 → 6; 2 → 3 → 6; and 2 → 5 → 6. So τ1 = 3, τ2 = 2, τ3 = 2, τ4 = 1, τ5 = 2, τ6 = 5. As a result, ξ1 = 3, ξ2 = 2, ξ3 = 1, ξ4 = 0.5, ξ5 = 1, ξ6 ≈ 1.67. Ordering the subsystems by their importance index gives 1 → 2 → 6 → 3 → 5 → 4, so the variables chosen to partition the feasible region follow the order y11, y21, y61, y62, y63, y31, y32, y51, y52, y41, y42, and the values of the variables are sequentially determined according to this order.

PSORO applies the Latin hypercube sampling (LHS) approach to sample solutions in the feasible region in order to enhance the accuracy of the estimation of the promising index of each region. The LHS procedure works as follows. Suppose there are only two decision variables, say y41 and y42, whose values are to be determined, and suppose each has 5 types of component. We first take two random permutations of the integers 1, . . . , 5 and form two arrays, for example, [1, 2, 3, 4, 5] and [1, 4, 3, 5, 2]. Then, as shown in Fig. 3, 5 solutions can be determined by pairing each value from the first array with the value in the same location of the second array, that is, (1, 1), (2, 4), (3, 3), (4, 5), (5, 2). Sometimes the numbers of component types of the variables differ. Consider, for example, y41 ∈ {1, 2, 3} and y42 ∈ {1, 2}, and suppose 5 solutions are to be sampled. In this situation, one random permutation of {1, 2, 3} is first taken, say [3, 1, 2]. Then, two values are randomly taken from {1, 2, 3}, say [3, 1]. Putting them together yields the first array [3, 1, 2, 3, 1]. A similar approach is applied for the second variable. Pairing the corresponding values of the two arrays location by location results in 5 solutions.
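The LHS step just described can be sketched as follows. This is a simplified pure-Python illustration under our own naming, not the authors' implementation; it handles the unequal-range case by cycling random permutations of each variable's candidate values:

```python
# Sketch of LHS for discrete decision variables with possibly different ranges.
import random

def lhs_column(values, n, rng):
    """Return n samples of one variable: concatenate random permutations of
    `values` until n entries are produced (handles n > len(values))."""
    col = []
    while len(col) < n:
        perm = list(values)
        rng.shuffle(perm)
        col.extend(perm)
    return col[:n]

def lhs(domains, n, rng):
    """Sample n solutions; `domains` is a list of candidate-value lists,
    one per decision variable. Pair the columns location by location."""
    cols = [lhs_column(vals, n, rng) for vals in domains]
    return list(zip(*cols))  # one tuple per sampled solution

rng = random.Random(0)
solutions = lhs([[1, 2, 3], [1, 2]], 5, rng)  # y41 has 3 types, y42 has 2
# five pairs; every candidate value of each variable appears at least once
```

Cycling permutations keeps the marginal counts of each value as balanced as possible, which is the property the promising-index estimate benefits from.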

4.4. Retaining strategies and sample size schedule

Two retaining strategies can be adopted in the PSORO framework.

Retaining strategy 1 - For any iteration k, all solutions sampled in each region, say x_1^j, . . . , x_{n(k)}^j, j = 1, . . . , M + 1, are collected in the retaining set Λ_k. Therefore, |Λ_{k+1}| = |Λ_k| + n(k)M.

Retaining strategy 2 - For any iteration k, only the solution with the largest objective function value in each region is collected in the retaining set Λ_k. Therefore, |Λ_{k+1}| = |Λ_k| + M.

Clearly, Retaining strategy 1 allows more information to be kept, while it also requires more storage space. Moreover, as will be seen later, when coupled with the sample size schedule required for the algorithm to achieve convergence, the amount of required computation can grow quite rapidly. On the other hand, Retaining strategy 2 only keeps the "best" solution found in each region. The upsides are that it does not require as much storage space as Retaining strategy 1 and that the amount of computation needed to estimate the optimal solution is reduced. However, Retaining strategy 2 inevitably loses useful information because only partial information is collected in the partitioning process.
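The two strategies can be sketched as follows (illustrative code; representing a sampled solution as a `(solution, reliability_estimate)` tuple is our own assumption):

```python
# Sketch of the two retaining strategies. `samples` maps each region index j
# to its list of (solution, estimated reliability) pairs.

def retain_all(retained, samples):
    """Strategy 1: keep every sampled solution from every region."""
    for sols in samples.values():
        retained.extend(sols)

def retain_best(retained, samples):
    """Strategy 2: keep only the best solution of each region."""
    for sols in samples.values():
        retained.append(max(sols, key=lambda s: s[1]))  # s = (solution, R_hat)

samples = {1: [("a", 0.90), ("b", 0.95)], 2: [("c", 0.85)]}
kept1, kept2 = [], []
retain_all(kept1, samples)   # 3 solutions kept
retain_best(kept2, samples)  # [("b", 0.95), ("c", 0.85)]
```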

Since the optimal solution is estimated by the best solution in the retaining set, in order to ensure that the truly best solution can be correctly identified, PSORO requires an appropriate sample size schedule, i.e., a sequence of positive integers {N1, N2, . . . , Nk} representing the sample sizes of the solutions at iterations 1, 2, . . . , k. Specifically, for any iteration k, Nk samples are required for all solutions kept in the retaining set; that is, Nk samples are taken for the newly sampled solutions, while for solutions visited in past iterations, where a certain number of samples were already taken, only Nk − Nk−1 extra samples are needed. In order for PSORO to achieve convergence, the sample size schedule is required to satisfy

Σ_{k=1}^{∞} α^{Nk} < ∞ for all α ∈ (0, 1).   (8)
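Condition (8) essentially requires Nk to grow faster than logarithmically in k: a linear schedule satisfies it, while a logarithmic schedule does not for α close to 1. A small numerical illustration (our own sketch, not from the paper):

```python
# Partial sums of sum_k alpha**N_k for two candidate schedules.
import math

def partial_sum(schedule, alpha, K):
    return sum(alpha ** schedule(k) for k in range(1, K + 1))

linear = lambda k: k                              # N_k = k: satisfies (8)
logarithmic = lambda k: math.ceil(math.log(k + 1))  # N_k ~ log k: fails (8)

# Linear schedule: the partial sums converge to alpha/(1 - alpha).
s = partial_sum(linear, 0.9, 1000)
# s is close to 0.9 / 0.1 = 9.0

# Logarithmic schedule: terms decay like k**log(alpha), which is not
# summable for alpha >= 1/e, so partial sums keep growing.
```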

5. Convergence analysis

In this section, we conduct a convergence analysis and show that PSORO converges to the truly global optimum w.p.1. The only assumption that PSORO relies on is the strong law of large numbers (SLLN), an assumption commonly applied to stochastic systems. While PSORO does not make stringent assumptions, the price to pay is that the algorithm may need many computations when the partitioning structure cannot identify the truly most promising region. This situation may arise due to random noise, simply bad luck, and/or the objective function values of feasible solutions being very close. In what follows, we only consider the algorithm in which Retaining strategy 2 is adopted, because the solutions kept when applying Retaining strategy 2 are a subset of those kept when applying Retaining strategy 1. Therefore, if the algorithm with Retaining strategy 2 can be proved to converge, convergence of the algorithm with Retaining strategy 1 necessarily follows. To begin with, let the indicator process be

Ik = 1 if R̂IS(t0; x̂_k*) ≥ max_{x∈Λ_k\{x̂_k*}} R̂IS(t0; x), and Ik = 0 otherwise.

Lemma 1 (Dai, 1996). If R̂IS(t0; x) is an average of i.i.d. random variables with a (nondegenerate) normal distribution, then there exists a positive number γ > 0 such that

Pr[Ik = 1] = 1 − O(Nk^{−1/2} e^{−γNk}) and Pr[Ik = 0] = O(Nk^{−1/2} e^{−γNk}).   (9)

Lemma 1 shows that the rate of convergence of the indicator process is exponential in the case of averaging i.i.d. normal random variables, and the bound given in Lemma 1 does not depend on the solutions. For any iteration k, let x_k* be the solution that has the truly largest objective function value in the retaining set Λ_k. Lemma 2 guarantees that the probability that the estimated optimal solution has the truly largest objective function value in Λ_k converges to one when the growth of Nk follows the sample size schedule given in Eq. (8). The proof is similar to that of Lemma 2 of Chang (2012) and is omitted here.

Lemma 2. Let {Nk}_{k=1}^{∞} be a sequence satisfying Σ_{k=1}^{∞} α^{Nk} < ∞ for all α ∈ (0, 1). Then, as k → ∞,

Pr{x̂_k* ∈ arg max_{x∈Λ_k} R̂IS(t0; x)} = 1.   (10)

Suppose the truly optimal solution of Problem (1) is x∗ . In Lemma 3, we prove that for any iteration k, the probability that x∗ will be sampled is strictly positive. The proof is given in the appendix.


Fig. 4. Simple network.

Fig. 5. Complex network.

Lemma 3. For any iteration k, the probability that the truly optimal solution x* is sampled is strictly positive, i.e., Pr(PSORO samples x* at iteration k) > 0.

With Lemma 3, we are ready to show the convergence of PSORO. The proof is given in the appendix.

Theorem 2. Suppose Σ_{k=1}^{∞} α^{Nk} < ∞ for all α ∈ (0, 1). If PSORO generates a sequence {x̂_k*}_{k=1}^{∞}, then as k → ∞,

R(t0; x̂_k*) → R(t0; x*) w.p.1.   (11)

Table 2
Parameter setting for 5 scenarios.

            Scenario 1    Scenario 2    Scenario 3    Scenario 4    Scenario 5
nmax,i      [1,2,1,2,1]   [1,2,1,2,1]   [1,2,1,2,1]   [2,1,1,2,1]   [1,2,1,1,2]
C           600           700           800           600           600
W           700           800           900           700           700

Table 3 Component choices for simple network.

(11)

Component choices 1

6. Numerical experiments In this section, we conduct an extensive numerical experiment to demonstrate the efficacy and efficiency of PSORO. Two test problems are used. One is based on a simple network (called the “simple test problem”) and the other is based on a complex network (called the “complex test problem”). Because the optimal solution of the simple test problem is known, the efficacy of PSORO, i.e., whether the algorithm is able to find the optimal or nearly optimal solution under a given computational budget, can be easily verified. Moreover, the efficiency gains brought by IS can also be evaluated. Specifically, to understand the efficiency of PSORO, 5 scenarios based on the simple test problem are generated and used to compare the performances of PSORO and three existing algorithms, including Tabu Search (TS) (Kulturel-Konak, Smith, & Coit, 2003), Discrete Particle Swarm Optimization (DPSO)(Jin, Cheng, Yan, & Zhang, 2007) and simulated annealing (SA)(Chambaria, Najafib, Rahmatia, & Karimi, 2013). Finally, PSORO and the same three competing algorithms are applied to solve the complex test problem and their performances are compared. It is worth noting that the three competing algorithms are designed to solve RAP under the situation in which the objective function is analytically available, which may not be the case for the simple or complex test problems used in this section. In order to construct a fair basis for comparison, regardless of whether the simple or complex test problems are used, the proposed MCS approach given in Section 3 is always applied to estimate the system reliability with respect to different redundancy allocation choices. 6.1. Test problems The simple and complex networks associated with the test problems are shown in Figs. 4 and 5, respectively. For the simple network, 5 subsystems are in a series or parallel relationship.

Table 3. Component choices for the simple network (entries are Pij / W / C; "–" = choice not available).

Subsystem   Choice 1         Choice 2          Choice 3           Choice 4
1           0.55 / 40 / 20   0.65 / 120 / 60   0.75 / 200 / 100   –
2           0.65 / 50 / 30   0.85 / 110 / 50   –                  –
3           0.75 / 60 / 40   –                 –                  –
4           0.55 / 50 / 20   0.75 / 130 / 70   –                  –
5           0.55 / 40 / 20   0.65 / 120 / 60   0.75 / 230 / 130   0.85 / 240 / 150

Table 4. Parameter settings for the complex network.

Parameter   s   nmax,i          mi              C     W
Setting     6   [1,2,1,3,1,2]   [2,2,3,4,3,1]   800   900

Table 5. Component choices for the complex network (entries are Pij / W / C; "–" = choice not available).

Subsystem   Choice 1         Choice 2          Choice 3           Choice 4
1           0.55 / 50 / 20   0.75 / 130 / 70   –                  –
2           0.65 / 50 / 30   0.85 / 110 / 50   –                  –
3           0.75 / 60 / 40   –                 –                  –
4           0.55 / 40 / 20   0.65 / 120 / 60   0.75 / 230 / 130   0.85 / 240 / 150
5           0.55 / 40 / 20   0.65 / 120 / 60   0.75 / 200 / 100   –
6           0.75 / 60 / 40   –                 –                  –
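A candidate redundancy allocation can be checked against the cost and weight budgets of Table 4 using the per-choice data of Table 5. The sketch below assumes an encoding in which x[i][j] counts the copies of choice j allocated to subsystem i; this encoding, the helper name, and the sample allocation are illustrative, not taken from the paper.

```python
# Cost (C) and weight (W) of each available component choice per subsystem,
# transcribed from Table 5 (unavailable choices omitted).
COST = [[20, 70], [30, 50], [40], [20, 60, 130, 150], [20, 60, 100], [40]]
WEIGHT = [[50, 130], [50, 110], [60], [40, 120, 230, 240], [40, 120, 200], [60]]
C_MAX, W_MAX = 800, 900  # system budgets from Table 4

def is_feasible(x):
    """x[i][j] = number of copies of choice j allocated to subsystem i
    (an illustrative encoding; the paper's exact encoding may differ)."""
    cost = sum(n * c for xs, cs in zip(x, COST) for n, c in zip(xs, cs))
    weight = sum(n * w for xs, ws in zip(x, WEIGHT) for n, w in zip(xs, ws))
    nonempty = all(sum(xs) >= 1 for xs in x)  # every subsystem gets a component
    return cost <= C_MAX and weight <= W_MAX and nonempty

x = [[2, 0], [1, 1], [2], [1, 0, 0, 0], [0, 1, 0], [2]]
is_feasible(x)  # -> True (total cost 360 <= 800, total weight 660 <= 900)
```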

As far as the complex network is concerned, the structure is more general, with the 6 subsystems in a logical relationship. Table 2 summarizes the 5 scenarios created from the simple test problem, and the corresponding parameter settings are given in Table 3. For the complex test problem, the parameter settings are shown in Tables 4 and 5. For both the simple and complex test problems, the state of each component choice in each subsystem follows a Bernoulli(Pij) random variable, where Pij is the probability that the jth component

of subsystem i is functional. The application of the IS technique decreases Pij by half, i.e., from Pij to Pij/2. We let n(k) = 31 − 5(d(k) − 1) for the simple test problem and n(k) = 46 − 5(d(k) − 1) for the complex test problem. For both the simple and complex test problems, we let Nk = [√k], where [·] denotes the greatest integer function.

6.2. Efficiency gain brought by IS

We first evaluate the efficiency gain brought about by IS by comparing the number of simulation observations consumed by PSORO in finding the optimal solution of Scenario 1 with and without IS. When the IS technique is applied, the number of simulation observations required for PSORO to find the truly optimal solution is reduced by 46%. This shows that the use of IS can bring substantial efficiency gains for the algorithm in solving GRAP. Fig. 6 shows the sample paths of PSORO with and without IS; the gap between the two sample paths represents the efficiency enhancement due to the application of IS.

Fig. 6. Efficiency improvement due to IS. [Plot: system reliability versus number of observations, sample paths with and without IS.]

6.3. Efficacy and efficiency of PSORO

The efficacy and efficiency of PSORO are evaluated by comparing the performance of PSORO, TS, DPSO, and SA in optimizing GRAP on both the simple and complex networks. We use the following "optimality gap" (OG) to evaluate the performance of each algorithm:

OG = |R(t0; x̂) − R(t0; x∗)| / R(t0; x∗),    (12)

where x̂ and x∗ correspond to the best solution found when the algorithm terminates and the truly optimal solution, respectively. When the OG is 0 or close to 0, the algorithm has found the optimal or a nearly optimal solution. In total, 15 replications were run for each of the 5 scenarios for each algorithm (PSORO, TS, DPSO, and SA). Regardless of algorithm and scenario, a run was terminated either when the optimal solution was found or when the computational budget, namely 150,000 observations, was exhausted.

Table 6 gives the OG averaged over 15 replications for the 5 scenarios of the simple test problem (the percentage of replications in which the algorithm found the truly optimal solution is given in parentheses).

Table 6. Performance measures for 5 scenarios of the simple test problem.

Scenario   PSORO         TS             DPSO          SA
1          0.011 (20%)   0.018 (13%)    0.035 (0%)    0.024 (13%)
2          0.011 (20%)   0.012 (20%)    0.036 (0%)    0.081 (0%)
3          0.004 (33%)   0.005 (6.7%)   0.021 (0%)    0.019 (33%)
4          0.008 (33%)   0.021 (27%)    0.030 (20%)   0.025 (6.7%)
5          0.012 (13%)   0.031 (6.7%)   0.013 (0%)    0.028 (0%)

As shown in Table 6, PSORO found the optimal or a nearly optimal solution to GRAP across all 5 scenarios. Moreover, PSORO outperformed the other three algorithms on the selected instances; i.e., the averaged OG of PSORO is much smaller than that of the other three algorithms. Detailed information about R(t0; x̂) for each scenario and each algorithm is given in the online appendix (the standard deviation is shown in parentheses). Note that boldface signifies instances in which an algorithm found the truly optimal solution in some replication. To further verify that the four competing algorithms differ significantly in performance, we conducted a two-way ANOVA in which the two factors correspond to algorithm and scenario, respectively. The ANOVA results show that algorithm is a significant factor with p-value < 0.005; that is, PSORO outperforms the other three algorithms on the selected scenarios in a statistical sense. However, we do not claim that PSORO is superior to the other three competing algorithms on every class of problems. In fact, according to the no-free-lunch theorems of Wolpert and Macready (1997), an algorithm that is effective on one class of problems is guaranteed to be ineffective on another class.

Finally, we applied the four competing algorithms to solve the complex test problem and compared their performances. As in the case of the simple test problem, each algorithm was run for 15 replications. Because the truly optimal solution of the complex test problem is unknown, the objective function value of the final solution found by each algorithm had to be estimated. Table 7 reports the mean and standard deviation of these objective function values over the 15 replications. PSORO found solutions of markedly better quality; namely, the mean of its objective function values is higher than that of the other three algorithms. Detailed information about the estimates of R(t0; x̂) for each algorithm is given in the online appendix (the standard deviation is shown in parentheses).

Table 7. Performance measures for the complex network problem.

                     PSORO      TS         DPSO       SA
Mean                 9.39e−01   9.28e−01   8.83e−01   8.89e−01
Standard deviation   6.82e−03   5.22e−03   3.74e−02   4.56e−02

7. Conclusions

In this paper, we proposed a partitioning-based method, called PSORO, to solve GRAP effectively and efficiently. Equipped with the partitioning strategy, PSORO can focus computational effort on the most promising region(s), where the optimal solution is likely to reside, and thus obtains significant computational gains compared with other existing algorithms. The application of the importance sampling technique further yields significant savings in the number of simulation observations, which is especially important for rare-event simulation such as failures of highly reliable systems. We proved that PSORO converges to the truly optimal solution w.p.1. An extensive numerical study verified both the efficacy and efficiency of PSORO: the algorithm can find the optimal or a nearly optimal solution of GRAP under a given computational budget and, moreover, it outperforms the other three existing algorithms on the selected scenarios.

Two directions can be pursued in the future. First, when the number of nodes and/or links in the network is large, estimating the network reliability and determining the optimal redundancy allocation may require excessive simulation time. An efficient screening method should be developed to identify the important subsystems so as to enhance computational efficiency. Second, the sample-size schedule employed for the solutions sampled in each iteration should be further investigated for the sake of practical use, while still ensuring algorithm convergence. For example, the sample size may be adapted to the quality of solutions so as to balance estimation accuracy and computational efficiency.

Acknowledgments

This research is partially supported by Taiwan Ministry of Science and Technology grant 104-2628-E-007-004-MY3.

Appendix A

A1. Acronyms

See Table 8.

Table 8. Acronyms.

RO     Reliability optimization
RAP    Redundancy allocation problem
GRAP   Generalized redundancy allocation problem
GA     Genetic algorithm
NN     Neural network
PSO    Particle swarm optimization
MCS    Monte Carlo simulation
TS     Tabu search
LHS    Latin hypercube sampling
NP     Nested partitions method
SO     Simulation optimization
CMC    Crude Monte Carlo
MP     Minimum path
IS     Importance sampling

A2. Proof of Lemma 3

Proof. Without loss of generality, suppose there are d decision variables, and suppose that in iteration k, nk solutions are sampled in each region. Recall that PSORO applies the LHS procedure to sample solutions in each region. Therefore,

Pr(PSORO samples x∗ at iteration k) ≥ Pr(PSORO samples x∗ at iteration 1) ≥ 1/nk^d > 0,

and the lemma is concluded. □

A3. Proof of Theorem 2

Proof. By Lemma 3, for any iteration k, suppose

Pr(PSORO samples x∗ at iteration k) = p∗ > 0.

Since the solutions sampled in each iteration are independent of each other, and

Σ_{k=1}^{∞} Pr(PSORO samples x∗ at iteration k) = Σ_{k=1}^{∞} p∗ = ∞,


by the second Borel–Cantelli lemma (Billingsley, 1995), we have

Pr(PSORO samples x∗ infinitely often) = 1.

Further, by Lemma 2, as k → ∞, x̂k∗ will be the truly best solution in the retaining set. Combined with the fact that x∗ will be in the retaining set infinitely often, it can be concluded that when k is sufficiently large, x∗ must be in the retaining set and remain the estimated optimal solution. □

References

Amaran, S., Sahinidis, N., Sharda, B., & Bury, S. (2016). Simulation optimization: A review of algorithms and applications. Annals of Operations Research, 240(1), 351–380.
Billingsley, P. (1995). Probability and measure. New York: John Wiley and Sons.
Chambari, A., Najafi, A. A., Rahmati, S. H. A., & Karimi, A. (2013). An efficient simulated annealing algorithm for the redundancy allocation problem with a choice of redundancy strategies. Reliability Engineering & System Safety, 119, 158–164.
Chang, K.-H. (2012). Stochastic Nelder–Mead simplex method: A new globally convergent direct search method for simulation optimization. European Journal of Operational Research, 220(3), 684–694.
Chang, K.-H. (2015). A direct search method for unconstrained quantile-based simulation optimization. European Journal of Operational Research, 246, 487–495.
Chang, K.-H. (2016). A quantile-based simulation optimization model for sizing hybrid renewable energy systems. Simulation Modelling Practice and Theory, 66, 94–103.
Chang, K.-H., Hong, L. J., & Wan, H. (2013). Stochastic trust-region response-surface method (STRONG): A new response-surface framework for simulation optimization. INFORMS Journal on Computing, 25(2), 230–243.
Chang, K.-H., Li, M.-K., & Wan, H. (2014). Combining STRONG and screening designs for large-scale simulation optimization. IIE Transactions, 46, 357–373.
Chern, M. S. (1992). On the computational complexity of reliability redundancy allocation in a series system. Operations Research Letters, 11, 309–315.
Coit, D. W., & Smith, A. E. (1996a). Reliability optimization of series-parallel systems using a genetic algorithm. IEEE Transactions on Reliability, 45(2), 254–260.
Coit, D. W., & Smith, A. E. (1996b). Solving the redundancy allocation problem using a combined neural network/genetic algorithm approach. Computers and Operations Research, 23, 515–526.
Dai, L. (1996). Convergence properties of ordinal comparison in the simulation of discrete event dynamic systems. Journal of Optimization Theory and Applications, 91(2), 363–388.
Dai, Y. S., Xie, M., & Wang, X. (2007). A heuristic algorithm for reliability modeling and analysis of grid systems. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, 37(2), 189–200.
Horst, R., & Tuy, H. (2003). Global optimization: Deterministic approaches (3rd ed.). New York: Springer.
Jin, Y.-X., Cheng, H.-Z., Yan, J.-Y., & Zhang, L. (2007). New discrete method for particle swarm optimization and its application in transmission network expansion planning. Electric Power Systems Research, 77, 227–233.
Kulturel-Konak, S., Smith, A. E., & Coit, D. W. (2003). Efficiently solving the redundancy allocation problem using tabu search. IIE Transactions, 35(6), 515–526.
Kuo, W., & Prasad, V. R. (2000). An annotated overview of system-reliability optimization. IEEE Transactions on Reliability, 49, 176–187.
Ouzineb, M., Nourelfath, M., & Gendreau, M. (2008). Tabu search for the redundancy allocation problem of homogenous series-parallel multistate systems. Reliability Engineering and System Safety, 93(8), 1257–1272.
Sahoo, L., Bhunia, A. K., & Kapur, P. K. (2012). Genetic algorithm based multi-objective reliability optimization in interval environment. Computers & Industrial Engineering, 62(1), 152–160.
Shi, L., Meyer, R. R., Bozbay, M., & Miller, A. J. (2004). A nested partitions framework for solving large-scale multicommodity facility location problems. Journal of Systems Science and Systems Engineering, 13(2), 158–179.
Shi, L., & Ólafsson, S. (2000). Nested partitions method for global optimization. Operations Research, 48(3), 390–407.
Spall, J. C. (2003). Introduction to stochastic search and optimization: Estimation, simulation, and control. New York: John Wiley and Sons.
Tang, Z. B. (1994). Adaptive partitioned random search to global optimization. IEEE Transactions on Automatic Control, 39, 2235–2244.
Tekiner-Mogulkoc, H., & Coit, D. W. (2011). System reliability optimization considering uncertainty: Minimization of the coefficient of variation for series-parallel systems. IEEE Transactions on Reliability, 60(3), 667–674.
Wolpert, D. H., & Macready, W. G. (1997). No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, 1, 67–82.
Yeh, W.-C., Lin, Y.-C., Chung, Y. Y., & Chih, M. (2010). A particle swarm optimization approach based on Monte Carlo simulation for solving the complex network reliability problem. IEEE Transactions on Reliability, 59(1), 212–221.
Zhao, R., & Liu, B. (2003). Stochastic programming models for general redundancy-optimization problems. IEEE Transactions on Reliability, 52(2), 181–191.
Zia, L., & Coit, D. W. (2010). Redundancy allocation for series-parallel systems using a column generation approach. IEEE Transactions on Reliability, 59(4), 706–717.
Zou, D., Gao, L., Wu, J., Li, S., & Li, Y. (2010). A novel global harmony search algorithm for reliability problems. Computers & Industrial Engineering, 58, 307–316.