A novel boundary swarm optimization method for reliability redundancy allocation problems


Accepted Manuscript

A Novel Boundary Swarm Optimization Method for Reliability Redundancy Allocation Problems
Wei-Chang Yeh

PII: S0951-8320(16)30676-7
DOI: 10.1016/j.ress.2018.02.002
Reference: RESS 6060

To appear in: Reliability Engineering and System Safety

Received date: 25 October 2016
Revised date: 19 January 2018
Accepted date: 1 February 2018

Please cite this article as: Wei-Chang Yeh, A Novel Boundary Swarm Optimization Method for Reliability Redundancy Allocation Problems, Reliability Engineering and System Safety (2018), doi: 10.1016/j.ress.2018.02.002

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

Highlights

• A novel self-boundary search (SBS) and a two-variable update mechanism (UM2) are proposed.
• The performance of the proposed BSO is ascertained by comparing the results with existing algorithms.
• The proposed BSO is the best soft computing algorithm in the RRAP.

A Novel Boundary Swarm Optimization Method for Reliability Redundancy Allocation Problems

Wei-Chang Yeh
Integration and Collaboration Laboratory, Department of Industrial Engineering and Engineering Management, National Tsing Hua University
[email protected]

Abstract – A new methodology called boundary simplified swarm optimization (BSO) is proposed by integrating a novel self-boundary search (SBS) and a two-variable update mechanism (UM2) to improve simplified swarm optimization (SSO) in solving mixed-integer programming problems that include both discrete and continuous variables. To balance exploration and exploitation, the proposed SBS updates the current best solution (called gBest) based on boundary conditions and analytical calculations to enhance the exploitation ability of gBest; the UM2 updates the solutions that are not gBest (called non-gBest) to fix the over-exploration of the SSO, in which all variables are updated without exploiting the information of the neighborhood area. The performance of the proposed BSO is ascertained by comparing the results with existing algorithms using four reliability redundancy allocation benchmark problems from the existing literature.

Keywords: Reliability Redundancy Allocation Problems; Update Mechanism; Simplified Swarm Optimization; Mixed-integer Programming

1. Introduction

Optimization problems are significant and common in various types of real-life applications, and researchers have considered numerous optimization problems from many viewpoints in past decades [1-10]. The mixed-integer programming problem is a classical and useful optimization problem. Some of the decision variables in a mixed-integer programming problem are limited to integer values, which can make the problem NP-complete [1-5]. The mixed-integer programming problem is a popular research area, and numerous approaches have been developed for it [1-10].

The reliability redundancy allocation problem (RRAP) is a famous mixed-integer programming problem [24-34]. An RRAP must solve for both the number of redundant components (the redundancy variables), which are integer variables, and the reliability of each component (the reliability variables), which are floating-point variables, to maximize the entire system reliability under cost, weight, and volume limitations [24-34].

Among existing methods, analytical methods such as branch-and-bound [2, 3], which try to obtain optimal solutions, cannot solve mixed-integer programming problems modeled with a few hundred integer variables in an acceptable amount of time. To overcome this difficulty, the iterative heuristic method (HM) and the surrogate constraints algorithm (SCA) were used to solve the RRAP by Kuo et al. [26] and Hikita et al. [27], respectively. Xu et al. provided a new method to improve the efficiency and the solution quality of the HM proposed in [25] for solving large practical problems. However, these three methods require derivatives for all nonlinear constraint functions, which are not derived easily due to the high computational complexity of the RRAP. The focus has thus shifted to methods that obtain good-quality solutions [37].

Since the 1990s, soft computing, primarily involving fuzzy sets, neural networks, evolutionary computation, swarm intelligence, and rough sets, has been applied to difficult optimization problems to efficiently obtain optimal or near-optimal solutions [6-35]. Moreover, there is increased interest in resembling and simulating natural phenomena in soft computing to solve larger problems in science and technology [6-35]. For example, Yokota et al. [21] and Hsieh et al. [28] applied genetic algorithms (GA), Chen proposed the first artificial immune algorithm (IA) [29], Yeh and Hsieh proposed the first artificial bee colony algorithm (ABC) [30], and Huang proposed the first SSO and the first PSSO (combining both PSO and SSO) [24] for the RRAP. The ABC, IA, SSO, and PSSO algorithms are all based on the penalty function to search over promising feasible and infeasible regions. Among these algorithms, the existing best solutions in the literature are all obtained from PSSO [24].

SSO was first proposed by Yeh as a new population-based soft computing method that integrates both swarm intelligence and evolutionary computation [11]. SSO is a latecomer compared to existing major soft-computing techniques such as genetic algorithms, particle swarm optimization (PSO), differential evolution (DE), and ant colony optimization. However, SSO has had much success in solving multiple multi-level redundancy allocation problems in series systems, even outperforming PSO [12]. Since its first application, SSO has been widely used in optimization problems, including: solving the disassembly sequencing problem using the first self-adaptive SSO [16, 17]; training artificial neural networks for the prediction of time-series data with the first continuous SSO [15]; solving high-dimensional multivariable and multimodal numerical continuous benchmark functions using an improved continuous SSO [9]; solving the parallel-machine scheduling problem [10]; solving the dispatch problem using SSO hybridized with the gradient method in [18] and with the bacterial foraging algorithm in [19]; solving the redundancy allocation problem using an orthogonal simplified swarm optimization; detecting network intrusion in network security [20]; solving continuous benchmark functions by revising SSO with macroscopic indeterminacy [21]; allocating and designing RFID networks in health care management [23]; solving continuous benchmark functions using SSO hybridized with the glowworm swarm optimization; and solving the RRAP using SSO hybridized with PSO [24]. In addition, computational results indicate that SSO and its variants exhibit better efficiency and effectiveness (solution quality) than PSO, GA, EDA, and ANN [9-24].

SSO is very effective for problems with discrete variables, but it can falter on problems with floating-point variables, and there is always a need to solve continuous optimization problems with floating-point variables. Therefore, a novel SSO method called boundary swarm optimization (BSO) is proposed in this paper that replaces the original SSO update mechanism with a novel self-boundary search (SBS) and a two-variable update mechanism (UM2) to solve the RRAP.

The paper is organized as follows. Section 2 provides descriptions of SSO, which is the basis of the proposed BSO, and also briefly introduces the RRAP along with its related penalty function and the four RRAP benchmarks that are tested in Section 5 to demonstrate the performance of the proposed BSO. Section 3 presents the proposed UM2, which is used to update non-gBest solutions, i.e., all solutions except the best one, in each generation. In Section 4, the proposed SBS is discussed, with an analytical method to update the gBest, which is the best solution obtained so far by the BSO, along with the complete BSO for solving the mixed-integer programming problem. A comprehensive comparative study on the performance of the proposed BSO and the existing methods based on four RRAP benchmarks is presented in Section 5. Finally, conclusions and a discussion are offered in Section 6.

2. Background of SSO and RRAP

The proposed BSO is a new revision of SSO and is verified with RRAP benchmarks. Before discussing the proposed BSO, the required notations and the basic SSO are formally introduced in the following subsections. In addition, the general background regarding the RRAP is briefly provided.

2.1 Notations

The following notations are used in the subsequent sections:

Nvar: The number of subsystems = the number of redundancy variables = the number of reliability variables.
Nsol: The number of solutions.
Ngen: The number of generations, which determines the termination criterion of the proposed algorithm and is set based on the existing best-known algorithm proposed in [24] for the RRAP.
ρI: A random number generated from interval I.
ni, ri: The redundancy variable and the reliability variable of subsystem i, for i = 1, 2, …, Nvar.
n, r: n = (n1, n2, ..., nNvar) and r = (r1, r2, ..., rNvar).
Xsol: Xsol = (nsol, rsol) = (nsol,1, nsol,2, ..., nsol,Nvar, rsol,1, rsol,2, ..., rsol,Nvar) = (x1, x2, …, x2Nvar) is the sol-th solution, where sol = 1, 2, …, Nsol.
Psol: Psol = (p1, p2, …, p2Nvar) is called the pBest of the sol-th solution; it is the best ancestor of the sol-th solution in its own evolution history, where sol = 1, 2, …, Nsol.
PgBest: PgBest = (g1, g2, …, g2Nvar) is called the gBest, which is the best solution among all solutions.
Rs(n, r): The reliability of the system under n and r.
Rp(n, r): The penalized reliability of the system under n and r.
gv(n, r): The volume constraint under n and r.
gc(n, r): The cost constraint under n and r.
gw(n, r): The weight constraint under n and r.
F(•): The fitness function of •.
Vub, Cub, Wub: The upper bounds of the allowed volume, cost, and weight, respectively.

2.2 The SSO

In SSO, all solutions are randomly initialized in the first generation [9-24]. In subsequent generations, the SSO updates each variable in a new solution X*sol = (x*1, x*2, …, x*2Nvar) by setting it to the value at the same position of Xsol, pBest, gBest, or a random feasible value, based on the given probabilities cg, cp, and cw, as follows [9-24]:

x*var =
    gvar, if ρ[0,1] ∈ [0, cg)
    pvar, if ρ[0,1] ∈ [cg, cg + cp)
    xvar, if ρ[0,1] ∈ [cg + cp, cg + cp + cw)
    x,    if ρ[0,1] ∈ [cg + cp + cw, 1]    (1)

Psol =
    X*sol, if F(X*sol) is better than F(Psol)
    Psol,  otherwise    (2)

gBest =
    sol,   if F(X*sol) is better than F(PgBest)
    gBest, otherwise    (3)

The pBest (Psol) and gBest values are updated based on Eqs. (2) and (3) whenever better values are generated.

SSO PROCEDURE [9, 11, 12]
STEP S0. Generate Xsol randomly, calculate F(Xsol), let Psol = Xsol and gen = 1, and find gBest such that F(Xsol) ≤ F(XgBest) for sol = 1, 2, …, Nsol.
STEP S1. Let sol = 1.
STEP S2. Update Xsol based on Eq. (1).
STEP S3. If F(Psol) < F(Xsol), let Psol = Xsol; if F(PgBest) < F(Psol), let gBest = sol.
STEP S4. If sol < Nsol, let sol = sol + 1 and go to STEP S2.
STEP S5. If gen < Ngen, let gen = gen + 1 and go to STEP S1; otherwise, halt.
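The variable-update rule of Eq. (1) can be sketched as follows. This is a minimal Python illustration (the paper's implementation is in C++), and the parameter values and bounds used below are assumptions for the example, not benchmark data:

```python
import random

def sso_update(x_var, p_var, g_var, cg, cp, cw, lower, upper):
    """Update one variable per Eq. (1): copy from gBest, pBest, or the
    current solution, or draw a random feasible value."""
    rho = random.random()
    if rho < cg:
        return g_var                      # rho in [0, cg): copy gBest
    if rho < cg + cp:
        return p_var                      # rho in [cg, cg+cp): copy pBest
    if rho < cg + cp + cw:
        return x_var                      # keep the current value
    return random.uniform(lower, upper)   # random feasible value
```

For example, with cg = 0.5, cp = 0.25, and cw = 0.2, a variable keeps its current value with probability 0.2 and is re-randomized with probability 0.05.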

2.3 The RRAP and its four benchmarks

The RRAP belongs to the mixed-integer programming problem category, and its general model can be defined as follows [24-35]:

Maximize    Rs(n, r)    (4)
Subject to  gv(n, r) ≤ Vub    (5)
            gc(n, r) ≤ Cub    (6)
            gw(n, r) ≤ Wub    (7)

Eq. (4) is the goal of the RRAP, which is to maximize the overall system reliability Rs(n, r) by determining n = (n1, n2, ..., nNvar) and r = (r1, r2, ..., rNvar), where Nvar = 5 in the four benchmarks discussed below. Eqs. (5)-(7) are nonlinear constraints on volume, cost, and weight, respectively.

As in recently published papers [13, 24, 30, 32], the following penalty function Rp(n, r) replaces the system reliability Rs(n, r) in this work if any constraint (e.g., volume, cost, or weight) is violated:

Rp(n, r) = Rs(n, r) · [ min( Vub/gv(n, r), Cub/gc(n, r), Wub/gw(n, r) ) ]^3    (8)

Eq. (8) steers solutions toward unexplored regions near the constraint boundary of the solution space to identify an optimal or near-optimal solution. The RRAP is becoming an increasingly powerful tool in the initial stages prior to the planning, design, and control of systems. Four benchmarks [23-35] are well known in the RRAP, and there are comprehensive related works in the existing literature that evaluate alternative soft computing approaches, such as the artificial bee colony algorithm [30], genetic algorithms [28, 32], the ant system [33], the immune algorithm [29], the surrogate constraints algorithm [27], Particle SSO [24], and combinations of several heuristics [25, 26, 34, 35]. The results obtained from Particle SSO, which is based on SSO [11, 12] and particle swarm optimization, outperformed the results presented by the other approaches [24]. Therefore, we only compare the proposed BSO with Particle SSO. The corresponding mixed-integer nonlinear programming models, related data, and network structures for these four RRAP benchmarks, including the series system in Fig. A1, the series-parallel system in Fig. A2, the complex (bridge) system in Fig. A3, and the overspeed protection of a gas turbine system, are outlined in Appendix A.
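The penalty scheme of Eq. (8) can be sketched as below (a Python illustration; the constraint values and bounds are supplied by the benchmark, and the numbers used in the usage comment are assumptions):

```python
def penalized_reliability(rs, gv, gc, gw, v_ub, c_ub, w_ub):
    """Eq. (8): scale Rs by the cubed minimum bound/usage ratio when any
    constraint is violated; feasible solutions are returned unchanged."""
    if gv <= v_ub and gc <= c_ub and gw <= w_ub:
        return rs
    return rs * min(v_ub / gv, c_ub / gc, w_ub / gw) ** 3
```

A solution that uses twice the allowed cost, for instance, keeps only (1/2)^3 = 12.5% of its raw reliability as fitness, so mildly infeasible solutions near the boundary remain competitive while strongly infeasible ones are heavily discounted.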

3. The proposed novel UM2 for non-gBest solutions

This section introduces UM2, which is one of the two major innovations in the proposed BSO. The subscript 2 in UM2 denotes that two variables of each non-gBest solution are selected randomly to be updated in UM2. These two variables are of two different types: one is a redundancy variable and the other is a reliability variable.

The proposed UM2 is based on the original UM (named UMa) in the SSO to update the component numbers in the BSO. UMa is very powerful for problems with discrete variables, but it is weaker for problems with floating-point variables. The RRAP includes both discrete variables (the redundancy variables) and floating-point variables (the reliability variables). Hence, in the proposed UM2, UMa is adapted for the redundancy variables and modified for the reliability variables of each non-gBest solution. Thus, the proposed UM2 retains the advantage of UMa in discrete-variable problems and is also suitable for floating-point variables.

UMa is an effective UM in exploration because it updates all of the variables of each solution. However, it continues to explore undiscovered solution space even when the optimum is a neighbor of the current solutions (i.e., it lacks exploitation ability). Therefore, the proposed UM2 only updates one discrete variable and one floating-point variable in each solution, as an adjustment to the trade-off between exploration and exploitation, as follows:

x*j =
    gj + ℓ, if ρ[0,1] ∈ [0, Cg)
    pj + ℓ, if ρ[0,1] ∈ [Cg, Cp)
    xj + ℓ, if ρ[0,1] ∈ [Cp, Cw)
    x,      if ρ[0,1] ∈ [Cw, 1]    (9)

ℓ =
    0,                                  if xj is a discrete variable or ρ[0,1] ≥ Cw
    0.0005 · ρ[-0.5,0.5] · gen/genBest, otherwise    (10)

where ρI is a random number generated from interval I, j = 1, 2, …, 2Nvar, gen = 1, 2, …, Ngen, genBest is the earliest generation number in which the current gBest was found, and x is a random number between the lower bound and the upper bound of the jth variable. Note that any infeasible x*j must be randomly regenerated until it is feasible.

Let Xsol = (nsol, rsol) = (nsol,1, nsol,2, ..., nsol,Nvar, rsol,1, rsol,2, ..., rsol,Nvar) be the non-gBest solution to be updated using the proposed UM2, and let psol,k and gk be the values of the kth variable in the pBest of Xsol and in gBest, respectively. The detailed procedure for the proposed UM2 to update Xsol is listed below.

UM2 PROCEDURE
STEP M0. Randomly select one discrete variable nsol,i and one floating-point variable rsol,j from Xsol.
STEP M1. Generate two random numbers ρi and ρj from the uniform distribution over [0, 1].
STEP M2. If ρi < Cg, let nsol,i = gi and go to STEP M5.
STEP M3. If ρi < Cp, let nsol,i = pi and go to STEP M5.
STEP M4. If ρi > Cw, reset nsol,i to a feasible value generated randomly.
STEP M5. If ρj < Cg, let rsol,j = gj + 0.0005 · ρ[-0.5,0.5] · gen/genBest and go to STEP M9.
STEP M6. If ρj < Cp, let rsol,j = pj + 0.0005 · ρ[-0.5,0.5] · gen/genBest and go to STEP M9.
STEP M7. If ρj < Cw, let rsol,j = rsol,j + 0.0005 · ρ[-0.5,0.5] · gen/genBest and go to STEP M9.
STEP M8. Reset rsol,j to a feasible value generated randomly.
STEP M9. Calculate F(Xsol).
STEP M10. If F(Psol) < F(Xsol), let Psol = Xsol.
STEP M11. If F(PgBest) < F(Psol), let gBest = sol.
The flowchart of the proposed UM2 is shown in Fig. 1.
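The two-variable move of STEPs M0-M8 can be sketched as follows (Python for illustration; the paper's code is in C++, and the bounds and default parameter values here are simplified assumptions):

```python
import random

def um2_update(n, r, p_n, p_r, g_n, g_r, gen, gen_best,
               cg=0.5, cp=0.75, cw=0.95,
               n_bounds=(1, 10), r_bounds=(0.5, 0.999999)):
    """Update ONE random redundancy variable and ONE random reliability
    variable of a non-gBest solution. Lists are modified in place and
    returned for convenience."""
    i = random.randrange(len(n))                      # STEP M0
    j = random.randrange(len(r))
    rho_i, rho_j = random.random(), random.random()   # STEP M1
    ell = 0.0005 * random.uniform(-0.5, 0.5) * gen / gen_best  # Eq. (10)

    if rho_i < cg:                                    # STEPs M2-M4
        n[i] = g_n[i]
    elif rho_i < cp:
        n[i] = p_n[i]
    elif rho_i >= cw:
        n[i] = random.randint(*n_bounds)              # random feasible integer

    if rho_j < cg:                                    # STEPs M5-M8
        r[j] = g_r[j] + ell
    elif rho_j < cp:
        r[j] = p_r[j] + ell
    elif rho_j < cw:
        r[j] = r[j] + ell
    else:
        r[j] = random.uniform(*r_bounds)              # random feasible reliability
    return n, r
```

The fitness evaluation and the pBest/gBest updates of STEPs M9-M11 are left to the caller, as is the regeneration of any infeasible value.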

4. The proposed novel self-boundary search for gBest

<>

This section presents the proposed novel SBS, which is another innovation in the proposed BSO and is solely used to update one of the floating-point variables in gBest. In addition, the complete BSO procedure, including the proposed SBS and UM2 (discussed in Section 3), is presented to improve the SSO for mixed-integer programming problems.

4.1 The analytical calculation

In general, there are two strategies used to increase overall system reliability: 1) increase the reliability of components, and/or 2) use redundant components in the subsystems. Based on the first strategy, we propose the following lemma:

Lemma 1: Without violating any constraints, a higher ri yields a higher overall system reliability, for i = 1, 2, …, Nvar.

Lemma 1 forms the basis of the proposed SBS. Among the three constraints in RRAP benchmarks 1-4 (see Appendix A), only the cost constraint includes reliability variables:

gc(n, r) = Σ_{i=1..5} αi · (−1000/ln ri)^βi · [ni + exp(ni/4)] ≤ Cub    (11)
gw(n, r) = gw(n)    (12)
gv(n, r) = gv(n)    (13)

Hence, changing the reliability variables does not affect the other two constraints, i.e., the weight and volume constraints.

Lemma 2: Any improvement to the values of reliability variables will not change the values of either gw(n, r) or gv(n, r).
Proof: Because gw(n, r) = gw(n) and gv(n, r) = gv(n). □

Eq. (11) becomes a mixed-integer function combining floating-point variables and integer variables after moving Cub to the left-hand side, as follows:

Gc(n, r) = gc(n, r) − Cub = Σ_{i=1..5} αi(−1000/ln ri)^βi [ni + exp(ni/4)] − Cub.    (14)

After all redundancy variables, i.e., the integer variables, are known and fixed, Eq. (14) becomes a multiple-variable continuous function; it further reduces to a one-variable continuous function if all reliability variables except rj are also known:

Gc(rj) = gc(rj) − Cub = Σ_{i=1..5} αi(−1000/ln ri)^βi [ni + exp(ni/4)] − Cub.    (15)

The following lemma shows that Gc(rj) is an increasing function, as is Rs(n, r). Hence, increasing rj increases both Rs(n, r) and Gc(rj), and it also increases Rp(n, r) if no penalty occurs.

Lemma 3: Gc(rj) is an increasing function, i.e., Gc(rj) ≤ Gc(rj*) if rj ≤ rj*.
Proof: From Eq. (15), αi > 0, and βi > 0 for all i, we have

d/drj Gc(rj) = d/drj { αj(−1000/ln rj)^βj · [nj + exp(nj/4)] }
             = [nj + exp(nj/4)] · αj βj (−1000/ln rj)^(βj−1) · (−1000) · d/drj (1/ln rj)
             = [nj + exp(nj/4)] · αj βj (−1000/ln rj)^(βj−1) · 1000 · [1/(ln rj)^2] · (1/rj) > 0.    (16)

Hence, Lemma 3 is true by the first-derivative test, which establishes the monotonicity of continuous functions. □

(1) gc(nsol, rsol)≤Clb if Gc(rsol,j)≤0, (2) gc(nsol, rsol)>Clb if Gc(rsol,j)>0,. Follows directly from Eq. (15) and Lemma 3.



AN US

Proof:

CR IP T

statements are true:

An analytical calculation is discussed below to find the maximal rj such that Gc(rj)=0 in Eq.(15) under the conditions that ni and rk are known and fixed for i, k=1,2,…,Nvar and k≠j. Lemma 5: Let ni and rk be known and fixed for i, k=1, 2, …, Nvar and k≠j. In Appendix A, the value of

M

Rs(n, r) is maximized if

ED

  nj     j (1000)   j  n j  exp( )    4   ) . rj  exp( Nvar  ni   -1000 i  )  ni  exp( )     j Cub    i ( ln ri 4   i 1  i j  

PT

From Eq. (11), we have

CE

Proof:

(17)

AC

j(

1000  j ) ln rj

Nvar nj   1000 i n  exp( )  C  i ( )  ub  j  4  ln ri i 1  i j

Nvar

 j (

1000  j )  ln rj

Cub    i ( i 1 i j

1000 i ) ln ri

n j  exp(

- 12 -

ni    ni  exp( 4 )   

ni    ni  exp( 4 )   

nj 4

)

ACCEPTED MANUSCRIPT Nvar

Cub    i (



i 1 i j

j

1000 i ) ln ri

ni    ni  exp( 4 )   

n    j  n j  exp( j )  4  

 (ln rj )  (1000)

(Note that lnrj<0 since 0
 (ln rj ) 

N var

j

Cub    i ( i 1 i j

1000 i ) ln ri

ni    ni  exp( 4 )   

CR IP T

n   (1000)   j  j  n j  exp( j )  4  

AN US

  nj     j (1000)   j  n j  exp( )    4   ) .  rj  exp( Nvar  ni   1000 i  )  ni  exp( )     j Cub    i ( ln ri 4   i 1  i j  

Thus, this lemma is true.

(18)



ED

proposition of the proposed SBS:

M

From Lemmas 1-5, we create the following important theorem which forms the fundamental

* Theorem 1: Let rsol=(rsol,1, rsol,2, ..., rsol,Nvar), rsol =rsol except that the value of the jth variable is changed

PT

* * from rsol,j to rsol,j in rsol , and (nsol, rsol) be a feasible solution, i.e., all three constraints are

CE

* satisfied: gv(nsol, rsol)≤Vlb, gc(nsol, rsol)≤Clb, and gw(nsol, rsol)≤Wlb. If Gc( rsol,j )=0, the

AC

following statements are true:

(1) rsol,j≤ rsol,j *

  n     j (1000)   j  j  n j  exp( )    4   ) ,  exp( Nvar  n  -1000 i  )  ni  exp( i )     j Cub    i ( ln ri 4   i 1  i j  

* (2) (nsol, rsol ) is also a feasible solution,

* (3) Rs(nsol, rsol)
* (4) Rp(nsol, rsol)
- 13 -

ACCEPTED MANUSCRIPT Proof: * (1) Since (nsol, rsol) be a feasible solution, Gc( rsol,j )=0, and Gc is an increasing function,

* Lemmas 3 and 4, we have Gc(rsol,j)≤0=Gc( rsol,j ) and this statement is true.

(2) Since gv(nsol, rsol)≤Vlb from the assumption of Theorem 1, gv(nsol, rsol)=gv(nsol)=gv(nsol, * * * ) from Eq.(15), we have gv(nsol, rsol )≤Vlb. In the same way, gw(nsol, rsol )≤Vlb. Also, rsol

CR IP T

* * gc(nsol, rsol )≤Clb from Lemma 4. Hence, (nsol, rsol ) is also a feasible solution. * (3) Since rj≤ rj* (from the 1st statement of Theorem 1), we have Rs(nsol, rsol)
from Lemma 1.

(4) There is no penalty added in Rs(n, r) if (n, r) is a feasible solution, i.e., Rs(n, r)=Rp(n,

AN US

* * r). Hence, Rs(nsol, rsol)=Rp(nsol, rsol) and Rs(nsol, rsol )=Rp(nsol, rsol ). Since Rs(nsol,

* rsol)


M

* rsol ).

The following corollary is derived from Theorem 1 by removing the condition that (nsol, rsol) must

ED

be a feasible solution. From this corollary, Eq. (17) is able to improve the solution quality and also adjust an infeasible solution to a feasible solution.

PT

* * Corollary 1: Let rsol=(rsol,1, rsol,2, ..., rsol,Nvar), rsol =(rsol,1, rsol,2, ..., rsol,j-1, rsol,j , rsol,j+1, .., rsol,Nvar), gv(nsol,

AC

CE

* rsol)≤Vlb, and gw(nsol, rsol)≤Wlb. If Gc( rsol,j )=0, the following statements are true:

* (1) rsol, j

  nj     j (1000)   j  n j  exp( )    4   ) ,  exp( Nvar  ni   -1000 i  )  ni  exp( )     j Cub    i ( ln ri 4   i 1  i j  

* (2) (nsol, rsol ) is a feasible solution,

* * (3) Rp(nsol, rsol )=Rs(nsol, rsol ).

Proof: (1) Follows directly from Lemma 5. - 14 -

ACCEPTED MANUSCRIPT * (2) Follows directly from Lemma 4, we have gw(nsol, rsol )≤Wlb. From the conditions,

* gv(nsol, rsol)≤Vlb, gw(nsol, rsol)≤Wlb and gv(nsol, rsol)=gv(nsol)=gv(nsol, rsol ), gw(nsol,

* * * rsol)=gw(nsol)=gw(nsol, rsol ), we have gv(nsol, rsol )≤Vlb and gw(nsol, rsol )≤Wlb. Hence,

* (nsol, rsol ) is a feasible solution.



CR IP T

* * * (3) Since (nsol, rsol ) is a feasible solution, we have Rp(nsol, rsol )=Rs(nsol, rsol ).

The following corollary forms another basis of the proposed SBS.

* * * * Corollary 2: If (nsol, rsol ) is a feasible solution and Gc( rsol,j )=0, where rsol =(rsol,1, rsol,2, ..., rsol,j-1, rsol,j ,

rsol,j+1, .., rsol,Nvar), then

AN US

1) the decrease of any one component reliability will also decrease the system reliability; 2) the increase of any one component reliability and/or the number of redundancy component will violate the cost constraints.

* Since Gc( rsol,j )=0, we have Gc(rsol,i)=0 for i≠j from Eq. (15). The decrease of any one

M

Proof:

component reliability, say rk, will let Gc(rsol,k)<0 which decreases the system reliability

ED

from Theorem 1. Moreover, the increase of any one component reliability, say rsol,k, will

CE

PT

also let Gc(rsol,k)>0 which violate the cost constraint from Lemma 4.

4.2 An example for the proposed analytical calculation

1.

AC

The best-known solutions for the four RRAP benchmarks are obtained in [24] and shown in Table

<>

The proposed Eq. (17) is implemented to improve these solutions as shown in Tables 2-5. For example, in Benchmark 1, the redundancy and reliability vectors of the best-known solution are n=(3, 2, - 15 -

ACCEPTED MANUSCRIPT 2, 3, 3) and r=(0.77946645, 0.87173278, 0.90284951, 0.71148780). The new first reliability variable r1* is 0.7794666465 after implementing Eq. (17) to r1=0.77946645 and fixing the remaining variables.

The corresponding new system reliability is improved from 0.9316822972 to 0.9316823242156, i.e., increased by 2.7015610-8. Moreover, the cost constraint is increased from 174.999954 to 174.999999999999970, which is much closer to Cub=175. In the same manner, we can also observe that

<>

CR IP T

the final solutions are improved after changing r2, r3, and r4 independently and individually.

AN US

Each new solution has a better system reliability after implementing Eq. (17) to one and only one variable in Benchmark 2.

M

<>

ED

An interesting result can be observed from Table 4: the related system reliability of each new solution is worse after implementing Eq. (17), no matter the variable. The reason is that the cost

PT

constraint is violated in the best-known solution of Benchmark 3, i.e., the best-known solution is infeasible. However, the corresponding new cost constraint for each new update reliability variable is

CE

less than Cub=175. Thus, Eq. (17) not only improves the solution quality but also adjusts an infeasible

AC

solution to a feasible solution from Table 4 as stated in Corollary 1.

<>

Another interesting situation is a final solution becoming infeasible due to a computer truncation error, as shown in Table 5. The new cost function is 610-14 greater than Cub=400 after updating r3=0.9482103300 to r3* =0.9482103612 using Eq. (17). Thus, the original Cub must be slightly reduced - 16 -

ACCEPTED MANUSCRIPT to prevent the truncation error in Table 5.

<>

4.3 The overall procedure of the proposed SBS

CR IP T

* * * * Let (nsol, rsol ) be a feasible solution and Gc( rsol,j )=0, where rsol =(rsol,1, rsol,2, ..., rsol,j-1, rsol,j ,

* rsol,j+1, .., rsol,Nvar). Since Gc( rsol,j )=0, we have Gc(rsol,i)=0 for i≠j from Eq.(15). The decrease of any one

component reliability, say rsol,k, will let Gc(rsol,k)<0 which decreases the system reliability from Theorem 1. Moreover, the increase of any one component reliability, say rsol,k, will also let Gc(rsol,k)>0 which

AN US

violates the cost constraint from Lemma 4. Hence, any one reliability variable of the gBest is impossible to be improved once the gBest is improved after using Theorem 1. Thus, at least two reliability variables of the gBest must be changed to improve the solution quality if Theorem 1 is

M

implemented again.

From the above, there are two procedures in the proposed SBS:

ED

1) Randomly select and reinitialize two floating-point variables in gBest. 2) Randomly select one floating-point variable, say rj, and reset the value of rj based on Theorem

CE

PT

1.

The detailed procedure for the proposed SBS to update the current gBest, PgBest, is listed below.

AC

SBS PROCEDURE

STEP B0. Reinitialize the values of two reliability variables (floating-point variables), say ri and rk, selected randomly from solution PgBest for i, j=1,2,…, Nvar.

STEP B1. Based on Eq. (17), reset the value of one randomly selected reliability variable, say rj, from solution PgBest. STEP B2. Replace PgBest with the updated PgBest if the fitness function value of the updated PgBest is better than that of PgBest. - 17 -

ACCEPTED MANUSCRIPT

The flowchart of the proposed SBS is shown in Fig. 2.

<>

CR IP T

4.4 The complete procedure of the proposed BSO The flowchart and the overall procedure of the proposed BSO are shown in Fig. 3 and described as follows:

AN US

<>

BSO PROCEDURE STEP 0.

Randomly generate Psol=Xsol, calculate F(Xsol), let gen=1, and find gBest{1, 2, …, Nsol}

M

such that F(Xsol)F(XgBest) for sol=1,2,…, Nsol. Update XgBest based on SBS PROCEDURE.

STEP 2.

Let sol=1.

STEP 3.

Update Xsol based on UM2 PROCEDURE.

STEP 4.

If sol
STEP 5.

Update XgBest based on SBS PROCEDURE.

STEP 6.

If gen
AC

CE

PT

ED

STEP 1.

5. Numerical examples There are two comparisons in this study. In Comparison 1, the performance of two novel major parts: SBS and UM2 are verified and tested. In Comparison 2, we focus on the comparisons among the best solutions obtained from all existing algorithms including PSO [24], SSO [24], PSSO [24], heuristic method (HM) [25], Xu et al. [26], SCA [27], GA [28, 36], IA [29], ABC [30], and Dhingra [34]. Note that the existing best solutions are all obtained from PSSO [24] in literature. - 18 -

ACCEPTED MANUSCRIPT 5.1 Comparison 1 To systematically and efficiently select the best SSO parameters without exploring all possible combinations, there are nine attempts (‗tries‘) for the setting of Cg, Cp, and Cw based on the orthogonal array taken directly from [13, 24], as shown in Table 6.

CR IP T

<>

The proposed BSO is programmed in C++ language, implemented on an Intel Core i7 3.07 GHz PC with 16 GB memory, and measured in runtime units based on CPU seconds. The generation number is

AN US

limited to 1000, i.e., Ngen=1000 is the stopping criterion. There are 400 individual runs for each try, i.e., 3600 for each benchmark.

To investigate the solution quality and performance of the proposed BSO, the best of the final gBests for each benchmark and algorithm are recorded in Table 7 as these done in all existing

M

algorithms for the RRAP [24-30,34]. The bold values show the best results of the related try among nine tries and each shadowed value is better than the existing known solutions [24] in Table 7. The total

ED

number of final gBests that are better than those of the best-known methods are listed in Table 8 for

AC

CE

PT

each try.


From Table 7, all of the best final gBests obtained from BSO are based on the setting in Try 7, i.e., Cg = 0.5, Cp = 0.75, and Cw = 0.95. Also, in the last column of Table 8, Try 7 has the greatest number of final gBests for BSO. From Table 8, the top three tries with the greatest numbers of final gBests are Try 7, Try 8, and Try 5 (from larger to smaller). Hence, a higher value of Cg has a better chance of producing a good solution, since both Tries 7 and 8 have Cg = 0.5. Moreover, the number of final gBests of Try 9 is six less than that of Try 5 and only five more than that of Try 6. The reason is that cr = 0 in Try 9, i.e., there is no mechanism to escape a local trap if the solution is trapped; in other words, it is better to have cr > 0 and Cg = 0.5. From Tables 7 and 8, Try 7 always outperforms the other eight tries for each benchmark in solution quality. Hence, the setting of Try 7 is adopted in the proposed BSO when comparing with the existing known algorithms.

5.2 Comparison 2

Comparison 2 contrasts the best final gBests obtained from the proposed BSO with those of existing known methods, including SCA [27], HM [25], Xu [26], GA [28], IA [29], PSO [24], SSO [24], ABC [30], and PSSO [24].


The results are shown in Tables 9-12, and the fitness of the best solutions from all algorithms is shown in bold. In these tables, the first row shows the related methods; the second row lists the redundancy solutions of the components; the third through sixth (for ID = 4) or seventh (for IDs = 1-3) rows indicate the component reliability solutions of each subsystem; the second-to-last row provides the fitness (i.e., the system reliability) of each solution; and the last row shows the maximum possible improvement (MPI% for short) that quantifies the improvement of the solutions found by the proposed method over the previous best-known solutions. The MPI is defined as follows:

MPI = (F_BSO − F_●) / (1 − F_●),    (19)

where F● indicates the fitness (i.e., the system reliability) obtained by method ●.

Note that the solutions obtained and listed by Yeh & Hsieh for all four benchmark problems are all infeasible [1, 24]. The reason is that only six digits after the decimal place are listed in [1], which results in truncation errors and causes a violation of the conditions when the resulting solutions are plugged back into Eqs. (5)-(7) [24]. To avoid such truncation errors, 15 digits after the decimal place are used for the results obtained from the proposed BSO. The fitness values of all the best-known solutions obtained from [24] are recalculated and listed with 15 digits after the decimal place in Tables 9-12.

The MPI values of all methods are positive, i.e., the BSO improves the current known solutions. For ID = 1 to ID = 4, the maximum improvement of the BSO is 38.7265198% (PSO), 83.5033461% (PSO), 66.4142671% (PSO), and 91.4801958% (GA), and the minimum improvement of the BSO is 0.0000579% (PSSO), 0.0010632% (PSSO), 0.0013986% (PSSO), and 0.0005342% (PSSO), respectively.
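Eq. (19) can be checked directly against the entries of Table 9 for Benchmark 1; the helper below is a minimal sketch.

```python
def mpi(f_bso, f_other):
    # Eq. (19): fraction of the remaining unreliability (1 - F) closed by BSO.
    return (f_bso - f_other) / (1.0 - f_other)

# Benchmark 1 (Table 9): the BSO fitness against PSO and PSSO.
F_BSO = 0.931682336748513
mpi_pso = mpi(F_BSO, 0.8885037) * 100.0           # maximum improvement for ID = 1
mpi_psso = mpi(F_BSO, 0.931682297215271) * 100.0  # minimum improvement for ID = 1
```

The two values reproduce the 38.7265198% (PSO) and 0.0000579% (PSSO) entries quoted above.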

Based on Tables 9-12, the fitness values of the best solutions for the four benchmark problems obtained from the proposed BSO are all better than those of the existing known solutions in the literature. Hence, from the discussions in Comparisons 1 and 2 above, the proposed BSO with the parameter setting from Comparison 1 is able to balance the global and local searches and improve solution quality compared to previously published results in the literature.


6. Conclusions

A new improved SSO called BSO is proposed in this study by integrating SBS and UM2 to solve the RRAP. The proposed novel SBS is based on the analytical calculation discussed in Theorem 1 to update gBests, and the proposed novel UM2, adapted from the update mechanism of SSO, updates non-gBests. With these two innovations, the solution quality of BSO outperforms all well-known related methods and is superior to all existing best solutions in the experimental results of Section 5. Moreover, the proposed SBS can be incorporated into any existing algorithm to improve its final solution, based on the examples in Section 3. Thus, the proposed BSO is able to avoid local traps and strengthens the ability of SSO to solve problems with floating-point variables. In future studies, this work should be extended to apply the proposed BSO to different optimization problems with more variables or to larger-scale benchmarks.

Acknowledgements

I wish to thank the anonymous editor and the reviewers for their constructive comments and recommendations, which have significantly improved the presentation of this paper. This research was supported in part by the Ministry of Science and Technology, R.O.C., under grants MOST 102-2221-E-007-086-MY3 and MOST 104-2221-E-007-061-MY3, and in part by the National Tsing Hua University "Toward World-Class Universities Project" under grant 105N536CE1.

References

[1] M. R. Garey, D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman, ISBN 0-7167-1045-5, 1979.
[2] H. P. Williams, Model Building in Mathematical Programming, Wiley, ISBN-10: 1118443330, 2013.
[3] G. L. Nemhauser, L. A. Wolsey, Integer and Combinatorial Optimization, Wiley-Interscience, New York, 1999.
[4] M. S. Chern, On the computational complexity of reliability redundancy allocation in a series system, Oper Res Lett, 11 (1992) 309-315.
[5] W. C. Yeh, A novel node-based sequential implicit enumeration method for finding all d-MPs in a multistate flow network, Information Sciences, 297 (2015) 283-292.
[6] F. Glover, Future paths for integer programming and links to artificial intelligence, Computers and Operations Research, 13 (1986) 533-549.
[7] D. Simon, Biogeography-based optimization, IEEE Transactions on Evolutionary Computation, 12 (2008) 702-713.
[8] G. G. Wang, A new improved firefly algorithm for global numerical optimization, Journal of Computational and Theoretical Nanoscience, 11 (2014) 477-485.
[9] W. C. Yeh, An improved simplified swarm optimization, Knowledge-Based Systems, 82 (2015) 60-69.
[10] W. C. Yeh, W. C. Lee, P. J. Lai, M. C. Chuang, Parallel-machine scheduling to minimize makespan with fuzzy processing times and learning effects, Information Sciences, 269 (2014) 142-158.
[11] W. C. Yeh, Study on quickest path networks with dependent components and apply to RAP, Report, NSC 97-2221-E-007-099-MY3, 2008-2011.
[12] W. C. Yeh, A two-stage discrete particle swarm optimization for the problem of multiple multi-level redundancy allocation in series systems, Expert Systems with Applications, 36 (2009) 9192-9200.
[13] W. C. Yeh, Orthogonal simplified swarm optimization for the series-parallel redundancy allocation problem with a mix of components, Knowledge-Based Systems, 64 (2014) 1-12.
[14] W. C. Yeh, Novel swarm optimization for mining classification rules on thyroid gland data, Information Sciences, 197 (2012) 65-76.
[15] W. C. Yeh, New parameter-free simplified swarm optimization for artificial neural network training and its application in the prediction of time series, IEEE Transactions on Neural Networks and Learning Systems, 24 (2013) 661-665.
[16] W. C. Yeh, Optimization of the disassembly sequencing problem on the basis of self-adaptive simplified swarm optimization, IEEE Transactions on Systems, Man, and Cybernetics—Part A: Systems and Humans, 42 (2012) 250-261.
[17] W. C. Yeh, Simplified swarm optimization in disassembly sequencing problems with learning effects, Computers & Operations Research, 39 (2012) 2168-2177.
[18] R. Azizipanah-Abarghooee, A new hybrid bacterial foraging and simplified swarm optimization algorithm for practical optimal dynamic load dispatch, International Journal of Electrical Power & Energy Systems, 49 (2013) 414-429.
[19] R. Azizipanah-Abarghooee, T. Niknam, M. Gharibzadeh, F. Golestaneh, Robust, fast and optimal solution of practical economic dispatch by a new enhanced gradient-based simplified swarm optimisation algorithm, IET Generation, Transmission & Distribution, 7 (2013) 620-635.
[20] Y. Y. Chung, N. Wahid, A hybrid network intrusion detection system using simplified swarm optimization (SSO), Applied Soft Computing, 12 (2012) 3014-3022.
[21] P. C. Chang, X. He, Macroscopic indeterminacy swarm optimization (MISO) algorithm for real-parameter search, Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC2014), Beijing, China, 1571-1578, 2014.
[22] M. Du, X. Lei, Z. Wu, A simplified glowworm swarm optimization algorithm, 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 2861-2868, 2014.
[23] C. H. Chou, C. L. Huang, P. C. Chang, A RFID network design methodology for decision problem in health care, Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC2014), Beijing, China, 1586-1592, 2014.
[24] C. L. Huang, A particle-based simplified swarm optimization algorithm for reliability redundancy allocation problems, Reliability Engineering & System Safety, 142 (2015) 221-230.
[25] W. Kuo, C. L. Hwang, F. A. Tillman, A note on heuristic methods in optimal system reliability, IEEE Transactions on Reliability, R-27 (1978) 320-324.
[26] Z. Xu, W. Kuo, H. H. Lin, Optimization limits in improving system reliability, IEEE Transactions on Reliability, R-39 (1990) 51-60.
[27] M. Hikita, Y. Nakagawa, H. Harihisa, Reliability optimization of systems by a surrogate-constraints algorithm, IEEE Transactions on Reliability, 41 (1992) 473-480.
[28] Y. C. Hsieh, T. C. Chen, D. L. Bricker, Genetic algorithms for reliability design problems, Microelectronics Reliability, 38 (1998) 1599-1605.
[29] T. C. Chen, IAs based approach for reliability redundancy allocation problems, Applied Mathematics and Computation, 182 (2006) 1556-1567.
[30] W. C. Yeh, T. J. Hsieh, Solving reliability redundancy allocation problems using an artificial bee colony algorithm, Computers & Operations Research, 38 (2011) 1465-1473.
[32] D. W. Coit, A. E. Smith, Reliability optimization of series-parallel systems using a genetic algorithm, IEEE Transactions on Reliability, 45 (1996) 254-260.
[33] J. E. Ramirez-Marquez, D. W. Coit, A heuristic for solving the redundancy allocation problem for multi-state series-parallel systems, Reliability Engineering and System Safety, 83 (2004) 341-349.
[34] A. K. Dhingra, Optimal apportionment of reliability & redundancy in series systems under multiple objectives, IEEE Transactions on Reliability, 41 (1992) 576-582.
[35] M. Sheikhalishahi, V. Ebrahimipour, H. Shiri, H. Zaman, M. Jeihoonian, A hybrid GA–PSO approach for reliability optimization in redundancy allocation problem, Int J Adv Manuf Technol, 68 (2013) 317-338.
[36] T. Yokota, M. Gen, Y. X. Li, A genetic algorithm for interval nonlinear integer programming problem, Computers and Industrial Engineering, 30 (1996) 905-917.
[37] A. Ahmad, R. Sadigh, K. K. Damghani, A simulation-based optimization approach for free distributed repairable multi-state availability-redundancy allocation problems, Reliability Engineering & System Safety, 157 (2017) 177-191.


Appendix A

Benchmark 1: The series system in Fig. A1.

Max f(n, r) = R_1 R_2 R_3 R_4 R_5, where R_i = 1 − (1 − r_i)^{n_i}    (A1)

Subject to
g_v(n, r) = Σ_{i=1}^{5} w_i v_i^2 n_i^2 ≤ V_ub    (A2)
g_c(n, r) = Σ_{i=1}^{5} α_i (−1000 / ln r_i)^{β_i} [n_i + exp(n_i / 4)] ≤ C_ub    (A3)
g_w(n, r) = Σ_{i=1}^{5} w_i n_i exp(n_i / 4) ≤ W_ub    (A4)

0 ≤ r_1, r_2, r_3, r_4, r_5 ≤ 1 and n_1, n_2, n_3, n_4, n_5 = 1, 2, …, 10.

Figure A1. The series system

Table A1. Data used in benchmark 1 [15]

Subsystem i   10^5·α_i   β_i   w_i·v_i^2   w_i   V_ub   C_ub   W_ub
1             2.330      1.5   1           7     110    175    200
2             1.450      1.5   2           8
3             0.541      1.5   3           8
4             8.050      1.5   4           6
5             1.950      1.5   2           9
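As a sanity check, the objective and constraints of Eqs. (A1)-(A4) with the Table A1 data can be evaluated at the best-known Benchmark-1 solution of Table 1. The sketch below assumes the subsystem-reliability form R_i = 1 − (1 − r_i)^{n_i}; function names are illustrative.

```python
import math

# Benchmark 1 (Eqs. (A1)-(A4)) with the Table A1 data.
ALPHA = (2.330e-5, 1.450e-5, 0.541e-5, 8.050e-5, 1.950e-5)
WV2   = (1, 2, 3, 4, 2)          # w_i * v_i^2
W     = (7, 8, 8, 6, 9)
V_UB, C_UB, W_UB = 110.0, 175.0, 200.0

def f(n, r):
    # Series system: product of subsystem reliabilities 1 - (1 - r_i)^{n_i}.
    return math.prod(1.0 - (1.0 - ri) ** ni for ni, ri in zip(n, r))

def feasible(n, r):
    gv = sum(wv * ni ** 2 for wv, ni in zip(WV2, n))
    gc = sum(a * (-1000.0 / math.log(ri)) ** 1.5 * (ni + math.exp(ni / 4.0))
             for a, ni, ri in zip(ALPHA, n, r))
    gw = sum(wi * ni * math.exp(ni / 4.0) for wi, ni in zip(W, n))
    return gv <= V_UB and gc <= C_UB and gw <= W_UB

n = (3, 2, 2, 3, 3)   # best-known Benchmark-1 solution (Table 1)
r = (0.77946645, 0.87173278, 0.90284951, 0.71148780, 0.78781644)
```

Evaluating `f(n, r)` reproduces the system reliability of roughly 0.9316823 reported in Table 1, and all three constraints hold.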

Benchmark 2: The series-parallel system in Fig. A2 [16].

Max f(n, r) = 1 − (1 − R_1 R_2){1 − [1 − (1 − R_3)(1 − R_4)] R_5}, where R_i = 1 − (1 − r_i)^{n_i}    (A5)

Subject to
g_v(n, r) = Σ_{i=1}^{5} w_i v_i^2 n_i^2 ≤ V_ub    (A6)
g_c(n, r) = Σ_{i=1}^{5} α_i (−1000 / ln r_i)^{β_i} [n_i + exp(n_i / 4)] ≤ C_ub    (A7)
g_w(n, r) = Σ_{i=1}^{5} w_i n_i exp(n_i / 4) ≤ W_ub    (A8)

0 ≤ r_1, r_2, r_3, r_4, r_5 ≤ 1 and n_1, n_2, n_3, n_4, n_5 = 1, 2, …, 10.

Fig. A2 The series-parallel system

Table A2. Data used in benchmark 2 [15]

Subsystem i   10^5·α_i   β_i   w_i·v_i^2   w_i   V_ub   C_ub   W_ub
1             2.500      1.5   2           3.5   180    175    100
2             1.450      1.5   4           4.0
3             0.541      1.5   5           4.0
4             0.541      1.5   8           3.5
5             2.100      1.5   4           4.5
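The series-parallel structure function of Eq. (A5) can likewise be checked at the best-known Benchmark-2 solution of Table 1 (a sketch assuming R_i = 1 − (1 − r_i)^{n_i}):

```python
def f2(n, r):
    # Series-parallel structure of Eq. (A5): subsystems 1-2 in series,
    # bridged with the parallel pair 3-4 followed by subsystem 5.
    R = [1.0 - (1.0 - ri) ** ni for ni, ri in zip(n, r)]
    return 1.0 - (1.0 - R[0] * R[1]) * (
        1.0 - (1.0 - (1.0 - R[2]) * (1.0 - R[3])) * R[4])

n = (2, 2, 2, 2, 4)   # best-known Benchmark-2 solution (Table 1)
r = (0.81958939, 0.84458412, 0.89534134, 0.89581626, 0.86852902)
```

`f2(n, r)` reproduces the Table 1 system reliability of roughly 0.99997665.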

Benchmark 3: The complex (bridge) system in Fig. A3 [16].

Max f(n, r) = R_1 R_2 + R_3 R_4 + R_1 R_4 R_5 + R_2 R_3 R_5 − R_1 R_2 R_3 R_4 − R_1 R_2 R_3 R_5 − R_1 R_2 R_4 R_5 − R_1 R_3 R_4 R_5 − R_2 R_3 R_4 R_5 + 2 R_1 R_2 R_3 R_4 R_5, where R_i = 1 − (1 − r_i)^{n_i}    (A9)

Subject to
g_v(n, r) = Σ_{i=1}^{5} w_i v_i^2 n_i^2 ≤ V_ub    (A10)
g_c(n, r) = Σ_{i=1}^{5} α_i (−1000 / ln r_i)^{β_i} [n_i + exp(n_i / 4)] ≤ C_ub    (A11)
g_w(n, r) = Σ_{i=1}^{5} w_i n_i exp(n_i / 4) ≤ W_ub    (A12)

0 ≤ r_1, r_2, r_3, r_4, r_5 ≤ 1 and n_1, n_2, n_3, n_4, n_5 = 1, 2, …, 10.

Fig. A3 The complex (bridge) system

Table A3. Data used in benchmark 3 [15]

Subsystem i   10^5·α_i   β_i   w_i·v_i^2   w_i   V_ub   C_ub   W_ub
1             2.330      1.5   1           7     110    175    200
2             1.450      1.5   2           8
3             0.541      1.5   3           8
4             8.050      1.5   4           6
5             1.950      1.5   2           9
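The bridge structure function of Eq. (A9) can be evaluated at the best-known Benchmark-3 solution of Table 1 (a sketch assuming R_i = 1 − (1 − r_i)^{n_i}):

```python
def f3(n, r):
    # Bridge structure of Eq. (A9): inclusion-exclusion over the minimal
    # paths {1,2}, {3,4}, {1,4,5}, and {2,3,5}.
    R1, R2, R3, R4, R5 = (1.0 - (1.0 - ri) ** ni for ni, ri in zip(n, r))
    return (R1 * R2 + R3 * R4 + R1 * R4 * R5 + R2 * R3 * R5
            - R1 * R2 * R3 * R4 - R1 * R2 * R3 * R5 - R1 * R2 * R4 * R5
            - R1 * R3 * R4 * R5 - R2 * R3 * R4 * R5
            + 2.0 * R1 * R2 * R3 * R4 * R5)

n = (3, 3, 2, 4, 1)   # best-known Benchmark-3 solution (Table 1)
r = (0.82783292, 0.85771241, 0.91437458, 0.64861002, 0.70287554)
```

`f3(n, r)` reproduces the Table 1 system reliability of roughly 0.99988964.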

Benchmark 4: The over-speed protection system in Fig. A4 [1, 14, 16, 21].

Max f(n, r) = Π_{i=1}^{4} [1 − (1 − r_i)^{n_i}]    (A13)

Subject to
g_v(n, r) = Σ_{i=1}^{4} v_i n_i^2 ≤ V_ub    (A14)
g_c(n, r) = Σ_{i=1}^{4} α_i (−1000 / ln r_i)^{β_i} [n_i + exp(n_i / 4)] ≤ C_ub    (A15)
g_w(n, r) = Σ_{i=1}^{4} w_i n_i exp(n_i / 4) ≤ W_ub    (A16)

0.5 ≤ r_1, r_2, r_3, r_4 ≤ 1 − 10^{-6} and n_1, n_2, n_3, n_4 = 1, 2, …, 10.

Fig. A4 The over-speed protection system (a gas turbine with mechanical and electrical over-speed detection and control valves V1-V4 on the air-fuel mixture line)

Table A4. Data used in benchmark 4 [15]

Subsystem i   10^5·α_i   β_i   v_i   w_i   V_ub    C_ub    W_ub
1             1.0        1.5   1     6     250.0   400.0   500.0
2             2.3        1.5   2     6
3             0.3        1.5   3     8
4             2.3        1.5   2     7
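As with the other benchmarks, Eqs. (A13)-(A16) with the Table A4 data can be checked at the best-known Benchmark-4 solution of Table 1; the sketch below uses illustrative helper names.

```python
import math

# Benchmark 4 (Eqs. (A13)-(A16)) with the Table A4 data.
ALPHA = (1.0e-5, 2.3e-5, 0.3e-5, 2.3e-5)
V     = (1, 2, 3, 2)
W     = (6, 6, 8, 7)
V_UB, C_UB, W_UB = 250.0, 400.0, 500.0

def f4(n, r):
    # Four parallel-redundant subsystems in series.
    return math.prod(1.0 - (1.0 - ri) ** ni for ni, ri in zip(n, r))

def constraints(n, r):
    gv = sum(vi * ni ** 2 for vi, ni in zip(V, n))
    gc = sum(a * (-1000.0 / math.log(ri)) ** 1.5 * (ni + math.exp(ni / 4.0))
             for a, ni, ri in zip(ALPHA, n, r))
    gw = sum(wi * ni * math.exp(ni / 4.0) for wi, ni in zip(W, n))
    return gv, gc, gw

n = (5, 5, 4, 6)   # best-known Benchmark-4 solution (Table 1)
r = (0.90166461, 0.88817296, 0.94821033, 0.84987084)
```

The cost constraint is essentially active at this solution (gc ≈ 400), consistent with the boundary behaviour exploited by SBS.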

Table 1. The best known solutions for four RRAP benchmarks [24].

ID   N             r1          r2          r3          r4          r5          gc(N,R)      Cub   Rs(N,R)
1    (3,2,2,3,3)   .77946645   .87173278   .90284951   .71148780   .78781644   174.999954   175   .931682297215271
2    (2,2,2,2,4)   .81958939   .84458412   .89534134   .89581626   .86852902   174.999918   175   .999976648738107
3    (3,3,2,4,1)   .82783292   .85771241   .91437458   .64861002   .70287554   174.999976   175   .999889635738075
4    (5,5,4,6)     .90166461   .88817296   .94821033   .84987084   —           399.999952   400   .999954674399445

Table 2. The updated variables and system reliability after using Eq. (17) in Benchmark 1.

j   rj           rj*            Rs(N, R*)         gc(N, R*)
1   0.77946645   0.7794666465   0.9316823242156   174.99999999999997
2   0.87173278   0.8717328916   0.9316823243321   174.99999999999997
3   0.90284951   0.9028496581   0.9316823242890   175.00000000000000
4   0.71148780   0.7114879133   0.9316823242165   174.99999999999994
5   0.78781644   0.7878166527   0.9316823242437   174.99999999999997

Table 3. The updated variables and system reliability after using Eq. (17) in Benchmark 2.

j   rj           rj*            Rs(N, R*)         gc(N, R*)
1   0.81958939   0.8195896627   0.9999766487782   175.000000000000000
2   0.84458412   0.8443995095   0.9999766485136   175.000000000000000
3   0.89534134   0.8951594429   0.9999766486140   175.000000000000000
4   0.89581626   0.8956364438   0.9999766487743   175.000000000000000
5   0.86852902   0.8684837808   0.9999766487329   174.999999999999940

Table 4. The updated variables and system reliability after using Eq. (17) in Benchmark 3.

j   rj           rj*            Rs(N, R*)         gc(N, R*)
1   0.82783292   0.8278329737   0.9998896357944   174.999999999999970
2   0.85771241   0.8649134906   0.9998877695542   174.999999999999970
3   0.91437458   0.9215427414   0.9998884185439   175.000000000000000
4   0.64861002   0.6588716460   0.9998884005221   174.999999999999970
5   0.70287554   0.7648527086   0.9998873314317   175.000000000000000

Table 5. The updated variables and system reliability after using Eq. (17) in Benchmark 4.

j   rj           rj*            Rs(N, R*)         gc(N, R*)
1   0.90166461   0.9016646473   0.9999546744169   399.999999999999940
2   0.88817296   0.8881729824   0.9999546744170   399.999999999999940
3   0.94821033   0.9482103612   0.9999546744168   400.000000000000060*
4   0.84987084   0.8498708783   0.9999546744170   399.999999999999890

Table 6. The orthogonal array for setting the related parameters.

Combination   Cg     Cp     Cw
1             0.25   0.50   0.60
2             0.25   0.55   0.70
3             0.25   0.60   0.80
4             0.35   0.60   0.75
5             0.35   0.65   0.85
6             0.35   0.70   0.80
7             0.50   0.75   0.95
8             0.50   0.80   0.90
9             0.50   0.85   1.00

Table 7. The final results obtained from the proposed BSO.

Try   Benchmark 1     Benchmark 2      Benchmark 3     Benchmark 4
1     0.9316640123    0.9999766304     0.9998896159    0.9999546702
2     0.9316730330    0.9999766103     0.9998895875    0.9999546720
3     0.9316756971    0.9999766454     0.9998896236    0.9999546745
4     0.9316756032    0.9999766429     0.9998896246    0.9999546745
5     0.9316818001    0.9999766447     0.9998896232    0.9999546747!1
6     0.9316795764    0.9999766447     0.9998896114    0.9999546745
7     0.9316823496*   0.9999766487#1   0.9998896372@   0.9999546747!2
8     0.9316822558    0.9999766487#2   0.9998896357    0.9999546746
9     0.9316823451    0.9999766480     0.9998896370    0.9999546747!3

*: Rp(3, 2, 2, 3, 3, 0.779333800540632, 0.87185051, 0.90296647, 0.71134556, 0.78783422) = 0.9316823496
#1: Rp(2, 2, 2, 2, 4, 0.82019389, 0.84507293, 0.89552309, 0.89531120, 0.86831365) = 0.9999766487
#2: Rp(2, 2, 2, 2, 4, 0.8198899475, 0.84550977, 0.89542929, 0.89550272, 0.86826924) = 0.9999766487
@: Rp(3, 3, 2, 4, 1, 0.82802875, 0.85788545, 0.91430702, 0.64796289, 0.70464282) = 0.9998896372
!1: Rp(5, 6, 4, 5, 0.901614232, 0.84995498, 0.94811740, 0.88822052) = 0.9999546747
!2: Rp(5, 6, 4, 5, 0.901597023, 0.84995928, 0.94812316, 0.88822421) = 0.9999546747; Rp(5, 5, 4, 6, 0.90160802, 0.888216589, 0.94816216, 0.84991342) = 0.9999546747
!3: Rp(5, 6, 4, 5, 0.9016226926, 0.84989282, 0.94812856, 0.88824385) = 0.9999546747; Rp(5, 6, 4, 5, 0.901602488, 0.84994425, 0.94811583, 0.88823497) = 0.9999546747

Table 8. The number of final gBests obtained from the proposed BSO that are better than the existing solutions.

Try     Benchmark 1   Benchmark 2   Benchmark 3   Benchmark 4   Total
1       0             1             0             1             2
2       0             0             0             3             3
3       0             4             2             19            25
4       0             2             1             21            24
5       2             7             1             54            64
6       0             6             0             47            53
7       19            10            8             41            78
8       8             4             6             53            71
9       14            2             7             35            58
Total   43            36            25            274           378

Table 9. Comparison of the proposed approach solutions with other algorithms for Benchmark 1.

Method   SCA [27]      HM [25]       Xu [26]       GA [28]       IA [29]
N        (3,2,2,3,3)   (3,2,2,3,3)   (3,2,2,3,3)   (3,2,2,3,3)   (3,2,2,3,3)
r1       .777143       .77960        .77939        .779427       .779266
r2       .867514       .80065        .87183        .869482       .872513
r3       .896696       .90227        .90288        .902674       .902634
r4       .717739       .71044        .71139        .714038       .710648
r5       .793889       .85947        .78779        .786896       .788406
Rs       .93163        .92975        .931677       .931578       .93167820
MPI      .0765493%     2.7506573%    .0078111%     .1524901%     .0060548%

Method   PSO [24]      SSO [24]      ABC [30]      PSSO [24]          BSO
N        (2,3,2,4,2)   (3,2,2,3,3)   (3,2,2,3,3)   (3,2,2,3,3)        (3,2,2,3,3)
r1       .80059281     .78271484     .779399       .77946645          .779562133488328
r2       .74049316     .87351990     .871837       .87173278          .871815674524307
r3       .82914384     .90264893     .902885       .90284951          .902881610103147
r4       .63686144     .71313477     .711403       .71148780          .711373682490884
r5       .88704276     .77729797     .787800       .78781644          .787722894878846
Rs       .8885037      .93150199     .931682       .931682297215271   .931682336748513
MPI      38.7265198%   .2632876%     .0004929%     .0000579%          —

Table 10. Comparison of the proposed approach solutions with other algorithms for Benchmark 2.

Method   SCA [27]      GA [28]       IA [29]       PSO [24]
N        (3,3,1,2,3)   (2,2,2,2,4)   (2,2,2,2,4)   (4,3,2,1,2)
r1       .838193       .785452       .812485       .84025282
r2       .855065       .842998       .843155       .88865099
r3       .878859       .885333       .897385       .62375055
r4       .911402       .917958       .894516       .93984950
r5       .850355       .870318       .870590       .75158691
Rs       .99996875     .99997418     .99997658     .99985845
MPI      25.2767564%   9.5623020%    0.2945618%    83.5033461%

Method   SSO [24]      ABC [30]      PSSO [24]          BSO
N        (2,2,2,2,4)   (2,2,2,2,4)   (2,2,2,2,4)        (2,2,2,2,4)
r1       .81385803     .8197457      .81958939          .819871348573628
r2       .83912659     .8450080      .84458412          .845134385123488
r3       .89366150     .8954581      .89534134          .895446914628904
r4       .89845276     .9009032      .89581626          .895375912869888
r5       .87106323     .8684069      .86852902          .868395597945110
Rs       .99997657     .99997731#    .999976648738107   .999976648986365
MPI      0.3371164%    —             0.0010632%         —
# Infeasible solution

Table 11. Comparison of the proposed approach solutions with other algorithms for Benchmark 3.

Method   SCA [27]      GA [28]       IA [29]       PSO [24]
N        (3,3,2,3,2)   (3,3,3,3,1)   (3,3,3,3,1)   (3,3,2,2,3)
r1       .814483       .814090       .812485       .77061588
r2       .821383       .864614       .867661       .90109253
r3       .896151       .890291       .861221       .89278651
r4       .713091       .701190       .713852       .60083008
r5       .814091       .734731       .756699       .73451002
Rs       .99978937     .99987916     .99988921     .99967140
MPI      47.6035140%   8.6703754%    .3856680%     66.4142671%

Method   SSO [24]      ABC [30]      PSSO [24]          BSO
N        (3,3,2,4,1)   (3,3,2,4,1)   (3,3,2,4,1)        (3,3,2,4,1)
r1       .82008362     .828087       .82783292          .828044787785813
r2       .85119629     .857805       .85771241          .857844971208682
r3       .91854858     .704163       .91437458          .914327838730170
r4       .66072083     .648146       .64861002          .648083923112358
r5       .70275879     .914240       .70287554          .703755208252055
Rs       .99988862     .99988962#    .999889635738075   .999889637281611
MPI      0.9133%       —             0.0013986%         —
# Infeasible solution

Table 12. Comparison of the proposed approach solutions with other algorithms for Benchmark 4.

Method   Dhingra [34]   GA [36]      IA [29]      PSO [24]
N        (6,6,3,5)      (3,6,3,5)    (5,5,5,5)    (4,6,5,5)
r1       .81604         .965593      .903800      .92952331
r2       .80309         .760592      .874992      .81370356
r3       .98364         .972646      .919898      .88663747
r4       .80373         .804660      .890609      .89987183
Rs       .99961         .999468      .999942      .99990474
MPI      88.3781132%    91.4801958%  21.8528303%  52.4193172%

Method   SSO [24]      ABC [30]     PSSO [24]          BSO
N        (5,6,4,5)     (5,6,4,5)    (5,5,4,6)          (5,6,4,5)
r1       .90208435     .901614      .90166461          .901633992960356
r2       .85472107     .849920      .88817296          .849946642719017
r3       .94606018     .948143      .94821033          .948111175520125
r4       .88633728     .888223      .84987084          .888217981233378
Rs       .99995416     .999955#     .999954674399445   .999954674641591
MPI      1.1226911%    —            0.0005342%         —
# Infeasible solution

Figure 1. The flowchart of the proposed SBS: randomly selected floating-point (reliability) variables of PgBest are reset; if the fitness of the new PgBest is better than that of the old PgBest, the old PgBest is replaced, and otherwise the new PgBest is abandoned and the old PgBest is kept.

Figure 2. The flowchart of the proposed UM2: for a randomly selected discrete variable xi of Xsol, a random number ρi determines whether xi is set to gi (from gBest), set to pi (from pBest), kept, or randomized (with rate Cr); likewise, for a randomly selected floating-point variable rj, a random number ρj determines whether rj is moved a small random step toward gj or pj, kept, or randomized. Psol is replaced by the new Xsol if its fitness is better, and PgBest is replaced by the new Psol if its fitness is better than that of PgBest.

Figure 3. The flowchart of the proposed BSO: initialize Xsol = Psol and calculate F(Xsol) for sol = 1, 2, …, Nsol, find gBest, and let gen = 1; then, in each generation, call SBS to update XgBest and call UM2 to update each Xsol with sol ≠ gBest, repeating with gen = gen + 1 until gen = Ngen.