Redundancy allocation of series-parallel systems using a variable neighborhood search algorithm


Reliability Engineering and System Safety 92 (2007) 323–331 www.elsevier.com/locate/ress

Redundancy allocation of series-parallel systems using a variable neighborhood search algorithm Yun-Chia Liang, Yi-Ching Chen Department of Industrial Engineering and Management, Yuan Ze University, No 135 Yuan-Tung Road, Chung-Li, Taoyuan County, Taiwan 320, ROC Available online 6 June 2006

Abstract

This paper applies a meta-heuristic algorithm, variable neighborhood search (VNS), to the redundancy allocation problem (RAP). The RAP, an NP-hard problem, has attracted much prior research, generally in a restricted form where each subsystem must consist of identical components. Newer meta-heuristic methods overcome this limitation and offer a practical way to solve large instances of the relaxed RAP, where different components can be used in parallel. The authors' previously published work has shown promise for the variable neighborhood descent (VND) method, the simplest of the VNS variations, on the RAP. The VNS method itself has not yet been used in reliability design, but it fits combinatorial problems with natural neighborhood structures, as is the case for the RAP. The authors therefore extended their work to develop a VNS algorithm for the RAP and tested it on a set of well-known benchmark problems from the literature. Results on 33 test instances, ranging from lightly to severely constrained conditions, show that VNS improves on the performance of VND and provides competitive solution quality at economical computational expense in comparison with the best-known heuristics, including ant colony optimization, genetic algorithms, and tabu search. © 2006 Elsevier Ltd. All rights reserved.

Keywords: Variable neighborhood search; Redundancy allocation problem; Series-parallel system; Combinatorial optimization

1. Introduction

A series system of s independent k-out-of-n:G subsystems is the most studied configuration among redundancy allocation problem (RAP) variations. Such series-parallel systems have been widely employed in practice due to the rapidly increasing need for reliability and security. The RAP is NP-hard [1] and has been studied in many forms, as summarized in Tillman et al. [2] and Kuo and Prasad [3]. Exact methods for the RAP include dynamic programming [4–7], integer programming [8–11], and mixed-integer and nonlinear programming [12]. Hsieh [13] developed a linear programming approach to approximate the integer nonlinear RAP; it can be considered the first mathematical programming study allowing a mix of component types within subsystems. Ramirez-Marquez et al. [14] reformulated the objective of a series-parallel RAP as maximizing the minimum subsystem reliability, and then solved the problem using integer programming; that paper was the first to address the mix of component types with integer programming. Since the practical use of exact methods is limited by growing problem size and by the assumption of identical component types in each subsystem, heuristics have become a popular alternative. Heuristic approaches to the RAP include the genetic algorithm (GA) [15–17], ant colony optimization (ACO) [18–23], tabu search (TS) [24,25], a hybrid neural network (NN) and GA [26], a hybrid of ACO with TS [21], variable neighborhood descent (VND) [27], and the great deluge algorithm (GDA) [28]. In particular, for the problem of maximizing system reliability, past studies such as the TS proposed by Kulturel-Konak et al. [25], the GA developed by Coit and Smith [15,16], the ACO of Liang and Smith [18,22], the LP approach suggested by Hsieh [13], and the VND of the authors' previous work [27] have provided comparable results over a set of 14-subsystem benchmark problems proposed in Refs. [5,6].

Corresponding author fax: +886 3 4638907. E-mail addresses: [email protected] (Y.-C. Liang), [email protected] (Y.-C. Chen). doi:10.1016/j.ress.2006.04.013


Our study applies one of the latest meta-heuristics, variable neighborhood search (VNS), to the system reliability maximization RAP. The VNS method, first introduced by Mladenović [29], explores the search space through systematic changes of neighborhood. Owing to its simplicity and ease of implementation, VNS has been successfully applied to diverse combinatorial optimization problems, e.g., graph coloring [30], multi-source problems [31], p-median problems [32], clustering [33], and minimum spanning tree problems [34]. Hansen and Mladenović [35,36] provide comprehensive surveys of the state of the art in VNS and its variations. The authors' previously published work [27] has shown the promise of the VND method, the simplest of the VNS variations, for solving the RAP. This study therefore employs advanced schemes of VNS, such as the shaking operation and redesigned neighborhood structures, combined with an adaptive penalty function, to further improve on the performance of VND.

This paper is organized as follows. Section 2 describes the mathematical model and basic assumptions of the RAP, and Section 3 introduces the variable neighborhood search algorithm for the RAP. Section 4 provides computational results for a set of benchmark problems. Finally, Section 5 contains concluding remarks.

2. Problem definition

2.1. Notation

2.1.1. Redundancy allocation problem

k             minimum number of components required for a pure parallel system to function
n             total number of components used in a pure parallel system
k-out-of-n:G  a system that functions when at least k of its n components function
s             number of subsystems
i             index for subsystems, i = 1, 2, ..., s
j             index for components in a subsystem
R             overall reliability of the series-parallel system
C             cost constraint
W             weight constraint
a_i           number of available component choices for subsystem i
r_ij          reliability of component j available for subsystem i
c_ij          cost of component j available for subsystem i
w_ij          weight of component j available for subsystem i
y_ij          quantity of component j used in subsystem i
y_i           = (y_i1, ..., y_ia_i), an ordered set of the y_ij
n_i           = ∑_{j=1}^{a_i} y_ij, total number of components used in subsystem i
n_max         maximum number of components that can be in parallel (user assigned)
k_i           minimum number of components in parallel required for subsystem i to function
R_i(y_i|k_i)  reliability of subsystem i, given k_i
C_i(y_i)      total cost of subsystem i
W_i(y_i)      total weight of subsystem i
R_u           unpenalized system reliability of solution u
R_up          penalized system reliability of solution u
C_u           total system cost of solution u
W_u           total system weight of solution u

2.1.2. Variable neighborhood search

l             index for neighborhood structures, l = 1, ..., l_max
N_l           a set of neighborhood structures
y             current solution
y′            the neighboring solution generated by a shaking operation
y″            the best neighboring solution of y′
m             an iteration counter
e             a constant controlling the lower bound on the number of components selected in the initial solution
f             a constant controlling the upper bound on the number of components selected in the initial solution
s             a random number from the uniform distribution on [k_i + e, n_max − f]
n_i           total number of components in parallel for subsystem i
γ_C           amplification parameter controlling the magnitude of the cost-constraint penalty
γ_W           amplification parameter controlling the magnitude of the weight-constraint penalty
ρ_m           feasibility ratio at iteration m
ρ_0           a pre-specified threshold on the feasibility ratio

2.2. Redundancy allocation problem

Fig. 1. A series-parallel system configuration.

As illustrated in Fig. 1, a k-out-of-n:G subsystem i functions properly if at least k_i of its n_i components are operational. In the formulation of a series-parallel system problem, for each subsystem, multiple component choices

are used in parallel, and each subsystem may select different components. The system reliability maximization RAP thus selects the optimal combination of components and redundancy levels, subject to system-level constraints on cost (C) and weight (W), while maximizing system reliability. The mathematical model is:

    Max R = ∏_{i=1}^{s} R_i(y_i | k_i)    (1)

subject to the constraints

    ∑_{i=1}^{s} C_i(y_i) ≤ C,    (2)

    ∑_{i=1}^{s} W_i(y_i) ≤ W,    (3)

where R denotes the overall reliability of the series-parallel system, calculated as the product of the s subsystem reliabilities, ∏_{i=1}^{s} R_i(y_i | k_i). R_i(y_i | k_i) is the reliability of subsystem i, given k_i, the minimum number of components in parallel required for subsystem i to function, and y_i, the assignment of components to subsystem i. C_i(y_i) and W_i(y_i) denote the total cost and total weight of subsystem i, respectively. Eqs. (2) and (3) therefore express system cost and system weight as linear combinations of component costs and weights. In addition, if the maximum number of components allowed in parallel is pre-determined, the following constraint is added:

    k_i ≤ ∑_{j=1}^{a_i} y_ij ≤ n_max,  ∀ i = 1, 2, ..., s,    (4)

where a_i is the number of available component choices for subsystem i and y_ij denotes the quantity of component j used in subsystem i. The total number of components used in subsystem i, ∑_{j=1}^{a_i} y_ij, must therefore lie between k_i and n_max, the maximum number of components that can be used in parallel, as shown in Eq. (4). In addition, some typical assumptions are considered as follows:

- the states of the components and the system are either good or failed;
- the supply of components is unlimited;
- component reliability, weight, and cost are known and deterministic;
- failed components do not damage the system and are not repaired;
- the state of each component is independent of the states of the other components;
- components are active redundant, i.e., the failure rates of components not in use are the same as when in use.
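The model in Eqs. (1)–(3) under these assumptions can be illustrated with a short sketch. This is an illustration only, not the authors' code, and all numbers are invented; it covers the common special case k_i = 1, in which a subsystem functions if any one of its components does:

```python
from math import prod

def subsystem_reliability(counts, rels):
    # k_i = 1: the subsystem fails only if every component in it fails
    return 1.0 - prod((1.0 - r) ** q for r, q in zip(rels, counts))

def evaluate(y, r, c, w):
    # y[i][j]: quantity of component j in subsystem i (the y_ij of Eq. (4))
    R = prod(subsystem_reliability(y[i], r[i]) for i in range(len(y)))
    cost = sum(q * cij for yi, ci in zip(y, c) for q, cij in zip(yi, ci))
    weight = sum(q * wij for yi, wi in zip(y, w) for q, wij in zip(yi, wi))
    return R, cost, weight

# Two subsystems, two component choices each (illustrative numbers only)
y = [[2, 0], [1, 1]]
r = [[0.90, 0.80], [0.70, 0.60]]
c = [[2, 1], [3, 2]]
w = [[4, 3], [2, 5]]
R, cost, weight = evaluate(y, r, c, w)
# Feasibility per Eqs. (2)-(3) is then simply cost <= C and weight <= W
```

Here R = 0.99 × 0.88 = 0.8712, cost = 9, and weight = 15; checking these totals against C and W reproduces constraints (2) and (3).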


3. Variable neighborhood search algorithm

Hansen and Mladenović [35,36] describe VNS-type algorithms as meta-heuristics that employ a set of neighborhood search methods to find the local optimum of each neighborhood iteratively, in the hope of reaching the global optimum in the end. Liang and Wu [27] proposed a VND algorithm, the simplest of the VNS-type algorithms, to solve the RAP. The major differences between the VND of [27] and the VNS in this research are the shaking operation, the redesign of the neighborhood structures, and the adaptive penalty function. In particular, the shaking operator adds randomization to the algorithm: the perturbation of the current solution gives VNS a good opportunity to escape from a local optimum, whereas a deterministic algorithm such as VND has no such mechanism. The proposed VNS algorithm for the RAP (denoted VNS_RAP) is discussed below.

During the initialization step, a set of neighborhood structures (N_l, l = 1, ..., l_max) and the sequence of their implementation are determined. Each neighborhood structure can be considered the set of solutions obtained by applying a single modification operator to a given solution, while the implementation sequence is the order in which the modification operators are applied. An initial solution is then generated and set as the current solution y. The main search loop starts with the shaking operation, i.e., randomly generating a neighboring solution y′ of y in the first neighborhood, N_1; a complete neighborhood search of y′ is then performed. If the best neighboring solution y″ in this neighborhood is better than the current solution y, then y is replaced by y″ and l is reset to 1, i.e., the search restarts from the first neighborhood with the updated y. Otherwise, the search proceeds to the next neighborhood with the current y. The process continues until all neighborhoods have been visited and no further improvement can be obtained for the current solution. Then l is reset to 1, the iteration counter m is incremented by one, and a new iteration starts. The procedure continues until the pre-specified maximum number of iterations is reached. In addition, to take full advantage of the neighborhood search, every time a feasible neighboring solution is found, the best feasible solution is checked and updated if necessary. When the search process stops, the final best feasible solution is used for comparison. The VNS_RAP algorithm can be summarized in the following pseudo code:

(Initialization)
Select a set of neighborhood structures N_l, l = 1, ..., l_max
Determine the implementation sequence of the neighborhood structures
Generate an initial solution denoted by y
Set l = 1 and m = 1


(Search Procedure)
Repeat the following steps until the maximum number of iterations is reached:
  Shaking: randomly generate a solution y′ from the lth neighborhood of y
  Neighborhood search: find the best neighboring solution of y′ in the lth neighborhood and denote it y″
  If y″ is better than y: set y ← y″, reset l = 1, and go to Shaking
  Else if l = l_max: reset l = 1, set m = m + 1, and go to Shaking
  Else: set l = l + 1 and go to Shaking

Four key factors underlie the proposed VNS_RAP algorithm: how to generate a proper initial solution, how to define a set of neighborhood structures, how to perform the shaking operation, and how to design a penalty function to evaluate infeasible solutions. All four are discussed in the subsequent sections.

3.1. Initial solution generation

To generate an initial solution, s integers between k_i + e and n_max − f (inclusive) are randomly selected to represent the total number of components in parallel, n_i, for each subsystem; e and f denote the constants that control the range of component selection. The n_i components are then assigned to the a_i different types using a uniform distribution. For example, consider a series-parallel system with four subsystems and assume that three component choices, denoted I, II, and III, are available in each subsystem. The parameters that control the number of components, n_max, k_i, e, and f, are set to 8, 1, 1, and 4, respectively; the number of components in each subsystem is therefore drawn from a uniform distribution over [2, 4]. If the numbers of components in the subsystems are 2, 3, 4, and 3, in that order, one possible solution is as illustrated in Fig. 2.

Fig. 2. An example of an initial solution with four subsystems.

3.2. Neighborhood structures

This section describes three types of neighborhood structures and illustrates each with an example. All neighborhood structures change one subsystem at a time, so only the reliability of the changed subsystem has to be recalculated when evaluating the system reliability of a neighboring solution.

Fig. 3. An example of neighborhood type 1.

Type 1 structure. This structure replaces one type of component with a different type in the same subsystem, i.e., y_ij → y_ij + 1 and y_im → y_im − 1 for j ≠ m and y_im ≥ 1. Each component available in a subsystem was independently considered, and a "blank" component was added to the list. The "blank" component is defined as the one with zero

reliability, weight, and cost. All possibilities were enumerated and all subsystems were considered. For example, suppose three component types (I, II, and III) are available in a particular subsystem and that Fig. 3(a) represents the current allocation, with one type I component and one type II component. A total of nine possible neighboring solutions is then derived, as illustrated in Figs. 3(b)–(j), where shaded boxes represent added components and dashed boxes mark removed components. In Figs. 3(b) and (e), a "blank" component replaces components I and II, respectively, while in Figs. 3(h)–(j) an originally blank position is filled by components I, II, and III, respectively.

Type 2 structure. This structure simultaneously exchanges two components in the same subsystem for components of different types. Each component available in a subsystem was independently considered. For instance, the current allocation is shown in Fig. 4(a), and one possible neighboring solution in Fig. 4(b), where the shaded box is the added component: component III in subsystem 2 was replaced by a component of type I, and another type III component was added to the third slot of the same subsystem.

Fig. 4. An example of neighborhood type 2.

Type 3 structure. This structure simultaneously substitutes three components in the same subsystem with components of different types. All possibilities were enumerated and all subsystems were considered. The example in Fig. 5 assumes three component options (denoted by I, II, and III) available in each subsystem. Fig. 5(a) shows the current status: in subsystem i, a component of type III replaces one of the type II components, component I is removed, and a new component III is added. Fig. 5(b) shows the resulting subsystem i, where the new components are shaded and the dashed box marks the eliminated one.

Fig. 5. An example of neighborhood type 3.

3.3. Shaking

The search of each neighborhood starts with a shaking operator. Similar to mutation in a GA, this operation provides stochastic changes of neighborhood that prevent the search from being trapped in a local optimum. The shaking step is implemented by randomly generating a neighboring solution y′ of the current one, y, based on the neighborhood structure presently visited.

3.4. Penalty function

Coit and Smith [16] suggested the use of a penalty function to evaluate the objective of a constrained problem. That is, an appropriately designed penalty function encourages the algorithm to explore the feasible region and the infeasible region near the border of the feasible area, and discourages, but permits, search further into the infeasible region. After generating all neighboring solutions, the VNS_RAP algorithm uses a penalty function to evaluate infeasible solutions as follows:

    R_up = R_u · (C/C_u)^γ_C · (W/W_u)^γ_W    (5)

where C_u and W_u are the total system cost and weight of solution u, respectively, and R_u is the unpenalized reliability of solution u calculated using Eq. (1). The penalized reliability R_up of a system that exceeds the cost constraint C and/or the weight constraint W is obtained by multiplying the unpenalized objective by the two penalty factors (C/C_u)^γ_C and (W/W_u)^γ_W, where the exponents γ_C and γ_W are amplification parameters. To avoid a significant violation of one constraint being hidden by another constraint with plenty of slack, each multiplicative penalty factor is applied only when the corresponding constraint is violated. In the adaptive penalty function, γ_C and γ_W are adjusted according to the search history of the previous iteration. At each iteration, the number of feasible solutions is divided by the number of solutions searched (both feasible and infeasible); this feasibility ratio at iteration m is denoted ρ_m and is compared with a pre-specified threshold ρ_0. If ρ_m ≤ ρ_0, then γ_C^(m+1) = γ_C^m (1 + ρ_m/2) and γ_W^(m+1) = γ_W^m (1 + ρ_m/2), which increases the magnitude of the penalty on heavily infeasible solutions in order to move toward the border of the feasible region; otherwise, γ_C^(m+1) = γ_C^m (1 + ρ_m)/2 and γ_W^(m+1) = γ_W^m (1 + ρ_m)/2, which encourages movement further into the infeasible region.
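The penalty of Eq. (5) and the adaptive update of γ_C and γ_W can be sketched as follows. This is a minimal illustration, not the authors' code; note that each penalty factor is applied only when its constraint is violated:

```python
def penalized_reliability(R_u, C_u, W_u, C, W, gamma_C, gamma_W):
    # Eq. (5), with each factor applied only if the constraint is violated
    R_up = R_u
    if C_u > C:
        R_up *= (C / C_u) ** gamma_C
    if W_u > W:
        R_up *= (W / W_u) ** gamma_W
    return R_up

def update_amplifiers(gamma_C, gamma_W, rho_m, rho_0):
    # rho_m: fraction of feasible solutions among all solutions searched at
    # iteration m; rho_0: pre-specified feasibility threshold
    if rho_m <= rho_0:
        factor = 1 + rho_m / 2     # strengthen penalties: move toward feasibility
    else:
        factor = (1 + rho_m) / 2   # weaken penalties: allow infeasible exploration
    return gamma_C * factor, gamma_W * factor
```

With the initial values used later in Section 4 (γ_C = 1, γ_W = 0.4, ρ_0 = 0.15), a feasibility ratio of ρ_m = 0.1 would strengthen both amplifiers by a factor of 1.05.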

4. Computational results

The VNS_RAP algorithm was coded in Borland C++ Builder 6.0 and run on an Intel Pentium IV 2.8 GHz PC with 512 MB RAM. All computations used floating-point precision without rounding or truncating values; the system reliability of the final solution was rounded to five decimal places for a fair comparison with results in the literature. The benchmark problem adopted in this study was originally proposed by Fyffe et al. [5], with a system cost constraint C = 130 and a system weight constraint W = 170. Nakagawa and Miyazaki [6] then developed 33 variations of the original problem by maintaining C = 130 and varying W incrementally from 191 down to 159. The reliability, cost, and weight of the component choices are shown in Table 1. In [5,6], the optimization approaches allow only identical components to be placed in redundancy; for the VNS_RAP approach, however, different types are allowed to reside in parallel (with n_max = 8 and k_i = 1 for all subsystems). Among prior RAP works in which component mixing was allowed on these benchmarks, the study compares against the GA of [16], the linear programming approximation (LP) of [13], the TS of [25], the ACO of [22], and the VND of [27]. Ten runs of each algorithm except LP were made with different random number seeds for each problem instance, and the best feasible solution over the ten runs was used for comparison.

The parameter settings of the VNS_RAP algorithm were obtained from preliminary research; the detailed design of experiments can be found in [37]. Proper parameter setting plays a crucial role in VNS_RAP. For example, the parameters e and f control the lower and


Table 1
Reliability, cost, and weight data of component options in test problems [5]

Subsystem   Alternative 1      Alternative 2      Alternative 3      Alternative 4
(i)         r_i1  c_i1  w_i1   r_i2  c_i2  w_i2   r_i3  c_i3  w_i3   r_i4  c_i4  w_i4
1           0.90   1     3     0.93   1     4     0.91   2     2     0.95   2     5
2           0.95   2     8     0.94   1    10     0.93   1     9
3           0.85   2     7     0.90   3     5     0.87   1     6     0.92   4     4
4           0.83   3     5     0.87   4     6     0.85   5     4
5           0.94   2     4     0.93   2     3     0.95   3     5
6           0.99   3     5     0.98   3     4     0.97   2     5     0.96   2     4
7           0.91   4     7     0.92   4     8     0.94   5     9
8           0.81   3     4     0.90   5     7     0.91   6     6
9           0.97   2     8     0.99   3     9     0.96   4     7     0.91   3     8
10          0.83   4     6     0.85   4     5     0.90   5     6
11          0.94   3     5     0.95   4     6     0.96   5     6
12          0.79   2     4     0.82   3     5     0.85   4     6     0.90   5     7
13          0.98   2     5     0.99   3     5     0.97   2     6
14          0.90   4     6     0.92   4     7     0.95   5     6     0.99   6     9

upper bounds on the number of components in the initial solution. If both bounds are large, the initial solution will be highly infeasible; if both e and f are small, an inferior feasible solution will be generated. Either case may cost the algorithm extra computational effort to reach the optimum. The best parameter settings can be summarized as follows. The best sequence of neighborhood structures was Type 1–Type 2–Type 3 (as defined in Section 3.2), i.e., moving from the smallest to the largest neighborhood. The generation of an initial solution was controlled within the range k_i + e to n_max − f (inclusive), which determined the number of components in parallel in each subsystem; the algorithm used (e = 0, f = 4), in other words, one to four components were randomly selected for each subsystem. The test data indicate that violations of the weight constraint occur more often than violations of the cost constraint; therefore γ_C = 1 and γ_W = 0.4 were selected as the initial values of these parameters, and the feasibility threshold ρ_0 was set to 0.15. The stopping criterion was 70 iterations.

The best solutions of VNS_RAP and the other methods, VND [27], GA [16], LP [13], ACO [22], and TS [25], are compared in Table 2. In comparison with the only mathematical programming approach, VNS_RAP outperformed LP in all instances. Compared with GA, VNS_RAP found an equal or better solution in 32 of the 33 instances, and it performed equal to or better than ACO and TS in 28 instances each. Moreover, when examining the improvement from VND to VNS, Table 2 shows that VNS_RAP was superior to VND in all instances. This indicates that the stochastic change of neighborhood by the shaking operation and the redesign of

neighborhood structures helped improve the performance. Moreover, VNS_RAP reached the best-known solution in 27 instances, while TS found 26, ACO attained 24, and GA obtained 8; the two weakest methods, VND and LP, did not achieve any of the best-known solutions. Among the three best-performing algorithms, VNS_RAP, ACO, and TS, ACO showed weakness on the less constrained instances, while TS had difficulty finding the best-known solutions particularly on the highly constrained instances. VNS_RAP, on the other hand, provided the most robust performance over all types of instances.

The performance of each method can be further analyzed using the maximum possible deviation (MPD), calculated as 100% × (best-known solution − best solution found by the heuristic)/(1 − best solution found by the heuristic). The MPD measures the percentage deviation from the best-known solution, accounting for the fact that reliability is bounded by one. Fig. 6 illustrates the MPDs of all methods. The two mediocre methods, VND and LP, showed different trends as the problem constraints grew more severe. ACO, TS, and VNS_RAP stay below 4% in all instances, and once again VNS_RAP was the most robust method on this measure. Table 3 lists the average MPD over the 33 instances. VNS_RAP was superior to the other methods with an average MPD of 0.12213%, less than half the MPDs of the two competitive methods, ACO and TS. Considering the average of the best solutions over all instances (denoted Avg R in Table 3), VNS_RAP also provided the best average performance, 0.97393.

Table 4 summarizes the details of the best solutions obtained by VNS_RAP on the 33 instances. The results are listed in descending order of the system weight constraint, i.e., from the least to the most constrained. The numbers in


Table 2
Comparison of the best solutions among heuristics

W     VND      GA       LP       ACO      TS       VNS_RAP
191   0.98506  0.98675  0.98671  0.98675  0.98681  0.98681
190   0.98358  0.98603  0.98632  0.98591  0.98642  0.98642
189   0.98348  0.98556  0.98572  0.98577  0.98592  0.98592
188   0.98302  0.98503  0.98503  0.98533  0.98538  0.98487
187   0.98131  0.98429  0.98415  0.98469  0.98469  0.98467
186   0.98082  0.98362  0.98388  0.98380  0.98418  0.98418
185   0.98034  0.98311  0.98339  0.98351  0.98351  0.98351
184   0.98046  0.98239  0.98220  0.98299  0.98299  0.98299
183   0.97919  0.98190  0.98147  0.98221  0.98226  0.98226
182   0.97940  0.98102  0.97969  0.98147  0.98152  0.98147
181   0.97850  0.98006  0.97928  0.98068  0.98103  0.98103
180   0.97732  0.97942  0.97833  0.98029  0.98029  0.98029
179   0.97670  0.97906  0.97806  0.97951  0.97951  0.97951
178   0.97600  0.97810  0.97688  0.97840  0.97840  0.97838
177   0.97540  0.97715  0.97540  0.97760  0.97747  0.97760
176   0.97379  0.97642  0.97498  0.97649  0.97669  0.97669
175   0.97389  0.97552  0.97350  0.97571  0.97571  0.97571
174   0.97275  0.97435  0.97233  0.97493  0.97479  0.97493
173   0.97039  0.97362  0.97053  0.97383  0.97383  0.97381
172   0.96945  0.97266  0.96923  0.97303  0.97303  0.97303
171   0.96872  0.97186  0.96790  0.97193  0.97193  0.97193
170   0.96770  0.97076  0.96678  0.97076  0.97076  0.97076
169   0.96495  0.96922  0.96561  0.96929  0.96929  0.96929
168   0.96374  0.96813  0.96415  0.96813  0.96813  0.96813
167   0.96302  0.96634  0.96299  0.96634  0.96634  0.96634
166   0.96200  0.96504  0.96121  0.96504  0.96504  0.96504
165   0.96128  0.96371  0.95992  0.96371  0.96371  0.96371
164   0.96051  0.96242  0.95860  0.96242  0.96242  0.96242
163   0.95942  0.96064  0.95732  0.96064  0.95998  0.96064
162   0.95515  0.95912  0.95555  0.95919  0.95821  0.95919
161   0.95682  0.95804  0.95410  0.95804  0.95692  0.95804
160   0.95305  0.95567  0.95295  0.95571  0.95560  0.95567
159   0.95094  0.95432  0.95080  0.95457  0.95433  0.95457

Fig. 6. Comparison of MPDs from the best-known solutions (y-axis: MPD values in %; x-axis: instances 1–33, from the least to the most constrained; series: VND, GA, LP, ACO, TS, VNS).

the configuration column indicate which component alternatives were selected in each subsystem, and the number of digits indicates the number of components used. For instance, in the configuration of the first instance (W = 191), "333" means that three components of alternative 3 were chosen in subsystem 1, and "34" denotes that one component


Table 3
Comparison of average system reliability and average MPD over 33 instances

Measure       VND       GA       LP       ACO      TS       VNS_RAP
Avg MPD (%)   10.69253  1.41322  7.23292  0.33413  0.25757  0.12213
Avg R         0.97116   0.97368  0.97167  0.97390  0.97385  0.97393

Table 4
Detailed results of VNS_RAP

Cost  Weight  Reliability  Stdev    CPU time (s)  Configuration
130   191     0.98681      0.00057  0.1734        333,11,444,3333,222,22,111,1111,12,233,33,1111,11,34
130   190     0.98642      0.00070  0.1735        333,11,444,3333,222,22,111,1111,11,233,33,1111,12,34
130   189     0.98592      0.00081  0.2062        333,11,444,3333,222,22,111,1111,23,233,13,1111,11,34
130   188     0.98487      0.00054  0.1703        333,11,444,333,222,12,111,1111,23,333,33,1111,22,34
130   187     0.98467      0.00040  0.1626        333,11,444,333,222,22,111,1111,23,333,33,1111,22,34
129   186     0.98418      0.00035  0.1797        333,11,444,333,222,22,111,1111,23,233,33,1111,22,34
130   185     0.98351      0.00109  0.1641        333,11,444,3333,222,22,111,1111,23,223,13,1111,22,33
130   184     0.98299      0.00039  0.1500        333,11,444,333,222,22,111,1111,33,233,33,1111,22,34
129   183     0.98226      0.00052  0.1578        333,11,444,333,222,22,111,1111,33,223,33,1111,22,34
127   182     0.98147      0.00100  0.1438        333,11,444,333,222,22,111,1111,23,223,33,1111,22,33
129   181     0.98103      0.00120  0.1422        333,11,444,333,222,22,111,1111,33,233,33,1111,22,33
128   180     0.98029      0.00080  0.1453        333,11,444,333,222,22,111,1111,33,223,33,1111,22,33
126   179     0.97951      0.00067  0.1359        333,11,444,333,222,22,111,1111,33,223,13,1111,22,33
128   178     0.97838      0.00071  0.1328        333,11,444,333,222,22,111,113,33,223,33,1111,22,33
126   177     0.97760      0.00090  0.1328        333,11,444,333,222,22,111,113,33,223,13,1111,22,33
124   176     0.97669      0.00018  0.1297        333,11,444,333,222,22,33,1111,33,223,13,1111,22,33
125   175     0.97571      0.00052  0.1328        333,11,444,333,222,22,13,1111,33,223,33,1111,22,33
123   174     0.97493      0.00120  0.1219        333,11,444,333,222,22,13,1111,33,223,13,1111,22,33
125   173     0.97381      0.00103  0.1172        333,11,444,333,222,22,13,113,33,223,33,1111,22,33
123   172     0.97303      0.00091  0.1188        333,11,444,333,222,22,13,113,33,223,13,1111,22,33
122   171     0.97193      0.00102  0.1125        333,11,444,333,222,22,13,113,33,222,13,1111,22,33
120   170     0.97076      0.00070  0.1110        333,11,444,333,222,22,13,113,33,222,11,1111,22,33
121   169     0.96929      0.00382  0.1140        333,11,444,333,222,22,11,113,33,222,13,1111,22,33
119   168     0.96813      0.00096  0.1094        333,11,444,333,222,22,11,113,33,222,11,1111,22,33
118   167     0.96634      0.00217  0.1109        333,11,444,333,22,22,13,113,33,222,11,1111,22,33
116   166     0.96504      0.00061  0.1094        333,11,44,333,222,22,13,113,33,222,11,1111,22,33
117   165     0.96371      0.00084  0.1109        333,11,444,333,22,22,11,113,33,222,11,1111,22,33
115   164     0.96242      0.00132  0.1094        333,11,44,333,222,22,11,113,33,222,11,1111,22,33
114   163     0.96064      0.00155  0.1172        333,11,44,333,22,22,13,113,33,222,11,1111,22,33
115   162     0.95919      0.00176  0.1062        333,11,44,333,22,22,11,113,33,222,13,1111,22,33
113   161     0.95804      0.00254  0.1016        333,11,44,333,22,22,11,113,33,222,11,1111,22,33
114   160     0.95567      0.00136  0.1047        333,11,44,333,22,22,11,113,33,222,11,114,22,33
110   159     0.95457      0.00127  0.1031        333,11,44,333,22,22,11,111,33,222,11,1111,22,33

type 3 and one of component alternative 4 were used in the last subsystem, etc. Table 4 also displays the standard deviation, denoted as Stdev, over ten runs for each instance. The low standard deviations of VNS_RAP can be interpreted as a sign of insensitivity to the random number seed and the initial solution. When considering computational expense, the number of solutions generated gives a better idea of how efficiently an algorithm performs. The GA in [16] generates 48,040 solutions (a population of 40 chromosomes over 1200 generations), the ACO algorithm in [22] needs about 100,000 evaluations or more (a colony of 100 ants, up to 1000 iterations, plus local search), TS in [25] evaluated an average of 350,000 solutions, and the number of solutions searched by VND

[27] was approximately 49,000 on average. The average number of evaluations in VNS_RAP was around 120,000. VNS_RAP therefore requires a number of evaluations comparable to that of ACO, and about one-third that of TS, to achieve competitive performance. It should be noted that only a few of the studies reported in the literature provide the CPU time of their algorithms. VND [27] was implemented in Borland C++ Builder 6.0 and run on an Intel Pentium IV 2.8 GHz PC with 512 MB RAM; its reported average CPU time was about 0.73 s. The other reference that indicated CPU time is [13]: on a Pentium III 500 MHz PC, the average CPU time of the LP algorithm was within one second. The average CPU time of VNS_RAP was less than 0.14 s, which can be considered efficient for solving such a large system. As displayed in Table 4, the CPU time decreases as the instance


becomes more constrained, which is a good indication of a well-designed algorithm. Ultimately, VNS_RAP showed its merit in solving highly constrained optimization problems.
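The evaluation budgets quoted above can be compared directly. The short calculation below simply restates the paper's figures; the breakdown of the GA total into an initial population of 40 plus 40 offspring per generation is an inference from the stated parameters, not from the source:

```python
# Evaluation budgets (solutions generated per run) reported in the text.
ga_evals = 40 + 40 * 1200   # GA [16]: population of 40 over 1200 generations
aco_evals = 100_000         # ACO [22]: roughly, colony of 100 ants up to 1000 iterations
ts_evals = 350_000          # TS [25]: average
vnd_evals = 49_000          # VND [27]: approximate average
vns_evals = 120_000         # VNS_RAP: approximate average

print(ga_evals)                        # 48040
print(round(vns_evals / ts_evals, 2))  # 0.34, i.e. about one-third of TS
```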

5. Conclusions

This paper applies a variable neighborhood search meta-heuristic to the series-parallel redundancy allocation problem. The RAP is a well-known NP-hard problem that has generally been studied in a restricted form, where each subsystem must consist of identical components in parallel to keep computations tractable. Newer meta-heuristic methods overcome this limitation and offer a practical way to solve large instances of the relaxed RAP, in which different components can be placed in parallel. Given the well-structured neighborhoods of the RAP, a variable neighborhood search method, which had not previously been applied to reliability design, is likely to be more effective and more efficient than methods that do not exploit this structure. A VNS algorithm with an adaptive penalty function was proposed and tested on a set of well-known benchmark problems from the literature. The results show that the shaking operation, the newly designed neighborhoods, and the adaptive penalty function helped improve performance, and that the algorithm is competitive with the other best heuristics in solution quality while remaining efficient in computational expense.
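The shaking operation and neighborhood changes referred to above follow the generic VNS scheme of Hansen and Mladenović [35]. The sketch below illustrates that scheme only; the neighborhood operators, local search, and objective (which would fold in the adaptive penalty for constraint violations) are placeholders, not the authors' exact VNS_RAP implementation:

```python
import random

def vns(x0, neighborhoods, local_search, f, max_iters=1000):
    """Generic VNS: shake in neighborhood N_k, apply local search, and
    move (resetting k to the first neighborhood) only on improvement."""
    x_best = x0
    for _ in range(max_iters):
        k = 0
        while k < len(neighborhoods):
            x_shaken = random.choice(neighborhoods[k](x_best))  # shaking step
            x_local = local_search(x_shaken)                    # local descent
            if f(x_local) > f(x_best):    # maximizing (penalized) objective
                x_best, k = x_local, 0    # improvement: restart from N_1
            else:
                k += 1                    # no improvement: try next neighborhood
    return x_best

# Toy usage: maximize f(x) = -(x - 7)**2 over the integers.
f = lambda x: -(x - 7) ** 2
nbhds = [lambda x: [x - 1, x + 1],      # N_1: small moves
         lambda x: [x - 3, x + 3]]      # N_2: larger moves
identity_ls = lambda x: x               # trivial local search for this toy
print(vns(0, nbhds, identity_ls, f, max_iters=200))  # 7
```

Because moves are accepted only on strict improvement, the search settles on the optimum once no neighborhood offers a better point; in VNS_RAP the neighborhoods would instead add, delete, or exchange components within subsystems.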

References

[1] Chern MS. On the computational complexity of reliability redundancy allocation in a series system. Oper Res Lett 1992;11:309–15.
[2] Tillman FA, Hwang CL, Kuo W. Optimization techniques for system reliability with redundancy—a review. IEEE Trans Reliab 1977;R-26(3):148–55.
[3] Kuo W, Prasad VR. An annotated overview of system-reliability optimization. IEEE Trans Reliab 2000;49(2):176–87.
[4] Bellman R, Dreyfus S. Dynamic programming and the reliability of multicomponent devices. Oper Res 1958;6:200–6.
[5] Fyffe DE, Hines WW, Lee NK. System reliability allocation and a computational algorithm. IEEE Trans Reliab 1968;R-17(2):64–9.
[6] Nakagawa Y, Miyazaki S. Surrogate constraints algorithm for reliability optimization problems with two constraints. IEEE Trans Reliab 1981;R-30(2):175–80.
[7] Li J. A bound dynamic programming for solving reliability optimization. Microelectr Reliab 1996;36(10):1515–20.
[8] Ghare PM, Taylor RE. Optimal redundancy for reliability in series systems. Oper Res 1969;17:838–47.
[9] Bulfin RL, Liu CY. Optimal allocation of redundant components for large systems. IEEE Trans Reliab 1985;R-34(3):241–7.
[10] Misra KB, Sharma U. An efficient algorithm to solve integer-programming problems arising in system-reliability design. IEEE Trans Reliab 1991;40(1):81–91.
[11] Coit DW, Liu J. System reliability optimization with k-out-of-n subsystems. Int J Reliab Qual Safety Eng 2000;7(2):129–43.
[12] Tillman FA, Hwang CL, Kuo W. Determining component reliability and redundancy for optimum system reliability. IEEE Trans Reliab 1977;R-26(3):162–5.
[13] Hsieh YC. A linear approximation for redundant reliability problems with multiple component choices. Comput Ind Eng 2002;44:91–103.
[14] Ramirez-Marquez J, Coit D, Konak A. Redundancy allocation for series-parallel systems using a max–min approach. IIE Trans 2004;36(9):891–8.
[15] Coit DW, Smith AE. Reliability optimization of series-parallel systems using a genetic algorithm. IEEE Trans Reliab 1996;45(2):254–60.
[16] Coit DW, Smith AE. Penalty guided genetic search for reliability design optimization. Comput Ind Eng 1996;30(4):895–904.
[17] Levitin G, Lisnianski A, Ben-Haim H, Elmakis D. Redundancy optimization for series-parallel multi-state systems. IEEE Trans Reliab 1998;47(2):165–72.
[18] Liang Y-C, Smith AE. An ant system approach to redundancy allocation. In: Proceedings of the 1999 congress on evolutionary computation, Washington DC, USA, 1999. p. 1478–84.
[19] Liang Y-C. Ant colony optimization approach to combinatorial problems. Unpublished Ph.D. dissertation, Auburn University, USA, 2001.
[20] Shelokar P, Jayaraman VK, Kulkarni BD. Ant algorithm for single and multiobjective reliability optimization problems. Qual Reliab Eng Int 2002;18:497–514.
[21] Huang Y-C. Optimization of the series-parallel system with the redundancy allocation problem using a hybrid ant colony algorithm. Unpublished Master Thesis, Yuan Ze University, Taiwan ROC, 2003 [in Chinese].
[22] Liang Y-C, Smith AE. An ant colony optimization algorithm for the redundancy allocation problem (RAP). IEEE Trans Reliab 2004;53(3):417–23.
[23] Nahas N, Nourelfath M. Ant system for reliability optimization of a series system with multiple-choice and budget constraints. Reliab Eng Syst Safety 2005;87:1–12.
[24] Huang Y-C, Her Z-S, Liang Y-C. Redundancy allocation using meta-heuristics. In: Proceedings of the fourth Asia-Pacific conference on industrial engineering and management systems (APIEMS 2002), Taipei, Taiwan, 2002. p. 1758–61.
[25] Kulturel-Konak S, Coit DW, Smith AE. Efficiently solving the redundancy allocation problem using tabu search. IIE Trans 2003;35(6):515–26.
[26] Coit DW, Smith AE. Solving the redundancy allocation problem using a combined neural network/genetic algorithm approach. Comput Oper Res 1996;23(6):515–26.
[27] Liang Y-C, Wu C-C. A variable neighborhood descent algorithm for the redundancy allocation problem. Ind Eng Manage Syst 2005;4(1):109–16.
[28] Ravi V. Optimization of complex system reliability by a modified great deluge algorithm. Asia Pacific J Oper Res 2004;21(4):487–97.
[29] Mladenović N. A variable neighborhood algorithm—a new metaheuristic for combinatorial optimization. Abstracts of papers presented at Optimization Days, Montréal, 1995. p. 112.
[30] Avanthay C, Hertz A, Zufferey N. A variable neighborhood search for graph coloring. Eur J Oper Res 2003;151:379–88.
[31] Brimberg J, Hansen P, Mladenović N, Taillard É. Improvements and comparison of heuristics for solving the multisource Weber problem. Oper Res 2000;48(3):444–60.
[32] Hansen P, Mladenović N. Variable neighborhood search for the p-median. Location Sci 1997;5:207–26.
[33] Hansen P, Mladenović N. J-Means: a new local search heuristic for minimum sum-of-squares clustering. Pattern Recogn 2002;34:405–13.
[34] Ribeiro CC, Souza MC. Variable neighborhood search for the degree-constrained minimum spanning tree problem. Discrete Appl Math 2002;118:43–54.
[35] Hansen P, Mladenović N. Variable neighborhood search: principles and applications. Eur J Oper Res 2001;130:449–67.
[36] Hansen P, Mladenović N. Variable neighborhood search. In: Glover FW, Kochenberger GA, editors. Handbook of metaheuristics. Dordrecht: Kluwer Academic Publishers; 2003. p. 145–84.
[37] Chen Y-C. Redundancy allocation of series-parallel systems using variable neighborhood search algorithms. Unpublished Master Thesis, Yuan Ze University, Taiwan ROC, 2005 [in Chinese].