Reliability Engineering and System Safety 91 (2006) 1057–1070 www.elsevier.com/locate/ress
Optimization of constrained multiple-objective reliability problems using evolutionary algorithms

Daniel Salazar a,b, Claudio M. Rocco b, Blas J. Galván a
a Instituto de Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería (IUSIANI), División de Computación Evolutiva y Aplicaciones (CEANI), Universidad de Las Palmas de Gran Canaria, Islas Canarias, Spain
b Facultad de Ingeniería, Universidad Central de Venezuela, Caracas

Available online 9 January 2006
Abstract

This paper illustrates the use of multi-objective optimization to solve three types of reliability optimization problems: to find the optimal number of redundant components, to find the reliability of components, and to determine both their redundancy and reliability. In general, these problems have been formulated as single-objective mixed-integer non-linear programming problems with one or several constraints and solved by using mathematical programming techniques or special heuristics. In this work, these problems are reformulated as multiple-objective problems (MOP) and then solved by using a second-generation Multiple-Objective Evolutionary Algorithm (MOEA) that allows handling constraints. The MOEA used in this paper (NSGA-II) demonstrates the ability to identify a set of optimal solutions (Pareto front), which provides the Decision Maker with a complete picture of the optimal solution space. Finally, the advantages of both the MOP and MOEA approaches are illustrated by solving four redundancy problems taken from the literature.
© 2005 Elsevier Ltd. All rights reserved.

Keywords: Constrained optimization; MOEA; Multiple-objective optimization; Redundancy allocation and reliability optimization
1. Introduction

The design of a highly reliable system can be improved by (1) the addition of redundant components, and (2) the increase of component reliability [1–3]. Usually both options are evaluated by formulating non-linear integer optimization problems that consider a single objective (SO) (for example, maximize reliability). Since options (1) and (2) usually increase the system's cost, volume and weight, the SO formulation handles one or more constraints. These problems are in general difficult to solve due to the considerable computational effort required to find the optimal solution, so various heuristic methods have been developed to solve them [1–16].
At the design stage of a highly reliable system, an important issue is how to achieve the balance between reliability and other resources. Therefore, some authors have modeled Reliability, Availability, Maintainability and Safety (RAMS) problems using multiple-objective (MO) optimization formulations, which are solved by means of mathematical programming techniques and heuristics, or exclusively using heuristic approaches [11–14,17–24]. In general, there are two main approaches for obtaining a solution. The first approach formulates the problem as a multiple-objective optimization problem (MOP) based on the specified criteria (e.g. maximize reliability and minimize cost), to be solved directly. The second approach transforms the original MOP into several single-objective optimization problems (SOP) to be solved sequentially [25]. In many cases, however, the Decision Maker (DM) requires a set of solutions from which the one most suitable for attaining certain goals can be selected. In this case, the DM can select a solution in accordance with his/her preferences. For example, a design with higher reliability and higher cost, or a design with lower cost sacrificing reliability, could be chosen [26]. In order to get
the whole picture of the problem at hand, the DM must solve several problems using a SO approach, varying a group of constraints. In the MO formulation, the determination of the Pareto frontier provides more information to the DM. Note that SO or MO formulations are only one stage in the decision-making process. Of course, if the DM knows his/her preferences, he/she could incorporate them into the formulation.

In this paper we consider reliability optimization using a MOP formulation, solved by Evolutionary Algorithms (EA). This approach belongs to the Multiple-Objective Evolutionary Algorithms (MOEAs) family and is able to handle constraints effectively. A common characteristic of EAs is that they do not rely on any mathematical prerequisites and can be applied to any function or constraint. Like any heuristic, the approach does not guarantee the determination of the exact Pareto frontier, but a significant number of comparisons [27–29] performed in Evolutionary Multi-Criteria Optimization (EMO) on benchmark problems have shown that the results are very close to the exact solution. To the best of our knowledge, there are no reports on the use of second-generation MOEAs for redundancy problems.

In this work the feasibility of formulating MO problems instead of SO problems is investigated, using MOEAs as the solving tool. The advantages and disadvantages of MO formulations are analyzed in the context of the ability of MOEAs to find solutions and the quality of the information provided to the DM. Some important issues, such as MOEA comparisons and parameter tuning, as well as the most suitable MO formulation, will not be examined in this paper.

The remainder of the article is organized as follows: Section 2 presents the single-objective and multiple-objective formulations for the optimization problems considered. Section 3 contains an overview of the MOEA, whereas Section 4 presents the results and discussion of four examples. Finally, Section 5 presents conclusions and future work.
Nomenclature

Series and parallel: these terms are used in their logic-diagram sense, rather than in a layout-diagram or schematic-diagram sense.

Notation
R_S, C_S: [reliability, cost] of the system
R_i, R̄_i: [reliability, unreliability] of component i
m: number of constraints
n: number of decision variables (number of components or number of stages in a multistage mixed system)
R_{i,min}, R_{S,min}: lower bound on R_i, R_S
R_{i,max}, R_{S,max}: upper bound on R_i, R_S
k_i, a_i, b_i: constants associated with the constraint function of component i
x_i: number of redundancies of component i, x_i ≥ 1
P_i, W_i: constant associated with the [volume, weight] of component i
g_j, h_j: constraint functions, j = 1, ..., m
z̄z: implies 1 − zz, where zz is any expression

Acronyms
DM: Decision Maker
MO: Multiple-Objective
MOEA: Multiple-Objective Evolutionary Algorithm
MOP: Multi-Objective Problem
NSGA-II: Non-dominated Sorting Genetic Algorithm II
RAMS: Reliability, Availability, Maintainability and Safety systems
SO: Single-Objective
SOP: Single-Objective Problem
Assumptions: All units and systems have two states: operating or failed. The failures of all units are mutually statistically independent.

2. Problem description

2.1. Single objective problem description

A SO optimization problem consists of optimizing a function:

Opt(F(x)), where x = (x_1, x_2, ..., x_n)^t,

subject to:

g_j(x) ≤ 0, j = 1, 2, ..., q,
h_j(x) = 0, j = 1, 2, ..., r (q + r = m).

Some SO optimization problems that have been analyzed in the literature are [1,3,9]:

Type 1 (component reliability): Minimize a system cost function C_S = C(R_i, x_i), subject to: R_{i,min} ≤ R_i ≤ R_{i,max}, i = 1, 2, ..., n; R_{S,min} ≤ R_S ≤ R_{S,max}.

Type 2 (redundancy allocation): Find the optimal x_i, i = 1, 2, ..., n, which maximize R_S = f(R_i, x_i), subject to: g_j(x_1, x_2, ..., x_n) ≤ 0, j = 1, 2, ..., m (in this problem R_i is fixed and x_i can vary).

Type 3 (component reliability and redundancy allocation): Maximize R_S = f(R_i, x_i), i = 1, 2, ..., n, subject to: g_j(x_1, R_1, x_2, R_2, ..., x_n, R_n) ≤ 0, j = 1, 2, ..., m; 0 ≤ R_i ≤ 1, i = 1, 2, ..., n; x_i ≥ 1 (integer) (in this problem both R_i and x_i can vary).

For both Type 2 and Type 3 problems the function f(R_i, x_i) can be any expression. For example, in the cases to be
analyzed in Section 4, which refer to series–parallel systems, the expression for f(R_i, x_i) is

$f(R_i, x_i) = \prod_{i=1}^{n} [1 - (1 - R_i)^{x_i}]$.

Within these SO problems, the constraints g_j are usually associated with system weight, volume and cost, and are often defined or assumed, for the purpose of simplicity, as linear functions. As examples of such constraints we have:

$g_1 = \sum_{i=1}^{n} P_i x_i^2 \le P$,   (1)

$g_2 = \sum_{i=1}^{n} C_i [x_i + \exp(k_i x_i)] \le C$,   (2)

$g_3 = \sum_{i=1}^{n} W_i [x_i \exp(k_i x_i)] \le W$.   (3)

Constraint (1) is a combination of weight and volume: P_i is the product of weight per unit and volume per unit. Constraint (2) is a cost constraint; the term exp(k_i x_i) is the additional cost for interconnecting parallel units, while (3) is a weight constraint: the weight of a single unit is increased by the factor exp(k_i x_i) due to the weight of the interconnecting links [6]. Note that in these formulations only a single objective function is considered; the other objectives (reliability, cost, weight, or volume) are modeled as constraints. That means that the DM considers all individual targets separately. Ravi et al. [9] presented two algorithms based on simulated annealing to solve the first two optimization problems. Hikita et al. [1] presented an application of the surrogate-constraints algorithm and a comparison with other techniques previously reported (Lagrange multiplier and branch-and-bound technique) [3–5,7,8,10]. Rocco et al. [30] solve these problems using Cellular Evolutionary Strategies (CES).

2.2. Multiple-objective problem description

A MO optimization problem consists of optimizing a vector of functions:

Opt(F(x)) = (f_1(x), f_2(x), ..., f_k(x)),

subject to:

g_j(x) ≤ 0, j = 1, 2, ..., q,
h_j(x) = 0, j = 1, 2, ..., r (q + r = m),

where x = (x_1, x_2, ..., x_n)^t ∈ X is the solution vector, or vector of decision variables, and X is the feasible domain. The concept of optimality used for a single objective is not directly applicable to MOPs. For this reason a classification of the solutions is introduced in terms of Pareto optimality, according to the following definitions [31]. In terms of minimization:

Definition 1. Pareto optimal: A solution vector x* ∈ X is a Pareto optimal solution iff ¬∃ x ∈ X : f_i(x) ≤ f_i(x*) ∧ f(x) ≠ f(x*), ∀i ∈ {1, 2, ..., k}. These solutions are also called true Pareto solutions.

Definition 2. Pareto dominance: A solution x1 dominates x2, denoted x1 ≻ x2, iff f_i(x1) ≤ f_i(x2) ∧ ∃j : f_j(x1) < f_j(x2), i, j ∈ {1, 2, ..., k}. If there is no solution which dominates x1, then x1 is non-dominated.

Definition 3. Pareto set: A set of non-dominated solutions {x* | ¬∃ x : x ≻ x*} is said to be a Pareto set.

Definition 4. Pareto front: the set of vectors in the objective space that are the image of a Pareto set, i.e. {F(x*) | ¬∃ x : x ≻ x*}.

The reliability optimization problems presented in Section 2.1 can be formulated as MOPs by transforming one or more constraints into one or more objectives. For example, we define the following MOPs:

MOP Type 1 (component reliability): Maximize the system reliability R_S and minimize a cost function C_S, subject to R_{i,min} ≤ R_i ≤ R_{i,max}, i = 1, 2, ..., n.

MOP Type 2 (redundancy allocation): Find the optimal x_i, i = 1, 2, ..., n, which maximize R_S = f(R_i, x_i) and minimize a cost function C_S, subject to g_j(x_1, x_2, ..., x_n) ≤ 0, j = 1, 2, ..., m.

MOP Type 3 (component reliability and redundancy allocation): Maximize R_S = f(R_i, x_i), i = 1, 2, ..., n, and minimize a cost function C_S, subject to g_j(x_1, R_1, x_2, R_2, ..., x_n, R_n) ≤ 0, j = 1, 2, ..., m; 0 ≤ R_i ≤ 1, i = 1, 2, ..., n; x_i ≥ 1 (integer).

Within these problems we will consider that the cost function to be minimized corresponds to the cost constraint g_2 previously defined in the SO formulation. Notice that even if the two objectives considered in these MOP types are reliability and cost, the MOP approach is general and can be used for any type and number of objectives (for example, reliability and weight). The selection of such objectives clearly depends on the problem under study and the DM criteria.
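As a concrete illustration of Definitions 2 and 3, the following Python sketch filters a list of objective vectors down to its non-dominated subset. It assumes that all objectives are expressed for minimization (a maximized objective such as R_S can simply be negated) and the numerical data are purely illustrative.

def dominates(f1, f2):
    """Pareto dominance: f1 is no worse in every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(f1, f2)) and any(a < b for a, b in zip(f1, f2))

def pareto_set(solutions):
    """Keep the solutions that no other solution dominates (Definition 3)."""
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other is not s)]

# Example with objectives (unreliability, cost): the first two designs form the Pareto set.
designs = [(0.10, 641.8), (0.05, 700.0), (0.10, 650.0), (0.12, 660.0)]
print(pareto_set(designs))   # [(0.1, 641.8), (0.05, 700.0)]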
3. Multiple-objective evolutionary algorithms (MOEAs)
Multiple-Objective Evolutionary Algorithms (MOEAs) is the term employed in the EMO field to refer to the group of EAs formulated to deal with MOPs. This group of algorithms combines the basic concepts of dominance described in the previous section with the general characteristics of evolutionary algorithms. MOEAs are able to deal with non-continuous, non-convex and/or non-linear
spaces, as well as with problems whose objective functions are not explicitly known (e.g. the output of Monte Carlo simulation runs). Since the first MOEA (Schaffer's VEGA [32]), the development of MOEAs has evolved successfully, producing better and more efficient algorithms. The existing MOEAs are classified into two groups [33], according to their characteristics and efficiency. On the one hand, there is a first group known as the "first generation", which includes all the early MOEAs (Weighted Sum, VEGA [32], MOGA(1) (cited in [34]), NPGA [35], NSGA [36]). On the other hand, there is a second group named "second-generation MOEAs", which comprises very efficient optimizers such as SPEA [27]/SPEA2 [28], M-PAES [37], PESA [38]/PESA-II [39] and NSGA-II [29], among others. Basically, the main features that distinguish second-generation MOEAs from the first-generation group are:
- Mechanism of adaptation assignment in terms of dominance: between a non-dominated solution and a dominated one, the algorithm favors the non-dominated one. Moreover, when both solutions are equivalent in terms of dominance, the one located in a less crowded area is favored. Finally, the extreme points (i.e. the solutions that have the best value in one particular objective) of the non-dominated population are preserved, and their adaptation is better than that of any other non-dominated point, to allow maximum front expansion.

- Incorporation of elitism: elitism is commonly implemented using a secondary population of previously stored non-dominated solutions. When performing recombination (selection–crossover–mutation), parents are taken from this archive in order to produce the offspring.

(1) MOGA refers to the specific algorithm proposed by Fonseca and Fleming in 1993 (cited in [34]). The reader must be aware that the term MOGA is also employed in the RAMS community to identify any kind of multiple-objective optimizer based on genetic algorithms. For the sake of clarity, we advocate the use of MOEA instead of MOGA to identify evolutionary multiple-objective optimizers.
In general terms, current MOEAs based on genetic algorithms follow a sequence similar to the flow chart depicted in Fig. 1. Notice that there are two populations: Pt, which represents the current population during generation t, and PAt, which is an archive of non-dominated solutions. Many state-of-the-art MOEAs keep a constant size M for the population and N for the archive. Initially, M individuals are generated randomly and the archive is set to empty. In each generation, all non-dominated individuals from both the archive and the population are selected and assigned to the archive PAt+1. Afterwards, a reproductive selection of individuals is performed (typically by binary tournament) and a mating pool is filled. At this stage a new population is generated following some (μ + λ) recombination strategy (μ parents are combined to produce λ offspring). Finally, the new population replaces the old one and the process continues until the maximum number of generations is
reached. At the end, the non-dominated solutions from the archive are reported as output.

Fig. 1. Flowchart of a second-generation MOEA based on genetic algorithms.

It is important to mention that, in most cases, the number of non-dominated solutions found during each generation may be smaller or greater than the archive size (N). Thus some criterion or method for classifying and selecting solutions is needed in order to keep the archive at N solutions. This can be achieved by assigning fitness values to individuals based on Pareto dominance combined with other criteria, usually referred to as the density and distribution of the population. This is called environmental selection [31]. It is worth noting that, in practice, the reproductive and environmental selection mechanisms (the mechanism of adaptation assignment in terms of dominance and density) constitute the main difference between one MOEA and another; the rest of the algorithm remains the same as in a traditional EA.

3.1. Non-dominated Sorting Genetic Algorithm II

The Non-dominated Sorting Genetic Algorithm II (NSGA-II) is a well-known and extensively used algorithm based on its predecessor NSGA. It was formulated by Deb et al. [29] as a fast and very efficient MOEA, which incorporates the features mentioned earlier, i.e. an elitist
archive and a rule for adaptation assignment that takes into account both the rank of each solution and its distance to the other solutions. Fig. 2a shows what is meant by rank in a minimization case; the adaptation value of a solution is equal to its rank. When comparing two solutions belonging to the same rank, extreme solutions prevail over non-extreme ones. If neither solution is extreme, the one with the larger crowding distance (i.e. the perimeter of the cuboid, or "taxicab norm", calculated between its two nearest neighbors) wins (Fig. 2b). In this way extreme solutions and less crowded areas are encouraged.

Fig. 2. Rank (a) and cuboid (b) concepts used by NSGA-II.

3.2. Constraint handling techniques

One of the main issues in evolutionary computation is how to guide the search towards the feasible region in the presence of constraints. The existing approaches can be classified into the following groups: (1) penalization techniques, (2) repair techniques, (3) separation techniques, and (4) hybrid techniques; for a review see [40]. In this paper the experiments are based on the approach proposed by Deb et al. [29], which can be implemented in NSGA-II as follows: (1) calculate the normalized sum of constraint violations for all the individuals belonging to Pt ∪ PAt; (2) classify the individuals according to the overall constraint violation: when comparing two individuals, if the overall violation of both is zero, apply the ordinary ranking assignment; otherwise the individual with the lower (or null) overall violation dominates the other one; (3) the rest of the algorithm remains unchanged. By integrating this technique into any MOEA, feasibility is promoted over optimality; the search is thus guided toward the feasible region and, once it is reached, feasible individuals are sorted according to the particular scheme established by the implemented MOEA. Moreover, as the reader may notice, the absence of penalization parameters saves the effort of tuning them.
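A minimal sketch of this constrained-domination rule is given below. It assumes that every inequality constraint is written as g(x) ≤ 0, that the objectives are to be minimized, and it omits the normalization of the violations for brevity.

def violation(g_values):
    # overall constraint violation: sum of the positive parts of g(x)
    return sum(max(0.0, g) for g in g_values)

def constrained_dominates(f1, g1, f2, g2):
    """True if solution 1 is preferred to solution 2 under the rule of Deb et al. [29]."""
    v1, v2 = violation(g1), violation(g2)
    if v1 > 0.0 or v2 > 0.0:
        # at least one solution is infeasible: the smaller (or null) violation wins
        return v1 < v2
    # both feasible: ordinary Pareto dominance on the objective vectors
    no_worse = all(a <= b for a, b in zip(f1, f2))
    strictly_better = any(a < b for a, b in zip(f1, f2))
    return no_worse and strictly_better

# Example with objectives (unreliability, cost): a feasible design beats an infeasible one.
print(constrained_dominates([0.05, 100.0], [0.0], [0.01, 80.0], [12.5]))   # True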
3.3. Implementation

NSGA-II is implemented following the pseudo-code presented in Algorithm 1. In order to code the individuals, a mixed chromosome considering all variables (R_i, x_i) was selected: integer variables (x_i) are represented using an integer type, whereas real variables (R_i) are codified using a floating-point type.

Algorithm 1. Pseudo-code for NSGA-II

Input: M (population size), N (archive size), tmax (maximum number of generations)
Begin:
  Randomly initialize PA0, set P0 = ∅, t = 0.
  While t < tmax:
    Pt = Pt ∪ PAt
    Assign adaptation to Pt
    PAt+1 = {N best individuals from Pt}
    MP (mating pool) = {M individuals randomly selected from PAt+1 using a binary tournament}
    Pt+1 = {M new individuals generated by applying recombination operators on MP}
    t = t + 1
  Output: non-dominated solutions from PAt
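The "N best individuals" step above relies on the rank and crowding-distance concepts of Section 3.1. The following Python sketch shows the usual NSGA-II form of the crowding distance, which sums the normalized side lengths of the cuboid defined by the two nearest neighbors of each solution; it is an illustration of the idea, not the authors' implementation.

def crowding_distance(front):
    """front: list of objective vectors belonging to the same rank (minimization)."""
    n, k = len(front), len(front[0])
    dist = [0.0] * n
    for j in range(k):
        order = sorted(range(n), key=lambda i: front[i][j])
        span = front[order[-1]][j] - front[order[0]][j] or 1.0   # avoid division by zero
        dist[order[0]] = dist[order[-1]] = float("inf")          # preserve the extreme points
        for p in range(1, n - 1):
            dist[order[p]] += (front[order[p + 1]][j] - front[order[p - 1]][j]) / span
    return dist

front = [[1.0, 5.0], [2.0, 3.0], [2.5, 2.5], [4.0, 1.0]]
print(crowding_distance(front))   # extremes get inf; interior points get finite values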
The recombination mechanism is one-point crossover, adapted for a mixed chromosome (see Fig. 3). In order to keep the possible values (states) of the integer variables within their limits, they are only permuted; in no case is a new value computed for them. The strategy changes when the crossover point lies on a real variable: in this case a convex combination of the two parent values is performed.

Fig. 3. Example of the crossover mechanism for mixed chromosomes: real (R) and integer (I) variables.

Finally, the mutation operation for real variables is a Gaussian mutation of the type Rnew = Rold + N(0, σ²). For integer variables two mechanisms were implemented. The first one, applied with probability 0.4, replaces the current value by a new value chosen with equal probability from the list of possible values of that particular variable. The second mechanism, applied with probability 0.6, selects a new value through a "triangular mutation operator" that attempts to emulate the Gaussian mutation. Fig. 4 shows an example of how this mutation operator works. First, note that the length of the triangle base is equal to the number of possible states the integer variable can assume. A triangle with total area 1 is built over the current state. Then a uniform U(0,1) random number, equivalent to a cumulative probability or area Ar, is generated. If Ar is less than the area up to the current value, the new state that replaces the current one is the state that defines the largest triangle with area less than or equal to Ar (see Fig. 4, dotted area). Likewise, if Ar is greater than the area up to the current state, the new state is the one that defines the smallest triangle with area larger than or equal to Ar (Fig. 4, shaded area).

Fig. 4. Triangular mutation operator.
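One possible way to realize the triangular operator just described is to sample from a triangular density whose mode is the current state and to round the draw to the nearest admissible state. The sketch below follows this idea; it approximates the area-based rule of Fig. 4 rather than reproducing the authors' exact procedure.

import random

def triangular_mutation(current, min_state, max_state):
    # draw from a triangular density centred on the current state, so that
    # nearby states are the most likely replacements
    draw = random.triangular(min_state - 0.5, max_state + 0.5, current)
    new_state = int(round(draw))
    return min(max(new_state, min_state), max_state)   # clamp to the valid range

# Example: mutating a redundancy level x_i = 3 with admissible states 1..5.
random.seed(1)
print([triangular_mutation(3, 1, 5) for _ in range(10)])   # values concentrated around 3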
4. Computational examples

In this section we analyze four examples related to redundancy optimization problems, comparing both SOP and MOP optimization results. All of the MOPs presented below are based on a SO formulation with one or several constraints. The MO formulation of each of them was obtained by converting the cost constraint into an objective function; thus, all the following problems are bi-objective. In those cases where the original SOPs consider several constraints, higher-dimensional MO formulations are possible. Nevertheless, the bi-objective formulation is the only case suitable for easy visual inspection, and is therefore appropriate for introductory examples. In a complex problem, some features like the efficiency of the algorithm with respect to the number of objectives, the importance of each objective for the DM and the difficulty of the problem itself should be considered.

4.1. Example 1: MOP Type 1: life-support system in a space capsule [9]

Fig. 5 shows the complex system to be considered.

Fig. 5. Example of complex system [9].

The symbolic reliability expression of the system is

$R_S = 1 - \bar{R}_3 [1 - R_2 (1 - \bar{R}_1 \bar{R}_4)]^2 - R_3 (\bar{R}_1 \bar{R}_4)^2$.

The cost function selected is $C_S = 2 \sum_{i=1}^{4} K_i R_i^{a_i}$, with K_1 = 100, K_2 = 100, K_3 = 100, K_4 = 150 and a_i = 0.6 for all i. The MOP is to select R_i to minimize C_S and maximize R_S, subject to: 0.5 = R_{i,min} ≤ R_i ≤ 1, i = 1, 2, 3, 4.

Table 1 presents the results obtained by Ravi et al. [9] and by Rocco et al. (RMMM) [30] under the SO formulation with R_S ≥ 0.90. The minimum cost is approximately 641.8235 with a system reliability of 0.9000. Note that the average and best C_S obtained in [30] are better than the solution in [9].

Table 1. Results for Example 1, using the SO formulation

       Ravi et al. [9]   Rocco et al. best [30]   Rocco et al. average [30]
R1     0.50006           0.500000009              0.500105634
R2     0.83887           0.838920148              0.838809128
R3     0.50001           0.500000011              0.500002527
R4     0.50002           0.500000022              0.500005922
RS     0.90001           0.900000023              0.900000619
CS     641.8332          641.8235769              641.8278043

Assuming that R_i can vary on the interval [0,1], and that it is discretized in steps of 10^-6, the search space consists of 10^30 points. Rocco et al. [30] report that the results required, on average, 1.2 × 10^6 function evaluations.
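For illustration, the two objectives of this example can be evaluated with the short Python sketch below, which implements the reliability and cost expressions given above with the parameter values quoted in the text.

K = [100.0, 100.0, 100.0, 150.0]
a = 0.6

def system_reliability(R):
    R1, R2, R3, R4 = R
    q14 = (1.0 - R1) * (1.0 - R4)            # probability that both units 1 and 4 fail
    return 1.0 - (1.0 - R3) * (1.0 - R2 * (1.0 - q14)) ** 2 - R3 * q14 ** 2

def system_cost(R):
    return 2.0 * sum(k * r ** a for k, r in zip(K, R))

R_best = [0.5, 0.8389, 0.5, 0.5]             # approximately the SO optimum of Table 1
print(system_reliability(R_best), system_cost(R_best))   # ~0.900 and ~641.8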
Fig. 6 shows the Pareto front determined by NSGA-II using different numbers of generations. From this figure it is clearly visible that the solution achieved by the SO optimization belongs, as expected, to the Pareto front. Fig. 7 shows a closer view of the Pareto front around the best solution found by the SO formulation.

Fig. 6. Pareto front for Example 1: the best solution found using the SO formulation is clearly indicated.

Fig. 7. Closer view of the Pareto front around the SO best solution of Example 1.

It is interesting to note that with only 1000 generations, which corresponds to 100,100 function evaluations, NSGA-II is able to produce an excellent approximation to the Pareto frontier. This fact is very important for the DM: he or she can select a solution according to preference without solving any SOP. For example, Fig. 7 shows the best trade-off zone, that is, the zone where the DM may choose the final decision on the basis of the trade-off among the optimization objectives. Only one run is required to obtain a complete picture of the solutions.

4.2. Example 2: MOP Type 2: n-stage mixed system [9]

Fig. 8 shows the mixed series–parallel system to be considered.

Fig. 8. Mixed series–parallel system [9].

The reliability of the system is

$R_S = \prod_{i=1}^{5} [1 - (1 - R_i)^{x_i}]$.

The problem is to find the optimal number of redundancies x_i, i = 1, 2, 3, 4, 5, that maximize R_S and minimize a cost function expressed as

$C_S = \sum_{i=1}^{5} C_i [x_i + \exp(0.25 x_i)]$,

subject to:
$g_1 = \sum_{i=1}^{5} P_i x_i^2 \le 110$,

$g_3 = \sum_{i=1}^{5} W_i [x_i \exp(0.25 x_i)] \le 200$.
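A minimal sketch of the evaluation of this bi-objective problem is given below, using the component data of Table 2 (shown next) and checking against the SO optimum reported in Table 3.

import math

R = [0.80, 0.85, 0.90, 0.65, 0.75]
P = [1, 2, 3, 4, 2]
C = [7, 7, 5, 9, 4]
W = [7, 8, 8, 6, 9]

def reliability(x):
    return math.prod(1.0 - (1.0 - r) ** xi for r, xi in zip(R, x))

def cost(x):
    return sum(c * (xi + math.exp(0.25 * xi)) for c, xi in zip(C, x))

def feasible(x):
    g1 = sum(p * xi ** 2 for p, xi in zip(P, x))                      # constraint g1
    g3 = sum(w * xi * math.exp(0.25 * xi) for w, xi in zip(W, x))     # constraint g3
    return g1 <= 110 and g3 <= 200

x_best = (3, 2, 2, 3, 3)   # SO optimum reported in Table 3
print(feasible(x_best), reliability(x_best), cost(x_best))
# True, R_S ~ 0.90447, C_S ~ 146.125 (Table 3 quotes the truncated value 0.90446)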
Table 2 shows the data for this example. Table 3 presents the results obtained by Ravi et al. [9] and Rocco et al. [30] for the SO formulation, considering C_S ≤ 175. Note that the results of [9] and those reported in [30] are equal.

Table 2. Data for Example 2

i     1      2      3      4      5
Ri    0.80   0.85   0.90   0.65   0.75
Pi    1      2      3      4      2
Ci    7      7      5      9      4
Wi    7      8      8      6      9

Table 3. Results for Example 2

      Ravi et al. [9]   Rocco et al. [30] best   Rocco et al. [30] average
x1    3                 3                        3
x2    2                 2                        2
x3    2                 2                        2
x4    3                 3                        3
x5    3                 3                        3
RS    0.90446           0.90446                  0.90446
CS    146.125           146.125                  146.125

Assuming that R_i can vary on the interval [0,1], and that it is discretized in steps of 10^-6, the search space consists of 10^30 points. Rocco et al. [30] report that the results required, on average, 9.5 × 10^5 function evaluations.

Fig. 9 shows the results of the MO formulation using NSGA-II with 930 and 1550 generations. Both runs produce the same optimum results. Note that the SO optimum coincides with an extreme of the Pareto front. In this case, the results of the MO formulation show the DM that no better solution exists. Again, the SO formulation provides only partial information. The MO formulation gives, in a single run and with less computational effort, information about other non-dominated as well as feasible solutions. This example also shows that NSGA-II is able to effectively manage linear as well as non-linear constraints.

Fig. 9. Pareto front for Example 2: the best solution found using the SO formulation is clearly indicated.

4.3. Example 3: MOP Type 3: complex system [1]

Fig. 10 shows the complex system to be considered.

Fig. 10. Example of complex system.

The reliability of the system is

$R_S = f_1 f_2 + (1 - f_1 f_2) f_3 f_4 + \bar{f}_2 \bar{f}_3 f_1 f_4 f_5 + \bar{f}_1 \bar{f}_4 f_2 f_3 f_5$,
where $f_i = f(R_i, x_i) = 1 - (1 - R_i)^{x_i}$. For example, for x_i = 1, R_S can be written as

$R_S = R_1 R_2 + (1 - R_1 R_2) R_3 R_4 + (1 - R_2)(1 - R_3) R_1 R_4 R_5 + (1 - R_1)(1 - R_4) R_2 R_3 R_5$.
The problem consists of determining R_i and the number of redundancies x_i that maximize R_S and minimize a cost function expressed as

$C_S = \sum_{i=1}^{5} a_i (-1000 / \ln R_i)^{b_i} [x_i + \exp(0.25 x_i)]$,

subject to:
$g_1 = \sum_{i=1}^{5} V_i x_i^2 \le 110$,

$g_3 = \sum_{i=1}^{5} W_i [x_i \exp(0.25 x_i)] \le 200$,
0 ≤ R_i ≤ 1, i = 1, 2, ..., n; 1 ≤ x_i ≤ 5 (integer).

Table 4 shows the data for this example. Table 5 presents the results by Hikita et al. [1] (using a surrogate-constraints algorithm) and by Rocco et al. [30] for the SO formulation. Note that the best R_S and average R_S obtained in [30] are better than those reported in [1].

Table 4. Data for Example 3

i     10^5 a_i    b_i    V_i    W_i
1     2.33        1.5    1      7
2     1.45        1.5    2      8
3     0.541       1.5    3      8
4     8.05        1.5    4      6
5     1.95        1.5    2      9

Table 5. Results for Example 3, using the SO formulation

Component    Hikita et al. [1]       Rocco et al. [30] best    Rocco et al. [30] average
             x_i    R_i              x_i    R_i
1            3      0.79130901       3      0.83478726
2            3      0.81512537       3      0.84638504
3            2      0.90966804       3      0.85143024
4            3      0.72061088       3      0.71726940
5            2      0.81941995       1      0.71775770
RS                  0.99978004              0.999871456        0.999820811
CS                  174.875982              172.527554
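The following sketch evaluates the two objectives of this example, using the data of Table 4 and the reconstructed bridge-network expression given above, and checks them against the Hikita et al. [1] solution of Table 5.

import math

a = [2.33e-5, 1.45e-5, 0.541e-5, 8.05e-5, 1.95e-5]
b = [1.5] * 5

def f(r, x):                        # reliability of a stage with x parallel units
    return 1.0 - (1.0 - r) ** x

def system_reliability(R, x):
    f1, f2, f3, f4, f5 = (f(r, xi) for r, xi in zip(R, x))
    return (f1 * f2 + (1 - f1 * f2) * f3 * f4
            + (1 - f2) * (1 - f3) * f1 * f4 * f5
            + (1 - f1) * (1 - f4) * f2 * f3 * f5)

def system_cost(R, x):
    return sum(ai * (-1000.0 / math.log(r)) ** bi * (xi + math.exp(0.25 * xi))
               for ai, bi, r, xi in zip(a, b, R, x))

R_hikita = [0.79130901, 0.81512537, 0.90966804, 0.72061088, 0.81941995]
x_hikita = [3, 3, 2, 3, 2]
print(system_reliability(R_hikita, x_hikita))   # ~0.99978
print(system_cost(R_hikita, x_hikita))          # ~174.88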
Assuming that each x_i can vary across the values 1, 2, 3, 4 and 5, and that R_i can vary in the interval [0,1] discretized in steps of 10^-6, the search space consists of 3.125 × 10^33 points. Rocco et al. [30] report that the results required, on average, 1.4 × 10^6 function evaluations.

Fig. 11 shows the results of the MO formulation using NSGA-II. Note that the best SO solution is located in an almost flat zone of the Pareto front, where there are many solutions with similar reliability values that are less expensive.

Fig. 11. Pareto front for Example 3: the square corresponds to the best solution found using the SO formulation.

Table 6 shows the last 17 points of the Pareto front.

Table 6. Coordinates of the last 17 points of the Pareto front, Example 3

i     R_S(i)     C_S(i)
17    0.99997    313.276
16    0.99996    312.560
15    0.99995    250.910
14    0.99992    203.296
13    0.99991    201.043
12    0.99984    163.818
11    0.99976    149.223
10    0.99968    135.174
9     0.99959    123.364
8     0.99944    119.200
7     0.99936    111.251
6     0.99907    108.132
5     0.99895    97.7224
4     0.99892    96.5712
3     0.99872    92.5478
2     0.99867    91.2443
1     0.99851    87.7329

For example, the DM can evaluate the difference between solutions 17 and 13. From the reliability point of view, solution 13 has a value 0.0057% lower than solution 17, while it costs 35.83% less. This means that solution 13 is clearly less expensive than solution 17, while its reliability is not seriously affected. If the DM considers that such a small variation in the reliability value is not significant, solution 13 could be preferred. Clearly, the SO formulation cannot provide this information.
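The quoted trade-off can be checked directly from the rounded values of Table 6 (the 0.0057% figure in the text was presumably computed from the unrounded solutions):

rs_17, cs_17 = 0.99997, 313.276
rs_13, cs_13 = 0.99991, 201.043
print(100 * (rs_17 - rs_13) / rs_17)   # ~0.006% loss in reliability
print(100 * (cs_17 - cs_13) / cs_17)   # ~35.8% saving in cost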
4.4. Example 4: MOP Type 2: Fyffe et al. [41] This problem was originally proposed by Fyffe et al. [41]. It includes 14 series subsystems with three or four component choices for each subsystem. The problem consists in finding the number and the type of components that constitute each subsystem that maximize RS, subject to cost and weight linear constraints. Nakagawa and Miyazaki (N&M) [42] developed 33 variations of the original problem, where the allowable system cost constraint is maintained at 130 units and the weight constraint limit varies from 159 to 191 units. This problem has been considered by various authors to test their SO approaches [15,16]. The best results for these 33 problems were
generally found by a genetic algorithm approach [16] or the Tabu Search approach [15]. It is important to note that in [15,16] the authors only report the best solution achieved in terms of the reliability
of the system, whereas N&M report the system reliability along with the system cost and weight (Table 7). To illustrate the MO formulation, the following problem is solved:

Maximize $R_S = \prod_{i=1}^{14} \left[1 - \prod_{j=1}^{m_i} (1 - R_{ij})^{x_{ij}}\right]$

and minimize $C_S = \sum_{i=1}^{14} \sum_{j=1}^{m_i} C_{ij} x_{ij}$,

subject to:

$W_S = \sum_{i=1}^{14} \sum_{j=1}^{m_i} W_{ij} x_{ij} \le 191$,

$\sum_{j=1}^{m_i} x_{ij} \ge 1$, i = 1, ..., 14,

$x_{ij} \in \{0, 1, 2, \ldots\}$,

where x_ij is the quantity of the jth available component used in subsystem i, m_i is the number of available components for subsystem i, and R_ij, C_ij, W_ij are the reliability, cost and weight of the jth available component for subsystem i.
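For illustration, the objectives and constraints of this formulation can be evaluated as sketched below. The actual component data (R_ij, C_ij, W_ij for the 14 subsystems) come from Fyffe et al. [41] and are not reproduced in this paper; the two-subsystem arrays used here are purely hypothetical placeholders.

import math

R = [[0.90, 0.93], [0.95, 0.92, 0.88]]     # R[i][j]: reliability of choice j in subsystem i
C = [[1, 2], [2, 3, 1]]                     # cost of each choice
W = [[3, 4], [8, 10, 9]]                    # weight of each choice

def subsystem_reliability(i, x_i):
    # unreliability of subsystem i = product over choices of (1 - R_ij)^x_ij
    q = math.prod((1.0 - r) ** n for r, n in zip(R[i], x_i))
    return 1.0 - q

def evaluate(x):
    """x[i][j] = number of copies of choice j used in subsystem i."""
    rs = math.prod(subsystem_reliability(i, x_i) for i, x_i in enumerate(x))
    cs = sum(c * n for Ci, x_i in zip(C, x) for c, n in zip(Ci, x_i))
    ws = sum(w * n for Wi, x_i in zip(W, x) for w, n in zip(Wi, x_i))
    return rs, cs, ws

x = [[1, 1], [0, 2, 0]]          # each subsystem uses at least one component
print(evaluate(x))               # (reliability, cost, weight) of this allocation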
The component cost, weight and reliability values were originally presented in Fyffe et al. [41]. To solve the problem, two NSGA-II runs of 100,000 generations were performed. Fig. 12 shows the (R_S, C_S) front generated by NSGA-II: both runs are very similar. Fig. 13 shows the front generated by NSGA-II (Run 1) along with the results by N&M. Note that the N&M solution set does reflect a non-dominated front, since the authors set the allowable system weight to different values during the generation of these solutions; the global effect is similar to performing a three-objective optimization. Naturally, the projection of a three-dimensional set onto a two-dimensional plane does not necessarily look like a function shape. A 3D plot of both the N&M and NSGA-II solutions is presented in Fig. 14.

Fig. 14. 3D plot of N&M and NSGA-II solutions for Example 4.
It is important to note that the set of solutions obtained by N&M corresponds to different optimization problems, so the comparison with NSGA-II could seem unfair. Nevertheless, the front generated by NSGA-II satisfies the maximum system weight constraint of 191 units, so the DM can compare different solutions simultaneously, for example, R_S and C_S for the same allowable system weight W_S. Fig. 13 clearly shows that the NSGA-II outcome dominates almost all of the N&M solutions, except for the two solutions near the upper-right corner. The SO formulation makes the algorithm concentrate its efforts on finding the best feasible solution that maximizes R_S, disregarding very good solutions, i.e. those non-dominated solutions with lower R_S and lower C_S. Thus, even if the results of N&M have the same quality in terms of system reliability, and in some cases are better than those obtained using NSGA-II, the difference in cost is very noticeable. This behavior is due to the fact that the SO formulation employs only one criterion to assess the quality of the solutions. Thus, between two solutions with similar R_S, the algorithm will choose the one with the higher value, even if the difference is not significant in practice. The problem is that this small difference in R_S might represent a big difference in cost. For example, if the maximum system weight constraint is set to 191 units, the relative difference between the highest (0.9834) and the lowest (0.9555) system reliability obtained by NSGA-II is only 2.84%, while the corresponding relative difference in cost is 31.71% (123 and 84 units, respectively) (see Fig. 14). Again, if the DM considers that such a small variation in the reliability value is not significant, then the cheaper solution could be preferred. Clearly, the SO formulation cannot provide this information.
Table 7. Results obtained by N&M [41]

Case   CS      WS    RS       |  Case   CS    WS    RS
1      130     191   0.9864   |  18     121   174   0.9744
2      132 a   189   0.9854   |  19     122   173   0.9723
3      131 a   188   0.9850   |  20     123   172   0.9720
4      129     188   0.9847   |  21     119   170   0.9700
5      133 a   186   0.9840   |  22     119   170   0.9700
6      129     186   0.9831   |  23     121   169   0.9675
7      129     185   0.9829   |  24     120   168   0.9666
8      126     184   0.9822   |  25     117   167   0.9656
9      130     182   0.9815   |  26     116   166   0.9646
10     130     182   0.9815   |  27     118   165   0.9621
11     128     181   0.9800   |  28     116   164   0.9609
12     126     180   0.9796   |  29     114   163   0.9602
13     127     179   0.9792   |  30     112   162   0.9589
14     123     177   0.9772   |  31     111   161   0.9565
15     123     177   0.9772   |  32     110   159   0.9546
16     125     176   0.9764   |  33     100   159   0.9546
17     121     174   0.9744   |

a Infeasible solution.
Fig. 12. (RS,CS) front generated by NSGA-II for Example 4.
Fig. 13. Non-dominated front obtained for Example 4.
The above analysis provides some clues as to which constraints should be treated as objectives and which should not. It is difficult to formulate a problem in which cost is not important, so it seems suitable and convenient to convert the SO formulation into a bi-objective one. However, a formulation with three or more objectives could be unnecessary. For example, consider the design of a system that should not violate the structural constraint of its location. If the supporting structure is not already built, there is some room to consider building a stronger structure, if either the savings or the increase in reliability justifies that decision. In that case, the weight should be treated as an objective. Nevertheless, if the supporting structure is already built, there is no point in minimizing the weight, so treating it simply as a constraint is enough. However, if the DM has a very good idea of what kind of solution is desired, a SO formulation can be suitable if it yields a real saving in computational effort. Nonetheless, it is not easy to know a priori whether a SO genetic algorithm will be more efficient than a MOEA, since this strongly depends on the problem. The above examples encourage the use of MOEAs instead of SO algorithms. A possible strategy is to perform
a first exploration with a MO formulation, since the outcomes produced by the MOEAs provide the DM with a good picture of the possible alternatives. The DM can then choose one solution and improve it by means of a suitable algorithm, possibly a SO algorithm that uses the information provided by the MOEA.

5. Conclusions

This paper presents the optimization of two main system design characteristics, reliability and cost, for three types of reliability optimization problems using a MO formulation. This formulation is able to consider all objectives simultaneously, without any kind of aggregation, providing the DM with more informative results, namely the non-dominated solutions and their characteristics. To solve the MOP, a class of evolutionary algorithms, the MOEAs, was selected. This approach does not rely on any mathematical prerequisites and can therefore handle any kind of constraints, including those that may hinder the convergence of some of the algorithms previously considered in the literature. The examples analyzed have shown:
- The advantages of the MO formulation over SO, since MO provides more information than SO.
- The capability of MOEAs to solve MO problems.
- How to transform SO problems into MO problems.
- The efficient use of MOEAs to handle constrained reliability problems.
Additional research on other RAMS problems is required to compare the performance of different MOEAs with regard to computational effort, convergence and the management of constraints. Even if the objectives considered in the examples are reliability and cost, the MOP approach is general and can be used for any type and number of objectives; the selection of such objectives clearly depends on the problem under study and the DM criteria. Finally, we would like to point out that the MO formulation is only one stage in the decision-making process.
Acknowledgments

The authors are grateful to the anonymous referees for their comments and valuable suggestions on the earlier version of the paper. Special thanks to Prof. Sadan Kulturel-Konak for providing the references and data set for Example 4.
References [1] Hikita M, Nakagawa Y, Nakashima K, Narihisa H. Reliability optimization of systems by a surrogate-constraints algorithm. IEEE Trans Reliab 1992;41:473–80. [2] Kim J- H, Yum B- J. A heuristic method for solving redundancy optimization problems in complex systems. IEEE Trans Reliab 1993;42:572–8. [3] Kuo W, Hwang CL, Tillman FA. A note on heuristic methods in optimal system reliability. IEEE Trans Reliab 1978;27:320–4. [4] Nakagawa Y, Nakashima K. A heuristic method for determining optimal reliability allocation. IEEE Trans Reliab 1977;26:156–26161. [5] Lin HH, Kuo W. A comparison of heuristic reliability optimization methods. In: World productivity forum & 1987 international industrial engineering conference proceedings. Institute of Industrial Engineering, 1987. p. 583–9. [6] Tillman FA, Hwang CL, Kuo W. Optimization of system reliability. New York: Marcel Dekker; 1985. [7] Misra KB, Ljubojevic MD. Optimal reliability design of a system: a new look. IEEE Trans Reliab 1973;22:255–8. [8] Nakashima K, Yamato K. Optimal design of a series–parallel system with time-dependent reliability. IEEE Trans Reliab 1997;26:119–20. [9] Ravi V, Murty BSN, Reddy PJ. Nonequilibrium simulated annealing-algorithm applied to reliability optimization of complex systems. IEEE Trans Reliab 1997;46:233–9. [10] Kuo W, Lin H, Xu Z, Zhang W. Reliability optimization with the Lagrange multiplier and branch-and-bound technique. IEEE Trans Reliab 1987;36:1090–5. [11] Dhingra A. Optimal apportionment of reliability & redundancy in series systems under multiple objectives. IEEE Trans Reliab 1992;41(4):576–82. [12] Elegbede C, Adjallah K. Availability allocation to repairable systems with genetic algorithms: a multi-objective formulation. Reliab Eng Syst Saf 2003;82(3):319–30. [13] Nahas N, Nourelfath M. Ant system for reliability optimization of a series system with multiple-choice and budget constraints. Reliab Eng Syst Saf 2005;87(1):1–12. [14] Ramı´ rez-Rosado IJ, Bernal-Agustı´ n JL. Reliability and costs optimization for distribution networks expansion using an evolutionary algorithm. IEEE Trans Power Syst 2001;16:111–8. [15] Kulturel-Konak S, Smith AE, Coit DW. Efficiently solving the redundancy allocation problem using tabu search. IIE Trans 2003; 35(6):515–26. [16] Coit DW, Smith AE. Reliability optimization of series–parallel systems using a genetic algorithm. IEEE Trans Reliab 1996;45(2): 254–60. [17] Yun-Chia Liang, Smith AE. An ant colony optimization algorithm for the redundancy allocation problem (RAP). IEEE Trans Reliab 2004;53(3):417–23. [18] Ramirez-Marquez JE, Coit DW, Konak A. Redundancy allocation for series–parallel systems using a max-min approach. IIE Trans 2004;36(9):891–8. [19] Wong YY, Jong WK. Multi-level redundancy optimization in series systems. Comput Ind Eng 2004;46(2):337–46. [20] Peng-Sheng You, Ta-Cheng Chen. An efficient heuristic for series–parallel redundant reliability problems. Comput Oper Res 2005;32(8):2117–27. [21] Shelokar PS, Jayaraman VK, Kulkarni BD. Ant algorithm for single and multiobjective reliability optimization problems. Qual Reliab Eng Int 2002;18(6):497–514. [22] Dhingra AK. Optimal apportionment of reliability and redundancy in series systems under multiple objectives. IEEE Trans Reliab 1992;41(4):576–82.
[23] Sakawa M. Multiobjective reliability and redundancy optimization of a series–parallel system by the surrogate worth trade-off method. Microelectron Reliab 1978;17(4):465–7. [24] Coit DW, Jin T, Wattanapongsakorn N. System optimization with component reliability estimation uncertainty: a multicriteria approach. IEEE Trans Reliab 2004;53(3):369–80. [25] Martorell S, Sa´nchez A, Carlos S, Serradell V. Alternatives and challenges in optimizing industrial safety using genetic algorithms. Reliab Eng Syst Saf 2004;86(1):25–38. [26] Giuggioli P, Marseguerra M, Zio E. Multiobjective optimization by genetic algorithms: application to safety systems. Reliab Eng Syst Saf 2001;72:59–74. [27] Zitzler E, Thiele L. Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach. IEEE Trans Evol Comput 1999;3(4):257–71. [28] Zitzler E, Laummans M, Thiele L. SPEA2: improving the strength Pareto evolutionary algorithm. TIK Report No. 103, Swiss Federal Institute of Technology (ETH), Computer Engineering and Networks Laboratory (TIK), 2001. [29] Deb K, Pratap A, Agarwal S, Meyarivan T. A fast and elitist multiobjective genetic algorithm: NSGA-II. KanGAL Report No. 200001, Kanpur Genetic Algorithms Laboratory (KanGAL), Indian Institute of Technology, 2001. [30] Rocco CM, Miller AJ, Moreno JA, Carrasquero N. Reliability optimisation of complex systems. In: The annual reliability and maintainability symposium, Los Angeles, USA, 2000. [31] Zitzler E, Laumanns M, Bleuler S. A tutorial on evolutionary multiobjective optimization. In: Workshop on multiple objective metaheuristics (MOMH 2002), http://www.tik.ee.ethz.ch/bleuler [32] Schaffer JD. Multiple optimization with vector evaluated genetic algorithms. PhD thesis, Vanderbilt University, Unpublished, 1984. [33] Coello C. http://www.cs.cinvestav.mx/EVOCINV/download/tutorial– moea.pdf [34] Coello C. A comprehensive survey of evolutionary-based multiobjective optimization techniques. Knowl Inf Syst 1999;1(3): 129–56. [35] Horn J, Nafpliotis N. Multiobjective optimization using the niched pareto genetic algorithm. IlliGAL Report 93005, Illinois Genetic Algorithms Laboratory, University of Illinois, Champaign, IL, 1993. [36] Srinivas N, Deb K. Multiobjective optimization using nondominated sorting in genetic algorithms. Evol Comput 1994;2(3):221–48. [37] Knowles JD, Corne DW. M-PAES: a memetic algorithm for multiobjective optimization. In: Proceedings of the Congress on Evolutionary Computation (CEC00). Piscataway, NJ: IEEE Press; 2000. p. 325–32. [38] Knowles JD, Corne DW. Approximating the nondominated front using the Pareto archived evolution strategy. Evol Comput 2000;8(2):149–72. [39] Corne DW, Jerram NR, Knowles JD, Oates MJ. PESA-II: regionbased selection in evolutionary multiobjective optimization. In: Proceedings of the genetic and evolutionary computation conference (GECCO-2001). Los Altos, CA: Morgan Kaufmann Publishers; 2001. p. 283–90. [40] Coello C. Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: a survey of the state of the art. Comput Methods Appl Mech Eng 2002;8(2):1245–87. [41] Fyffe DE, Hines WW, Lee NK. System reliability allocation problem and a computational algorithm. IEEE Trans Reliab 1968;17: 64–9. [42] Nakagawa Y, Miyazaki S. Surrogate constraints algorithm for reliability optimization problems with two constraints. IEEE Trans Reliab 1981;30:175–80.