Reliability Engineering and System Safety 79 (2003) 149–159
Robust design using a hybrid-cellular-evolutionary and interval-arithmetic approach: a reliability application

Claudio M. Rocco*, José Alí Moreno, Néstor Carrasquero

Facultad de Ingeniería, Universidad Central de Venezuela, Apartado 47937, Caracas 1040A, Venezuela

* Corresponding author. Tel.: +58-412-252-8346; fax: +58-212-605-3030. E-mail address: [email protected] (C.M. Rocco).
Abstract

Sensitivity analysis is a technique by which one can determine, with good approximation, whether a system will work within its specification limits when the inputs vary between their limits. Although this type of analysis, from input to output, can provide useful information to the decision-maker, we present an approach, based on the use of Cellular Evolutionary Strategies and Interval Arithmetic, in which the inverse problem is solved. That is, we move from output to input: starting with output range specifications, we infer the maximum uncertainty or variability that the input parameters can exhibit. The approach is an indirect method based on optimisation, rather than a direct method based on mapping from the output into the input space. The proposed approach is illustrated by computational examples applied to a complex reliability system. Results are compared with those obtained using a general non-linear optimisation package. © 2002 Elsevier Science Ltd. All rights reserved.

Keywords: Modelling and simulation; Evolutionary strategy; Interval arithmetic; Robust design; Global optimisation; Sensitivity analysis
1. Introduction

Various analytic and computational techniques exist for examining the effects of uncertain inputs within a model [1]. These include:

• Methods for computing the effect of changes in inputs on model predictions, i.e. sensitivity analysis (SA);
• Methods for calculating the uncertainty in the model outputs induced by the uncertainties in its inputs, i.e. uncertainty propagation; and
• Methods for comparing the importance of the input uncertainties in terms of their relative contributions to the uncertainty in the outputs, i.e. uncertainty analysis.

Techniques used in sensitivity and uncertainty analysis include [1]:

• Deterministic, one-at-a-time analysis of each factor, holding all others constant at nominal values;
• Deterministic joint analysis, changing the value of more than one factor at a time;
• Parametric analysis, moving one or a few inputs across reasonably selected ranges, such as from low to high values, in order to examine the shape of the response;
• Probabilistic analysis, using probability density functions, regressions, Monte Carlo simulation, or other means to examine how much of the uncertainty in the conclusions is attributable to which inputs.

A condensed list of reasons why, and instances where, SA should be considered can be found in Refs. [2,3] and includes:

• To identify critical regions in the space of input parameters;
• To locate the few influential parameters in systems with multiple uncertain inputs (screening exercises);
• To ascertain whether a subset of input parameters may account for (most of) the output variance;
• To validate models;
• To optimally allocate resources.

SA is usually carried out by determining which parameter or parameters have significant effects on the results of a study (local or global SA [3]). An attempt is then made to increase the precision of these parameters in order to reduce the danger of serious error. This type of analysis, from input to output, can provide the decision-maker with a sharper picture of the effects of the uncertainty.
Nomenclature

CES     cellular evolutionary strategies
CES-IA  the name for our algorithm
ES      evolutionary strategies
FOR     feasible operating region
IA      interval arithmetic
MIB     maximum volume inner box

Notation
Rs, Cs          [reliability, cost] of the system
Ri, R̄i          [reliability, unreliability] of component i
m               number of constraints
n               number of components
Cs,min, Cs,max  [lower, upper] bound on Cs
gj              constraint function j, j = 1, …, m
x               n × 1 vector of design parameters
X               experimental region
y(x)            m × 1 vector that describes the properties or performances of x
LBi             lower bound for yi(x)
UBi             upper bound for yi(x)
w               n × 1 vector that identifies a point in the search space
a               n × 1 vector of inclination angles
C               n × 1 vector that identifies the centre of the maximum volume inner box (MIB)
s               n × 1 vector of standard deviations
N(0, s)         vector of independent random numbers, normally distributed with mean 0 and the standard deviations given in the vector s
z̄               an overbar denotes complementation: z̄ = 1 − z, where z is any expression
T, W, Z         interval numbers

Assumptions
1. All units and the system have two states: good and bad.
2. The states of all units are mutually statistically independent.
Of course, the fact that the uncertainty in one group of variables is more important than in another gives the decision-maker some hints about what to do to reduce its effects.

In this paper we present an approach in which the inverse problem is solved, especially when there are specifications on the outputs. That is, we move from output to input and determine the effect of such specifications on the inputs. The proposed approach is based on optimisation instead of on mapping from the output into the input space.

For example, in reliability system design, mathematical models are used to describe the properties of the system to be designed. One can say that a model calculates (or predicts) the outcomes, in terms of a vector of properties y, from a vector x of design parameters (component reliabilities) [4]. In this context the n-vector x represents the design parameters of the product, and the model or function y(x) describes the properties or performances, represented by an m-vector.

As an example, consider the complex system (life-support system in a space capsule [5]) shown in Fig. 1. The reliability of the system is given by

Rs = 1 − R3(R̄1R̄4)² − R̄3[1 − R2(1 − R̄1R̄4)]²

For Ri = 0.9, i = 1–4, the reliability of the system is Rs = 0.9987219. We can evaluate, for example, the effects on Rs when the reliability of one (local analysis) or more (global analysis) components is changed. For example, we can conclude that a ±10% variation in the reliability of component i causes a ±z% variation in Rs. These effects can provide information on how to modify the component reliabilities or the system structure.

Fig. 1. Example of reliability system.
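This forward, input-to-output evaluation is easy to script. The following Python sketch (our illustration, not part of the original paper) evaluates Rs at the base design and perturbs each component reliability by ±10% to expose the local effects:

```python
# Reliability of the life-support system of Fig. 1,
# with Qi = 1 - Ri denoting the component unreliabilities.
def system_reliability(R):
    R1, R2, R3, R4 = R
    Q1, Q3, Q4 = 1 - R1, 1 - R3, 1 - R4
    return 1 - R3 * (Q1 * Q4) ** 2 - Q3 * (1 - R2 * (1 - Q1 * Q4)) ** 2

print(system_reliability([0.9, 0.9, 0.9, 0.9]))  # 0.9987219 (base design)

# Local sensitivity: vary one component reliability by +/-10% at a time.
for i in range(4):
    for delta in (-0.09, +0.09):                 # +/-10% of Ri = 0.9
        R = [0.9] * 4
        R[i] += delta
        print(f"R{i + 1} = {R[i]:.2f}: Rs = {system_reliability(R):.7f}")
```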
The robustness of a design is defined as the maximum size of the deviation from the design that can be tolerated such that the product still meets all requirements [4]. In the previous example, we would like to know the maximum possible deviations of the Ri consistent with Rs remaining within defined limits (for example, 0.99 ≤ Rs ≤ 0.999).

Since the early 1980s, several experiments for improving component reliability have been documented, based on design-of-experiments techniques or the Taguchi approach [6]. This paper presents an innovative approach to obtaining a robust design, based on the use of Cellular Evolutionary Strategies (CES) and Interval Arithmetic (IA). We do not consider the factors or parameters that affect the reliability of each component; instead, we propose a method to quantify the maximum deviation of each component reliability that does not violate the output specifications.

The problem we address is different from variability or sensitivity analysis [7,8], where, knowing the deviations of the component parameters, the method evaluates the variation of the properties due to the variation of the inputs, that is, moving from input to output. Here we reverse the process, assessing the input parameter uncertainty of one (local) or all (global) components that maintains the output performance function within specified bounds.

Section 2 gives the problem definition. Section 3 contains an overview of the evolutionary approach, while IA is presented in Section 4. Section 5 presents the general approach used to solve the problem. Finally, the results and discussion of the examples are presented in Section 6.
2. Problem description

Consider the voltage divider shown in Fig. 2. The terminal voltage is Vab = VoR/(R + Ra) and the input resistance is Rin = R + Ra. Let us assume that Vo = 10 V, R = 50 Ω and Ra = 70 Ω. We can evaluate how Vab and Rin are affected by uncertainty in R and Ra. For example, if Ra = 70 Ω and R = 50 ± 10 Ω, we obtain that Vab will vary between 5.40 and 6.36 V and 110 ≤ Rin ≤ 130 Ω. Additionally, if Ra = 70 ± 30 Ω, then Vab will vary between 4 and 7.14 V and 80 ≤ Rin ≤ 140 Ω.

Now suppose that the specifications require Vab to lie between 4.5 and 5.5 V and Rin between 80 and 120 Ω. Fig. 3 shows the feasible region (the region delimited by dotted lines) [9]. How should we select R and Ra in order to satisfy both specifications?

In general, robust design deals with non-linear optimisation, and it is usually first assumed that y(x) is a continuous function [5]. The proposed approach, based on CES, does not require this assumption. A second common assumption is that x must vary in a continuous way within a so-called experimental region X. For the approach developed here, this region simply consists of lower and upper bounds on the design parameters. In a technological context it is common to qualify a relation y(x) with a statement like 'the relation is valid on this range' [4].

In the voltage divider example, we are looking for the maximum deviations of R and Ra that still satisfy the specifications. Fig. 4 shows the theoretical tolerance region for R and Ra. Ro and Rao are the centre values, whereas ΔR and ΔRa are the maximum deviations to be determined. We will assume that, for any parameter, the range of variation is symmetrical about a central or nominal value.
Fig. 2. Voltage divider example.
Fig. 3. Feasible solution.
2.1. Feasible solution set and approximate descriptions

Suppose the area shown in Fig. 5(a) is the feasible zone, that is, any pair (R, Ra) in that zone satisfies the specifications. An exact description of the feasible solution set (FSS) [10], or feasible operating region [11] (Fig. 5(a)), is in general not simple, since it may be a very complex set. Moreover, the FSS may be limited by non-linear functions. For this reason, approximate descriptions are often sought, using simply shaped sets such as boxes containing the set of interest (outer bounding, Fig. 5(b)) or contained in it (inner bounding, Fig. 5(c) and (d)). In particular, we are looking for maximum volume inner boxes (MIBs, Fig. 5(c) and (d)): we can guarantee that whatever pair (R, Ra) is selected in such a box, it will comply with the specifications.

The maximum ranges of possible variation of the feasible values are the sizes (along the co-ordinate axes) of the axis-aligned box of minimum volume containing the FSS [10]. The ranges for each variable define the so-called tolerance region [9].

Several references report methods to describe the FSS. For example, Milanese et al. [10] present techniques to obtain the MIB. Such methods can be applied when the FSS is limited by linear functions.
Fig. 4. Tolerance region.
Fig. 5. FSS and approximate descriptions.
The proposed approach, in contrast, is general and can be applied even if the FSS is limited by non-linear functions. Spense and Soin [9] discuss methods based on Monte Carlo techniques to obtain a box such that the area outside the FSS is minimal, but their solution is not as robust as the one obtained by our approach. Other approaches, described in Refs. [4,12], can be applied only to linear systems. For linear systems the proposed approach can also be used, but we consider it more time consuming, so its use there is not justified.

Our approach can analyse two cases:

1. Centre specified. Given a centre value, the approach finds a symmetrical 'box' (Fig. 5(c)). We can guarantee that any point selected within this box complies with the specifications. Note that this may not be the 'biggest' box: for example, we could enlarge the box in the R direction and obtain a bigger box, but then the box would not be symmetrical with respect to the given centre.

2. Centre unspecified. The approach selects a centre value and finds a symmetrical box that maximises the internal area (Fig. 5(d)). In this case we can obtain a box bigger than in the previous case, and again we can guarantee that whatever point is selected in this box, it will comply with the specifications.
2.2. General robust design mathematical formulation

In the definition of the design problem we restrict ourselves to the inequality form. The designer formulates target values on the reliability of a system by setting lower and upper bounds on the properties yi(x). The problem is then to find a product x in the experimental region X which fulfils the requirements on the properties. We define the following slack function gi(x):

gi(x) = UBi − yi(x)  when there is an upper bound requirement, or
gi(x) = yi(x) − LBi  when there is a lower bound requirement.

In order to focus on inequalities we do not define an objective function on the properties, which is usually included in these problems [4]. This results in a product design problem mathematically formulated as: find an element of F ∩ X, with

F := {x ∈ Rⁿ | gi(x) ≥ 0, i = 1, …, m}

To obtain the MIB it is required that all the points inside the generated box satisfy the constraints. The mathematical formulation is then as follows. Let the experimental region X be defined by

X := {x, C ∈ Rⁿ | xi ∈ [xi,lower, xi,upper], Ci = (xi,upper + xi,lower)/2}

1. Centre specified:

   max ∏(i=1..n) |xi − Ci|,  s.t. x ∈ F ∩ X

2. Centre unspecified:

   max ∏(i=1..n) |xi − Ci|,  s.t. x, C ∈ F ∩ X
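As a concrete (and entirely illustrative) instance of the centre-specified formulation, the sketch below encodes the slack functions and the box-volume objective for the voltage divider, using the specifications and centre that reappear in Section 5 (4.5 ≤ Vab ≤ 5.5 V, 80 ≤ Rin ≤ 120 Ω, C = (50, 55)). Note that feasibility of a corner point alone does not yet guarantee feasibility of the whole box; that is where IA enters later:

```python
# 'Centre specified' MIB ingredients for the voltage divider (our sketch).
V0 = 10.0
C = (50.0, 55.0)          # given centre (Ro, Rao)

def slack(x):
    """Slack functions g_i(x) >= 0, one per bound requirement."""
    R, Ra = x
    vab = V0 * R / (R + Ra)
    rin = R + Ra
    return [vab - 4.5, 5.5 - vab, rin - 80.0, 120.0 - rin]

def fitness(x):
    """Objective: product of |x_i - C_i| (the box 'volume')."""
    prod = 1.0
    for xi, ci in zip(x, C):
        prod *= abs(xi - ci)
    return prod

x = (52.0, 57.0)                        # a candidate corner point
print(all(g >= 0 for g in slack(x)))    # True: the corner itself is feasible
print(fitness(x))                       # 4.0 = |52-50| * |57-55|
```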
3. Evolutionary approach

In recent years, (global) optimisation algorithms imitating certain principles of nature have proved their usefulness in various application domains. Phenomena found in annealing processes, central nervous systems and biological evolution have led to optimisation methods such as simulated annealing, (artificial) neural networks and the field of evolutionary algorithms (EA): genetic algorithms, evolutionary programming and evolution strategies (ES) [13]. EA have been applied to a wide range of problems, especially in those cases where traditional optimisation techniques show poor performance or simply fail [14].

A population of individuals, each representing one point of the solution space, collectively evolves towards better solutions by means of a parent selection process, a reproduction strategy (usually randomised recombination and/or mutation operators) and a substitution strategy. Parent selection determines which individuals participate in the reproduction strategy. Recombination allows the exchange of already existing genes, and mutation introduces new genetic material. The substitution strategy defines the individuals of the next generation population [15].

ES were developed as a powerful tool for parameter optimisation tasks [15,16]. Three basic types of ES have been developed. In a two-member or (1 + 1) evolution strategy (ES(1 + 1)), one 'parent' produces one offspring per generation by applying a normally distributed perturbation, until a 'child' performs better than its ancestor and takes its place [13]. In this technique each individual (proposed solution) is represented by a pair of float-valued vectors (w, s). The first vector w represents a point in the search space (n components); the second vector s is a vector of perturbations [17]. In the next iteration an offspring (the mutated individual) is generated by the expression

w(t+1) = w(t) + N(0, s)

The term N(0, s) represents a vector of independent random numbers, normally distributed with mean 0 and the standard deviations given in the vector s [18]. The use of a normal distribution as the perturbation of an individual agrees with the biological observation that smaller changes occur more often in nature.

To improve the convergence rate of the ES(1 + 1), the '1/5 success rule' was proposed [19]. This rule adapts the s values in a deterministic manner according to the measured success frequency of mutations. It can be stated simply as follows: the ratio between successful mutations and the total mutations performed along k generations should be 1/5. When this
ratio is greater than 1/5, the s values are increased; if it is smaller than 1/5, the s values are reduced; otherwise the standard deviations remain unchanged. The suggested increase and reduction factors are 1.22 and 0.82, respectively [19].

This strategy has been extended to a (μ + 1) strategy, which incorporated recombination for the first time, with several parents being available [13]. Parents are selected randomly (uniform distribution) from the whole population. Recombination can be performed by means of recombination operators commonly used in genetic algorithms, such as point crossover or arithmetical crossover [17]. In point crossover two parents are selected and each component of the new offspring comes randomly from the first or the second parent. In arithmetic crossover two parents w1 and w2 produce two offspring w3 and w4 which are convex linear combinations of their parents: w3 = a·w1 + (1 − a)·w2 and w4 = (1 − a)·w1 + a·w2. The mutation scheme and the step-size control remain unchanged [17].

Schwefel [16,19] generalised these strategies to the multimember evolution strategies now denoted by ES(μ + λ) and ES(μ, λ). In these strategies the standard deviations for mutation become part of the individual and evolve by means of mutation and recombination, a process called self-adaptation of strategy parameters [19]. Each individual is now represented by (w, s, a), where w is a vector that identifies a point in the search space, s is a vector of perturbations and a is a vector of inclination angles. After random uniform parent selection, a mutation is then defined by [19]:
s′i = si · exp(τ′·N(0, 1) + τ·Ni(0, 1))
a′i = ai + β·N(0, 1)
w′ = w + N(0, s′, a′)

where τ ∝ (√(2√n))⁻¹, τ′ ∝ (√(2n))⁻¹, β ≈ 0.0873 and n is the number of variables. The self-adaptation of strategy parameters relies on an indirect link between the strategy parameters and the fitness of an individual, as well as on a sufficiently large diversity in the parent population. Depending on an individual's wi, the resulting objective function value f(wi) serves as the 'phenotype' (fitness) in the substitution strategy step. In a (μ + λ) strategy, the μ best of all (μ + λ) individuals survive to become parents of the next generation. Using the (μ, λ) strategy, selection takes place only among the λ offspring. The second scheme is more realistic and therefore more successful, because no individual may survive forever, which could, at least theoretically, occur using the plus variant [13]. Furthermore, the (μ + λ) strategy tends to emphasise local rather than global search properties. For this reason, modern ES use the (μ, λ) substitution strategy [19]. A ratio of λ/μ ≈ 7 is recommended as a good setting for the relation between parent and offspring population sizes [19].
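A minimal sketch (ours) of this self-adaptive mutation, omitting the inclination angles a and recombination, and using the common settings τ = 1/√(2√n) and τ′ = 1/√(2n):

```python
import math
import random

def mutate(w, s):
    """Self-adaptive ES mutation: perturb the step sizes first, then the point."""
    n = len(w)
    tau = 1.0 / math.sqrt(2.0 * math.sqrt(n))   # per-component learning rate
    tau_p = 1.0 / math.sqrt(2.0 * n)            # global learning rate
    common = random.gauss(0.0, 1.0)             # one N(0,1) shared by all i
    s_new = [si * math.exp(tau_p * common + tau * random.gauss(0.0, 1.0))
             for si in s]
    w_new = [wi + random.gauss(0.0, si) for wi, si in zip(w, s_new)]
    return w_new, s_new

w, s = [50.0, 55.0], [1.0, 1.0]
print(mutate(w, s))
```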
Algorithm 1 presents the (μ, λ)-ES in pseudo-code notation [14]:

Algorithm 1 ((μ, λ)-ES)
t = 0;
initialise P(0) = {a1, …, aμ};
evaluate f(a1), …, f(aμ);
while (T(P(t)) = 0) do  {T denotes a termination criterion}
{
  P′ = ∅;
  for i = 1 to λ do
  {
    select parents;
    recombination;
    mutation;
    evaluate the new offspring;
    store the offspring in P′;
  }
  P(t + 1) = select the best μ offspring from P′;
  t = t + 1;
}

Algorithm 2 presents the (μ + λ)-ES in pseudo-code notation [14].

Algorithm 2 ((μ + λ)-ES)
t = 0;
initialise P(0) = {a1, …, aμ};
evaluate f(a1), …, f(aμ);
while (T(P(t)) = 0) do  {T denotes a termination criterion}
{
  for i = 1 to λ do
  {
    select parents;
    recombination;
    mutation;
    evaluate the new offspring;
    store the offspring in P;
  }
  P(t + 1) = select the best μ offspring among the μ + λ;
  t = t + 1;
}

CES [14] combine the ES(μ, λ) techniques with concepts from cellular automata [20] in the parent selection step, using the concept of neighbourhood. In the CES approach each individual (proposed solution), represented by a pair of float-valued vectors (w, s), is located randomly in a cell of a two-dimensional array. To update a specific individual, parent selection looks only at certain cells (its neighbourhood), in contrast with the general ES, which searches for parents in the whole population [14]. Note that this arrangement only defines how a specific individual (of n components) is updated; it does not mean that a high-dimensional problem is reduced to a two-dimensional neighbourhood.
Fig. 6. Two-dimensional cell array.
As an example, Fig. 6 shows the array for 20 n-component individuals (a1, …, a20) and the Von Neumann neighbourhood of individual a13 (radius = 1). The Von Neumann neighbourhood is formed by individuals a8, a12, a14 and a18. To update a13, parents are selected at random, with equal mating probabilities, taking into account only a8, a12, a13, a14 and a18 [14] (a sketch of this selection is given at the end of this section). Seven new individuals are then generated by means of the mutation scheme previously described and an arithmetic crossover, and the best of them replaces a13 (the number 7 follows the suggestion λ/μ ≈ 7 made in Ref. [19]). The rest of the individuals are updated in the same way.

In CES the neighbourhood type and the radius are parameters to be selected. In Ref. [14] it was found that, for optimisation problems, the best results were obtained with a Von Neumann neighbourhood of radius 1. Using those parameters, diversity among the individuals is maintained during several evolutionary iterations. A bigger radius tends to simulate a conventional ES, while a different neighbourhood (for example, the Moore neighbourhood) tends to homogenise the population.

While ES were originally designed with the parameter optimisation problem in mind, CES were designed to find the global optimum, or a 'near' optimum, of complex multimodal functions. Their use is therefore suggested for high-dimensional problems [14].

As mentioned in Ref. [17], ES, and therefore CES, handle constraints as part of the optimisation problem. If, during some iteration, an offspring does not satisfy all of the constraints, the offspring is disqualified. If the rate of occurrence of such illegal offspring is high, the ES adjusts its control parameters by decreasing the components of the vector s. In order to enhance constraint handling in ES, some ideas from the genetic algorithm community have been incorporated [18]: for example, imposing penalties on individuals that violate constraints [17], using 'decoders', which avoid illegal individuals, or 'repair' algorithms, which repair any infeasible solution [18].

Due to the stochastic nature of CES, it is important to establish the consistency of the results, that is, how often a run comes close to the global solution. Results from several runs are required to assess CES consistency.
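As promised above, a small sketch (ours) of the neighbourhood-based parent selection. It assumes the 20 individuals of Fig. 6 sit in a 4 × 5 grid (consistent with a13's neighbours being a8, a12, a14 and a18) and that the grid wraps around at the borders; both are assumptions on our part:

```python
import random

ROWS, COLS = 4, 5   # 20 individuals in a 4 x 5 array (assumed layout of Fig. 6)

def von_neumann(r, c, radius=1):
    """Cells at Manhattan distance <= radius from (r, c), cell itself included."""
    cells = []
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            if abs(dr) + abs(dc) <= radius:
                cells.append(((r + dr) % ROWS, (c + dc) % COLS))  # wrap around
    return cells

def select_parents(r, c):
    """Pick two distinct parents uniformly from the neighbourhood of (r, c)."""
    return random.sample(von_neumann(r, c), 2)

print(von_neumann(2, 2))   # (2,2) plus its four orthogonal neighbours
print(select_parents(2, 2))
```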
4. Interval arithmetic

IA originates from the recognition that there is frequently uncertainty associated with the parameters used in a computation [21,22]. This form of mathematics uses interval 'numbers', which are actually ordered pairs of real numbers representing the lower and upper bounds of the parameter range [23]. For example, if we know that the time of planned maintenance (Tp) is between 2 and 5 h, the corresponding interval number would be written Tp = [2, 5] h.

IA is built upon a basic set of axioms. If we have two interval numbers T = [a, b] and W = [c, d] with a ≤ b and c ≤ d, then [21–23]:

T + W = [a, b] + [c, d] = [a + c, b + d]
T − W = [a, b] + (−[c, d]) = [a − d, b − c]
T * W = [min{ac, ad, bc, bd}, max{ac, ad, bc, bd}]
1/T = [1/b, 1/a],  0 ∉ [a, b]
T/W = [a, b]/[c, d] = [a, b] * [1/d, 1/c],  0 ∉ [c, d]

In the above rules we exclude division by an interval containing zero. It is often useful to remove this restriction; the resulting arithmetic is called extended interval arithmetic and gives rise to a set of additional rules. If we have two interval numbers T = [a, b] and W = [c, d] with a ≤ b, c ≤ 0 ≤ d and c < d, then [23]:

T/W = [b/c, +∞)                      if b ≤ 0 and d = 0
T/W = (−∞, b/d] ∪ [b/c, +∞)          if b ≤ 0 and c < 0 < d
T/W = (−∞, b/d]                      if b ≤ 0 and c = 0
T/W = (−∞, +∞)                       if a < 0 < b
T/W = (−∞, a/c]                      if a ≥ 0 and d = 0
T/W = (−∞, a/c] ∪ [a/d, +∞)          if a ≥ 0 and c < 0 < d
T/W = [a/d, +∞)                      if a ≥ 0 and c = 0

For additional extended IA rules, see Ref. [20].

A real number b is equivalent to the interval [b, b]; such an interval is said to be degenerate [23].

Only some of the algebraic laws valid for real numbers remain valid for intervals. It is easy to show that interval addition and multiplication are associative as well as commutative. However, the distributive law does not always hold for IA [21]. As an example, [0, 1](1 − 1) = [0, 0] = 0, while [0, 1] − [0, 1] = [−1, 1]. An important property referred to as subdistributivity does hold; it is given mathematically by the set inclusion relationship T(W + Z) ⊆ TW + TZ. The failure of the distributive law often causes overestimation.

It is also interesting to note that T − T ≠ 0 and T/T ≠ 1. In general, when a given variable occurs more than once in an interval computation, it is treated as a different variable in each occurrence. Thus T − T is the same as T − W with W equal to, but independent of, T, and this causes overestimation, that is,
the widening of the computed interval. This effect is called the 'dependency problem' [21]. However, there are expressions where dependence does not lead to overestimation, for example T + T, or W * W if W ≥ 0 [22]. Other mathematical operations have been defined in order to avoid or reduce the dependency problem. For example:

Tⁿ = [1, 1]                 if n = 0
Tⁿ = [aⁿ, bⁿ]               if a ≥ 0, or if a ≤ 0 ≤ b and n is odd
Tⁿ = [bⁿ, aⁿ]               if b ≤ 0
Tⁿ = [0, max(aⁿ, bⁿ)]       if a ≤ 0 ≤ b and n is even

An interval function is an interval-valued function of one or more interval arguments. Consider a real-valued function f of real variables t1, t2, …, tn, and an interval function F of interval variables T1, T2, …, Tn. The interval function F is said to be an interval extension of f if

F(T1, T2, …, Tn) = f(t1, t2, …, tn)

whenever each Ti is a degenerate interval, Ti = [ti, ti]. For example [21], let f(t) = t(1 − t); then the interval extension of f is F(T) = T(1 − T), with T = [t, t]. The range of a function f of real variables over an interval can be calculated from the interval extension F, changing ti to Ti. Moore [21] states the inclusion property:

f(t1, t2, …, tn) ∈ F(T1, T2, …, Tn)  for all ti ∈ Ti (i = 1, …, n)

This means that the range of the function f(t) can be bounded using IA. As an example, consider the function w = t² − 1 with t in [−2, 2]. If we evaluate w only at the vertices (that is, w(−2) and w(2)), we obtain w(−2) = w(2) = 3. As shown in Fig. 7, the exact range of the function for t in [−2, 2] is [−1, 3]. If the function is evaluated as an interval function, we need only one 'interval' evaluation and obtain the interval [−1, 3] (the exact range):

W = T² − 1 = [−2, 2]² − 1 = [0, 4] − [1, 1] = [−1, 3]
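These rules are easy to exercise in code. The sketch below (our illustration, not from the paper) implements the basic operations and reproduces both the dependency effect (T − T ≠ 0) and the exact range [−1, 3] of w = t² − 1 when the power rule is used:

```python
class Interval:
    """Minimal interval arithmetic: +, -, * and an even/odd power rule."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)

    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))

    def power(self, n):
        # Dependency-aware T**n, tighter than repeated multiplication.
        if n % 2 == 1 or self.lo >= 0:
            return Interval(self.lo ** n, self.hi ** n)
        if self.hi <= 0:
            return Interval(self.hi ** n, self.lo ** n)
        return Interval(0, max(self.lo ** n, self.hi ** n))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

T = Interval(-2, 2)
one = Interval(1, 1)
print(T - T)              # [-4, 4]: the dependency problem, not [0, 0]
print(T.power(2) - one)   # [-1, 3]: exact range of w = t**2 - 1
print(T * T - one)        # [-5, 3]: naive T*T overestimates the range
```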
Fig. 7. Range for w = t² − 1, t ∈ [−2, 2].
In many practical problems it may be difficult to obtain an expression in which each variable occurs no more than once. In these cases a single computation of the interval extension of f(t) only yields an interval F(T) that is wider than the exact range of f(t). However, by the inclusion property, the interval F(T) is guaranteed to enclose f(T). Thus F(T) may, in some cases, serve as an initial, rough estimate of the output variable tolerance, providing infallibly outward bounds on it [24]. Several techniques have been developed to avoid the dependency problem and hence reduce the overestimation: quadratic approximation and Gauss transformation [25], deconvolution [26], generalized interval arithmetic [23], modal intervals [27,28] and central forms [29]. IA has been used in several fields; for applications see Refs. [8,30–32], among others.
5. General approach

The approach presented in this paper defines an initial hyper-rectangle and enlarges it through an iterative process until the MIB is found. CES are used for the optimisation problem, while IA is used as a checking technique that guarantees the feasibility of the design.

(1) Centre specified. Initially we generate a random point w = (w1, w2, …, wn) and check whether the point is feasible by evaluating the constraint functions. If the point is infeasible, we generate a new point and check it for feasibility. If the generated point is feasible, we generate a symmetrical box (hyper-rectangle) using C as the centre of symmetry. To check the feasibility of the generated box, we could evaluate the constraints at each vertex of the box. If the box is feasible, we calculate its volume; the goal is then to evolve the box in order to maximise the volume. If the box is not feasible, we discard it and generate a new point.

(2) Centre unspecified. Initially we generate at random both w = (w1, w2, …, wn) and the centre co-ordinates C = (C1, C2, …, Cn). In this case the centre co-ordinates are considered additional variables, and the total number of variables is 2n. We check whether both points w and C are feasible by evaluating the constraint functions. If either point is infeasible, we generate new points and check them for feasibility. If both w and C are feasible, we generate a symmetrical box using C as the centre of symmetry. To check the feasibility of the generated box, we can evaluate the constraints at each vertex. If the box is feasible, we calculate its volume; the goal is then to evolve the box in order to maximise the volume. If the box is not feasible, we discard it and generate a new point.
Note that in cases 1 and 2, checking box feasibility through the vertices raises two problems: (a) the function must be evaluated at 2ⁿ vertices, and (b) the extreme values are not necessarily attained at the vertices of the tolerance region. To overcome these two drawbacks, we propose the use of IA to evaluate whether the generated box is feasible. If the constraint functions are evaluated as interval functions, we need only one 'interval' evaluation, and we obtain guaranteed bounds on the range of the constraint functions inside the generated box.

Note that in the case that the FSS is non-convex, a feasibility check using IA will take this into account. For example, assume the constraint is g(x) = x² − 1 ≥ 0 and consider the segment [−2, 2]. If we evaluate the constraint at the vertices, we get g(−2) = g(2) = 3. From this result we might conclude that the segment [−2, 2] belongs to the feasibility region. If instead we evaluate the constraint using IA, we get G([−2, 2]) = [−1, 3]. This means that although −2 and 2 belong to the FSS, there are some points in [−2, 2] that are outside it. Indeed, the segments [−2, −1] and [1, 2] are the intervals included in the FSS, since both G([−2, −1]) and G([1, 2]) are ≥ 0.

To illustrate the CES-IA approach, let us consider the voltage divider example. Assume the centre (Ro, Rao) is specified as C = (50, 55) Ω, and specify Vab between 4.5 and 5.5 V and Rin between 80 and 120 Ω. Initially we generate at random the vector w, for example w = (51, 61) Ω. Evaluating Vab and Rin for R = 51 Ω and Ra = 61 Ω, we get Vab = 4.55 V and Rin = 112 Ω, so the vector w is a feasible point. We then generate a symmetrical 'box' using C as the centre of symmetry, with vertices (49, 61), (49, 49), (51, 49) and (51, 61). To avoid the dependency problem, we evaluate Vab as the interval function Vab = Vo/(1 + Ra/R). For R ∈ [49, 51] Ω and Ra ∈ [49, 61] Ω, we obtain Vab = 10/(1 + [49, 61]/[49, 51]) = [4.45, 5.10] V, which is not contained in [4.5, 5.5] V. The box generated using w is therefore not a feasible box.

We generate another random vector w, for example w = (52, 57). Evaluating Vab and Rin for R = 52 Ω and Ra = 57 Ω, we get Vab = 4.77 V and Rin = 109 Ω, so the vector w is a feasible point. We then generate a symmetrical box using C as the centre of symmetry, with vertices (48, 57), (48, 53), (52, 53) and (52, 57). Evaluating Vab as an interval function for R ∈ [48, 52] Ω and Ra ∈ [53, 57] Ω, we obtain Vab ∈ [4.57, 4.95] V and Rin ∈ [101, 109] Ω. The box generated using w and C is therefore a feasible box.
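The two box checks just performed can be reproduced with a few lines of interval code (ours, self-contained); Vab is written as Vo/(1 + Ra/R) so that each variable occurs only once:

```python
def idiv(a, b):
    """Interval division [a]/[b], assuming 0 is not in [b]."""
    p = [a[0] / b[0], a[0] / b[1], a[1] / b[0], a[1] / b[1]]
    return (min(p), max(p))

def box_feasible(R, Ra, v_spec=(4.5, 5.5), r_spec=(80.0, 120.0)):
    """One interval evaluation of Vab = 10/(1 + Ra/R) and Rin = R + Ra."""
    q = idiv(Ra, R)                                  # Ra/R
    vab = idiv((10.0, 10.0), (1.0 + q[0], 1.0 + q[1]))
    rin = (R[0] + Ra[0], R[1] + Ra[1])
    ok = (v_spec[0] <= vab[0] and vab[1] <= v_spec[1] and
          r_spec[0] <= rin[0] and rin[1] <= r_spec[1])
    return ok, vab, rin

print(box_feasible((49.0, 51.0), (49.0, 61.0)))  # infeasible: Vab ~ [4.45, 5.10]
print(box_feasible((48.0, 52.0), (53.0, 57.0)))  # feasible:   Vab ~ [4.57, 4.95]
```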
In order to use the evolutionary approach we need to specify the fitness function for each feasible point generated. As we are looking for the MIB, the fitness function is defined as

fitness(w) = ∏(i=1..n) |wi − Ci|
Algorithm 3 presents the proposed approach (CES-IA) in pseudo-code notation:

Algorithm 3 (CES-IA)
t = 0;
feasible = false;
while (feasible = false) do
{
  generate a random point w = (w1, w2, …, wn);
  if w is feasible then feasible = true;
}
P(0) = w;  {initial population}
while (TC(P(t)) = 0) do  {TC denotes a termination criterion}
{
  for i = 1 to μ do
  {
    for j = 1 to 7 do
    {
      select a new cell;
      select parents randomly in the neighbourhood;
      recombination;
      mutation;
      {evaluation of the new offspring}
      generate a symmetrical box using C as centre;
      check box feasibility using IA;
      if the box is feasible then
      {
        evaluate the box volume;
        fitness(offspring) = volume;
      }
      else fitness(offspring) = 0;
      store the new offspring in Y;  {temporary array}
    }
    replace the selected cell with the best element from Y;
  }
  t = t + 1;
}
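For intuition, the following stripped-down stand-in (ours; a (1+1)-style hill-climber with a fixed step size, rather than the full cellular ES of Algorithm 3) evolves the corner w of a centre-specified box for the voltage divider, accepting only corners whose whole box passes the interval feasibility check:

```python
import random

VO = 10.0
C = (50.0, 55.0)                      # given centre (Ro, Rao)

def idiv(a, b):
    # Interval division [a]/[b]; assumes 0 is not in [b].
    p = [a[0] / b[0], a[0] / b[1], a[1] / b[0], a[1] / b[1]]
    return (min(p), max(p))

def feasible_box(w):
    # Reflect the corner w about the centre C to form the symmetric box.
    R = (min(w[0], 2 * C[0] - w[0]), max(w[0], 2 * C[0] - w[0]))
    Ra = (min(w[1], 2 * C[1] - w[1]), max(w[1], 2 * C[1] - w[1]))
    if R[0] <= 0 or Ra[0] <= 0:
        return False
    q = idiv(Ra, R)                                  # Ra/R, single occurrence
    vab = idiv((VO, VO), (1.0 + q[0], 1.0 + q[1]))   # Vab = Vo/(1 + Ra/R)
    rin = (R[0] + Ra[0], R[1] + Ra[1])
    return (4.5 <= vab[0] and vab[1] <= 5.5 and
            80.0 <= rin[0] and rin[1] <= 120.0)

def area(w):
    return abs(w[0] - C[0]) * abs(w[1] - C[1])

random.seed(1)
w, sigma = (52.0, 57.0), 1.0          # feasible starting corner from the text
for _ in range(2000):
    cand = (w[0] + random.gauss(0, sigma), w[1] + random.gauss(0, sigma))
    if feasible_box(cand) and area(cand) > area(w):
        w = cand                       # keep improving, feasible corners only
print(w, area(w))                      # an inner box found by this sketch
```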
6. Computational examples
6.1. Life-support system in a space capsule [5]

Fig. 1 shows the complex system to be considered. The reliability of the system is

Rs = 1 − R3(R̄1R̄4)² − R̄3[1 − R2(1 − R̄1R̄4)]²

The problem is to obtain the ranges for each Ri such that 0.99 ≤ Rs ≤ 1, subject to the base (centre) design Ri = 0.90, i = 1, 2, 3, 4.
Using the approach described in Section 5 (centre co-ordinates C = (0.90, 0.90, 0.90, 0.90)) and a CES with 20 generations, 49 individuals in a 7 × 7 grid, s = 1.0, a Von Neumann neighbourhood with radius = 1 and asynchronous substitution, we obtained the following results:

R1 = [0.801, 0.998], R2 = [0.822, 0.977], R3 = [0.801, 0.998], R4 = [0.801, 0.998]
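As a spot-check (ours): for reliabilities in these ranges Rs is increasing in each Ri, so its extremes over the box occur at corners, and enumerating the 2⁴ corners reproduces the bounded interval reported next, up to rounding of the printed box bounds:

```python
from itertools import product

def system_reliability(R):
    R1, R2, R3, R4 = R
    Q1, Q3, Q4 = 1 - R1, 1 - R3, 1 - R4
    return 1 - R3 * (Q1 * Q4) ** 2 - Q3 * (1 - R2 * (1 - Q1 * Q4)) ** 2

boxes = [(0.801, 0.998), (0.822, 0.977), (0.801, 0.998), (0.801, 0.998)]
values = [system_reliability(corner) for corner in product(*boxes)]
print(min(values), max(values))
# ~0.9899 and ~0.999999; the paper reports [0.9900, 0.9999], obtained
# from the unrounded box bounds.
```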
Using these ranges, the interval for Rs is Rs,bounded = [0.9900, 0.9999].

To show the robustness of the proposed approach, 20 runs were performed and the best solution among the 20 runs was taken as the final solution. Differences between runs are due to the stochastic nature of CES. The average of the fitness function was 1.17 × 10⁻⁴, with a standard deviation of 1.69 × 10⁻⁵ (on average, CES needed 2.2 × 10⁶ fitness function evaluations).

To show how close the results obtained using the CES-IA approach are to the global solution, Table 1 presents the solution obtained with the Excel Solver (with the feasibility check performed using IA) and the best CES-IA solution. The Excel Solver uses a general non-linear algorithm, relies on mathematical prerequisites to be applicable, cannot always guarantee a good solution, and even fails when used to solve certain reliability optimisation problems [33]. The average error over the 20 runs was 4.3419 × 10⁻⁵. This means that the proposed approach produces excellent results.

Table 1
Comparison of the Excel and the best CES-IA solutions

Component     Excel             Best CES-IA
1             [0.800, 1.000]    [0.801, 0.998]
2             [0.824, 0.975]    [0.822, 0.977]
3             [0.800, 1.000]    [0.801, 0.998]
4             [0.800, 1.000]    [0.801, 0.998]
Box volume    0.001213436       0.00117825
Error                           3.51881 × 10⁻⁵

6.2. Life-support system in a space capsule: additional constraint

Normally the design problem also seeks to minimise the cost function Cs:

Cs = 2 Σi Ki Ri^ai

For example, in Ref. [5]: K1 = 100, K2 = 100, K3 = 100, K4 = 150 and ai = 1.0 for all i. Using the above values and the previous CES-IA ranges for the Ri, the range for Cs is [726.37, 893.62]. Using the Excel results for the Ri, the range for Cs is [724.83, 895.10].
Our approach can consider several constraints simultaneously. For example, we can solve the problem of obtaining the ranges for each Ri such that 0.99 ≤ Rs ≤ 1 and Cs,min ≤ Cs ≤ Cs,max. If we define Cs,min = 750 and Cs,max = 850, then the new solution found by CES is R1 = [0.8482, 0.9517], R2 = [0.8519, 0.9480], R3 = [0.8522, 0.9477] and R4 = [0.8652, 0.9347]. Using these new ranges, the intervals for Rs and Cs are Rs,bounded = [0.9956, 0.9998] and Cs,bounded = [770.0810, 849.9189]. Unfortunately, we could not obtain a solution for this problem using the Excel Solver.

6.3. Discussion

The approach presented is based on CES as the optimisation technique. Of course, another technique could be used to solve the optimisation problem but, as mentioned, CES do not rely on special mathematical prerequisites.

The main drawback of IA is that results can be overestimated due to the dependency problem. In general, the overestimation cannot be estimated a priori, and it could prevent an MIB from being found. However, if a box is obtained, we are sure that it is a robust solution. As mentioned, the overestimation can be treated using special techniques.

The burden added to the CES-IA procedure by the feasibility verification of the generated box is far outweighed by the flexibility such techniques provide (only one 'interval' evaluation and guaranteed ranges), in contrast to multiple vertex evaluations.
7. Conclusions

In this paper we have presented an innovative approach, based on the use of CES and IA, to obtain a robust system design, that is, input tolerance regions derived from specified output bounds. CES are used to solve the optimisation problem of maximising the size of each deviation, while IA is used as a checking technique that guarantees the feasibility of the design.

SA is a valuable tool that can provide the decision-maker with a sharper picture of the effects of uncertainty. The proposed approach complements the information that an SA provides to the decision-maker, allowing problems to be analysed from output to input. We believe that the proposed approach, together with SA, gives the decision-maker a powerful set of tools for validating and designing many kinds of systems.
Acknowledgements This work is based on research supported by Universidad Central de Venezuela. We are pleased to thank the editors and referees for their helpful advice and suggestions.
References

[1] Granger M, Henrion M. Uncertainty: a guide to dealing with uncertainty in quantitative risk and policy analysis. London: Cambridge University Press; 1993.
[2] Saltelli A, Scott M. Guest editorial: the role of sensitivity analysis in the corroboration of models and its link to model structural and parametric uncertainty. Reliab Engng Syst Safety 1997;57:1–4.
[3] Saltelli A. What is sensitivity analysis? In: Saltelli A, Chan K, Scott EM, editors. Sensitivity analysis. Chichester: Wiley; 2000.
[4] Hendrix EMT, Mecking CJ, Hendriks ThHB. Finding robust solutions for product design problems. Eur J Oper Res 1996;92:28–36.
[5] Ravi V, Murty BSN, Reddy PJ. Nonequilibrium simulated annealing algorithm applied to reliability optimization of complex systems. IEEE Trans Reliab 1997;R-46:233–9.
[6] Hamada M. Using statistically designed experiments to improve reliability and to achieve robust reliability. IEEE Trans Reliab 1995;R-44:206–15.
[7] Constantinides. Basic reliability. 1994 Annual Reliability and Maintainability Symposium, Anaheim, CA, USA; 1994.
[8] Rocco CM. Variability analysis of electronic systems: classical and interval methods. Proceedings of the Annual Reliability and Maintainability Symposium, Philadelphia, USA; 1997.
[9] Spense R, Singh Soin R. Tolerance design of electronic circuits. Wokingham: Addison-Wesley; 1988.
[10] Milanese M, Norton J, Piet-Lahanier H, editors. Bounding approaches to system identification. New York: Plenum Press; 1998.
[11] Thomaidis T, Pistikopoulos E. Optimal design of flexible and reliable process systems. IEEE Trans Reliab 1995;R-44:243–50.
[12] Shary S. Algebraic approach to the interval linear static identification, tolerance and control problems. Reliab Comput 1996;2(1):3–33.
[13] Kursawe F, Tzeng G, Yu P, editors. Proceedings of the Tenth International Conference on Multiple Criteria Decision Making, Taipei; 1992.
[14] Medina M, Carrasquero N, Moreno J. Estrategias Evolutivas Celulares para la Optimización de Funciones [Cellular evolutionary strategies for function optimisation]. IBERAMIA'98, 6º Congresso Iberoamericano de Inteligencia Artificial, Lisboa, Portugal; 1998.
[15] Kursawe F. Evolution strategies—simple models of natural processes? Rev Int Syst 1993;7(5):627–42.
[16] Schwefel H, Bäck Th. Evolution strategies. I: Variants and their computational implementation. In: Periaux P, Winter G, editors. Genetic algorithms in engineering and computer science. New York: Wiley; 1995.
[17] Michalewicz Z. Genetic algorithms + data structures = evolution programs. Berlin: Springer; 1992.
[18] Navarro J, Moreno J, Carrasquero N. Evolutionary multi-objective optimization of simulation models. IBERAMIA'98, 6º Congresso Iberoamericano de Inteligencia Artificial, Lisboa, Portugal; 1998.
[19] Bäck Th, Schwefel H. Evolutionary computation: an overview. Proceedings of the IEEE International Conference on Evolutionary Computation (IECC'96), Nagoya, Japan, p. 20–29. New York: IEEE Press; 1996.
[20] Wolfram S. Cellular automata as models of complexity. Nature 1984;311:419–24.
[21] Moore R. Methods and applications of interval analysis. Philadelphia: SIAM Studies in Applied Mathematics; 1979.
[22] Neumaier A. Interval methods for systems of equations. London: Cambridge University Press; 1990.
[23] Hansen E. Global optimization using interval analysis. New York: Marcel Dekker; 1992.
[24] Kolev LV. Interval methods for circuit analysis. Singapore: World Scientific; 1994.
[25] Hadjihassan S, Walter E, Pronzato L. Quality improvement via optimisation of tolerance intervals during the design stage. In: Kearfott RB, Kreinovich V, editors. Applications of interval computations. Dordrecht: Kluwer; 1996.
[26] Ferson S, Long T. Deconvolution can reduce uncertainty in risk analysis. http://ramas.com.
[27] Armengol J, Vehí J, de la Rosa JL, Travé-Massuyès L. On modal interval analysis for envelope determination within the Ca-En qualitative simulator; 1998. http://ima.udg.es/SIGLA/X.
[28] Vehí J. Anàlisi i disseny de controladors robustos [Analysis and design of robust controllers]. PhD Thesis, Universitat de Girona; 1998.
[29] Ratschek H, Rokne J. Computer methods for the range of functions. England: Ellis Horwood; 1984.
[30] Kearfott RB, Kreinovich V, editors. Applications of interval computations. Dordrecht: Kluwer; 1996.
[31] Alvarado F, Wang Z. Interval arithmetic in power flow analysis. IEEE Trans Power Syst 1992;7(3):1341–9.
[32] Hesham S. An interval mathematics approach to economic evaluation of power distribution systems. PhD Dissertation, Virginia Polytechnic Institute; 1992.
[33] Rocco CM, Miller AJ, Moreno JA, Carrasquero N. Reliability optimization of complex systems. Annual Reliability and Maintainability Symposium, Los Angeles, USA; 2000.