Global optimal reliability index of implicit composite laminate structures by evolutionary algorithms


Structural Safety 79 (2019) 54–65


Gonçalo das Neves Carneiro, Carlos Conceição António

INEGI/LAETA, Faculty of Engineering, University of Porto, Porto, Portugal

ARTICLE INFO

Keywords: Reliability assessment; Reliability index; Uncertainty; Composite structures; Evolutionary algorithms

ABSTRACT

With uncertainty, reliability assessment is fundamental in structural optimization, because optimization itself often works against safety. To avoid Monte Carlo methods, the Reliability Index Approach (RIA) approximates the structural failure probability and is formulated as a minimization problem, usually solved with fast gradient-based methods, at the expense of local convergence, or even divergence, particularly for highly dimensional problems and implicit physical models. In this paper, a new procedure for global convergence of the RIA, with practical efficiency, is presented. Two novel evolutionary operators and a mixed real-binary genotype, suitable to hybridize any Evolutionary Algorithm with an elitist strategy, are developed. As an example, a shell laminate structure is presented and the results validated, showing good convergence and efficiency. The proposed method is expected to set the basis for further developments on the design optimization of more complex structures with multiple failure criteria.

1. Introduction

Uncertainty results from the variation of the true values of variables in relation to their expected values, and may be responsible for deviations of the expected response of systems. In structures, such variables are associated with material and strength properties, geometric variables and external loads. In composite materials, the uncertainty in material and strength properties is particularly strong. Thus, deterministic design solutions may still be unsafe from a probabilistic perspective, and reliability assessment becomes fundamental.

By definition, the probability of failure is a cumulative probability, difficult to estimate analytically for most engineering problems. Numerical methods are the most common techniques. Monte Carlo Sampling (MCS) is the foundation of sampling methods for reliability assessment. It consists of evaluating a traditional binary process, for which the frequency of failure events is an unbiased estimator of the probability of failure [1,2], with any desired accuracy [3]. For most engineering problems, failure probabilities are too small and the number of required samples is huge. To cope with this, variance reduction techniques and improved surrogate models are frequently proposed. References [4–13] provide a collection of relevant, and some recent, advances in the application of MCS in structural reliability.

Random variations of structural variables are likely to happen near the mean values. Local reliability methods focus their action in that region and are usually based on the concept of the reliability index [14–16]. Hasofer and Lind [17] and Rackwitz and Fiessler [18] proposed an invariant reliability index. Most methods to estimate the reliability index are based on gradient algorithms and response surface methods (see [19–24]). Despite their efficiency, gradient methods often diverge for high-dimensional problems and/or highly concave limit-state functions [25–28].

In the design of composite structures, reliability assessment is frequently an inner cycle of design optimization. The Reliability Index Approach (RIA) and the Performance Measure Approach (PMA) are the most common philosophies to impose and evaluate reliability constraints [29]. Reviews of techniques applied in the reliability-based design optimization (RBDO) of composite structures are given in references [30,31]. Zhou et al. [32] and Dehmous et al. [33] addressed the multi-scale RBDO of composite structures. Chen et al. [34] applied RBDO to optimize the stacking sequence of composite laminates. António [35] and António et al. [36] developed a descent method to assess the reliability in RBDO of composite shells. António and Hoffbauer [37,38] performed inverse reliability analysis. The reliability index was estimated by gradient-based methods in all references. On structures with possibly discontinuous implicit response functionals, and/or their derivatives, gradient methods may also diverge.

The use of meta-heuristics to find the global reliability index in structural problems is scarce.

Corresponding author. E-mail address: [email protected] (C. Conceição António).

https://doi.org/10.1016/j.strusafe.2019.03.001
Received 4 July 2018; Received in revised form 28 February 2019; Accepted 6 March 2019
0167-4730/ © 2019 Elsevier Ltd. All rights reserved.


Cheng and Li [39] proposed an artificial neural network (ANN)-based genetic algorithm (GA). Deng et al. [40] and Shao and Murotsu [41] proposed a GA searching only a finite number of search directions. Wang and Ghosn [42] modified the previous GA to improve convergence. In the context of composite structures, António [43] and António and Hoffbauer [44] developed a hierarchical GA for a unified design optimization and global reliability index search.

In this work, a new methodology for evolutionary-based reliability assessment is developed, suitable to be applied to any evolutionary algorithm (EA) with an elitist strategy. The methodology exploits the particular characteristics of the RIA. Novelties include the decomposition of the random variables, the redefinition of the RIA as an unconstrained problem with a mixed real-binary coding of the variables, and two new evolutionary operators: one for the genetic repair of the solutions, to impose the equality constraint of the reliability assessment problem, and another for the progressive reduction and reallocation of the search domain, implicitly guiding the evolution of the population and focusing the search procedure in a region of the failure surface with higher probability content. The proposed method is expected to set the basis for further developments on the design optimization of more complex structures with multiple failure criteria.

The paper is organized as follows. Section 2 reviews deterministic strength criteria and the concepts of structural reliability analysis. In Section 3 the proposed evolutionary operators and strategies are detailed. Computational results and their discussion are presented in Section 4 and the conclusions of the work in Section 5.

2. Structural reliability

In this work, the structural linear equilibrium of laminate composite structures is solved by a displacement formulation of the Finite Element Method (FEM), in particular, using the shell finite element model developed by Ahmad [45]. A short description of the Ahmad element can be found in reference [46]. In composite structures, deterministic structural integrity is commonly evaluated by the Tsai number defined, for the plane stress state, as the solution of the following quadratic equation [47]

$$[F_{ij}\,\sigma_i\,\sigma_j]\,R^2 + [F_i\,\sigma_i]\,R = 1, \qquad i,j = 1,2,6 \tag{1}$$

where $\sigma_i$ represents the components of the stress tensor, and $F_{ij}$ and $F_i$ are the corresponding quadratic and linear strength parameters, projected onto the laminate axis system, defined as follows [47]

$$F_{XX} = \frac{1}{XX'},\quad F_{YY} = \frac{1}{YY'},\quad F_{SS} = \frac{1}{S^2},\quad F_{XY} = F_{XY}^{*}\sqrt{F_{XX}F_{YY}},\quad F_{XY}^{*} = -0.5,$$
$$F_X = \frac{1}{X} - \frac{1}{X'},\quad F_Y = \frac{1}{Y} - \frac{1}{Y'},\quad F_S = 0 \tag{2}$$

such that R < 1 means failure and R ≥ 1 means structural integrity. The geometric discretization of the physical system implies the calculation of the stress and strain fields at specific integration points. For a first-ply failure philosophy, the Tsai number, R, is also distributed along each ply, p, and must be calculated at every integration point, k, of each element, e. The first ply to fail is the one possessing the point with the lowest Tsai number, representing the point of the structure with the most aggravated stress state.

Under uncertainty, there are situations where a safe deterministic design might fail. In this work, uncertainty is of the random kind. Two sets of variables are considered: deterministic design variables, d ∈ ℝ^{N_d}, and random design variables and random parameters, z ∈ ℝ^{N_z}, with known mean values, μ_z, as their realization, and standard deviations, σ_z, as input variables. With uncertainty, R also becomes random, and so it is only possible to estimate the probability of it being in the failure domain, i.e., the probability of failure of the system. In reliability assessment theory it is useful to normalize failure criteria. It follows

$$g(\mathbf{d},\boldsymbol{\mu}_z) = \frac{\min_{p,e,k}\big(R_{p,e,k}(\mathbf{d},\boldsymbol{\mu}_z)\big)}{R^0} - 1, \quad p = 1,\ldots,N_p;\ e = 1,\ldots,N_e;\ k = 1,\ldots,N_k \tag{3}$$

also known as the limit-state function, where N_p, N_e and N_k are the number of plies, the number of elements and the number of integration points, respectively. R⁰ is the allowable Tsai number, usually equal to unity. The probability of failure is then defined as [2]

$$p_f(\mathbf{d},\boldsymbol{\mu}_z) = \Pr\big(g(\mathbf{d},\boldsymbol{\mu}_z) \leq 0\big) = \int_{g(\mathbf{d},\boldsymbol{\mu}_z)\,\leq\,0} p_z(\mathbf{z}|\mathbf{d},\boldsymbol{\mu}_z)\,d\mathbf{z} \tag{4}$$

where p_z(·|·) is the joint conditioned probability density function (PDF) of the random variables. The integral of Eq. (4) can only be solved analytically when both the PDF and the integration boundaries are known and explicitly defined. That is not the case for most structural design problems. Here, instead, the probability of failure is estimated indirectly by the RIA [17].

Let μ_g and σ_g be the first two moments of the limit-state function and assume g ~ N(μ_g, σ_g). The measure β = μ_g/σ_g represents the distance, in standard-deviation units, from μ_g to zero, and is named the reliability index [2]. Thus, the probability of failure is calculated by p_f = Φ(−β), where Φ(·) is the cumulative distribution function of the standard normal distribution. In practice, μ_g and σ_g are very difficult to calculate and the distributions of g and z may be arbitrary. Then, the limit-state function has to be considered as a multivariate function of z, and g(z|d, μ_z) = 0 represents a hypersurface in the uncertainty space. The uncertainty space is multidimensional, and so the uncertainty needs to be normalized to be comparable in all directions. Considering the set of independent standard normal random variables, y(z) ~ N(0, 1), obtained through transformation of z, the point on the failure surface whose norm ‖y‖ is minimum is known as the most probable point (MPP). This point, y_MPP, is the solution of the following minimization problem [17], known as the RIA

$$\mathbf{y}_{MPP}:\ \min_{\mathbf{y}} \|\mathbf{y}\| \quad \text{subject to } G(\mathbf{y}|\mathbf{d},\boldsymbol{\mu}_z) = 0 \tag{5}$$

where G(·) is the transformed limit-state function, in the standard normal uncertainty space. Note that the limit-state function depends on d and μ_z, because the uncertainty space is a property of each design solution.

The RIA, as stated in Eq. (5), is quite challenging to solve in structural problems and asks for a compromise between efficiency and precision. One major difficulty is related to the knowledge about the failure surface in the uncertainty space. While the quadratic failure surface is defined as an ellipsoid in the stress space (see Eq. (1)), it may have a different shape in the uncertainty space, depending on the functional relationship between the random variables and R. So, it is not possible to know beforehand where and how the failure surface is defined. This is relevant, because the RIA, as stated in Eq. (5), imposes that the MPP is found on the failure surface. When dealing with design optimization of structures, reliability assessment becomes an inner cycle to be executed and the problem becomes expensive. MCS methods are known to increase the computational time considerably. The RIA solved by gradient-based algorithms is a natural choice, because these methods are very fast. However, they require gradient information, convexity and continuity of the search space, and cannot guarantee global convergence.
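To make the relation between Eqs. (3)–(5) concrete, the following minimal sketch evaluates a hypothetical, explicit limit-state in the standard normal space. The linear G, the direction a and β_true are illustrative stand-ins, not the paper's implicit FEM-based quantities.

```python
# Minimal sketch of Eqs. (4)-(5) for a hypothetical linear limit-state
# G(y) = beta_true - a.y in the standard normal space.
import numpy as np
from math import erf, sqrt

def Phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

a = np.array([0.6, 0.8])        # unit direction, ||a|| = 1
beta_true = 3.0                 # distance from the origin to G(y) = 0

def G(y):
    """Explicit limit-state: G <= 0 defines the failure domain."""
    return beta_true - y @ a

# For this linear case the MPP is y_MPP = beta_true * a, on G(y) = 0,
# and the RIA gives p_f = Phi(-beta) exactly.
y_mpp = beta_true * a
print("G(y_MPP) =", G(y_mpp))            # ~0, on the failure surface
print("p_f (RIA) =", Phi(-beta_true))    # ~1.35e-3

# Crude MCS check of Eq. (4): frequency of failure events.
rng = np.random.default_rng(0)
y = rng.standard_normal((1_000_000, 2))
print("p_f (MCS) =", np.mean(G(y) <= 0))
```

For an implicit FEM-based g, each sample of the MCS check above would require a full structural analysis, which is exactly the cost the RIA avoids.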

3. Evolutionary strategy

Evolutionary algorithms (EA) are a class of metaheuristics applied mainly in the fields of optimization and machine learning. These algorithms aim to perform a global search of the domain to find the global optimal solution. An EA uses mechanisms inspired by the biological evolution of populations, mainly following the Darwinian "survival of the fittest" paradigm. Solutions (individuals) in a population are usually represented as binary strings (genotypes), having a corresponding decimal representation (phenotype) and an associated fitness level. The generic structure of an EA is as follows:

1. Random generation of genotypes, in the population;
2. Fitness-based selection of pairs of parents;
3. Recombination of genes between parents, originating the offspring;
4. Mutation of genes;
5. Selection of the survivors, for the next generation;
6. Repeat steps 2–6, until convergence is achieved.

One can find innumerable proposals of EA's, because these algorithms are somewhat problem-dependent and need to be developed to suit each problem. Initially, EA's use only zero-order information. However, heuristics of higher-order information can be introduced. Another common strategy is the concept of elitism (a set of the highest-fitted solutions within the population survives until, i.e. is transferred to, the population of the next generation), at the cost of a larger exploitation and a smaller exploration of the search space.

Here, the proposed evolutionary methodology exploits the particular characteristics of the RIA. It may be interpreted as a quasi-local problem with a directional search scheme, since it seeks the closest point on a surface in relation to a fixed point. Novelties include the decomposition of the random variables, the redefinition of the RIA as an unconstrained problem with a mixed real-binary coding of the variables, and two new evolutionary operators: one for the genetic repair of the solutions, to impose the equality constraint of the reliability assessment problem, and another for the progressive reduction and reallocation of the search domain, implicitly guiding the evolution of the population and focusing the search procedure in a region of the failure surface with higher probability content, where the MPP is expected to be. The application of these operators is more efficient if some knowledge of the problem is available. For that purpose, an initial estimate of the reliability index is calculated. The next sections detail these procedures.

3.1. Decomposition of variables and redefinition of the RIA

To improve the efficiency of the search process, the search variables are decomposed into their magnitude (norm) and direction (direction cosines) components, in the uncertainty space. Any coordinate in the standard normal space is defined by

$$\mathbf{y} = \beta\,\mathbf{a}, \qquad a_i \in [-1,1], \qquad \sum_{i=1}^{N_y} (a_i)^2 = 1 \tag{6}$$

where a is the vector of direction cosines and N_y = N_z is the total number of random variables. Assuming the transformation T(z) = y and its inverse are known, and since g(z(y)|d, μ_z) = G(y|d, μ_z), the constrained minimization problem of Eq. (5) (the primal problem) is converted into an equivalent unconstrained maximization problem of the augmented functional f̄ (the dual problem), suitable to be solved by EA's, as follows

$$\max_{\beta,\,\mathbf{a}}\ \bar{f} = C - \beta - \delta\,\Phi\big(g(\mathbf{z}(\mathbf{y})|\mathbf{d},\boldsymbol{\mu}_z)\big), \qquad \mathbf{y} = \beta\,\mathbf{a} \tag{7}$$

In Eq. (7), f̄ is a positive measure of fitness, C is a high-valued constant and δ is a scaling factor. One of the characteristics of the proposed dual problem is the independence between β and a as search variables, which allows the genotypes to be easily manipulated, improving the quality of the solutions and accelerating the convergence. This independence is fundamental to the application of the two new evolutionary operators (see Sections 3.2 and 3.3). In Eq. (7), notice that g is defined in the non-standard uncertainty space, while the reliability index, β, and the direction cosines, a, are defined in the standard uncertainty space. This is only possible if the transformation between both spaces is known, and it saves computational time by avoiding the transformation of the implicit limit-state function. Φ(·) is a penalty function for the solutions, according to their degree of violation of the equality constraint g(z(y)|d, μ_z) = 0, and is defined by [43,48]

$$\Phi(g) = \begin{cases} 0, & \text{if } g \in [-\varepsilon,\,\varepsilon] \\ K|g|^q, & \text{if } g \notin [-\varepsilon,\,\varepsilon] \end{cases} \tag{8}$$

where ε is a very small valued non-negative real, representing the allowable constraint violation. The exponential form of the penalty function defines a gradual penalization of the solutions, organizing them according to their level of violation in a continuous manner. The constants K and q are defined considering two degrees of violation and penalty: a weak penalty, p₁, corresponding to a tolerable violation of the constraint, g₁, and a strong penalty, p₂, corresponding to a significant violation, g₂. Thus, introducing the two degrees of violation in Eq. (8) and solving for K and q, one obtains [43,48]

$$q = \frac{\ln(p_1/p_2)}{\ln(g_1/g_2)}, \qquad K = \frac{p_1}{|g_1|^q} \tag{9}$$
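A short sketch of how Eqs. (7)–(9) can be evaluated in practice follows. The numerical values of C, δ, ε and of the two violation/penalty pairs are illustrative assumptions, not the paper's settings.

```python
# Sketch of the augmented fitness (Eq. 7) with the penalty of Eqs. (8)-(9).
import math

eps = 1e-3                  # allowable violation of g(z(y)|d, mu_z) = 0
p1, g1 = 1.0, 1e-2          # weak penalty p1 for a tolerable violation g1
p2, g2 = 1e3, 1.0           # strong penalty p2 for a significant violation g2

q = math.log(p1 / p2) / math.log(g1 / g2)   # Eq. (9): q = 1.5 here
K = p1 / abs(g1) ** q                       # Eq. (9): K = 1e3 here

def penalty(g):
    """Eq. (8): zero inside the tolerance band, graded outside it."""
    return 0.0 if -eps <= g <= eps else K * abs(g) ** q

def fitness(beta, g_value, C=100.0, delta=1.0):
    """Eq. (7): positive fitness to maximize; smaller beta is better."""
    return C - beta - delta * penalty(g_value)

print(fitness(3.2, 0.0))    # feasible solution on the failure surface: 96.8
print(fitness(3.2, 0.05))   # violated constraint: penalized (~85.6)
```

By construction, penalty(g₁) = p₁ and penalty(g₂) = p₂, so the two chosen calibration points fix the whole penalization curve.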

One of the known difficulties of the RIA is the need to verify an equality constraint. With EA the feasibility of constraints is not guaranteed, because a one-to-one relation between the set of possible offspring genotypes and the set of feasible phenotypes is not always available. It means that from a set of feasible phenotypes, the genetic operators can produce a new one that is infeasible. This is relevant, because equality constraints define a hyper-surface, in the search space, and so the feasible domain is greatly reduced, particularly in high dimensional spaces. To cope with the implicit equality constraint, an individual genetic repair operator is proposed. To repair a genotype with an infeasible phenotype means transforming that same genotype into another with a feasible phenotype. The action of the individual genetic repair operator, upon each solution, consists on the modification (manipulation) of the genetic information relative to , while the genes relative to vector a are kept intact. The goal is to find the value of , along the search direction of each solution, for which the failure surface is defined, i.e. . is defined as a binary-coded If the genetic information about variable, then the resolution of the repairing operation is limited by both the number of genes (bits) and the size-constraints, eventually making the process unsuccessful. Therefore, it is convenient to opt for a mixed real-binary genetic code. Let N be the set of binary strings with N is the binary representaa fixed length of N bits, such that s (a) N tion of unit vectors a , N = i =y1 ni , where ni is the number of bits allocated to each component ai of vector a , and s ( ) is the real-binary transformation operator. Then, let N + 1 be the set of mixed strings of one real-valued data element and N binary bits. Thus, an array N + 1 is called a mixed genotype, representing a possible { , s (a)} solution in the uncertainty space. A schematic genetic string is shown in Fig. 1.

To improve the efficiency of the search process, the search variables are decomposed into their magnitude (norm) and direction (direction cosines) components, in the uncertainty space. Any coordinate in the standard normal space is defined by

a:

, ] , ]

3.2. Mixed coding and individual genetic repair

3.1. Decomposition of variables and redefinition of the RIA

a,

[ [

where is a very small valued non-negative real, representing the allowable constraint violation. The exponential form of the penalty function allows to define a gradual penalization for the solutions, organizing them according to their level of violation in a continuous manner. The constants K and q are defined considering two degrees of violation and penalty. Weak penalty, p1, corresponding to a tolerable violation of the constraint, g1, and strong penalty, p2 , corresponding to an significant violation, g2 . Thus, introducing the two degrees of violation, in Eq. (8), and equating to K and q, one obtains [43,48]

One can find innumerous proposals of EA’s, because these algorithms are somehow problem-dependent and need to be developed to suite each problem. Initially, EA’s use only zero-order information. However, heuristics of higher-order information can be introduced. Another common strategy applied is the concept of elitism (a set of the highest fitted solutions within the population, will survive until (i.e. is transferred to) the population of the next generation), at the cost of a larger exploitation and a smaller exploration of the search space. Here, the proposed evolutionary methodology exploits the particular characteristics of the RIA. It may be interpreted as a quasi-local problem with a directional search scheme, since it seeks the closest point on a surface in relation to a fixed point. Novelties include the decomposition of random variables, the redefinition of the RIA as an unconstrained problem with a mixed real-binary coding of the variables and two new evolutionary operators: one for the genetic repair of the solutions, to impose the equality constraint of the reliability assessment problem, and another for the progressive reduction and reallocation of the search domain, implicitly guiding the evolution of the population and focusing the search procedure in a region of the failure surface with higher probability content, where the MPP is expected to be. The application of these operators is more efficient if also some knowledge of the problem is known. For that purpose, an initial estimate of the reliability index is calculated. The next sections will detail these procedures.

y=

0 , if g K |g|q , if g

(g ) =

(7)

In Eq. (7), f¯ is a positive measure of fitness, such that C is a high valued constant and is a scaling factor. One of the characteristics of the proposed dual problem is the independence between and a, as search variables, allowing to easily manipulate the genotypes, to improve the quality of the solutions and to accelerate the convergence. This independence is fundamental to the application of the two new evolutionary operators (see Sections 3.2 and 3.3). On Eq. (7), notice

Fig. 1. Schematic representation of the real-binary genotype. 56
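The sketch below decodes such a mixed genotype. The binary-to-real mapping and the renormalization of the direction cosines are illustrative assumptions about s(·), since the paper does not fix those details here.

```python
# Sketch of decoding the mixed real-binary genotype {beta, s(a)} of Fig. 1.
import numpy as np

def decode(beta, bits, n_bits_per_var):
    """Return (beta, a): beta is the real gene, a the decoded unit vector."""
    a, pos = [], 0
    for n in n_bits_per_var:
        word = bits[pos:pos + n]
        pos += n
        x = int("".join(map(str, word)), 2) / (2**n - 1)  # in [0, 1]
        a.append(2.0 * x - 1.0)                           # map to [-1, 1]
    a = np.asarray(a)
    return beta, a / np.linalg.norm(a)   # enforce sum a_i^2 = 1, Eq. (6)

# two variables with 3 bits each (the resolution later used in Table 2)
beta, a = decode(3.2, [1, 0, 1, 0, 1, 1], [3, 3])
print(beta, a, (a**2).sum())             # y = beta * a, with ||a|| = 1
```

Keeping β real-coded is what allows the repair operator of the next paragraphs to move a solution onto the failure surface with arbitrary resolution, independently of the bit budget of a.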

Structural Safety 79 (2019) 54–65

G. das Neves Carneiro and C. Conceição António

Fig. 2. Flowchart of the individual genetic repair operator.

For a deterministically feasible design solution, the individual genetic repair operator consists of a conditioned one-dimensional inverse minimization problem of the limit-state function, i.e. min g(β|a) → 0. Inverse, because the sought optimal value (g = 0) is already known and one is only interested in determining the corresponding value of β. Conditioned, because the search direction, a, of each solution is kept constant. The individual genetic repair is performed as an iterative procedure, represented by the following asymptotic sequence

$$g(\beta_0|\mathbf{a}),\ g(\beta_1|\mathbf{a}),\ \ldots,\ g(\beta_k|\mathbf{a}),\ g(\beta_{k+1}|\mathbf{a}),\ \ldots\ \rightarrow\ 0 \tag{10}$$

with

$$\beta_k = \beta_{k-1} + \Delta\beta_{k-1}, \qquad k = 1,\ldots,k_{max} \tag{11}$$

where k is the number of the current iteration, k = 0 is the initial state of the solution, for the initial value β₀, before repairing, and k_max is a maximum number of iterations, allowing for convergence. For a bijective function Δβ_k: [0, g₀] → [0, Δβ_max], the increment at each iteration follows the proposed exponential law

$$\Delta\beta_k = \left(2\,e^{\,\frac{(|g(\beta_k|\mathbf{a})| - g_0)\,\ln(2)}{g_0}} - 1\right)\Delta\beta_{max} \tag{12}$$

with g₀ = g(d, μ_z), as defined in Eq. (3), and Δβ_max the maximum increment value. From the definitions in Eqs. (10)-(12), some considerations should be made about the iterative process: (1) it is assumed that the origin of the uncertainty space is contained in the safety domain; (2) ideally, it is assumed that g is a locally decreasing function on [0, g₀], meaning Eq. (10) represents a locally decreasing sequence, since g(β_{k+1}|a) ≤ g(β_k|a) for all k; then lim_{k→∞} g(β_k|a) = 0 and the sequence is said to be convergent; (3) the iterative process depends on the choice of Δβ_max.

About the previous considerations, it is important to mention that:

(1) reliability assessment is not performed if a design solution is deterministically infeasible;
(2) it is not possible to assert whether g is actually a locally decreasing function on [0, g₀], but this should be an acceptable hypothesis, given the measurable proximity of the origin of the uncertainty space to the failure surface, particularly for optimized structures. Even if that is not the case, the process is still able to converge, if lim_{k→∞} g(β_k|a) = 0 holds. Thus, once g_k is close enough to zero, the assumption of a locally decreasing function becomes valid again. If lim_{k→∞} g(β_k|a) ≠ 0, then the procedure diverges. Such a situation is discussed later;
(3) the value of Δβ_max is important, because it influences the efficiency of the iterative process. If too small, the process becomes cumbersome. If too large, then g may become negative; when that happens, the process runs backwards, with negative increments, until convergence.

Overall, the exponential law in Eq. (12) allows the iterative process to run more efficiently towards convergence, varying from larger to more refined increments as the limit-state function approaches zero. In practice, convergence is achieved, in a finite number of iterations, after the following inequality is verified

$$|g(\beta_k|\mathbf{a}) - 0| \leq \varepsilon, \qquad k = 1,\ldots,k_{max} \tag{13}$$

where ε is a very small valued non-negative real, as defined in Eq. (8). Regarding the previous point (3), it is important to mention that the value of Δβ_max is also important for the stability of the iterative process. While Eqs. (10)–(12) represent the ideal case, for which g → 0 always from the positive side, if the increment is too large either (a) convergence is achieved from the negative side, with negative increments, or (b) an oscillatory effect happens, with g alternating between positive and negative values. In both cases, convergence is achieved if, and only if, lim_{k→∞} |g(β_k|a)| = 0 exists. In case (b), if such a limit doesn't exist, it means that |g(β_{k+1}|a)| > |g(β_k|a)| and the iterative process will diverge, even though it oscillates around g = 0. Thus, the individual genetic repair operator is only conditionally stable, for Δβ_max ≤ Δβ_cr, where Δβ_cr is the critical value of the maximum increment. The stability of the operator is kept by controlling Δβ_max. During the evolutionary process of the RIA, the individual genetic repair is aborted if Eq. (13) is not verified in a finite number of iterations. This may happen either because a solution is not pointing towards the failure surface (if it isn't a closed surface) or because the failure surface is very far. In both cases, the solutions are penalized as defined in Eq. (8). Fig. 2 represents the flowchart of the proposed operator.
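The following sketch implements the iterative repair of Eqs. (10)–(13) for a hypothetical one-dimensional g(β|a); the toy limit-state and the parameter values are assumptions for illustration only.

```python
# Sketch of the individual genetic repair operator, Eqs. (10)-(13).
import math

def repair(g_fun, beta0, g0, dbeta_max, eps=1e-6, k_max=50):
    """1-D search along a fixed direction a: drive g(beta|a) to zero."""
    beta = beta0
    for _ in range(k_max):
        g = g_fun(beta)
        if abs(g) <= eps:                     # convergence test, Eq. (13)
            return beta, True
        # Eq. (12): increment shrinks from dbeta_max towards 0 as |g| -> 0
        dbeta = (2.0 * math.exp((abs(g) - g0) * math.log(2.0) / g0) - 1.0) * dbeta_max
        beta += math.copysign(dbeta, g)       # runs backwards if g < 0
    return beta, False                        # aborted: penalize, Eq. (8)

# toy g, locally decreasing along the ray, with g = 0 at beta = 4
g_fun = lambda b: (4.0 - b) / 4.0
beta, ok = repair(g_fun, beta0=0.0, g0=g_fun(0.0), dbeta_max=4.5)
print(beta, ok)                               # ~4.0, True
```

With the deliberately large Δβ_max = 4.5 the first step overshoots, g becomes negative, and the process converges from the negative side with negative increments, exactly as case (a) above describes.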

3.3. Progressive reduction and reallocation of the search domain

The individual genetic repair operator acts explicitly on the genotypes, by correcting the real-coded gene related to the β of the solutions, while keeping the other binary-coded genes, of the direction cosines, fixed. This improves the solutions, translating them to the failure surface and increasing their potential within the population. As mentioned, there are solutions which do not verify the equality constraint after being repaired, and these are penalized. Given the size of the search space, it is not of practical interest to keep searching along directions not pointing towards the MPP. A second evolutionary operator is now introduced, acting implicitly on the binary-coded genes related to the direction cosines of the solutions, by conditioning the course of the evolution and giving it both the ability to increase the resolution of the search process and to focus it on a preferential region of the uncertainty space. From an evolutionary point of view, it means certain phenotypical characteristics will be preferred. In the scope of the current problem, the target region in the uncertainty space, here named the Interest Region, Z̄, is defined as the set of solutions of the neighborhood containing the MPP, lying on the failure surface, as follows

$$\bar{Z} = \{\mathbf{y} \in \mathbb{R}^{N_y}:\ g(\mathbf{z}(\mathbf{y})|\mathbf{d},\boldsymbol{\mu}_z) = 0,\ \mathbf{y} \in B(\mathbf{y}_{MPP};\,\delta)\} \tag{14}$$

where δ is a positive real and B(y_MPP; δ) is an open N_y-ball with center on y_MPP and radius δ, defining a neighborhood around the MPP. In practice, the limit-state function is implicit and the actual location of the Interest Region is unknown. To avoid searching through all the uncertainty space, an initial bounded search region, Z⁰, is defined as follows

$$Z^0 = \{\mathbf{y} \in \mathbb{R}^{N_y}:\ \beta \in [\beta_{min},\,\beta_{max}],\ a_i \in [-1,\,1]\} \tag{15}$$

where β_min and β_max are two Euclidean distances measured from the origin of the uncertainty space. Ideally, these are such that β_MPP ∈ [β_min, β_max] (see next section) and Z̄ ⊂ Z⁰.

Overall, the search procedure is divided into three stages. The first acts on the search region Z⁰, seeking solutions that are potentially in Z̄. From the initial stage results a search region Z^t, which aims to be a good approximation of Z̄, but still cannot be designated as such. This region is defined after a consistent set of solutions verifying the equality constraint is established. Then, it is possible to define an improved and reduced search region, at any posterior generation t, by defining the new size-constraints of the direction cosines as the extreme values of each variable existing in the set. For that purpose, let P^t ⊂ 𝕄^{N+1} be the set of genotypes, of k_POP elements, defining the population of solutions at generation t. Then let Ω^t ⊂ P^t be the subset containing the k_top solutions with the highest fitness, i.e., the elite group. Also, let Ω̄^t ⊆ Ω^t be the set containing the solutions of the elite group for which the equality constraint of the RIA is verified. Appropriately choosing the required set of solutions to be Ω̄^t, and once the equality Ω̄^t = Ω^t is verified, the new search region is defined by

$$Z^t = \{\mathbf{y} \in \mathbb{R}^{N_y}:\ \beta \in [\beta_{min},\,\beta_{max}],\ a_i \in [a_i^{min},\,a_i^{max}]\} \tag{16}$$

where a_i^{min} = min(a_i^j, j = 1,…,k_top) and a_i^{max} = max(a_i^j, j = 1,…,k_top), for i = 1,…,N_y. The second stage then begins and, for the new search region, the genetic code of the elite members is recoded and the remaining population is randomly generated. The advantage of restarting the procedure with an already defined elite group is the possibility of conditioning the future evolution of the population. In practice, it makes the algorithm choose the most promising evolutionary path available and focus the search in a more promising area of the reduced search region. The search for the MPP is accelerated. After the reduction of the search domain, the population evolves closer to the feasible domain, requiring only small corrections of β. Because solutions are no longer penalized, the single differentiating fitness factor is their β-value.

The reduction of the search domain around the elite group, at generation t, poses two issues:

(1) the fact that the search process restarts with an elite group may cause the evolution of the population to nest quickly and to favor certain genetic characteristics that could otherwise be improved;
(2) even though, ideally, Z̄ ⊂ Z⁰ and Z^t aims to be a good approximation of Z̄, it is not possible to guarantee that y_MPP ∈ Z^t, because the MPP may be outside of the reduced size-constraints.

Issue (1) is a consequence of the loss of diversity in the population, resulting from the forced manipulation of the genetic characteristics of the population. Issue (2) results from the implicit definition of the limit-state function and the fact that the location of the Interest Region, Z̄, and of the MPP itself is unknown. To cope with these issues: (1) the elite's global diversity is measured at every generation. It is defined at the phenotypic level as the sample variance of β inside the elite, Var(β). This choice is justified because β is both search variable and objective. So, Var(β) is a measure of genetic diversity with the ability to evaluate the potential of the elite to improve the evolution, at each generation, since the similarity between direction cosines of different solutions is implicit, after the reduction of the search space. Also, it is independent of the diameter of the search space, so it is universal, in the sense that it keeps its meaning throughout the whole evolutionary process. (2) The search region, Z^t, is upgraded sequentially, at every pre-determined number of generations, t_Z, for an optimal maturation of the elite. The upgrade process is repeated while a minimum level of diversity, σ_a², is preserved, after which the third stage happens and the population evolves in the same domain, until final convergence. To prevent the hypothesis of dimensions being reduced to a single point or being too small, a minimum diameter δ_a is imposed on each size-constraint, such that |a_i^{max} − a_i^{min}| ≥ δ_a, for i = 1,…,N_y.

Thus, the search domain is progressively reduced and reallocated. The upgrade operation increases the resolution of the search, making it possible to reach a precision similar to that of gradient-based methods with only a few bits per variable. The maximum possible quality of the resolution is defined by the diameter δ_a and the number of bits of each variable. There are two critical steps to be taken before final convergence. The first is to define the first reduced search region, Z^{t₁}, at generation t₁. The second is the loss of diversity in the elite group. A lower level of diversity means the potential to find better solutions is smaller, and it may be a sign that the population is getting closer to the MPP or converging to local minima. Then, the loss of diversity should be progressive, allowing the definition of search regions with a higher likelihood of containing the MPP.
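A sketch of the bound update behind Eq. (16), including the minimum-diameter safeguard, is given below. The array layout of the elite and the clipping to [−1, 1] are illustrative assumptions.

```python
# Sketch of the progressive reduction/reallocation of the search domain.
import numpy as np

def reduce_domain(elite_a, delta_a=0.15):
    """New size-constraints [a_i_min, a_i_max] from the elite, Eq. (16)."""
    lo = elite_a.min(axis=0)                 # a_i^min over the k_top elite
    hi = elite_a.max(axis=0)                 # a_i^max over the k_top elite
    narrow = (hi - lo) < delta_a             # enforce |a_max - a_min| >= delta_a
    mid = 0.5 * (lo + hi)
    lo[narrow] = np.clip(mid[narrow] - 0.5 * delta_a, -1.0, 1.0)
    hi[narrow] = np.clip(mid[narrow] + 0.5 * delta_a, -1.0, 1.0)
    return lo, hi

elite_a = np.array([[0.58, 0.80], [0.60, 0.79], [0.61, 0.81]])
print(reduce_domain(elite_a))                # tight bounds, padded to 0.15
```

Because each upgrade rescales the same number of bits over a smaller interval, the effective resolution per variable grows geometrically with the number of upgrades.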

Structural Safety 79 (2019) 54–65

G. das Neves Carneiro and C. Conceição António

For that, a good combination of σ_a², t_Z and δ_a is vital. The goal is to find a finite sequence {Z⁰, Z^{t₁}, Z^{t₁+t_Z}, Z^{t₁+2t_Z}, …, Z^{t₁+n t_Z}}, whose final term, Z^{t₁+n t_Z}, is a good approximation of Z̄ and for which y_MPP ∈ Z^{t₁+n t_Z}. Global convergence of the method is considered to happen only in the third stage of evolution. To be achieved, Z^{t₁+n t_Z} must be defined and a minimum number of generations has to be completed, in the first place. Then, a measure of similarity between elite groups, or members of the elite, at consecutive generations, must be satisfied. The global convergence criterion is written as follows

$$\Omega^{t_1+\tau} \sim \Omega^{t_1+\tau+1}, \qquad \text{if } \mathrm{Var}(\beta) < \sigma_a^2 \tag{17}$$

where τ is an imposed generational gap to happen after generation t₁ and ~ is a symbol representing similarity between two sets of solutions. If Var(β) ≥ σ_a² at generation t₁ + τ, then τ ← τ + Δτ, where Δτ is a pre-defined increment of generations to be run.
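A minimal sketch of the diversity test that drives Eq. (17) follows; the elite is represented simply as an array of β-values, and the σ_a² = 0.01 threshold anticipates the value listed later in Table 2.

```python
# Sketch of the elite-diversity test controlling the upgrade cycle, Eq. (17).
import numpy as np

def third_stage_reached(elite_betas, var_min=0.01):
    """True once Var(beta) inside the elite drops below sigma_a^2."""
    return float(np.var(elite_betas)) < var_min

print(third_stage_reached([3.2, 3.9, 4.6]))    # False: keep upgrading Z^t
print(third_stage_reached([3.41, 3.42, 3.40])) # True: evolve in a fixed domain
```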

3.4. Knowledge of the problem

The proposed evolutionary operators, as presented in the previous sections, were developed to help the EA explore the vast search domain in a more efficient manner, avoiding uninteresting regions. Yet, the heuristics supporting the operators require some parameters to be correctly pre-determined for each design solution, to make their application more efficient, namely Δβ_max, β_min and β_max. Being related to the value of the reliability index, the definition of these parameters benefits from an estimate of the true value of β_MPP. Assuming g is differentiable at the origin of the uncertainty space, the estimate of the actual reliability index is obtained from a first-order Taylor expansion of the limit-state function, equated to zero (as in Eq. (5)) and considering that at the MPP z_i = μ_{z_i} + β a_i σ_{z_i}. Solving with respect to β, one obtains the following approximation [35,36]

$$\tilde{\beta} = \frac{g(\mathbf{d},\boldsymbol{\mu}_z)}{\sqrt{\sum_{i=1}^{N_z}\left(\frac{\partial g}{\partial z_i}\,\sigma_{z_i}\right)^2}} \tag{18}$$

In the previous equation, it is important to notice that β̃ differs from the β₀ found in Eq. (11), where the latter is a reference to the β-value of a solution before being repaired. Both parameters, however, can match. This estimate is then used as "knowledge of the problem" to define the three necessary parameters. For the current problem, it suffices that

$$\beta_{min} = \beta_{ref} - 1; \qquad \beta_{max} = \beta_{ref} + 1; \qquad \Delta\beta_{max} = \beta_{max} \tag{19}$$

with

$$\beta_{ref} = \begin{cases} \mathrm{Int}(\tilde{\beta}), & \text{if } \tilde{\beta} - \mathrm{Int}(\tilde{\beta}) < 0.5 \\ \mathrm{Int}(\tilde{\beta}) + 0.5, & \text{if } \tilde{\beta} - \mathrm{Int}(\tilde{\beta}) \geq 0.5 \end{cases} \tag{20}$$

where Int(·) represents the function returning the integer part of a real number. Eq. (19) implies that the initial search space, Z⁰, is defined as the hypervolume between the two hyperspheres of radius β_min and β_max, where the true Interest Region, Z̄, is expected to lie. Also, Δβ_max = β_max is defined as an initial guess of the maximum increment in Eq. (12), which may need to be corrected to avoid divergence. The gradient of the limit-state function is only calculated once, at the mean values of the variables, before the evolutionary process starts. Yet, to avoid excessive model evaluations, the gradient ∇g, with respect to the random variables z, is calculated analytically by the Adjoint Variable Method [35,49].
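The sketch below reproduces the parameter bootstrap of Eqs. (18)–(20). The gradient and standard deviations would come from the adjoint FEM sensitivities, so the numbers used here are purely illustrative.

```python
# Sketch of the "knowledge of the problem" estimate, Eqs. (18)-(20).
import math

def beta_estimate(g0, grad_g, sigma_z):
    """Eq. (18): first-order estimate of the reliability index."""
    return g0 / math.sqrt(sum((gi * si)**2 for gi, si in zip(grad_g, sigma_z)))

def beta_ref(bt):
    """Eq. (20): round beta_tilde down to the integer or half-integer."""
    frac = bt - int(bt)
    return float(int(bt)) if frac < 0.5 else int(bt) + 0.5

bt = beta_estimate(g0=0.85, grad_g=[0.02, -0.04], sigma_z=[5.0, 3.0])
b_ref = beta_ref(bt)
beta_min, beta_max = b_ref - 1.0, b_ref + 1.0   # Eq. (19)
dbeta_max = beta_max                            # initial guess, Eq. (19)
print(bt, b_ref, (beta_min, beta_max))
```

A single adjoint gradient evaluation thus prices the whole annular search region Z⁰, which is far cheaper than probing the implicit model repeatedly.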

4. Application to composite shell structures

To study the ability of the proposed method, a set of Pareto-optimal solutions, found in a previous work on the multi-objective Reliability-based Robust Design Optimization (RBRDO) of shell composite laminate structures, where reliability assessment was executed by the PMA [49], is evaluated. Such design solutions implicitly verify the constraint β ≥ 3 but, because of the nature of the PMA, the actual value of their reliability index is unknown. Now, the goal is to determine the reliability index, and the probability of failure, of each optimal design solution by the proposed evolutionary approach, and to validate the results with two other methods: a gradient-based method [35,36] and the crude MCS method. In the next subsections, the EA that is hybridized with the new evolutionary operators is succinctly described and the flowchart of the hybridized algorithm is presented, the numerical example and its properties are defined, and the results are discussed.

4.1. Evolutionary algorithm

The proposed algorithm to solve the RIA problem is a single-objective GA based on the works of António [48] and António et al. [50]. Here, it is referred to as the micro-genetic algorithm (mGA), since it is developed with a small population and a small number of genes per variable. It is applied to each design solution with the purpose of evaluating the corresponding reliability level, measured in the uncertainty space. The operators of the mGA are briefly explained as follows:

I. Initialization: the initial population set, P¹, is randomly generated with s(a)ᵢ ~ U(0, 1) and βᵢ⁰ = β_min, for i = 1,…,k_POP, where k_POP is the number of solutions in the population.
II. Selection: individuals are ranked according to their individual fitness. An elite group is formed, at every generation t, by the best k_top individuals. The selection of the parents is fitness-based, with one from the elite group, Ω^t, and another from the remaining population.
III. Crossover: for each selected couple of parents, a new solution is generated and included in the offspring set B. This procedure is repeated until B is defined. The offspring genetic material is obtained using an improved parameterized uniform crossover [50]. At an intermediate stage the entire population is defined by the set P^t ∪ B.
IV. Similarity control: the population P^t ∪ B is ranked by fitness value and the similarity is evaluated variable-by-variable, between each possible pair of solutions. The comparison of each variable is made gene-by-gene, such that two solutions are said to have an equal value on a variable if, and only if, s(aᵢ)^{j₁} = s(aᵢ)^{j₂} for j₁, j₂ = 1,…,k_POP, with j₁ ≠ j₂, and i = 1,…,N_y. Then, two solutions are said to be similar if the number of equal variables is equal to or greater than an imposed prescribed number. Solutions with similar genetic properties are eliminated and replaced by new randomly generated solutions. The new population is again ranked by fitness and the solutions with lower fitness are excluded. Now, the dimension of the population is smaller than the original one. The size of the population is recovered by the mutation operator. During the search in the initial region Z⁰, solutions verifying the equality constraint are carefully allowed to have a higher degree of similarity between each other.
V. Implicit mutation: an operator of Implicit Mutation [48] is applied, recovering the initial size of the population.

The mGA combined with the new evolutionary operators proposed in this paper results in a hybridized GA, here named HmGA. The flowchart of the HmGA is shown in Fig. 3.


Fig. 3. Flowchart of the HmGA.
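To close the description of the algorithm, the following self-contained toy run combines the pieces sketched so far (fitness of Eq. (7), repair of Eqs. (10)–(13), elitism) on the explicit 2-D limit-state used earlier. The real-coded perturbation replacing the binary crossover/mutation and similarity control, and all parameter values, are simplifications, so this illustrates the search logic, not the HmGA itself.

```python
# Toy evolutionary RIA search on an explicit 2-D limit-state (not the HmGA).
import numpy as np

rng = np.random.default_rng(1)
a_true = np.array([0.6, 0.8])
BETA = 3.0                                   # exact reliability index
g = lambda beta, a: (BETA - beta * (a @ a_true)) / BETA

def repair(a, g0=1.0, dmax=4.0, eps=1e-6, k_max=50):
    """Individual genetic repair (Eqs. 10-13) along the fixed direction a."""
    beta = 0.0
    for _ in range(k_max):
        gv = g(beta, a)
        if abs(gv) <= eps:
            return beta, 0.0                 # landed on the failure surface
        if abs(gv) > 10.0 * g0:
            break                            # not pointing at the surface
        step = (2.0 * np.exp((abs(gv) - g0) * np.log(2.0) / g0) - 1.0) * dmax
        beta += np.sign(gv) * step           # Eq. (12) increment law
    return beta, abs(g(beta, a))             # unrepaired: will be penalized

pop = [rng.uniform(-1.0, 1.0, 2) for _ in range(15)]   # k_POP = 15 (Table 2)
best_beta = np.inf
for gen in range(30):
    scored = []
    for a in pop:
        a = a / np.linalg.norm(a)            # direction cosines, Eq. (6)
        beta, viol = repair(a)
        fit = 100.0 - beta - 1e3 * viol      # augmented fitness, Eq. (7)
        scored.append((fit, beta, a))
    scored.sort(key=lambda s: -s[0])
    if scored[0][0] > 0.0:                   # only repaired solutions count
        best_beta = min(best_beta, scored[0][1])
    elite = [s[2] for s in scored[:5]]       # elitist strategy
    pop = elite + [e + 0.1 * rng.standard_normal(2) for e in elite for _ in range(2)]
print("beta ->", round(best_beta, 3))        # converges towards 3.0
```

Directions not pointing at the failure surface are aborted by the repair operator and penalized out of the elite, so the population drifts towards the MPP direction a_true, mirroring the behavior described for the full method.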

4.2. Numerical example

A clamped cylindrical shell laminated structure is considered, as shown in Fig. 4. Nine vertical loads of mean value P_k = 11.5 kN are applied along the free linear side (AB) of the structure. This side is constrained in the y-axis direction. The structure is divided into four macro-elements, grouping all elements, and there is one laminate for each macro-element. The laminate distribution is also shown in Fig. 4. Balanced angle-ply laminates with five layers and the stacking sequence [+α/−α/0/−α/+α] are considered in a symmetric construction. The ply angle α is referenced to the x-axis of the reference system. The variables hᵢ, i = 1,…,4, denote the laminate thicknesses. A smoothing procedure is followed at the boundaries of the laminates to guarantee the continuity of the structure.

Fig. 4. Geometric definition of the cylindrical shell structure and composite laminates distribution.

A composite material built with the carbon/epoxy system denoted T300/N5208 [51] is used in the presented analysis. This is a unidirectional composite of long carbon fibers aggregated in an epoxy matrix. The macro-mechanical mean values of the elastic and strength properties of the ply material used in the laminates of the structure are presented in Table 1. The elastic constants of the orthotropic ply are the longitudinal elastic modulus E₁, the transversal elastic modulus E₂, the in-plane shear modulus G₁₂ and the in-plane Poisson's ratio ν₁₂. The ply strength properties are the longitudinal tensile strength X, the longitudinal compression strength X', the transversal tensile strength Y, the transversal compression strength Y' and the shear strength S.

Table 1. Mean values of the mechanical properties of composite layers [51].

E₁ (GPa) | E₂ (GPa) | G₁₂ (GPa) | ν₁₂ | ρ (kg/m³)
181.00 | 10.30 | 7.17 | 0.28 | 1600

X; X' (MPa) | Y; Y' (MPa) | S (MPa)
1500; 1500 | 40; 246 | 68

In the current example, to perform reliability assessment it is necessary to separate the random variables, z, into two groups: random parameters, z₁ ⊂ z, and random design variables, z₂ ⊂ z. The variables and parameters for this numerical example are grouped as follows:

- random mechanical properties {E₁ᵢ, E₂ᵢ, Yᵢ, Sᵢ} ⊂ z₁, for i = 1,…,4;
- random ply angles {αᵢ} ⊂ z₂, for i = 1,…,4, equal for all plies and laminates;
- random laminate thicknesses {h₁, h₂, h₃, h₄} ⊂ z₂.

This separation is needed because the randomness of the ply angles and of the laminate thicknesses is only considered in the cycle of design optimization, as in Eq. (22), and they are kept constant, at their realizations μ_z2, in the reliability assessment problem. So, there is a total of sixteen random parameters, z₁, defining the uncertainty space. The mean values of the random parameters, μ_z1, are kept constant for each design solution. For the random mechanical properties, the standard deviations, σ_z1 = σ_z1(μ_z1), are kept constant and are necessary to transform the random parameters to standard normal ones. Assuming z ~ N(μ_z, σ_z), the following transformation is applied

$$z = \mu_z + y_z\,\sigma_z \tag{21}$$

where y_z = y(z), such that y_z ~ N(0, 1), and σ_{z_i} = 0.06 μ_{z_i}, for i = 1,…,N_z. The parameters of the HmGA are presented in Table 2.

Table 2. HmGA parameters.

Population size, k_POP: 15
Elite group size, k_top: 33.33% of total pop.
Mutation group size, k_mut: 33.33% of total pop.
Genes per binary variable: 3
β_min, β_max: see Eq. (19)
Δβ_max: β_max
k_max: 50
δ_a: 0.15
t_Z: 4 gen.
σ_a²: 0.01
Similarity degree, in Z⁰, between solutions: 1 var. if g(z|d, μ_z) = 0; 3 var. if g(z|d, μ_z) ≠ 0
τ: 50 gen.
Δτ: 20 gen.

The RBRDO problem from which the previous structure was optimized is defined as follows [49]

$$\min_{\boldsymbol{\mu}_{z2}}\ \{W(\boldsymbol{\mu}_{z2}|\boldsymbol{\mu}_{z1}),\ \det \mathbf{C}_{\sigma}(\boldsymbol{\mu}_{z2}|\boldsymbol{\mu}_{z1},\boldsymbol{\sigma}_{z1},\boldsymbol{\sigma}_{z2})\}$$
$$\text{subject to}\quad F(\boldsymbol{\mu}_{z2}|\boldsymbol{\mu}_{z1},\boldsymbol{\sigma}_{z1},\boldsymbol{\sigma}_{z2}) - 1 \geq 0$$
$$u(\boldsymbol{\mu}_{z2}|\boldsymbol{\mu}_{z1}) \leq u_a$$
$$p_{f_R}(\boldsymbol{\mu}_{z2}|\boldsymbol{\mu}_{z1}) \leq p_{f_R}^{a}$$
$$\mu_{z2_i}^{l} \leq \mu_{z2_i} \leq \mu_{z2_i}^{u}, \quad i = 1, 2, \ldots, N_{z2} \tag{22}$$

where W is the structural weight (optimality measure), det C_σ is the determinant of the variance-covariance matrix (robustness measure) of σ = (u, R), u(μ_z2|μ_z1) is the critical displacement, R(μ_z2|μ_z1) is the critical Tsai number and p_{f_R}^a is the allowable value of the probability of failure associated with the critical Tsai number. The standard deviations σ_z2 are constant and are not a function of μ_z2. In the RBRDO problem the probabilistic stress constraint was evaluated as an inner cycle, by the PMA. The RBRDO problem was solved by a dominance-based multi-objective GA [49] and the set of optimal designs makes up the so-called Pareto-optimal front. The reliability index of each Pareto-optimal solution was first predicted by a gradient-based method [35,36] and its distribution is displayed in Fig. 5.

Fig. 5. Pareto optimal front and distribution of the reliability index of the solutions, estimated by a gradient-based method [35,36].
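A short sketch of the transformation of Eq. (21) used to generate the random parameters follows; the mean values below are the Table 1 properties of one laminate and the 6% coefficient of variation follows the text.

```python
# Sketch of Eq. (21): sampling the random parameters z = mu_z + y_z*sigma_z
# with sigma_zi = 0.06*mu_zi. Only one laminate's {E1, E2, Y, S} is shown.
import numpy as np

mu_z = np.array([181.0e9, 10.3e9, 40.0e6, 68.0e6])   # E1, E2, Y, S (Pa)
sigma_z = 0.06 * mu_z

rng = np.random.default_rng(0)
y = rng.standard_normal((5, mu_z.size))   # independent N(0, 1) deviates
z = mu_z + y * sigma_z                    # realizations in physical space
print(z)
```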


4.3. Results

To execute the HmGA, an Intel(R) Core(TM) i7-6700 CPU @ 3.40 GHz processor was used. The main validation of the method is done by comparison with the results obtained by an alternative gradient-based method [35,36]. It was considered that convergence is possibly achieved if Eq. (17) is verified for τ = 50 generations and Δτ = 20 generations.

When evaluating the accuracy of EA's, it is important to take into account that these methods act on a discretized domain of the search variables and that their resolution is limited, mainly, by the number of genes per variable, while for gradient-based methods it is only limited by the chosen precision of the mathematical operations of the algorithms. Fig. 6 shows a comparison of the distribution of the reliability index, along the Pareto optimal front, obtained by the two methods. The HmGA was able to converge without the need for extra generations, Δτ. The results show a very good proximity to the values estimated with the gradient-based method. Calculating the relative difference of the values of β between the two methods as

$$\Delta\beta = \frac{|\beta_{EA} - \beta_{grad}|}{\beta_{grad}} \tag{23}$$

it is seen in Fig. 7 that the maximum relative difference is below 5% and the mean relative difference over all the solutions is about 0.6%. It is also worth noticing that the majority of the predicted values of the reliability index have a relative difference below the mean.

Fig. 6. Distribution of the reliability index obtained via HmGA and a gradient-based method [35,36].

Fig. 7. Distribution of the relative difference of the values of β, along the Pareto optimal front, for the HmGA.

Fig. 8 displays the distribution of the computing times, by increasing order of β, with a quadratic polynomial regression. There are three intervals of values of β worth attention. For β ∈ [3.552, 3.758] there is a cluster of solutions with times varying from 5 to 8 min. For β ∈ [3.887, 4.483] computing times decrease and are relatively stable, with a maximum difference of about 2 min between the fastest and the slowest solutions. Finally, for β ∈ [4.508, 5.363] there is a significant increase in the computing time of the solutions, with values varying between 4 and 12 min.

Fig. 8. Distribution and quadratic polynomial regression of the computing times, by increasing order of β, for both convergence cases.

About the computing times obtained for each solution, it is important to mention that each is a function of the time spent on each generation and of the number of generations necessary until convergence. The time spent on each generation results from the execution of the basic evolutionary operators and also depends on the number of iterations run by the individual genetic repair operator, for each solution on which it is executed. On the other hand, the time spent until convergence depends on the number of generations, t₁, necessary to run until the first domain reduction, and on the imposed number of generations, τ, until final convergence, during which solutions may still be genetically corrected. Since the HmGA achieved global convergence for the imposed value of τ, from the previous considerations and from Fig. 9, it is possible to conclude that the majority of the computing time was spent on the search for the first reduced region, Z^{t₁}. In practice, the individual genetic repair operator is able to converge in a small number of iterations. Moreover, the base GA only evaluates the newly generated solutions, saving runs of the physical model. So, the computing time spent on each generation is short.

Fig. 9. Number of generations needed to find Z^{t₁}, by increasing order of β of the design solutions.

During the search process, it was possible to conclude that the biggest contribution to the computing time came as a consequence of the similarity control operator. Without it, the search easily gets nested, due to the appearance of cloned solutions. However, during the evolution on the search region Z⁰, an interesting phenomenon of triangular similarity was frequently observed, where a superior solution of the elite had a considerable degree of genetic similarity with other solutions of the elite, of inferior fitness, which did not have a relevant level of similarity between them. As a result, the population failed to preserve those solutions and the number of solutions verifying the equality constraint declined considerably. Even so, with the adopted strategy for the similarity control, explained in Section 4.1, the proposed methodology is able to converge in a useful time, demonstrating the robustness of the developed evolutionary operators. Yet, with further developments it might be possible to reduce the computing time of the algorithm even more.

In relation to the MPPs obtained, it is of interest to analyze how close the points found by both methods are.

Given the number of design solutions evaluated and the elevated number of uncertainty variables considered, it is not possible to plot all of them, so only the most relevant results are represented graphically, while the others are commented. In general, the MPPs found by the HmGA have a good degree of accuracy, just like the values of β. Estimating the relative difference of the values of the mechanical properties, at the MPP, between the gradient-based method and the HmGA, as in Eq. (23), it is seen that the design solution with the highest relative difference of β is also the one with the highest relative difference in the mechanical properties, as expected. For this solution, the longitudinal Young modulus of the first laminate, E1₁, is the random parameter with both the highest relative difference between the two methods and the highest deviation from the mean, in absolute value, at the MPP. Still, it is not significant, as it is at most 4%, as shown in Fig. 10.

Fig. 10. Relative difference, in percentage, of the values of the mechanical properties, at the MPP, as estimated by the HmGA and the gradient-based method.

Fig. 11 shows a uniform radial distribution of the evaluated design solutions, allowing the values of E1₁ of all the MPPs to be plotted and compared with the mean value, μ_E1. Each point on these plots, here referred to as θ-plots, is defined by {E1₁^N cos θ_N, E1₁^N sin θ_N}, with θ_N = 2π(N − 1)/N_s, for N = 1,…,N_s = 42. Starting at 0 rad, the design solutions of the Pareto optimal front are ordered counterclockwise. Overall, it is seen that the MPPs obtained by the two methods present similar behaviors in relation to the mean values, meaning that the sign of the variations is equal. The magnitude of such variations also shows a good match, even though it is possible to identify a small number of solutions with a slightly increased difference between the values. Yet, such differences are not significant for the estimated values of β, as shown in Fig. 7.

Fig. 11. θ-plots of the longitudinal Young modulus, E1, for the first laminate.

The final step to validate the proposed methodology is to perform a crude MCS analysis. Because the estimated failure probabilities are very low, only three representative solutions evaluated by the HmGA were assessed by the MCS method, as a very large number of samples is needed to predict very low probabilities. Based on the performed MCS analysis, a two-sided confidence interval of p_f can be calculated with a given confidence, γ. So, the lower and upper limits of the confidence interval of p_f are obtained from [52]

$$p_f^{low,up} = \frac{K_f}{N_s} \mp \Phi^{-1}\!\left(\frac{1+\gamma}{2}\right)\sqrt{\frac{\frac{K_f}{N_s}\left(1 - \frac{K_f}{N_s}\right)}{N_s}} \tag{24}$$

where K_f is the number of failure events and N_s is the number of samples. The N_s normally distributed random samples, in the uncertainty space, were created by generating the same number of random standard normal deviates by the Box-Muller transform [53]. Table 3 presents the values of β together with the corresponding estimate p_{f,RIA} = Φ(−β), the number of samples used for each solution, the number of failure events, the boundaries of the confidence intervals, for γ = 95%, and the failure probability calculated by the MCS, p_{f,MCS} = K_f/N_s.

Table 3. Input (left) and output (right) parameters of the crude MCS method.

Solution | β | Φ(−β) | N_s | K_f | p_f^low | p_f^up | p_f,MCS
(1) | 3.552 | 1.913E-04 | 3E6 | 587 | 0.000182384 | 0.000208949 | 1.957E-04
(2) | 4.002 | 3.144E-05 | 3E6 | 90 | 1.67174E-05 | 3.52014E-05 | 3.000E-05
(3) | 4.508 | 3.266E-06 | 3E6 | 6 | 0 | 3.34302E-06 | 2.000E-06

Fig. 12 plots the calculated confidence intervals and the position of the estimate p_{f,RIA} within the interval, for the three evaluated solutions. It is seen that the estimate p_{f,RIA}, for the three reference solutions, falls inside the calculated confidence intervals. From a frequentist perspective, 95% of the random confidence intervals, constructed from random samples, are expected to contain the unknown true value of p_f. The fact that the estimate p_{f,RIA} lies inside a confidence interval is a good indicator that the estimate is close to the true value. Although it does not mean, by itself, that 95% of the random intervals would contain the estimate p_{f,RIA}, it is a good measure of convergence of the HmGA.

The concern with the crude MCS method is the large number of required samples and the computing time necessary to evaluate all of them. For the three solutions in Table 3, 3 000 000 simulations were run for each one. It took 35 h to assess the probability of failure of each design solution. As seen in Table 3 and in Fig. 12, solution (1) clearly did not need so many samples to converge. Yet, even a few hundred thousand samples would take several hours to be evaluated, because of the need to consecutively run a FEM model. On the other hand, solution (3) would need even more simulations, since a very low number of failures was observed and one cannot conclude about its convergence. This happened because the probability of failure is already extremely low for a reliability index around 4.5. Computing times of this magnitude are prohibitive if reliability assessment is an inner cycle of a design optimization problem. By comparison, this shows the efficiency of the proposed evolutionary method to solve the RIA, with computing times peaking at 12 min for reliability indexes higher than 4.5. Even though the proposed method is slower than gradient-based methods, the advantages of EA's have to be taken into account, mainly the possibility of achieving global convergence. The proposed method is expected to set the basis for further developments on the design optimization of more complex systems, with multiple failure criteria, where hundreds (if not thousands) of design solutions are evaluated. To make that possible, new developments will have to be made over the ones presented in this paper, primarily to improve the efficiency during the first stage of the evolutionary search and to make the search procedure compatible with multiple failure surfaces. In such problems, it is expected that gradient-based methods fail, or converge to local optima, easily.
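The bounds of Eq. (24) and the p_f,MCS estimator can be checked with the sketch below; Φ⁻¹ is evaluated here through statistics.NormalDist, and the symmetric two-sided form of the Φ⁻¹ argument is an assumption, so the printed bounds may differ slightly from Table 3.

```python
# Sketch of Eq. (24): two-sided normal-approximation interval for p_f.
from statistics import NormalDist
from math import sqrt

def confidence_interval(K_f, N_s, gamma=0.95):
    p = K_f / N_s                               # p_f_MCS = K_f / N_s
    z = NormalDist().inv_cdf((1.0 + gamma) / 2.0)
    half = z * sqrt(p * (1.0 - p) / N_s)
    return p - half, p + half

# Solution (1) of Table 3: 587 failures in 3e6 samples
low, up = confidence_interval(587, 3_000_000)
print(low, up)     # ~1.8e-4 to ~2.1e-4, bracketing p_f_RIA = 1.913e-4
```

The interval width shrinks only with √N_s, which is why solution (3), with six failures in three million FEM runs, still cannot be declared converged.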


Fig. 12. Confidence intervals (black) of pf and estimated values pf RIA (red), with

= 95% , for the tested solutions presented in Table 3. (For interpretation of the

with multiple failure criteria, where hundreds (if not thousands) of design solutions are evaluated. To make it possible new developments will have to be made, over the ones presented in this paper. Primarily, to improve the efficiency during the first stage of the evolutionary search and to make the search procedure compatible with multiple failure surfaces. In such problems, it is expected that gradient-based methods fail, or converge in local optima, easily.

“Associated Laboratory of Energy, Transports and Aeronautics (LAETA)”, UID/EMS/50022/2019.

Supplementary data to this article can be found online at https:// doi.org/10.1016/j.strusafe.2019.03.001.

5. Conclusions

References

In this paper, a new procedure for global convergence of the RIA, with practical efficiency, is presented, suitable to hybridize any EA with elitist strategy, for reliability assessment. The proposed method is expected to set the basis for further developments on the design optimization of more complex systems, with multiple failure criteria, where gradient-methods are expected to fail. Random variables are decomposed into magnitude and direction components, the RIA redefined as an unconstrained problem, with a mixed real-binary coded genotype, and two new evolutionary operators are introduced: one for the genetic repair of the solutions, to impose the equality constraint of the reliability assessment problem, and another for the progressive reduction and reallocation of the search domain, implicitly guiding the evolution to a region of the failure surface with higher probability content. To study the ability of the proposed method, a large set of Pareto-optimal solutions is evaluated, resulting from a previously solved problem of RBRDO of shell composite laminate structures. The reliability index of each solution is estimated by the proposed evolutionary method and by a gradient-based method. Then, the confidence intervals for the probability of failure are estimated, with MCS. The results show very good accuracy of the proposed method in predicting the value of and the location of the MPP, in the failure surface. Computing times of the HmGA are considered to be practical and it is concluded that the most consuming part of the search process happens in the first stage of evolution, because of the similarity control operator, which needs further adaptations to the current problem. Yet, it demonstrates the efficiency and the capability of the new evolutionary operators to solve the RIA problem. Finally, confidence intervals of the probability of failure were calculated by the crude MCS method, for three representative solutions. The probabilities of failure estimated by means of fall within the confidence intervals. The MCS required a huge number of samples, which took many hours. By comparison, it shows the efficiency of the proposed evolutionary methodology, which was able to solve the problem in just few minutes with similar levels of accuracy. Even though it performs slower than gradient-based methods, the advantages of EA have to be taken into account, mainly the ability to achieve global convergence.

Acknowledgement

The authors gratefully acknowledge the funding by Fundação para a Ciência e Tecnologia (FCT), Portugal, through the funding of the “Associated Laboratory of Energy, Transports and Aeronautics (LAETA)”, UID/EMS/50022/2019.

Appendix A. Supplementary data

Supplementary data to this article can be found online at https://doi.org/10.1016/j.strusafe.2019.03.001.

References

[1] Huang C, El Hami A, Radi B. Overview of structural reliability analysis methods – Part II: sampling methods. Incertitudes et fiabilité des systèmes multiphysiques 2017;1(Optimisation et Fiabilité).
[2] Melchers RE. Structural reliability analysis and prediction. John Wiley & Sons; 1999.
[3] Nikolaidis E, Ghiocel DM, Singhal S. Engineering design reliability applications: for the aerospace, automotive and ship industries. CRC Press; 2007.
[4] Schueller G. Efficient Monte Carlo simulation procedures in structural uncertainty and reliability analysis – recent advances. Struct Eng Mech 2009;32(1):1–20.
[5] Hurtado JE, Ramírez J. The estimation of failure probabilities as a false optimization problem. Struct Saf 2013;45:1–9.
[6] Rashki M, Miri M, Moghaddam MA. A new efficient simulation method to approximate the probability of failure and most probable point. Struct Saf 2012;39:22–9.
[7] Ditlevsen O, Melchers RE, Gluver H. General multi-dimensional probability integration by directional simulation. Comput Struct 1990;36(2):355–68.
[8] Ditlevsen O, Bjerager P, Olesen R, Hasofer AM. Directional simulation in Gaussian processes. Probab Eng Mech 1988;3(4):207–17.
[9] Melchers R. Structural system reliability assessment using directional simulation. Struct Saf 1994;16(1):23–37.
[10] Shayanfar MA, Barkhordari MA, Barkhori M, Barkhori M. An adaptive directional importance sampling method for structural reliability analysis. Struct Saf 2018;70:14–20.
[11] Papaioannou I, Papadimitriou C, Straub D. Sequential importance sampling for structural reliability analysis. Struct Saf 2016;62:66–75.
[12] Wang J, Sun Z. The stepwise accuracy-improvement strategy based on the Kriging model for structural reliability analysis. Struct Multidisc Optim 2018. https://doi.org/10.1007/s00158-018-1911-9.
[13] Zhang L, Lu Z, Wang P. Efficient structural reliability analysis method based on advanced Kriging model. Appl Math Model 2015;39(2):781–93.
[14] Cornell CA. A proposal for a reliability-based code suitable for immediate implementation. Memorandum to members of the ASCE Task Committee on Structural Safety; 1967.
[15] Ditlevsen O. Structural reliability analysis and the invariance problem. Technical report No. 22, Solid Mechanics Division. Ontario, Canada: University of Waterloo; 1973.
[16] Lind NC. An invariant second-moment reliability format. Paper No. 113, Solid Mechanics Division. Ontario, Canada: University of Waterloo; 1972.
[17] Hasofer AM, Lind NC. Exact and invariant second-moment code format. J Eng Mech Div ASCE 1974;100(EM1):111–21.
[18] Rackwitz R, Fiessler B. Structural reliability under combined random load sequences. Comput Struct 1978;9:489–94.
[19] Periçaro GA, Santos SR, Ribeiro AA, Matioli LC. HLRF–BFGS optimization algorithm for structural reliability. Appl Math Model 2015;39(7):2025–35.
[20] Keshtegar B. A hybrid conjugate finite-step length method for robust and efficient reliability analysis. Appl Math Model 2015;45:226–37.
[21] Keshtegar B, Meng Z. A hybrid relaxed first-order reliability method for efficient structural reliability analysis. Struct Saf 2017;45:84–93.
[22] Ezzati G, Mammadov M, Kulkarni S. A new reliability analysis method based on the conjugate gradient direction. Struct Multidisc Optim 2015;51(1):89–98.
[23] Acar E. A reliability index extrapolation method for separable limit states. Struct Multidisc Optim 2016;53:1099–111.
[24] Bai YC, Han X, Jiang C, Bi RG. A response-surface-based structural reliability analysis method by using non-probability convex model. Appl Math Model 2014;38(15–16):3834–47.
[25] Meng Z, Li G, Yang D, Zhan D. A new directional stability transformation method of chaos control for first order reliability analysis. Struct Multidisc Optim 2017;55(2):601–12.
[26] Keshtegar B. Limited conjugate gradient method for structural reliability analysis. Eng Comput 2017;33(3):621–9.
[27] Jiang C, Han S, Ji M, Han X. A new method to solve the structural reliability index based on homotopy analysis. Acta Mech 2015;226(4):1067–83.
[28] Keshtegar B. Stability iterative method for structural reliability analysis using a chaotic conjugate map. Nonlinear Dyn 2016;84(4):2161–74.
[29] Valdebenito MA, Schueller GI. A survey on approaches for reliability-based optimization. Struct Multidisc Optim 2010;42:645–63.
[30] Díaz J, Montoya MC, Hernández S. Efficient methodologies for reliability-based design optimization of composite panels. Adv Eng Softw 2016;93:8–21.
[31] Zhao Q, Chen X, Ma Z, Lin Y. A comparison of deterministic, reliability-based topology optimization under uncertainties. Acta Mech Solida Sin 2016;29(1).
[32] Zhou X-Y, Gosling PD, Ullah Z, Kaczmarczyk L, Pearce CJ. Exploiting the benefits of multi-scale analysis in reliability analysis for composite structures. Compos Struct 2016;155:197–212.
[33] Dehmous H, Duco F, Karama M, Welemane H. Multilevel assessment method for reliable design of composite structures. Mech Adv Mater Struct 2017;24(6):449–57.
[34] Chen J, Ge R, Wei J. Probabilistic optimal design of laminates using improved particle swarm optimization. Eng Optimiz 2008;40(8):695–708.
[35] António CAC. Optimization of structures using composite materials made of polymeric matrix (in Portuguese). PhD thesis. Porto, Portugal: Faculty of Engineering, University of Porto; 1995.
[36] António CAC, Torres Marques A, Gonçalves JF. Reliability based design with a degradation model of laminated composite structures. Struct Optim 1996;12:16–28.
[37] António CAC, Hoffbauer LN. Uncertainty assessment approach for composite structures based on global sensitivity indices. Compos Struct 2013;99:202–12.
[38] António CC, Hoffbauer LN. Uncertainty propagation in inverse reliability-based design of composite structures. Compos Struct 2013;99:202–12.
[39] Cheng J, Li QS. Reliability analysis of structures using artificial neural network based genetic algorithms. Comput Methods Appl Mech Engrg 2008;197:3742–50.
[40] Deng L, Ghosn M, Shao S. Development of a shredding genetic algorithm for structural reliability. Struct Saf 2005;27:113–31.
[41] Shao S, Murotsu Y. Approach to failure mode analysis of large structures. Probabilist Eng Mech 1999;14(1–2):169–77.
[42] Wang J, Ghosn M. Linkage-shredding genetic algorithm for reliability assessment of structural systems. Struct Saf 2005;27:49–72.
[43] António CAC. A hierarchical genetic algorithm for reliability based design of geometrically non-linear composite structures. Compos Struct 2001;54:37–47.
[44] António CC, Hoffbauer LN. Reliability-based design optimization and uncertainty quantification for optimal conditions of composite structures with non-linear behavior. Eng Struct 2017;153:479–90.
[45] Ahmad S, Irons BM, Zienkiewicz OC. Analysis of thick and thin shell structures by curved finite elements. Int J Numer Methods Eng 1970;2(3):419–51.
[46] António CC, Hoffbauer LN. From local to global importance measures of uncertainty propagation in composite structures. Compos Struct 2008;85:213–25.
[47] Tsai SW, Wu EM. A general theory of strength for anisotropic materials. J Compos Mater 1971;5(1):58–80.
[48] António CAC. A multilevel genetic algorithm for optimization of geometrically nonlinear stiffened composite structures. Struct Multidisc Optim 2002;24:372–86.
[49] das Neves Carneiro G, António CC. A RBRDO approach based on structural robustness and imposed reliability level. Struct Multidisc Optim 2018;57(6):2411–29.
[50] António CC, Castro CR, Sousa LC. Optimization of metal forming processes. Comput Struct 2004;82:3425–33.
[51] Tsai SW. Composite design. USA: Think Composites; 1987.
[52] Casella G, Berger RL. Statistical inference. Thomson Learning; 2000.
[53] Box GEP, Muller ME. A note on the generation of random normal deviates. Ann Math Stat 1958;29:610–11.
