Automation in Construction 20 (2011) 610–619
Reliability-based design optimization with discrete design variables and non-smooth performance functions: AB-PSO algorithm

I-Tung Yang, Yi-Hung Hsieh

Dept. of Construction Engineering, National Taiwan University of Science and Technology, No. 43 Keelung Road, Taipei 106, Taiwan
Article history: Accepted 4 December 2010. Available online 11 January 2011.

Keywords: Reliability-based design optimization (RBDO); Particle swarm optimization; Subset simulation; Uncertainty analysis; Metaheuristics
Abstract

The purpose of reliability-based design optimization (RBDO) is to find a balanced design that is not only economical but also reliable in the presence of uncertainty. Practical applications of RBDO involve discrete design variables, which are selected from commercially available lists, and non-smooth (non-differentiable) performance functions. In these cases, the problem becomes an NP-complete combinatorial optimization problem, which is intractable for discrete optimization methods. Moreover, non-smooth performance functions hinder the use of gradient-based optimizers because the gradient information is of questionable accuracy. A framework is presented in this paper whereby subset simulation is integrated with a new particle swarm optimization (PSO) algorithm to solve the discrete and non-smooth RBDO problem. Subset simulation overcomes the inefficiency of direct Monte Carlo simulation (MCS) in estimating small failure probabilities, while being robust against the presence of non-smooth performance functions. The proposed PSO algorithm extends standard PSO with two new features: auto-tuning and boundary-approaching. The former allows the proposed algorithm to automatically fine-tune its control parameters without tedious trial-and-error procedures. The latter substantially increases computational efficiency by encouraging movement toward the boundary of the safe region. The proposed auto-tuning boundary-approaching PSO algorithm (AB-PSO) is used to find the optimal design of a ten-bar truss whose component sizes are selected from commercial standards, while reliability constraints are imposed by the current design code. In multiple trials, the AB-PSO algorithm delivers competitive solutions with consistency. The superiority of the AB-PSO algorithm over standard PSO and the genetic algorithm (GA) is statistically supported by non-parametric Mann–Whitney U tests with p-values less than 0.01.
1. Introduction

Reliability-based design optimization (RBDO) is concerned with designing a structure that optimizes an objective function while meeting a given reliability threshold in the presence of uncertainty. This problem is of paramount importance to a wide spectrum of engineering fields, especially civil engineering [1]. The most common objective is the minimization of material cost or weight. In many practical situations where customization is expensive, the design variables are by nature discrete, as their values can only be selected from a commercially available set, such as the diameters of tubes, the thicknesses of plates, or the cross-sections of steel beams [19]. Furthermore, computation of the reliability constraints in real-world applications often involves highly nonlinear, non-smooth, and non-differentiable performance functions, which relate the structural state (failure or safe) to the uncertainty [38,41]. The RBDO problem with discrete design variables and non-smooth performance functions is the target of the present study.
The complexity of the considered RBDO problem is three-fold. First, the solution space is discrete; the problem hence belongs to the category of combinatorial optimization, which is NP-complete. Existing discrete optimization algorithms require, in the worst case, an exponential amount of time and are therefore impractical for all but very small instances [29]. Second, the non-smoothness of the performance functions renders conventional gradient-based optimizers ineffective, as they stall on gradient information of questionable accuracy and fail to converge [39]. Popular gradient-based optimizers include sequential quadratic programming (SQP) and the generalized reduced gradient algorithm (GRG2). Third, the nonlinearity and non-smoothness of the reliability criteria may diminish the accuracy of approximation methods, e.g., first-order or second-order reliability methods, in estimating failure probabilities. By contrast, Monte Carlo simulation (MCS) is insensitive to the type of reliability criteria; yet MCS is computationally intensive, as it needs a large number of samples to evaluate small failure probabilities. To address all the foregoing concerns, the present study develops a framework that integrates a new gradient-free optimization method with an efficient simulation procedure. The proposed method is a modified particle swarm optimization (PSO) algorithm with two new features: auto-tuning and boundary-approaching. The auto-tuning feature
is to automatically adjust the control parameters with dynamic feedback from the search process, thereby eliminating the need for separate sensitivity analyses. The boundary-approaching feature is to encourage movement toward the constraint boundary, thus accelerating the search process. The adopted simulation procedure is subset simulation, which is flexible in that it does not rely on approximations of the reliability criteria, and yet uses far fewer samples to estimate small failure probabilities.

The paper is organized in the following manner. The second section defines the considered RBDO problem and illustrates its complexity. The third section presents the proposed framework. The fourth section introduces subset simulation, used to evaluate small failure probabilities with great efficiency. The fifth section elucidates the proposed AB-PSO (auto-tuning and boundary-approaching PSO) algorithm. A 10-bar truss example is used in the sixth section to verify the performance of the proposed framework by comparisons with standard PSO and another popular metaheuristic, the genetic algorithm (GA). Hypotheses that the AB-PSO algorithm outperforms standard PSO and GA are examined by non-parametric Mann–Whitney tests. Conclusions are given in the last section.

2. Problem definition

In RBDO, the variables can be described using two vectors: a design vector and a random vector. The elements of the design vector $X = [x_1, x_2, \ldots, x_n]$ represent design variables whose feasible domain consists of discrete selections. The random vector $Z = [z_1, z_2, \ldots, z_q]$ is used to model uncertainty about the resistance properties of materials and the applied external loads. Some decision variables may also be random when they are bound to have errors. Two sources of uncertainty should be accounted for in the random vector: aleatory and epistemic uncertainty. Aleatory uncertainty is the inherent variation in the system, also known as irreducible and objective uncertainty. By comparison, epistemic uncertainty is presumed to be caused by a lack of knowledge (or data), also known as reducible and subjective uncertainty. Epistemic uncertainty may result from estimation error, model imperfection, and human error [11]. The considered RBDO problem is to find a balanced design X from a finite set: the objective is to minimize the total cost whereas the system failure probability should not exceed a specified target.
Formally, the considered RBDO problem can be stated as

$$\min \; C_t(X, Z) \qquad (1a)$$

$$\text{s.t.} \; P_f(X, Z) \le \tilde{P}_f \qquad (1b)$$

$$x_i \in L = \{l_1, l_2, \ldots, l_r\} \quad \text{for all } x_i \text{ in } X \qquad (1c)$$

where $C_t(X, Z)$ is the total cost; $P_f(X, Z)$ is the system failure probability; $\tilde{P}_f$ is the target failure probability; and $x_i$ represents the i-th vector element in X, selected from a known list, L. The total cost in Eq. (1a) consists of material costs and the expected failure cost, which is given by the product of the failure cost and the failure probability. The failure cost may encompass the costs of repairing or replacing the damaged structure, removing the collapsed structure, and compensation for injury or death of general users. Together, the total cost can be expressed as

$$C_t(X, Z) = C_m(X) + P_f(X, Z) \times C_f \qquad (2)$$

where $C_t$, $C_m$, and $C_f$ denote the total cost, material cost, and failure cost, respectively; $P_f$ is the failure probability, which can be evaluated as an implicit function of the design and random variables. The probabilistic constraint in Eq. (1b) can be described by the performance function

$$P_f(X, Z) = \Pr[g(X, Z) \le 0] \qquad (3)$$

where $\Pr[\cdot]$ represents the probability of system failure and g(X, Z) is the performance function, which indicates failure for a given design X and realization of Z when it is less than or equal to zero. In other words, the performance function (limit state function) separates the variable space into a safe region where $g(X, Z) > 0$ and a failure region where $g(X, Z) \le 0$. The failure probability in Eq. (3) is inversely related to the system reliability, which is an integrated result of individual component reliabilities, assuming series or parallel systems [16]. A series system fails when any of the components fails. Thus, the failure probability is

$$P_f(X, Z) = \Pr\left[\bigcup_i \{g_i(X, Z) \le 0\}\right] \qquad (4)$$

where $g_i(X, Z)$ is the performance function of the i-th component. By comparison, a parallel system fails when every component fails. The failure probability becomes

$$P_f(X, Z) = \Pr\left[\bigcap_i \{g_i(X, Z) \le 0\}\right] \qquad (5)$$

The performance functions in Eqs. (4) and (5) result from the design constraints imposed by the current design code, such as displacement, stress, or compression force of buckling. In practice, the performance functions may not always be linear, smooth, and differentiable, especially when the performance functions are disrupted at intermediate points [37]. Displacement constraints can be written as

$$|d_p| \le d_{allow} \qquad (6)$$

where $d_p$ is the displacement at the p-th node and $d_{allow}$ is the allowable displacement. Stress constraints can be written as

$$\sigma_i \le \sigma_{allow}, \qquad \sigma_{allow} = \frac{\sigma_y}{n_s} \qquad (7)$$

where $\sigma_i$ is the axial stress in the i-th component and $\sigma_{allow}$ is the allowable stress, which equals the yield stress $\sigma_y$ divided by a safety factor $n_s$. The compression force may involve buckling constraints

$$\sigma_{c,i} \le \sigma_e \qquad (8)$$

where $\sigma_{c,i}$ is the axial compression force in the i-th component and $\sigma_e$ is the critical buckling force in compression. The buckling force can be formulated as a function of member length and thickness, material stiffness, and moment of inertia.
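To make the cost model concrete, the following sketch evaluates Eqs. (1b) and (2) for a candidate design. It is a minimal illustration: the function names and the example numbers are assumptions for exposition, not the authors' implementation.

```python
def total_cost(material_cost, failure_prob, failure_cost):
    # Eq. (2): C_t = C_m + P_f * C_f (expected failure cost added to material cost)
    return material_cost + failure_prob * failure_cost

def satisfies_reliability(failure_prob, target_pf=1e-4):
    # Eq. (1b): the design is feasible only if P_f does not exceed the target
    return failure_prob <= target_pf

# Hypothetical design: $480,000 of material, estimated P_f = 8e-5, and a
# $5,000,000 failure cost (the value used in the truss example later on).
cm, pf, cf = 480_000.0, 8e-5, 5_000_000.0
print(total_cost(cm, pf, cf))      # 480400.0
print(satisfies_reliability(pf))   # True
```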
3. Proposed framework

In the literature, reliability analysis has been integrated with an optimization algorithm in three different ways: double-loop, single-loop, and decoupled. Double-loop methods have a nested nature: the outer loop performs optimization while every candidate solution is evaluated by the reliability analysis in the inner loop. The overall computational efficiency is jointly determined by the number of evaluation runs in the inner loop as well as the number of optimization iterations in the outer loop. Single-loop and decoupled methods avoid the reliability analysis by introducing equivalent optimality conditions to replace direct evaluation of the reliability constraints. At the core of single-loop methods is the most probable point (MPP), defined in the Gaussian space as the point on the limit state surface with the minimum distance to the origin [12]. Instead of finding the exact MPP, single-loop methods approximate it based on lower-order polynomial functions or derivatives of the performance functions [18,26,33].
Decoupled methods break the nested loops into sequential cycles. They first determine the deterministic optimal solution in the search space with the aid of nonlinear constrained optimizers. Then, the MPP for each constraint is found. In the next step, the constraints are shifted according to their individual MPPs. Thereafter, a deterministic optimization of the shifted problem is solved again to finish the current cycle. The entire process continues from cycle to cycle until convergence [13]. In a nutshell, decoupled methods start with a deterministic optimum and search for a nearby feasible solution satisfying the reliability constraints. Single-loop and decoupled methods, despite requiring less computational effort, have their limitations. On one hand, single-loop methods, as approximations, may not guarantee accuracy. On the other hand, decoupled methods become unsatisfactory when there are multiple local optima and the reliable optimum is not close to the deterministic optimum [9]. Furthermore, decoupled methods entail extensive use of nonlinear constrained optimizers, which are impeded by the lack of derivatives.

The proposed framework extends the double-loop method by integrating subset simulation and the AB-PSO algorithm, as demonstrated in Fig. 1. Subset simulation estimates the failure probability of each candidate solution. The failure probability is then used to check whether the reliability constraint in Eq. (1b) is satisfied and to compute the associated objective value in Eq. (2). On the basis of the objective value, the AB-PSO algorithm searches for better solutions iteratively. Subset simulation and the AB-PSO algorithm are discussed separately in the following sections.

Fig. 1. Proposed framework.
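The double-loop structure of Fig. 1 can be summarized in code. The sketch below is a toy stand-in: random sampling replaces the AB-PSO search and a fabricated closed-form expression replaces subset simulation, so only the control flow (outer optimization loop, inner reliability evaluation per candidate) reflects the framework.

```python
import random

def toy_failure_probability(design):
    # Stand-in for the inner reliability analysis (Section 4): a fake
    # failure probability that decreases as the members get larger.
    return 1e-3 / (1.0 + sum(design))

def toy_material_cost(design):
    # Stand-in: material cost grows with the selected member sizes.
    return 1000.0 * sum(design)

def double_loop_rbdo(n_iter=200, n_candidates=10, target_pf=1e-4, cf=5e6):
    """Double-loop RBDO skeleton (Fig. 1): the outer loop proposes designs,
    the inner loop estimates each design's failure probability."""
    best, best_cost = None, float("inf")
    for _ in range(n_iter):
        for _ in range(n_candidates):
            design = [random.randint(1, 72) for _ in range(10)]  # 10 bars
            pf = toy_failure_probability(design)   # inner loop
            if pf > target_pf:                     # Eq. (1b) violated
                continue
            cost = toy_material_cost(design) + pf * cf   # Eq. (2)
            if cost < best_cost:
                best, best_cost = design, cost
    return best, best_cost

best, cost = double_loop_rbdo()
print(best, round(cost))
```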
4. Subset simulation

A variety of methods have been employed to evaluate the reliability constraint in Eq. (1b): direct integration, approximation, and sampling. Direct integration methods calculate a multi-fold integral of a joint distribution function over the failure region. Nevertheless, it is almost impossible to evaluate the multi-fold integral exactly [30]. Approximation methods (e.g., first-order and second-order reliability methods, FORM/SORM) are based on linear or quadratic approximation of the limit-state surface, which separates the safe and failure regions. Their accuracy depends on the assumption that the limit-state surface is close to being linear or quadratic. Another approximation method, FOSM (the first-order second-moment method), requires evaluation of a Taylor series expansion and of function derivatives. Albeit simple and efficient, FOSM relies on the assumptions of normal distributions and linearization of the limit-state function at the mean values of the random variables.

Sampling methods include Monte Carlo simulation (MCS) and Latin hypercube sampling (LHS). Although they are flexible about the shapes of limit-state functions and the distribution types of random variables, the large number of samples required to evaluate small failure probabilities makes them inefficient. More advanced sampling methods, such as importance sampling and its variants [20,27,40], can reduce the required sample size to 5%–20%, but become inapplicable in high dimensions [32]. To attain the generality of sampling methods without sacrificing the computational efficiency of approximation methods, the present study adopts subset simulation to perform the reliability analysis. Subset simulation stems from the concept that a small failure probability can be expressed as a product of larger conditional failure probabilities for intermediate failure events, thereby converting a rare-event simulation problem into a sequence of more frequent ones [2]. During simulation, conditional samples are generated from designed Markov chains so that they gradually populate each intermediate failure region until reaching the final target failure region. Unlike approximation methods, subset simulation is robust against the presence of non-linear and non-smooth performance functions. The mathematical expression of subset simulation is based on a decreasing sequence of nested failure regions $\{F_i\}_{i=1}^{m}$ such that $F_1 \supset F_2 \supset \cdots \supset F_m = F$ and $F_k = \bigcap_{i=1}^{k} F_i$. The probability of failure, given a specific design vector X, can be estimated as

$$P_f = P(F, X) = P(F_1, X) \prod_{i=2}^{m} P(F_i \mid F_{i-1}, X) \qquad (9)$$
By an appropriate selection of events, the conditional probabilities in Eq. (9) can be made sufficiently large so that they can be estimated using far fewer runs of simulation. This results in a tremendous gain in computational efficiency in contrast to the sampling methods. Au and Beck [3] showed that subset simulation is 1030 times faster than MCS when the failure probability is 10^-6. Fig. 2 shows the implementation of subset simulation in the proposed framework, where the Metropolis–Hastings algorithm (MHA) is used to generate Markov chains of samples according to the conditional distributions.

Fig. 2. Implementation of subset simulation.

Notations in Fig. 2 include:
1. m denotes the number of conditional levels and i is the counter.
2. b_i is the prescribed threshold at the i-th conditional level.
3. N_M is the number of Markov chains and j is the counter.
4. N_i is the number of samples in each Markov chain and k is the counter.
At the beginning of subset simulation, we estimate the failure probability at the first conditional level given the design vector X, P(F_1, X). This is done by measuring the fraction of the MCS samples lying in the first failure region F_1. From the MCS samples, one can obtain samples distributed according to the conditional distribution given that the performance function exceeds the prescribed threshold b_i. Starting from those samples, we compute the acceptance ratio and use it to simulate new Markov chain samples. The conditional probability P(F_2 | F_1, X) can then be estimated by the fraction of the current samples lying in the second failure region F_2. The Markov chain samples in the second conditional level then serve as seeds for the third level. Repeating these steps, we compute the conditional probabilities of the higher conditional levels until the failure region of interest F_m has been reached. Taking the product of all the conditional probabilities leads to the probability of failure P(F, X).
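The procedure can be illustrated on a toy one-dimensional problem. The sketch below follows the level-by-level logic just described: direct MCS at the first level, Metropolis–Hastings chains grown from the seed samples at the conditional levels, and the product of conditional probabilities of Eq. (9). The standard-normal performance function, the proposal width, and the chain bookkeeping are simplifying assumptions, not the paper's truss model.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(z):
    # Toy performance function: failure when g(z) <= 0, i.e. z >= 3.5.
    # The exact failure probability is 1 - Phi(3.5), about 2.3e-4.
    return 3.5 - z

def subset_simulation(n=500, p0=0.1, max_levels=10):
    """Estimate P[g(Z) <= 0] as a product of conditional probabilities (Eq. 9)."""
    nc = int(p0 * n)                    # seeds (and Markov chains) per level
    z = rng.standard_normal(n)          # first level: direct MCS
    pf = 1.0
    for _ in range(max_levels):
        gz = g(z)
        order = np.argsort(gz)          # smallest g first: deepest into failure
        b = gz[order[nc - 1]]           # intermediate threshold b_i
        if b <= 0:                      # target failure region reached
            return pf * np.mean(gz <= 0)
        pf *= p0                        # P(F_i | F_{i-1}) ~ p0 by construction
        seeds = z[order[:nc]]           # conditional samples seed the chains
        samples = []
        for s in seeds:                 # Metropolis-Hastings within F_i
            x = s
            for _ in range(n // nc):
                cand = x + rng.uniform(-1.0, 1.0)
                # accept by the standard-normal density ratio, and only if
                # the candidate stays inside the current failure region
                if rng.random() < np.exp(0.5 * (x * x - cand * cand)) and g(cand) <= b:
                    x = cand
                samples.append(x)
        z = np.array(samples)
    return pf * np.mean(g(z) <= 0)

print(subset_simulation())              # on the order of 2e-4
```

Unlike the paper's implementation, this sketch regenerates all samples at each level rather than reusing the seeds among them; the seed-sharing accounting is discussed in Section 6.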
5. Auto-tuning boundary-approaching PSO

The original PSO scheme mimics the social adaptation of biological creatures in a swarm, such as fish, insects, or birds [22]. This computational intelligence technique has been applied to a wide
variety of domains [5,24,31,35,43]. The results demonstrate promising performance, and Clerc [6] provides comprehensive coverage of the theories, perspectives, and applications of PSO algorithms. A standard PSO algorithm is initialized with a swarm of particles, each a candidate solution. Every particle is iteratively moved across the search space, attracted to the position of the best fitness historically achieved by the particle itself (the local best, lBest) and by the best among its neighbors (the global best, gBest). In essence, each particle continuously focuses and refocuses the effort of its search according to both the local and global bests until converging to the optimum. PSO requires no gradient information and is therefore particularly suitable for the present study.
PSO is easy to implement as it uses only primitive mathematical operators and two equations. The first equation moves a particle by adding a velocity $\vec{v}_i(t+1)$ to the current position $\vec{x}_i(t)$ at iteration t + 1:

$$\vec{x}_i(t+1) = \vec{x}_i(t) + \vec{v}_i(t+1) \qquad (10)$$

The second equation determines the velocity as a combination of three contributing factors: (1) the previous velocity $\vec{v}_i(t)$, (2) movement in the direction of the local best $\vec{x}_L$, and (3) movement in the direction of the global best $\vec{x}_G$. Formally, the velocity is given by

$$\vec{v}_i(t+1) = w \times \vec{v}_i(t) + r_1 c_1 \left(\vec{x}_L - \vec{x}_i(t)\right) + r_2 c_2 \left(\vec{x}_G - \vec{x}_i(t)\right) \qquad (11)$$
where w is an inertia weight that controls the influence of the previous velocity; r_1 and r_2 are two random numbers uniformly distributed in the range (0, 1); and c_1 and c_2 are two acceleration constants. The first acceleration constant c_1, called the cognitive coefficient, handles the mechanism of "learning from experience". The second acceleration constant c_2, the social coefficient, handles the mechanism of "social interaction". Since the particles' positions are originally real-valued, they can be normalized within [0, 1] and then mapped into the discrete search space through a discretization function:

$$X = \left\lceil \vec{x}_i(t) \times n \right\rceil \qquad (12)$$

where $\vec{x}_i(t)$ is the position vector with a normalized domain of unit size in each dimension and n stands for the number of selections for the decision variables. The vector elements are multiplied by the number of selections to change the domain to [0, n], and the result is rounded up to the next largest integer. For instance, the position vector [0.12, 0.98] is transformed into the decision vector [2, 10] if there are 10 selections for each structural component. Such discretization has been shown appropriate and effective for discrete search spaces [10,25,42]. The locations of particles are always constrained within the defined domain.

The proposed AB-PSO algorithm aims to stimulate a move toward the boundary of the reliability constraints. It is often the case that higher reliability (lower failure probability) is associated with higher material cost. Given that a tradeoff exists between reliability and material cost, the solution with minimum cost would lie close to the boundary of the reliability constraints. Yet, movement toward the boundary is not directly modeled in standard PSO algorithms. Although such movement should be encouraged, it cannot be made "too fast", for two reasons. First, premature convergence may occur, resting on locally optimal solutions. Second, the particles may be brought into the failure region immediately and waste time on ineffective search. The boundary-approaching feature is implemented by adding another term to Eq. (11). The calculation of the velocity becomes

$$\vec{v}_i(t+1) = w \times \vec{v}_i(t) + r_1 c_1 \left(\vec{x}_L - \vec{x}_i(t)\right) + r_2 c_2 \left(\vec{x}_G - \vec{x}_i(t)\right) + r_3 c_3 \left(\vec{x}_B - \vec{x}_i(t)\right) \qquad (13)$$

where r_3 is a uniform random number; c_3 is an acceleration coefficient for boundary-approaching; and $\vec{x}_B$ denotes the position of the particle with the shortest distance to the boundary, i.e., with a probability of failure smaller than, but closest to, the target. Fig. 3 illustrates Eq. (13)
in a two-dimensional search space. The boundary of the reliability constraints in the search space is unknown and is plotted as the dotted surface. The velocity of particle 1 is contributed by four terms: the velocity in the previous iteration $\vec{v}_i(t)$, attraction to the local best $\vec{x}_L$ (particle 2), attraction to the global best $\vec{x}_G$ (particle 3), and attraction to the particle closest to the boundary $\vec{x}_B$ (particle 4). Particles 5, 6, and 7 are other particles in the current iteration. An extra routine is added to handle violations of the reliability constraints: once a violation happens, the particle maintains its previous location.

Fig. 3. Boundary-approaching feature.

Standard PSO algorithms have to set several control parameters: the inertia weight and the acceleration (cognitive and social) coefficients, which are crucial to performance. Despite their importance, the selection is usually done by experience-based trial-and-error experimentation. That is, a series of sensitivity analyses has to be conducted to test the effect of different parameter settings. Adding to the difficulty, multiple runs are needed to collect enough results for statistical verification. The entire process is cumbersome and contrived. Automatic parameter selection may be done by including the control parameters as design variables and optimizing them all together [4]. This approach, however, increases the dimensions of the search space, making the search much more difficult. Another popular alternative is experimenting with different parameters and selecting the ones that give the best results. This causes a nested structure, as the outer loop optimizes the control parameters whose performance has to be evaluated by the inner RBDO analysis. Since the inner RBDO analysis is by itself computationally expensive, this approach is practically unacceptable [15].

The AB-PSO algorithm automatically adjusts the control parameters as the algorithm proceeds. The adjustment is conducted after a certain number of iterations, not in every iteration. This allows some time for the selected control parameters to make an impact. When an adjustment is due, a window is used to shift each control parameter either to its current value or to one of the two nearest values, according to their individual performances. Using a window to limit the search, in contrast to a full-domain search, is specifically designed to alleviate the computational burden. The auto-tuning process is illustrated in Fig. 4.

Fig. 4. Auto-tuning feature.

At the beginning, the ranges of the parameters are set by the user: w is between 0.5 and 1.5; c1, c2, and c3 are all between 0.5 and 2.5. Each parameter range is divided into 11 levels. At each adjustment, we compare the performances of candidate parameter settings to determine the best one for the next iterations. Suppose the current setting is (w, c1, c2, c3) = (1.1, 1.7, 0.7, 2.1). The 3-item windows for the acceleration constants cover the current value and the nearest two values: one higher and one lower. Because the inertia weight should be decreased as the algorithm proceeds to encourage exploitation [34], its window covers the current value and the next two smaller values. Checking 3 settings for four parameters (w, c1, c2, and c3) takes 3^4 = 81 experiments, compared to a full-domain search that needs 11^4 = 14,641 experiments. At the end of this adjustment, we select (w, c1, c2, c3) = (0.9, 1.9, 0.7, 2.3) as it yields the best result among all the experiments. In this way, the parameters are gradually adjusted as the search progresses. Note that the number of experiments can be further reduced by experiment design.
For instance, the Taguchi orthogonal array L9 can be used to choose the best parameters among the 81 combinations using merely 9 experiments [8]. Fig. 5 gives the pseudocode of the AB-PSO algorithm.

Fig. 5. Pseudocode of the AB-PSO algorithm.
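As a complement to the pseudocode, the sketch below expresses three of the ingredients described above in compact form: the velocity update of Eq. (13), the discretization of Eq. (12), and the 3-value tuning windows. The control-parameter values follow the worked setting in the text; everything else (function names, the single-particle demonstration) is an illustrative assumption.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

def ab_pso_velocity(v, x, x_local, x_global, x_boundary,
                    w=1.1, c1=1.7, c2=0.7, c3=2.1):
    """Eq. (13): standard PSO velocity plus the boundary-approaching term."""
    r1, r2, r3 = rng.random(3)
    return (w * v + r1 * c1 * (x_local - x) + r2 * c2 * (x_global - x)
            + r3 * c3 * (x_boundary - x))

def discretize(x, n=72):
    """Eq. (12): map normalized positions in [0,1] to choices 1..n by scaling
    and rounding up; e.g. [0.12, 0.98] -> [2, 10] when n = 10."""
    return np.maximum(1, np.ceil(np.clip(x, 0.0, 1.0) * n).astype(int))

def tuning_window(current, levels, lower_only=False):
    """Auto-tuning window: the current level plus its two neighbors, or (for
    the inertia weight) the current level plus the two next-smaller levels."""
    i = int(np.argmin(np.abs(levels - current)))
    lo, hi = (max(0, i - 2), i + 1) if lower_only else (max(0, i - 1), i + 2)
    return levels[lo:hi]

# One illustrative particle move in a 10-bar design space.
x, v = rng.random(10), np.zeros(10)
xl, xg, xb = rng.random(10), rng.random(10), rng.random(10)
x = np.clip(x + ab_pso_velocity(v, x, xl, xg, xb), 0.0, 1.0)   # Eq. (10)
print(discretize(x))                                           # candidate bar choices

# The 3^4 = 81 candidate settings around (w, c1, c2, c3) = (1.1, 1.7, 0.7, 2.1).
w_levels, c_levels = np.linspace(0.5, 1.5, 11), np.linspace(0.5, 2.5, 11)
candidates = list(product(tuning_window(1.1, w_levels, lower_only=True),
                          tuning_window(1.7, c_levels),
                          tuning_window(0.7, c_levels),
                          tuning_window(2.1, c_levels)))
print(len(candidates))   # 81
```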
6. Practical application and validation

The performance of the proposed framework is examined in a benchmark 10-bar truss example, shown in Fig. 6 [23]. Practical specifications are used to reflect real-world considerations.

Fig. 6. Ten-bar truss.

The example has the following structural characteristics: modulus of elasticity E = 68,971 N/mm² and material density ρ = 2.768 × 10^-6 kg/mm³. The design variables are the sizes of the individual bars, which are selected from standard sizes.
Table 1 lists the outside diameter D, thickness t, and cross-sectional area A of the 72 choices of hollow pipes, based on CNS 4435 and G3102 [7]. The objective is to minimize the total cost, which consists of the material cost and the expected cost of failure. The price of material is $520/ton. The failure cost is estimated to be $5 million. The current design problem is subject to both aleatory and epistemic uncertainties, such as variability in loading, manufacturing, and material properties, and errors from estimation, modeling, and computation. The random variables include the external loads P1 and P2 and the yield stress Fy. The cross-sectional areas are associated with manufacturing errors, eA1–10. Epistemic uncertainty is accounted for by (1) the estimation errors of the loads, ep1 and ep2, and of the yield stress, ef, and (2) the computation error of stress, eσ. All the random variables and errors are shown in Table 2 with their distribution types and parameters. The system failure probability must not exceed 1 × 10^-4. The reliability constraints are imposed on stresses (compression and tension) and buckling, as described in Eqs. (7) and (8). The failure probability at the system level is computed through the union of individual component failures, which occur when the resistance is less than the maximum stress. The buckling analysis is performed to meet local design code requirements [14], where the buckling stress of individual members is calculated according to the diameter/thickness ratio. There are three levels of assessment:

(a) If $(D/t) \le 232/F_y$:

$$\sigma = 0.85 \, A \, F_\sigma \qquad (14)$$

where D is the member diameter, t is the member thickness, $F_y$ is the yield stress, σ is the buckling stress, and A is the cross-sectional area. In Eq. (14), $F_\sigma$ is defined as

$$F_\sigma = \begin{cases} \exp\left(-0.419 \, \lambda_C^2\right) F_y & \text{for } \lambda_C \le 1.5 \\[4pt] \dfrac{0.877}{\lambda_C^2} \, F_y & \text{for } \lambda_C > 1.5 \end{cases} \qquad (15)$$

where $\lambda_C$ is a dimensionless buckling parameter given by

$$\lambda_C = \frac{KL}{\pi r} \sqrt{\frac{F_y}{E}} \qquad (16)$$

where K is the effective length factor, L is the unbraced length of the member, r is the radius of gyration, and E is the modulus of elasticity.

(b) If $914/F_y \ge (D/t) > 232/F_y$:

$$\sigma = Q \times 0.85 \, A \, F_\sigma \qquad (17)$$

where Q is a reduction factor

$$Q = \frac{77}{F_y (D/t)} + \frac{2}{3} \qquad (18)$$

(c) If $(D/t) > 914/F_y$: the section is not allowed. (19)
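The three-level assessment of Eqs. (14)–(19) can be collected into a single routine. The sketch below is an assumption-laden reading of the code provisions: the negative sign in the exponential of Eq. (15) is inferred from the continuity of the two branches at λ_C = 1.5, and the unit convention behind the 232/F_y and 914/F_y limits is taken to follow the cited design code [14].

```python
import math

def buckling_capacity(D, t, A, L, r, Fy, E=68_971.0, K=1.0):
    """Three-level buckling assessment of Eqs. (14)-(19).

    Argument names mirror the symbols in the text (D, t, L, r in mm; A in
    mm^2; Fy and E in N/mm^2). The unit convention assumed by the D/t
    limits must match the design code's; consistent units are assumed.
    """
    ratio = D / t
    if ratio > 914.0 / Fy:                      # Eq. (19): section not allowed
        raise ValueError("D/t exceeds the limit allowed by the design code")
    lam_c = (K * L / (math.pi * r)) * math.sqrt(Fy / E)      # Eq. (16)
    if lam_c <= 1.5:                            # Eq. (15), first branch
        F_sigma = math.exp(-0.419 * lam_c**2) * Fy
    else:                                       # Eq. (15), second branch
        F_sigma = (0.877 / lam_c**2) * Fy
    if ratio <= 232.0 / Fy:                     # Eq. (14): no local reduction
        return 0.85 * A * F_sigma
    Q = 77.0 / (Fy * ratio) + 2.0 / 3.0         # Eq. (18): reduction factor
    return Q * 0.85 * A * F_sigma               # Eq. (17)
```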
As the RBDO problem involves discrete decision variables and non-smooth performance functions, traditional gradient-based optimizers are less than satisfactory. A bypass is to relax the discrete search space into a continuous space, use gradient-based optimizers to solve the relaxed problem, and then round the solution to the nearest discrete option. Unfortunately, this approach does not work here. When we tried popular gradient-based optimizers, SQP and GRG2, on the problem at hand, the search was highly dependent on the starting points: the optimizers either failed to converge or rested at solutions close to the starting point, with very little progress.

The AB-PSO algorithm takes 500 iterations with a swarm size of 40 to perform the search. Together, 2 × 10^4 solutions are evaluated for each run of the AB-PSO algorithm. This number of function evaluations represents a very efficient search, since the entire search space is composed of 72^10 = 3.74 × 10^18 solutions (72 choices for each of 10 members). Thus, the AB-PSO algorithm searches a tiny portion of the search space (on the order of 10^-14) and yet finds very competitive solutions, as shown below. The reliability constraint has to be evaluated for each solution. Since the failure probability must be less than 1 × 10^-4, direct MCS would require at least 29,957 samples to perform the evaluation [28]. Subset simulation, in contrast, is far more efficient. In the subset simulation procedure, the number of conditional levels is chosen to be 4 with a conditional failure probability equal to 0.1, and the number of samples at each conditional level is set to 500.
Table 1
Bar size list.

Choice  D (mm)  t (mm)  A (mm²) | Choice  D (mm)  t (mm)  A (mm²) | Choice  D (mm)  t (mm)  A (mm²) | Choice  D (mm)  t (mm)  A (mm²)
#1   21.7   2     123.8  | #19  101.6  3.2    989.2  | #37  216.3  7      4602.7  | #55  355.6  12    12953.4
#2   27.2   2     158.3  | #20  101.6  4     1226.5  | #38  216.3  8      5235.2  | #56  406.4  7.9    9890.2
#3   27.2   2.3   179.9  | #21  101.6  5     1517.4  | #39  216.3  8.2    5360.9  | #57  406.4  9     11236.2
#4   34     2.3   229.1  | #22  114.3  3.2   1116.9  | #40  267.4  6      4927.3  | #58  406.4  9.5   11845.5
#5   42.7   2.3   291.9  | #23  114.3  3.5   1218.3  | #41  267.4  6.6    5407.6  | #59  406.4  12    14868.5
#6   42.7   2.5   315.7  | #24  114.3  4.5   1552.3  | #42  267.4  7      5726.5  | #60  406.4  12.7  15707.9
#7   48.6   2.3   334.5  | #25  114.3  5.6   1912.4  | #43  267.4  8      6519.4  | #61  406.4  16    19623.6
#8   48.6   2.5   362.1  | #26  139.8  3.6   1540.4  | #44  267.4  9      7306.1  | #62  457.2  9     12672.6
#9   48.6   2.8   402.9  | #27  139.8  4     1706.5  | #45  267.4  9.3    7540.9  | #63  457.2  9.5   13361.7
#10  48.6   3.2   456.4  | #28  139.8  4.5   1912.8  | #46  318.5  6      5890.5  | #64  457.2  12    16783.6
#11  60.5   2.3   420.5  | #29  139.8  6     2522.1  | #47  318.5  6.9    6754.6  | #65  457.2  12.7  17734.8
#12  60.5   3.2   576.0  | #30  165.2  4.5   2271.8  | #48  318.5  8      7803.7  | #66  457.2  16    22177.1
#13  60.5   4     710.0  | #31  165.2  5     2516.4  | #49  318.5  9      8750.9  | #67  508    7.9   12411.8
#14  76.3   2.8   646.5  | #32  165.2  6     3000.8  | #50  318.5  10.31  9982.2  | #68  508    9     14108.9
#15  76.3   3.2   734.9  | #33  165.2  7.1   3526.5  | #51  355.6  6.4    7021.1  | #69  508    9.5   14877.8
#16  76.3   4     908.5  | #34  216.3  4.5   2994.3  | #52  355.6  7.9    8629.4  | #70  508    12    18698.8
#17  89.1   2.8   759.1  | #35  216.3  5.8   3835.6  | #53  355.6  9      9799.9  | #71  508    12.7  19761.6
#18  89.1   3.2   863.6  | #36  216.3  6     3964.1  | #54  355.6  9.5   10329.4  | #72  508    14    21727.3
Table 2
Random variables.

Variables | Distribution    | Mean                                            | Dispersion
P1, P2    | Extreme type I  | 296,666 N                                       | 10% c.o.v.
A1–10     | Uniform         | -                                               | ±4%
Fy        | Lognormal       | 517.28 N/mm² (9th bar), 172.43 N/mm² (others)   | 8% c.o.v.
ep1, ep2  | Uniform         | 0                                               | ±10%
eA1–10    | Uniform         | 0                                               | ±3%
ef        | Uniform         | 0                                               | ±20%
eσ        | Uniform         | 0                                               | ±5%
Because 500 × 0.1 = 50 samples would be shared between consecutive levels, subset simulation needs only (500 − 50) × 4 + 50 = 1850 samples. This yields significant savings (1850/29,957 = 6.18%) in computational time.

Since the AB-PSO algorithm is a stochastic search process, multiple runs are necessary to investigate its general performance. The results of 10 independent runs of the AB-PSO algorithm are listed, in ascending order of total cost, in Table 3. To show the merits of the AB-PSO algorithm, the results are compared with those of standard PSO and GA. GA is inspired by the processes of biological selection and evolution to perform random search. It is well known for its ability to handle combinatorial optimization problems using bit-string representation [21]. The solutions in the tested GA are stored in integer-coded chromosomes. Related operators include tournament selection, uniform crossover, and creep mutation. Tournament selection randomly chooses a set of chromosomes to compete, and the ones with lower cost are selected for crossover. Uniform crossover utilizes a random mask to indicate the portion of the parent
chromosome that is going to be passed on to the child solution. In creep mutation, chromosomes are changed by a small increment or decrement, such as increasing or decreasing a bar size by 1. The present study uses these simple GA operators, as opposed to other variants such as fmGA, gemGA, LLGA, and BOA [17]. To be fair, all three algorithms take the same number of iterations with the same population size. The control parameters of standard PSO and GA were fine-tuned through extensive experiments (more than 10 runs), whereas the AB-PSO algorithm, with its auto-tuning capability, does not need such laborious adjustments.

Among all the trials in Table 3, the AB-PSO algorithm finds the best solution with the lowest cost, $443,557. In addition to the overall best solution, the general performance of the AB-PSO algorithm is consistently good, as it leads to the lowest median ($487,321) and the smallest standard deviation ($37,984) compared with standard PSO and GA. Here we use the median, instead of the mean, to measure average performance because the mean value of total cost cannot be associated with any particular solution whereas the median can. Observe that the AB-PSO algorithm, with its boundary-approaching capability, pushes the solutions toward the boundary of the safe region more effectively, as the associated failure probabilities are generally closer to the target of 1 × 10^-4. For future reference, the 9 solutions found by the AB-PSO algorithm are listed in Table 4, where the size of each bar is selected from Table 1.

To derive more rigorous conclusions, hypothesis tests are performed to examine the superiority of the AB-PSO algorithm over standard PSO and GA.
Table 3
Optimization results: AB-PSO, PSO, and GA.

Trial    | Total cost ($): AB-PSO / PSO / GA | Material cost ($): AB-PSO / PSO / GA | Failure probability: AB-PSO / PSO / GA
1        | 443,557 / 477,084 / 506,671 | 443,228 / 476,722 / 506,613 | 6.58E−05 / 7.25E−05 / 1.16E−05
2        | 471,487 / 572,485 / 529,869 | 471,032 / 572,177 / 529,635 | 9.11E−05 / 6.17E−05 / 4.68E−05
3        | 478,744 / 577,541 / 532,709 | 478,541 / 577,219 / 532,280 | 4.05E−05 / 6.44E−05 / 8.59E−05
4        | 486,817 / 629,393 / 539,599 | 486,538 / 628,903 / 539,319 | 5.58E−05 / 9.80E−05 / 5.59E−05
5        | 487,321 / 644,585 / 547,200 | 486,855 / 644,403 / 547,123 | 9.32E−05 / 3.64E−05 / 1.54E−05
6        | 488,779 / 676,429 / 551,733 | 488,480 / 676,066 / 551,700 | 5.98E−05 / 7.25E−05 / 6.45E−06
7        | 502,793 / 704,860 / 592,376 | 502,410 / 704,523 / 592,240 | 7.66E−05 / 6.73E−05 / 2.72E−05
8        | 535,896 / 724,299 / 630,221 | 535,519 / 724,198 / 629,874 | 7.54E−05 / 2.02E−05 / 6.93E−05
9        | 573,759 / 747,652 / 640,554 | 573,299 / 747,319 / 640,463 | 9.21E−05 / 6.67E−05 / 1.81E−05
10       | 635,451 / 871,355 / 712,261 | 634,991 / 871,105 / 712,167 | 9.19E−05 / 5.00E−05 / 1.88E−05
Median   | 487,321 / 644,585 / 547,200 | 486,855 / 644,403 / 547,123 | 7.54E−05 / 6.67E−05 / 2.72E−05
Std Dev. |  37,984 /  86,140 /  46,780 |  37,948 /  86,173 /  46,776 | 1.83E−05 / 2.23E−05 / 2.83E−05
Because the normality assumption is doubtful (i.e., the total cost may not necessarily have a symmetrical normal distribution), we adopt the non-parametric Mann–Whitney U test [36] to examine the following two hypotheses.
Table 4
Solutions found by AB-PSO.

Solution | Bar 1 | Bar 2 | Bar 3 | Bar 4 | Bar 5 | Bar 6 | Bar 7 | Bar 8 | Bar 9 | Bar 10 | Total cost ($)
1        | #39   | #23   | #38   | #32   | #26   | #17   | #34   | #37   | #29   | #21    | 443,557
2        | #43   | #30   | #38   | #27   | #29   | #26   | #34   | #38   | #26   | #25    | 471,487
3        | #36   | #23   | #44   | #34   | #25   | #29   | #46   | #24   | #25   | #28    | 478,744
4        | #37   | #24   | #44   | #29   | #21   | #29   | #38   | #31   | #38   | #15    | 486,817
5        | #41   | #28   | #54   | #23   | #22   | #24   | #40   | #28   | #26   | #27    | 487,321
6        | #42   | #41   | #45   | #26   | #16   | #20   | #37   | #32   | #27   | #25    | 488,779
7        | #37   | #33   | #43   | #19   | #35   | #36   | #37   | #32   | #20   | #31    | 502,793
8        | #41   | #41   | #53   | #6    | #16   | #30   | #43   | #26   | #2    | #35    | 535,896
9        | #46   | #25   | #48   | #33   | #40   | #27   | #30   | #43   | #21   | #30    | 573,759
Hypothesis 1. AB-PSO versus standard PSO

H0: There is no difference in the performance of the AB-PSO algorithm and that of the standard PSO algorithm.
H1: The AB-PSO algorithm is significantly better than the standard PSO algorithm in the sense that the former can find smaller costs than the latter.

The Mann–Whitney test starts with ranking all the solutions in ascending order. The U statistic is equal to the sum of the numbers of ranks from the AB-PSO solutions that are less than each of the ranks from the standard PSO solutions. Here, the U statistic is equal to 8, which yields a one-tailed p-value of 0.0020. As the p-value is lower than 0.01, there is very strong evidence (>0.99 confidence level) to reject the null hypothesis and thus conclude that the AB-PSO algorithm is statistically superior to the standard PSO algorithm.
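The comparisons can be reproduced with SciPy using the total-cost columns of Table 3. Note that scipy.stats.mannwhitneyu reports the U statistic for the first sample, so the printed statistic can differ from the paper's quoted values depending on the counting convention; the one-tailed p-values, which drive the conclusions here and for Hypothesis 2 below, are what matter.

```python
from scipy.stats import mannwhitneyu

# Total costs from Table 3 (dollars), ten trials per algorithm.
ab_pso = [443557, 471487, 478744, 486817, 487321,
          488779, 502793, 535896, 573759, 635451]
pso    = [477084, 572485, 577541, 629393, 644585,
          676429, 704860, 724299, 747652, 871355]
ga     = [506671, 529869, 532709, 539599, 547200,
          551733, 592376, 630221, 640554, 712261]

# One-tailed test of H1: AB-PSO finds smaller costs than the competitor.
for name, rival in [("PSO", pso), ("GA", ga)]:
    u, p = mannwhitneyu(ab_pso, rival, alternative="less")
    print(f"AB-PSO vs {name}: U = {u}, one-tailed p = {p:.4f}")
```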
Similarly, we can examine the second hypothesis.

Hypothesis 2. AB-PSO versus GA

H0: There is no difference in the performance of the AB-PSO algorithm and that of the GA.
H1: The AB-PSO algorithm is significantly better than the GA in the sense that the former can find smaller costs than the latter.

In this case, the U statistic is 9, which yields a one-tailed p-value of 0.0013. Again, the statistical evidence is strong enough (>0.99 confidence level) to reject the null hypothesis and conclude that the AB-PSO algorithm is significantly superior to GA.

Fig. 7. Performance comparisons: AB-PSO, PSO, and GA.

Fig. 7 compares the convergence histories of AB-PSO with those of standard PSO and GA, with respect to the best, median, and worst performances. In all cases, standard PSO and GA, albeit converging fast, get trapped at sub-optimal solutions. By contrast, the AB-PSO algorithm continues to strive for the global optimum.

7. Closing remarks

Practical RBDO problems involve discrete decision variables and non-smooth performance functions. This class of combinatorial optimization problems poses tremendous difficulties for conventional gradient-based optimizers. In an effort to overcome such difficulties, the present study develops a new metaheuristic, the AB-PSO algorithm, which does not require gradient calculations to proceed. The AB-PSO algorithm is equipped with two new features: auto-tuning and boundary-approaching. The former fine-tunes the control parameters automatically whereas the latter accelerates the search by encouraging movement toward the limit state surface. To evaluate small failure probabilities, subset simulation is adopted to reduce simulation time while maintaining accuracy when the performance functions are highly nonlinear and non-smooth. The AB-PSO algorithm is integrated with subset simulation in the proposed framework to seek greater effectiveness and higher efficiency.

The proposed framework is implemented in a 10-bar truss example, whose component sizes are selected from a commercially available set while satisfying current design code requirements. Multiple trials of the AB-PSO algorithm confirm its superior and consistent performance. The hypotheses that the AB-PSO algorithm outperforms standard PSO and GA are strongly supported by statistical evidence.

The proposed framework is an initiative to integrate advanced metaheuristics with software packages that perform structural reliability analysis, such as ANSYS and ABAQUS. Such integration would allow designers to perform RBDO on large-scale design problems with high computational efficiency. Future research efforts will be directed toward verifying the practical use of the proposed framework.

Acknowledgments

The study was supported by a grant from the Taiwan Building Technology Center at National Taiwan University of Science and Technology (Taiwan Tech).

References

[1] A.H.S. Ang, W.H. Tang, Probability Concepts in Engineering Planning and Design, Decision, Risk and Reliability, vol. II, John Wiley and Sons, New York, NY, 1984.
[2] S.K. Au, J.L. Beck, Estimation of small failure probabilities in high dimensions by subset simulation, Probabilistic Engineering Mechanics 16 (4) (2001) 263–277.
[3] S.K. Au, J.L. Beck, Subset simulation and its application to seismic risk based on dynamic analysis, Journal of Engineering Mechanics, ASCE 129 (8) (2003) 901–917.
[4] T. Bäck, D.B. Fogel, Z. Michalewicz, Evolutionary Computation 2: Advanced Algorithms and Operators, Institute of Physics, Bristol, UK, 2000.
[5] J.H. Chen, L.R. Yang, M.C. Su, Comparison of SOM-based optimization and particle swarm optimization for minimizing the construction time of a secant pile wall, Automation in Construction 18 (6) (2009) 844–848.
[6] M. Clerc, Particle Swarm Optimization, ISTE, London, UK, 2006.
[7] CNS 4435, G3102, Carbon steel pipes for general structures, Chinese National Bureau of Standards, Ministry of Economic Affairs, Taiwan, 1995, http://www.cnsonline.com.tw/en/.
[8] L. Condra, Reliability Improvement with Design of Experiments, 2nd ed., CRC Press, Boca Raton, FL, 2001.
[9] K. Deb, S. Gupta, D. Daum, J. Branke, A.K. Mall, D. Padmanabhan, Reliability-based optimization using evolutionary algorithms, IEEE Transactions on Evolutionary Computation 13 (2009) 1054–1074.
[10] Y. del Valle, G.K. Venayagamoorthy, S. Mohagheghi, J.C. Hernandez, R.G. Harley, Particle swarm optimization: basic concepts, variants and applications in power systems, IEEE Transactions on Evolutionary Computation 12 (2) (2008) 171–195.
[11] A. Der Kiureghian, Measures of structural safety under imperfect states of knowledge, Journal of Structural Engineering 115 (5) (1989) 1119–1140.
[12] X. Du, W. Chen, A most probable point based method for uncertainty analysis, Journal of Design and Manufacturing Automation 4 (1) (2001) 47–66.
[13] X. Du, W. Chen, Sequential optimization and reliability assessment method for efficient probabilistic design, Journal of Mechanical Design 126 (2) (2004) 225–233.
[14] Editorial Committee of Construction and Planning Agency of Ministry of Interior (ECCPAMI), Design and Technique Specifications of Steel Structures for Buildings, Taipei, Taiwan, 2007 (in Chinese).
[15] A.E. Eiben, J.D. Smith, Introduction to Evolutionary Computing, Springer, Berlin, Heidelberg, New York, 2003.
[16] D.M. Frangopol, K. Maute, Life-cycle reliability-based optimization of civil and aerospace structures, Computers & Structures 81 (7) (2003) 397–410.
[17] D.E. Goldberg, The Design of Innovation: Lessons from and for Competent Genetic Algorithms, Kluwer Academic Publishers, Boston, 2002.
[18] R.V. Grandhi, L. Wang, Reliability-based structural optimization using improved two-point adaptive nonlinear approximations, Finite Elements in Analysis and Design 29 (1) (1998) 35–48.
[19] S. Gunawan, Y. Papalambros, Reliability optimization with mixed continuous-discrete random variables and parameters, Journal of Mechanical Design 129 (2) (2007) 158–165.
[20] A. Harbitz, An efficient sampling method for probability of failure calculation, Structural Safety 3 (1986) 109–115.
[21] R.L. Haupt, S.E. Haupt, Practical Genetic Algorithms, 2nd ed., Wiley, New York, NY, 2004.
[22] J. Kennedy, R.C. Eberhart, Y. Shi, Swarm Intelligence, Morgan Kaufmann, San Francisco, CA, 2001.
[23] S. Kumar, R. Pippy, E. Acar, N.H. Kim, R.T. Haftka, Approximate probabilistic optimization using Exact-Capacity-Approximate-Response-Distribution (ECARD), Structural and Multidisciplinary Optimization 38 (6) (2009) 613–626.
[24] D.N. Kumar, M.J. Reddy, Multipurpose reservoir operation using particle swarm optimization, Journal of Water Resources Planning and Management, ASCE 133 (3) (2007) 192–201.
[25] E. Laskari, K. Parsopoulos, M. Vrahatis, Particle swarm optimization for integer programming, Proc. of the IEEE Congress on Evolutionary Computation, 2002, pp. 1582–1587.
[26] J. Liang, Z. Mourelatos, J. Tu, A single loop method for reliability based design optimization, Proceedings of the ASME Design Engineering Technical Conferences, Salt Lake City, UT, 2004.
[27] R.E. Melchers, Radial importance sampling for structural reliability, Journal of Engineering Mechanics 116 (1) (1990) 189–203.
[28] R.E. Melchers, Structural Reliability Analysis and Prediction, 2nd ed., John Wiley and Sons Ltd., New York, NY, 1999.
[29] C.H. Papadimitriou, K. Steiglitz, Combinatorial Optimization: Algorithms and Complexity, Prentice Hall, Englewood Cliffs, NJ, 1982.
[30] S. Rahman, Meshfree methods in computational stochastic mechanics, in: A. Haldar (Ed.), Recent Developments in Reliability-based Civil Engineering, World Scientific, Hackensack, NJ, 2006 (Ch. 10).
[31] J. Robinson, Y. Rahmat-Samii, Particle swarm optimization in electromagnetics, IEEE Transactions on Antennas and Propagation 52 (2) (2004) 397–407.
[32] G.I. Schuëller, H.J. Pradlwarter, P.S. Koutsourelakis, A critical appraisal of reliability estimation procedures for high dimensions, Probabilistic Engineering Mechanics 19 (4) (2004) 463–474.
[33] S. Shan, G.G. Wang, Reliable design space and complete single-loop reliability-based design optimization, Reliability Engineering & System Safety 93 (8) (2008) 1218–1230.
[34] Y. Shi, R.C. Eberhart, A modified particle swarm optimizer, Proceedings of the IEEE International Conference on Evolutionary Computation, IEEE Press, Piscataway, NJ, 1998, pp. 69–73.
[35] W.H. Slade, H.W. Ressom, M.T. Musavi, R.L. Miller, Inversion of ocean observations using particle swarm optimization, IEEE Transactions on Geoscience and Remote Sensing 42 (9) (2004) 1915–1923.
[36] P. Sprent, N.C. Smeeton, Applied Nonparametric Statistical Methods, 4th ed., Chapman & Hall/CRC, Boca Raton, FL, 2007.
[37] P.Y. Tabakov, M. Walker, A technique for optimally designing engineering structures with manufacturing tolerances accounted for, Engineering Optimization 39 (1) (2007) 1–15.
[38] A.A. Taflanidis, J.L. Beck, Stochastic subset optimization for reliability optimization and sensitivity analysis in system design, Computers & Structures 87 (5–6) (2009) 318–331.
[39] Y. Tsompanakis, N. Lagaros, M. Papadrakakis, Structural Design Optimization Considering Uncertainties, Taylor and Francis, London, UK, 2007.
[40] Y.T. Wu, H. Millwater, T. Cruse, Advanced probabilistic structural analysis method for implicit performance functions, Journal of the American Institute of Aeronautics and Astronautics 12 (1990) 255–276.
[41] Y.T. Wu, Y. Shin, R.H. Sues, C.A. Cesare, Probabilistic function evaluation system (ProFES) for reliability based design, Structural Safety 28 (2006) 164–195.
[42] I.T. Yang, Using elitist particle swarm optimization to facilitate bicriterion time-cost trade-off analysis, Journal of Construction Engineering and Management 133 (7) (2007) 498–505.
[43] I.T. Yang, Distribution-free Monte Carlo simulation: premise and refinement, Journal of Construction Engineering and Management, ASCE 134 (3) (2008) 352–360.