Structural Safety 32 (2010) 384–392
Design optimization using Subset Simulation algorithm

Hong-Shuang Li, Siu-Kui Au *

Department of Building and Construction, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon, Hong Kong
Article history: Available online 3 April 2010

Keywords: Subset Simulation; Design optimization; Constraint-handling; Feasibility-based rule
Abstract: This paper presents a global optimization algorithm based on Subset Simulation for deterministic optimal design under general multiple constraints. The proposed algorithm is population-based, realized with Markov Chain Monte Carlo and a simple evolutionary strategy. Problem-specific constraints are handled by a feasibility-based fitness function that reflects their degree of violation. Based on the constraint fitness function, a double-criterion sorting algorithm is used to guarantee that feasible solutions are given priority over infeasible ones before their objective function values are ranked. The efficiency and robustness of the proposed algorithm are illustrated using three benchmark optimization design problems. Comparison is made with other well-known stochastic optimization algorithms, such as genetic algorithm, particle swarm optimization and harmony search. © 2010 Elsevier Ltd. All rights reserved.
1. Introduction

Many constrained optimization problems arise from the modern engineering design process, with systems often modeled with non-linear, multimodal or even discontinuous behavior [1,2]. The development of efficient yet robust constrained optimization algorithms under multiple general constraints has been an important branch of research in the engineering design of sophisticated systems. Despite the extensive development achieved in the past decades, design optimization is still a challenging problem. One class of optimization algorithms is based on traditional mathematical optimization [1]. The other is based on stochastic search. The major disadvantage of traditional optimization algorithms is that their solutions are vulnerable to being trapped in a local optimum when the constrained optimization problem is non-convex, possibly due to the non-convex nature of the objective function or the complex geometry of the feasible domain shaped by the constraints. A variety of stochastic optimization algorithms inspired by natural mechanisms have been proposed for global optimization in many research fields [3]. Genetic Algorithm (GA) [4] and Simulated Annealing (SA) [5] are amongst the most successful heuristic techniques. Other stochastic optimization algorithms include Ant Colony Optimization (ACO) [6], particle swarm optimization (PSO) [7,8], etc. It is commonly believed that there is no single universal method capable of solving all kinds of global optimization problems efficiently, because each method has its own definition and heuristics to improve efficiency. In this paper, a stochastic search algorithm for design optimization is developed based on Subset Simulation.

* Corresponding author. Tel.: +852 21942769; fax: +852 27887612. E-mail addresses: [email protected] (H.-S. Li), [email protected] (S.-K. Au). doi:10.1016/j.strusafe.2010.03.001

Subset Simulation
(SS) is an efficient Monte Carlo technique originally developed for structural reliability problems [9–11]. The method takes advantage of Markov Chain Monte Carlo (MCMC) and a simple evolutionary strategy [12]. Based on the idea that an extreme event (optimization problem) can be considered as a rare event (reliability problem), this paper explores the application of Subset Simulation to solving constrained optimization problems. Subset Simulation is robust to high dimensional problems and its selection of algorithm parameters is reasonably simple. The original Subset Simulation algorithm does not handle constraints, however, and effort is made in this work to adapt the algorithm to handle constraints in the generation of samples during Markov Chain Monte Carlo simulation [13]. Constraint-handling is an important subject in constrained optimization theory and a number of strategies exist for evolutionary algorithms [14]. Motivated by Dong et al. [15], a modified feasibility-based rule is adopted to handle general constraints in this work, which segregates feasible and infeasible solutions and optimizes the objective function simultaneously in a natural way. The remaining sections of this paper are organized as follows. Section 2 explains the connection between optimization problem and reliability problem. Section 3 presents the proposed algorithm. Section 4 discusses the associated computational issues and choice of algorithm parameters. The proposed algorithm is then tested using three benchmark design optimization problems, which illustrate the robustness and efficiency of the proposed algorithm. 2. Connection between optimization problem and reliability problem Consider the constrained optimization problem formulated as
max h(x)   s.t.   g_i(x) ≤ 0,  i = 1, ..., L;  x ∈ S        (1)
where h : R^n → R is the objective function, g_i : R^n → R (i = 1, ..., L) is the ith inequality constraint, L is the number of inequality constraints; x ∈ R^n represents the set of design variables, and S ⊆ R^n is the definition domain or the search space. To view an optimization problem in the context of a reliability problem, consider the input variables x as random with specified (user-defined) probability distributions, so that the objective function h(x) is now random and has its own probability density function (PDF) and cumulative distribution function (CDF). This concept of 'augmenting' deterministic variables as random ones is merely a computational technique for solving complex problems taking advantage of Monte Carlo Simulation [16]. Let h_opt be the global maximum of h, attained at x = x_opt. By the defining properties of a CDF, it is monotonic non-decreasing, right-continuous and its value at h = h_opt is unity. Consider the reliability problem of finding the "failure probability" P_F defined as
P_F = P(F) = P(h(x) ≥ h_opt)        (2)
where the failure event is F = {h(x) ≥ h_opt}. Clearly, this probability is zero because h_opt is the global maximum. In the context of an optimization problem, however, the failure probability itself is of little concern; the attention is on the point or region where the objective function attains its largest value(s). The rare failure region in the reliability problem corresponds to the region where the objective function attains its global maximum. This suggests the possibility of converting a global optimization problem into a rare event simulation problem, where the feasible, potentially optimal solutions are analogous to the rare samples close to failure in the reliability problem. The rare event simulation problem can in turn be handled by Subset Simulation.

3. Proposed algorithm

In the spirit of Subset Simulation for reliability analysis [9–11], P(h(x) ≥ h_opt) in Eq. (2) can be expressed as a product of a sequence of conditional probabilities. During Subset Simulation, a series of intermediate threshold values {h_i : i = 1, 2, ...} is generated, corresponding to boundaries consistent with the specified conditional probabilities {p_i : i = 1, 2, ...}
p_1 = P(h(x) ≥ h_1) = P(F_1)        (3)

p_i = P(h(x) ≥ h_i | h(x) ≥ h_{i−1}) = P(F_i | F_{i−1}),  i > 1        (4)
where Fi is the intermediate event determined by the intermediate objective function value hi. By the definition of conditional probability, we have
P_F = ∏_{i≥1} p_i        (5)
Provided that the constraints have been taken care of in the generation of samples (see later), we can generate samples (solutions) using the modified Metropolis–Hastings algorithm [9–11] that progressively move towards the maximum, so that the rare event region can be gradually explored. For an optimization problem with at least one global optimum, one can expect that h_i → h_opt as P_F → 0. During implementation, the Subset Simulation algorithm generates a sequence of intermediate objective function values {h_i : i = 1, 2, ...} which converges to the global optimum h_opt. At the same time, the samples gradually migrate from a broad region implied by the original parameter PDF (chosen by the user) towards a narrowing region containing the global optimal solution (design vector) x_opt.
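To make the level-threshold idea concrete, the following sketch (our own, not the authors' implementation) generates a sequence of rising intermediate thresholds h_i by repeatedly keeping the top p fraction of a population and repopulating around the survivors; the toy objective h and the jitter step standing in for the modified Metropolis–Hastings update are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def h(x):
    # toy multimodal objective; global maximum h = 1 at x = 0
    return np.exp(-x**2) * np.cos(3 * x)**2

N, p = 1000, 0.5
x = rng.normal(0.0, 3.0, N)  # level-0 population from a broad artificial PDF

thresholds = []
for level in range(8):
    hx = h(x)
    h_i = np.quantile(hx, 1 - p)   # intermediate threshold: the (1 - p) quantile
    thresholds.append(float(h_i))
    seeds = x[hx >= h_i]           # samples conditional on the intermediate event
    # crude stand-in for the MCMC step: resample the seeds and jitter them
    x = seeds[rng.integers(0, seeds.size, N)] + rng.normal(0.0, 0.5 / (level + 1), N)
```

Run with a fixed seed, the recorded thresholds increase monotonically towards the global maximum h_opt = 1, mirroring the convergence h_i → h_opt described above.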
3.1. Constraint-handling

One central problem in applying stochastic optimization techniques to the constrained optimization problem is how to handle general multiple constraints. Substantial research has been devoted to handling constraints in evolutionary algorithms (EA) as an important step in the design process [14,17]. Simulated annealing (SA), on the other hand, has not yet gained similar popularity because it requires special strategies to maintain diversity [18]. When the feasible domain consists of several disconnected sub-domains, SA may encounter difficulty in exploring the whole feasible domain. Similar to SA, Subset Simulation is also built on Markov Chain Monte Carlo to generate candidate samples (solutions). However, SA is typically a point-to-point algorithm, while Subset Simulation operates on a population of samples. To a certain extent, Subset Simulation may be regarded as a population-based SA with a simple evolutionary strategy for reliability problems. In this work, we borrow constraint-handling techniques from population-based stochastic optimization algorithms to develop appropriate strategies to be incorporated in Subset Simulation.

Existing constraint-handling techniques in the literature can be roughly classified into four groups: (1) rejection method; (2) penalty function method; (3) multi-objective function method; (4) specific (algorithm-dependent) methods. In the rejection method, only the feasible solutions are kept in the search process and infeasible solutions are discarded. It is difficult to approach the feasible region if it occupies a comparatively small part of the search space. By construction, the rejection method cannot explore the infeasible region. In contrast, the penalty function method also exploits the infeasible solutions in the searching process by transforming a constrained problem into an unconstrained one through the incorporation of a penalty function into the objective function. The main disadvantage of this technique is the difficulty in selecting the penalty function, because it is problem- and scale-dependent [14].

The multi-objective optimization concept has recently been adapted to handle constraints in GA [19], which converts a single-objective optimization problem into a multi-objective one and then applies multi-objective techniques to solve it. This strategy by-passes the selection of a penalty function. There is, however, one major difference in the optimization aim between multi-objective optimization and handling constraints using the multi-objective optimization concept. The former aims at finding a Pareto optimal set in the search space, while the latter requires the associated multi-objective optimization problem to degenerate into a single-objective one in the feasible region, because the optimal value is attained at one point instead of a point set. Strategies based on other heuristics have also been developed, such as 'gene repair' in GA and 'fly-back' in PSO [20]. However, these methods lack generality.

It is desirable to incorporate the constraints during the generation of samples so that the population of samples already (at least roughly) reflects the effects of the constraints and hence the feasible domain. For this reason, we adopt the constraint fitness priority based ranking method first proposed by Dong et al. [15] for handling constraints in PSO. The method automatically takes care of the constraints during sample generation and so by-passes the need to modify the objective function. In the context of the proposed algorithm, a new constraint fitness function for handling constraints is proposed in this work. A 'constraint violation function' is first defined for handling constraints and ranking samples. For an inequality constraint g_i(x) ≤ 0, it is defined as

ν_i(x) = { 0        if g_i(x) ≤ 0
         { g_i(x)   if g_i(x) > 0        (6)

The overall constraint fitness function that accounts for all constraints is defined as

F_con(x) = −max_i ν_i(x)        (7)
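The constraint fitness function can be sketched as follows (our own coding, with the sign convention that F_con is zero for a feasible solution and negative otherwise, so that larger values are always better; the example constraints g1 and g2 are illustrative assumptions):

```python
import numpy as np

def constraint_fitness(x, constraints):
    """Constraint fitness in the spirit of Eqs. (6)-(7): zero for a feasible
    solution, minus the worst constraint violation otherwise."""
    # Eq. (6): violation nu_i = g_i(x) if g_i(x) > 0, else 0
    nu = np.array([max(g(x), 0.0) for g in constraints])
    worst = nu.max()  # Eq. (7): the worst violated constraint dominates
    return 0.0 if worst == 0.0 else -worst

# two illustrative inequality constraints of the form g_i(x) <= 0
g1 = lambda x: x[0] + x[1] - 1.0   # x1 + x2 <= 1
g2 = lambda x: -x[0]               # x1 >= 0

f_feas = constraint_fitness([0.2, 0.3], [g1, g2])    # feasible: 0.0
f_infeas = constraint_fitness([1.0, 0.5], [g1, g2])  # violates g1 by 0.5: -0.5
```

Note that only the worst violation enters the fitness, so a solution violating one constraint badly is ranked below a solution violating several constraints mildly.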
It is clear that F_con(x) = 0 if and only if the trial solution x belongs to the feasible domain. The expression in Eq. (7) reveals that all solutions in the feasible domain have the same value of F_con(x) (equal to zero), while the worst violated constraint dominates the violation level of an infeasible solution. The constraint fitness function used here differs from that of Dong et al. [15]: the actual violation is used to reflect the magnitude of violation in this work, whereas a relative measure was adopted in Dong et al. [15]. Another difference is that the most serious violation among all constraints is used to construct the constraint fitness function, instead of a sum of relative ratios with random weighting factors. In the proposed algorithm, samples with favorable values of both the constraint fitness function and the objective function are selected from the population at the current level to provide "seeds" for the next simulation level. The selection process respects the constraint fitness function first and then the objective function among those samples that are feasible. A double-criterion ranking method is used, where the samples are first sorted in ascending order of their constraint fitness function values [15]
F_con(x_1) ≤ F_con(x_2) ≤ ⋯ ≤ F_con(x_N)        (8)
As mentioned before, the constraint fitness function value of a sample in the feasible domain is zero, so the order of the feasible samples cannot be determined by Eq. (8) alone. However, the feasible samples always appear together at the best end of the list, with the same value F_con(x) = 0. In the next step, these feasible samples are sorted again according to their objective function values, while the positions of the infeasible samples in the list remain unchanged. Thus, a unique sequence is determined by filtering through these two criteria. The first ranking, based on the constraint fitness function, is designed to search for the feasible domain, while the second ranking, based on the objective function, looks for the optimal solution among the feasible candidates. In this manner the searches for the feasible region and for the optimal solution proceed together. This method preserves the property that the constraint fitness value of a feasible solution is always better than that of an infeasible one, and it by-passes the need to trade off between the objective and penalty functions [15], which can be nontrivial due to different scaling.

3.2. Procedure

Based on the aforementioned considerations, the proposed algorithm is described as follows:

3.2.1. Step 0
Select the distribution parameters of the input variables. In the original problem each input variable is a deterministic parameter, but it is augmented as a random variable in the presented algorithm with an artificial probability density function (PDF) f(x).

3.2.2. Step 1
Generate N independent and identically distributed samples {x_1, x_2, ..., x_N} by direct Monte Carlo Simulation according to the artificial distributions. Each sample x_i (i = 1, ..., N) has n components, i.e. x_i = {x_i^(1), x_i^(2), ..., x_i^(n)}^T, where x_i^(j) (j = 1, ..., n) is generated from f_j(x^(j)).
Calculate the constraint fitness function values and the objective function values of the samples, and then sort according to the double-criterion ranking algorithm described in Section 3.1. Refer to x_N as the best solution and x_1 as the worst. From the unique sequence, obtain the N(1 − p_1)th sample x_{N(1−p_1)} with corresponding h_{1,N(1−p_1)} and F_con(x_{N(1−p_1)}), assuming that p_1 and N are chosen such that N(1 − p_1) is an integer. The first pair of intermediate threshold values is chosen as h_1 = h_{1,N(1−p_1)} and F_c1 = F_con(x_{N(1−p_1)}). The subscript '1' here denotes simulation level 1. The first intermediate event F_1 is defined as
F_1 = { {h ≥ h_1}        if F_c1 = 0
      { {F_con ≥ F_c1}   if F_c1 < 0        (9)
That is, if all of the top p_1 N samples satisfy the constraints (i.e., with F_con = 0), they are segregated based on their objective function values h. Otherwise, they are segregated based on their constraint fitness function values F_con. This definition of the intermediate event F_1 is consistent with the double-ranking criteria, where the constraint criterion takes priority. By considering the cases {F_c1 = 0} and {F_c1 < 0} separately, it can readily be shown that the sample estimate of P(F_1) is by construction equal to p_1. There are always p_1 N samples belonging to F_1, which provide "seeds" for generating new samples at the next simulation level. Note that not all samples in F_1 satisfy the constraints (when F_c1 < 0). This is intended to allow infeasible samples that 'almost' satisfy the constraints an opportunity to generate potentially feasible samples.

3.2.3. Step 2
The modified Metropolis–Hastings algorithm [9–11] is employed for generating samples conditional on the intermediate event. At the kth simulation level, starting from each sample in F_{k−1}, a Markov Chain is generated with the same conditioning. Since the initial samples obey the conditional distribution, these Markov Chains are automatically in their stationary state and their samples are distributed according to the conditional distribution. The length of each Markov Chain is 1/p_{k−1} (k = 2, ...), where p_{k−1} is the level probability at the previous simulation level. Evaluate and sort the samples again, and determine the N(1 − p_k)th sample x_{N(1−p_k)} from the sequence. Again, select the samples according to F_k defined as
F_k = { {h ≥ h_k}        if F_ck = 0
      { {F_con ≥ F_ck}   if F_ck < 0        (10)
These samples provide "seeds" for sampling at the next simulation level. The procedure is repeated for higher simulation levels until a convergence criterion is met. Note that the total number of samples is equal to N + Σ_{k=2}^{m} (1 − p_k)N, where m is the total number of simulation levels for a problem.

4. Computational issues

4.1. Artificial PDF for design variables

The choice of the distribution of the input variables directly affects the domain where feasible solutions are searched. A truncated Gaussian distribution may be used to handle simple bound constraints on an individual design variable
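The double-criterion ranking used in Steps 1 and 2 can be sketched as follows (our own illustration, using the convention that F_con is zero for feasible samples and negative otherwise; the toy population values are assumptions):

```python
import numpy as np

def double_criterion_sort(h_vals, fcon_vals):
    """Return indices ordering the population from worst (x_1) to best (x_N):
    infeasible samples first, ranked by constraint fitness; feasible samples
    (F_con = 0) last, re-ranked by objective value."""
    h_vals = np.asarray(h_vals)
    fcon_vals = np.asarray(fcon_vals)
    order = np.argsort(fcon_vals)        # ascending: worst violation first
    feasible = fcon_vals[order] == 0.0   # feasible block sits at the end
    feas_idx = order[feasible]
    feas_idx = feas_idx[np.argsort(h_vals[feas_idx])]  # re-sort by objective
    return np.concatenate([order[~feasible], feas_idx])

# toy population: three infeasible (F_con < 0) and three feasible samples
fcon = np.array([-0.5, 0.0, -0.1, 0.0, 0.0, -2.0])
hval = np.array([9.0, 1.0, 8.0, 3.0, 2.0, 7.0])
order = double_criterion_sort(hval, fcon)
# worst-to-best: indices 5, 0, 2 (infeasible), then 1, 4, 3 (feasible, by h)
```

The infeasible samples keep their violation-based positions, while the feasible block is reordered by the objective, reproducing the "filter through two criteria" behavior of Section 3.1; the top (1 − p_k) cut of this ordering then yields the seeds for the next level.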
f(x; μ, σ, x_l, x_u) = φ((x − μ)/σ) / { σ [Φ((x_u − μ)/σ) − Φ((x_l − μ)/σ)] }        (11)
where φ(·) is the probability density function of the standard Gaussian distribution, Φ(·) is its cumulative distribution function, and the definition domain is X = {x : x_l ≤ x ≤ x_u}. The mean μ should be chosen as the initial guess of the global optimum so that the samples are typically generated in its neighborhood. If no prior information on the problem is available, one may locate μ at the center of the definition domain. Note that the definition bounds of the design variables also
serve as constraints in the problem, while the global optimal solution could be located at or near the boundaries of some constraints. Locating μ at the center of the definition bounds gives roughly equal probability for samples at the initial simulation level to reach either bound of a design variable. The standard deviation of the artificial distribution controls the range to be explored and hence influences the efficiency. If it is too small, most samples cluster in a small region and the optimal solution is only explored locally. If it is too large, the samples scatter over a large region and a longer process is required to converge to the global optimum. One strategy is to use the three-sigma limits of reliability engineering, setting the distance from the sampling center to the upper or lower bound equal to three standard deviations, i.e.
σ_i = L_i / 6        (12)
where L_i is the interval length of the definition domain of the ith input variable x_i, and σ_i is the corresponding artificial standard deviation.
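For instance, the artificial PDF of Eq. (11) with the three-sigma rule of Eq. (12) can be sampled by simple rejection (a sketch under the stated conventions; `scipy.stats.truncnorm` would serve equally well, and the bounds below are the bar-area bounds of the 25-bar example used purely for illustration):

```python
import numpy as np

def sample_truncated_gaussian(xl, xu, n, rng):
    """Draw n samples from the artificial truncated Gaussian of Eq. (11),
    with mu at the domain center and sigma = L/6 per Eq. (12)."""
    mu = 0.5 * (xl + xu)      # center of the definition domain
    sigma = (xu - xl) / 6.0   # three-sigma limits reach either bound
    out = np.empty(n)
    filled = 0
    while filled < n:         # reject the (rare) out-of-bound draws
        x = rng.normal(mu, sigma, n - filled)
        x = x[(x >= xl) & (x <= xu)]
        out[filled:filled + x.size] = x
        filled += x.size
    return out

rng = np.random.default_rng(1)
s = sample_truncated_gaussian(0.01, 3.5, 10000, rng)
```

Because the bounds sit three standard deviations from the center, only about 0.3% of the raw draws are rejected, so the rejection loop is cheap in practice.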
4.2. Convergence criterion

In this paper, we use the standard deviation of the samples at each simulation level to check the convergence of the searching process. In order to eliminate the effects of different scaling of the design variables, the interval length of the definition domain is used as a reference. The convergence criterion is

σ̂_k / (x_u − x_l) ≤ ε        (13)

where σ̂_k is the sample standard deviation at the kth simulation level and ε is a specified tolerance. The idea behind this criterion stems from the fact that as the searching procedure approaches the global optimum, more repeated samples appear in the population and the sample standard deviation tends to zero. Here, we adopt ε = 10^−5–10^−7. When the convergence criterion is satisfied, the largest value in the objective function sequence is taken as the optimal value, and the corresponding sample as the optimal solution.

4.3. Level probability

The level probability p_k is a parameter that regulates the convergence of the optimization process. If a small value is used, the algorithm has a low probability of reaching the global optimum. The level probabilities must be high enough to permit the locally developed Markov Chain samples to move out of a local optimum in favor of finding the global one, especially in the early simulation levels. However, a high value increases the number of simulation levels. Here, we adopt a descending strategy to handle this trade-off. First, we set p_1 = 0.5 at the initial simulation level; this is reduced to 0.2 in subsequent simulation levels when the largest sample standard deviation σ̂_i over all design variables is less than 0.1, and further reduced to 0.1 when σ̂_i < 0.01, in order to accelerate convergence.

5. Illustrative examples

The proposed algorithm is applied to three benchmark optimization design problems with stress and displacement constraints. In each example, 30 independent runs are carried out to investigate the statistical performance of the proposed algorithm.

5.1. Welded beam design problem

The first problem considers the optimal design of a welded beam, taken from Coello Coello [21]. The objective is to minimize the total cost function f(x), including setup cost, welding labor cost and material cost, with constraints on the shear stress τ(x), the bending stress in the beam σ(x), the buckling load on the bar P_c(x), the end deflection of the beam δ(x), the side (geometric) constraints g_3(x), g_4(x) and g_5(x), and limits on the design variables. There are four design variables as shown in Fig. 1: [x_1, x_2, x_3, x_4] = [h, l, t, b]. The problem can be formulated as

min f(x) = 1.10471 x_1² x_2 + 0.04811 x_3 x_4 (14.0 + x_2)        (14)

subject to

g_1(x) = τ(x) − τ_max ≤ 0        (15)
g_2(x) = σ(x) − σ_max ≤ 0        (16)
g_3(x) = x_1 − x_4 ≤ 0        (17)
g_4(x) = 0.10471 x_1² + 0.04811 x_3 x_4 (14.0 + x_2) − 5.0 ≤ 0        (18)
g_5(x) = 0.125 − x_1 ≤ 0        (19)
g_6(x) = δ(x) − δ_max ≤ 0        (20)
g_7(x) = P − P_c(x) ≤ 0        (21)

where

τ(x) = sqrt( (τ′)² + 2 τ′ τ″ x_2/(2R) + (τ″)² )        (22)

τ′ = P / (√2 x_1 x_2),  τ″ = M R / J,  M = P (L + x_2/2)        (23)

R = sqrt( x_2²/4 + ((x_1 + x_3)/2)² )        (24)

J = 2 { √2 x_1 x_2 [ x_2²/12 + ((x_1 + x_3)/2)² ] }        (25)

σ(x) = 6 P L / (x_4 x_3²),  δ(x) = 4 P L³ / (E x_3³ x_4)        (26)

P_c(x) = (4.013 E sqrt(x_3² x_4⁶ / 36) / L²) (1 − (x_3 / (2L)) sqrt(E / (4G)))        (27)

P = 6000 lb,  L = 14 in.,  E = 30 × 10⁶ psi,  G = 12 × 10⁶ psi        (28)

τ_max = 13,600 psi,  σ_max = 30,000 psi,  δ_max = 0.25 in.        (29)
This problem has been attempted previously by several stochastic optimization algorithms: GA based on a co-evolution model (GA1) [21], GA through the use of dominance-based tournament selection (GA2) [22], evolutionary programming with a cultural algorithm (EP) [23], co-evolutionary particle swarm optimization
Fig. 1. The welded beam design.
Table 1
Best solutions for welded beam design.

        GA1         GA2         EP          CPSO        HYPSO       NM–PSO      SS
h       0.208800    0.205986    0.205700    0.202369    0.205730    0.205830    0.20572964
l       3.420500    3.471328    3.470500    3.544214    3.470489    3.468338    3.470488668
t       8.997500    9.020224    9.036600    9.048210    9.036624    9.036624    9.03662391
b       0.210000    0.206480    0.205700    0.205723    0.205730    0.205730    0.20572964
g1      −0.337812   −0.10305    −1.9886769  −13.65555   −0.0254     −0.025251   −0.000021
g2      −353.9026   −0.231748   −4.4815489  −75.81408   −0.053122   −0.053122   −0.000029
g3      −0.0012     −0.000494   0.000000    −0.003354   0.000000    0.000100    0.000000
g4      −3.411865   −3.430044   −3.433213   −3.424572   −3.432981   −3.433169   −3.432984
g5      −0.083800   −0.080986   −0.080700   −0.077369   −0.08073    −0.08083    −0.080730
g6      −0.235649   −0.235514   −0.235538   −0.235595   −0.235540   −0.235540   −0.235540
g7      −363.2324   −58.64689   −2.6033472  −4.472858   −0.031556   −0.031556   −0.000019
f       1.748309    1.728226    1.724852    1.728024    1.724852    1.724717    1.724852
Table 2
Statistical performance in welded beam design.

        Best        Mean        Worst       Std.        ANFC
GA1     1.748309    1.771973    1.785835    0.011220    900,000
GA2     1.728226    1.792654    1.993408    0.074713    80,000
EP      1.724852    1.971809    3.179709    0.443131    N/A
CPSO    1.728024    1.748831    1.782143    0.012926    200,000
HYPSO   1.724852    1.749040    1.814295    0.040049    81,000
SS      1.724852    1.747429    1.928811    0.046668    83,703
(CPSO) [24], hybrid particle swarm optimization (HYPSO) with a feasibility-based rule [25], and a hybrid of the Nelder–Mead simplex search method and particle swarm optimization (NM–PSO) [26]. Their best solutions are compared with that obtained by the proposed algorithm in Table 1. It should be noted that the result produced by NM–PSO [26] is an infeasible solution because the third constraint g_3(x) is slightly violated. From Table 1, the best feasible solution obtained by Subset Simulation is competitive with the results produced by EP [23] and HYPSO [25] and is better than those obtained by the other optimization techniques. Table 2 summarizes the average optimum-locating performance and the computational effort spent by the different methods over 30 independent runs. It can be seen that the mean objective function value yielded by Subset Simulation is better than that of the other algorithms, although its worst case is somewhat average among them. Based on 30 independent runs, the average number of iterations was 122 and the average number of function calls (ANFC) was 83,703. Compared with the function calls required by the other methods, the proposed algorithm is an efficient choice for this problem.
5.2. 25-Bar space truss structure

The second example considers the optimal design of the 25-bar space truss shown in Fig. 2. The density of the bar material is 0.1 lb/
Fig. 2. 25-Bar space truss structure.
Table 3
Stress limits for 25-bar truss structure.

Variables        Compressive stress limits (ksi)   Tensile stress limits (ksi)
1  A1            35.092                            40.0
2  A2–A5         11.590                            40.0
3  A6–A9         17.305                            40.0
4  A10–A11       35.092                            40.0
5  A12–A13       35.092                            40.0
6  A14–A17       6.759                             40.0
7  A18–A21       6.957                             40.0
8  A22–A25       11.802                            40.0

Table 4
Loading cases for 25-bar truss structure (loads in kips).

Node    Case 1: PX   PY      PZ      Case 2: PX   PY     PZ
1       0.0          20.0    −5.0    1.0          10.0   −5.0
2       0.0          −20.0   −5.0    0.0          10.0   −5.0
3       0.0          0.0     0.0     0.5          0.0    0.0
6       0.0          0.0     0.0     0.5          0.0    0.0

Table 6
Statistical performance in 25-bar structure.

N      Best        Mean        Worst       COV        ANFC     ANI
100    549.77787   563.35581   615.4135    0.022853   7828     115
200    545.48825   550.75017   587.1954    0.014088   15698    116
300    545.40691   548.42659   579.99718   0.011501   23637    117
400    545.5436    547.2907    553.0217    0.004027   26749    102
500    545.3681    546.2707    549.5522    0.001569   33148    102
in.³ and the modulus of elasticity of the bar material is 10,000 ksi. The 25 truss members are parameterized into eight groups, as follows: (1) A1, (2) A2–A5, (3) A6–A9, (4) A10–A11, (5) A12–A13, (6) A14–A17, (7) A18–A21, (8) A22–A25. The design objective is to minimize the weight of the structure subject to all members satisfying the stress limits listed in Table 3 and displacement limits of ±0.35 in. in three directions. These amount to a total of 172 non-linear implicit constraints. The structure is designed for the two loading conditions shown in Table 4. All bar areas are to be between 0.01 in.² and 3.5 in.² This truss structure has been considered previously by Lee and Geem [27] and Li et al. [20]. Table 5 summarizes the best designs computed by Subset Simulation, harmony search (HS) [27], standard particle swarm optimization (PSO), standard PSO with passive congregation (PSOPC) and heuristic particle swarm optimization (HPSO) [20]. Note that the best design obtained by the HS algorithm violated the displacement constraint by about 0.206%. It can be seen that Subset Simulation yielded solutions comparable to PSOPC and HPSO, and better than PSO. Table 6 summarizes the statistical performance of Subset Simulation based on 30 independent runs with the number of samples per level (N) ranging from 100 to 500. To make a fair comparison it is important to consider both the computational effort spent and the final quality of the solution, as well as the likelihood of achieving a reasonably good solution. It can be seen that the mean and worst optimal designs improve as
Fig. 3. Total number of structural analyses versus weight for 25-bar structure (series: SS with N = 100, 300, 500; HS; PSO; PSOPC; HPSO).
N increases, but the best optimal design shows little improvement beyond N ≥ 200. The average number of iteration levels (ANI) required by Subset Simulation is insensitive to N (for N ≥ 200), ranging between 102 and 117. The column titled 'COV' shows the sample coefficient of variation (standard deviation/mean) of the optimal weight achieved, calculated using the results from the 30 runs. As expected, it reduces as N increases, more significantly when N is small. Fig. 3 shows the optimal weight versus the total number of structural analyses. It is intended to give a holistic picture of the stochastic performance of the proposed algorithm, considering both the computational effort spent and the accuracy or quality of the results achieved. The best design obtained by PSO is not plotted in Fig. 3 because it converges to a weight of about 627 lb. It is seen that when N = 100 the results have large variability. This is perhaps due to a lack of samples at each simulation level to explore the parameter space well enough, so that in some runs the global optimal region is not visited by the samples, leading to a 'false' global optimum. This problem is related to practical
Table 5
Optimal designs for 25-bar structure.

Design variables     Optimal cross-sectional areas (in.²)
                     Lee and Geem [27]   PSO [20]   PSOPC [20]   HPSO [20]   SS
1  A1                0.047               9.863      0.010        0.010       0.010
2  A2–A5             2.022               1.798      1.979        1.970       2.057
3  A6–A9             2.950               3.654      3.011        3.016       2.892
4  A10–A11           0.010               0.100      0.100        0.010       0.010
5  A12–A13           0.014               0.100      0.100        0.010       0.014
6  A14–A17           0.688               0.596      0.657        0.694       0.697
7  A18–A21           1.657               1.659      1.678        1.681       1.666
8  A22–A25           2.663               2.612      2.693        2.643       2.675
Weight (lb)          544.38              627.08     545.27       545.19      545.37
Fig. 4. 72-bar space truss structure (element and node numbering system).
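The truss examples impose limits on member stresses (e.g. ±25 ksi) and nodal displacements (e.g. ±0.25 in.). A constraint fitness of the kind the algorithm relies on can aggregate the normalized degree of violation over all such constraints. The sketch below is illustrative only: the exact constraint fitness function used in the paper is defined in its earlier sections, and the `stresses`/`displacements` arrays stand in for the output of a structural analysis of a candidate design.

```python
def constraint_fitness(stresses, displacements,
                       stress_limit=25.0, disp_limit=0.25):
    """Aggregate normalized constraint violations for a truss design.

    Returns 0.0 for a feasible design; positive values grow with the
    degree of violation.  `stresses` (ksi) and `displacements` (in.)
    are assumed to come from a structural analysis of the candidate.
    """
    violation = 0.0
    for s in stresses:            # axial stress within +/- stress_limit
        violation += max(0.0, abs(s) / stress_limit - 1.0)
    for d in displacements:       # nodal displacement within +/- disp_limit
        violation += max(0.0, abs(d) / disp_limit - 1.0)
    return violation

# A design whose responses all fall inside the limits has zero
# violation, i.e. it is feasible.
print(constraint_fitness([10.0, -24.9], [0.1, -0.2]))   # 0.0
print(constraint_fitness([30.0], [0.5]) > 0.0)          # True
```

Because the violation is normalized by each limit, stress and displacement constraints of very different magnitudes contribute on a comparable scale.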
'ergodicity', and is always present in Markov chain simulation. To put other methods into context, the performance of other methods reported in the literature is also plotted in the figure. A direct comparison of performance is not possible, however, because the statistical performance of those methods over independent runs is not available from the literature.

5.3. 72-bar space truss structure

Table 7
Loading cases for the 72-bar truss structure (loads in kips).

| Node | Case 1: PX | PY | PZ | Case 2: PX | PY | PZ |
| 17 | 5.0 | 5.0 | -5.0 | 0.0 | 0.0 | -5.0 |
| 18 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -5.0 |
| 19 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -5.0 |
| 20 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | -5.0 |
This example considers the optimal design of a 72-bar space truss structure as shown in Fig. 4. The density of the bar material is 0.1 lb/in.³ and its modulus of elasticity is 10,000 ksi. The 72 members are parameterized into 16 groups, as follows: (1) A1–A4, (2) A5–A12, (3) A13–A16, (4) A17–A18, (5) A19–A22, (6) A23–A30, (7) A31–A34, (8) A35–A36, (9) A37–A40, (10) A41–A48, (11) A49–A52, (12) A53–A54, (13) A55–A58, (14) A59–A66, (15) A67–A70, (16) A71–A72. The design objective is to minimize the weight of the structure, subject to 320 non-linear constraints on stress and displacement: all members must have an axial stress within ±25 ksi, and the top nodes must have displacements within ±0.25 in. in all three directions. The structure is designed for the two loading conditions shown in Table 7. The design variables have a
Table 8
Optimal designs for 72-bar structure, case 1 (optimal cross-sectional areas, in.²). PSO, PSOPC and HPSO results are from Li et al. [20].

| Design variables | Adeli and Kumar [28] | Lee and Geem [27] | PSO [20] | PSOPC [20] | HPSO [20] | SS |
| 1: A1–A4 | 2.026 | 1.7901 | 41.794 | 1.855 | 1.857 | 1.838 |
| 2: A5–A12 | 0.533 | 0.521 | 0.195 | 0.504 | 0.505 | 0.508 |
| 3: A13–A16 | 0.100 | 0.100 | 10.797 | 0.100 | 0.100 | 0.100 |
| 4: A17–A18 | 0.100 | 0.100 | 6.861 | 0.100 | 0.100 | 0.100 |
| 5: A19–A22 | 1.157 | 1.229 | 0.438 | 1.253 | 1.255 | 1.200 |
| 6: A23–A30 | 0.569 | 0.522 | 0.286 | 0.505 | 0.503 | 0.504 |
| 7: A31–A34 | 0.100 | 0.100 | 18.309 | 0.100 | 0.100 | 0.101 |
| 8: A35–A36 | 0.100 | 0.100 | 1.220 | 0.100 | 0.100 | 0.101 |
| 9: A37–A40 | 0.514 | 0.517 | 5.933 | 0.497 | 0.496 | 0.507 |
| 10: A41–A48 | 0.479 | 0.504 | 19.545 | 0.508 | 0.506 | 0.524 |
| 11: A49–A52 | 0.100 | 0.100 | 0.159 | 0.100 | 0.100 | 0.100 |
| 12: A53–A54 | 0.100 | 0.101 | 0.151 | 0.100 | 0.100 | 0.102 |
| 13: A55–A58 | 0.158 | 0.156 | 10.127 | 0.100 | 0.100 | 0.100 |
| 14: A59–A66 | 0.550 | 0.547 | 7.320 | 0.525 | 0.524 | 0.518 |
| 15: A67–A70 | 0.345 | 0.442 | 3.812 | 0.394 | 0.400 | 0.397 |
| 16: A71–A72 | 0.498 | 0.590 | 18.196 | 0.535 | 0.534 | 0.538 |
| Weight (lb) | 379.31 | 379.27 | 6818.67 | 369.65 | 369.65 | 368.32 |
Table 9
Optimal designs for 72-bar structure, case 2 (optimal cross-sectional areas, in.²). PSO, PSOPC and HPSO results are from Li et al. [20].

| Design variables | Adeli and Kumar [28] | Sarma and Adeli [29], simple | Sarma and Adeli [29], fuzzy | Lee and Geem [27] | PSO [20] | PSOPC [20] | HPSO [20] | SS |
| 1: A1–A4 | 2.755 | 2.141 | 1.732 | 1.963 | 40.053 | 1.652 | 1.907 | 1.897 |
| 2: A5–A12 | 0.510 | 0.510 | 0.522 | 0.481 | 0.237 | 0.547 | 0.524 | 0.517 |
| 3: A13–A16 | 0.010 | 0.054 | 0.010 | 0.010 | 21.692 | 0.100 | 0.010 | 0.011 |
| 4: A17–A18 | 0.010 | 0.010 | 0.013 | 0.011 | 0.657 | 0.101 | 0.010 | 0.010 |
| 5: A19–A22 | 1.370 | 1.489 | 1.345 | 1.233 | 22.144 | 1.102 | 1.288 | 1.283 |
| 6: A23–A30 | 0.507 | 0.551 | 0.551 | 0.506 | 0.266 | 0.589 | 0.523 | 0.538 |
| 7: A31–A34 | 0.010 | 0.057 | 0.010 | 0.011 | 1.654 | 0.011 | 0.010 | 0.011 |
| 8: A35–A36 | 0.010 | 0.013 | 0.013 | 0.012 | 10.284 | 0.010 | 0.010 | 0.010 |
| 9: A37–A40 | 0.481 | 0.565 | 0.492 | 0.538 | 0.559 | 0.581 | 0.544 | 0.521 |
| 10: A41–A48 | 0.508 | 0.527 | 0.545 | 0.533 | 12.883 | 0.458 | 0.528 | 0.508 |
| 11: A49–A52 | 0.010 | 0.010 | 0.066 | 0.010 | 0.138 | 0.010 | 0.019 | 0.010 |
| 12: A53–A54 | 0.643 | 0.066 | 0.013 | 0.167 | 0.188 | 0.152 | 0.020 | 0.091 |
| 13: A55–A58 | 0.215 | 0.174 | 0.178 | 0.161 | 29.048 | 0.161 | 0.176 | 0.169 |
| 14: A59–A66 | 0.518 | 0.425 | 0.524 | 0.542 | 0.632 | 0.555 | 0.535 | 0.542 |
| 15: A67–A70 | 0.419 | 0.437 | 0.396 | 0.478 | 3.045 | 0.514 | 0.426 | 0.447 |
| 16: A71–A72 | 0.504 | 0.641 | 0.595 | 0.551 | 1.711 | 0.648 | 0.612 | 1.897 |
| Weight (lb) | 376.50 | 372.40 | 364.40 | 364.33 | 5417.02 | 368.45 | 364.86 | 364.01 |
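The comparisons above rely on the feasibility-based double-criterion rule described in the paper: feasible solutions are always given higher priority than infeasible ones, and objective function values decide the order only among solutions of equal feasibility status. A minimal sketch of such a ranking, on a toy one-dimensional problem rather than the truss examples (the paper's actual constraint fitness function is defined in its earlier sections):

```python
def double_criterion_sort(population, objective, constraint_fitness):
    """Sort candidate designs: feasible solutions first, ordered by
    objective value, followed by infeasible solutions ordered by
    increasing degree of constraint violation.

    `objective` and `constraint_fitness` are callables; a constraint
    fitness of zero denotes feasibility.
    """
    return sorted(
        population,
        key=lambda x: ((0, objective(x)) if constraint_fitness(x) == 0.0
                       else (1, constraint_fitness(x))),
    )

# Toy illustration: minimize x**2 subject to x >= 1.
obj = lambda x: x * x
cfit = lambda x: max(0.0, 1.0 - x)   # zero when x >= 1 (feasible)
print(double_criterion_sort([3.0, 0.2, 1.5, -2.0], obj, cfit))
# [1.5, 3.0, 0.2, -2.0]
```

The tuple key makes the feasibility flag the primary criterion and the secondary value (objective or violation) the tie-breaker, so no penalty coefficient tuning is needed.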
Table 10
Statistical performance in 72-bar structure (case 1).

| N | Best | Mean | Worst | COV | ANFC | ANI |
| 100 | 372.0479 | 393.9235 | 594.693 | 0.105743 | 10532 | 168 |
| 200 | 369.1044 | 372.2963 | 388.154 | 0.011067 | 26372 | 202 |
| 300 | 368.487 | 370.48 | 382.5792 | 0.006837 | 43543 | 218 |
| 400 | 368.3177 | 369.5221 | 371.1209 | 0.001923 | 55257 | 210 |
| 500 | 368.4005 | 369.1293 | 370.8501 | 0.0016 | 72470 | 178 |
Table 11
Statistical performance in 72-bar structure (case 2).

| N | Best | Mean | Worst | COV | ANFC | ANI |
| 100 | 367.1904 | 383.9047 | 430.5991 | 0.046078 | 10194 | 164 |
| 200 | 365.4087 | 368.0798 | 379.7397 | 0.007681 | 24026 | 186 |
| 300 | 364.6523 | 366.7094 | 371.7137 | 0.004802 | 36960 | 189 |
| 400 | 364.4046 | 365.6186 | 367.9447 | 0.00251 | 48742 | 188 |
| 500 | 364.0103 | 365.0701 | 366.9877 | 0.002223 | 67971 | 207 |
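The Best, Mean, Worst and COV columns of Tables 10 and 11 are summary statistics of the optimal weights returned by 30 independent runs. Given the raw run results, they can be reproduced with a short script; the weights below are made-up placeholders for illustration, not the actual run data.

```python
import statistics

def run_statistics(weights):
    """Summarize optimal weights from independent optimization runs:
    best (min), mean, worst (max), and sample coefficient of variation
    (standard deviation / mean), as tabulated in Tables 10 and 11."""
    mean = statistics.mean(weights)
    return {
        "best": min(weights),
        "mean": mean,
        "worst": max(weights),
        "cov": statistics.stdev(weights) / mean,   # sample COV
    }

# Placeholder weights (lb) standing in for 30 run results.
weights = [369.1, 370.4, 368.5, 372.3, 369.8]
stats = run_statistics(weights)
print(stats["best"], stats["worst"])   # 368.5 372.3
```

`statistics.stdev` uses the sample (n − 1) standard deviation, which matches the usual definition of the sample coefficient of variation.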
Fig. 5. Total number of structural analyses versus weight for 72-bar structure (case 1).
range of 0.1 ≤ Ai ≤ 5.0 in.² for case 1, and 0.01 ≤ Ai ≤ 5.0 in.² for case 2. This problem has been considered previously by Lee and Geem [27], Adeli and Kumar [28], Sarma and Adeli [29], and Li et al.
Fig. 6. Total number of structural analyses versus weight for 72-bar structure (case 2).
[20]. Their best designs are compared against those obtained by Subset Simulation in Table 8 (case 1) and Table 9 (case 2). It can be seen that Subset Simulation yielded better designs in both cases. For this structure, the PSO, PSOPC and HPSO algorithms required 150,000 structural analyses to reach the optimal design, and the HS algorithm required 20,000 structural analyses, for both cases. The PSO algorithm did not converge to a reasonable solution within 3000 iterations. The solution produced by the HPSO algorithm slightly violated the stress constraints, by 0.187%, in case 2. The HS design in case 1 violated the displacement constraints by about 0.240%. Tables 10 and 11 summarize the statistical performance of Subset Simulation based on 30 independent runs with N ranging from 100 to 500. Figs. 5 and 6 show the optimal weight versus the total number of structural analyses. The best designs obtained by PSO and the worst design for case 1 obtained by Subset Simulation with N = 100 are not plotted in Figs. 5 and 6. Observations similar to those in the last example can be drawn from these tables and figures.

6. Conclusions

The proposed algorithm is based on the idea that extreme events (optimization problem) can be considered as rare events
(reliability problem). A feasibility-based rule is used to guarantee that feasible solutions always rank better than infeasible ones in terms of constraint fitness function values. The rule employs a modified constraint fitness function to evaluate the feasibility of a solution, and a double-criterion ranking method to rank solutions according to both constraint fitness function values and objective function values. Three benchmark problems demonstrate the robustness and efficiency of the proposed algorithm and allow comparison with existing methods. The proposed algorithm is found to be competitive in exploiting the feasible regions and providing optimal designs in complex problems. It is also believed that the proposed algorithm preserves the dimensional robustness of Subset Simulation observed in reliability applications. Applications to real-life examples of more complex systems are to follow in the near future.

Acknowledgement

The work presented in this paper is supported by the Hong Kong Research Grant Council (HKRGC) through General Research Fund (GRF) (Project No. 9041484). The support is gratefully acknowledged.

References

[1] Ravindran A, Ragsdell KM, Reklaitis GV. Engineering optimization: methods and applications. 2nd ed. New Jersey: John Wiley and Sons; 2006.
[2] Haftka RT, Gurdal Z. Elements of structural optimization. 3rd ed. Dordrecht: Kluwer Academic Publishers; 1992.
[3] Spall JC. Introduction to stochastic search and optimization: estimation, simulation, and control. New York: John Wiley and Sons; 2003.
[4] Holland JH. Adaptation in natural and artificial systems. Michigan: University of Michigan Press; 1975.
[5] Kirkpatrick S, Gelatt C, Vecchi M. Optimization by simulated annealing. Science 1983;220(4598):671–80.
[6] Maniezzo V, Dorigo M, Colorni A. The ant system: optimization by a colony of cooperating agents. IEEE Trans Syst, Man Cybern – Part B 1996;26(1):29–41.
[7] Eberhart RC, Kennedy J. A new optimizer using particle swarm theory. In: Proceedings of the sixth international symposium on micro machine and human science, Nagoya; 1995. p. 39–43.
[8] Kennedy J, Eberhart RC. Particle swarm optimization. In: Proceedings of IEEE international conference on neural networks, Perth, vol. 4; 1995. p. 1942–8.
[9] Au SK, Beck JL. Estimation of small failure probabilities in high dimensions by subset simulation. Probabilist Eng Mech 2001;16(4):263–77.
[10] Au SK, Beck JL. Subset simulation and its application to seismic risk based on dynamic analysis. J Eng Mech 2003;129(8):901–17.
[11] Au SK, Ching J, Beck JL. Application of subset simulation methods to reliability benchmark problems. Struct Saf 2007;29(3):183–93.
[12] Schueller GI. Efficient Monte Carlo simulation procedures in structural uncertainty and reliability analysis – recent advances. Struct Eng Mech 2009;32(1):1–20.
[13] Robert CP, Casella G. Monte Carlo statistical methods. 2nd ed. New York: Springer; 2004.
[14] Coello Coello CA. Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: a survey of the state of the art. Comput Method Appl Mech Eng 2002;191(11–12):1245–87.
[15] Dong Y, Tang J, Xu B, Wang D. An application of swarm optimization to nonlinear programming. Comput Math Appl 2005;49(11–12):1655–68.
[16] Au SK. Reliability-based design sensitivity by efficient simulation. Comput Struct 2005;83:1048–61.
[17] Michalewicz Z. Evolutionary algorithms for constrained parameter optimization problems. Evolut Comput 1996;4(1):1–32.
[18] Hedar A, Fukushima M. Derivative-free filter simulated annealing method for constrained continuous global optimization. J Global Optim 2006;35(4):521–49.
[19] Mezura-Montes E, Coello Coello CA. Use of multi-objective optimization concepts to handle constraints in genetic algorithms. In: Abraham A, Jain L, Goldberg R, editors. Evolutionary multi-objective optimization. Berlin: Springer; 2005.
[20] Li LJ, Huang ZB, Liu F, Wu QH. A heuristic particle swarm optimizer for optimization of pin connected structures. Comput Struct 2007;85(7–8):340–9.
[21] Coello Coello CA. Use of a self-adaptive penalty approach for engineering optimization problems. Comput Ind 2000;41(2):113–27.
[22] Coello Coello CA, Montes EM. Constraint-handling in genetic algorithms through the use of dominance-based tournament selection. Adv Eng Inform 2002;16(3):193–203.
[23] Coello Coello CA, Becerra RL. Efficient evolutionary optimization through the use of a cultural algorithm. Eng Optim 2004;36(2):219–36.
[24] He Q, Wang L. An effective co-evolutionary particle swarm optimization for constrained engineering design problems. Eng Appl Artif Intel 2007;20(1):89–99.
[25] He Q, Wang L. A hybrid particle swarm optimization with a feasibility-based rule for constrained optimization. Appl Math Comput 2007;186(2):1407–22.
[26] Zahara E, Kao Y. Hybrid Nelder–Mead simplex search and particle swarm optimization for constrained engineering design problems. Expert Syst Appl 2009;36(2, Part 2):3880–6.
[27] Lee KS, Geem ZW. A new structural optimization method based on the harmony search algorithm. Comput Struct 2004;82(9–10):781–98.
[28] Adeli H, Kumar S. Distributed genetic algorithm for structural optimization. J Aero Eng 1995;8(3):156–63.
[29] Sarma KC, Adeli H. Fuzzy genetic algorithm for optimization of steel structures. J Struct Eng 2000;126(5):596–604.