Swarm algorithm with adaptive mutation for airfoil aerodynamic design


Swarm and Evolutionary Computation 20 (2015) 1–13


Regular Paper

Manas Khurana *,1, Kevin Massey 2
RMIT School of Aerospace, Mechanical and Manufacturing Engineering

Article info

Article history: Received 9 February 2012; received in revised form 9 October 2014; accepted 12 October 2014; available online 23 October 2014.

Keywords: Adaptive mutation with probability; swarm diversity; computational efficiency.

Abstract

The Particle Swarm Optimization (PSO) method is sensitive to convergence at a sub-optimum solution for complex aerospace design problems. An Adaptive Mutation-Particle Swarm Optimization (AM-PSO) method is developed to address this challenge. A Gaussian-based operator is implemented to induce particle search diversity, with probability, through mutation. The extent of mutation during the optimization phase is governed by the collective search patterns of the swarm. Accordingly, the proposed approach is shown to mitigate convergence at a sub-optimum design while concurrently limiting the computational resources required during the optimization cycle. The swarm algorithm developed is successfully validated on benchmark test functions, with results comparing favorably against several off-the-shelf methods. The AM-PSO is then used for airfoil re-design at flight envelopes encompassing low-to-high Mach numbers. The drag of the optimum airfoils is lower than that of the baseline shapes, with the design effort requiring minimal computational resources relative to the optimization methods documented in the literature. © 2014 Elsevier B.V. All rights reserved.

1. Introduction

A heuristic methodology based on "collective intelligence" in biological populations is the Particle Swarm Optimizer developed by Kennedy and Eberhart [1]. The swarm theory provides a simple and efficient process for design optimization simulations and is ideal for continuous variable problems [2]. Variants of the algorithm have been developed and applied in Electromagnetic Design [3], Computer Science [4], Medical Physics [5], Finance [6] and Sports Engineering [7]. As the search topology for complex engineering design problems is multi-modal with finite local solutions [8,9], it has been widely acknowledged that the implementation of innovative search operators, including crossovers and/or mutation, is necessary to achieve an acceptable solution for engineering design problems [10]. This has been effectively demonstrated in Genetic Algorithm (GA) [11], Simulated Annealing (SA) [12], Ant Colony (AC) [13] and PSO [14] algorithms.

* Corresponding author. E-mail address: [email protected] (M. Khurana).
1 Research Engineer at RMIT University, School of Aerospace, Mechanical and Manufacturing Engineering.
2 Formerly at RMIT. Now at DARPA Tactical Technology Office, Washington, USA.
http://dx.doi.org/10.1016/j.swevo.2014.10.001
2210-6502/© 2014 Elsevier B.V. All rights reserved.

To facilitate optimization convergence to the true optimum, adaptive PSO algorithms that control the inertia weight function have been used to balance the swarm's global and local search patterns. Methods have included the fuzzy adaptive approach [15], a linear approach by Xin et al. [16], a logarithmic decreasing function by Gao et al. [17], a sigmoid increasing function by Malik et al. [18], a randomized approach by Feng et al. [19] and oscillating mechanisms by Kentzoglanakis and Poole [20]. The efficacy of these methods was evaluated by Bansal et al. [21], yet convergence to the global optimum for complex multi-modal problems could not be achieved at a consistent rate [22]. Yang et al. [23] highlighted the need to develop novel PSO search processes that mitigate convergence at sub-optimum regions as a critical design requirement. Swarm algorithms that facilitate convergence to a global optimum for complex problems are in development. An effective approach is to ensure that swarm search diversity is maintained so that particles remain active in exploiting promising solution regions for improved performance results. A territorial Particle Swarm Optimization algorithm by Arani et al. [24] avoided the formation of particle clusters to ensure search diversity is not compromised. A two-swarm cooperative PSO method by Sun and Li [25] operated by partitioning the swarm into two distinct groups, each with a specific task, to ensure the global optimum is captured for large search spaces through swarm diversity. Kar et al. [26] introduced a "craziness" variable into their swarm algorithm to ensure the


particles have a predefined measure by which they change their respective directions with probability to maintain diversity. Improvements to the convergence time to a global optimum have also been pursued: a modified teaching-learning-based algorithm by Satapathy and Naik [27] was developed, based on the concept of effective classroom teaching involving teachers and students [28], to address this need. Particle swarm operations based on mutation, which mimic natural evolution, have also been explored as a viable option to improve swarm performance through diversity [14,29-35]. Reliance on particle mutation requires an acceptable balance between solution feasibility and computational efficiency. This requirement is critical for design problems characterized by time-intensive fitness function evaluators, including Computational Fluid Dynamics solvers in aerospace design applications [36]. The number of design iterations, and hence fitness function evaluations, in an optimization simulation needs to be minimal while concurrently ensuring an optimum solution is established. Lehre and Yao [37] mapped the impact of mutation on computational runtime in Evolutionary Algorithms (EA). It was shown that the balance between computational efficiency and solution feasibility is problem dependent and that mutation parameters need to be defined accordingly, which is a significant design challenge. Alireza [29] developed a mutation operator on the basis that, at optimization initialization, the search population will be far away from the optimum point and the success of the PSO will be limited. Accordingly, an extended mutation step size was recommended to facilitate a search phase at new regions. Alternatively, at the concluding stages of the optimization cycle, the mutation step size was reduced to initiate local exploration about the isolated search region. A small mutation step size at the concluding stages of the optimization cycle may not be a viable option.
If the swarm is converging to a local optimum, then a restricted mutation step size will not effectively facilitate the transition of the particles to a favorable area of the solution topology. An extended mutation step size is required at the later stages of the iterative design cycle, as this may facilitate the transition of the swarm from local to global optimum regions. Accordingly, computational resources will not be exhausted on fitness evaluations for particles that are at off-design, sub-optimum solution regions. In the works by Zhen-su et al. [30], adaptive particle mutation mechanisms were implemented, yet the probability-of-mutation at each generation was fixed. This practice can result in a computationally time-intensive optimization process, as function evaluations by mutation are imposed at a constant rate and at stages that do not necessarily warrant them. Si et al. [14] developed an adaptive polynomial mutation strategy by the readjustment of the global best particle. Pant et al. [31] proposed mutation operators, in two separate variants, by the repositioning of the personal and global best particles; mutation was activated at the end of each iteration, which is computationally ineffective. Tang and Zhao [34] also mutate the global best particle at each iteration, with the mutation step size based on a measure of the scope of the search volume. As the particles converge to a specific region, the search space envelope modeled by the swarm is reduced, and so is the mutation step size. If the swarm were inadvertently converging to a local optimum, the mutation step size would be restricted because the size of the search space is small. With a small mutation step size, particle transition from local to global solution regions will not necessarily ensue and the swarm will remain isolated in a local optimum region; a potential issue that was also prevalent in the works by Alireza [29]. Adaptive mutation operators were proposed by Li et al. [32] and by Li and Yang [33]. In [32], only the global best particle was mutated at specific iterations with probability, using one of three pre-defined position adjustment functions. In [33], particle mutation was extended from the population level (global best only) to the individual level, such that each particle in the swarm could be mutated with probability. Design demonstration confirmed the validity of mutation that is adaptive to the iteration cycle, with favorable fitness performances established on select benchmark test functions. In accordance with the mutation practices imposed in [32,33], a novel Adaptive-Mutation Particle Swarm Optimization (AM-PSO) algorithm is developed in the presented analysis. As the fundamentals of the PSO process without mutation have been shown to be effective for simplified problems [38,39], the extension of the works through AM-PSO must also exhibit acceptable design simulations for complex design problems. The proposed advancements in the AM-PSO address solution feasibility and computational efficiency:

- Solution feasibility. Mutation that is restricted to specific particles, including personal and/or global best points only as in [14,31,32,34], needs to be advanced further. The collective search patterns of all the particles in the swarm need to be exploited through mutation so that the theoretical benefits of the swarm theory can be realized in full. This is achieved by incorporating all search agents for mutation consideration, so that the probability of obtaining newer and potentially more promising data points is maximized. In the AM-PSO, each agent in the swarm is eligible for mutation. Adaptive operators are integrated which factor in the phase of the iterative design cycle and the search experiences of the particles in the swarm. This mechanism works on the basis that fewer particles, if any, are selected for mutation when search diversity in the population is wide and spread. Conversely, diversity is reduced when the particles start to share a common consensus regarding solution optimality; in this case, to mitigate convergence at a sub-optimum region, a greater number of particles is selected for re-positioning relative to the phase when diversity was enhanced. The process of converging to a global optimum, and hence solution feasibility, is advanced by intelligently incorporating mutation at key phases of the optimization cycle.
- Computational feasibility. Each mutation step carries an additional function evaluation, which adds to the overall computational expense. If the function evaluator is time intensive, additional function evaluations drastically limit the efficiency of the optimization process. The number of mutations to be performed at each iteration needs to be adaptive to the search experiences of the swarm to achieve an acceptable balance between solution feasibility and overall computational efficiency. In select methods [30,31,34], the rate-of-mutation at each generation was fixed; in this case, mutation is unnecessarily imposed at optimization phases when particle repositioning may not be justified. Mechanisms are incorporated into the AM-PSO to address this shortfall. The fitness of the personal best solutions at each iteration is factored in to establish the particle population size that will undergo mutation. With this, the rate-of-mutation is made adaptive to the search experiences of the agents during the optimization cycle, for the purposes of limiting additional function evaluations without compromising solution optimality.

Variants of the swarm algorithm have been used in previous airfoil design efforts. In [40] the PSO algorithm by Qin et al. [41] was applied in the design of a long endurance airfoil; the global best particle converged to a local solution due to limitations in the search diversity of the swarm. In [42] the concept of integrating an artificial neural network (ANN) as an aerodynamic force fitness function evaluator into a swarm algorithm [41] was introduced. In [9] this concept was demonstrated to be an effective approach in lowering the computational time needed to achieve convergence. Improvements to airfoil drag performances were also achieved relative to the design efforts in [40], as the maximum velocity of the particles was reduced, yet convergence to the global solution was still not established. In [43] the concept of particle mutation was outlined to address this challenge and was successfully used in the redesign of a simplified, low-speed airfoil design problem with minimal performance and shape constraints imposed. In the present analysis, the design development and validation of the AM-PSO method from [43] is outlined and the method is used on real-life, complex low- and high-speed airfoil design problems. Accordingly, the work is structured as follows:

Section 2 - Adaptive-mutation particle swarm optimizer: The fundamentals of the proposed AM-PSO method are defined.
Section 3 - AM-PSO numerical test validation: A comprehensive test validation process is presented on benchmark test functions to demonstrate the efficacy of the AM-PSO algorithm.
Section 4 - AM-PSO for aerospace design application: The AM-PSO is further assessed from an aerospace design perspective. An airfoil shape optimization problem for an Unmanned Aerial Vehicle (UAV) operating at low and high Mach numbers is defined and simulated using the developed optimization approach.
Section 5 - Conclusion and summary of future works: A summary of the major findings and contributions to the related research field is outlined, and avenues for further research are summarized.


2. Adaptive-mutation particle swarm optimizer

Adaptive mutation operators that govern the optimization phase and the particle population size selected for repositioning are introduced. The setup of the AM-PSO algorithm is as follows.

2.1. Space filling design for solution initialization

To maximize search diversity at optimization initialization, the swarm is dispersed into the D-dimensional solution volume using Latin Hypercube Sampling (LHS).

2.2. Adaptive inertia weight

An Adaptive Inertia Weight (AIW) function developed by Qin et al. [41] is used to facilitate a dynamic balance between the global and local search phases of the particles:

AIW_ij = |x_ij - pbest_ij| / (|pbest_ij - pbestg_j| + ε_AIW)   (1)

where x_ij is the position of the ith particle in the jth dimension; pbest_ij is the personal best solution; pbestg is the current global best solution; and ε_AIW is a positive value close to zero, empirically set at 10^-2 in the simulations to follow.

A small value of AIW represents a scenario where the personal best and current positions of the respective particle are close, yet isolated from the global best, as demonstrated by Qin et al. [41]. This indicates a local search process, which is modified by increasing the inertia weight factor, w, to balance the distance between the personal and global best solutions. A large magnitude of AIW represents a particle with a global exploration tendency, as the personal and global best solutions are close yet distant from the current position. The dominating global search process is mitigated with local search tendencies by decreasing w. In the two outlined scenarios, w is modified by the following equation [41]:

w_ij = 1 - α_AIW · (1 / (1 + e^(-AIW_ij)))   (2)

where α_AIW is a positive inertia weight constant in the range (0, 1], empirically set at 0.95 in the simulations to follow.

2.3. Wall boundary conditions

The position x of the ith particle in the jth dimension can violate the search domain at iterate k during the position update, such that x_ij(k) ∉ [x_min,j, x_max,j]. Wall boundary conditions are applied by constraining the movement of the particles in the form x_min,j ≤ x_j ≤ x_max,j, so that they remain within the defined dimensional search space volume. Method types include the absorbing, reflecting, invisible and random initialization approaches [3,44-46]. Numerical test experiments were performed to evaluate and compare the performances of the absorbing, reflecting and random initialization methods on benchmark test functions. The invisible approach was not considered, as the reduction of the swarm population size by the elimination of particles that violate the defined search limits is not justified; the fundamental principles of the AM-PSO methodology must not be restricted by limiting the swarm population size. The boundary condition type that resulted in swarm convergence to the global solution region with minimal computational effort was established, and the findings are discussed in Section 3.

2.4. Adaptive mutation

Mutation operators that are applicable to the swarm theory include [47]:
(a) Constant rate: mutation is applied throughout the optimization phase, with the number of particles involved in the repositioning process predefined;
(b) Linearly varying rate: the rate-of-mutation is linearly modified (either increased or decreased) over the progression of the swarm evolutionary design cycle; and/or
(c) Global solution stagnation: mutation is only induced if the global best solution has not changed over successive user-defined iterations.

The constant mutation-rate is computationally inefficient, as it is applied at all optimization phases without due consideration of the search patterns (local versus global) of the individual particles in the swarm. Linearly varying the rate-of-mutation, adaptive to the search cycle (k), can address this issue; the methodology has been demonstrated to facilitate a dispersed search space exploration capability by the swarm [47]. This method is still not effective from a computational efficiency viewpoint, as mutation is guided by iterations without considering the search diversity of the particles. A stagnant global best that has not changed over successive iterations is indicative of a swarm that has converged to a specific design space region; mutation at this late stage of the optimization cycle may not be effective, especially if the swarm has been allowed to converge to a local solution valley over a series of consecutive iterations. In the proposed works, mutation is integrated into the AM-PSO method based on the personal best search experiences of the particles in the swarm. Related to this process is a probabilistic paradigm which defines the rate-of-mutation by selecting specific


particles that are to be mutated, to control the computational overheads. The setup of the proposed method is as follows:

(i) Estimate the particles' personal best fitness range factor, pf:

pf = (EF_pbesti,max, EF_pbesti,min)

Estimate, E, an arbitrarily high and low measure of the objective fitness, F, with EF_pbesti,max and EF_pbesti,min respectively.

(ii) Objective distance metric, ds: At each iteration, k, the fitness distance metric of the swarm is calculated from the actual fitness of the worst performing particle in the swarm, F_pbesti,max, and the fitness of the global best, F_pbesti,min, such that

ds(k) = |F_pbesti,max - F_pbesti,min|   (3)

(iii) Probability-of-mutation, pr_M:

pr_M(k) = (ds(k) - EF_pbesti,max) / (EF_pbesti,min - EF_pbesti,max)   (4)

In Eq. (4), pr_M(k) is adaptive to the personal best fitness performances of the particles in the swarm. When ds(k) → |EF_pbesti,max - EF_pbesti,min| [the user-estimated highest and lowest fitness measures from point (i)], the swarm has high search diversity and pr_M(k) decreases, to avoid unnecessary fitness function evaluations by mutation. Fig. 1 presents the evolution of pr_M(k) as a function of ds(k) with EF_pbesti,max = 1 and EF_pbesti,min = 0. pr_M increases during the optimization cycle as the swarm converges (to be expected during the iterative cycle) to a common region, through the measure ds.

Fig. 1. Adaptive-based swarm probability-of-mutation.

The following steps outline the processes applied in the selection of candidate particle(s) for mutation:

(iv) Particle random assignment of numbers in the interval [0,1], mr_i: Generate an (n × 1) matrix of random numbers, mr, one per particle, i, denoted by mr_i.

(v) Identification of particle(s) to be mutated: For particle(s) where pr_Mi > mr_i, the Gaussian mutation rule is applied.

(vi) Gaussian mutation rule: Randomly select one dimension, j, for mutation and apply the following rule:

x_ij,M(k) = x_ij(k) + σ_ij^Dj   (5)

where x_ij,M(k) is the position of particle i with dimension j mutated at iteration k; x_ij(k) is the current position of particle i with dimension j at iteration k; and σ_ij^Dj is a number generated from a Gaussian distribution for particle i, on a randomly selected dimension, Dj, with mean zero and standard deviation std_M of

std_M = ω_M × (D_j,max - D_j,min)   (6)

where ω_M is the mutation scalar factor and (D_j,max - D_j,min) is the design variable interval for dimension j.

(vii) Wall boundary condition: Apply the random wall-boundary condition if x_ij,M(k) ∉ [x_min,j, x_max,j] (Algorithm 1, lines 16 and 27 of the AM-PSO pseudo-code).

(viii) Fitness analysis of mutated particles, F_i,M: Establish the fitness, F_i,M, of position x_ij,M (Algorithm 1, line 28).

(ix) Position update rule: Apply the position update rule if the position of the mutated particle, x_ij,M, results in a lower fitness than the position of the particle without mutation, x_ij (Algorithm 1, line 29). Else, the particle remains at its present state:

x_ij(k+1) = x_ij,M   if F_i,M < F_i
x_ij(k+1) = x_ij     otherwise

(x) Update of personal best position: Position x_ij,M is the new personal best of the respective particle if the mutated position yields a lower fitness than the previous personal best (Algorithm 1, line 33). Else, there is no change in the personal best position:

pbest_i(k+1) = x_ij,M    if F_i,M < F_pbesti
pbest_i(k+1) = pbest_i   otherwise

(xi) Update of global best position: If the mutated position of a respective particle yields a lower fitness than the previous global best, then there is a position update to reflect a new swarm leader (Algorithm 1, line 34). Else, there is no change:

pbestg(k+1) = x_ij,M   if F_i,M < F_pbestg
pbestg(k+1) = pbestg   otherwise
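The adaptive machinery of Eqs. (1)-(6) can be sketched in plain Python. This is a minimal illustration, not the authors' implementation: the function and variable names are invented, and pr_M is clamped to [0, 1], which the paper does not state explicitly.

```python
import math
import random

def adaptive_inertia_weight(x_ij, pbest_ij, pbestg_j,
                            alpha_aiw=0.95, eps_aiw=1e-2):
    """Eqs. (1)-(2): per-particle, per-dimension inertia weight (Qin et al. [41])."""
    aiw = abs(x_ij - pbest_ij) / (abs(pbest_ij - pbestg_j) + eps_aiw)  # Eq. (1)
    # Eq. (2): small AIW (local search) -> larger w; large AIW -> smaller w.
    return 1.0 - alpha_aiw / (1.0 + math.exp(-aiw))

def probability_of_mutation(f_pbest, ef_max, ef_min):
    """Eqs. (3)-(4): mutation probability from the spread of personal-best fitness."""
    ds = abs(max(f_pbest) - min(f_pbest))      # Eq. (3)
    pr_m = (ds - ef_max) / (ef_min - ef_max)   # Eq. (4)
    return max(0.0, min(1.0, pr_m))            # clamp (illustrative assumption)

def gaussian_mutation(x_i, bounds, omega_m=0.1, rng=random):
    """Eqs. (5)-(6): mutate one randomly chosen dimension of particle x_i."""
    j = rng.randrange(len(x_i))                # random dimension D_j
    lo, hi = bounds[j]
    std_m = omega_m * (hi - lo)                # Eq. (6)
    x_m = list(x_i)
    x_m[j] = x_i[j] + rng.gauss(0.0, std_m)    # Eq. (5)
    if not lo <= x_m[j] <= hi:                 # step (vii): random wall boundary
        x_m[j] = lo + (hi - lo) * rng.random()
    return x_m
```

With EF_pbesti,max = 1 and EF_pbesti,min = 0 as in Fig. 1, pr_M = 1 - ds: a widely spread swarm (ds near 1) is rarely mutated, while a converging swarm (ds near 0) is mutated with probability approaching 1.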

(xii) Assess swarm termination: If the termination criteria are satisfied, stop the simulation (Algorithm 1, lines 36-38). Else, repeat the loop (Algorithm 1, lines 10-36). The measures used for the termination assessment are as follows.

2.5. Optimization termination

Optimization convergence is assessed based on the following three measures:

(a) Global best, pbestg;
(b) Objective distance metric, ds (Eq. (3)); and
(c) Distribution of the spread of personal best fitness performances, σ_t, the standard deviation of the personal best fitness performances of each particle in the swarm.

Even with a stagnant global best solution, pbestg, the swarm may still be active in pursuing improved solutions. Accordingly, additional measures are needed to avoid premature termination, by effectively assessing the degree of search activity present in the swarm. The disparity in fitness between the best and worst performing particles, ds, is measured at each iteration (through F_pbesti,max and F_pbesti,min). As ds is a measure of fitness performance across two particles, an additional measure based on the standard deviation of the personal best fitness performances of the entire swarm, σ_t, is also proposed. Collectively these measures map the search patterns of the swarm, with optimization termination occurring when they are stagnant over a successive, user-defined and empirically tested number of iterations.

The pseudo-code of the AM-PSO is presented in Algorithm 1. The commands in lines 20-35 are representative of the developed AM-PSO principles.

Algorithm 1. AM-PSO algorithm.
1: for all particles i do
2:   Initialize position x_i^D(k)
3:   Initialize velocity v_i^D(k) = 0
4:   Compute fitness f^D
5:   Set personal best pbest_i(k)
6:   Set global best pbestg(k)
7: end for
8: k = 0
9: while termination criteria not satisfied do
10:   for all particles i do
11:     Update velocity of the particles by:
12:       v_i^D(k+1) = w · v_i^D(k) + c1 · r1 · [pbest_i^D(k) - x_i^D(k)] + c2 · r2 · [pbestg^D(k) - x_i^D(k)]
13:     Update position of the particles by:
14:       x_i^D(k+1) = x_i^D(k) + v_i^D(k+1)
15:     Apply random initialization boundary wall condition for dimensional-space violated agents:
16:       x_ij(k) = x_min,j + (x_max,j - x_min,j) · rand
17:     Evaluate fitness of the new position
18:     Update personal best position
19:   end for
20:   INPUT: Estimate of personal best fitness range pf [Section 2.4, step (i)] and ds [step (ii)] using data from line 17
21:   OUTPUT: Probability-of-mutation pr_M [step (iii), Eq. (4)]
22:   Generate an (n × 1) matrix of random numbers mr_i [step (iv)]
23:   for all particles i do
24:     if pr_M > mr_i [steps (v) and (vi)] then
25:       Randomly mutate dimension j [step (vi), Eq. (5)]
26:       Apply random initialization boundary wall condition for dimensional-space violated agents [step (vii)]:
27:         x_ij(k) = x_min,j + (x_max,j - x_min,j) · rand
28:       Evaluate fitness of the new position by mutation, F_i,M [step (viii)]
29:       Apply position update rule [step (ix)]
30:     else
31:       No mutation
32:     end if
33:     Update personal best position [step (x)]
34:     Update global best position [step (xi)]
35:   end for
36:   Assess swarm termination [step (xii)]
37:   k = k + 1
38: end while
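Algorithm 1 can be rendered end-to-end as a compact Python sketch. This is an illustrative toy, not the paper's tuned implementation: fixed constants w = 0.7, c1 = c2 = 1.5 stand in for the adaptive inertia weight of Eqs. (1)-(2), a fixed iteration budget stands in for the termination measures of Section 2.5, and `ef` is the user-estimated fitness range of step (i).

```python
import random

def lhs(n, bounds, rng):
    """Section 2.1: Latin Hypercube Sampling - one point per stratum in
    every dimension, with strata shuffled independently per dimension."""
    cols = []
    for lo, hi in bounds:
        pts = [lo + (hi - lo) * (s + rng.random()) / n for s in range(n)]
        rng.shuffle(pts)
        cols.append(pts)
    return [[col[i] for col in cols] for i in range(n)]

def am_pso(f, bounds, ef, n=20, iters=200, omega_m=0.1, seed=1):
    """Toy rendering of Algorithm 1, minimizing f over box `bounds`."""
    rng = random.Random(seed)
    d = len(bounds)
    x = lhs(n, bounds, rng)                       # lines 1-7: initialization
    v = [[0.0] * d for _ in range(n)]
    fx = [f(p) for p in x]
    pbest, fp = [list(p) for p in x], list(fx)
    fg = min(fp)
    g = list(pbest[fp.index(fg)])
    for _ in range(iters):                        # lines 9-38
        for i in range(n):                        # lines 10-19: PSO update
            for j in range(d):
                v[i][j] = (0.7 * v[i][j]
                           + 1.5 * rng.random() * (pbest[i][j] - x[i][j])
                           + 1.5 * rng.random() * (g[j] - x[i][j]))
                x[i][j] += v[i][j]
                lo, hi = bounds[j]
                if not lo <= x[i][j] <= hi:       # random wall boundary (line 16)
                    x[i][j] = rng.uniform(lo, hi)
            fx[i] = f(x[i])
            if fx[i] < fp[i]:
                fp[i], pbest[i] = fx[i], list(x[i])
                if fx[i] < fg:
                    fg, g = fx[i], list(x[i])
        ds = abs(max(fp) - min(fp))               # Eq. (3)
        pr_m = max(0.0, min(1.0, (ds - ef[0]) / (ef[1] - ef[0])))  # Eq. (4)
        for i in range(n):                        # lines 23-35: adaptive mutation
            if pr_m > rng.random():
                j = rng.randrange(d)
                lo, hi = bounds[j]
                xm = list(x[i])
                xm[j] += rng.gauss(0.0, omega_m * (hi - lo))  # Eqs. (5)-(6)
                if not lo <= xm[j] <= hi:         # random wall boundary (line 27)
                    xm[j] = rng.uniform(lo, hi)
                fm = f(xm)
                if fm < fx[i]:                    # step (ix)
                    x[i], fx[i] = xm, fm
                if fm < fp[i]:                    # steps (x)-(xi)
                    fp[i], pbest[i] = fm, list(xm)
                    if fm < fg:
                        fg, g = fm, list(xm)
    return g, fg
```

For example, `am_pso(lambda p: sum(t * t for t in p), [(-5.0, 5.0)] * 2, ef=(50.0, 0.0))` converges near the origin of a 2-D sphere function.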


3. AM-PSO numerical test validation

Benchmark test functions are used for AM-PSO design demonstration and validation. Ten test functions, f, are selected with varying degrees of complexity, of which four (f7-f10) are from the CEC 2005 benchmark suite [48,49]:

(f1) Griewank function:
f_1(x) = Σ_{i=1}^{n} x_i²/4000 - Π_{i=1}^{n} cos(x_i/√i) + 1

(f2) Rosenbrock function:
f_2(x) = Σ_{i=1}^{n-1} [100(x_{i+1} - x_i²)² + (x_i - 1)²]

(f3) Rastrigin's function:
f_3(x) = Σ_{i=1}^{n} (x_i² - 10 cos(2πx_i) + 10)

(f4) 2n minima function:
f_4(x) = (1/n) Σ_{i=1}^{n} (x_i⁴ - 16x_i² + 5x_i)

(f5) Weierstrass function:
f_5(x) = Σ_{i=1}^{n} [Σ_{k=0}^{kmax} a^k cos(2πb^k (x_i + 0.50))] - n Σ_{k=0}^{kmax} a^k cos(2πb^k · 0.50),  a = 0.50, b = 3, kmax = 20

(f6) Salomon's function:
f_6(x) = 1 - cos(2π √(Σ_{i=1}^{n} x_i²)) + 0.10 √(Σ_{i=1}^{n} x_i²)

(f7) Shifted Rosenbrock function:
f_7(x) = Σ_{i=1}^{n-1} [100(z_i² - z_{i+1})² + (z_i - 1)²] + fbias5,  fbias5 = 390,  z = x - o + 1

(f8) Shifted rotated Ackley function with global optimum on bounds:
f_8(x) = -20 exp(-0.20 √((1/n) Σ_{i=1}^{n} z_i²)) - exp((1/n) Σ_{i=1}^{n} cos(2πz_i)) + 20 + e + fbias6,
where fbias6 = -140 and z = (x - o) · M0; M0 is a linear transformation matrix with condition number 100.

(f9) Shifted Rastrigin function:
f_9(x) = Σ_{i=1}^{n} (z_i² - 10 cos(2πz_i) + 10) + fbias11,  fbias11 = -330,  z = x - o

(f10) Shifted rotated Rastrigin function:
f_10(x) = Σ_{i=1}^{n} (z_i² - 10 cos(2πz_i) + 10) + fbias12,  fbias12 = -330,  z = (x - o) · M0,
where M0 is a linear transformation matrix with condition number 2.

Table 1. AM-PSO test validation case one properties [29].

Test function | Search space = initialization space | Optimum point | Modality
f1(x)         | [-600, 600]^n                       | 0             | Multimodal
f2(x)         | [-30, 30]^n                         | 0             | Unimodal
f3(x)         | [-5.12, 5.12]^n                     | 0             | Multimodal
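For concreteness, three of the unshifted functions above written as plain, n-dimensional Python. The shifted and rotated CEC variants additionally require the published offset vectors o and matrices M0, which are not reproduced here.

```python
import math

def griewank(x):
    """f1: sum(x_i^2)/4000 - prod(cos(x_i / sqrt(i))) + 1; global minimum 0 at x = 0."""
    s = sum(xi * xi for xi in x) / 4000.0
    p = 1.0
    for i, xi in enumerate(x, start=1):  # note the 1-based index under the sqrt
        p *= math.cos(xi / math.sqrt(i))
    return s - p + 1.0

def rastrigin(x):
    """f3: sum(x_i^2 - 10 cos(2 pi x_i) + 10); global minimum 0 at x = 0."""
    return sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0 for xi in x)

def rosenbrock(x):
    """f2: sum of 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2; global minimum 0 at x = 1."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1.0) ** 2
               for i in range(len(x) - 1))
```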

As was discussed in Section 2.3, design-of-experiment simulations were performed to compare the fitness and computational efficiency (as a function of iteration count) performances of the different wall boundary conditions on uni- and multimodal problems. The results confirmed that there is no one universal wall boundary condition that works best across all problems. The random wall boundary condition resulted in convergence to the global optimum for all problems, yet computational efficiency was affected for cases where the global optimum was at the boundary of the design space. Considering the balance between solution optimality and computational efficiency, the random wall boundary condition exhibited the most desirable outcome relative to the other tested methods. With this, the intervals of the design variables, and hence the solution topology, must be mapped with due care to mitigate this undesirable effect. The efficacy of the AM-PSO is now demonstrated against off-the-shelf methods [29,50]. Two sets of design validation performances are presented. In the first set, the work by Alireza [29] is used as a benchmark for design performance comparison of the AM-PSO against the conventional PSO, a Nonlinearly Decreasing Weight PSO (NDWPSO), a Real-Coded Genetic Algorithm and an Adaptive Particle Swarm Optimization (APSO) method. The test properties used in the verification process match the settings applied in [29] and are presented in Table 1. To facilitate a valid comparative analysis of the AM-PSO against the works by Alireza [29], the dimension size was set at 10, the maximum number of generations to convergence at 1000, and the swarm population size at 20 particles. The simulations were run over 30 independent trials and the converged fitness means and standard deviations extracted. The results in Table 2 confirm the merits of the AM-PSO method: the mean fitness values are considerably lower than those of the off-the-shelf algorithms [29].
The fitness standard deviation of each function over the 30 independent trials further confirms the consistency with which the AM-PSO converges to the global optimum. Accordingly, the benefits of the AM-PSO, with performance evaluation restricted to 1000 iterations, are confirmed relative to the design processes reported in the literature. In the second test, the complexity of the test problems and the size of the simulation test envelope (function evaluations) are increased. The functions f7-f10 from the CEC 2005 benchmark suite [48,49] are added to the test library to further demonstrate and verify the effectiveness of the AM-PSO on complex problems. In the validation process, the test functions are modeled with 10 dimensions; the swarm search and initialization ranges, including the global optimum values of the respective functions, are summarized in Table 3. The performance of the AM-PSO for the functions listed in Table 3 is tested against the algorithms outlined by Juang et al.


Table 2
Test function simulation results – clamped convergence at 1000 iterations (mean ± std. dev. over 30 trials).

Function  PSO [29]           NDWPSO [29]        GA [29]            APSO [29]          AM-PSO
f1        0.1543 ± 0.0271    0.1294 ± 0.0248    0.2253 ± 0.3473    0.0983 ± 0.0054    0.0621 ± 0.0296
f2        25.3463 ± 4.1323   20.4258 ± 3.6832   32.4326 ± 9.3247   5.8467 ± 1.3471    1.5778 ± 1.8300
f3        6.1057 ± 0.1973    5.7735 ± 0.1823    7.8546 ± 0.2267    5.1565 ± 0.1358    7.0660e−05 ± 3.3512e−04

Bold face indicates the best result for the respective PSO variant model.

Table 3
Summary of search ranges and global optimum values of the select test functions.

Test function  Search range = initialization space  Global optimum
f3(x)          [−5.12, 5.12]^n                      0
f4(x)          [−5, 5]^n                            −78.3323
f5(x)          [−0.50, 0.50]^n                      0
f6(x)          [−100, 100]^n                        0
f7(x)          [−100, 100]^n                        390
f8(x)          [−32, 32]^n                          −140
f9(x)          [−5, 5]^n                            −330
f10(x)         [−5, 5]^n                            −330

[50]. The maximum number of fitness function evaluations in the AM-PSO runs, inclusive of the additional evaluations due to mutation, is set at 100,000 as per the CEC 2005 benchmark suite [49]. The results documented by Juang et al. [50] are with 300,000 evaluations: a swarm of 30 particles simulated for 10,000 iterations. The AM-PSO simulations use 30 particles, with optimization terminated once 100,000 fitness function evaluations are reached. The simulations are executed over 30 independent trials.

To establish whether the performances of the AM-PSO and the other algorithms differ with statistical significance, the Wilcoxon rank-sum test is performed. The analysis is set up as follows:

1. Statement of algorithm performance hypothesis: For the two algorithms, with the AM-PSO as method A and method B from the literature, the fitness distributions from the 30 trials are compared using the null-hypothesis:
   - H0: F_A = F_B. There is no difference in the mean fitness between the two methods.
   - H1: F_A < F_B. There is a difference in mean fitness between the two methods.
2. Significance level: The test is performed at a significance level of α = 0.05.
3. Interpretation of results: The results are presented in Table 4. A star (*) designation represents the best reported algorithm in the literature [50]. As part of the statistical analysis, if the p-value from the rank-sum test is less than 0.05, the results are referred to as significantly different and the null-hypothesis, H0, is rejected at the defined significance level. If the p-value is greater than 0.05, then we fail to reject the null-hypothesis and conclude that there is not enough evidence in the data to suggest that either algorithm A or B is better than the other. Accordingly, the measure h refers to the test of statistical significance between the AM-PSO and the respective off-the-shelf algorithm. If the data support hypothesis H0, then h = 0 (same performance).
If hypothesis H1 is concluded, then the means and standard deviations are examined to establish h: h = 1 (better) if the AM-PSO is superior to the literature-based method(s), or h = −1 (worse) otherwise. The number of instances that yield h = −1, 0, 1 (comparison of the AM-PSO versus the best off-the-shelf method) across the eight test functions is given as a footnote to Table 4.
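The comparison procedure above — a Wilcoxon rank-sum test at α = 0.05 followed by the h classification — can be sketched as follows. This is an illustrative stdlib-only implementation using the large-sample normal approximation for the p-value; the paper's analysis may use an exact or software-package variant:

```python
import math

def rank_sum_p(a, b):
    """Two-sided p-value for the Wilcoxon rank-sum test (normal
    approximation, adequate for the 30-trial samples used here)."""
    pooled = list(a) + list(b)
    order = sorted(range(len(pooled)), key=lambda i: pooled[i])
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):                 # assign average ranks to ties
        j = i
        while j + 1 < len(pooled) and pooled[order[j + 1]] == pooled[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    n1, n2 = len(a), len(b)
    w = sum(ranks[:n1])                    # rank sum of sample A
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

def compare(fit_a, fit_b, alpha=0.05):
    """h = 0 if no significant difference; otherwise 1 when A has the
    lower (better) mean fitness for a minimization problem, else -1."""
    p = rank_sum_p(fit_a, fit_b)
    if p > alpha:
        return 0, p
    better = sum(fit_a) / len(fit_a) < sum(fit_b) / len(fit_b)
    return (1 if better else -1), p

# Sample A clearly better than B -> h = 1; identical samples -> h = 0.
a = [0.01 * i for i in range(30)]
b = [1.0 + 0.01 * i for i in range(30)]
h1, p1 = compare(a, b)
h0, p0 = compare(a, a)
assert (h1, h0) == (1, 0) and p1 < 0.05
```

The footnote score of Table 4 is then the tally of h over the eight test functions against the best off-the-shelf method.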

The results confirm that the AM-PSO has performance gains. Statistically, there were no significant differences in mean fitness between the AM-PSO and the best reported method in [50] for functions f3–f5 (h = 0, same). Yet for function f6, the AM-PSO had a superior mean fitness over the 30 sample trials (p < 0.05) relative to the best methods documented in the literature (AFPSO and AFPSO-Q1 in [50]), with convergence to the global optimum achieved (h = 1, better).

The efficacy of the AM-PSO as a function of the rate of convergence to the optimum was further demonstrated. Convergence to the global optimum with 100% consistency was achieved for functions f3–f6, with a fitness standard deviation of zero. This performance trait was not matched with such distinction by the other methods.

The performance of the AM-PSO on complex non-biased test functions (f7–f10) from the CEC 2005 benchmark suite [48,49] was further demonstrated. The mean fitness for functions f7 and f9 is about the global optimum with p < 0.05, such that h = 1 (better); a performance trait that was not achieved by the other methods. Although the AM-PSO exhibits a mean fitness for function f10 that is closer to the optimum than the other methods, the test of statistical significance supports the null-hypothesis, indicating no difference between the mean fitness of the AM-PSO and the best reported method in the literature (f10: p > 0.05, h = 0, same). The validation results further confirmed that in one experiment, with function f8, the AM-PSO had a slightly higher mean converged fitness than the best method in the literature. In this case, the absolute mean fitness difference between the AM-PSO and DMSPSO is minimal at |(−1.198e+02) − (−1.200e+02)| = 0.20. The test of statistical significance indicated no performance difference between the two algorithms (f8: p > 0.05, h = 0, same).

Demonstration of the AM-PSO algorithm confirmed performance improvements over the other methods.
The established performance gains are achieved with far fewer function evaluations (100,000 versus 300,000) than the settings used in the literature [50]. At worst, the performance of the AM-PSO with 200,000 fewer function evaluations is equivalent to the best off-the-shelf algorithm. The favorable results established on the complex CEC 2005 benchmark suite functions confirm that the AM-PSO method is well suited for airfoil design applications, where solution optimality and computational efficiency are two critical design factors.

Table 4
Comparison of simulation results (AM-PSO = 100,000 function evaluations; off-the-shelf methods = 300,000 function evaluations [50]). Entries are the mean ± std. dev. of the converged fitness over 30 independent trials.

Algorithm    f3                       f4                       f5                       f6
SPSO^a [51]  6.633e−01 ± 9.381e−01    −7.830e+01 ± 1.826e−01   0 ± 0                    1.265e−01 ± 4.498e−02
QIPSO        1.747e+00 ± 3.445e+00    −7.824e+01 ± 5.162e−01   7.563e+00 ± 1.484e+00    1.133e−01 ± 3.457e−02
UPSO         2.520e+00 ± 1.8655e+00   −7.145e+01 ± 3.835e+00   2.083e−01 ± 6.617e−01    9.987e−02 ± 2.323e−10
FIPS         1.679e+01 ± 6.411e+01    −7.065e+01 ± 4.372e+00   3.649e−01 ± 6.283e−01    3.150e−01 ± 2.102e−01
DMSPSO       1.293e+00 ± 1.153e+00    −78.3323 ± 1.445e−14     0 ± 0                    1.032e−01 ± 1.826e−02
CLPSO        1.300e−02 ± 4.489e−02    −78.3323 ± 1.445e−14     0 ± 0                    9.987e−02 ± 3.066e−11
AFPSO        5.870e−07 ± 2.115e−06    −78.3323 ± 1.445e−14     0 ± 0                    9.987e−02 ± 2.823e−17
AFPSO-Q1     1.520e−12 ± 5.651e−12    −78.3323 ± 1.445e−14     0 ± 0                    9.987e−02 ± 2.823e−17
AM-PSO       0 ± 0                    −78.3323 ± 0             0 ± 0                    9.000e−02 ± 0

Algorithm    f7                       f8                       f9                       f10
SPSO^a [51]  1.824e+07 ± 3.014e+07    −1.197e+02 ± 7.011e−01   −3.208e+02 ± 8.462e+00   −3.083e+02 ± 6.277e+00
QIPSO        4.598e+07 ± 1.323e+08    −1.197e+02 ± 1.307e−01   −3.252e+02 ± 5.886e+00   −3.080e+02 ± 8.951e+00
UPSO         1.680e+07 ± 6.900e+07    −1.197e+02 ± 4.762e−02   −3.260e+02 ± 4.081e+00   −3.107e+02 ± 5.918e+00
FIPS         1.306e+03 ± 3.857e+03    −1.200e+02 ± 9.567e−02   −3.128e+02 ± 7.815e+00   −3.085e+02 ± 9.423e+00
DMSPSO       5.302e+06 ± 1.593e+07    −1.200e+02 ± 6.101e−02   −3.276e+02 ± 2.340e+00   −3.196e+02 ± 3.296e+00
CLPSO        4.027e+02 ± 2.236e+01    −1.197e+02 ± 6.916e−03   −3.290e+02 ± 8.664e−01   −3.118e+02 ± 5.437e+00
AFPSO        4.205e+02 ± 3.462e+01    −1.198e+02 ± 6.774e−02   −3.293e+02 ± 6.986e−01   −3.139e+02 ± 3.860e+00
AFPSO-Q1     3.998e+02 ± 2.062e+01    −1.198e+02 ± 6.775e−02   −3.295e+02 ± 6.788e−01   −3.157e+02 ± 4.080e+00
AM-PSO       3.906e+02 ± 9.099e−01    −1.198e+02 ± 5.921e−02   −3.300e+02 ± 9.120e−06   −3.199e+02 ± 4.010e+00

^a Shi and Eberhart's [51] original inertia weight algorithm, developed to balance local exploitation with global exploration.
^b Best algorithm reported in the literature [50] (not including AM-PSO).
h: performance of the AM-PSO against the respective off-the-shelf algorithm (assessment by statistical significance) with categories: −1 (worse), 0 (same), and 1 (better). AM-PSO score (against the best off-the-shelf method) for each performance category (across 8 test functions): −1 (worse) = 0, 0 (same) = 5, and 1 (better) = 3. Bold face indicates the best overall result (including AM-PSO).

4. AM-PSO for aerospace design applications

The AM-PSO algorithm is used for airfoil design to achieve the following flight operations with enhanced aerodynamic efficiency: (a) low-speed flight for High Altitude Long Endurance (HALE) performances that include Intelligence, Surveillance and Reconnaissance (ISR) missions; and (b) rapid dash segments for the Suppression of Enemy Air Defense (SEAD) units at transonic Mach numbers. The optimum airfoils developed by the AM-PSO at the two flight conditions are compared with off-the-shelf concepts. Accordingly, the feasibility of the developed optimization method on complex real-life design problems is assessed.

4.1. HALE airfoil design definition

The design requirements of an airfoil to achieve HALE performances are as follows [52]: (a) the maximum lift coefficient, cl_max, at Reynolds number Rn = 3.0 × 10^6 must be greater than 1.76; and (b) a low profile drag coefficient, cd, at Rn = 4.0 × 10^6 for a cruise target lift coefficient, cl^T, of 0.40 and over a climb lift coefficient envelope of cl^T = 0.50–1.00. The following constraints apply: (a) the extent of the favorable pressure gradient, dcp/dx < 0, on the airfoil upper surface at the cruise lift coefficient, cl, must not exceed 30-percent chord; (b) the airfoil thickness-to-chord ratio, t/c, must be greater than 12 percent; and (c) the pitching-moment coefficient at zero lift, cm0, must satisfy cm0 ≥ −0.10.

The design objectives and constraints are transformed into the AM-PSO algorithm as follows:

At Rn = 4.0 × 10^6:
  J(x)_minimize: cd_climb & cd_cruise
  Subject to:
    ⇒ dcp/dx (x, α, M∞) ≤ 0.30c
    ⇒ cm0 (x, α, M∞) ≥ −0.10
    ⇒ t/c_max (x) ≥ 0.12
At Rn = 3.0 × 10^6:
    ⇒ cl_max (x, α_max, M∞) ≥ 1.76          (7)

where J_minimize is the objective function that is to be minimized, x is the vector of airfoil shape function design variables, α is the flow angle-of-attack in degrees, and M∞ is the freestream Mach number. The cl^T is not defined as a constraint since the aerodynamic fitness function evaluator, XFOIL [53], which is used in this analysis, has a built-in trimming function that closely estimates the flow angle needed to achieve the target lift coefficient. The remaining constraints are imposed in the AM-PSO as penalty functions added to the objective function to ensure that minimum drag performance is not obtained by a constraint-violating design. A static penalty function, normalized to (0, 1) if the aerodynamic and/or geometric constraints are violated, is introduced such that

  f_p(x) = J(x) + Σ_{j=1..ηcon} c_j          (8)

where:
  f_p, fitness as a result of the penalty;
  c_j, expected maximum cost to repair constraint j (i.e. alter x so that it is feasible), normalized to (0, 1);
  ηcon, number of constraints.

As the fitness for a constraint violation is normalized to the range (0, 1), the magnitude of cd at the climb and cruise segments is scaled to transpose an even contribution to the objective term relative to the penalty magnitude. Hence, a multiple of 100 is applied to cd_climb and cd_cruise to ensure that the contribution of drag is not diminished by the normalized magnitude of the penalty function. The objective function then becomes:

  J(x) = 100 × cd_climb + 100 × cd_cruise          (9)

4.2. AM-PSO algorithm setup

The AM-PSO is set up as follows:

(a) Swarm population size: 30.
(b) Estimate of the particle personal best fitness range factor: pf = (EF_pbest_max, EF_pbest_min) = (6, 0). The EF_pbest_max measure is based on the assumption that the worst solution is a particle with all four constraints, c_j, in Eq. (7) penalized to the maximum limit of one, such that c_j,pen = 1. An excessively high drag coefficient of cd_max = 0.01 for the climb and cruise segments is further assumed, thus EF_pbest_max becomes:

  EF_pbest_max = (c_clmax,pen + c_dcp/dx,pen + c_t/c,pen + c_cm0,pen + c_cd,climb_max + c_cd,cruise_max)
               = [1 + 1 + 1 + 1 + (0.01 × 100) + (0.01 × 100)] = 6

The magnitude of EF_pbest_min = 0 is not realistic, since an airfoil with zero drag is not aerodynamically plausible even when all constraints are achieved. Despite this point, the related measure is set at zero with the intent of ensuring that the magnitude of ds (Eq. (3)) is less than (EF_pbest_max − EF_pbest_min). Accordingly, pr_M (Eq. (4)) will always be a positive integer so that particle(s) for mutation can be logically selected [Section 2.4 – step (v)].
(c) Mutation scalar factor, ω_M = 0.05 [Section 2.4 – step (vi), Eq. (6)].
(d) Optimization termination is activated when the three swarm search assessment measures are stagnant over 25 successive iterations (Section 2.5).
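Eqs. (8) and (9), together with the worst-case reasoning behind EF_pbest_max = 6, can be illustrated with a short sketch. The drag coefficients and repair costs below are illustrative values, and `penalized_fitness` is a hypothetical helper name:

```python
def penalized_fitness(cd_climb, cd_cruise, constraint_costs):
    """Eqs. (8)-(9): objective 100*cd_climb + 100*cd_cruise plus a static
    penalty, the sum of normalized repair costs c_j in [0, 1], one per
    constraint (c_j = 0 when the constraint is satisfied)."""
    assert all(0.0 <= c <= 1.0 for c in constraint_costs)
    objective = 100.0 * cd_climb + 100.0 * cd_cruise
    return objective + sum(constraint_costs)

# Feasible candidate: drag terms only (illustrative cd values).
f_ok = penalized_fitness(0.0062, 0.0055, [0.0, 0.0, 0.0, 0.0])

# Worst case assumed for EF_pbest_max: all four constraints fully
# penalized (c_j = 1) and cd = 0.01 for both segments -> fitness of 6.
f_worst = penalized_fitness(0.01, 0.01, [1.0, 1.0, 1.0, 1.0])
assert abs(f_worst - 6.0) < 1e-12
```

The ×100 scaling keeps a typical drag contribution (order one) commensurate with a unit-normalized constraint penalty, so an infeasible low-drag shape cannot outrank a feasible one of similar drag.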

4.3. HALE airfoil shape optimization

Airfoil shapes are generated using the PARSEC approach by Sobieczky [54], which uses 11 design variables to represent an airfoil contour. The low-fidelity solver XFOIL [53] is used in the optimization cycle to establish the aerodynamic coefficients. The profiles of the baseline NLF(1)-0416 [52] and the AM-PSO derived shape, with the corresponding aerodynamic performance data, are compared in Fig. 2.

The geometry of the optimum airfoil by the AM-PSO in Fig. 2(a) closely resembles the baseline NLF(1)-0416. Due to the subtle shape variances of the optimum shape relative to the baseline, lower drag performance in Fig. 2(b) has been achieved over an extended lift coefficient envelope. At the target cruise lift coefficient, cl^T = 0.40, the optimum airfoil has a delayed boundary layer flow transition in Fig. 2(c), that is aft of the leading edge on the airfoil upper and lower

Fig. 2. Comparison of the NLF(1)-0416 airfoil with the AM-PSO derived solution at Mach = 0.10 and Rn = 4.0 × 10^6: (a) NLF(1)-0416 airfoil and AM-PSO; (b) drag polar of NLF(1)-0416 and AM-PSO at Rn = 4.0 × 10^6; (c) coefficient of pressure of the NLF(1)-0416 airfoil and AM-PSO at cruise cl^T = 0.40; and (d) cl_max of NLF(1)-0416 and AM-PSO at Rn = 3.0 × 10^6.

Table 5
Comparison of the constraints performance of the AM-PSO derived shape with the baseline profile.

Design approach  cl_max (Rn = 3 × 10^6)  %cl_max gain^a  dcp/dx   cm0
NLF(1)-0416      1.7500                  –               0.2493   −0.0994
AM-PSO           1.7440                  −0.34           0.2956   −0.0901

^a Relative to the benchmark NLF(1)-0416 airfoil [52].

surfaces relative to the baseline shape. An additional aerodynamic benefit achieved by the AM-PSO derived airfoil is the delay in stall angle without compromising the maximum lift coefficient, cl_max, in Fig. 2(d).

The state of the design constraints of the two shapes is compared in Table 5. The AM-PSO profile satisfies two of the three constraints. The condition that cl_max ≥ 1.76 is not achieved, with the performance variance between the two shapes limited to −0.34%.

4.4. UAV high-speed airfoil design

The AM-PSO coupled with the PARSEC shape parameterization approach is further used for the re-design of the Royal Aircraft Establishment RAE 2822 airfoil, with the objective of minimizing the wave drag at transonic Mach numbers. The design is defined as follows:

At Rn = 2.7 × 10^6 and Mach = 0.74: minimize cd

Subject to:
  ⇒ cl^T (x) = 0.733
  ⇒ t/c_max (x) ≥ 0.12          (10)

where x is the vector of airfoil shape function design variables (PARSEC coefficients). A Reynolds-averaged Navier–Stokes equations solver with the k–ω Shear Stress Transport transition turbulence model by Menter et al. [55] in ANSYS FLUENT [56] is used to establish the airfoil aerodynamic coefficients. The setup of the AM-PSO follows the settings used in the NLF(1)-0416 analysis, except that the swarm population size is set at 20. In addition, after the AM-PSO simulation, a Quasi-Newton gradient optimization method (GM) in MATLAB [57] is used: the AM-PSO derived optimum serves as the starting point for a gradient analysis that facilitates a local search about the global solution region. The airfoils produced by the two optimization approaches, with the corresponding distributions of the pressure coefficient, cp, are compared with the baseline RAE 2822 in Fig. 3.

The shape derived by the hybrid optimization approach follows the contour of the AM-PSO profile in Fig. 3(a). Relative to the original RAE 2822 airfoil, the chord location of the maximum thickness point of the AM-PSO and the AM-PSO with gradient method is further aft of the leading edge. The disparity in contour profile between the two optimization approaches is limited to the surface modeling at the trailing edge region. The hybrid optimization derived profile is modeled with a divergent trailing edge contour, which is typical of supercritical airfoils.

The cp distribution of the airfoils is further modeled in Fig. 3(b). The AM-PSO method improves drag performance by lowering the
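The hybrid step — seeding a local gradient search with the swarm-derived optimum — can be illustrated as follows. The paper uses MATLAB's Quasi-Newton routine on CFD-based fitness; this sketch substitutes a simple finite-difference steepest descent with backtracking, and a toy quadratic stands in for the drag objective:

```python
def fd_gradient(f, x, h=1e-6):
    """Central-difference gradient estimate of f at x."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def local_refine(f, x0, step=0.1, iters=200):
    """Gradient descent with backtracking, started from the
    swarm-derived optimum x0 to polish the solution locally."""
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        g = fd_gradient(f, x)
        s = step
        while s > 1e-12:
            trial = [xi - s * gi for xi, gi in zip(x, g)]
            ft = f(trial)
            if ft < fx:                 # accept the first improving step
                x, fx = trial, ft
                break
            s *= 0.5                    # otherwise shrink the step
        else:
            break                       # no descent found: converged
    return x, fx

# Toy surrogate standing in for the CFD drag evaluation; the start point
# plays the role of the AM-PSO global optimum.
quad = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 2.0) ** 2
x_opt, f_opt = local_refine(quad, [0.6, -1.5])
assert f_opt < 1e-8
```

The design intent is unchanged from the paper: the population method locates the promising basin cheaply, and the gradient phase tightens convergence within it.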


Fig. 3. Transonic airfoil optimization at cl^T = 0.733 by the AM-PSO and a hybrid design approach encompassing a gradient optimization analysis post-AM-PSO simulation: (a) comparison of airfoil shapes and (b) comparison of coefficient of pressure distribution.

Table 6
Aerodynamic performance of a transonic airfoil designed by the AM-PSO and the AM-PSO/gradient method.

Design method  t/c     AoA (°)  cl     cd      %cd gain^a
AM-PSO         0.1200  −0.62    0.745  0.0144  −27
AM-PSO/GM      0.1210  −0.012   0.734  0.0128  −35

^a Relative to the baseline RAE 2822 airfoil [58].

intensity of the airfoil shock wave on the upper surface (x/c ≈ 0.28) relative to the baseline RAE 2822 profile. The integration of the gradient optimization method post-AM-PSO analysis further reduces the intensity of the shock wave in this region. In addition, the two optimization phases have resulted in an airfoil with a smooth and gradual shock wave development instead of the rapid onset exhibited by the baseline RAE 2822 shape.

The constraints on profile thickness (t/c_max ≥ 0.12) and target lift coefficient (cl^T = 0.733) were achieved by both optimization methods in Table 6. The AM-PSO derived solution resulted in a drag reduction of ≈27% relative to the baseline RAE 2822 airfoil [58]. Drag is minimized further, to ≈35%, with the incorporation of a gradient optimization analysis, which is greater than the 29.7% magnitude recently established by Han and Zingg [59]. The results of the AM-PSO/GM approach are in agreement with the findings of Namgoong [8], where a modified variant of the GA was used for the same re-design effort (matching Rn and flow angle-of-attack).

The computational efficiency of the optimization process is further assessed to verify the merits of the AM-PSO relative to the GA [8]. The total number of fitness function evaluations to convergence is defined by the summation of solver calls in the AM-PSO and gradient optimization phases. The hybrid optimization approach encompassing the AM-PSO with the GM in Table 7 represents ≈80% fewer solver calls than the GA result by Namgoong [8], without compromising solution optimality. As noted by Zingg et al. [60], consideration of the relative cost of the optimization process is only one of several important factors that must be weighed when selecting an

Table 7
Transonic airfoil optimization: assessment of total function evaluations.

Data type       AM-PSO  GM     Total function evaluations
Literature [8]  –       –      107,520
Proposed works  19,742  1,209  20,951

optimization algorithm for shape design analysis. It is suggested that population-based methods (GA/PSO) are likely to be better suited to preliminary design efforts, where looser convergence tolerances are accepted and understanding the influence of design trade-offs is critical. Gradient-based methods, including the adjoint approach, are likely to be better suited to detailed design evaluations, where stricter convergence tolerances are required and the range of possible designs under consideration is much narrower. With this, the methodologies presented in this analysis confirm that the AM-PSO is well suited to preliminary airfoil design applications.

5. Summary

The fundamentals of the AM-PSO are as follows: (a) assessment of swarm search diversity; (b) a Gaussian-based mutation operation with probability; and (c) particle position update. The search performances of the worst and best performing particles in the swarm, and hence the diversity, are factored in at each iteration to evaluate whether mutation is warranted. Probability rules are then imposed to identify the particles for mutation using a Gaussian operation. If diversity is wide and spread, the rate of mutation is limited to control function evaluations. As the swarm progresses toward a specific solution region, diversity is reduced and a higher mutation rate follows. Position update rules are then applied to transfer the particles to favorable locations in the solution volume if design improvements are established by mutation.
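The mechanism summarized above can be sketched as follows. This is an illustrative reading, not the paper's exact Eqs. (3)–(6): the diversity measure, probability rule and acceptance step below are simplified assumptions:

```python
import random

def diversity(pbest_fitness, f_range):
    """Normalized spread between the worst and best personal-best
    fitness; f_range is the assumed fitness range (EF_max - EF_min)."""
    return (max(pbest_fitness) - min(pbest_fitness)) / f_range

def adaptive_gaussian_mutation(positions, fitnesses, objective,
                               lower, upper, f_range,
                               omega_m=0.05, rng=random.Random()):
    """Mutate particles with a probability that rises as swarm diversity
    falls; a Gaussian step of scale omega_m * interval width is applied,
    and the mutant replaces the particle only if it improves fitness."""
    p_mut = 1.0 - min(1.0, diversity(fitnesses, f_range))  # low diversity -> high rate
    for i, x in enumerate(positions):
        if rng.random() >= p_mut:
            continue
        mutant = [min(hi, max(lo, xj + rng.gauss(0.0, omega_m * (hi - lo))))
                  for xj, lo, hi in zip(x, lower, upper)]
        f_new = objective(mutant)
        if f_new < fitnesses[i]:        # position update only on improvement
            positions[i], fitnesses[i] = mutant, f_new
    return positions, fitnesses

# Toy run on the sphere function with a nearly converged, low-diversity swarm.
rng = random.Random(7)
sphere = lambda x: sum(v * v for v in x)
pos = [[0.5, -0.4], [0.52, -0.38], [0.49, -0.41]]
fit = [sphere(p) for p in pos]
pos, fit = adaptive_gaussian_mutation(pos, fit, sphere, [-1, -1], [1, 1],
                                      f_range=6.0, omega_m=0.05, rng=rng)
assert min(fit) <= sphere([0.5, -0.4])  # never worse after mutation
```

Because mutants are accepted only on improvement, the extra evaluations can only sharpen the swarm, while the diversity-driven probability keeps their count low early in the search.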


Demonstration of the AM-PSO algorithm confirmed the viability of the method on complex benchmark functions. The AM-PSO consistently delivered acceptable convergence performance relative to off-the-shelf methods. Convergence to the global optimum with a 100% success rate was achieved for select problems from the CEC 2005 benchmark suite [48,49] with minimal computational effort (total function evaluations). The validity of the AM-PSO was further demonstrated on complex aerodynamic design problems at low and high Mach numbers. At low speeds, a considerable drag reduction was established relative to the baseline NLF(1)-0416 airfoil [52] without violating performance and geometry constraints. At transonic flight envelopes, a wave drag reduction of ≈35% was achieved in the re-design of the RAE 2822 airfoil. These results were in agreement with the findings of Namgoong [8], where a modified GA was used for the same problem type. The true benefits of the AM-PSO were realized with a ≈80% reduction in fitness function evaluations relative to the GA [8].

The swarm algorithm developed has further implications from a multidisciplinary perspective. The principles of the proposed method need to be extended for application to problems with greater degrees-of-freedom (D > 100) that are representative of a complete aircraft system instead of being limited to the subsystem level (airfoil). The concepts defined in this analysis provide an avenue to address this critical requirement.

Acknowledgments The authors would like to acknowledge the assistance of Dr. Shen-Lung Tung, National Central University, Taiwan, for providing the test simulation runs of the PSO variants listed in Table 4; and reviewers for their valuable comments which resulted in the improvement of the quality of the presented paper. References [1] J. Kennedy, R. Eberhart, Particle swarm optimization, in: IEEE International Conference on Neural Networks, Perth, Australia, IEEE, 1995, pp. 1942–1948. [2] A. Singh, K. Swarnkar, Power system restoration using particle swarm optimization, Int. J. Comput. Appl. 30 (September (2)) (2011) 25–32. [3] J. Robinson, Y. Rahmat-Samii, Particle swarm optimization in electromagnetics, IEEE Trans. Antennas Propag. 52 (2) (2004) 397–407. [4] S.K. Mehdi Kamal, S. Hessabi, A novel partitioned encoding scheme for reducing total power consumption of parallel bus, in: Advances in Computer Science and Engineering: 13th International CSI Computer Conference, Springer, Berlin, Heidelberg, 9–11 March 2008, pp. 90–97. [5] G. Georgoulas, C. Stylios, V. Chudacek, M. Macas, J. Bernardes, L. Hotska, Classification of fetal heart rate signals based on features selected using the binary particle swarm algorithm, in: World Congress of Medical Physics and Biomedical Engineering, vol. 2, no. 07, 2006, pp. 1156–1159. [6] Y. Chen, H. Zhu, PSO heuristics algorithm for portfolio optimization, in: Advances in Swarm Intelligence, Lecture Notes in Computer Science, vol. 6145, Springer, Berlin, Heidelberg, 2010, pp. 183–190. [7] K. Shimoyama, K.S. Nishiwaki, T.S. Jeong, S. Obayashi, Material design optimization for a sport shoe sole by evolutionary computation and FEM analysis, in: IEEE Congress on Evolutionary Computation (CEC), Barcelona, Spain, IEEE, 27 September 2010, pp. 1–7. [8] H. Namgoong, Airfoil optimization of morphing aircraft (Ph.D. thesis), Aerospace Engineering, Purdue, Indiana, 2005. [9] M. Khurana, H. Winarto, A. 
Sinha, Airfoil optimisation by swarm algorithm with mutation and artificial neural networks, in: 47th Aerospace Sciences Meeting, Orlando, Florida, USA, AIAA, 2009. [10] A. Esmin, G. Lambert-Torres, G. Alvarenga, Hybrid evolutionary algorithm based on PSO and GA mutation, in: Proceedings of the Sixth International Conference on Hybrid Intelligent Systems (HIS 06), Auckland, New Zealand, IEEE, 2006. [11] C.Z. Qiaoling Xu, Gongwang Zhang, A. An, A robust adaptive hybrid genetic simulated annealing algorithm for the global optimization of multimodal functions, in: Control and Decision Conference, IEEE, 2011, pp. 7–12. [12] W. Abdulal, A. Jabas, S. Ramachandram, O.A. Jadaan, Mutation based simulated annealing algorithm for minimizing makespan in grid computing systems, in: Third International Conference on Electronics Computer Technology, IEEE, 2011, pp. 90–94.

[13] Z. Yonghua, X. Jin, Y. Wentong, C. Yong, The advanced ant colony algorithm and its application, in: Third International Conference on Measuring Technology and Mechatronics Automation, IEEE, 2011, pp. 664–667. [14] T. Si, N. Jana, J. Sil, Particle swarm optimization with adaptive polynomial mutation, in: World Congress on Information and Communication Technologies, IEEE, 2011, pp. 143–147. [15] C. Liu, C. Ouyang, P. Zhu, W. Tang, C. Ouyang, An adaptive fuzzy weight PSO algorithm, in: International Conference on Genetic and Evolutionary Computing, vol. 0, 2010, pp. 8–10. [16] J. Xin, G. Chen, Y. Hai, A particle swarm optimizer with multi-stage linearlydecreasing inertia weight, in: CSO 2009 International Joint Conference on Computational Sciences and Optimization, vol. 1, 2009, pp. 505–508. [17] Y.-L. Gao, X.-H. An, J.-M. Liu, A particle swarm optimization algorithm with logarithm decreasing inertia weight and chaos mutation, in: CIS 08 International Conference on Computational Intelligence and Security, vol. 1, 2008, pp. 61–65. [18] R. Malik, T. Rahman, S. Hashim, R. Ngah, New particle swarm optimizer with sigmoid increasing inertia weight, Int. J. Comput. Sci. Secur. 1 (2) (2007) 35–44. [19] Y. Feng, G.F. Teng, A.X. Wang, Y.M. Yao, Chaotic inertia weight in particle swarm optimization, in: ICICIC 07 Proceedings of the Second International Conference on Innovative Computing, Information and Control, IEEE Computer Society, 2007. [20] K. Kentzoglanakis, M. Poole, Particle swarm optimization with an oscillating inertia weight, in: GECCO 09 Proceedings of the 11th Annual Conference on Genetic and Evolutionary Computation, ACM, 2009, pp. 1749–1750. [21] J. Bansal, P. Singh, M. Saraswat, A. Verma, S. Jadon, A. Abraham, Inertia weight strategies in particle swarm optimization, in: 2011 Third World Congress on Nature and Biologically Inspired Computing (NaBIC), IEEE, 2011, pp. 633–640. [22] C.K. Monson, K.D. 
Seppi, Adaptive diversity in PSO, in: Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation, ACM, 2006, pp. 59–66. [23] X. Yang, J. Yuan, J. Yuan, H. Mao, A modified particle swarm optimizer with dynamic adaptation, Appl. Math. Comput. 189 (2007) 1205–1213. [24] B.O. Arani, P. Mirzabeygi, M.S. Panahi, An improved PSO algorithm with a territorial diversity-preserving scheme and enhanced exploration–exploitation balance, Swarm Evol. Comput. 11 (2013) 1–15. [25] S. Sun, J. Li, A two-swarm cooperative particle swarm optimization, Swarm Evol. Comput. 15 (2014) 1–18. [26] R. Kar, D. Mandal, S. Mondal, S.P. Ghoshal, Craziness based particle swarm optimization algorithm for FIR band stop filter design, Swarm Evol. Comput. 7 (2012) 58–64. [27] S.C. Satapathy, A. Naik, Modified teaching-learning-based optimization algorithm for global numerical optimization—a comparative study, Swarm Evol. Comput. 16 (2014) 28–37. [28] R. Rao, V. Savsani, D. Vakharia, Teaching-learning-based optimization: a novel method for constrained mechanical design optimization problems, Comput.-Aided Des. 43 (3) (2011) 303–315. [29] A. Alireza, PSO with adaptive mutation and inertia weight and its application in parameter estimation of dynamic systems, ACTA Autom. Sin. 37 (May (5)) (2011) 541–549. [30] Zhen-su Lu, Zhi-rong Hou, Juan Du, Particle swarm optimization with adaptive mutation, Front. Electr. Electron. Eng. China 1 (1) (2006) 99–104, http://link.springer.com/article/10.1007/s11460-005-0021-9. [31] M. Pant, R. Thangaraj, A. Abraham, Particle swarm optimization using adaptive mutation, in: Proceedings of the 19th International Workshop on Database and Expert Systems Application, DEXA'08, Turin, IEEE, 1–5 September 2008, pp. 519–523. [32] C. Li, S. Yang, I. Korejo, An adaptive mutation operator for particle swarm optimization, in: Proceedings of the 2008 UK Workshop on Computational Intelligence, UKCI'08, Leicester, 10–12 September 2008, pp. 165–170. [33] C. Li, S.
Yang, An adaptive learning particle swarm optimizer for function optimization, in: CEC 09 IEEE Congress on Evolutionary Computation, IEEE, 2009, pp. 381–388. [34] Jun Tang, Xiaojuan Zhao, Particle swarm optimization with adaptive mutation, in: WASE International Conference on Information Engineering, IEEE, 2009, pp. 234–237. [35] A. Ratnaweera, S.K. Halgamuge, H.C. Watson, Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients, IEEE Trans. Evol. Comput. 8 (3) (2004) 240–255. [36] A. Forrester, Efficient Global Aerodynamic Optimisation Using Expensive Computational Fluid Dynamics Simulations, Technical Report, University of Southampton, 2004. [37] P.K. Lehre, X. Yao, On the impact of mutation-selection balance on the runtime of evolutionary algorithms, IEEE Trans. Evol. Comput. 16 (April (2)) (2012) 225–241. [38] I. Montalvo, J. Izquierdo, R. Pérez-García, M. Herrera, Improved performance of PSO with self-adaptive parameters for computing the optimal design of water supply systems, Eng. Appl. Artif. Intell. 23 (5) (2010) 727–735. [39] X. Hu, L. Russell, R.C. Eberhart, Y. Shi, Engineering Optimization with Particle Swarm, Technical Report, Purdue University, West Lafayette, Indiana, USA, 2003. [40] M.S. Khurana, Application of an hybrid optimization approach in the design of long endurance airfoils, in: 26th International Congress of the Aeronautical Sciences, Anchorage, Alaska, USA, ICAS, 2008.


[41] Z. Qin, F. Yu, Z. Shi, Y. Wang, Adaptive inertia weight particle swarm optimization, in: Artificial Intelligence and Soft Computing, Lecture Notes in Computer Science, vol. 4029, Springer, Berlin, Heidelberg, 2006, pp. 450–459. [42] M. Khurana, H. Winarto, A. Sinha, Application of swarm approach and artificial neural networks for airfoil shape optimization, in: 12th AIAA/ISSMO Multidisciplinary Analysis and Optimisation, Victoria, British Columbia, Canada, AIAA, 2008. [43] M. Khurana, H. Winarto, Development and validation of an efficient direct numerical optimisation approach for aerofoil shape design, Aeronaut. J. 114 (1160) (2010) 611–628. [44] S. Helwig, R. Wanka, Particle swarm optimization in high-dimensional bounded search spaces, in: Proceedings of the 2007 IEEE Swarm Intelligence Symposium, IEEE Press, 2007, pp. 198–205. [45] J. Li, B. Ren, C. Wang, A random velocity boundary condition for robust particle swarm optimization, in: Bio-inspired Computational Intelligence and Applications, Springer-Verlag, Berlin, Heidelberg, 2007, pp. 92–99. [46] L. Zhang, F. Yang, A. Elsherbeni, On the use of random variables in particle swarm optimization: a comparative study of Gaussian and uniform distributions, J. Electromagn. Waves Appl. 23 (2009) 711–721. [47] P. Andrews, An investigation into mutation operators for particle swarm optimization, in: IEEE Congress on Evolutionary Computation, Vancouver, BC, Canada, IEEE, 2006, pp. 1044–1051. [48] J. Liang, P. Suganthan, K. Deb, Novel composition of test functions for numerical global optimization, in: Proceedings of the IEEE Swarm Intelligence Symposium, IEEE Press, 2005, pp. 68–75. [49] P. Suganthan, N. Hansen, J. Liang, K. Deb, Y. Chen, A. Auger, S. Tiwari, Problem Definitions and Evaluation Criteria for the CEC 2005 Special Session on Real-Parameter Optimization, Technical Report, Nanyang Technological University, Singapore, 2005.


[50] Y.-T. Juang, S.-L. Tung, H.-C. Chiu, Adaptive fuzzy particle swarm optimization for global optimization of multimodal functions, Inf. Sci. 181 (October (20)) (2011) 4539–4549. [51] Y. Shi, R. Eberhart, A modified particle swarm optimization, in: Proceedings of the IEEE Congress on Evolutionary Computation, IEEE, 1998, pp. 69–73. [52] D.M. Somers, Design and Experimental Results for a Natural-Laminar-Flow Airfoil for General Aviation Applications, Technical Report, NASA Langley Research Center, 1981. [53] M. Drela, XFOIL: an analysis and design system for low Reynolds number airfoils, in: Low Reynolds Number Aerodynamics, Lecture Notes in Engineering, vol. 54, Springer-Verlag, New York, 1989, pp. 1–12. [54] H. Sobieczky, Parametric airfoils and wings, Numer. Fluid Dyn. 68 (1998) 71–88. [55] F.R. Menter, R.B. Langtry, S.R. Likki, Y.B. Suzen, P.G. Huang, S. Volker, A correlation-based transition model using local variables—Part I: model formulation, J. Turbomach. 128 (3) (2006) 413–422. [56] ANSYS FLUENT, Academic Research, Release 12.0, ANSYS, Inc., 2009. [57] MATLAB, Help File: Optimization Toolbox User's Guide, The MathWorks Inc., 2005. [58] P. Cook, M. McDonald, M. Firmin, Aerofoil RAE 2822—Pressure Distributions and Boundary Layer and Wake Measurements, AGARD Advisory Report 138, Royal Aircraft Establishment, Farnborough, Hants, United Kingdom, 1979. [59] X. Han, D.W. Zingg, An adaptive geometry parametrization for aerodynamic shape optimization, Optim. Eng. 15 (March (1)) (2014) 69–91. [60] D. Zingg, M. Nemec, T. Pulliam, A comparative evaluation of genetic and gradient-based algorithms applied to aerodynamic optimization, Eur. J. Comput. Mech. 17 (May (1–2)) (2012) 103–126.