Massive Parallelization as Principal Technology for Constrained Optimization of Aerodynamic Shapes


Parallel Computational Fluid Dynamics - Multidisciplinary Applications, G. Winter, A. Ecer, J. Periaux, N. Satofuka and P. Fox (Editors), © 2005 Elsevier B.V. All rights reserved.






S. Peigin^a and B. Epstein^b

^a Israel Aircraft Industries, Israel
^b The Academic College of Tel-Aviv Yaffo, Israel

Massive parallelization is a crucial element of any successful optimization of aerodynamic shapes in an engineering environment. This technology was implemented in the framework of a new efficient optimization tool (the code OPTIMAS - OPTIMization of Aerodynamic Shapes). The optimization method employs Genetic Algorithms (GAs) in combination with a Reduced-Order Models (ROM) method, based on linked local data bases obtained by full Navier-Stokes computations. The algorithm is built upon a multilevel embedded parallelization strategy, which efficiently makes use of the computational power supplied by massively parallel processors (MPP). Applications (implemented on a 456-processor distributed memory cluster) include various 3D optimizations in the presence of nonlinear constraints. The results demonstrate that the approach combines high accuracy of optimization with high parallel efficiency and thus allows application of the method to practical aerodynamic design in the aircraft industry.

1. INTRODUCTION

This paper is devoted to the problem of aerodynamic shape optimization via minimization of total drag. First, it should be clarified why the issue is so important; in other words, what is its cost? To answer the question, consider the task of delivering a payload between distant destinations. Based on the Breguet range equation [1], which applies to long-range missions of jet aircraft, the operator would have to reduce the payload (and thus the revenue) by 7.6% to recover a 1.0% increase in drag. Since most airlines operate on small margins, such a service would most likely no longer be a profit-generating venture. This illustrates that a 1% delta in total drag is a significant change. Specifically, the air transportation market is estimated at 500 million passengers per year, which amounts to 200-250 billion USD.
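The arithmetic behind these figures can be checked directly. The 7.6% payload sensitivity is the Breguet-based figure quoted from [1], and the market size is the estimate above:

```python
# Back-of-the-envelope check of the economic argument: if recovering a
# 1% drag increase costs 7.6% of the payload (the Breguet-based figure
# quoted from [1]), then on a 200-250 billion USD annual market a 1%
# drag gain is worth roughly 15-19 billion USD per year.

payload_penalty_per_percent_drag = 0.076   # from the Breguet range equation [1]
market_usd = (200e9, 250e9)                # annual air-transport market estimate

savings = tuple(round(m * payload_penalty_per_percent_drag / 1e9, 1)
                for m in market_usd)       # in billions of USD per year
```

This reproduces the 15-19 billion USD range stated in the text.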
Thus, a gain of 1% in total drag is annually equivalent to 15-19 billion USD. In principle, this was of course true even thirty years ago. Why does the problem remain one of the "hottest" to date? The reason is that it is a challenging integrated problem. There exist at least four partial problems which must be resolved in order to make progress in solving the overall problem of aerodynamic shape optimization.

First, the problem of global geometrical representation of aerodynamic shapes still remains open. Second, the construction of a reliable CFD solver suitable for optimization in terms of accuracy and robustness is also not simple. Third, the optimization necessitates high-dimensional search spaces, which makes the optimal search highly non-trivial, especially in the presence of non-linear constraints. Finally, in view of the huge computational volume needed for optimization, stringent requirements must be placed upon the computational efficiency of the whole method. High accuracy and fast feedback (standard requirements of any computational algorithm) are essentially contradictory. This is especially true in the case of aerodynamic optimization, where the determination of the objective function requires the solution of the boundary-value problem for the gas-dynamic equations, a complex system of nonlinear partial differential equations. Such a problem is highly non-trivial and time-consuming even at the present time.

The first optimization method in aerodynamics was that of Lighthill [2], who proposed to employ conformal mapping (a highly simplified fluid dynamics model) for the design of two-dimensional airfoils. As more powerful computers and more sophisticated numerical methods were developed, more accurate gas dynamics models were employed as drivers of the optimization process. Among others, the following simplified mathematical models can be mentioned: panel methods, the full potential equation, and the Euler equations (sometimes coupled with boundary layer equations). A large body of literature on the subject is based on the use of such approximate CFD drivers [3]-[4]. For a number of model problems this class of methods provides reasonable accuracy. At the same time, industrial applications require more accurate estimation of sensitive aerodynamic characteristics such as drag and pitching moment.
This may only be achieved through full Navier-Stokes computations. An additional disadvantage of optimization techniques based on incomplete gas-dynamics modelling is their insufficient robustness, due to the limited applicability of such models in terms of flight conditions and geometrical diversity. For example, consider the case of flow over a convex-concave geometry, typical of supercritical airfoils. In the framework of coupled Euler/boundary layer equations, such a simulation fails as soon as a reverse flow region appears, since the solution of the boundary layer equations does not exist in the case of strong adverse pressure gradients. Thus, in order to ensure robustness and to essentially extend the applicability range of aerodynamic optimization, it is crucial to use full Navier-Stokes computations as the driver of the optimization.

In principle, the state-of-the-art of CFD technology allows accurate full Navier-Stokes computations in analysis mode. Nevertheless, the use of full Navier-Stokes solutions for optimization purposes is highly conjectural due to the unacceptable level of computation time. The reason is that the Navier-Stokes operator is highly non-linear and possesses a mixed discrete-continuous spectrum. Numerically, this leads to the requirement to accurately resolve the different linear scales present in the solution. This makes the numerical problem computationally intensive, since huge numerical grids in combination with the non-linearity of the discrete equations result in slow convergence. For example, even in the case of two-dimensional flow, a sample run on sufficiently resolved grids takes at least 15-20 min of CPU time on a currently available serial processor such as the HP NetServer LP1000R. For a more complex three-dimensional wing-body configuration, a similar computation on a single-processor SGI 2000 machine requires a number of days. Since the number of heavy CFD runs needed for optimization may run into the thousands, the corresponding computational volume becomes too large and clearly exceeds an acceptable level. The presence of constraints (which are inherent to any practical optimization problem) further aggravates the situation. Thus the improvement of computational efficiency is vital for the success of optimization algorithms in an engineering environment.

Without going into detail concerning other methods of improving the computational efficiency, we simply note that parallelization is a general methodology for speeding up algorithm performance. Due to the huge computational volume needed for aerodynamic optimization, massive parallelization is required, where the number of processors is measured in hundreds. However, to gain the most benefit from extensive parallelization, a parallel implementation should be highly scalable in order to ensure a dramatic reduction of the overall computation time. It must be underlined that a high parallel efficiency is not easy to attain on a large number of processors. To give only one example, the peak parallel efficiency of the popular commercial CFD code FLUENT for a large class of problems is equal to 69% for 16 processors, while already for 32 processors the parallel efficiency reduces to only 55% [5]. To achieve this goal we propose to use a multilevel parallelization strategy which efficiently makes use of the computational power supplied by multiprocessor systems. This includes parallelization of the multiblock full Navier-Stokes solver, parallel CFD evaluation of the objective function on multiple geometries, parallelization of the optimal search algorithm and, finally, parallel evaluation of the optimal search on multiple search domains.
An additional parallel level handles automatic grid generation for multiple geometries. The parallelization approach was applied to an optimization method which employs Genetic Algorithms (GAs) (optimization methods based on coupling deterministic and probabilistic strategies in search of the optimum) as an optimization tool. This optimization technique is used in combination with a Reduced-Order Models (ROM) method based on linked local data bases obtained by full Navier-Stokes computations. The method was applied to the problem of 3D aerodynamic optimization with nonlinear constraints. The results demonstrated that the algorithm combines high computational efficiency with high accuracy of optimization. A significant computational time-saving (based on deep embedded parallelization) allowed the use of the method in a demanding engineering environment.

2. THE OPTIMIZATION ALGORITHM

The main goal of aerodynamic shape optimization is to determine a configuration which minimizes the cost function (typically the total drag coefficient CD of the shape) subject to various constraints imposed on the sought solution. There are two classes of constraints: aerodynamic constraints (such as a prescribed constant total lift coefficient CL and pitching moment CM) and geometrical constraints on the shape (such as the relative thickness of wing sections (t/c)_i, the leading edge radii (R_L)_i and the trailing edge angles). As a gas-dynamic model for calculating the CD and CL values, the full Navier-Stokes equations are used. Numerical solution of the full Navier-Stokes equations is based on the code NES [6]-[7]. The important advantage of the solver NES as a driver of the optimization process is its ability to supply reliable and sufficiently accurate results already on relatively coarse meshes, and thus to essentially reduce the volume of CFD computations.

The optimization technique employs Genetic Algorithms (GAs) in combination with a Reduced-Order Models (ROM) method based on local data bases obtained by full Navier-Stokes computations. The main features of the present multipoint optimization method include a new strategy for efficient handling of nonlinear constraints in the framework of GAs, and scanning of the optimization search space by a combination of full Navier-Stokes computations with the ROM method.

Unfortunately, in their basic form, Genetic Algorithms are not capable of handling constraint functions limiting the set of feasible solutions. Therefore, additional methods are needed to keep the solutions in the feasible region. To resolve this problem, a new constraint-handling approach is employed. The main idea behind the approach is to use search paths which pass through both feasible and infeasible points (instead of the traditional approach where only feasible points may be included in a path). Since in our case the topology of the feasible and infeasible sub-domains is rather complex, the realization of such a strategy should significantly improve the accuracy and efficiency of optimization. The idea is that the information from infeasible sub-domains can be very important for the optimization, as a path to the optimal point via infeasible points can be essentially shorter (or even the only possible one, in the case of a non-simply-connected optimization space).
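As an illustration of the idea (a generic penalty-based sketch, not the actual OPTIMAS constraint-handling scheme), a GA can retain infeasible candidates with a penalized fitness, so that search paths may traverse infeasible sub-domains on the way to the optimum:

```python
import random

# Toy illustration: minimize f(x) subject to x >= 1 (optimum at x = 2).
# Infeasible candidates are kept in the population with a penalized
# fitness instead of being discarded, so the search may pass through
# the infeasible sub-domain. All names and numbers are illustrative.

def f(x):                       # cost function to minimize
    return (x - 2.0) ** 2

def violation(x):               # amount by which x >= 1 is violated
    return max(0.0, 1.0 - x)

def penalized_cost(x, weight=10.0):
    # infeasible points stay comparable, just handicapped
    return f(x) + weight * violation(x)

def evolve(pop, generations=60, sigma=0.3, seed=1):
    rng = random.Random(seed)
    for _ in range(generations):
        # select the better half by penalized cost (feasible or not) ...
        pop = sorted(pop, key=penalized_cost)[: len(pop) // 2]
        # ... and refill the population with Gaussian mutations
        pop = pop + [x + rng.gauss(0.0, sigma) for x in pop]
    return min(pop, key=penalized_cost)

rng0 = random.Random(0)
best = evolve([rng0.uniform(-5.0, 5.0) for _ in range(20)])
```

Individuals starting at x < 1 are infeasible but are not removed; the penalty merely biases the search toward the feasible optimum.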
The major weakness of GAs lies in their poor computational efficiency, which prevents their practical use where the evaluation of the cost function is expensive, as happens in the framework of the full Navier-Stokes model even in the two-dimensional case. For example, an algorithm with the population size M = 100 requires (for the case of 200 generations) at least 20000 evaluations of the cost function (CFD solutions). A fast full Navier-Stokes evaluation over a 3D wing takes at least 10-15 minutes of CPU time. That means that one run of such an algorithm takes about 3500-4000 hours, which is practically unacceptable.

To overcome this difficulty we use the Reduced-Order Models approach in the form of the Local Approximation Method (LAM). On every optimization step, the solution functionals which determine the cost function (such as the lift and drag coefficients in the case of drag minimization) are approximated by a local data base. The data base is obtained by solving the full Navier-Stokes equations in a discrete neighbourhood of a basic point positioned in the search space. The general sketch of the optimization algorithm can be presented by the following pseudo-code:

opt_step := 0
Determine_Initial_Basic_Point    /starting basic point - initial profile/
while not converged do
    Calc_Local_Data_Base         /CFD computations/
    Search_Optim_Candidates      /hybrid GA-ROM search/
    Verification_Optim_Cand      /CFD computations for the optimal points/
    Choose_New_Basic_Point       /choose a new basic point/
    opt_step := opt_step + 1
enddo
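With a cheap analytic stand-in for the Navier-Stokes solver, the loop above can be sketched in Python (a one-parameter toy; names and sampling are illustrative, not the OPTIMAS implementation):

```python
# Skeleton of the GA + local-database (ROM/LAM) loop from the pseudo-code.
# cfd_solve stands in for a full Navier-Stokes run; here it is a cheap
# analytic function so the sketch is runnable.

def cfd_solve(x):
    # stand-in "drag" of a design parameterized by a single number x
    return (x - 1.5) ** 2 + 0.1

def build_local_database(basic, step=0.5):
    # sample the full model in a discrete neighbourhood of the basic point
    pts = [basic - step, basic, basic + step]
    return {p: cfd_solve(p) for p in pts}

def rom_predict(db, x):
    # quadratic fit through the three database points (reduced-order model)
    (x0, f0), (x1, f1), (x2, f2) = sorted(db.items())
    return (f0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
            + f1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
            + f2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))

def optimize(basic=0.0, steps=5):
    for _ in range(steps):
        db = build_local_database(basic)             # Calc_Local_Data_Base
        cands = [basic + 0.01 * k for k in range(-60, 61)]
        best = min(cands, key=lambda x: rom_predict(db, x))  # GA-ROM search
        if cfd_solve(best) < cfd_solve(basic):       # Verification_Optim_Cand
            basic = best                             # Choose_New_Basic_Point
        else:
            break
    return basic
```

Here a brute-force scan replaces the GA for brevity; the structure (sample, approximate, search the surrogate, verify with the full model, move the basic point) follows the pseudo-code.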

3. MULTILEVEL PARALLELIZATION STRATEGY

In order to make the optimization practically feasible, it is necessary to significantly improve the computational efficiency of the algorithm. In fact, such an improvement is vital for the success of the method in an engineering environment. This was partially achieved by decreasing the total number of heavy CFD runs in the framework of the ROM-LAM approach, which reduced the computational volume by at least 1-2 orders of magnitude. However, the total number of CFD runs still remained very high, which is hardly acceptable even for research purposes. Extensive parallelization is particularly advantageous for achieving this aim, since a highly scalable parallel implementation allows a dramatic reduction of the overall computation time. To reach this goal we propose to employ an embedded multilevel parallelization strategy which includes:

• Level 1 - Parallelization of the full Navier-Stokes solver
• Level 2 - Parallel CFD scanning of the search space
• Level 3 - Parallelization of the GA optimization process
• Level 4 - Parallel optimal search on multiple search domains
• Level 5 - Parallel grid generation

The first two levels are intended to improve the computational efficiency of the CFD part of the whole algorithm, while the next two levels are needed in order to reach the same goal for the optimization part of the method. The first-level parallelization approach (for a detailed description see [7]) is based on the geometrical decomposition principle. The processors are divided into two groups: one master-processor and Ns slave-processors. The overall computational domain being block-structured, groups of blocks are mapped to the slave-processors.
The main goal of each slave is to carry out the calculations associated with the solution of the Navier-Stokes equations at the cell points located in the blocks mapped to that slave-processor. The aim of the master-processor is to form output files containing global information for the whole configuration by receiving the necessary data from the slave-processors. The amount of information to be transferred among slave-processors is proportional to the width of the computational template, which is equal to 1 for most of the computational work in the CFD driver. Based on the technical characteristics of the hardware (processor power for arithmetical operations vs. network capacity), and on the amount of computational work induced by the numerical algorithm, we can expect a rather good parallel efficiency on this level of parallelization, provided the average number of grid points per processor exceeds 60,000.
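A minimal sketch of this block-to-processor mapping (block sizes and function names are made up for illustration; the actual NES decomposition is described in [7]) assigns each block to the currently least-loaded slave and then checks the per-processor load against the 60,000-point figure quoted above:

```python
# Greedy mapping of grid blocks to slave processors, balancing the
# number of grid points per processor. Illustrative sketch only.

def map_blocks(block_sizes, n_slaves):
    """Assign each block (largest first) to the least-loaded slave."""
    loads = [0] * n_slaves
    assignment = [[] for _ in range(n_slaves)]
    for b, size in sorted(enumerate(block_sizes), key=lambda t: -t[1]):
        s = loads.index(min(loads))      # least-loaded slave so far
        assignment[s].append(b)
        loads[s] += size
    return assignment, loads

# hypothetical block sizes (grid points per block)
block_sizes = [90_000, 80_000, 70_000, 65_000, 130_000, 75_000]
assignment, loads = map_blocks(block_sizes, 4)
# good first-level efficiency is expected while min(loads) stays
# above the ~60,000 points-per-processor threshold
```
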

The important advantage of the approach is that the proposed structure of message-passing ensures a very high scalability of the algorithm from a network point of view, because, on average, the communication work per processor does not increase as the number of processors increases. Finally, a large body of computational data demonstrated that the described parallelization of the multiblock full Navier-Stokes solver enables one to achieve a high level of parallel efficiency while retaining high accuracy of calculations, and thus to reduce significantly the execution time for large-scale CFD computations [7].

The first level of parallelization is embedded with the second level, which performs parallel scanning of the search space and thus provides parallel CFD estimation of the fitness function on multiple geometries. It is applied when executing the steps Calc_Local_Data_Base and Verification_Optim_Cand in the pseudo-code of the optimization algorithm. It must be emphasized that these two steps cannot be executed concurrently, since the input data needed for the step Calc_Local_Data_Base may be assessed only upon the full completion of the preceding step Verification_Optim_Cand. On this level of parallelization (see fig. 1) all the processors are divided into three groups: one main-processor, Nm master-processors and Nm × Ns slave-processors (where Nm is equal to the number of geometries). The aim of the main-processor is to distribute the set of geometries to be evaluated among the master-processors, to receive the results of the CFD computations from these master-processors and, finally, to create the local CFD data bases. The goal of each master-processor is to organize the first-level parallelization of the Navier-Stokes computation corresponding to its own point in the search space (its own airfoil).
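A rough sketch of this main/masters layout, with a thread pool standing in for the PVM master-processors and a trivial stand-in for the level-1 Navier-Stokes evaluation (all names and numbers are illustrative, not the OPTIMAS/PVM implementation):

```python
# Level-2 sketch: the main process farms out geometries, and each
# logical "master" evaluates its own geometry independently.
from concurrent.futures import ThreadPoolExecutor

def evaluate_geometry(geom_id):
    # placeholder for: generate the grid, run the level-1 parallel
    # Navier-Stokes solve, and return the aerodynamic coefficients
    return geom_id, {"CL": 0.5, "CD": 0.0300 + 0.0001 * geom_id}

def build_local_database(geometry_ids, n_masters=4):
    # one logical master per geometry, up to n_masters at a time
    with ThreadPoolExecutor(max_workers=n_masters) as pool:
        results = dict(pool.map(evaluate_geometry, geometry_ids))
    return results

db = build_local_database(range(8))
```

Because the masters exchange almost no data with the main process, this layer parallelizes nearly perfectly, which is the reason for the near-100% second-level efficiency reported in the text.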
Since the volume of data transfer between the main-processor and the master-processors is negligible, and the master-processors execute their jobs independently, the parallel efficiency of the second-level parallelization is very close to 100%.

The third level parallelizes the GA optimization work unit. On this level of parallelization, the processors are divided into one master-processor and Ps slave-processors. The goal of the master-processor is to distribute the initial random populations among the slaves and to collect the results of the optimal search (Ps is the number of initial random populations).

The third level of parallelization is embedded with the fourth level, which performs parallel optimal search on multiple search domains. It is applied when executing the step Search_Optim_Candidates in the pseudo-code of the optimization algorithm. On this level of parallelization all the processors are divided into three groups: one main-processor, Pm master-processors and Pm × Ps slave-processors (where Pm is equal to the number of domains). The interconnection of these parallelization levels is similar to that depicted in fig. 1. The aim of the main-processor is to distribute the set of search domains among the master-processors, to receive the results of the GA optimization from these master-processors and, finally, to create the set of potential optimal geometries subject to CFD verification. The goal of each master-processor is to organize the third parallelization level of the GA search corresponding to its own search domain. Since the volume of data transfer between the main-processor and the master-processors is negligible, and the master-processors execute their jobs independently, the parallel efficiency of the fourth parallelization level is very close to 100%.

Figure 1. Multilevel CFD parallelization. Interconnection of the first (main/masters) and second (master/slaves) parallelization levels.

The fifth parallelization level handles grid generation. On this level, one master-processor and Gs slave-processors are employed (Gs is the number of evaluated geometries). The goal of the master-processor is to distribute the geometries among the slave-processors, while each slave-processor creates a grid corresponding to its own geometry.

From the sketch of the pseudo-code, we notice that the optimization algorithm includes a number of markedly different sub-algorithms; in particular, the sub-algorithms handling the CFD computations and the genetic optimization search. As usually happens in practice, such sub-algorithms are not created from scratch but are based on already existing computational core software (which is much less expensive). Moreover, the basic core codes may be written in different programming languages. For example, in our case, the CFD sub-algorithms (employing the core code NES [6]-[7]) were written in the C language, while the GA sub-algorithms employ FORTRAN-77. In order to resolve the difficulties due to this heterogeneity and to ensure the correct interaction between the different parts of the pseudo-code, we drive the overall optimization algorithm by means of a control code which guides the algorithmic flow. The control code was written in C, which facilitates the interconnection of computational and system software and thus increases the ability to manage different executable codes and system calls. Note that this modular approach ensures a flexible upgrade of the objects included in the algorithm.

An additional important issue is the fault-tolerance of the whole computational procedure. It is mainly related to the fact that the optimization process necessitates massive CFD runs, which statistically increases the probability that one of the CFD processes fails. The failure probability is further increased if the computations are performed on multiprocessors. To overcome this, the control code monitors all the CFD runs and, in the case of a system failure, excludes the corresponding point from the global data base and restores the continuity of the optimization stream. On the whole, due to the robustness of the algorithm, this happens extremely seldom.

The multilevel parallelization strategy, based on the PVM software package, was implemented on a cluster of MIMD multiprocessors consisting of 228 HP NetServer LP1000R and IBM Blade nodes. Each node has 2 processors, 2GB RAM and a full-duplex 100Mbps ETHERNET interface. In total the cluster contained 456 processors with 456GB RAM.

4. ANALYSIS OF RESULTS

We present the results of drag minimization of the ONERA M6 wing at Re = 11.72 × 10^6 and different values of the design CL and Mach number, representing a wide range of flight conditions. The optimization at the design point CL = 0.265, M = 0.84 resulted in the destruction of a strong double shock present in the original pressure distribution. This is illustrated by figs. 2-3, where the pressure distribution on the upper surface of the original wing is compared with that of the optimized one. The optimized wing exhibits a shockless behaviour along the whole wing span, which led to a significant reduction in the total drag: from 168 drag counts to 128 counts. Note that the ideal induced drag for the ONERA M6 aspect ratio at CL = 0.265 is equal to 59 counts, and the minimum drag value is equal to about 71 counts for both wings. Thus it may be concluded that the wave drag contribution to the total drag of the optimized wing is of a minor nature, which is indicative of successful optimization.
The results of optimization at a demanding design point (CL = 0.5, M = 0.87) are given in figs. 4-7, where the pressure distribution on the upper surface of the original wing and the chordwise pressure distributions at a midsection of the wing are shown. At these flight conditions, the original ONERA M6 geometry generates a very strong shock which results in a high total drag value (CD = 544 counts). The optimization essentially decreased the total drag, down to CD = 300 counts. The comparison of the original ONERA M6 root and tip shapes with those of wings optimized for different Nws is given in figs. 8-9. The analysis of the results shows that, in the middle part of the wing, the optimal shapes tend to possess low curvature immediately outside the leading edge region. The off-design behaviour of the optimized wing is shown in fig. 11, where the drag rise curves of the wings optimized at CL = 0.5 for different design free-stream Mach numbers are compared to that of the ONERA M6 wing. The optimization significantly shifted the drag divergence point towards higher Mach numbers and radically extended the low-drag zone. In the case of 3D optimization there exists an additional class of constraints to be taken

into account: aerodynamic constraints such as a constraint on the pitching moment CM. This class of constraints is difficult to handle, since the position of a test point in the search space with respect to the constraint boundary is not known in advance, which results in a computationally heavy CFD run. The present approach is also able to efficiently handle this class of constraints. In fig. 10, drag polars at M = 0.87 for different CM are shown. The unconstrained optimum wing possesses CM = -0.15 and CD = 300.0 counts at the design point M = 0.87, CL = 0.5. A constrained optimization with CM = -0.10 achieved a similar drag reduction (CD = 300.5 counts), while for CM = -0.075 the optimized wing possesses a slightly higher CD = 305.0 counts at the same design point.

The present optimization tool is driven by full Navier-Stokes computations. At the same time, many well-known applications are based on the use of simpler gas-dynamic models, such as the Euler equations. In this connection it was interesting to compare the results of optimization achieved by a Navier-Stokes driver vs. an Euler one. The results of the comparison at the design point M = 0.87, CL = 0.5 are given in fig. 12, where drag polars at the design Mach value are shown. Note that both polars are computed by means of full Navier-Stokes computations. It is clearly seen that the Navier-Stokes optimization achieved better results. At the design point, the optimization driven by Navier-Stokes computations yielded a total drag value of 300 counts compared to 312 counts in the case of the Euler CFD driver (about 4% more). This advantage is preserved over the whole range of CL > 0.45, which indicates its non-pointwise nature.

Let us estimate the computational efficiency of the present optimization algorithm by the example of a 2D airfoil. The most computationally heavy part of the whole optimization algorithm is related to the CFD computations.
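Using the per-step CFD counts and NES timings reported below (36 + 27 runs per step, 10 steps, 53 min per serial run, 7 min on 8 processors), the cost bookkeeping is straightforward:

```python
# Cost bookkeeping for the 2D example, using the figures quoted in the
# text: 36 + 27 CFD runs per optimization step, 10 steps, 53 min per
# serial NES run, 7 min per run on 8 processors.

def parallel_efficiency(t_serial, n_proc, t_parallel):
    # Ep = T1 / (Np * T_Np)
    return t_serial / (n_proc * t_parallel)

runs_per_step = 36 + 27          # Calc_Local_Data_Base + Verification_Optim_Cand
total_runs = 10 * runs_per_step  # 630 CFD computations in total

serial_hours = total_runs * 53 / 60      # about 556.5 h, i.e. roughly 3 weeks
eight_proc_hours = total_runs * 7 / 60   # about 73.5 h, i.e. roughly 3 days
ep_level1 = parallel_efficiency(53.0, 8, 7.0)  # about 0.95
```

These totals reproduce the "about 3 weeks" serial and "about 3 days" 8-processor figures, and the roughly 95% first-level efficiency quoted below.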
The number of required CFD runs is roughly proportional to the dimension of the search space. For a typical case considered in this work, each execution of the pseudo-code step Calc_Local_Data_Base requires 36 CFD runs, while the step Verification_Optim_Cand involves 27 such runs (in the case of 9 search domains). For the whole process (a typical 10-step optimization), this results in 630 CFD computations. In the absence of parallelization, the CPU time needed for one CFD run by the NES code is equal to 53 min on the current hardware (HP NetServer LP1000R). Thus, for the considered optimization case, the steps Calc_Local_Data_Base and Verification_Optim_Cand require 318.1 and 238.5 hours, respectively (a total of about 3 weeks), which is practically unacceptable. On 8 processors, the first level of parallelization reduces the CPU time needed for a single CFD run to 7 min, which corresponds to about 95% parallel efficiency. For the considered single optimization, the CPU time corresponding to the above steps is equal to 42 and 31.5 hours, respectively (a total of 3 days), which is still too time-consuming in an engineering environment. Note that an essential reduction of the CPU time by increasing the number of processors on the first level of parallelization is hampered by the deterioration of the parallel efficiency of the NES code, since the computational volume per processor becomes too small [7].

Let us present the data related to the parallel efficiency of the algorithm which embeds the first level of parallelization with the second one. Since these two steps of the algorithm cannot be executed concurrently, we estimate their parallel efficiency separately. Additionally, it should be noted that these steps contain different volumes of computational work; thus the optimal number of processors which ensures the most efficient regime of parallelization is also different for each step. The overall CPU time needed for the execution of the steps Calc_Local_Data_Base and Verification_Optim_Cand is given in Table 1. The results show a sustained high level of parallel efficiency (above 90%) in both cases. A further increase in the number of processors Np is inefficient due to an insufficient volume of computational work. Finally, the total CPU time per single optimization on the full cluster is equal to 5.8 hours, which allowed the optimization code to be employed for practical engineering purposes.

Table 1. Parallel efficiency Ep of the first level of parallelization embedded with the second one.

Step                       Np    CPU (hour)   Ep
Calc_Local_Data_Base        18   19.6         0.902
Calc_Local_Data_Base        36    9.70        0.911
Calc_Local_Data_Base        72    4.80        0.920
Calc_Local_Data_Base       108    3.17        0.930
Calc_Local_Data_Base       144    2.35        0.940
Verification_Optim_Cand     18   14.40        0.923
Verification_Optim_Cand     36    7.10        0.933
Verification_Optim_Cand     54    4.71        0.941
Verification_Optim_Cand     72    3.50        0.946

REFERENCES

1. Jameson, A., Martinelli, L., and Vassberg, J., "Using Computational Fluid Dynamics for Aerodynamics - a Critical Assessment", Proceedings of ICAS 2002, Toronto, 2002.
2. Lighthill, M. J., "A New Method of Two-dimensional Aerodynamic Design", ARC R&M 2112.
3. Optimum Design Methods for Aerodynamics, AGARD-R-803, Nov. 1994.
4. Hajela, P., "Nongradient Methods in Multidisciplinary Design Optimization - Status and Potential", Journal of Aircraft, Vol. 36, No. 1, 1999, pp. 255-265.
5. Fluent Inc. WWW Home Page, http://www.fluent.com/software/fluent/fl5bench/problems/fl51c.htm
6. Epstein, B., Rubin, T., and Seror, S., "An Accurate Multiblock ENO Driven Navier-Stokes Solver for Complex Aerodynamic Configurations", AIAA Journal, Vol. 41, 2003, pp. 582-594.
7. Peigin, S., Epstein, B., Rubin, T., and Seror, S., "Parallel Large Scale High Accuracy Navier-Stokes Computations on Distributed Memory Clusters", The Journal of Supercomputing, Vol. 27, 2004, pp. 49-68.

Figure 2. Original ONERA M6 wing. Pressure distribution on the upper surface of the wing at M = 0.84, CL = 0.265.
Figure 3. One-point optimization. Pressure distribution on the upper surface of the wing at M = 0.84, CL = 0.265.
Figure 4. Original ONERA M6 wing. Pressure distribution on the upper surface of the wing at M = 0.87, CL = 0.5.
Figure 5. One-point optimization. Pressure distribution on the upper surface of the wing at M = 0.87, CL = 0.5.
Figure 6. Original ONERA M6 wing. Chordwise pressure distribution at the midsection of the wing at M = 0.87, CL = 0.5.
Figure 7. One-point optimization. Chordwise pressure distribution at the midsection of the wing at M = 0.87, CL = 0.5.

Figure 8. Shape of optimized wings at root section for different Nws.
Figure 9. Shape of optimized wings at tip section for different Nws.
Figure 10. Lift/drag curves at M = 0.87. ONERA M6 wing vs. one-point optimizations with different values of the constraint on the pitching moment. Case_4: no constraint on CM; Case_5, Case_9 and Case_10: CM > -0.1; Case_6: CM > -0.075.
Figure 11. Mach drag divergence of the optimized wings versus the original ONERA M6 wing. Case_2, Case_3 and Case_4 correspond to design M = 0.84, 0.86 and 0.87 respectively.
Figure 12. Drag polars at M = 0.87. Navier-Stokes driven optimization vs. Euler driven optimization.