Review of uncertainty-based multidisciplinary design optimization methods for aerospace vehicles




Progress in Aerospace Sciences 47 (2011) 450–479



Wen Yao a,*, Xiaoqian Chen a, Wencai Luo a, Michel van Tooren b, Jian Guo b

a College of Aerospace and Materials Engineering, National University of Defense Technology, Changsha 410073, China
b Faculty of Aerospace Engineering, Delft University of Technology, Delft 2628 KS, The Netherlands

Available online 19 July 2011

Abstract

This paper presents a comprehensive review of Uncertainty-Based Multidisciplinary Design Optimization (UMDO) theory and the state of the art in UMDO methods for aerospace vehicles. UMDO has been widely acknowledged as an advanced methodology to address the competing objectives of aerospace vehicle design, such as performance, cost, reliability, and robustness. However, the major challenges of UMDO, namely the computational and organizational complexity caused by time-consuming disciplinary analysis models and by the UMDO algorithms themselves, still greatly hamper its application in aerospace engineering. In recent years there has been a surge of research in this field aimed at solving these problems. The purpose of this paper is to review these existing approaches systematically, highlight research challenges and opportunities, and help guide future efforts. First, the UMDO theory preliminaries are introduced to clarify the basic UMDO concepts and mathematical formulations, and to provide a panoramic view of the general UMDO solving process. Then, following the UMDO solving process, the research progress of each key step is surveyed and discussed separately, specifically uncertainty modeling, uncertainty propagation and analysis, optimization under uncertainty, and the UMDO procedure. Finally, conclusions are given, and future research trends and prospects are discussed.

© 2011 Elsevier Ltd. All rights reserved.

Keywords: Uncertainty-based multidisciplinary design optimization; Uncertainty analysis; Optimization under uncertainty; Reliability-based design optimization; Robust design optimization; UMDO procedure

Contents

1. Introduction
2. UMDO theory preliminaries
   2.1. Basic concepts
      2.1.1. Uncertainty
      2.1.2. Robustness
      2.1.3. Reliability
      2.1.4. Deterministic design optimization
      2.1.5. Robust design optimization
      2.1.6. Reliability-based design optimization
   2.2. General UMDO process
      2.2.1. Uncertain system modeling
      2.2.2. UMDO procedure
3. Uncertainty modeling
   3.1. Uncertainty classification
   3.2. Uncertainty representation and modeling
      3.2.1. Model input uncertainty and model parameter uncertainty
      3.2.2. Model form uncertainty
      3.2.3. Model error
      3.2.4. Other uncertainties related to UMDO
   3.3. Uncertainty sensitivity analysis

* Corresponding author. Tel.: +86 731 84573188; fax: +86 731 84512301. E-mail address: [email protected] (W. Yao).

0376-0421/$ - see front matter © 2011 Elsevier Ltd. All rights reserved. doi:10.1016/j.paerosci.2011.05.001


4. Uncertainty propagation and analysis
   4.1. Monte Carlo simulation
   4.2. Taylor series approximation
   4.3. Reliability analysis
   4.4. Decomposition based uncertainty analysis
5. Optimization under uncertainty
   5.1. Reliability-based design optimization
      5.1.1. Worst case analysis method
      5.1.2. Corner space evaluation
      5.1.3. Variation patterns formulation
   5.2. Robust design optimization
6. UMDO procedure
   6.1. Single level procedure
   6.2. Decomposition and coordination based procedure
      6.2.1. CO-based UMDO procedure
      6.2.2. CSSO-based UMDO procedure
      6.2.3. ATC-based UMDO procedure
7. Conclusions
Acknowledgment
References

1. Introduction

With progress in science and technology, demands on aerospace vehicles are ever increasing: better performance, higher reliability and robustness, and lower cost and risk. To address these competing objectives effectively, designers generally adopt design and optimization methods that consider all relevant aspects of the vehicle lifecycle, from design and manufacture through operation to final disposal at the end of life. Throughout the lifecycle there inherently exists a vast number of uncertainties, arising from the aerospace vehicle system itself as well as from the environmental and operational conditions it is exposed to. Taking structural design as an example, uncertainties include prediction errors induced by model assumptions and simplifications, performance uncertainty arising from material properties and manufacturing tolerances, and uncertainty in the load conditions applied to the structure during operation. These uncertainties may cause system performance to change or fluctuate, or even deviate severely and result in unanticipated function faults and mission failure. It is therefore important to take uncertainties into account from the very beginning of aerospace vehicle system design.

In traditional design, uncertainty is accounted for by reformulating the design constraints with empirical or other predefined factors instead of the ideal ones, following the margin-based design philosophy, so as to maintain system redundancy in the face of uncertainties. For example, the ideal stress constraint is often rewritten by multiplying the actual stress by a safety factor (larger than one), so that all potential uncertainties are represented in a lump [1]. The safety factor is defined mainly from past experience and prior knowledge of the system, and to date there is no straightforward universal method to define it appropriately.
With larger safety factors, design and optimization tend to reach solutions that are too conservative and over-redundant, incurring weight and cost penalties, whereas with smaller safety factors the reliability of the system cannot be guaranteed. Furthermore, past experience in defining safety factors for existing structural forms may be inappropriate or obsolete for new structures and thus lead to potential danger. Hence these traditional methods of dealing with uncertainties implicitly and roughly are far from sufficient to improve system performance, robustness, and reliability economically. There is therefore strong motivation to develop more advanced and accurate analytical approaches, grounded in uncertainty-related

mathematical theory, to tackle uncertainties systematically and rationally during design. These new approaches are usually described with terms such as Uncertainty-Based Design (UBD) [2] and Non-Deterministic Approaches (NDA) [3], and aim at solving the following two issues: (1) improving the robustness of the aerospace vehicle and decreasing its sensitivity to variations, so as to maintain stable performance under uncertainty; (2) enhancing the reliability of the aerospace vehicle and decreasing the chance of function failure under potentially critical conditions, so as to keep the system in a normal state with the required level of likelihood under extreme events.

Corresponding to the two design aims, there are basically two categories of uncertainty-based design methods, namely robust design optimization and reliability-based design optimization, as shown in Fig. 1. Taking random uncertainty as an example, robust design is mainly concerned with events distributed near the mean of the probability density function (small fluctuations around the normal status), whereas reliability-based design optimization is concerned with events distributed in the tails of the probability density function (extreme events). These two non-deterministic approaches can also be formulated as one design problem that seeks improvements in both robustness and reliability.

Uncertainty-based design optimization can be traced back to the 1950s [4,5], and since then there has been a surge of research in this area. Much research has been devoted to design and optimization algorithms under uncertainty [6,7], and successful applications in a wide range of fields have been observed, especially in aerospace and civil engineering, which have stringent regulations on system reliability and robustness [8–11].
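The safety-factor discussion above can be made concrete with a toy numeric sketch (entirely our illustration, with made-up numbers, not from the paper): two designs with the same deterministic safety factor can have very different failure probabilities once the scatter of stress and strength is modeled explicitly.

```python
import math

def failure_probability(mu_stress, sd_stress, mu_strength, sd_strength):
    """P(stress > strength) for independent normally distributed
    stress and strength (a classical stress-strength model)."""
    # Margin M = strength - stress is normal; failure occurs when M < 0.
    mu_m = mu_strength - mu_stress
    sd_m = math.hypot(sd_stress, sd_strength)
    beta = mu_m / sd_m                        # reliability index
    # Phi(-beta) expressed via the complementary error function
    return 0.5 * math.erfc(beta / math.sqrt(2))

# Both cases have the same nominal safety factor (150/100 = 1.5) ...
pf_tight = failure_probability(100.0, 5.0, 150.0, 7.5)    # small scatter
pf_loose = failure_probability(100.0, 20.0, 150.0, 30.0)  # large scatter
# ... yet their failure probabilities differ by orders of magnitude:
# the safety factor alone does not determine the reliability.
print(pf_tight, pf_loose)
```

The second design, with larger scatter, fails far more often despite an identical safety factor, which is exactly the gap that explicit uncertainty-based design is meant to close.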
Research on uncertainty-based design optimization in aerospace engineering has mainly focused on disciplines such as structures [12–14], aerodynamics [15–17], and control [18,19], and the current status and barriers are comprehensively surveyed in [2]. As these disciplines are naturally closely coupled and uncertainty impacts are cross propagated, it is far more desirable to take a holistic approach and solve the multidisciplinary design optimization (MDO) problem, so as to enhance the system design by exploiting the potential synergy of the coupled disciplines. From this perspective, Uncertainty-Based Multidisciplinary Design Optimization (UMDO) was introduced into academia. UMDO is a new trend in MDO [2,20]: it can greatly improve a design by benefiting from the synergistic effects of coupled-discipline collaborative optimization, while also enhancing reliability and robustness. Being closer to realistic


Nomenclature

x = [x1, x2, ..., xnX]  random variable vector with nX dimensions
X        original design variable space
U        standard normal space
p(x)     multivariate joint probability density function of x
pi(xi)   univariate probability density function of xi
g(x) = 0 limit state function
R        reliability
pf       probability of failure
‖·‖      L2 norm or Euclidean norm
Φ(·)     standard normal cumulative distribution function
φ(·)     standard normal probability density function
β        reliability index
yi       output vector of discipline i
yij      coupling state vector output from discipline i and input into discipline j
D        failure domain
Ω        universe of uncertain variables
UMDO     uncertainty-based multidisciplinary design optimization
MDO      multidisciplinary design optimization
RIA      reliability index approach
PMA      performance measure approach
GSE      global sensitivity equation
MPP      most probable point
MCS      Monte Carlo simulation
MDA      multidisciplinary analysis
RBDO     reliability-based design optimization
RDO      robust design optimization

systems engineering by systematically taking uncertainties into consideration in the design phase, UMDO has attracted wide research interest and is now under rapid development. In the NASA white paper addressing the needs and opportunities for uncertainty-based multidisciplinary design of aerospace vehicles [2], the term multidisciplinary is highlighted as one of the key phrases in the uncertainty-based design research of NASA Langley Research Center. On this fertile ground, a rich literature has been published covering extensive topics, including uncertainty classification and quantification, multidisciplinary uncertainty cross propagation and analysis, approximation methods for reducing the computational burden, optimization under uncertainty, and multidisciplinary organization of UMDO problems. Successful applications of UMDO in aerospace engineering have also been reported, which strongly demonstrate the efficacy of this emerging methodology [21–24].

The scope of this paper is to systematically introduce UMDO theory and present a comprehensive review of UMDO methods. Only the fundamental theory and general UMDO approaches for aerospace vehicles are covered; detailed issues related to algorithms and applications for specific disciplines are excluded and referred to [2]. This paper is by no means exhaustive, and we apologize to authors and readers for work that could not be cited.

The rest of the paper is structured as follows. The UMDO theory preliminaries are introduced first: the basic concepts are clarified, and the general solving process of UMDO problems is explained. Following the UMDO process, detailed literature reviews of the key steps are expounded in corresponding sections, covering uncertainty modeling, uncertainty propagation and analysis, optimization under uncertainty, and the UMDO procedure. The paper closes with some concluding remarks,

2. UMDO theory preliminaries

2.1. Basic concepts

For convenience of discussion, we begin with the basic concepts and definitions relevant to UMDO.

2.1.1. Uncertainty

The term uncertainty has different definitions and taxonomies in different research fields. In computational modeling and simulation, uncertainty is regarded as a potential deficiency in any phase or activity of the modeling process that is due to a lack of knowledge [25]. In systems engineering, uncertainties are things that are not known, or known only imprecisely [26]. In some aerospace engineering literature, uncertainty is defined as the incompleteness in knowledge (either in information or context) that causes model-based predictions to differ from reality in a manner described by some distribution function [27]. Another useful functional definition is the information/knowledge gap between what is known and what needs to be known for optimal decisions with minimal risk [22]. From the perspective of systems engineering, which takes the whole lifecycle into account during the design phase, we define uncertainty as follows.

Definition 1. Uncertainty: The incompleteness in knowledge and the inherent variability of the system and its environment.

Fig. 1. Two categories of uncertainty-based design [2]: (a) uncertainty-based design domains and (b) robustness and reliability in terms of probability density function.
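In probabilistic terms, the two regimes in Fig. 1 can be summarized using the Nomenclature symbols (our summary of the standard relations, not an equation reproduced from the paper): the failure probability constrained in reliability-based design is the integral of the joint density over the failure domain D,

$$ p_f = \int_{D} p(\mathbf{x})\,d\mathbf{x}, \qquad R = 1 - p_f, \qquad \beta = \Phi^{-1}(R), $$

while robust design targets the spread of the response around its mean rather than the tail mass.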


2.1.2. Robustness

In general, a system, organism, or design may be said to be robust if it copes well with (sometimes unpredictable) variations in its operating environment with minimal damage, alteration, or loss of functionality. In the IEEE guideline for nuclear power generating stations, robustness refers to a statistical result that is not significantly affected by small changes in parameters, models, or assumptions [28]. In some non-deterministic design literature, a robust system is defined as one relatively insensitive to variations in both the system components and the environment, and the degree of tolerance to these variations is measured by robustness [22]. We use the definition from Ref. [22] and state it as follows.

Definition 2. Robustness: The degree of tolerance of the system to be insensitive to variations in both the system itself and the environment.

2.1.3. Reliability

The definition of reliability is quite consistent across research fields. It is generally defined as the likelihood that an item will perform its intended function for a specified time interval under stated conditions [29,30]. We also use this definition and state it as follows.

Definition 3. Reliability: The likelihood that a component (or a system) will perform its intended function without failure for a specified period of time under stated operating conditions.

Depending on the mathematical theory used to model uncertainties, the likelihood can be quantified with different measures, e.g. probability in probability theory, and belief and plausibility in evidence theory.

2.1.4. Deterministic design optimization

The process of obtaining optimal designs is known as design optimization. In traditional design optimization, most engineers assume for simplicity that the design variables in the optimization problem are deterministic, and do not account for the uncertainties that inherently exist in the design variables and parameters, as well as in the simulation models [31].
Definition 4. Deterministic design optimization: The process of obtaining optimal designs assuming that all the variables, parameters, models, and simulations involved in the design problem are deterministic.

For deterministic design optimization, the mathematical problem can be formulated as

$$\begin{cases} \text{find} & \mathbf{x} \\ \min & f(\mathbf{x},\mathbf{p}) \\ \text{s.t.} & \mathbf{g}(\mathbf{x},\mathbf{p}) \le 0 \\ & \mathbf{x}^L \le \mathbf{x} \le \mathbf{x}^U \end{cases} \tag{1}$$

where x is the design variable vector, p is the system constant parameter vector, x^L and x^U are the lower and upper bounds of x which define the boundaries of the design space, f(·) is the optimization objective function, and g(·) is the inequality constraint vector.

2.1.5. Robust design optimization

Robust design optimization (RDO) is a systematic and efficient way to meet the challenge of design optimization for performance and quality [32]. It is widely accepted that robust design was founded by the Japanese engineer Genichi Taguchi, who developed the Taguchi method to improve the quality of manufactured goods and make product performance insensitive to variation in variables beyond the control of designers [33,34]. With reference to [34–36], the definition of RDO is stated as follows.

Definition 5. Robust design optimization: A methodology to optimize a design so that it is insensitive to various variations.

The mathematical formulation of RDO with probability theory is

$$\begin{cases} \text{find} & \mathbf{x} \\ \min & \tilde{f}(\mathbf{x},\mathbf{p}) = F(\mu_f(\mathbf{x},\mathbf{p}),\, \sigma_f(\mathbf{x},\mathbf{p})) \\ \text{s.t.} & \mathbf{g}(\mathbf{x},\mathbf{p}) \le 0 \\ & \mathbf{x}^L \le \mathbf{x} \le \mathbf{x}^U \end{cases} \tag{2}$$

where both x and p may be uncertain, μ_f and σ_f are the mean and standard deviation of the original optimization objective f(·), and F(·) is the reformulated optimization objective function with respect to μ_f and σ_f. The simplest example of F(·) is the weighted sum of the mean and standard deviation, k μ_f(x,p)/w_{μf} + (1−k) σ_f(x,p)/w_{σf}, where k is the weighting factor and w_{μf} and w_{σf} are scaling factors. By integrating σ_f into the objective function, minimization of the system's sensitivity to uncertainties can be achieved. A graphical illustration of RDO is shown in Fig. 2.

2.1.6. Reliability-based design optimization

Reliability-based design optimization (RBDO), also referred to as reliability-based optimization (RBO) [20], deals with obtaining an optimal design while meeting reliability constraints [31]. With reference to [2], we define RBDO as follows.

Definition 6. Reliability-based design optimization: A methodology to optimize a design so that it is reliable, with the chance of failure kept below a predefined acceptable level.

The mathematical formulation of RBDO with probability theory is

$$\begin{cases} \text{find} & \mathbf{x} \\ \min & \tilde{f}(\mathbf{x},\mathbf{p}) = \mu_f(\mathbf{x},\mathbf{p}) \\ \text{s.t.} & P\{\mathbf{g}(\mathbf{x},\mathbf{p}) \le 0\} \ge \mathbf{R} \\ & \mathbf{x}^L \le \mathbf{x} \le \mathbf{x}^U \end{cases} \tag{3}$$

where P{·} is the probability that the statement within the braces is true, and R is the reliability vector specified for the constraint vector. A graphical illustration of RBDO is shown in Fig. 3.

To improve a system design in both robustness and reliability, RDO and RBDO can be combined into reliability-based robust design optimization (RBRDO), which is formulated as [20,37]

$$\begin{cases} \text{find} & \mathbf{x} \\ \min & \tilde{f}(\mathbf{x},\mathbf{p}) = F(\mu_f(\mathbf{x},\mathbf{p}),\, \sigma_f(\mathbf{x},\mathbf{p})) \\ \text{s.t.} & P\{\mathbf{g}(\mathbf{x},\mathbf{p}) \le 0\} \ge \mathbf{R} \\ & \mathbf{x}^L \le \mathbf{x} \le \mathbf{x}^U \end{cases} \tag{4}$$
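As a hedged sketch of how formulations (2) and (3) are evaluated in practice, the statistics μ_f, σ_f, and P{g ≤ 0} can be estimated by plain Monte Carlo sampling; the functions f and g, the distribution of the parameter p, and all constants below are invented purely for illustration.

```python
import random
import statistics

def f(x, p):
    """Hypothetical objective: a quadratic in one design variable."""
    return (x - p) ** 2 + p

def g(x, p):
    """Hypothetical constraint, satisfied when g(x, p) <= 0."""
    return 1.0 - x * p

def rdo_objective(x, k=0.5, w_mu=1.0, w_sd=1.0, n=20000, seed=0):
    """Weighted-sum RDO objective F = k*mu_f/w_mu + (1-k)*sigma_f/w_sd,
    with p treated as a normal random parameter N(2.0, 0.1)."""
    rng = random.Random(seed)
    samples = [f(x, rng.gauss(2.0, 0.1)) for _ in range(n)]
    mu, sd = statistics.fmean(samples), statistics.pstdev(samples)
    return k * mu / w_mu + (1 - k) * sd / w_sd

def rbdo_constraint_ok(x, R=0.99, n=20000, seed=0):
    """Monte Carlo check of the RBDO constraint P{g(x,p) <= 0} >= R."""
    rng = random.Random(seed)
    ok = sum(g(x, rng.gauss(2.0, 0.1)) <= 0 for _ in range(n))
    return ok / n >= R

print(rdo_objective(3.0))
print(rbdo_constraint_ok(3.0))
```

In a real UMDO setting these estimators would sit inside the optimization loop, which is precisely why the surveyed literature devotes so much attention to cheaper alternatives to brute-force sampling.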


Fig. 2. Graphical illustration of RDO.


Fig. 3. Graphical illustration of RBDO.

2.2. General UMDO process

A panoramic view of the general UMDO solving process is presented in this section to provide an overall understanding of this methodology. UMDO is the methodology that solves the uncertainty-based design optimization of complex systems by fully considering the coupling relationships and uncertainty propagation between the disciplines involved in the system. For a UMDO problem, the general solving flowchart is shown in Fig. 4, and the main steps are explained as follows.

Fig. 4. General flowchart of UMDO.

2.2.1. Uncertain system modeling

Uncertain system modeling is the first step, in which the design optimization problem is described mathematically; it consists of system modeling and uncertainty modeling.

2.2.1.1. System modeling. System modeling includes mathematical modeling of the system and its disciplines, and mathematical formulation of the optimization problem (design variables, optimization objectives, constraints, design space, etc.), similar to the system modeling of deterministic optimization.

2.2.1.2. Uncertainty modeling. Uncertainty modeling is the classification and quantification of the uncertainties involved in the system design. There are many mathematical theories and methods to model uncertainties [27,38], such as probability theory, possibility theory, evidence theory, clouds theory [39], etc. Throughout the aerospace vehicle lifecycle there exists a vast number of uncertainties, which inevitably lead to an unacceptable computational burden. Therefore, it is generally necessary to use sensitivity analysis to screen out the factors that have no significant influence on the system design.

2.2.2. UMDO procedure

The UMDO procedure is the methodology for efficiently organizing and realizing UMDO by programming in computers [40]. As shown in the flowchart (Fig. 4), the key steps of the UMDO procedure are optimization under uncertainty and uncertainty analysis.

2.2.2.1. Optimization under uncertainty. This step is the design space exploration under uncertainty. For large-scale, highly nonlinear, and non-convex problems, deterministic global optimization is already difficult and time-consuming, and it naturally becomes even harder with the additional effort needed to deal with uncertainties, which may lead to prohibitive computation. Hence research on optimization algorithms, as well as special treatments of uncertain objectives and constraints, is essential to enhance the overall optimization efficiency under uncertainty.

2.2.2.2. Uncertainty propagation and analysis. In this step, the uncertainty characteristics of the system output, under the impact of uncertainties propagated through the system's inner mechanisms, are quantitatively analyzed so as to further assess the reliability and robustness of the design. Especially for complex multidisciplinary aerospace vehicle systems, the cross propagation of uncertainties makes uncertainty analysis very difficult, and it is one of the hot issues in UMDO research.

In the following sections, the aforementioned key steps, except system modeling, which pertains to the specific research object, are thoroughly discussed and surveyed.
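The cross propagation mentioned above can be sketched with a toy two-discipline example (ours, not from the paper): a fixed-point multidisciplinary analysis (MDA) resolves the coupling variables y12 and y21 for each sample of the uncertain input, and Monte Carlo simulation then yields the output statistics. All models and numbers are invented.

```python
import random
import statistics

def discipline1(x, y21):
    """Hypothetical discipline 1: produces coupling variable y12."""
    return 0.5 * x + 0.3 * y21

def discipline2(y12):
    """Hypothetical discipline 2: produces coupling variable y21."""
    return 1.0 + 0.4 * y12

def mda(x, tol=1e-10, max_iter=100):
    """Fixed-point iteration until the coupling variables converge."""
    y21 = 0.0
    for _ in range(max_iter):
        y12 = discipline1(x, y21)
        y21_new = discipline2(y12)
        if abs(y21_new - y21) < tol:
            break
        y21 = y21_new
    return y12, y21

# Propagate an uncertain input x ~ N(1.0, 0.2) through the coupled system.
rng = random.Random(1)
outputs = [mda(rng.gauss(1.0, 0.2))[1] for _ in range(5000)]
print(statistics.fmean(outputs), statistics.pstdev(outputs))
```

Every Monte Carlo sample requires a full MDA convergence loop, which illustrates why multidisciplinary uncertainty analysis is so expensive and why the decomposition-based methods surveyed later try to avoid exactly this nesting.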

3. Uncertainty modeling

Appropriate uncertainty modeling is the premise of uncertainty-based design optimization. It includes adopting an appropriate taxonomy to comprehensively identify and classify uncertainty sources, utilizing suitable mathematical tools to represent and model these uncertainties, and using sensitivity analysis to screen out uncertainties with minor effects on the design so as to simplify the UMDO problem. These issues are studied in this section.

3.1. Uncertainty classification

There are numerous taxonomies in the literature addressing uncertainty classification. The most popular uncertainty taxonomy was first proposed in risk assessment, and classifies uncertainty into two general types: aleatory and epistemic. Aleatory uncertainty describes the inherent variation of the physical system or the environment under consideration. It is also known as variability, type A, or stochastic uncertainty, and is irreducible because it cannot be eliminated by collecting more information or data. Epistemic uncertainty is a potential inaccuracy in any phase or activity of the modeling process that is due to a lack of knowledge. It is also known as subjective, type B, or cognitive uncertainty, and is reducible because it can be eliminated with an increased state of knowledge or the collection of more data [41,42]. This taxonomy is widely accepted and has been applied in many fields, including

decision analysis, inference, risk and policy analysis, scientific computing, and modeling and simulation [43,44]. The journal Reliability Engineering and System Safety has devoted special issues (vol. 54, nos. 2–3, 1996, and vol. 85, nos. 1–3, 2004) to these two kinds of uncertainty [3,45].

For complex system design, uncertainty taxonomy has been specifically studied. Thunnissen [46,47] proposed classifying uncertainties into ambiguity, epistemic, aleatory, and interaction. Ambiguity (also called imprecision or vagueness) concerns the lack of precision in general communication. Epistemic and aleatory uncertainties are defined as above. Interaction uncertainty arises from unanticipated interactions of many events or disciplines, which might be, or should have been, foreseeable. Padmanabhan [20] defined the main types of uncertainty as variations in parameters and design variable settings, uncertainties related to decision making or design problem formulation, and modeling and numerical errors associated with analysis tools. DeLaurentis [19] and DeLaurentis and Mavris [27] established a framework to identify the uncertainty types that cause "model-based predictions to differ from reality" in aerospace vehicle synthesis and design. By analogy to a control model, uncertainties are classified into four types: input (imprecise, ambiguous, or not clearly defined requirements), operational environment (unknown or uncontrollable external disturbances), model parameter (error in mathematical models that attempt to represent a physical system), and measurement (arising when the response of interest is not directly computable from the mathematical model). Walton [48] developed a holistic view of the primary uncertainties over the space system lifecycle and categorized them into development uncertainty, operational uncertainty, and model uncertainty.
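A minimal sketch of how the aleatory/epistemic distinction often plays out in computation (our illustration, with invented model and numbers): an aleatory quantity is sampled from a probability distribution, while an epistemic quantity known only to lie in an interval is bounded by scanning its interval rather than sampled from a presumed distribution.

```python
import random

def response(thickness, load):
    """Hypothetical stress-like response model."""
    return load / thickness

rng = random.Random(0)
# Aleatory: loads scatter irreducibly; model them as random draws.
loads = [rng.gauss(100.0, 10.0) for _ in range(10000)]
# Epistemic: thickness is poorly known; only interval bounds are claimed.
thickness_interval = (1.8, 2.2)

# Bound the mean response over the epistemic interval instead of
# assigning thickness a distribution it has no data to support.
means = [sum(response(t, L) for L in loads) / len(loads)
         for t in thickness_interval]
lo, hi = min(means), max(means)
print(lo, hi)
```

The output is an interval of mean responses rather than a single number: collecting more data on the thickness (reducing the epistemic part) would shrink it, while the aleatory load scatter would remain.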
With computational simulation based design becoming the main tool in modern aerospace engineering, modeling and simulation uncertainties have also been thoroughly studied. In general, uncertainties contributing to simulation prediction uncertainty can be categorized as external or internal according to whether they lie outside or within the system model boundary [49–51]. Internal uncertainty concerns the simulation model itself and is further categorized as model structure uncertainty and model parameter uncertainty. Model structure uncertainty, also termed non-parametric uncertainty [13,38], is mainly due to assumptions underlying the model which may not capture the physics correctly [52]. Model parameter uncertainty, also called parametric uncertainty, is mainly due to limited information in estimating the model parameters for a


given fixed model form. Besides these two types of uncertainty, Oberkampf et al. [53,54] further proposed the term error to specifically denote a recognizable deficiency in any phase or activity of modeling and simulation that is not due to a lack of knowledge and is identifiable or knowable upon examination, such as a programming error. In the NASA report on uncertainty-based multidisciplinary design methods for aerospace vehicles, two complementary categorizations of uncertainty are used to address computational uncertainties [2]. One distinguishes between parameter uncertainties and model form uncertainties. The other is based on the taxonomy proposed by Oberkampf, which classifies the total computational uncertainty into variability, uncertainty, and error [53]. Since the UMDO problems of aerospace vehicles discussed in this paper are mainly concerned with computational simulation based design optimization that takes lifecycle uncertainties into account, we define the uncertainty taxonomy as follows:
(1) Uncertainties are generally categorized into two types: epistemic and aleatory.
(2) Sources of uncertainty throughout the aerospace vehicle lifecycle that influence UMDO in the design phase can be classified as follows:
(a) In the mission analysis phase, uncertainties arise from the variability of mission needs and requirements, government and related agency regulations, science and technology development, funding, mission schedule, and political and cultural factors, to name a few.
(b) In the design phase, specifically simulation-based computational design, uncertainties mainly arise from computational simulations. Three sources contribute to the total uncertainty of a computational simulation, namely model input uncertainty (external uncertainty), model uncertainty (model structure uncertainty and model parameter uncertainty), and model error, as shown in Fig. 5.
(c) In the manufacturing phase, uncertainties arise from human operation error, manufacturing tolerances, etc.
(d) In the operation phase, uncertainties arise from operational conditions (environment). These uncertainties are shown in Fig. 6.

3.2. Uncertainty representation and modeling

Different approaches should be used to represent and model uncertainty according to its specific characteristics and the information available about it.

Fig. 5. Uncertainty sources in the simulation-based design.



Fig. 6. Uncertainty sources relevant to UMDO throughout the aerospace vehicle lifecycle.

3.2.1. Model input uncertainty and model parameter uncertainty

Model input and model parameter uncertainties have different features in different contexts. Numerous modeling approaches have been studied; the most popular include probability theory, evidence theory, possibility theory, interval analysis, and convex modeling. As probability theory has a long history, a sound theoretical foundation, and deep roots in the research of non-deterministic design, it is more prevalent and better known to engineers than the other theories. Therefore, the terms non-probabilistic or imprecise probability approaches are used to cover all mathematical models which measure uncertainty without sharp numerical probabilities [55,56].

3.2.1.1. Probability theory. Probability theory represents uncertainty as a random variable or a (time-dependent) stochastic process. Here we mainly discuss random variables. For a discrete random variable x, a sample space is first defined which relates to the set of all possible outcomes, denoted by $\Omega = \{x_1, x_2, \ldots\}$. Each element $x \in \Omega$ of the sample space is assigned a probability value $f(x)$ between 0 and 1, and the sum of the probabilities of all the elements in the sample space equals 1. The function $f(x)$ mapping a point in the sample space to its probability value is called the probability mass function (pmf). For a continuous random variable X on the set of real numbers R, there exists a cumulative distribution function (CDF) $F(x)$, defined by $F(x) = P(X \le x)$, where x denotes a particular realization and P denotes probability. $F(x)$ returns the probability that X will be less than or equal to x. If $F(x)$ is absolutely continuous, it can be differentiated with respect to x to yield the probability density function (PDF) $f(x)$. For a set $E \subseteq R$, the probability of the random variable X being in E is
$$P(X \in E) = \int_{x \in E} f(x)\,dx \qquad (5)$$
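As a concrete illustration of Eq. (5) (an added sketch, not part of the original text; it assumes SciPy and a hypothetical standard normal X), the probability that X falls in an interval $E = [a, b]$ reduces to the CDF difference $F(b) - F(a)$:

```python
from scipy.stats import norm
from scipy.integrate import quad

# Hypothetical example: X ~ N(0, 1); probability that X lies in E = [-1, 1].
X = norm(loc=0.0, scale=1.0)

# By Eq. (5), P(X in E) equals F(1) - F(-1) for an interval E.
p_interval = X.cdf(1.0) - X.cdf(-1.0)   # ~0.683 for a standard normal

# Cross-check by numerically integrating the PDF over E, as in Eq. (5).
p_quad, _ = quad(X.pdf, -1.0, 1.0)
```

Both routes agree; the CDF-difference form avoids explicit integration when F(x) is available in closed form.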

The quantitative measures of random variables, e.g. mean, standard deviation, statistical moments, joint probability properties, etc., can be defined with probability methods. Aleatory uncertainty is generally modeled as a random variable or stochastic process by probability theory if information is sufficient to estimate the probability distribution. First, a distribution model (Gaussian, Poisson, log-normal, etc.) is assumed; then the parameters of the model are estimated with sufficient data (or other available information) to accurately fit its CDF or PDF. The distribution model can be selected according to the uncertainty characteristics and the

context it is involved in, based on past experience, a priori knowledge, or expert opinions [57,58]. The parameters of the model can be estimated with parameter estimation methods, e.g. the method of moments, the maximum likelihood method, etc. [59]. If the data sample is small, statistical distribution model selection and model fitting can be performed by Bayesian inference with the unbounded Johnson distribution [60]. Probability theory has been widely used in solving non-deterministic design problems in aerospace engineering [61–65]. In practice, however, the application of this theory often encounters the problem that there is not always sufficient information to develop the probability model. Especially in the conceptual design phase, there is very limited knowledge about the research object, and past experience, expert opinions, and empirical knowledge are generally quite subjective. In this situation uncertainties can be considered to degenerate from aleatory to epistemic (subjective) ones. To deal with epistemic uncertainty, Bayesian probability, a special interpretation of probability theory, has been investigated. Bayesian probability interprets probability as a measure of a state of knowledge, in contrast to interpreting it as a frequency or a physical property of a system. It specifies some prior probability subjectively, and then updates it in the light of new evidence or observations by means of statistical inference, so-called Bayesian inference. In this way it can combine pre-existing knowledge with subsequently available information and update the prior knowledge under uncertainty. With the capability of dealing with both aleatory and epistemic uncertainties, Bayesian theory has been widely applied, especially in reliability engineering (Bayesian reliability analysis and Bayesian reliability-based optimization) [66–69].

3.2.1.2. Evidence theory.
Evidence theory, also called Dempster–Shafer theory (D–S theory), is a mathematical theory which measures uncertainty with belief and plausibility determined from the known evidence (information) for a proposition. These measures define lower and upper bounds (an interval range) on probability consistent with the evidence, instead of assigning a precise probability to a proposition, which is very useful when only poor knowledge is available about the uncertainties under study [70]. The information or evidence to measure belief and plausibility comes from a wide range of sources, e.g. experimental data, theoretical evidence, experts' opinions concerning belief in the value of a parameter or the occurrence of an event, etc., and the evidence can be combined with combination rules [71]. Let $\Omega$ be the universal set representing all possible states of a system under consideration. The elements of the power set $2^\Omega$ can be taken to represent propositions concerning the actual state of the system. Evidence theory assigns a belief mass to each element of the power set by a basic belief assignment (BBA) function $m: 2^\Omega \to [0,1]$ which has two properties: the mass of the empty set is zero, $m(\emptyset) = 0$, and the masses of all the elements of the power set add up to 1, $\sum_{A \in 2^\Omega} m(A) = 1$. The mass $m(A)$ expresses the proportion of all relevant and available evidence that supports the claim that the actual state belongs to A. The value of $m(A)$ pertains only to A and makes no additional claims about any subsets of A, each of which has its own mass. From the mass assignments, a probability interval can be defined which contains the precise probability (in the classical probabilistic sense), with lower and upper bounds given by belief (Bel) and plausibility (Pl): $\mathrm{Bel}(A) \le P(A) \le \mathrm{Pl}(A)$. The belief Bel(A) is defined as the sum of the masses of all subsets of A, representing the amount of evidence supporting that the actual state belongs to A; the plausibility Pl(A) is the sum of the masses of all sets that intersect A, representing the amount of all the



evidence that does not rule out that the actual state belongs to A:
$$\mathrm{Bel}(A) = \sum_{B \mid B \subseteq A} m(B), \qquad \mathrm{Pl}(A) = \sum_{B \mid B \cap A \neq \emptyset} m(B) \qquad (6)$$

The two measures are related to each other as
$$\mathrm{Pl}(A) = 1 - \mathrm{Bel}(\bar{A}), \quad \mathrm{Bel}(A) + \mathrm{Bel}(\bar{A}) \le 1, \quad \mathrm{Pl}(A) + \mathrm{Pl}(\bar{A}) \ge 1 \qquad (7)$$
where $\bar{A}$ is the complement of A. The evidence space is characterized with the cumulative belief function (CBF) and cumulative plausibility function (CPF) defined as
$$\mathrm{CBF} = \{[x, \mathrm{Bel}(\Omega_x)] : x \in \Omega\}, \quad \mathrm{CPF} = \{[x, \mathrm{Pl}(\Omega_x)] : x \in \Omega\}, \quad \Omega_x = \{\tilde{x} : \tilde{x} \in \Omega \text{ and } \tilde{x} \le x\} \qquad (8)$$
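The belief and plausibility definitions of Eqs. (6)–(7) can be sketched in a few lines of Python (an illustrative example added here, with a small hypothetical BBA, not taken from the paper):

```python
# Minimal sketch of Bel/Pl (Eq. 6) for a BBA m on subsets of a small
# universal set {1, 2, 3}. The masses are hypothetical and sum to 1.
m = {frozenset({1}): 0.3, frozenset({2, 3}): 0.5, frozenset({1, 2, 3}): 0.2}

def bel(A, m):
    # Sum of masses of all focal sets B that are subsets of A.
    return sum(v for B, v in m.items() if B <= A)

def pl(A, m):
    # Sum of masses of all focal sets B that intersect A.
    return sum(v for B, v in m.items() if B & A)

A = frozenset({1})
# Here Bel({1}) = 0.3 and Pl({1}) = 0.3 + 0.2 = 0.5, so the "probability"
# of A is only known to lie in the interval [0.3, 0.5].
```

Note that Pl(A) = 1 − Bel(Ā) holds here (Bel({2,3}) = 0.5), consistent with Eq. (7).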

CBF and CPF are shown in Fig. 7 together with the CDF, which clearly illustrates how belief and plausibility define a probability interval with lower and upper bounds [72]. Detailed tutorials on evidence theory can be found in [73]. Evidence theory can handle both aleatory and epistemic uncertainties flexibly with its evidence combination rules to update probability measures. It is in fact closely related to probability theory, as an uncertainty representation with this theory approaches a probabilistic representation as the amount of available information increases, which is very appealing for application in industry [74,75]. However, it has limitations when dealing with highly inconsistent data sources, which may render the evidence combination rule unreliable. Nevertheless, it has attracted great research interest and been widely applied in uncertainty-based information, risk assessment, decision making, and design optimization [74,76–78].

3.2.1.3. Possibility theory. Possibility theory was first introduced by Zadeh in 1978 as an extension of his theory of fuzzy sets and fuzzy logic, and can be used to model uncertainties when there is little information or sparse data [79]. The term fuzzy set is used in contrast with the conventional (crisp) set, which has fixed boundaries. Let $\tilde{A}$ be a fuzzy set; the degree of membership of a single point x belonging to $\tilde{A}$ is denoted by a membership function $\mu_{\tilde{A}}(x)$, also called the characteristic function. The comparison between the fuzzy set $\tilde{A}$ and a classical set A is shown in Fig. 8(a). It can

Fig. 8. Uncertainty representation with fuzzy set theory and possibility theory: (a) fuzzy set vs. classical set and (b) plot of CPoF and CNF [72].

be seen that the degree of membership can vary between 0 and 1, while for the classical set the membership is either 0 or 1, so a crisp set can be seen as a special case of a fuzzy set. Given a possibility level of 0.4 (also referred to as an α-cut), the possible value of the uncertain variable x lies in the interval between 2.5 and 6.5. In possibility theory, the membership function is extended to a possibility distribution which expresses the degree to which the analyst considers that an event can occur. This subjective knowledge is numerically modeled with a pair $(\mathcal{X}, r)$ to characterize the uncertain variable x, where $\mathcal{X}$ is the set of possible values for x, and r is a function defined on $\mathcal{X}$ such that $0 \le r(x) \le 1$ for $x \in \mathcal{X}$ and $\sup\{r(x) : x \in \mathcal{X}\} = 1$. The function r provides a measure of confidence assigned to each element of $\mathcal{X}$ and is referred to as the possibility distribution function for x. Possibility theory provides two measures of likelihood for subsets of $\mathcal{X}$: possibility and necessity. Specifically, possibility and necessity for a subset $\mathcal{U}$ of $\mathcal{X}$ are defined by
$$\mathrm{Pos}(\mathcal{U}) = \sup\{r(x) : x \in \mathcal{U}\}, \qquad \mathrm{Nec}(\mathcal{U}) = 1 - \mathrm{Pos}(\mathcal{U}^c) = 1 - \sup\{r(x) : x \in \mathcal{U}^c\} \qquad (9)$$

Fig. 7. Plot of CBF and CPF [72].

where $\mathcal{U}^c$ denotes the complement of $\mathcal{U}$. In consistency with the properties of the possibility distribution function r, $\mathrm{Pos}(\mathcal{U})$ provides a measure of the amount of information that does not refute the proposition that $\mathcal{U}$ contains the appropriate value for x, and $\mathrm{Nec}(\mathcal{U})$ provides a measure of the amount of uncontradicted information that supports that proposition. Relationships satisfied by possibility and necessity for



the possibility space $(\mathcal{X}, r)$ include
$$\mathrm{Nec}(\mathcal{U}) + \mathrm{Pos}(\mathcal{U}^c) = 1, \quad \mathrm{Nec}(\mathcal{U}) \le \mathrm{Pos}(\mathcal{U}), \quad \mathrm{Pos}(\mathcal{U}) + \mathrm{Pos}(\mathcal{U}^c) \ge 1, \quad \mathrm{Nec}(\mathcal{U}) + \mathrm{Nec}(\mathcal{U}^c) \le 1 \qquad (10)$$
For any $\mathcal{U}$, either $\mathrm{Pos}(\mathcal{U}) = 1$ or $\mathrm{Nec}(\mathcal{U}) = 0$. Similar to probability theory, the possibility space can be characterized with the cumulative necessity function (CNF) and cumulative possibility function (CPoF), as shown in Fig. 8(b). CNF and CPoF are defined as [72]
$$\mathrm{CNF} = \{[x, \mathrm{Nec}(\mathcal{X}_x)] : x \in \mathcal{X}\}, \quad \mathrm{CPoF} = \{[x, \mathrm{Pos}(\mathcal{X}_x)] : x \in \mathcal{X}\}, \quad \mathcal{X}_x = \{\tilde{x} : \tilde{x} \in \mathcal{X} \text{ and } \tilde{x} \le x\} \qquad (11)$$
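A minimal numerical sketch of the possibility and necessity measures of Eqs. (9)–(10), using a hypothetical discrete possibility distribution (added here for illustration, not part of the original text):

```python
# Discrete possibility distribution r on a finite set; sup r(x) = 1 as required.
r = {1: 0.2, 2: 0.6, 3: 1.0, 4: 0.5}

def pos(U, r):
    # Pos(U) = sup{r(x) : x in U}  (Eq. 9)
    return max(r[x] for x in U)

def nec(U, r):
    # Nec(U) = 1 - Pos(complement of U)  (Eq. 9)
    comp = set(r) - set(U)
    return 1.0 - (max(r[x] for x in comp) if comp else 0.0)

U = {2, 3}
# Pos(U) = 1.0 and Nec(U) = 1 - max(0.2, 0.5) = 0.5, so Nec(U) <= Pos(U)
# as required by Eq. (10).
```

The property "either Pos(U) = 1 or Nec(U) = 0" is also visible here: for U = {1}, Pos = 0.2 < 1 and Nec = 1 − 1.0 = 0.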

Both aleatory and epistemic (mainly vagueness) uncertainties can be represented by this theory. Detailed introductions to possibility theory and fuzzy sets can be found in [79,80]. Compared to probability theory, possibility theory can be more conservative for a given confidence level; but given enough information about the uncertainties and accurate predictive models, probability theory is more advantageous [81–84]. Applications of fuzzy sets and possibility theory in engineering design optimization and decision making are given in [85–88]. Possibility theory can also be applied together with probability theory, since both types of uncertainty may occur within one problem; integrated or unified algorithms are studied in [89–91].

3.2.1.4. Interval analysis. Interval analysis is a method developed by mathematicians since the 1950s as an approach to putting bounds on rounding errors and measurement errors in mathematical computation, and thus developing numerical methods that yield reliable results. In interval analysis the value of a variable is replaced by a pair of numbers representing the maximum and minimum values the variable is expected to take, which is the simplest way to represent uncertainty. Interval arithmetic rules are used to perform mathematical operations with interval numbers, so as to propagate the interval bounds through the computational model and obtain bounds on the output variables. A comprehensive introduction to this theory and its applications can be found in [92–95].

3.2.1.5. Convex modeling. Convex modeling is a more general approach proposed by Ben-Haim and Elishakoff in 1990 to represent uncertainties with convex sets [96]. Convex models include the energy-bound model, interval model, ellipsoid model, envelope-bound model, slope-bound model, Fourier-bound model, etc. A typical convex description of an uncertain parameter vector $x = [x_1, x_2, \ldots, x_{n_X}]$ is the ellipsoid model defined by $x^T W x \le a$, where W is a positive definite matrix and a is a positive constant. With this description, the uncertain object denoted by x is an ellipsoid rather than a hypercube defined by the lower and upper bounds on each component. This is reasonable, as it is unlikely that the uncertain components are independent of each other and that the bounds on the components are reached simultaneously. Therefore it is more general in realistic applications to use a convex model that represents correlations between uncertain components. When the convex models are intervals, techniques from interval analysis can be used. Convex modeling and its application, specifically in reliability analysis and design, can be found in [96–98]. Based on convex modeling, Info-Gap decision theory was further developed by Ben-Haim as a methodology for robust decision making under conditions of severe uncertainty [99,100]. Besides the foregoing five theories, there are numerous other alternative approaches to represent uncertainties, especially for

epistemic uncertainties, e.g. cloud theory mediating between fuzzy set theory and probability distributions [39,101,102], and fuzzy random theory and random fuzzy theory with characteristics of both fuzzy set theory and probability theory [103], which are reviewed in [104–106]. A special issue of Reliability Engineering and System Safety (vol. 85, 2004) is dedicated to this research [45].

3.2.2. Model form uncertainty

Model form uncertainty can be characterized by Bayesian approaches [44,107–109], or through model accuracy assessment by comparison between simulation results and experimental measurements [43,110]. This process is also called model validation, which determines whether the mathematical model of a physical event represents the actual physical event with sufficient reliability [111,112]. In uncertainty-based design, uncertainty representation models themselves also have model form uncertainties, especially probabilistic models whose distributions are assumed and fitted based on past experience, expert opinions, experimental data, etc. Hence it is also necessary to measure the uncertainty of the uncertainty model to validate the feasibility of the uncertainty representation. To assess whether a specific distribution suits a data set, goodness-of-fit criteria, including the Pearson χ² test, the Kolmogorov–Smirnov test, the Cramér–von Mises criterion, the Anderson–Darling test, etc., can be applied [113–115]. If the data available to test hypotheses about probabilistic models are very scarce and do not allow definite conclusions to choose or discard one model among others, a Bayesian method can be used which is capable of combining several competing probability distribution types to describe a random variable [116–118]. More generally, a complete Bayesian solution has been proposed that averages over all possible models, which can provide better predictive performance than any single model by accounting for model uncertainty [119].
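As a hedged illustration of the goodness-of-fit step (synthetic data and SciPy assumed; note that applying the Kolmogorov–Smirnov test with parameters fitted from the same data biases the p-value upward, so a Lilliefors-type correction would be needed for a rigorous test):

```python
import numpy as np
from scipy import stats

# Synthetic "measured" data, drawn here from a known normal for illustration.
rng = np.random.default_rng(0)
data = rng.normal(loc=10.0, scale=2.0, size=500)

# Fit the assumed normal model by maximum likelihood...
mu, sigma = stats.norm.fit(data)

# ...then test the fitted distribution against the data with the K-S test.
statistic, p_value = stats.kstest(data, "norm", args=(mu, sigma))

# A large p-value means the normal hypothesis is not rejected at the usual
# significance levels; a small one would suggest discarding the model.
```

The same pattern applies to other candidate models (log-normal, Weibull, etc.) by swapping the distribution name and fitted parameters.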
3.2.3. Model error

Great research effort has been devoted to model error estimation and control in computational simulations. Discretization errors can be evaluated by grid refinement and time step refinement studies [110,120,121], discretization error transport equations [122,123], goal-oriented error estimation [124–126], and other a priori [127] and a posteriori error estimation methods [128,129]. Round-off errors can be tested and characterized by comparing model calculations with results from advanced computer hardware. Programming errors are simply mistakes and can be detected by redundant procedures and double-checking in model verification [110]. In general, the discretization error, round-off error, and programming error can be estimated by comparing numerical results of the programmed simulation model with analytical results of the exact mathematical model. This process is also called model verification, which determines whether the computational simulation codes implementing the theoretical model have sufficient accuracy [110]. To sum up, model form uncertainty and model error can be characterized by model verification and validation; detailed studies can be found in [130–134] and the references therein.

3.2.4. Other uncertainties related to UMDO

Uncertainties from other phases of the aerospace vehicle lifecycle can be characterized with appropriate models according to their aleatory or epistemic features, the specific context under study, and the quality and quantity of available information. The general guidance for representation model selection follows the preceding discussion of this section.


3.3. Uncertainty sensitivity analysis

Sensitivity analysis (SA) is the study of how the variation (uncertainty) in the model output can be apportioned, qualitatively or quantitatively, to different sources of variation in the model or model input [135]. By means of this technique, uncertainty factors can be systematically studied to measure their effects on the system output, so as to filter out the uncertainty factors with negligible contributions and reduce UMDO complexity (Fig. 9). With this specific aim, sensitivity analysis in this context is also termed uncertainty importance analysis [136]. There are many approaches to sensitivity analysis under uncertainty, especially with probability theory. Probabilistic sensitivity analysis methods mainly include differential analysis, response surface methodology, variance decomposition, the Fourier amplitude sensitivity test (FAST), sampling-based methods, etc. [137–140], which can deal well with aleatory uncertainties modeled with probability theory. A thorough comparison of these approaches can be found in [141]. Among them, the sampling-based method is widely used for its flexibility and ease of implementation. With the sampling results, different measures and analysis methods can be used to quantify the contribution of each uncertainty factor, e.g. scatterplots, correlation and partial correlation methods, regression and nonparametric regression analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, variance decomposition, etc. [136,137]. Considering the computational burden of global sampling of computationally intensive models, it is desirable to find a balance between computational cost and accuracy. To address this problem, the Elementary Effects (EE) method was proposed by Morris [142] and improved by Campolongo et al. [143].
This method calculates a number of incremental ratios, called elementary effects, for each uncertainty factor, from which basic statistics are computed to derive sensitivity information. For each factor, two sensitivity measures are computed: μ, which assesses the overall influence of the factor on the output, and σ, which estimates nonlinear effects and interactions. This method can provide a good

Fig. 9. Sensitivity analysis for importance ranking and screening of uncertain variables.


compromise between accuracy and efficiency, especially for sensitivity analysis of complex models. Little research has been devoted to sensitivity analysis under epistemic uncertainty. So far the approaches are mainly based on sampling methods to study incremental effects of uncertain variables on complementary cumulative belief functions and complementary cumulative plausibility functions in evidence theory [144,145], or on differential analysis methods to analytically derive the sensitivity of plausibility in evidence theory with respect to expert opinions and uncertain parameters [146]. Guo and Du also proposed to use evidence theory to unify sensitivity analysis for both aleatory and epistemic uncertainties [147,148]. Uncertainty analysis methods are used to calculate belief and plausibility measures, and the gap between these two measures is regarded as an indicator of the uncertainty's effect on the model output. The selection of an appropriate sensitivity analysis method should rely on the uncertainty types and the specific problem context. For example, a sampling-based method would be computationally expensive for complex system simulation models; but if an approximation model is used in place of the high-fidelity simulation model, a sampling-based method can be efficient as well.
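The elementary-effects screening idea can be sketched as follows (a simplified illustration, not Morris's exact randomized-trajectory design; the model f and all parameter values are hypothetical):

```python
import numpy as np

# Simplified elementary-effects (EE) screening: perturb one factor at a time
# by a step delta and collect the incremental ratios, then summarize them.
def elementary_effects(f, n_vars, n_paths=50, delta=0.1, seed=1):
    rng = np.random.default_rng(seed)
    ee = [[] for _ in range(n_vars)]
    for _ in range(n_paths):
        x = rng.uniform(0.0, 1.0 - delta, size=n_vars)  # random base point
        fx = f(x)
        for i in range(n_vars):
            xp = x.copy()
            xp[i] += delta                       # one-at-a-time perturbation
            ee[i].append((f(xp) - fx) / delta)   # elementary effect
    # mu* (mean absolute EE) ranks overall influence; sigma flags
    # nonlinearity and interaction effects.
    mu_star = np.array([np.mean(np.abs(e)) for e in ee])
    sigma = np.array([np.std(e) for e in ee])
    return mu_star, sigma

# Hypothetical model: x0 dominates linearly, x1 acts nonlinearly, x2 is inert.
f = lambda x: 10.0 * x[0] + x[1] ** 2 + 0.0 * x[2]
mu_star, sigma = elementary_effects(f, 3)
# mu_star ranks x0 >> x1 >> x2, so x2 could be screened out of the UMDO problem.
```

The cost is (n_vars + 1) model runs per path, far below a full variance-based analysis, which is the accuracy/efficiency compromise noted above.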

4. Uncertainty propagation and analysis

Uncertainty analysis is concerned with quantifying the uncertainty characteristics of the system output resulting from model input uncertainties and model uncertainties propagated through the computational simulation (Fig. 10). Generally, uncertainty analysis approaches can be categorized into two types: intrusive and non-intrusive [149]. The intrusive type is mainly related to physics-based approaches which involve reformulating the governing equations and modifying the simulation codes so as to incorporate uncertainty directly into the system. A typical example of this type is the Polynomial Chaos expansion based approach, which represents a stochastic process with an expansion of orthogonal polynomials. The coefficients of the expansion can be determined by substituting the stochastic process with its polynomial chaos expansion in the original governing equations, which results in a coupled system of deterministic equations to be solved by editing the existing analysis codes. Multi-dimensional Hermite orthogonal polynomials were first proposed to represent Gaussian stochastic processes by Wiener [150]; based on this, a spectral stochastic finite element method was developed by Ghanem and Spanos [151] and widely used in various applications, including structural mechanics [151] and fluid flow [152,153]. Xiu and Karniadakis further proposed to represent stochastic processes with expansions from the Askey family of orthogonal polynomials as a generalization of the Wiener–Hermite chaos expansion, which uses different subsets of Askey family polynomials according to the distributions of the random variables, e.g. Laguerre polynomials for the Gamma distribution, Charlier polynomials for the Poisson distribution, etc. [154]. In contrast to intrusive approaches, non-intrusive approaches treat the computer simulation model as a black box and need no modification to the existing deterministic simulation codes.
So it can be developed for

Fig. 10. Uncertainty propagation.



general use and has the advantage of being applicable to legacy codes. With this merit, the preceding Polynomial Chaos expansion based methods have also been studied with non-intrusive solution approaches [155,156]. The scope of this section excludes the intrusive approach as it depends on specific problems and disciplines. Interval and bound analysis methods, based on interval algebra or optimization to estimate the upper and lower bounds of system outputs, are not discussed either. Herein we review some widely used non-intrusive approaches, including the Monte Carlo simulation method, the Taylor series approximation method, and some methods specific to reliability analysis. Finally, considering the computational difficulty of applying conventional uncertainty analysis methods to UMDO problems, decomposition based methods are introduced, which treat uncertainty cross propagation among complex coupled disciplines more efficiently by decomposing the system uncertainty analysis problem to the subsystem or disciplinary level.

can compute the statistics of the system response by simply performing repeated sampling and simulation. The basic MCS procedure with probabilistic uncertainties includes three steps: Step 1: In consistency with the assumed distributions, a set of nS data points are randomly sampled. The random sampling methods are discussed in [165]. Step 2: For each data point a deterministic simulation is executed to get the corresponding system output response and form nS sample pairs [x(i),y(i)]. Step 3: Analyze the samples. The integral in (12) can be approximated as ~ ¼ Ij

nS 1 X jðyðiÞ Þ nS i ¼ 1

ð13Þ

And the standard deviation of j(y) can be approximated as n

s2j 

S 1 X ~ Þ2 ðjðyðiÞ Þj nS 1 i ¼ 1

ð14Þ

4.1. Monte Carlo simulation Monte Carlo simulation (MCS) methods, also referred to as sampling-based methods [157,158], are a class of computational algorithms that perform repeated sampling and simulation so as to compute the statistics of the response quantities of interest. Provided sufficient number of samples, MCS methods can give statistic analysis results with arbitrary level of accuracy. Hence MCS is often used as a benchmark for evaluating the performance of new uncertainty analysis techniques. To begin with, the uncertainty analysis problem with probability theory is firstly stated. Denote the computer simulation model as y¼ f(x) with y being the simulation output. For simplicity, only one dimension output problem is discussed which can be easily extended to multi-dimensional output problem. Assume the joint probability distribution function of the vector x is p(x) and the universe of the random variables is O. For arbitrary function j(y), its expected value is Z I ¼ EðjðyÞÞ ¼ jðf ðxÞÞpðxÞ dx ð12Þ O

when j(y) ¼yk, I is the estimate for the kth statistical moment; when j(y) ¼y, I is the mean of y; when j(y)¼1 for yry0 and j(y)¼0 otherwise, I is an estimate of the quantile on the distribution function of y associated with y0. It is worth noting that (12) can be calculated with analytical methods in very rare cases in reality as both f(x) and p(x) can seldom explicitly defined, and the integration region is also generally complicated. Lots of efforts have been devoted to develop approximation approaches to numerically evaluate this integral. Gauss quadrature approaches [159] and other numerical quadrature and cubature methods [160–162] are proposed to approximate the multidimensional integral with weighted sum of the integrand values at a set of discrete integration points within the integration region. Laplace Approximation approach is proposed to approximate the integrand with second order Taylor series expansion at its minimum so as to derive the integral [163]. These approaches are comprehensively studied in [164]. Unfortunately, these approximate numerical integration approaches are generally only efficient and accurate for a special type of problems, e.g. quadrature based method for polynomial response, and may be not applicable especially for problems with high dimensional uncertainties and complex integrand which has no explicit formula and can only be calculated with time-consuming simulation analysis, e.g. FEA (Finite Element Analysis) codes. These difficulties with the traditional numerical integration approaches motivate the development of simulation based MCS integration methods which

The accuracy of the estimation in (13) can be quantified with the standard error defined as pffiffiffiffiffi err ¼ sj = nS

ð15Þ

The standard error can be used to assess the accuracy of MCS. From (15) it can be seen that the estimation accuracy does not depend on the dimension of the problem, which is very appealing for large scale uncertainty analysis problems. However, the error is proportional to 1/√nS, which means that improving the accuracy by one order of magnitude requires increasing the number of samples by a factor of 100. This becomes computationally prohibitive for complex simulation models, and even worse for UMDO problems, which need iterations of several coupled disciplinary simulations to reach a consistent system response. To address this problem, several improved MCS methods with different sampling techniques have been developed and proved to be more efficient than random sampling. Among these sampling methods, importance sampling, also referred to as "weighted sampling" [166], is pervasively studied, as in principle err can be reduced to zero with a carefully selected importance sampling probability density function [167]. Approaches for selecting the optimum importance sampling function are discussed in [167,168], but the theoretically optimum functions are generally impractical in realistic engineering problems. A compromise is the Latin hypercube sampling approach, which can improve MCS stability (reduce err) while maintaining the tractability of random sampling. It divides the range of each variable into nS disjoint intervals of equal probability, and one value is selected at random from each interval. The nS values of each variable are then paired randomly (or with a certain criterion, e.g. uniform distribution) to form nS samples for further statistical analysis. This method and its related operation techniques are thoroughly studied in [141,157]. Cao et al. proposed to use first-order sensitivity information of the target response with respect to the random variables to accelerate MCS convergence as a variance reduction technique [169], and it is observed that this sensitivity enhanced method can improve accuracy by one order of magnitude compared to err in (15). The aforementioned variance reduction techniques are especially important when MCS is applied to estimate small failure probabilities, which will be discussed in detail in Section 4.3. For MCS methods under other uncertainty types, readers are referred to [170] for evidence theory, [90] for possibility theory, and [171,172] for interval analysis; a comprehensive discussion of MCS methods with different uncertainty theories can be found in [72,173].
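As a concrete illustration, the Latin hypercube construction just described (equal-probability intervals, one random draw per interval, random pairing across variables) can be sketched in a few lines of plain Python; the integrand and sample size below are arbitrary choices for the sketch, not taken from the references:

```python
import random

def latin_hypercube(n_samples, n_vars, seed=0):
    """LHS on [0,1)^n_vars: split each variable's range into n_samples
    equal-probability intervals, draw one value per interval, then pair the
    per-variable values randomly across samples."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_vars):
        # one value drawn uniformly inside each of the n_samples intervals
        col = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(col)  # random pairing across variables
        columns.append(col)
    return [tuple(col[i] for col in columns) for i in range(n_samples)]

def mcs_estimate(phi, samples):
    """Sample-mean MCS estimator of the integral I = E[phi(x)]."""
    return sum(phi(x) for x in samples) / len(samples)

# toy integrand over the unit square; the exact mean is 1/2 + 1/3
samples = latin_hypercube(1000, 2)
estimate = mcs_estimate(lambda x: x[0] + x[1] ** 2, samples)
print(round(estimate, 3))
```

Because each marginal is stratified, the estimate of the mean is far more stable at a given nS than with plain random sampling, which is exactly the variance reduction effect discussed above.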

W. Yao et al. / Progress in Aerospace Sciences 47 (2011) 450–479

4.2. Taylor series approximation

Taylor series approximation methods can be employed to approximate the statistical moments of the system output based on partial derivatives of the output f with respect to the elements of the random input vector x. The original simulation model function y = f(x) can be approximated with the first-order Taylor series as

y(x) ≈ f(x0) + Σ_{i=1}^{nX} [∂f(x0)/∂xi](xi − xi0)    (16)

where x0 is the base point vector at which the derivatives are calculated. Based on (16), the output uncertainty resulting from the random input uncertainties can be determined by propagating the input uncertainties through this approximation formula, and the mean and standard deviation of the output can be estimated as

μy = E(y) ≈ f(x0) + Σ_{i=1}^{nX} [∂f(x0)/∂xi] E(xi − xi0) = f(x0)

σy = sqrt{ Σ_{i=1}^{nX} [∂f(x0)/∂xi]² σ²xi + 2 Σ_{i=1}^{nX} Σ_{j=i+1}^{nX} [∂f(x0)/∂xi][∂f(x0)/∂xj] Cov(xi, xj) }    (17)

where Cov(xi, xj) is the covariance between the components of the input vector. If the input vector components are uncorrelated, the standard deviation simplifies to

σy = sqrt{ Σ_{i=1}^{nX} [∂f(x0)/∂xi]² σ²xi }    (18)

For UMDO problems, considering the coupling relationships between disciplines, Gu et al. [174] and Cao and Duan [175] proposed to employ the first-order Taylor series approximation combined with the global sensitivity equations (GSE) to analyze system output uncertainty with cross-propagation of uncertainty between disciplines, specifically for worst case uncertainties [174] and convex model uncertainties [175]. Consider a UMDO problem with nD coupled disciplines. Denote the simulation model (contributing analysis tool) of discipline i as Ti and the output vector of discipline i as yi. The input of Ti includes both the design variable vector x and the coupled state variable vectors from the other disciplinary outputs, denoted as yUi = {yj} (j ≠ i), so that yi = Ti(x, yUi). Considering uncertainties in the design variable x with variation Δx and in the analysis tool Ti with bias error ΔTi, the output uncertainty can be estimated as

[Δy1 ]   [dy1/dx ]        [  I1         −∂T1/∂y2   ...  −∂T1/∂ynD ]^(−1)  [ΔT1(x,yU1) ]
[Δy2 ] = [dy2/dx ]·Δx +   [ −∂T2/∂y1    I2         ...     ...    ]       [ΔT2(x,yU2) ]    (19)
[ ... ]  [  ...  ]        [    ...         ...     ...     ...    ]       [    ...    ]
[ΔynD]   [dynD/dx]        [ −∂TnD/∂y1    ...       ...     InD    ]       [ΔTnD(x,yUnD)]

Du and Chen also proposed a system uncertainty analysis method (SUAM) based on Taylor approximations and sensitivity analysis to estimate the mean and variance of system outputs subject to both parameter and model uncertainties in multidisciplinary systems [176]. The derived equation for the variance estimation of the coupled state variables is essentially the same as (19).

Taylor series approximation methods have several downsides: (1) their inherent local nature makes the estimation accuracy deteriorate as the coefficients of variation (defined as the standard deviation divided by the mean) of the input random vector increase [149]; (2) increasing the order of the Taylor series expansion leads to a rapid increase in estimation complexity, as higher-order terms and correlations between the elements of x become involved [177]; and (3) the determination of partial derivatives can be very difficult for complex system simulation models [141]. Nevertheless, Taylor series approximation methods have been widely used for their relative ease of understanding and implementation. As Taylor series approximation methods only deal with the propagation of the first two moments rather than the exact distribution of the randomness, they belong to the first-order, second-moment (FOSM) methods, which address the class of problems concerned only with means and variances and their propagation [178]. This is a logical naming convention for uncertainty propagation techniques with a given choice of the order of approximation and the statistical moments used [179]. Besides Taylor series approximation methods, there are several other FOSM approaches, such as the point-estimate-for-probability-moment (PEPM) methods [180], which are reviewed in [178].

4.3. Reliability analysis

Reliability analysis of the constraint g(x) ≤ 0 at design point A determines the probability of failure pf under the impact of the uncertainties in the system and its operational environment, as illustrated in Fig. 11 for two uncertain variables and a linear constraint. The probability of failure can be calculated with the integral [181]

pf = ∫_D p(x) dx    (20)

where the failure domain D is defined by g(x) > 0. The reliability R of the system is given by R = 1 − pf. This integral is generally difficult to calculate analytically, as both the joint probability density function p(x) and the failure domain D are seldom accurately defined in an explicit analytical form, and the multidimensional integration can be computationally prohibitive, especially for complex systems with time-consuming analysis models. This has motivated the development of various approximation methods, including the numerical integration methods mentioned in Section 4.1, as well as other integration approximation methods specific to reliability analysis, e.g. asymptotic approximation based on the Laplace multidimensional integral method [182], integration based on coverage of the main failure domain [183], fast Fourier transform (FFT) based methods [184,185], the tail modeling approach [186,187], the dimension-reduction (DR) methodology [188–190], and the First Order Reliability Method (FORM) and Second Order

Fig. 11. Reliability analysis.


Fig. 12. MPP.

Reliability Method (SORM), etc. Among these approximation methods, FORM and SORM are the most prevalent and widely applied in engineering problems. A full theory exists for FORM and SORM, both in the standard space of independent standard normal variables and in the original space [191], and several variants have been proposed to enhance algorithm efficiency [192,193].

FORM and SORM methods generally include three steps. Firstly, the original non-Gaussian random variable vector x is transformed into an uncorrelated Gaussian random variable vector u with zero mean and unit variance in the standard normal space U by the Rosenblatt transformation [194]. Denote the transformation as x = T(u). The integral (20) is rewritten as

pf = ∫_{Du} φ(u) du    (21)

where φ(u) is the joint standard normal probability density function, and Du is the failure domain in the U space defined by the limit state function G(u) = g(T(u)) = 0.

Secondly, the Most Probable Point (MPP; also called the most likely failure point, design point, or check point), which has the maximum probability density on the limit state function, is searched for; this is the key step of the FORM and SORM methods (Fig. 12). Generally the MPP calculation can be formulated as an optimization problem:

min_u  ||u||
s.t.   G(u) = 0    (22)

The optimum of (22) is denoted as u*. It can be solved by specific iterative algorithms, e.g. the HL-RF (Hasofer, Lind, Rackwitz, and Fiessler) method [195,196], or by general constrained optimization algorithms, e.g. gradient based methods, the augmented Lagrangian method, sequential quadratic programming, penalty methods, etc. [197]. For non-convex limit state functions, a branch and bound strategy can be used to search for the MPP efficiently [198]. Yang et al. studied the convergence of MPP search iterations based on chaotic dynamics theory [199], which treats the iteration formulation as a multi-dimensional discrete dynamic equation. It is observed that complicated dynamic phenomena, such as periodic oscillation, bifurcation, and chaos, occur in the test MPP search problems, which indicates that convergence failure with certain limit state functions is to be expected. It is also concluded that there is no simple relationship between the curvature of the limit state function at the design point and the convergence of the FORM iteration, but the Lyapunov exponent of the nonlinear map corresponding to the limit state function can be used to quantitatively describe the convergence of the iterative MPP search computations.

Thirdly, the limit state function is approximated with a first- or second-order approximation at the MPP, and the probability of failure is estimated using the approximate limit state function. FORM fits a tangent hyperplane to the limit state hypersurface at the MPP (first-order Taylor series expansion), as shown in Fig. 13, and the probability of failure can be estimated as

Pf ≈ Φ(−β)    (23)

Fig. 13. FORM reliability analysis.

where β is the reliability index (safety index) defined as β = ||u*||, and Φ(·) is the standard normal cumulative distribution function. In reliability based optimization, by comparing the reliability R = 1 − Pf with the desired (target) reliability RT, the probabilistic constraint can be assessed as to whether the reliability requirement has been achieved; this method is called the reliability index approach (RIA). However, RIA converges slowly, or even fails to converge, for a number of problems [200,201]. Furthermore, in reliability based optimization it is not necessary to calculate the exact reliability at each iteration point during the optimization search; judging whether the target reliability has been achieved is enough. Hence an alternative approach, the Performance Measure Approach (PMA), has been proposed [202,203]. In PMA, with constraint failure defined as g(x) > 0, the reliability analysis is formulated as the inverse of the reliability analysis in RIA:

max_u  G(u)
s.t.   ||u|| = βT    (24)

where βT is the reliability index corresponding to the desired (target) reliability RT, and the optimum point on the target reliability surface is denoted as the MPP u*_{β=βT}. If G(u*_{β=βT}) > 0, the reliability requirement is not satisfied. Unlike RIA, only the direction vector of u*_{β=βT} needs to be determined, by exploring the spherical equality constraint ||u|| = βT. Several methods can be used to solve this optimization problem, such as the advanced mean value (AMV) approach for convex performance functions, the conjugate mean value (CMV) approach for concave performance functions, the hybrid mean value (HMV) and enhanced HMV (HMV+) methods for both convex and concave performance functions [203,204], and a steepest descent direction and arc search based algorithm for general non-concave and non-convex functions [205]. Comparative studies of the RIA and PMA methods show that PMA has several major advantages over RIA in terms of numerical accuracy, simplicity, and stability [200,201,206]. Besides PMA, which essentially employs the inverse reliability strategy [207,208], other inverse reliability measures have been introduced in recent years as alternative measures of safety to improve the computational efficiency of reliability-based design optimization. These measures and the corresponding analysis approaches are surveyed in [209].

For a highly nonlinear limit state function, the first-order approximation is not sufficient for an accurate estimate, so the limit state hypersurface is approximated by a quadratic hypersurface in the


SORM method to obtain a more refined estimation, and the probability of failure can be estimated as

Pf ≈ Φ(−β) ∏_{i=1}^{nX−1} (1 − βκi)^{−1/2}    (25)

where κi is the ith main curvature of the hypersurface at the MPP. There are also other alternative formulations to estimate Pf with SORM, balancing accuracy and computational efficiency [210,211]. Based on the FORM and SORM methods, the high-order reliability method (HORM) [212], the second-order third-moment method [213], and higher moment methods [214] have been developed to improve accuracy. A method to generate the cumulative distribution function (CDF) of the system output based on FORM is also discussed in [215]. The accuracy and applicable ranges of FORM and SORM are comprehensively studied in [193,216]. The asymptotic behavior of FORM and SORM is thoroughly analyzed in [210]; the results show that only SORM gives an asymptotic approximation of the integral in the sense of asymptotic analysis, whereas FORM produces an uncontrollable relative error, but if the generalized reliability index [217] is used, an asymptotic approximation can also be obtained with FORM [210]. Nevertheless, FORM is more popular in application for its computational efficiency.

Besides the foregoing numerical approximation methods for estimating reliability, the MCS method is another good choice, as it is easy to implement and flexible for any type of distribution and any form of constraint function. Define the indicator function I[g(x) > 0] such that I[·] = 1 if x is in D and zero otherwise. Then (20) can be rewritten as

pf = ∫_Ω I[g(x) > 0] p(x) dx    (26)

An unbiased estimator of (26) by means of MCS with mutually independent sample data pairs [x(i), y(i)] of size nS can be given as

pf ≈ p̃f = (1/nS) Σ_{i=1}^{nS} I[g(x(i)) > 0]    (27)

Denote ξ(i) = I[g(x(i)) > 0]. {ξ(i)} is a sequence of nS independent 0/1 experiments, and each ξ(i) is a single Bernoulli trial with probability pf of the outcome 1. The expectation of ξ(i) is μ = E(ξ(i)) = pf, and the standard deviation is σ = sqrt(Var(ξ(i))) = sqrt(pf(1 − pf)). Denote z_{1−u/2} as the 1 − u/2 quantile of the standard normal distribution. Given the probability (also referred to as the confidence level) 1 − u, the error bound ε of |p̃f − pf| is

ε = z_{1−u/2} σ/√nS = z_{1−u/2} sqrt(pf(1 − pf)/nS)    (28)

and the percentage error ε̃ = ε/pf is

ε̃ = z_{1−u/2} sqrt((1 − pf)/(nS·pf))    (29)
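A minimal numerical illustration of the estimator (27) and the bounds (28)–(29), using a linear limit state with a known failure probability; the threshold, sample size, and confidence level below are arbitrary choices for the sketch:

```python
import math
import random

def mcs_failure_probability(g, sampler, n_samples, rng):
    """Crude MCS estimator (27): the fraction of samples falling in the
    failure domain g(x) > 0."""
    return sum(1 for _ in range(n_samples) if g(sampler(rng)) > 0) / n_samples

# limit state g(x) = x - 1.2816 with x ~ N(0,1); true pf = 1 - Phi(1.2816) ~ 0.10
true_pf = 0.5 * math.erfc(1.2816 / math.sqrt(2.0))
n_samples = 20000
pf_hat = mcs_failure_probability(
    lambda x: x - 1.2816, lambda r: r.gauss(0.0, 1.0), n_samples, random.Random(1))

# error bound (28) at 95% confidence (z_{0.975} = 1.96)
eps = 1.96 * math.sqrt(true_pf * (1.0 - true_pf) / n_samples)

# sample size needed for a 10% percentage error at the same confidence, from (29):
# nS = z^2 * (1 - pf) / (eps_tilde^2 * pf), which grows as pf shrinks
n_needed = 1.96 ** 2 * (1.0 - true_pf) / (0.1 ** 2 * true_pf)
print(pf_hat, eps, round(n_needed))
```

Repeating the last computation with pf = 0.1% instead of 10% multiplies `n_needed` by roughly 100, which is the small-failure-probability burden that motivates the variance reduction techniques below.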

Fig. 14. Shift of sampling distribution with importance sampling methods.

From (29) it can be seen that, given the required confidence level and percentage error, nS becomes very large and results in an unaffordable computational burden when pf is very small. To solve this problem, several efficient sampling techniques have been developed, e.g. importance sampling [218–221] and its variant adaptive sampling [222], subset simulation [223], line sampling [224–226], directional simulation [227–229], etc. Among these sampling techniques, importance sampling (IS) is the most frequently applied. The basic idea of IS is to extract more information from the sample points by taking samples in the vicinity of the MPP and within the failure domain D, so the sampling distribution is shifted from the original one to the MPP (the mean value shifted to the MPP) by means of an importance sampling probability density function, as shown in Fig. 14. At the MPP the probability of failure is about 50%. From (29) we can deduce that, given the same percentage estimation error and confidence level, the ratio between the sampling number of the original distribution and that of the shifted distribution is 100 for pf = 1% and 1000 for pf = 0.1%, which clearly indicates that importance sampling approaches can reduce the computational expense efficiently [230].

The aforementioned sampling methods are thoroughly studied and compared in [231], which concludes that MCS with these variance reduction sampling techniques is more accurate and robust than FORM/SORM, and that subset simulation and line sampling are especially advantageous in terms of efficiency and accuracy for high-dimensional problems with multiple failure domains and irregular limit state functions. To further improve reliability analysis efficiency, response surface methodology (RSM) can be utilized to replace the computationally expensive exact function so as to reduce the calculation burden [232–235].

The reliability analysis approaches discussed above are time-independent and concerned with random uncertainties. Reliability analysis with other uncertainty theories has also been studied, e.g. interval analysis [92,236], possibility theory [87,237–239], evidence theory [240], and convex uncertainty [97,241]. For time-variant reliability analysis, readers are referred to [242,243]. Besides methods that determine exact reliability, there are also approaches dealing with reliability bounds [244–247]. The methods introduced so far can be directly applied to single constraint or component reliability analysis. For a complex system consisting of multiple components, constraints, and failure modes, the matrix-based system reliability method [248–250], the Complementary Intersection Method (CIM) [251,252], and other efficient component integration methods [253–255] have been proposed to systematically analyze both component and system reliability. For more detailed reliability analysis methods, readers are referred to the reviews in [256,257].

4.4. Decomposition based uncertainty analysis

For a complex system with closely coupled disciplines, it is extremely time-consuming to run a multidisciplinary analysis (MDA) for one design, as it generally involves several iterations to converge to a consistent system output. Hence uncertainty analysis methods that entail many repeated MDAs, e.g. Monte Carlo methods and FORM/SORM, become computationally prohibitive. To solve this problem, decomposition strategies have been proposed to decompose the uncertainty analysis problem nested with MDA into several discipline or subsystem uncertainty analysis problems, so as to keep each subproblem within an acceptable scale and meanwhile take advantage of distributed parallel computing. In this field, Du and Gu have contributed a lot and proposed several efficient decomposition based uncertainty analysis methods. In [176,258,259], a concurrent subsystem uncertainty analysis method (CSSUA) for uncertainty propagation is proposed. For a system with nD disciplines, the mean and


standard deviation of each subsystem output are first calculated simultaneously with each disciplinary contributing analysis, and are denoted as μ*yi and σ*yi for discipline i. In each disciplinary uncertainty analysis, the mean and standard deviation of the coupled state variables from the other disciplines are passed down from the system level with presumed values, denoted as μyj and σyj (j ≠ i). Then, based on the results of the disciplinary uncertainty analyses, compatibility of the mean and standard deviation of the coupled state variables is achieved by a system level optimization stated as

find:  μyi, σyi  (i = 1, ..., nD)
min:   Σ_{i=1}^{nD} [ ||μyi − μ*yi|| + ||σyi − σ*yi|| ]    (30)

After the system optimization, μyi and σyi are updated and passed down to the disciplinary uncertainty analyses again to obtain new μ*yi and σ*yi. The two steps iterate until convergence is achieved, and the mean and standard deviation of the system outputs are thus obtained. Based on CSSUA, a modified concurrent subsystem uncertainty analysis (MCSSUA) approach is further proposed in [260], which uses the strategy of concurrent subsystem analysis only to obtain the mean values of the coupled state variables, so that the number of design variables in the system level optimization is halved and the analysis efficiency is improved. After the mean values are obtained, the standard deviations of the system outputs can be calculated with the analytical approach SUAM [176] introduced in Section 4.2. Gu and Renaud [261] developed an Implicit Uncertainty Propagation (IUP) method to estimate system output uncertainties within the bilevel collaborative optimization (CO) framework. The variations of both the design variable vector and the contributing analysis tools (bias errors) are considered, and the coupling effect is treated by employing the GSE. The variances of the coupled state variables are estimated with a first-order Taylor series approximation similar to that stated in (19), except that the matrix entries of the GSE in IUP are calculated in each subsystem simultaneously, and are not consistent until convergence is obtained. This is why it is called implicit uncertainty propagation, in contrast to the direct/explicit calculation approach. Based on IUP, a CO procedure accommodating uncertainties has been developed, which is expounded in Section 6.2.1. The IUP approach has also been successfully applied to the simultaneous analysis and design (SAND) procedure with uncertainty [262].
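As a toy illustration of this kind of GSE-based first-order propagation (not the authors' implementation: the two linear "disciplines", their coupling coefficients, and the input standard deviation below are all invented), the coupled sensitivities dy/dx are obtained by inverting the coupling matrix of (19) and then used to propagate the input variance:

```python
# Invented two-discipline linear system:
#   y1 = T1(x, y2) = a1*x + b1*y2
#   y2 = T2(x, y1) = a2*x + b2*y1
# GSE coupling (cf. (19)): [[1, -b1], [-b2, 1]] @ [dy1/dx, dy2/dx]^T = [a1, a2]^T
a1, b1 = 2.0, 0.3
a2, b2 = 1.0, 0.5
sigma_x = 0.1  # assumed standard deviation of the shared input x

det = 1.0 - b1 * b2           # determinant of the 2x2 coupling matrix
dy1_dx = (a1 + b1 * a2) / det  # total (coupled) sensitivities
dy2_dx = (a2 + b2 * a1) / det

# first-order propagated standard deviations, as in (18)
sigma_y1 = abs(dy1_dx) * sigma_x
sigma_y2 = abs(dy2_dx) * sigma_x

# for this linear toy case the first-order result is exact: resolving the
# coupling analytically gives y1 = ((a1 + b1*a2)/(1 - b1*b2)) * x
print(round(sigma_y1, 4), round(sigma_y2, 4))  # 0.2706 0.2353
```

For nonlinear disciplines the partial derivatives would come from finite differences or adjoint analysis, and the result is only a local approximation, which is exactly the limitation of the Taylor-based methods noted in Section 4.2.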
The accuracy of IUP is investigated in [262], and the results show that if the curvature of the disciplinary design space is small, or the inconsistency between subsystems at the current design point is small, IUP provides a reliable estimate of the propagated uncertainty. In other words, the IUP method should not be employed in the early stages of uncertainty design optimization, when the disciplines are far from consistent; it is recommended to use a traditional deterministic formulation in the early iterations until the disciplinary compatibility is good enough, and then solve the UMDO problem with IUP. For MPP based uncertainty analysis, considering that the MPP search procedure is essentially a double loop algorithm, with an MPP search optimization in the outer loop and an MDA iteration procedure in the inner loop, several decomposition based approaches have been proposed to improve MPP search efficiency. Du and Chen [263,264] proposed to utilize CO to organize the MPP search with concurrent operations at the discipline level; tested with FORM, this obtains a five-fold reduction in computational expense [265]. The same idea was pursued by Padmanabhan et al., who employed the Concurrent Subspace Optimization (CSSO) procedure to solve the MPP search optimization problem (so called MPP-CSSO), so as to greatly improve efficiency

with parallelization of disciplinary analysis and optimization [266]. Another idea to address the double loop problem, proposed by Ahn et al., is to decouple the MDA from the MPP search and organize them sequentially as a recursive loop [267]. In this sequential approach to reliability analysis for multidisciplinary systems (SARAM), concurrent subsystem analysis can be applied in the separate MDA based on the GSE to further alleviate the computational burden. For simulation based reliability analysis, a Markov chain Monte Carlo methodology, Gibbs sampling [268], has been utilized to decompose the MDA into disciplinary subproblems: it does not enforce multidisciplinary system consistency at each run, but relies on the sampling process to gradually produce compatibility among the disciplines. Without a consistency constraint on the MDA, each simulated sample requires only the number of disciplines times one disciplinary analysis, which greatly reduces the calculation cost compared to traditional sampling methods that need iterations of disciplinary analyses to obtain a consistent system response at each sample. Tests indicate that this approach yields a nine-fold improvement over traditional MCS in the demonstration case [265]. For more detailed discussion of decomposition based uncertainty analysis, a survey can be found in [265].

5. Optimization under uncertainty

Being one of the most important issues in optimization, optimization under uncertainty has experienced rapid development in both theory and application. The first mathematical formulation was stochastic linear programming, introduced in the middle of the last century to address optimization problems with parameter randomness [4,269]. Later, to deal with real-world optimization problems, which usually involve discrete integer variables and nonlinearity, a surge of programming approaches emerged, e.g. stochastic integer programming [270,271], stochastic nonlinear programming [272,273], robust stochastic programming [274,275], stochastic dynamic programming [276,277], etc. These are generally grouped together and referred to as stochastic programming [278,279]; a wide research bibliography in this field can be found in [6]. To address non-probabilistic uncertainties in optimization, fuzzy programming has also developed quickly [280–282], as well as programming methods for hybrid random and fuzzy uncertainties [283,284]. To unify stochastic programming and fuzzy programming, Liu proposed the term uncertain programming to describe the general part of optimization theory for uncertain systems [284]. Different from the preceding uncertain programming methods, which formulate the optimization constraints as reliability (chance) constraints, there is another type of approach, so called robust optimization, which searches for a robust feasible optimal solution that satisfies the constraints for all possible realizations of the uncertainties in a given uncertainty set [285–287].
To solve the different formulations of optimization under uncertainty introduced above, several search algorithms have been studied, which can be generally categorized into two types: (1) gradient based, such as the Robbins–Monro algorithm, and (2) gradient free, such as finite difference based stochastic approximation, random direction search, genetic algorithms, simulated annealing, etc. [288,289]. Specifically for simulation based optimization, wherein the system output is calculated by complex computer simulation rather than analytical equations, the applicable algorithms are studied in [290,291].
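As a sketch of the gradient-free family, a one-dimensional finite difference based stochastic approximation (Kiefer–Wolfowitz type) iteration on a noisy objective might look as follows; the gain sequences and the test function are arbitrary choices for illustration:

```python
import random

def kiefer_wolfowitz(f_noisy, x0, steps=200, a=0.5, c=0.5):
    """Finite difference based stochastic approximation: estimate the slope of
    a noisy objective with a shrinking central difference, then take a decaying
    descent step (gain sequences a_k = a/k, c_k = c/k**0.25)."""
    x = x0
    for k in range(1, steps + 1):
        ak, ck = a / k, c / k ** 0.25
        slope = (f_noisy(x + ck) - f_noisy(x - ck)) / (2.0 * ck)
        x -= ak * slope
    return x

# noisy quadratic with minimizer at x = 2; the noise level is an arbitrary choice
rng = random.Random(0)
noisy = lambda x: (x - 2.0) ** 2 + 0.01 * rng.gauss(0.0, 1.0)
x_star = kiefer_wolfowitz(noisy, x0=0.0)
print(abs(x_star - 2.0) < 0.2)
```

The same decaying-gain pattern underlies the gradient based Robbins–Monro iteration; the difference is only whether the descent direction comes from a noisy gradient oracle or, as here, from finite differences of noisy function values.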


Details of the mathematical theory and algorithms of optimization under uncertainty are not covered here. The scope of this section is to review two special formulations that are of great interest to both academia and industry, namely reliability-based design optimization and robust design optimization.

5.1. Reliability-based design optimization

In reliability-based design optimization (RBDO), or reliability-based optimization (RBO), the central task is to optimize the objective while maintaining the failure probability of the constraints within an acceptable level. The reliability of each constraint can be analyzed with the reliability analysis methods discussed in Section 4.3. As the complexity of reliability analysis is prohibitive, especially for large systems, there are alternative approximation methods which translate the constraints with uncertainty into quasi-deterministic constraints through simplifications and assumptions, so as to balance computational cost and accuracy. Such methods are also widely used in robust design optimization, where the feasibility of the constraints should be maintained [230]. Some prevalent methods are introduced here.

5.1.1. Worst case analysis method

The worst case analysis method was first proposed by Parkinson et al. [292]; it presumes that all fluctuations may occur simultaneously in the worst possible combination. For the constraint g(x,p) ≤ b, the effect of the variations of x and p on the constraint function is estimated from the first-order Taylor series as

Δg(x,p) = Σ_{i=1}^{n} |(∂g/∂xi)Δxi| + Σ_{i=1}^{m} |(∂g/∂pi)Δpi|    (31)

where Δxi and Δpi are the variation ranges (tolerances) of the ith components of x and p, respectively. The total variation of the constraint is

Δ = Δg(x,p) − Δb    (32)

To maintain the constraint value of the design within the safe region, the constraint is reformulated as

g(x,p) + Δ ≤ b    (33)
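A sketch of the worst case check (31)–(33) with finite-difference derivatives and a deterministic bound b (so Δb = 0 in (32)); the constraint function, tolerances, and bound below are invented for illustration:

```python
def central_diff(f, vec, i, h=1e-6):
    """Central finite difference of f with respect to component i of vec."""
    hi, lo = list(vec), list(vec)
    hi[i] += h
    lo[i] -= h
    return (f(hi) - f(lo)) / (2.0 * h)

def worst_case_margin(g, x, p, dx, dp):
    """Linearized worst case variation (31):
    sum_i |dg/dxi| * dxi  +  sum_i |dg/dpi| * dpi."""
    margin = sum(abs(central_diff(lambda v: g(v, p), x, i)) * dx[i]
                 for i in range(len(x)))
    margin += sum(abs(central_diff(lambda v: g(x, v), p, i)) * dp[i]
                  for i in range(len(p)))
    return margin

# invented constraint g(x, p) = x0^2 + p0*x1 <= b, with invented tolerances
g = lambda x, p: x[0] ** 2 + p[0] * x[1]
x, p = [1.0, 2.0], [0.5]
dx, dp = [0.05, 0.05], [0.1]
delta = worst_case_margin(g, x, p, dx, dp)  # |2*1|*0.05 + |0.5|*0.05 + |2|*0.1 = 0.325
print(g(x, p) + delta <= 3.0)               # tightened check (33): g + delta <= b
```

The tightened inequality is what an optimizer would see in place of the original constraint, which makes concrete how the safe region shrinks by the worst case margin.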

It can also be viewed as the safe region having been reduced to accommodate the worst case variation. For uncertainties defined with statistical characteristics, the second moment (standard deviation) is used to characterize the output performance of the constraint function as

σ_g(x,p) = sqrt{ Σ_{i=1}^{n} (∂g/∂xi)² σ²xi + Σ_{i=1}^{m} (∂g/∂pi)² σ²pi }    (34)

The total statistical standard deviation of the constraint is

σ = sqrt{ σ²_g(x,p) + σ²_b }    (35)

The constraint is reformulated as

g(x,p) + kσ ≤ b    (36)

where k is a constant chosen by the designer that reflects the desired reliability level. For example, k = 3 means that the reliability is 99.87% under normal randomness. This worst case formulation tends to yield conservative solutions, as it is unlikely that the worst cases of the variable or parameter deviations will occur simultaneously. Besides, the accuracy of the Taylor series approximation is also very limited. Nevertheless, it has been applied widely for its simplicity.

5.1.2. Corner space evaluation

The corner space evaluation method, proposed by Sundaresan et al. [293], is quite similar to the worst case analysis method. For the constraint g(x,p) ≤ b, assume the design variable vector x has nominal value xt and tolerance Δx, and the parameter vector p has nominal value pt and tolerance Δp. The tolerance space T is defined as the set of points close to the nominal design point, wherein each point represents a possible combination due to the uncertainties in each variable:

T(xt, pt) = {x: |xt − x| ≤ Δx, p: |pt − p| ≤ Δp}    (37)

Define the corner space W, consisting of the corner vertices of the tolerance space T, as

W(xt, pt) = {x: |xt − x| = Δx, p: |pt − p| = Δp}    (38)

To maintain the design solution within the safe region under any variation, the tolerance space should be kept within the safe region by keeping the corner space always touching the original constraint boundary. The equivalent constraint can then be formulated as

Max{g(x,p), ∀x,p ∈ W(xt, pt)} ≤ b    (39)

This approach is shown in Fig. 15 for a two-dimensional problem. For normally distributed random variables and parameters, the tolerance can be chosen as three standard deviations so as to obtain a reliability of 99.87%. The great advantage of this method compared to the worst case analysis method is that it does not require partial derivative calculations of the constraint function.

5.1.3. Variation patterns formulation

Based on the corner space evaluation method, the Manufacturing Variation Patterns (MVP) method was proposed with consideration of the coupled variation relationships [294], and is given the more general name of variation patterns formulation in [230], as the method is not limited to manufacturing related problems. Define the manufacturing variation pattern under the confidence level (1 − α) as MVP(1−α). The shape of MVP(1−α) is determined by the distribution of the random variables and parameters, and its size is determined by the confidence level. For example, for a problem with two normally distributed dependent variables, the shape of the pattern is an ellipsoid, as shown in Fig. 16, and the equivalent constraint can be stated as

g(x,p) ≤ b,  ∀x,p ∈ MVP(1−α)    (40)

This method is more accurate than the corner space evaluation method, but it is quite complicated if the shape of the pattern is irregular. Similar to the foregoing methods, which convert reliability constraints into equivalent deterministic constraints to maintain

465

Fig. 15. Corner space evaluation method.

466

W. Yao et al. / Progress in Aerospace Sciences 47 (2011) 450–479

Fig. 16. Variation patterns evaluation method.

designs within feasible region, Shan and Wang proposed the concept of reliable design space (RDS) which is identified by reformulation of constraints as boundaries to clearly distinct the safe region, and within RDS the RBDO can be solved as deterministic optimization problems. The constraints are reformulated to maintain the estimated inverse MPP with predefined reliability index located in the safe region, and the inverse MPP is estimated with direction cosine of the constraints at the design variable vector in the standard normal space [295]. The preceding approaches do not accurately calculate the reliability of constraint, but only formulate the constraint with uncertainty into its quasi-deterministic counterpart so as to approximately maintain optimization designs within the feasible region with required reliability. Therefore generally the optimization solutions with these methods should be confirmed with reliability analysis methods in the end to ensure that reliability requirement can be satisfied. If reliability analysis is directly incorporated into optimization to calculate reliability of each constraint at each optimization iteration point, RBDO becomes a double loop optimization problem with optimization in the outer loop and iterative reliability analysis in the inner loop. To improve computational efficiency, several approaches have been proposed to convert this double loop algorithm into a single loop architecture, which will be discussed in detail in Section 6.1 where the single level optimization procedure is expounded. To further alleviate computational burden, approximation methods can be used to either replace the high accuracy simulation model or approximate the limit state function [296–299]. These computational methods to solve RBDO problems are surveyed in detail in [7]. For non-normal probabilistic uncertainties and other types of uncertainties, e.g. evidence theory, possibility theory, etc., readers are referred to [66,300–303]. 
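As a concrete illustration of this double loop structure, the toy problem below nests a Monte Carlo reliability analysis (inner loop) inside a brute-force design search (outer loop). The cost model, constraint, and distribution are assumptions for illustration only, not from any of the cited methods.

```python
import random

random.seed(0)

# Minimal double-loop RBDO sketch on a 1-D toy problem:
#   min  cost(d) = d
#   s.t. P{ g(d, P) <= 0 } >= 0.9,   g(d, P) = P - d,   P ~ N(0, 1)
# Outer loop: search over candidate designs d; inner loop: Monte Carlo
# reliability analysis, re-run at every candidate, which is what makes the
# double-loop procedure so expensive for realistic analysis models.
def reliability(d, n=20000):
    ok = sum(1 for _ in range(n) if random.gauss(0.0, 1.0) - d <= 0.0)
    return ok / n

best = None
for k in range(31):                  # candidate designs d = 0.0, 0.1, ..., 3.0
    d = 0.1 * k
    if reliability(d) >= 0.9:        # inner-loop reliability analysis
        best = d                     # first feasible design is the cheapest
        break
```

The exact optimum is d = Φ⁻¹(0.9) ≈ 1.28; the coarse grid and sampling noise land the search close to that value.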
To sum up, RBDO can effectively improve the reliability of the solution, as demonstrated in [304,305].

5.2. Robust design optimization

Robust Design Optimization (RDO) is the methodology concerned with optimizing the system mean performance while minimizing its sensitivity to uncertainties. It is deeply rooted in robust design and quality engineering, which aim at developing low-cost, high-quality products and processes by making products insensitive or ''robust'' to the effects of natural variations in their production and operational environments. This approach was pioneered by Genichi Taguchi in the 1950s, is also referred to as the Taguchi method, and has been widely used to improve product quality [306]. The kernel of the Taguchi method is design of experiments (DOE), which evaluates different designs to identify the factors that affect product quality and to optimize their nominal levels [307]. The experiment method carried out via computer simulators is studied in [308]. As this

method is DOE based rather than an automated optimization procedure, and the design variables are defined in a discrete space, it is difficult to treat a wide range of design problems in a continuous space with several design constraints. Hence RDO methods based on optimization techniques have been developed. The introduction of the Taguchi method into nonlinear programming was first proposed by Ramakrishnan and Rao [309]. Later researchers argued for the necessity of incorporating constraints in robust design [310], and Parkinson proposed the concept of constraint feasibility under variation in 1993 [292], which is also called feasibility robustness. The optimization problem is then similar to reliability-based robust design optimization (RBRDO), and the mathematical formulation is stated similarly to (4). In this situation, the goal of RDO is to locate a constrained optimum that is insensitive to variations in both the objective function and the constraints [230]. Objective function robustness makes the system performance insensitive to variations of the design variables and parameters, and constraint function robustness ensures that the optimum always lies in the feasible region under uncertainties. Objective robustness is essentially a multi-objective optimization problem, and trade-off methods between the mean and variance of the objective have been widely studied in the literature. Generally used approaches include the weighted sum of the two objectives (aggregate objective function) [275,311], the preference-based physical programming method [312,313], the compromise programming (CP) method [314,315], the Normal-Boundary Intersection (NBI) method [316], and genetic and evolutionary optimization methods [317,318]. Visualization of multi-objective RDO has also been studied in [319]. For constraint robustness, the reliability-based design optimization methods discussed in Section 5.1 can be utilized.
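A minimal sketch of the weighted-sum (aggregate objective) trade-off between mean and standard deviation, with an assumed noisy performance model: sweeping the weight shifts the optimum from the best nominal design toward a less noise-sensitive one. All numbers below are illustrative assumptions.

```python
import random

random.seed(1)

# f(x, P) = (x - 2)^2 + x * P with P ~ N(0, 0.5): the design x = 2 is nominally
# best, but larger x amplifies the noise. Aggregate objective:
#   f_tilde(x) = w * mean(f) + (1 - w) * std(f)
def robust_objective(x, w, n=20000):
    ys = [(x - 2.0) ** 2 + x * random.gauss(0.0, 0.5) for _ in range(n)]
    m = sum(ys) / n
    s = (sum((y - m) ** 2 for y in ys) / (n - 1)) ** 0.5
    return w * m + (1.0 - w) * s

grid = [0.1 * k for k in range(31)]                     # coarse 1-D design grid
x_perf = min(grid, key=lambda x: robust_objective(x, w=1.0))  # pure mean
x_rob = min(grid, key=lambda x: robust_objective(x, w=0.2))   # variance-heavy
```

With pure mean minimization the search settles near x = 2; weighting the standard deviation pulls the optimum toward smaller, less noisy designs (near x = 1 for this model).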
Among the numerous approaches to accommodate objective and constraint robustness, the Design for Six Sigma method has been specifically developed in engineering and widely applied in industry for product quality improvement [320,321]. It is formulated as

find $x$
min $\tilde{f}(x,p) = \mu_f(x,p) + 6\sigma_f(x,p)$
s.t. $\mu_g(x,p) + 6\sigma_g(x,p) \le 0$
$x^L \le x \le x^U$    (41)

Here we only consider inequality constraints; for equality constraints in RDO, readers are referred to [322,323]. This formulation ensures that the system performance within six standard deviations is minimized and that the constraint, with six standard deviations of margin, is maintained within the feasible region. To solve RDO problems, the mean and variance of the objectives and the feasibility robustness of the constraints can be calculated with the uncertainty analysis methods discussed in Section 4. In gradient-based optimization, calculating the sensitivities of the objectives and constraints is a key issue. Considering the computational cost of finite differencing techniques, automatic differentiation can be used [324], or the original objectives and constraints can be approximated by response surface models whose sensitivities can be calculated analytically [325,326]. As realistic optimization problems usually feature multimodal characteristics, and gradient-based optimization approaches need computationally expensive sensitivity calculations, genetic and evolutionary (gradient-free) algorithms are quite popular [327]; the implementation of evolutionary algorithms for RDO is introduced in [149]. Besides the popular probabilistic RDO approaches, RDO with other uncertainty types has also been studied, such as cloud theory [102,328] and Info-Gap Decision Theory [99] for uncertainties with imprecise probabilities. For more detailed numerical implementations of RDO, readers are referred to [7,35,36,329,330].
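The Six Sigma feasibility test of Eq. (41) can be sketched with Monte Carlo moment estimates; the constraint function and the noise model below are assumptions for illustration.

```python
import math, random

random.seed(2)

# Design-for-Six-Sigma feasibility sketch: a candidate design x is accepted
# only if mean(g) + 6 * std(g) <= 0. Here g(x, P) = x + P - 1 with P ~ N(0, 0.1),
# an illustrative constraint, so the 6-sigma margin is about 0.6.
def six_sigma_feasible(x, n=50000):
    ys = [x + random.gauss(0.0, 0.1) - 1.0 for _ in range(n)]
    m = sum(ys) / n
    s = math.sqrt(sum((y - m) ** 2 for y in ys) / (n - 1))
    return m + 6.0 * s <= 0.0

ok_design = six_sigma_feasible(0.3)    # mean g = -0.7, 6*sigma = 0.6 -> feasible
bad_design = six_sigma_feasible(0.5)   # mean g = -0.5, 6*sigma = 0.6 -> infeasible
```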


6. UMDO procedure

The UMDO procedure is the organization of all the elements involved in uncertainty-based design optimization, such as system optimization, system analysis, disciplinary analysis, and uncertainty analysis, to name a few. How to efficiently arrange these elements into an execution sequence and implement it on a computer or a distributed parallel network is the key to realizing UMDO of large systems, as the complex coupled disciplinary relationships and the computationally intensive system analysis models make UMDO very time consuming and computationally prohibitive. To illustrate the computational burden of UMDO considering the coupling relationships, the general uncertainty optimization formulation (4) is reformulated as

find $x$
min $\tilde{f}(x,p,y) = F(\mu_f(x,p,y), \sigma_f(x,p,y))$
s.t. $P\{g_i(x,p,y) \le 0\} \ge R_i, \quad i = 1,2,\ldots,n_g$
$x^L \le x \le x^U$    (42)

where y denotes the intermediate state variables of the disciplinary analyses. Denote the output vector of discipline i as $y_i$, the coupling state vector output from discipline i and input into discipline j as $y_{ij}$, the complete set of output vectors from discipline i coupled with other disciplines as $y_{i\cdot}$, and the complete set of coupling state vectors input into discipline i as $y_{\cdot i}$. Then $y = \{y_i,\ i = 1,2,\ldots,n_D\}$, $y_{\cdot i} = \{y_{ji},\ j = 1,2,\ldots,n_D,\ j \ne i\}$, $y_{i\cdot} = y_{i\cdot}(x_i, p, y_{\cdot i})$, and $y_{i\cdot} = \{y_{ij},\ j = 1,2,\ldots,n_D,\ j \ne i\}$. $x_i$ is the local design variable vector of discipline i, and p is the system parameter vector. $y_{i\cdot}$ is a subset of $y_i$, but for convenience of expression and without loss of generality, we assume that $y_{i\cdot}$ is equal to $y_i$ and denote both as $y_i$ in the following discussion. For a three-discipline system, the coupling relationship with uncertainties is shown in Fig. 17. The conventional approach to solving this UMDO problem is to employ a double-loop strategy, as shown in Fig. 18. In the outer loop, the optimization algorithm executes the optimum search. At every iteration point it calls the inner-loop uncertainty analysis to evaluate the design and its uncertainty characteristics, which involves either a large number of sampling simulations based on MCS methods or an optimization (usually referred to as the lower level or inner loop optimization in this double loop procedure) based on FORM or SORM methods. If the optimization needs N iterations to converge to the optimum and M runs of multidisciplinary analysis (MDA) to analyze the uncertainty characteristics at each iteration point, the total UMDO computation is on the scale of N × M times the cost of a single MDA. Keep in mind that iterations are also needed within each MDA to obtain a consistent output, due to the coupling relationships of the disciplines.
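The consistency iteration inside each MDA can be sketched with two assumed linear disciplinary models; every uncertainty-analysis sample has to repeat such a fixed-point loop, which is the root of the N × M cost estimate above.

```python
# Fixed-point MDA sketch for two coupled disciplines (illustrative linear models):
#   discipline 1: y12 = x + 0.5 * y21
#   discipline 2: y21 = 2.0 - 0.25 * y12
# The loop sweeps the disciplines until the coupling variables stop changing,
# i.e. until a consistent (y12, y21) is reached.
def mda(x, tol=1e-10, max_iter=200):
    y12, y21 = 0.0, 0.0
    for _ in range(max_iter):
        y12_new = x + 0.5 * y21          # discipline 1 analysis
        y21_new = 2.0 - 0.25 * y12_new   # discipline 2 analysis
        if abs(y12_new - y12) < tol and abs(y21_new - y21) < tol:
            return y12_new, y21_new
        y12, y21 = y12_new, y21_new
    return y12, y21

y12, y21 = mda(1.0)   # converges to y12 = 16/9, y21 = 14/9 for this model
```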
From this point of view, it is clear that the conventional UMDO procedure is very computationally expensive, especially for aerospace vehicle design, whose analysis or simulation tools are extremely time consuming, e.g. FEA for structural dynamic analysis and CFD

Fig. 18. The conventional double-loop UMDO procedure [349].

(Computational Fluid Dynamics) for aerodynamic analysis, which may take up to several days to run a single simulation of a full-scale airplane with high fidelity models. To address this problem, sensitivity analysis methods have been used to reduce the number of design variables and uncertain parameters, so as to keep the UMDO problem at an acceptable scale [331,332]. Approximation methods have also been widely used to replace high fidelity disciplinary models with metamodels and to replace reliability constraints with simple approximation functions. However, it has been found that for a large number of factors (if more than 10 factors remain after screening), the computational expense of creating response surface approximations can easily begin to outweigh the associated gains [332]. With progress in computer technology, advanced parallel computing tools can be utilized to realize UMDO on distributed computer networks so as to reduce computation time. For example, the MDA of sampling points for sensitivity analysis, approximation modeling, or MCS uncertainty analysis can be executed in parallel [331,333]. Another promising alternative is to solve the problem from the inside, i.e. to reformulate the organization of UMDO, including MDA, disciplinary analysis, uncertainty analysis, etc., so as to implement the whole UMDO procedure more efficiently. Much research has been devoted to this field and several approaches have been proposed, which can generally be classified into the following two categories: (1) Single level procedures. The optimization loop and uncertainty analysis loop are either decoupled and executed sequentially or merged into a single equivalent deterministic optimization problem. In the resulting equivalent deterministic MDO, existing MDO methods can be utilized directly to enhance efficiency. (2) Decomposition and coordination based procedures.
It is expected that decomposition and coordination based procedures can bring computational efficiency to UMDO as they do for deterministic MDO problems [2]. Hence, the existing decomposition based procedures for deterministic MDO, such as IDF (individual discipline feasible), BLISS (bi-level integrated system synthesis), CSSO (concurrent subspace optimization), CO (collaborative optimization), and ATC (analytical target cascading), can be used as references to decompose the nested optimization and uncertainty analysis problem of UMDO into several disciplinary or subsystem level uncertainty optimization problems, so that each sub-problem is of manageable size. Besides, this decomposed formulation can better take advantage of distributed computation, which can further alleviate the computation time problem. These two categories are discussed in detail in the rest of this section.

6.1. Single level procedure

Fig. 17. The coupling relationship of a three-discipline UMDO problem.

Considering the computational burden attributed to the conventional double loop procedure, several approaches have been proposed to merge or decouple this nested formulation into a single level procedure, also called single level approach (SLA). To merge the double loop optimization into one single level problem, Agarwal et al. [334] proposed to replace the lower-level


reliability analysis with the FORM PMA method by its corresponding first-order necessary Karush–Kuhn–Tucker (KKT) optimality conditions, and to impose these KKT conditions on the outer optimization loop, so as to eliminate the inner loop uncertainty analysis while still satisfying the reliability requirements. Assume that the hard constraints can be identified and that reliability analysis is needed only for them. The merged optimization problem can be formulated as

find $x, u_1, \ldots, u_{N_{hard}}$
min $\tilde{f}(x,p,y) = F(\mu_f(x,p,y), \sigma_f(x,p,y))$
s.t. $G_i(u_i, \gamma) \le 0, \quad i = 1,\ldots,N_{hard}$
$h1_i \equiv \|u_i\| \cdot \|\nabla_u G_i(u_i,\gamma)\| + u_i^T \nabla_u G_i(u_i,\gamma) = 0, \quad i = 1,\ldots,N_{hard}$
$h2_i \equiv \|u_i\| - \beta_i = 0, \quad i = 1,\ldots,N_{hard}$
$g_i(x,p,y) \le 0, \quad i = 1,\ldots,N_{soft}$
$x^L \le x \le x^U$    (43)

where $\beta_i$ is the reliability index corresponding to the reliability requirement of the ith constraint, $G_i(u_i,\gamma)$ is the limit state function of the ith hard constraint in the standard normal space, $\gamma$ is the transformation of p into the standard normal space U, $u_i$ is the inverse MPP of the ith hard constraint, and $g_i(\cdot)$ is a soft constraint, which remains in the original X space. The optimization search space is augmented to consist of both the initial design variables and the inverse MPPs of all the hard constraints. For this reason, this approach is also called the single-level-double-vector (SLDV) method, in contrast to the initial double-loop-single-vector (DLSV) approach [335]. For multidisciplinary coupled problems, the GSE is used to implicitly calculate the gradients of the limit state functions. This formulation is mathematically equivalent to the initial double loop optimization problem, provided that the KKT constraint qualifications are satisfied. But the major problem herein is that the number of design variables is greatly increased if the number of hard constraints is large, i.e.
for an original optimization with $n_X$ design variables, the augmented number of design variables becomes $n_X \cdot (1 + N_{hard})$, which may increase the optimization computational burden substantially and outweigh the gains of merging the double loop optimization into a single one. Furthermore, enforcing a large number of equality constraints on the outer loop optimization may also lead to poor numerical stability and convergence behavior. Besides, the KKT conditions are derived from the PMA first-order reliability analysis algorithm, whose accuracy is doubtful for highly nonlinear uncertainty problems. Chen et al. [335] proposed another method, the single-loop-single-vector (SLSV) algorithm, to approximately locate the MPP of each active constraint with the gradients of the limit state function and the desired safety factor, so that the inner MPP searching loop can be eliminated and the approximate MPPs can be directly embedded in the outer loop optimization as equivalent deterministic constraints. The outer loop optimization problem (42) can be reformulated as

find $x$
min $\tilde{f}(x,p,y) = F(\mu_f(x,p,y), \sigma_f(x,p,y))$
s.t. $G_i(z_i^{(k)}, \gamma) \le 0, \quad i = 1,\ldots,N_{hard}$
$z_i^{(k)} = \mu_z^{(k)} + \beta_i \alpha_i^{*(k-1)}, \quad i = 1,\ldots,N_{hard}$
$\mu_z^{(k)} = x/\sigma_x$
$\alpha_i^{*(k-1)} = \nabla_z G_i(z_i^{(k-1)}, \gamma) / \|\nabla_z G_i(z_i^{(k-1)}, \gamma)\|, \quad i = 1,\ldots,N_{hard}$
$g_i(x,p,y) \le 0, \quad i = 1,\ldots,N_{soft}$
$x^L \le x \le x^U$    (44)
where $z_i^{(k)}$ denotes the approximate MPP of the ith hard constraint in the kth optimization iteration, and $\alpha_i^*$ is the vector of direction cosines of the ith hard constraint at $z_i$. Note that the limit state function $G_i$, the MPP vector $z_i$, and the transformed system parameters $\gamma$ are in the uncorrelated normalized space. In this algorithm, the direction cosines of the hard constraint at the previous approximate MPP are used to approximately locate the MPP in the current optimization run; over the iterations, $z_i$ converges to the accurate MPP of the ith hard constraint and the optimum design is obtained. Compared with SLDV, this method is simpler, as the design vector remains the same as in the original optimization problem. However, numerical demonstrations indicated that the method may be unstable when the uncertainties feature constant coefficients of variation (COV), and accuracy is also moderately compromised in exchange for the improved computational efficiency. The method is further extended to non-normally distributed random variables with the normal tail transformation method in [300]. The idea of SLSV is also employed to address the design variable augmentation problem of the SLDV approach, with the KKT conditions reformulated to allow the MPP vector to gradually converge to the accurate MPP over the iterations [336]. The other type of SLA decouples the inner uncertainty analysis and the outer optimization into sequential cycles of uncertainty analysis and deterministic MDO. In each cycle, the reliability constraints are converted into equivalent deterministic constraints based on the uncertainty analysis and then used in a separate deterministic MDO to guide the optimum search towards the feasible region that meets the reliability requirements. The uncertainty analysis and the deterministic optimization are arranged sequentially and executed alternately, finally converging, after several cycles, to an optimum that complies with the reliability requirements.
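The single-loop MPP update of Eq. (44) can be sketched in isolation by holding the design fixed and iterating the approximate-MPP formula; the limit state function below is an illustrative assumption. For this function the iterate converges to the point at distance β from the mean along the normalized gradient direction.

```python
import math

# SLSV-style MPP approximation sketch (design held fixed for illustration):
#   z(k) = mu_z + beta * alpha(k-1),  alpha = grad G / ||grad G|| at z(k-1).
# G(z) = z1 + 0.1 * z2^2 - 3 is an assumed, mildly nonlinear limit state.
def grad_G(z, h=1e-6):
    def G(zz):
        return zz[0] + 0.1 * zz[1] ** 2 - 3.0
    g0 = G(z)
    return [(G([z[0] + h, z[1]]) - g0) / h,
            (G([z[0], z[1] + h]) - g0) / h]

mu_z, beta = [0.0, 0.0], 2.0
z = list(mu_z)
for _ in range(50):                     # fixed-point iteration on the MPP estimate
    g = grad_G(z)
    ng = math.sqrt(g[0] ** 2 + g[1] ** 2)
    z = [mu_z[i] + beta * g[i] / ng for i in range(2)]
```

For this G the gradient at the converged point is essentially (1, 0), so z settles at (2, 0), i.e. exactly β away from the mean.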
The key issue herein is how to convert a reliability constraint into an equivalent deterministic one that satisfies the reliability requirement. Sues and Cesare proposed to first search the MPPs in a reliability analysis at the initial design and to approximate each limit state function at its MPP with a first-order linearized model, which is then used as an equivalent deterministic constraint in the subsequent MDO [337]. When the optimum of the MDO is reached, the reliability analysis is conducted again at the optimum to update the MPPs and the corresponding approximate models of the limit state functions, which are fed into the MDO of the next cycle. The reliability analysis and deterministic MDO steps are iterated until convergence. The linearization of the limit state functions is easy to implement and cheap to evaluate in the MDO, but its accuracy is limited. Wu et al. [338] developed a safety-factor based approach to convert the reliability constraints into equivalent deterministic constraints with safety factors. In the uncertainty analysis, the MPP $x_i^*$ of the random parameters (in the X space) and the design shift factor $s_i$ of the ith hard constraint are defined by

$P\{g_i(x_i^*, d) + s_i \le 0\} = R_i$    (45)

where d is the deterministic design variable vector, $R_i$ is the required reliability of constraint i, and $s_i$ is the constant that shifts the current constraint (without shape change) to exactly meet the target reliability. Given d, the MPP $x_i^*$ can be located with existing MPP search approaches, and $s_i$ can then be calculated from (45). The active reliability constraint i is then converted to $g_i(x_i^*, d) + s_i \le 0$ and used in the deterministic optimization, where the design variable vector d is updated by the optimization algorithm and fed into the uncertainty analysis of the next cycle to update the MPPs and the corresponding design shift factors. These two steps are repeated alternately until they converge to the optimum and the accurate MPPs. This method


first identifies the active constraints in each cycle, and uncertainty analysis is performed only for these constraints, which saves considerable computational cost. Du and Chen [339] proposed a sequential optimization and reliability assessment (SORA) method, which formulates the deterministic constraint by shifting the inverse MPP at the current design at least onto the deterministic constraint boundary, ensuring that the constraint feasibility in the next deterministic optimization satisfies the required reliability. Its schematic flowchart is shown in Fig. 19, and the graphical illustration of shifting the constraint boundary is shown in Fig. 20. The deterministic optimization of the kth cycle is formulated as

find $x$
min $\tilde{f}(x,p,y) = F(\mu_f(x,p,y), \sigma_f(x,p,y))$
s.t. $g_i((x - s_i^{(k)}), p_{iMPP}^{(k-1)}, y) \le 0, \quad i = 1,2,\ldots,n_g$
$s_i^{(k)} = \mu_x^{(k-1)} - x_{iMPP}^{(k-1)}$
$x^L \le x \le x^U$    (46)

When the optimum x*, with mean value $\mu_x^{(k)}$, of this kth optimization is reached, reliability analysis is conducted at the optimum, and the corresponding inverse MPP $(x_{iMPP}^{(k)}, p_{iMPP}^{(k)})$ and shift vector $s_i^{(k+1)}$ of the ith constraint for the next cycle are calculated and used to reformulate the deterministic optimization of the next cycle. The computational efficiency of SORA has been verified with numerical examples in comparison with the conventional double loop procedure, but convergence cannot be guaranteed for highly nonlinear systems with large numbers of design and random variables. As the shift vector in each deterministic optimization is based on the MPP of the preceding cycle, the MPP estimated with the shift vector in the current optimization may be inaccurate as the design variables change, which may lead to pseudo-optimum solutions.
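The SORA cycle of alternating deterministic optimization and inverse-MPP updates can be sketched on a toy problem with two standard normal parameters; all models below are assumptions for illustration, not from the cited applications.

```python
import math

# SORA sketch:  max d  s.t.  P{ g(d, U) <= 0 } >= Phi(beta_t),
# with g = U1 + d * U2 - 2, U1 and U2 standard normal, beta_t = 1.
# The exact optimum is d = sqrt(3), where beta(d) = 2 / sqrt(1 + d^2) = beta_t.
beta_t = 1.0
d = 1.0                                   # initial design
for _ in range(30):
    # reliability analysis step: inverse MPP on the beta_t sphere along the
    # constraint gradient (1, d)
    norm = math.sqrt(1.0 + d * d)
    u1, u2 = beta_t / norm, beta_t * d / norm
    # deterministic optimization step: largest d with g(d, u1, u2) <= 0,
    # solved in closed form for this toy constraint
    d = (2.0 - u1) / u2
```

The cycle converges rapidly here: d goes 1.0 → 1.83 → 1.733 → 1.7321, reaching the exact optimum within a few iterations.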
To address this problem, Agarwal and Renaud proposed to approximate the MPP during the deterministic optimization by a first-order Taylor series expansion about the preceding MPP, using its post-optimal sensitivities with respect to the design variables [340]. Based on the SORA framework, the Mixed Variables (random and fuzzy variables) Multidisciplinary Design Optimization (MVMDO) [341] and the Random/Fuzzy Continuous/Discrete Variables Multidisciplinary Design Optimization (RFCDV-MDO) [342] procedures have been further developed; they utilize existing discrete-continuous optimization approaches in the deterministic optimization, and the deterministic MDO procedure IDF for each MDA in the uncertainty analysis. SORA has been successfully applied to a reusable launch vehicle design, which integrates the system design for minimum weight with aerodynamic constraints and the component design of a liquid-hydrogen tank into a reliability-based MDO problem. It is observed that the decoupling approach of SORA can significantly reduce the computational burden of the UMDO problem, and it can be made even more efficient if the active constraints are identified during the iterations [343]. Royset et al. [344] proposed to reformulate a reliability constraint with a general reliability index as an equivalent


deterministic constraint by requiring all values of the limit state function (in the standard normal space) within a ball of specified radius, defined by the required reliability index, to remain in the safe region. For example, the constraint $\beta_t \le \beta(x)$ with safe region denoted by $g(u,x) \le 0$ can be reformulated as

$c_r(x) \le 0, \quad c_r(x) = \max\{g(u,x) \mid u \in B_r\}, \quad B_r = \{u \in R^m \mid \|u\| \le r\}$    (47)

where u is the random parameter vector in the standard normal space and x is the deterministic design variable vector. In the deterministic optimization, the reliability index of an affine limit state function of the form $g(u,x) = 1 + c(x)^T u$ in the standard normal space is approximately calculated as

$\beta(x) = \dfrac{r \, g(0,x)}{g(0,x) - c_r(x)}$    (48)
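A numeric check of the affine-case relation behind Eq. (48), as reconstructed here: with an assumed affine limit state (safe region g ≤ 0), the ball maximum $c_r$ and the formula recover the exact reliability index regardless of the chosen radius r.

```python
import math

# For g(u, x) = a + c^T u with safe region g <= 0 (so a < 0 at a safe design),
# c_r(x) = max{ g(u, x) : ||u|| <= r } and
#   beta(x) = r * g(0, x) / (g(0, x) - c_r(x))
# should recover the exact index -a / ||c||. Numbers below are assumptions.
a, c, r = -2.0, (1.0, 2.0), 0.7

def g(u):
    return a + c[0] * u[0] + c[1] * u[1]

# maximize g over the ball boundary with a fine angular grid (the maximum of an
# affine function over a disc is attained on the boundary)
c_r = max(g((r * math.cos(t), r * math.sin(t)))
          for t in (2.0 * math.pi * k / 100000 for k in range(100000)))

beta = r * g((0.0, 0.0)) / (g((0.0, 0.0)) - c_r)
beta_exact = -a / math.sqrt(c[0] ** 2 + c[1] ** 2)
```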

After the deterministic optimization, reliability analysis is conducted to compute the actual reliability index at the current optimum, which is used to update the radius of the ball in the constraint via (48) for the deterministic optimization of the next cycle. This method is equivalent to the original optimization problem and yields an accurate solution when the limit state functions are affine in the standard normal space, but it is only correct to first-order approximation for nonaffine limit state functions. Zou and Mahadevan [345] proposed to use first-order Taylor series approximations of the reliability index or failure probability, with respect to the means of the design variables at the current optimum, for all potentially active reliability constraints, as the deterministic constraints of the next cycle:

$p_f^k + \sum_{i=1}^{n} \left.\dfrac{\partial p_f}{\partial \mu_i}\right|_{\mu_i^{k*}} (\mu_i^{k+1} - \mu_i^{k*}) \le p_f^t$

$\beta_f^k + \sum_{i=1}^{n} \left.\dfrac{\partial \beta_f}{\partial \mu_i}\right|_{\mu_i^{k*}} (\mu_i^{k+1} - \mu_i^{k*}) \ge \beta_f^t$    (49)

where $p_f^k$ and $\beta_f^k$ are the failure probability and reliability index, respectively, at the optimum $\mu_i^{k*}$ of the deterministic optimization of the kth cycle, $p_f^t$ and $\beta_f^t$ are the predefined target failure probability and reliability index, and $\mu_i^{k+1}$ is the mean value of the ith component of the design variable vector in the (k+1)th optimization. In the separate reliability analysis, $p_f^k$ and $\beta_f^k$ can be calculated flexibly with any existing reliability analysis method. If FORM, SORM, or MCS approaches are used, the sensitivities of the reliability index and failure probability with respect to the design variables can be calculated analytically as a by-product of the reliability analysis, without additional function evaluations. To further improve efficiency, the worst case failure probability estimated with (49) is used to identify the potentially active reliability constraints, and reliability analysis is conducted only for those constraints. This method is more flexible than the aforementioned approaches, as the formulation of the deterministic constraints is not confined to a specific reliability analysis approach

Fig. 19. SORA procedure [343].



Fig. 20. Graphical illustration of shifting constraint boundary in SORA [339].

such as an MPP based method. However, the first-order approximation accuracy is limited when the difference between the current design variables and the optimum of the previous cycle is large, which may affect convergence. For the separate deterministic MDO decoupled from the uncertainty analysis, any MDO problem formulation and any existing MDO approach can be used, such as AAO (all-at-once) [346,347], SAND [348], IDF, MDF [23,347,349], and BLISS [350], so as to enhance the deterministic optimization efficiency. The preceding SLA approaches, including SLSV, SFA and SORA, have been tested and compared with the conventional double loop approach, and the results validate their computational efficiency [351,352]. SLSV appears most promising in terms of stable convergence and low computation cost among these approaches, but it does not assess the accurate reliability of each design, which should be confirmed for the optimum in the end. SFA and SORA also have good convergence capability, and their computational burden for reliability assessment in each cycle can be further reduced by identifying the active constraints.

6.2. Decomposition and coordination based procedure

In deterministic MDO, decomposition and coordination based procedures have been widely studied, and their efficiency has been verified by both numerical and engineering demonstrations. This provides a good reference for UMDO: decompose the large, computationally prohibitive problem into several manageable uncertainty optimization sub-problems, which are coordinated by some strategy to achieve a consistent optimum.

6.2.1. CO-based UMDO procedure

Collaborative Optimization (CO) is a bilevel optimization procedure developed specifically for large scale distributed analysis applications [353].
To decompose the coupled disciplines and execute the disciplinary design optimizations concurrently, auxiliary design variables are introduced as additional design variables representing the coupling state variables. In the system level optimization, the shared design variables and the auxiliary design variables are optimized to minimize the objective subject to disciplinary compatibility constraints, which enforce, through equality constraints, that the auxiliary design variables equal the real coupling state variables passed up from the disciplinary analyses. In each subsystem optimization, the local design variables are optimized to minimize the discrepancy between the subsystem outputs and the auxiliary variable values passed down from the system level, while satisfying the local

disciplinary constraints. After iterations, the optimization converges to a consistent system optimum design. McAllister et al. first proposed to adapt CO to accommodate uncertainty in solving UMDO problems [354]. In their CO framework, both the system and subsystem level optimizations are formulated as multi-objective mathematical programs, so-called compromise Decision Support Problems (DSP) [355]. In the system level DSP, both the mean and the variance (robustness) of the objective are minimized subject to subsystem compatibility constraints. In the subsystem level DSP, minimizing the discrepancy between the targets specified by the system level and the local output values is the optimization goal of first priority. Once this objective is achieved, local objectives of secondary priority, which the subsystem designers may care about but which are not dealt with at the system level, can be optimized; this is enabled by the multi-objective nature of the compromise DSP. The uncertainty analysis for the DSPs at both levels employs the first-order Taylor expansion method for mean and variance estimation, based on the assumptions that the variations of the uncertainties are small and that the sources of uncertainty are independent. In the subsystem level DSP, the reliability constraints are approximately converted to equivalent deterministic constraints with the worst case analysis method introduced in Section 5.1, which only accounts for uncertainties of the local design and shared variables. The approach has been applied to an internal combustion engine multidisciplinary robust design optimization problem [354], and the results show that significant iteration is needed to achieve system-level compatibility, attributable to the equality constraints in the system level optimization; this would be worse for large scale problems with more shared variables.
Gu and Renaud [261] proposed a robust collaborative optimization (RCO) framework based on an implicit method for estimating system performance uncertainties with uncertainty propagation through coupling disciplines. To address the convergence problem caused by the compatibility equality constraints imposed on the system-level optimization, an improved CO framework proposed by DeMiguel and Murray [356] is utilized to formulate the system-level and sublevel optimization problems. In the system level, the mean of the original objective (the robust objective is not considered) is optimized with subsystem compatibility constraints, and the implicit uncertainty propagation (IUP) is analyzed based on the uncertainty analysis method for multidisciplinary systems proposed by Gu et al. [174] and stated in (19), which is reformulated for the CO framework as

$$
\left\{\begin{array}{c}
\Delta(x_{aux})_1 \\ \Delta(x_{aux})_2 \\ \vdots \\ \Delta(x_{aux})_{n_D}
\end{array}\right\}
=
\left\{\begin{array}{c}
\frac{dy_1}{dx} \\ \frac{dy_2}{dx} \\ \vdots \\ \frac{dy_{n_D}}{dx}
\end{array}\right\}\Delta x
+
\left[\begin{array}{cccc}
I_1 & -\frac{\partial T_1}{\partial(x_{aux})_{21}} & \cdots & -\frac{\partial T_1}{\partial(x_{aux})_{n_D 1}} \\
-\frac{\partial T_2}{\partial(x_{aux})_{12}} & I_2 & & \vdots \\
\vdots & & \ddots & \\
-\frac{\partial T_{n_D}}{\partial(x_{aux})_{1 n_D}} & \cdots & & I_{n_D}
\end{array}\right]^{-1}
\left\{\begin{array}{c}
\Delta T_1 \\ \Delta T_2 \\ \vdots \\ \Delta T_{n_D}
\end{array}\right\}
\qquad (50)
$$

where $x_{aux}$ represents the coupling state variables, which are treated as design variables in CO, and $(x_{aux})_{ij}$ denotes the coupling state variable vector output from discipline i and input into discipline j. Here both the errors of the design variable vector x (with variation $\Delta x$) and of the contributing analysis tools $T_i$ (with bias errors $\Delta T_i$) are considered, and the coupling effect is treated by employing the GSE. The derivatives $\partial T_i/\partial(x_{aux})_{ji}$ are calculated within each subsystem with the corresponding analysis tool $T_i$, and fed


upward to the system level. After implicit uncertainty analysis at the system level, the variation characteristics of the auxiliary variables are obtained and passed down to the subsystems, where they are used along with the local design variable uncertainties and simulation model uncertainties to analyze the uncertainty characteristics of the subsystem outputs. The block diagram of the RCO procedure is shown in Fig. 21, which takes a three-discipline UMDO problem for illustration. As CO does not enforce disciplinary consistency explicitly, it is notable that the sensitivities calculated from the subsystems are not consistent until the system-level optimizer has satisfied the compatibility constraints, which may lead to inaccuracy of the uncertainty analysis and further influence optimization convergence. Both of the preceding CO-based UMDO procedures adopt the first-order worst case uncertainty analysis method to deal with uncertainty propagation and reliability analysis, whose accuracy still needs improvement.

6.2.2. CSSO-based UMDO procedure

Concurrent Subspace Optimization (CSSO) is a bilevel optimization procedure which decomposes the coupled multidisciplinary design optimization problem into several independent subspace problems. These subspace optimizations are performed concurrently by operating on the corresponding local design variables and approximating non-local coupled state variables with the GSE or approximation models. The subspace optimization results are then coordinated by a certain coordination strategy to obtain system compatibility. CSSO enables each disciplinary designer to design and optimize with their own tools independently, which complies with the organizational features of industry, so it has been widely studied and improved for its organizational flexibility and computational efficiency [357,358].
Based on the CSSO procedure, UMDO problems can be decomposed into several local (contributing analysis level) uncertainty-based optimization subproblems which can be organized concurrently. Padmanabhan and Batill [359] proposed to realize reliability-based MDO in the CSSO procedure. In this procedure, a system analysis and a reliability analysis (with the FORM method) are first conducted to obtain the outputs of the objectives, intermediate state variables, and reliability constraints, as well as their sensitivities with respect to the deterministic and random variables at the initial design point. Based on these outputs and sensitivities, first-order Taylor series approximation models are employed to build the metamodels used in the subspace optimizations for estimation of non-local state variables and reliability constraints. Then the subspace optimizations (SSO) are executed concurrently,


and the optimization results are further coordinated in the Coordination Procedure (CP) with approximation models built from the data obtained during the optimization iterations within the SSOs. The design solution is updated after the CP and fed into the system analysis and reliability analysis of the next cycle, and the aforementioned steps are repeated until convergence is achieved. Yao et al. [360] also proposed to integrate uncertainty analysis into CSSO so as to account for MDO problems under uncertainty, and have successfully applied it to solve a small satellite conceptual system design problem [361]. In this procedure, sensitivity analysis is first conducted to screen out the uncertain design variables and uncertain system parameters which have negligible influence on the design. Then approximation models of the objectives and coupled state variables are built with design of experiment techniques. In the SSOs, the mean and standard deviation of the objective are optimized subject to local reliability constraints, and any of the existing uncertainty analysis methods mentioned in Section 4 can be used to quantify uncertainty propagation within the subspace. In each SSO, only the accurate analysis tool of the local subspace is used and non-local state variables are estimated with approximation models, so the computational burden can be balanced. In the coordination procedure, all the design variables are optimized consistently with approximation models. Then the approximation models are updated at the new optimum to be used in the SSOs and CP of the next cycle. The preceding steps are repeated until convergence is achieved, and the reliability of the optimum is confirmed with the Monte Carlo uncertainty analysis method. This procedure is very flexible in the selection of uncertainty analysis approaches in the system- and subsystem-level optimizations, but it ignores the cross propagation effect of uncertainties in the local SSOs, which results in a loss of accuracy.

6.2.3. ATC-based UMDO procedure

Analytical target cascading (ATC) is a very promising procedure for organizing MDO of hierarchical systems [362]. In this procedure, each element receives an optimization target from its parent and cascades sub-targets down to its children. Meanwhile, each element passes its response up to its parent, so that the parent can adjust new targets according to the response. After iterations of targets cascading down from top to bottom and responses passing up from bottom to top, the design can converge to a consistent solution. The probabilistic version of ATC was first proposed by Kokkolaras et al. [363] to solve MDO problems of hierarchical systems with uncertainties. For the hierarchical system shown in Fig. 22, assuming that initial uncertainty information is available at the

Fig. 21. Block diagram of RCO [261].
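As a toy illustration of the cross propagation in Eq. (50), the following sketch (two illustrative linear disciplines, not the RCO test problem of [261]) inverts the coupling matrix to propagate tool bias errors and a design variable perturbation through the coupling:

```python
def solve2(M, b):
    """Solve a 2x2 linear system M z = b via Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(b[0] * M[1][1] - M[0][1] * b[1]) / det,
            (M[0][0] * b[1] - b[0] * M[1][0]) / det]

# Two illustrative linear disciplines:
#   y1 = T1(x, y2) = 2*x + 0.3*y2,   y2 = T2(x, y1) = -x + 0.5*y1
dT1_dy2, dT2_dy1 = 0.3, 0.5

# Coupling matrix of Eq. (50): identity minus the cross derivatives
M = [[1.0, -dT1_dy2],
     [-dT2_dy1, 1.0]]

# Variations of the coupling variables caused by tool bias errors
dT = [0.1, -0.05]
dy_aux_bias = solve2(M, dT)        # approx [0.1, 0.0]

# Total derivatives dy/dx solve the same system with partials dTi/dx
dy_dx = solve2(M, [2.0, -1.0])     # approx [2.0, 0.0]

# Full first-order variation for a design perturbation dx = 0.01
dx = 0.01
dy_aux = [d * dx + b for d, b in zip(dy_dx, dy_aux_bias)]
```

The same structure scales to $n_D$ disciplines, with the inverse replaced by a linear solve of the GSE system.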



Fig. 22. The hierarchical system in ATC procedure.

bottom level of the hierarchy, a bottom-to-top coordination strategy is used to propagate uncertainty. The advanced mean value (AMV) method [364] is used to efficiently generate the CDF of each element response with the propagated random variables in the uncertainty-based optimization of each element, and the mean and standard deviation of each element response are passed upwards to its parent. The elementary optimization problems are solved upwards level by level until the top of the hierarchy is reached. Once the top-level optimization problem is finished, new targets are cascaded downwards level by level from the top to the bottom. With updated parameters, elementary optimizations are conducted from bottom to top again. The preceding steps are iterated until convergence is achieved. In this approach, since only the mean values of uncertainties are matched between different levels, the convergence efficiency to achieve a consistent optimum solution is low. To solve this problem, it has been proposed to also match the standard deviation [365] and covariance [366] between interrelated responses and linking variables. It is demonstrated that the improved approach can obtain almost the same result as the Probabilistic All-In-One (AIO) optimization method when the first two moments sufficiently describe the probabilistic characteristics of the random variables. Since each subproblem in each element of the hierarchy is linked only to the subproblems directly above and below it, extensive links among all the subsystems are not required, so the data relationships can be simplified, which makes ATC very promising for application to UMDO problems of large-scale hierarchical systems.
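The target-cascading and response-passing loop can be sketched in a deliberately simplified two-level form; here a grid search stands in for the elementary optimizations and first-order moment propagation stands in for AMV, and the child response model r = x^2 is purely illustrative:

```python
def child_element(target, sigma_x=0.05):
    """Child element: pick local design x on a grid to match the cascaded
    target for its response r = x^2, and return the first-order mean and
    standard deviation of r under input noise of size sigma_x."""
    xs = [i * 0.01 for i in range(401)]       # design grid on [0, 4]
    x = min(xs, key=lambda v: (v * v - target) ** 2)
    return x, x * x, abs(2 * x) * sigma_x     # |dr/dx| * sigma_x

def atc_two_level(system_target, max_iter=10):
    """Minimal two-level coordination: the parent cascades a target down,
    the child passes its achievable mean response (and its std) up, and
    the parent re-targets until the two agree."""
    target = system_target
    for _ in range(max_iter):
        x, resp, std = child_element(target)
        if abs(resp - target) < 1e-9:
            break
        target = resp      # parent accepts the achievable response
    return x, resp, std
```

For a system target of 5.0 the loop settles at x close to 2.24 with a mean response near 5.02; a real probabilistic ATC solves an uncertainty-based optimization in every element and may also match deviations and covariances [365,366].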

7. Conclusions

In this paper, we have reviewed the fundamental issues of UMDO theory and general UMDO approaches for aerospace vehicle design. Acknowledgement of the importance of UMDO is spreading fast, but UMDO theory research is still in the early stages of development. While the fundamental terminology, philosophy, and general procedure are well established, detailed computational algorithms and organizational procedures are still needed to solve the major challenges of UMDO, namely computational complexity and organizational complexity. Hence future research should be directed towards increasing UMDO efficiency at a given level of cost and effort expended on UMDO activities. Besides, UMDO should be amenable to the realistic multidisciplinary organization of aerospace vehicle design, especially concerning the trend of distributed and concurrent design, so as to be appealing to industry for practical applications. Towards these goals, we provide some detailed recommendations as follows. The premise of UMDO is to appropriately represent uncertainties, which essentially includes three parts: (1) Exhaustively list all the uncertainties which should be considered in the design optimization, as discussed in Section 3.1, wherein the relevant uncertainty sources for UMDO in the design phase are enumerated throughout the whole lifecycle of an aerospace vehicle.

(2) Select appropriate mathematical models to represent uncertainties according to the different uncertainty types and the available information about the uncertainties. Generally speaking, probability theory is more advantageous with sufficient information, while non-probabilistic approaches are more feasible with insufficient information or imprecise data. Possibility theory is especially suitable for epistemic uncertainties concerning subjective vagueness and sparse available information, but if the available information (evidence) is conflicting, evidence theory should be used. Possibility theory uses possibility and necessity to describe the likelihood of a proposition, while evidence theory uses belief and plausibility to define lower and upper bounds of the probability. These measures are determined from the known information without any assumptions beyond what is available, which can provide designers or decision makers with a more reliable choice than the single probability obtained via probabilistic approaches with strong assumptions. But the problems with non-probabilistic approaches are that generally more computational cost is needed, and the analysis results are not intuitively interpretable for designers, especially those who lack an understanding of these newer mathematical theories, in contrast with probability theory, which has a long history. Therefore we strongly recommend that the probabilistic and non-probabilistic approaches be used flexibly as complementary techniques to represent uncertainties according to the specific situation, so as to make better use of the available information. (3) Screen uncertainties with sensitivity analysis to reduce the scale of the uncertainty problem. There are numerous sensitivity analysis approaches within probability theory, but very few for non-probabilistic uncertainties. However, the sampling-based method is universal for both aleatory and epistemic uncertainties.
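A minimal sampling-based screening sketch in the spirit of item (3): squared sample correlations serve as variance shares, which is valid only for near-linear models, and the model, threshold and seed below are all illustrative:

```python
import random

def screen_inputs(f, means, stds, n=2000, threshold=0.05, seed=1):
    """Sampling-based screening: estimate each input's share of the output
    variance via its squared sample correlation with the output, and keep
    only the inputs whose share exceeds the threshold."""
    rng = random.Random(seed)
    samples = [[rng.gauss(m, s) for m, s in zip(means, stds)]
               for _ in range(n)]
    y = [f(x) for x in samples]
    ybar = sum(y) / n
    ysd = (sum((v - ybar) ** 2 for v in y) / (n - 1)) ** 0.5
    keep = []
    for i, (m, s) in enumerate(zip(means, stds)):
        xi = [x[i] for x in samples]
        cov = sum((a - m) * (b - ybar) for a, b in zip(xi, y)) / (n - 1)
        share = (cov / (s * ysd)) ** 2     # squared correlation
        if share >= threshold:
            keep.append(i)
    return keep

# Illustrative model: the second input barely matters and is screened out
f = lambda x: 3 * x[0] + 0.01 * x[1] + 2 * x[2]
kept = screen_inputs(f, [0.0, 0.0, 0.0], [1.0, 1.0, 1.0])
```

For genuinely nonlinear models, variance-based indices (e.g. Sobol' estimates) would replace the squared correlation, at a higher sampling cost.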
To quantify the output uncertainty characteristics resulting from uncertainty propagation through the system's inner mechanisms, we have covered a range of non-intrusive uncertainty analysis approaches and have analyzed the pros and cons of each method along the way. No method is universally better than all others, and there are no precise rules about the choice of method in a specific situation. But there is an underlying clue: the suitability of each method mainly depends on its accuracy and computational cost for the specific problem, as well as the designer's preference in the balance between accuracy and computational efficiency. Generally, MCS approaches are more accurate but also more computationally expensive, although with the quick development of variance reduction sampling techniques their efficiency can be greatly enhanced. The most computationally affordable and widely applied method is the Taylor series approximation approach, for its relative ease of implementation and understanding. But this method can only estimate the first two moments of the output rather than the exact distribution, and its accuracy is quite limited for highly nonlinear systems. Considering reliability as one of the dominating measures in UMDO, there is a specific type of method dedicated to reliability analysis. These approaches are specially devised to account for small failure probability calculation, which may lead to poor accuracy or prohibitive computational cost with normal uncertainty analysis methods. Among these methods, the FORM and SORM approaches are the most prevalent; they have formed a full theory with a substantial mathematical foundation and have been widely applied in practical reliability-based optimization problems.
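The FORM search for the most probable point can be sketched with the classical Hasofer-Lind-Rackwitz-Fiessler recursion in standard normal space; the linear limit state below is illustrative and chosen because FORM is exact for it:

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def form_hlrf(g, grad, n, iters=50, tol=1e-8):
    """FORM via the HL-RF iteration in standard normal space;
    returns (reliability index beta, FORM failure probability, MPP)."""
    u = [0.0] * n
    for _ in range(iters):
        gv, dg = g(u), grad(u)
        norm2 = sum(d * d for d in dg)
        # u_{k+1} = [(grad.u_k - g(u_k)) / |grad|^2] * grad
        scale = (sum(d * ui for d, ui in zip(dg, u)) - gv) / norm2
        u_new = [scale * d for d in dg]
        if max(abs(a - b) for a, b in zip(u_new, u)) < tol:
            u = u_new
            break
        u = u_new
    beta = math.sqrt(sum(ui * ui for ui in u))
    return beta, norm_cdf(-beta), u

# Illustrative linear limit state g(u) = 3 - u1 - u2 (failure when g < 0):
beta, pf, mpp = form_hlrf(lambda u: 3 - u[0] - u[1],
                          lambda u: [-1.0, -1.0], 2)
# beta = 3/sqrt(2)
```

For nonlinear limit states the same recursion is used, but convergence and accuracy are not guaranteed, which is exactly the caveat discussed in the text.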
But the first- or second-order approximation of the limit state function at the MPP to estimate the failure probability integral may be insufficient for highly nonlinear systems, and the algorithms to locate the MPP, especially with multiple failure regions or multiple optima, are also problematic, which may contribute to inaccuracy. For highly nonlinear problems, the MPP-based MCS method may be a good choice. Considering that aerospace vehicle design generally


involves several closely coupled disciplines, all of which usually use complex analysis tools for disciplinary analysis, e.g. CFD and FEA, it would be computationally prohibitive to employ normal uncertainty analysis methods, as each single MDA would be very time consuming to obtain a consistent system output for a given set of parameters. For these problems we strongly recommend the decomposition-based uncertainty analysis methods, which decompose the uncertainty analysis into several sub-analysis problems at the disciplinary level with a controllable scale, and meanwhile account for cross propagation through certain coordination strategies. Furthermore, the decomposition-based methods can take advantage of distributed and concurrent computation, which can further alleviate the computational burden. But so far these approaches mainly deal with the cross propagation of the first two moments, and the primary strategies to propagate uncertainty effects are based on the first-order GSE; these should be investigated more thoroughly so that the information exchanged between disciplines is sufficient to enhance analysis accuracy. The kernel of UMDO is to state the UMDO problem as its corresponding mathematical optimization formulation and use an appropriate algorithm to search for the optimum or Pareto set (in multi-objective optimization problems) with a predefined criterion. Numerous formulation models and a variety of search algorithms have been proposed in the literature for optimization under uncertainty and used successfully in many applications. The most widely studied and applied formulations are reliability-based optimization and robust design optimization. These two formulations have deep roots in industry, and have developed quickly, driven by the urgent need for robustness and reliability, especially in the high-risk and high-cost aerospace field.
To solve these optimization formulations with uncertainty, a straightforward method is to integrate uncertainty analysis directly into the search algorithm, analyzing the uncertainty characteristics of the system response at each optimization iteration point, but this leads to a prohibitive computational burden. Hence several approaches have been developed to transform the original UMDO problems into quasi-equivalent simplified formulations, such as the Design for Six Sigma method to approximate robust design optimization, approaches to convert reliability constraints into quasi-deterministic ones, etc. But generally these simplification approaches tend to be too conservative, resulting from the pervasively used worst-case assumptions, and the accuracy of the result should also be confirmed by high-fidelity uncertainty analysis so as to enhance confidence in the optimization result. As for the algorithms to search for the optimum under uncertainty, there is a rich literature addressing this problem. Specifically for simulation-based design optimization, as in the case of aerospace vehicle design, which usually resorts to computer simulation for system analysis rather than explicit analytical equations, gradient-free algorithms are more appealing and have seen rich research activity. Among the gradient-free approaches, we firmly believe that genetic algorithms (GA) are very promising for their global optimization capability and multi-objective Pareto set identification capability. The population-based calculation of GA can also take advantage of distributed computation to reduce time cost. But the convergence of GA is largely affected by the crossover and mutation strategies, which need further investigation to enhance efficiency.
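A minimal real-coded GA sketch applied to a robust objective; the operators, parameters, and the one-dimensional test function are illustrative, not a recommended configuration:

```python
import random

def ga_minimize(obj, lo, hi, pop_size=30, gens=60, pc=0.9, pm=0.2, seed=7):
    """Minimal real-coded genetic algorithm with tournament selection,
    midpoint (blend) crossover, Gaussian mutation and one-elite
    preservation, minimizing obj on [lo, hi]."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    best = min(pop, key=obj)
    for _ in range(gens):
        new = [best]                       # elitism: keep the incumbent
        while len(new) < pop_size:
            p1 = min(rng.sample(pop, 3), key=obj)   # tournament selection
            p2 = min(rng.sample(pop, 3), key=obj)
            c = 0.5 * (p1 + p2) if rng.random() < pc else p1  # crossover
            if rng.random() < pm:                             # mutation
                c += rng.gauss(0.0, 0.1 * (hi - lo))
            new.append(min(hi, max(lo, c)))
        pop = new
        best = min(pop, key=obj)
    return best

# Robust objective: mean + 3*std of f(x) = (x - 3)^2 under x-noise,
# with std estimated by first-order Taylor: |f'(x)| * sigma_x.
sigma_x = 0.1
robust_obj = lambda x: (x - 3) ** 2 + 3 * abs(2 * (x - 3)) * sigma_x
x_star = ga_minimize(robust_obj, 0.0, 10.0)
# the robust optimum of this objective is at x = 3
```

In a UMDO setting `robust_obj` would wrap a (possibly sampled) uncertainty analysis of the system response, and the per-individual evaluations could be distributed, as noted in the text.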
With the technical blocks of UMDO discussed above, the problem of how to effectively organize them in an executive sequence so as to realize UMDO in the computer environment is of extraordinary importance, and is addressed by UMDO procedure research. Corresponding to the straightforward method of optimization under uncertainty with direct integration of uncertainty analysis in the search algorithm, the most intuitive UMDO procedure is to organize the optimization and uncertainty analysis blocks in this iterative way, which results in a double-loop optimization problem with an outer optimization loop and an inner


uncertainty analysis loop. For a multidisciplinary system, this is computationally prohibitive, as each MDA also needs iterations to obtain a consistent analysis result, which essentially makes UMDO a triple-loop optimization problem. Therefore many efforts have been devoted to developing strategies to organize UMDO more efficiently. In general there are two types, namely single-level procedures and decomposition-and-coordination-based procedures. Single-level procedures either merge the two optimization loops into a single one, or decompose the two loops into two separate steps executed sequentially, so that the uncertainty analysis and the deterministic optimization can be programmed separately, which is easier to develop and can make use of legacy codes. It is worth noting that for multidisciplinary system optimization, the separate deterministic optimization formulation is especially attractive, as it can make full use of the existing advanced approaches developed in traditional deterministic MDO to enhance optimization efficiency. But the problem with the single-level approach is that convergence efficiency and result accuracy are not guaranteed, as the formulation of the deterministic optimization rests on strong simplifications based on the preceding uncertainty analysis result, which should be investigated more thoroughly. Specifically, to account for the multidisciplinary feature, we strongly recommend the decomposition-and-coordination-based procedure, which can decompose large-scale UMDO problems into manageable sub uncertainty optimization problems within the disciplinary scope and execute these disciplinary optimizations simultaneously, with a certain coordination strategy to converge to a consistent optimum.
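The sequential single-level idea described above can be sketched in one dimension, alternating a deterministic optimization with an uncertainty analysis that re-shifts the constraint instead of nesting the two loops; the linear limit state, noise level and target reliability index are all illustrative:

```python
def deterministic_opt(shift):
    """Inner deterministic optimization: min x subject to x - shift >= 0."""
    return shift

def reliability_shift(x, sigma_w, beta_target):
    """'Uncertainty analysis' step: for g = x - w with w ~ N(0, sigma_w),
    the constraint holds at reliability index beta when x >= beta*sigma_w,
    so the required deterministic shift is beta_target * sigma_w."""
    return beta_target * sigma_w

def sequential_procedure(sigma_w=0.2, beta_target=3.0, cycles=10, tol=1e-9):
    """Single-level sequential procedure: alternate the deterministic
    optimization and the constraint re-shifting until the shift settles."""
    shift, x = 0.0, 0.0
    for _ in range(cycles):
        x = deterministic_opt(shift)
        new_shift = reliability_shift(x, sigma_w, beta_target)
        if abs(new_shift - shift) < tol:
            break
        shift = new_shift
    return x

# converges to x = beta_target * sigma_w = 0.6
```

For this linear case the shift is independent of x and the loop converges in one re-shift; for nonlinear limit states the shift is recomputed from an MPP search at the current design, and several cycles are typically needed.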
This type of procedure is very appealing not only for its advantage in efficiency, as it can utilize distributed computation technology to address the computational complexity problem, but also for its capability of allowing distributed concurrent disciplinary design optimization, which is very desirable in industry in order to comply with the realistic disciplinary organization structure and maintain disciplinary autonomy. The research in this field is relatively new, and up to now the decomposition and coordination strategies have mainly been based on adapting mature deterministic MDO procedures, e.g. CO and CSSO, to accommodate uncertainty, but we believe that, given its promising features, this field will develop quickly. Regarding computational efficiency, an additional area of UMDO that requires further research is approximation modeling methods. As discussed in the review sections, it is desirable to use approximation models as surrogates for the high-fidelity models in UMDO, as they can effectively reduce the computational cost of uncertainty analysis and optimization compared with the time-consuming models. Furthermore, they can transform the implicit and discrete simulation-based models into explicit and continuous models, so as to facilitate the UMDO technical blocks which rely on continuous functions, such as sensitivity analysis, which needs derivative calculation, and optimization algorithms which need gradient information. To achieve maturity of UMDO in aerospace engineering, a much deeper understanding of mathematics, aerospace disciplines, computation, and their relationships is required than is reflected in current UMDO practice. Additional training in and understanding of UMDO theory are also needed for practical application.
The road to develop UMDO is long and difficult, but we firmly believe that, with strong demand from industry and the quick development of science and technology, UMDO will become an advanced and powerful tool for aerospace vehicle design in the near future.

Acknowledgment

The authors sincerely thank Dr. Yong Zhao, Dr. Weiwei Yang, Dr. Qi Ouyang, and Dr. Yuexing Wei for their helpful comments in



preparing the manuscript. This work was supported in part by the National Natural Science Foundation of China under Grant nos. 50975280 and 61004094, the Program for New Century Excellent Talents in University of the Ministry of Education of China under Grant no. NCET-08-0149, the Fund of Innovation of the Graduate School of the National University of Defense Technology under Grant no. B090102, and the Fund of Innovation of Hunan Province, China.

References

[1] Elms DG. Structural safety-issues and progress. Progress in Structural Engineering and Materials 2004;6:116–26, doi:10.1002/pse.176. [2] Zang TA, Hemsch MJ, Hilburger MW, Kenny SP, Luckring JM, Maghami P, et al. Needs and opportunities for uncertainty-based multidisciplinary design methods for aerospace vehicle. NASA/TM-2002-211462. Langley Research Center; 2002. [3] Oberkampf WL, Helton JC, Joslyn CA, Wojtkiewicz SF, Ferson S. Challenge problems: uncertainty in system response given uncertain parameters. Reliability Engineering and System Safety 2004;85(1–3):11–9. [4] Dantzig GB. Linear programming under uncertainty. Management Science 1955;1(3–4):197–206. [5] Freund RJ. The introduction of risk into a programming model. Econometrica 1956;24(3):253–63. [6] Sahinidis NV. Optimization under uncertainty: state-of-the-art and opportunities. Computers and Chemical Engineering 2004;28(6–7):971–83. [7] Schuëller GI, Jensen HA. Computational methods in optimization considering uncertainties—an overview. Computer Methods in Applied Mechanics and Engineering 2008;198(1):2–13. [8] Dhillon BS, Belland JS. Bibliography of literature on reliability in civil engineering. Microelectronics and Reliability 1986;26(1):99–121. [9] Tong YC. Literature review on aircraft structural risk and reliability analysis. DSTO-TR-1110. DSTO Aeronautical and Maritime Research Laboratory; 2001. [10] Frangopol DM, Maute K. Life-cycle reliability-based optimization of civil and aerospace structures. Computers and Structures 2003;81(7):397–410. [11] Padula SL, Gumbert CR, Li W. Aerospace applications of optimization under uncertainty. Optimization and Engineering 2006;7(3):317–28. [12] Long MW, Narciso JD. Probabilistic design methodology for composite aircraft structures. DOT/FAA/AR-99/2. US Department of Transportation; 1999. [13] Uebelhart SA. Non-deterministic design and analysis of parameterized optical structures during conceptual design.
PhD dissertation, Massachusetts Institute of Technology, 2006. [14] Li L. Structural design of composite rotor blades with consideration of manufacturability, durability, and manufacturing uncertainties. PhD dissertation, Georgia Institute of Technology, 2008. [15] Li W, Huyse L, Padula S. Robust airfoil optimization to achieve consistent drag reduction over a mach range. NASA/CR-2001-211042. Langley Research Center; 2001. [16] Gumbert CR, Newman PA. Effect of random geometric uncertainty on the computational design of a 3-D flexible wing. In: Proceedings of the 20th AIAA applied aerodynamics conference, 2002. [17] Lindsley NJ, Pettit CL, Beran PS. Nonlinear plate aeroelastic response with uncertain stiffness and boundary conditions. Structure and Infrastructure Engineering 2006;2(3–4):201–20. [18] Wie B, Liu Q, Sunkel J. Robust stabilization of the space station in the presence of inertia matrix uncertainty. In: The first IEEE regional conference on aerospace control systems proceedings, 1993. [19] DeLaurentis DA. A probabilistic approach to aircraft design emphasizing guidance and stability and control uncertainties. PhD dissertation, Georgia Institute of Technology, 1998. [20] Padmanabhan D. Reliability-based optimization for multidisciplinary system design. PhD dissertation, University of Notre Dame, 2003. [21] Sues RH, Oakley DR, Rhodes GS. MDO of aeropropulsion components considering uncertainty. In: AIAA/NASA/ISSMO symposium on multidisciplinary analysis and optimization, Reston, VA, AIAA-96-4062-CP, 1996. [22] Noor AK. Nondeterministic approaches and their potential for future aerospace systems. NASA/CP-2001-211050. Langley Research Center; 2001. [23] Yu X, Du X. Reliability-based multidisciplinary optimization for aircraft wing design. Structure and Infrastructure Engineering 2006;2(3-4):277–89. [24] Zeeshan Q, Yunfeng D, Rafique AF, Kamran A, Nisar K. Multidisciplinary robust design and optimization of multistage boost phase interceptor. 
In: 51st AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics, and materials conference, Orlando, Florida, AIAA-2010-2920, 2010. [25] Oberkampf WL, DeLand SM, Rutherford BM, Diegert KV, Alvin KF. A new methodology for the estimation of total uncertainty in computational simulation. In: 40th AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics, and materials conference and exhibit, St. Louis, MO, AIAA-99-1612, 1999. [26] Hastings D, McManus H. A framework for understanding uncertainty and its mitigation and exploitation in complex systems. In: 2004 engineering systems symposium, Massachusetts Institute of Technology, 2004.

[27] DeLaurentis DA, Mavris DN. Uncertainty modeling and management in multidisciplinary analysis and synthesis. In: 38th aerospace sciences meeting and exhibit, Reno, NV, AIAA-2000-0422, 2000. [28] IEEE guide for the definition of reliability program plans for nuclear power generating stations. IEEE Std. 933-1999, 1999. [29] Ingram-Cotton JB, Hecht MJ, Duphily RJ, Zambrana M, Hiramoto T, O'Connor C. Reliability program requirements for space systems. Aerospace Report No. TOR-2007(8583)-6889, The Aerospace Corporation, US, 2007. [30] MIL-STD-785REVB reliability program for systems and equipment development and production. MIL-STD-785B, US Department of Defense, 1980. [31] Agarwal H. Reliability based design optimization formulations and methodologies. PhD dissertation, University of Notre Dame, 2004. [32] Mohan NS. Robust design. PhD dissertation, Indian Institute of Technology, 2002. [33] Taguchi G, Elsayed EA, Hsiang TC. Quality engineering in production systems. New York: McGraw-Hill; 1989. [34] Park SH. Robust design and analysis for quality engineering. London: Chapman and Hall; 1996. [35] Park G, Lee T, Lee KH, Hwang K. Robust optimization: an overview. AIAA Journal 2006;44(1):181–91. [36] Beyer H, Sendhoff B. Robust optimization: a comprehensive survey. Computer Methods in Applied Mechanics and Engineering 2007;196(33–34):3190–218. [37] Mourelatos ZP, Jinghong L. A methodology for trading-off performance and robustness under uncertainty. Journal of Mechanical Design 2006;128(4):856–63. [38] Uebelhart SA, Miller DW, Blaurock C. Uncertainty characterization in integrated modeling. In: 46th AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics and materials, Austin, Texas, AIAA-2005-2142, 2005. [39] Neumaier A, Fuchs M, Dolejsi E, Csendes T, Dombi J, Banhelyi B, et al. Application of clouds for modeling uncertainties in robust space system design. ACT Ariadna Research ACT-RPT-05-5201. European Space Agency; 2007. [40] Wang Z, Chen X, Luo W, Zhang W.
Research on theory and application of multidisciplinary design optimization of flight vehicles. Beijing: National Defense Industry Press; 2006. [41] Hoffman FO, Hammonds JS. Propagation of uncertainty in risk assessments: the need to distinguish between uncertainty due to lack of knowledge and uncertainty due to variability. Risk Analysis 1994;14(5):707–12. [42] Helton JC, Burmaster DE. Treatment of aleatory and epistemic uncertainty in performance assessments for complex systems. Reliability Engineering and System Safety 1996;54(2–3):91–4. [43] Roy CJ, Oberkampf WL. A complete framework for verification, validation, and uncertainty quantification in scientific computing. In: 48th AIAA aerospace sciences meeting including the New Horizons forum and aerospace exposition, Orlando, Florida, AIAA-2010-124, 2010. [44] Draper D. Assessment and propagation of model uncertainty. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 1995;57(1):45–97. [45] Ferson S, Joslyn CA, Helton JC, Oberkampf WL, Sentz K. Summary from the epistemic uncertainty workshop: consensus amid diversity. Reliability Engineering and System Safety 2004;85(1–3):355–69. [46] Thunnissen DP. Uncertainty classification for the design and development of complex systems. In: The third annual predictive methods conference, Newport Beach, CA, 2003. [47] Thunnissen DP. Propagating and mitigating uncertainty in the design of complex multidisciplinary systems. PhD dissertation, California Institute of Technology, 2005. [48] Walton MA. Managing uncertainty in space systems conceptual design using portfolio theory. PhD dissertation, Massachusetts Institute of Technology, 2002. [49] Batill SM, Renaud JE, Gu X. Modeling and simulation uncertainty in multidisciplinary design optimization. In: Eighth AIAA/USAF/NASA/ISSMO symposium on multidisciplinary analysis and optimization, Long Beach, CA, AIAA-2000-4803, 2000. [50] Du X, Chen W.
A methodology for managing the effect of uncertainty in simulation-based design. AIAA Journal 2000;38(8):1471–8. [51] de Weck O, Eckert C, Clarkson J. A classification of uncertainty for early product and system design. In: International conference on engineering design, Paris, France, ICED'07/No.480, 2007. [52] McKay MD, Morrison JD, Upton SC. Evaluating prediction uncertainty in simulation models. Computer Physics Communications 1999;117(1–2):44–51. [53] Oberkampf WL, DeLand SM, Rutherford BM. Estimation for total uncertainty in modeling and simulation. SAND2000-0824. Sandia National Laboratories; 2000. [54] Oberkampf WL, Helton JC, Sentz K. Mathematical representation of uncertainty. In: Non-deterministic approaches forum, Seattle, WA, AIAA-2001-1645, 2001. [55] Klir GJ. Uncertainty and information measures for imprecise probabilities: an overview. In: First international symposium on imprecise probabilities and their applications, Ghent, Belgium, 1999. [56] Walley P. Towards a unified theory of imprecise probability. International Journal of Approximate Reasoning 2000;24(2–3):125–48. [57] Haimes YY, Barry T, Lambert JH. When and how can you specify a probability distribution when you don't know much? Risk Analysis 1994;14(5):661–706.

W. Yao et al. / Progress in Aerospace Sciences 47 (2011) 450–479

[58] Hattis D, Burmaster DE. Assessment of variability and uncertainty distributions for practical risk analyses. Risk Analysis 1994;14(5):713–30. [59] Rice JA. Mathematical statistics and data analysis. 3rd ed. California: Duxbury Press; 2006. [60] Marhadi K, Venkataraman S, Pai SS. Quantifying uncertainty in statistical distribution of small sample data using Bayesian inference of unbounded Johnson distribution. In: 49th AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics, and materials conference, Schaumburg, IL, AIAA-2008-1810, 2008. [61] Ryan RS, Townsend JS. Application of probabilistic analysis design methods in space programs—the approaches, the status, and the needs. In: 34th AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics and materials conference, AIAA/ASME Adaptive Structures Forum, La Jolla, CA, AIAA-93-1381, 1993. [62] Smith N, Mahadevan S. Probabilistic methods for aerospace system conceptual design. Journal of Spacecraft and Rockets 2003;40(3):411–8. [63] Nam T, Soban DS, Mavris DN. A non-deterministic aircraft sizing method under probabilistic design constraints. In: 47th AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics, and materials conference, Newport, Rhode Island, AIAA-2006-2062, 2006. [64] Merrill RG, Andraschko M, Stromgren C, Cirillo B, Earle K, Goodliff K. A comparison of probabilistic and deterministic campaign analysis for human space exploration. In: AIAA SPACE 2008 conference & exposition, San Diego, California, AIAA-2008-7748, 2008. [65] Hassan R, Crossley W. Spacecraft reliability-based design optimization under uncertainty including discrete variables. Journal of Spacecraft and Rockets 2008;45(2):394–405. [66] Youn BD, Wang P. Bayesian reliability based design optimization under both aleatory and epistemic uncertainties. In: 11th AIAA/ISSMO multidisciplinary analysis and optimization conference, Portsmouth, Virginia, AIAA-2006-6928, 2006. [67] Wang P, Youn BD, Xi Z, Kloess A. 
Bayesian reliability analysis with evolving, insufficient, and subjective data sets. Journal of Mechanical Design 2009;131(11):111008. [68] Yu BH, Van Kuiken CJA, Telford DG. Managing uncertainty in reliability analysis with Bayesian inference and uncertainty propagation. In: US Air Force T&E Days 2010, Nashville, Tennessee, AIAA-2010-2594, 2010. [69] Wang P, Youn BD. Efficient Bayesian reliability analysis and design with a user-defined confidence level. In: 51st AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics, and materials conference, Orlando, Florida, AIAA-2010-2590, 2010. [70] Yager RR, Kacprzyk J, Fedrizzi M. Advances in the Dempster–Shafer theory of evidence. New York: John Wiley and Sons; 1994. [71] Sentz K, Ferson S. Combination of evidence in Dempster–Shafer theory. SAND2002-0835. Sandia National Laboratories; 2002. [72] Helton JC, Johnson JD, Oberkampf WL, Sallaberry CJ. Representation of analysis results involving aleatory and epistemic uncertainty. SAND2008-4379. Sandia National Laboratories; 2008. [73] Shafer G. A mathematical theory of evidence. Princeton, New Jersey: Princeton University Press; 1976. [74] Mourelatos Z, Zhou J. A design optimization method using evidence theory. Journal of Mechanical Design 2006;128(4):901–8. [75] Helton JC, Johnson JD, Oberkampf WL, Storlie CB. A sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory. Computer Methods in Applied Mechanics and Engineering 2007;196(37–40):3980–98. [76] Oberkampf WL, Helton JC. Investigation of evidence theory for engineering applications. In: Fourth non-deterministic approaches forum, Denver, Colorado, AIAA-2002-1569, 2002. [77] Agarwal H, Renaud JE, Preston EL, Padmanabhan D. Uncertainty quantification using evidence theory in multidisciplinary design optimization. Reliability Engineering and System Safety 2004;85(1–3):281–94. [78] Croisard N, Vasile M, Kemble S, Radice G. 
Preliminary space mission design under uncertainty. Acta Astronautica 2010;66(5–6):654–64. [79] Zadeh LA. Fuzzy sets as a basis for a theory of possibility. Fuzzy Sets and Systems 1978;1(1):3–28. [80] Zadeh LA. Fuzzy sets. Information and Control 1965;8(3):338–53. [81] Maglaras G, Nikolaidis E, Haftka RT, Cudney HH. Analytical-experimental comparison of probabilistic methods and fuzzy set based methods for designing under uncertainty. Structural and Multidisciplinary Optimization 1997;13(2–3):69–80. [82] Ferrari P, Savoia M. Fuzzy number theory to obtain conservative results with respect to probability. Computer Methods in Applied Mechanics and Engineering 1998;160(3–4):205–22. [83] Chen S. Comparing probabilistic and fuzzy set approaches for design in the presence of uncertainty. PhD dissertation, Virginia Polytechnic Institute and State University, 2000. [84] Nikolaidis E, Chen S, Cudney H, Haftka RT, Rosca R. Comparison of probability and possibility for design against catastrophic failure under uncertainty. Journal of Mechanical Design 2004;126(3):386–94. [85] Braibant V, Oudshoorn A, Boyer C, Delcroix F. Non-deterministic possibilistic approaches for structural analysis and optimal design. AIAA Journal 1999;37(10):1298–303. [86] Youn B, Choi KK, Du L, Gorsich D. Integration of possibility-based optimization to robust design for epistemic uncertainty. In: Sixth world congress of structural and multidisciplinary optimization, Rio de Janeiro, Brazil, 2005.


[87] Mourelatos ZP, Zhou J. Reliability estimation and design with insufficient data based on possibility theory. AIAA Journal 2005;43(8):1696–705. [88] He L, Huang H, Du L, Zhang X, Miao Q. A review of possibilistic approaches to reliability analysis and optimization in engineering design. In: Jacko J, editor. Human–computer interaction, part IV, HCII 2007, Lecture notes in computer science, vol. 4553. Springer-Verlag; 2007. p. 1075–84. [89] Langley RS. Unified approach to probabilistic and possibilistic analysis of uncertain systems. Journal of Engineering Mechanics 2000;126(11):1163–72. [90] Shin Y, Wu Y. A hybrid possibilistic–probabilistic analysis framework for uncertainty management. In: 44th AIAA/ASME/ASCE/AHS structures, structural dynamics, and materials, Norfolk, Virginia, AIAA-2003-1573, 2003. [91] Du L, Choi KK, Youn BD, Gorsich D. Possibility-based design optimization method for design problems with both statistical and fuzzy input data. In: Sixth world congresses of structural and multidisciplinary optimization, Rio de Janeiro, Brazil, 2005. [92] Rao SS, Berke L. Analysis of uncertain structural systems using interval analysis. AIAA Journal 1997;35(4):727–35. [93] Rao SS, Lingtao C. Optimum design of mechanical systems involving interval parameters. Journal of Mechanical Design 2002;124(3):465–72. [94] Majumder L, Rao SS. Interval-based multi-objective optimization of aircraft wings under gust loads. AIAA Journal 2009;47(3):563–75. [95] Moore RE, Kearfott RB, Cloud MJ. Introduction to interval analysis. Philadelphia: SIAM Press; 2009. [96] Ben-Haim Y, Elishakoff I. Convex models of uncertainty in applied mechanics. Amsterdam: Elsevier; 1990. [97] Ben-Haim Y. A non-probabilistic concept of reliability. Structural Safety 1994;14(4):227–45. [98] Ben-Haim Y. Convex models of uncertainty: applications and implications. Erkenntnis 1994;41(2):139–56. [99] Ben-Haim Y. Info-gap decision theory: decisions under severe uncertainty. 2nd ed. 
Amsterdam: Academic Press; 2006. [100] Ben-Haim Y. Uncertainty, probability and information-gaps. Reliability Engineering and System Safety 2004;85(1–3):249–66. [101] Fuchs M, Neumaier A. Uncertainty modeling with clouds in autonomous robust design optimization. In: Proceedings of the third international workshop reliable engineering computing, 2008. [102] Fuchs M, Neumaier A. Handling uncertainty in higher dimensions with potential clouds towards robust design optimization. Advances in Intelligent and Soft Computing 2008;48:376–82. [103] Liu B. Uncertainty theory: an introduction to its axiomatic foundations. Berlin: Springer-Verlag; 2004. [104] Klir GJ, Smith RM. On measuring uncertainty and uncertainty-based information: recent developments. Annals of Mathematics and Artificial Intelligence 2001;32(1–4):1012–2443. [105] Helton JC, Oberkampf WL. Alternative representations of epistemic uncertainty. Reliability Engineering and System Safety 2004;85(1–3):1–10. [106] Helton JC, Johnson JD, Oberkampf WL. An exploration of alternative approaches to the representation of uncertainty in model predictions. Reliability Engineering and System Safety 2004;85(1–3):39–71. [107] Laskey KB. Model uncertainty: theory and practical implications. IEEE Transactions on Systems, Man, and Cybernetics—Part A: Systems and Humans 1996;26(3):340–8. [108] Rebba R, Mahadevan S, Zhang R. Validation of uncertainty propagation models. In: 44th AIAA/ASME/ASCE/AHS structures, structural dynamics, and materials conference, Norfolk, Virginia, AIAA-2003-1913, 2003. [109] Mahadevan S, Rebba R. Validation of reliability computational models using Bayes networks. Reliability Engineering and System Safety 2005;87(2):223–32. [110] Faragher J. Probabilistic methods for the quantification of uncertainty and error in computational fluid dynamics simulations. DSTO-TR-1633. DSTO Platforms Sciences Laboratory; 2004. [111] Oberkampf WL, Trucano TG. Verification and validation in computational fluid dynamics. 
Progress in Aerospace Sciences 2002;38(3):209–72. [112] Babuska I, Oden JT. The reliability of computer predictions: can they be trusted? International Journal of Numerical Analysis and Modeling 2005;1(1):1–18. [113] Anderson TW, Darling DA. Asymptotic theory of certain "goodness-of-fit" criteria based on stochastic processes. Annals of Mathematical Statistics 1952;23(2):193–212. [114] Stephens MA. EDF statistics for goodness of fit and some comparisons. Journal of the American Statistical Association 1974;69(347):730–7. [115] Anderson TW, You L. Adequacy of asymptotic theory for goodness-of-fit criteria for specific distributions. Journal of Time Series Analysis 2008;17(6):533–52. [116] Bichon BJ, McFarland JM, Mahadevan S. Using Bayesian inference and efficient global reliability analysis to explore distribution uncertainty. In: 49th AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics, and materials conference, Schaumburg, IL, AIAA-2008-1712, 2008. [117] McFarland JM, Bichon BJ. Bayesian model averaging for reliability analysis with probability distribution model form uncertainty. In: 50th AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics, and materials conference, Palm Springs, California, AIAA-2009-2231, 2009. [118] Zhang R, Mahadevan S. Model uncertainty and Bayesian updating in reliability-based inspection. Structural Safety 2000;22(2):145–60. [119] Raftery AE, Madigan D, Hoeting JA. Model selection and accounting for model uncertainty in linear regression models. Journal of the American Statistical Association 1997;92:179–91.


[120] Roache PJ. Verification and validation in computational science and engineering. New Mexico: Hermosa Publishers; 1998. [121] Roy CJ. Review of discretization error estimators in scientific computing. In: 48th AIAA aerospace sciences meeting, Orlando, FL, AIAA-2010-0126, 2010. [122] Cavallo PA, Sinha N. Error quantification for computational aerodynamics using an error transport equation. Journal of Aircraft 2007;44(6):1954–63. [123] Shih TI, Williams BR. Development and evaluation of an a posteriori method for estimating and correcting grid-induced errors in solutions of the Navier–Stokes equations. In: Proceedings of the 47th AIAA aerospace sciences meeting including the New Horizons forum and aerospace exposition, 2009. [124] Oden JT, Prudhomme S. Goal-oriented error estimation and adaptivity for the finite element method. Computers and Mathematics with Applications 2001;41(5–6):735–56. [125] Oden JT, Prudhomme S, Bauman P. On the extension of goal-oriented error estimation and hierarchical modeling to discrete lattice methods. Computer Methods in Applied Mechanics and Engineering 2005;194(34–35):3668–88. [126] Fuentes D, Littlefield D, Oden JT, Prudhomme S. Error control in finite element approximations of nonlinear problems in mechanics. In: Proceedings of the second international conference on adaptive modeling and simulation, 2005. [127] Abdulle A. On a priori error analysis of fully discrete heterogeneous multiscale FEM. Multiscale Modeling and Simulation 2005;4(2):447–59. [128] Ainsworth M, Oden JT. A posteriori error estimation in finite element analysis. New York: Wiley Interscience; 2000. [129] Zienkiewicz OC, Zhu JZ. The superconvergent patch recovery and a posteriori error estimates, Part 2: error estimates and adaptivity. International Journal for Numerical Methods in Engineering 1992;33(7):1365–82. [130] Oden JT, Prudhomme S. Estimation of modeling error in computational mechanics. Journal of Computational Physics 2002;182(2):496–515. 
[131] Babuska I, Oden JT. Verification and validation in computational engineering and science: basic concepts. Computer Methods in Applied Mechanics and Engineering 2004;193(36–38):4057–66. [132] Apley DW, Liu J, Chen W. Understanding the effects of model uncertainty in robust design with computer experiments. Journal of Mechanical Design 2006;128(4):945–58. [133] Rebba R, Mahadevan S. Statistical methods for model validation under uncertainty. In: Proceedings of the 47th AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics, and materials conference, 2006. [134] Rebba R, Mahadevan S, Huang S. Validation and error estimation of computational models. Reliability Engineering and System Safety 2006;91(10–11):1390–7. [135] Saltelli A, Ratto M, Andres T, Campolongo F, Cariboni J, Gatelli D, et al. Global sensitivity analysis: the primer. Chichester: John Wiley and Sons; 2008. [136] Hofer E. Sensitivity analysis in the context of uncertainty analysis for computationally intensive models. Computer Physics Communications 1999;117(1–2):21–34. [137] Helton JC, Johnson JD, Sallaberry CJ, Storlie CB. Survey of sampling-based methods for uncertainty and sensitivity analysis. Reliability Engineering and System Safety 2006;91(10–11):1175–209. [138] Iman RL, Helton JC. An investigation of uncertainty and sensitivity analysis techniques for computer models. Risk Analysis 1988;8(1):71–90. [139] Cacuci DG, Ionescu-Bujor M. A comparative review of sensitivity and uncertainty analysis of large-scale systems—II: statistical methods. Nuclear Science and Engineering 2004;147(3):204–17. [140] Liu H, Chen W, Sudjianto A. Probabilistic sensitivity analysis methods for design under uncertainty. In: Proceedings of the 10th AIAA/ISSMO multidisciplinary analysis and optimization conference, 2004. [141] Helton JC, Davis FJ. Latin hypercube sampling and the propagation of uncertainty in analyses of complex systems. Reliability Engineering and System Safety 2003;81(1):23–69. [142] Morris MD. 
Factorial sampling plans for preliminary computational experiments. Technometrics 1991;33(2):161–74. [143] Campolongo F, Cariboni J, Saltelli A. An effective screening design for sensitivity analysis of large models. Environmental Modelling and Software 2007;22(10):1509–18. [144] Helton JC, Johnson JD, Oberkampf WL, Sallaberry CJ. Sensitivity analysis in conjunction with evidence theory representations of epistemic uncertainty. Reliability Engineering and System Safety 2006;91(10–11):1414–34. [145] Oberguggenberger M, King J, Schmelzer B. Classical and imprecise probability methods for sensitivity analysis in engineering: a case study. International Journal of Approximate Reasoning 2009;50(4):680–93. [146] Bae HR, Grandhi RV, Canfield RA. Sensitivity analysis of structural response uncertainty propagation using evidence theory. Structural and Multidisciplinary Optimization 2006;31(4):270–91. [147] Guo J, Du X. Sensitivity analysis with mixture of epistemic and aleatory uncertainties. AIAA Journal 2007;45(9):2337–49. [148] Guo J. Uncertainty analysis and sensitivity analysis for multidisciplinary systems design. PhD dissertation, Missouri University of Science and Technology, 2008. [149] Keane AJ, Nair PB. Computational approaches for aerospace design: the pursuit of excellence. Chichester, West Sussex: John Wiley and Sons; 2005. [150] Wiener N. The homogeneous chaos. American Journal of Mathematics 1938;60(4):897–936. [151] Ghanem R, Spanos P. Stochastic finite elements: a spectral approach. New York: Springer-Verlag; 1991.

[152] Le Maitre OP, Knio OM, Najm HN, Ghanem RG. A stochastic projection method for fluid flow I. Basic formulation. Journal of Computational Physics 2001;173(2):481–511. [153] Le Maitre OP, Reagan MT, Najm HN, Ghanem RG, Knio OM. A stochastic projection method for fluid flow II. Random process. Journal of Computational Physics 2002;181(1):9–44. [154] Xiu D, Karniadakis GE. The Wiener-Askey polynomial chaos for stochastic differential equations. SIAM Journal on Scientific Computing 2002;24:619–44. [155] Eldred MS, Burkardt J. Comparison of non-intrusive polynomial chaos and stochastic collocation methods for uncertainty quantification. In: Proceedings of the 47th AIAA aerospace sciences meeting including the New Horizons forum and aerospace exposition, 2009. [156] Eldred MS. Recent advances in non-intrusive polynomial chaos and stochastic collocation methods for uncertainty analysis and design. In: Proceedings of the 50th AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics, and materials conference, 2009. [157] Helton JC, Johnson JD, Sallaberry CJ, Storlie CB. Survey of sampling-based methods for uncertainty and sensitivity analysis. SAND2006-2901. Sandia National Laboratories; 2006. [158] Landau DP, Binder K. A guide to Monte Carlo simulations in statistical physics. 2nd ed. New York: Cambridge University Press; 2005. [159] Davis PJ, Rabinowitz P. Methods of numerical integration. 2nd ed. Orlando: Academic Press; 1984. [160] Li KS, Lumb P. Reliability analysis by numerical integration and curve fitting. Structural Safety 1985;3(1):29–36. [161] Bernardo FP, Pistikopoulos EN, Saraiva PM. Integration and computational issues in stochastic design and planning optimization problems. Industrial and Engineering Chemistry Research 1999;38(8):3056–68. [162] Yiben L. A quadrature-based technique for robust design with computer simulations. PhD dissertation, Massachusetts Institute of Technology, 2007. [163] Evans M, Swartz T. 
Methods for approximating integrals in statistics with special emphasis on Bayesian integration problems. Statistical Science 1995;10(3):254–72. [164] Monahan JF. Numerical methods of statistics. Cambridge: Cambridge University Press; 2001. [165] Barry T. Recommendations on the testing and use of pseudo-random number generators used in Monte Carlo analysis for risk assessment. Risk Analysis 1996;16(1):93–105. [166] Robert CP, Casella G. Monte Carlo statistical methods. 2nd ed. New York: Springer; 2004. [167] Ang GL, Tang WH. Optimal importance-sampling density estimator. Journal of Engineering Mechanics 1992;118(6):1146–63. [168] Hinrichs A. Optimal importance sampling for the approximation of integrals. Journal of Complexity 2010;26(2):125–34. [169] Cao Y, Hussaini MY, Zang TA. On the exploitation of sensitivity derivatives for improving sampling methods. In: Proceedings of the 44th AIAA structures, structural dynamics and mechanics conference, 2003. [170] Helton JC, Johnson JD, Oberkampf WL, Storlie CB. A sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory. SAND2006-5557. Sandia National Laboratories; 2006. [171] Kreinovich V, Ferson SA. A new Cauchy-based black-box technique for uncertainty in risk analysis. Reliability Engineering and System Safety 2004;85(1–3):267–79. [172] Swiler LP, Paez TL, Mayes RL. Epistemic uncertainty quantification tutorial. In: Proceedings of the IMAC-XXVII, 2009. [173] Helton JC. Conceptual and computational basis for the quantification of margins and uncertainty. SAND2009-3055. Sandia National Laboratories; 2009. [174] Gu X, Renaud JE, Batill SM, Brach RM, Budhiraja AS. Worst case propagated uncertainty of multidisciplinary systems in robust design optimization. Journal of Structural and Multidisciplinary Optimization 2000;20(3):190–213. [175] Cao H, Duan B. Uncertainty analysis for multidisciplinary systems based on convex models. 
In: Proceedings of the 10th AIAA/ISSMO multidisciplinary analysis and optimization conference, 2004. [176] Du X, Chen W. An efficient approach to probabilistic uncertainty analysis in simulation-based multidisciplinary design. In: Proceedings of the 38th AIAA aerospace sciences meeting and exhibit, 2000. [177] Hahn G, Shapiro S. Statistical models in engineering. New York: Wiley; 1967. [178] Wong FS. First-order, second-moment methods. Computers and Structures 1985;20(4):779–91. [179] Green LL, Lin H, Khalessi MR. Probabilistic methods for uncertainty propagation applied to aircraft design. In: Proceedings of the 20th AIAA applied aerodynamics conference, 2002. [180] Rosenblueth E. Point estimates for probability moments. Proceedings of the National Academy of Sciences of the United States of America 1975;72(10): 3812–4. [181] Melchers RE. Structural reliability analysis and prediction. Chichester: John Wiley and Sons; 1999. [182] Breitung K. Asymptotic approximations for probability integrals. Probabilistic Engineering Mechanics 1989;4(4):187–90. [183] Song BF. A numerical integration method for computing structural system reliability. Computers and Structures 1990;36(1):65–70.


[184] Sakamoto J, Mori Y, Sekioka T. Probability analysis method using fast Fourier transform and its application. Structural Safety 1997;19(1):21–36. [185] Penmetsa RC, Grandhi RV. Adaptation of fast Fourier transformations to estimate structural failure probability. Finite Elements in Analysis and Design 2003;39(5–6):473–85. [186] Chen X, Lind NC. Fast probability integration by three-parameter normal tail approximation. Structural Safety 1982;1(4):269–76. [187] Kim NH, Ramu P. Tail modeling in reliability-based design optimization for highly safe structural systems. In: Proceedings of the 47th AIAA/ASME/ ASCE/AHS/ASC structures, structural dynamics, and materials conference, 2006. [188] Acar E, Rais-Rohani M, Eamon CD. Reliability estimation using dimension reduction and extended generalized lambda distribution. In: Proceedings of the 49th AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics, and materials conference, 2008. [189] Rahman S, Xu H. A univariate dimension-reduction method for multidimensional integration in stochastic mechanics. Probabilistic Engineering Mechanics 2004;19(4):393–408. [190] Youn BD, Zhimin X, Wells L, Lamb D. Enhanced dimension-reduction (eDR) method for reliability-based robust design optimization. In: Proceedings of the 11th AIAA/ISSMO multidisciplinary analysis and optimization conference, 2006. [191] Rackwitz R. Reliability analysis—a review and some perspectives. Structural Safety 2001;23:365–95. [192] Hohenbichler M, Gollwitzer S, Kruse W, Rackwitz R. New light on first- and second-order reliability methods. Structural Safety 1987;4(4):267–84. [193] Zhao Y, Ono T. A general procedure for first/second-order reliability method (FORM/SORM). Structural Safety 1999;21(2):95–112. [194] Rosenblatt M. Remarks on a multivariate transformation. Annals of Mathematical Statistics 1952;23(3):470–2. [195] Hasofer AM, Lind NC. Exact and invariant second-moment code format. Journal of Engineering Mechanics 1974;100(1):111–21. 
[196] Rackwitz R, Fiessler B. Structural reliability under combined random load sequences. Computers and Structures 1978;9(5):489–94. [197] Liu PL, Der Kiureghian A. Optimization algorithms for structural reliability. Structural Safety 1991;9(3):161–77. [198] Simões LMC. A branch and bound strategy for finding the reliability index with non-convex performance functions. Structural Safety 1988;5(2):95–108. [199] Yang D, Li G, Cheng G. Convergence analysis of first order reliability method using chaos theory. Computers and Structures 2006;84(8–9):563–71. [200] Lee J, Yang Y, Ruy W. A comparative study on reliability-index and target-performance-based probabilistic structural design optimization. Computers and Structures 2002;80(3–4):257–69. [201] Choi KK, Youn BD. On probabilistic approaches for reliability-based design optimization (RBDO). In: Proceedings of the ninth AIAA/ISSMO symposium on multidisciplinary analysis and optimization, 2002. [202] Tu J, Choi KK, Park YH. A new study on reliability-based design optimization. Journal of Mechanical Design 1999;121(4):557–64. [203] Youn BD, Choi KK, Park YH. Hybrid analysis method for reliability-based design optimization. Journal of Mechanical Design 2003;125(2):221–32. [204] Youn BD, Choi KK, Du L. Adaptive probability analysis using an enhanced hybrid mean value method. Journal of Structural and Multidisciplinary Optimization 2005;29(2):134–48. [205] Du X, Sudjianto A, Chen W. An integrated framework for optimization under uncertainty using inverse reliability strategy. In: Proceedings of the DETC’03 ASME 2003 design engineering technical conferences and computers and information in engineering conference, 2004. [206] Choi K, Youn B. An investigation of the nonlinearity of reliability-based design optimization. In: Proceedings of the 28th ASME design automation conference, 2002. [207] Der Kiureghian A, Zhang Y, Li C. Inverse reliability problem. ASCE Journal of Engineering Mechanics 1994;120(5):1150–9. 
[208] Li H, Foschi R. An inverse reliability method and its application. Structural Safety 1998;20(3):257–70. [209] Ramu P, Qu X, Youn BD, Choi KK. Safety factor and inverse reliability measures. In: Proceedings of the 45th AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics and materials conference, 2004. [210] Breitung K. Asymptotic approximations for multinormal integrals. Journal of Engineering Mechanics 1984;110(3):357–66. [211] Cai GQ, Elishakoff I. Refined second-order reliability analysis. Structural Safety 1994;14(4):267–76. [212] Grandhi RV, Wang L. Higher-order failure probability calculation using nonlinear approximations. Computer Methods in Applied Mechanics and Engineering 1999;168(1–4):185–206. [213] Zhao Y, Ono T, Kato M. Second-order third-moment reliability method. Journal of Structural Engineering 2002;128(8):1087–90. [214] Zhao Y, Ono T. Moment methods for structural reliability. Structural Safety 2001;23(1):47–75. [215] Du X, Chen W. A most probable point based method for uncertainty analysis. In: Proceedings of the DETC’00, ASME 2000 design engineering technical conferences and computers and information in engineering conference. [216] Elishakoff I, Hasofer AM. Exact versus approximate determination of structural reliability. Applied Scientific Research 1987;44(3):303–12. [217] Ditlevsen O. Generalized second moment reliability index. Journal of Structural Mechanics 1979;7(4):435–51.


[218] Engelund S, Rackwitz R. A benchmark study on importance sampling techniques in structural reliability. Structural Safety 1993;12(4):255–76. [219] Au SK, Beck JL. Important sampling in high dimensions. Structural Safety 2003;25(2):139–63. [220] Li F, Wu T. An importance sampling based approach for reliability analysis. In: Proceedings of the third annual IEEE conference on automation science and engineering, 2007. [221] Patelli E, Pradlwarter HJ, Schuëller GI. On multinormal integrals by importance sampling for parallel system reliability. Structural Safety 2010, doi:10.1016/j.strusafe.2010.04.002. [222] Bucher CG. Adaptive sampling—an iterative fast Monte Carlo method. Structural Safety 1988;5(2):119–26. [223] Au S, Beck JL. Estimation of small failure probabilities in high dimensions by subset simulation. Probabilistic Engineering Mechanics 2001;16(4):263–77. [224] Koutsourelakis PS, Pradlwarter HJ, Schuëller GI. Reliability of structures in high dimensions, Part I: algorithms and applications. Probabilistic Engineering Mechanics 2004;19(4):409–17. [225] Pradlwarter HJ, Pellissetti MF, Schenk CA, Schuëller GI, Kreis A, Fransen S, et al. Realistic and efficient reliability estimation for aerospace structures. Computer Methods in Applied Mechanics and Engineering 2005;194(12–16):1597–617. [226] Pellissetti MF, Schuëller GI, Pradlwarter HJ, Calvi A, Fransen S, Klein M. Reliability analysis of spacecraft structures under static and dynamic loading. Computers and Structures 2006;84(21):1313–25. [227] Ditlevsen O, Melchers RE, Gluver H. General multi-dimensional probability integration by directional simulation. Computers and Structures 1990;36(2):355–68. [228] Melchers RE. Structural system reliability assessment using directional simulation. Structural Safety 1994;16(1–2):23–37. [229] Nie J, Ellingwood BR. Directional methods for structural reliability analysis. Structural Safety 2000;22(3):233–49. [230] Du X, Chen W. 
Towards a better understanding of modeling feasibility robustness in engineering design. Journal of Mechanical Design 2000;122(4):385–94. [231] Schuëller GI, Pradlwarter HJ, Koutsourelakis PS. A critical appraisal of reliability estimation procedures for high dimensions. Probabilistic Engineering Mechanics 2004;19(4):463–74. [232] Rajashekhar MR, Ellingwood BR. A new look at the response surface approach for reliability analysis. Structural Safety 1993;12(3):205–20. [233] Zou T, Mahadevan S, Mourelatos ZP. Reliability analysis with adaptive response surfaces. In: Proceedings of the 44th AIAA/ASME/ASCE/AHS structures, structural dynamics, and materials conference, 2003. [234] Bucher C, Most T. A comparison of approximate response functions in structural reliability analysis. Probabilistic Engineering Mechanics 2008;23(2–3):154–63. [235] Bichon BJ, McFarland JM, Mahadevan S. Applying EGRA to reliability analysis of systems with multiple failure modes. In: Proceedings of the 51st AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics, and materials conference, 2010. [236] Guo J, Du X. Reliability analysis for multidisciplinary systems with random and interval variables. AIAA Journal 2010;48(1):82–91. [237] Braibant V, Oudshoorn A, Boyer C. Nondeterministic "possibilistic" approaches for structural analysis and optimal design. AIAA Journal 1999;37(10):1298–303. [238] Du L, Choi K, Youn B. Inverse possibility analysis method for possibility-based design optimization. AIAA Journal 2006;44(11):2682–90. [239] Adduri PR, Penmetsa RC. System reliability analysis for mixed uncertain variables. Structural Safety 2009;31(5):375–82. [240] Du X. Unified uncertainty analysis by the first order reliability method. Journal of Mechanical Design 2008;130(9):91401. [241] Luo Y, Kang Z, Li A. Structural reliability assessment based on probability and convex set mixed model. Computers and Structures 2009;87(21–22):1408–15. [242] Mori Y, Ellingwood BR. 
Time-dependent system reliability analysis by adaptive importance sampling. Structural Safety 1993;12(1):59–73. [243] Kuschel N, Rackwitz R. Optimal design under time-variant reliability constraints. Structural Safety 2000;22(2):113–27. [244] Cui W, Blockley DI. On the bounds for structural system reliability. Structural Safety 1991;9(4):247–59. [245] Ramachandran K. System reliability bounds: a new look with improvements. Civil Engineering and Environmental Systems 2004;21(4):265–78. [246] Tonon F, Bae H, Grandhi RV, Pettit CL. Using random set theory to calculate reliability bounds for a wing structure. Structure and Infrastructure Engineering 2006;2(3–4):191–200. [247] Adduri PR, Penmetsa RC. Bounds on structural system reliability in the presence of interval variables. Computers and Structures 2007;85(5–6):320–9. [248] Kang W, Song J, Gardoni P. Matrix-based system reliability method and applications to bridge networks. Reliability Engineering and System Safety 2008;93(11):1584–93. [249] Song J, Kang W. System reliability and sensitivity under statistical dependence by matrix-based system reliability method. Structural Safety 2009;31(2):148–56. [250] Nguyen TH, Song J, Paulino GH. Single-loop system reliability-based design optimization using matrix-based system reliability method: theory and applications. Journal of Mechanical Design 2010;132(1):11005–11.


W. Yao et al. / Progress in Aerospace Sciences 47 (2011) 450–479

[251] Wang P, Youn BD, Hu C. A generalized complementary intersection method (CIM) for system reliability analysis. In: Proceedings of the 50th AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics, and materials conference, 2009.
[252] Youn BD, Wang P. Complementary intersection method for system reliability analysis. Journal of Mechanical Design 2009;131(4):41004.
[253] Park S, Choi S, Sikorsky C, Stubbs N. Efficient method for calculation of system reliability of a complex structure. International Journal of Solids and Structures 2004;41(18–19):5035–50.
[254] Mahadevan S, Nagpal BSV, Venkataraman S, Pai SS. Probabilistic design and analysis for system-level application. In: Proceedings of the 48th AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics, and materials conference, 2007.
[255] McDonald M, Mahadevan S. Reliability based design optimization formulations for component and system reliability. In: Proceedings of the 49th AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics, and materials conference, 2008.
[256] Madsen H, Krenk S, Lind N. Methods of structural safety. Englewood Cliffs, NJ: Prentice-Hall; 1986.
[257] Ditlevsen O, Madsen HO. Structural reliability methods. Chichester: John Wiley and Sons; 1996.
[258] Du X, Chen W. Efficient uncertainty analysis methods for multidisciplinary robust design. AIAA Journal 2002;40(3):545–52.
[259] Du X, Wang Y, Chen W. Methods for robust multidisciplinary design. In: Proceedings of the 41st AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics, and materials conference and exhibit, 2000.
[260] Du X, Chen W. Concurrent subsystem uncertainty analysis in multidisciplinary design. In: Proceedings of the eighth AIAA/USAF/NASA/ISSMO symposium on multidisciplinary analysis and optimization, 2000.
[261] Gu X, Renaud JE. Implicit uncertainty propagation for robust collaborative optimization. In: Proceedings of the DETC'01 ASME 2001 design engineering technical conferences and computers and information in engineering conference, 2001.
[262] Gu X, Renaud J. Implementation study of implicit uncertainty propagation (IUP) in decomposition-based optimization. In: Proceedings of the ninth AIAA/USAF/NASA/ISSMO symposium on multidisciplinary analysis and optimization, 2002.
[263] Du X, Chen W. Collaborative reliability analysis for multidisciplinary systems design. In: Proceedings of the ninth AIAA/NASA/USAF/ISSMO symposium on multidisciplinary analysis and optimization, 2002.
[264] Du X, Chen W. Collaborative reliability analysis under the framework of multidisciplinary systems design. Optimization and Engineering 2005;6:63–84.
[265] Mahadevan S, Smith NL. System risk assessment and allocation in conceptual design analysis. NASA/CR-2003-212162. Langley Research Center; 2003.
[266] Padmanabhan D, Batill S. Decomposition strategies for reliability based optimization in multidisciplinary system design. In: Proceedings of the ninth AIAA/USAF/NASA/ISSMO symposium on multidisciplinary analysis and optimization, 2002.
[267] Ahn J, Kwon JH. Sequential approach to reliability analysis of multidisciplinary analysis systems. Journal of Structural and Multidisciplinary Optimization 2004;28(6):397–406.
[268] Casella G, George EI. Explaining the Gibbs sampler. American Statistician 1992;46(3):167–74.
[269] Beale EML. On minimizing a convex function subject to linear inequalities. Journal of the Royal Statistical Society, Series B (Methodological) 1955;17(2):173–84.
[270] Stougie L. Design and analysis of methods for stochastic integer programming. PhD dissertation, University of Amsterdam, 1985.
[271] Hené TS, Dua V, Pistikopoulos EN. A hybrid parametric/stochastic programming approach for mixed-integer nonlinear problems under uncertainty. Industrial and Engineering Chemistry Research 2002;41(1):67–77.
[272] Bastin F. Nonlinear stochastic programming. MS dissertation, University of Namur, 2001.
[273] Bastin F. Trust-region algorithms for nonlinear stochastic programming and mixed logit models. PhD dissertation, University of Namur, 2004.
[274] Mulvey JM, Vanderbei RJ, Zenios SA. Robust optimization of large-scale systems. Operations Research 1995;43(2):264–81.
[275] Chen X, Sim M, Sun P. A robust optimization perspective on stochastic programming. Operations Research 2007;55(6):1058–71.
[276] Zhang Y, Monder D, Forbes JF. Real-time optimization under parametric uncertainty: a probability constrained approach. Journal of Process Control 2002;12(3):373–89.
[277] Kadam JV, Schlegel M, Srinivasan B, Bonvin D, Marquardt W. Dynamic optimization in the presence of uncertainty: from off-line nominal solution to measurement-based implementation. Journal of Process Control 2007;17(5):389–98.
[278] Ruszczynski A, Shapiro A. Handbooks in operations research and management science: stochastic programming. Amsterdam: Elsevier; 2006.
[279] Kall P, Wallace SW. Stochastic programming. Chichester: John Wiley and Sons; 1994.
[280] Delgado M, Verdegay JL, Vila MA. A general model for fuzzy linear programming. Fuzzy Sets and Systems 1989;29(1):21–9.
[281] Guu S, Wu Y. Two-phase approach for solving the fuzzy linear programming problems. Fuzzy Sets and Systems 1999;107(2):191–5.

[282] Li X, Zhang B, Li H. Computing efficient solutions to fuzzy multiple objective linear programming problems. Fuzzy Sets and Systems 2006;157(10):1328–32.
[283] Shih CJ, Wangsawidjaja RAS. Mixed fuzzy-probabilistic programming approach for multiobjective engineering optimization with random variables. Computers and Structures 1996;59(2):283–90.
[284] Liu B. Theory and practice of uncertain programming. Berlin: Springer-Verlag; 2009.
[285] Ben-Tal A, Nemirovski A. Robust convex optimization. Mathematics of Operations Research 1998;23(4):769–805.
[286] Zhang Y. General robust-optimization formulation for nonlinear programming. Journal of Optimization Theory and Applications 2007;132(1):111–24.
[287] Boni O, Ben-Tal A, Nemirovski A. Robust solutions to conic quadratic problems and their applications. Optimization Engineering 2008;9(1):1–18.
[288] Spall JC. Introduction to stochastic search and optimization. Hoboken, NJ: John Wiley and Sons; 2003.
[289] Ljung L, Pflug GC, Walk H. Stochastic approximation and optimization of random systems. Basel: Birkhäuser Verlag; 1992.
[290] Andradóttir S. A review of simulation optimization techniques. In: Proceedings of the 1998 winter simulation conference, 1998.
[291] Rosen SL, Harmonosky CM, Traband MT. A simulation optimization method that considers uncertainty and multiple performance measures. European Journal of Operational Research 2007;181:315–30.
[292] Parkinson A, Sorensen C, Pourhassan N. A general approach for robust optimal design. Transactions of the ASME 1993;115(1):74–80.
[293] Sundaresan S, Ishii K, Houser DR. A robust optimization procedure with variations on design variables and constraints. Advances in Design Automation 1993;69(1):379–86.
[294] Yu J, Ishii K. Design for robustness based on manufacturing variation patterns. Transactions of the ASME 1998;120(2):196–202.
[295] Shan S, Wang GG. Reliable design space and complete single-loop reliability-based design optimization. Reliability Engineering and System Safety 2008;93(8):1218–30.
[296] Youn BD, Choi KK. A new response surface methodology for reliability-based design optimization. Computers and Structures 2004;82(2–3):241–56.
[297] Cheng G, Xu L, Jiang L. A sequential approximate programming strategy for reliability-based structural optimization. Computers and Structures 2006;84(21):1353–67.
[298] Kuran B. Reliability based design optimization of a solid rocket motor using surrogate models. In: Proceedings of the 43rd AIAA/ASME/SAE/ASEE joint propulsion conference and exhibit, 2007.
[299] Hyeon JB, Chai LB. Reliability-based design optimization using a moment method and a kriging metamodel. Engineering Optimization 2008;40(5):421–38.
[300] Wang L, Kodiyalam S. An efficient method for probabilistic and robust design with non-normal distributions. In: Proceedings of the 43rd AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics, and materials conference, 2002.
[301] Bae H, Grandhi RV, Canfield RA. Reliability-based design optimization under imprecise uncertainty. In: Proceedings of the 46th AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics and materials conference, 2005.
[302] Du X, Sudjianto A, Huang B. Reliability-based design with the mixture of random and interval variables. Journal of Mechanical Design 2005;127(6):1068–76.
[303] Du L, Choi KK. An inverse analysis method for design optimization with both statistical and fuzzy uncertainties. Structural and Multidisciplinary Optimization 2008;37(2):107–19.
[304] Thanedar PB, Kodiyalam S. Structural optimization using probabilistic constraints. Structural and Multidisciplinary Optimization 1992;4(3–4):236–40.
[305] Maglaras G, Ponslet E, Haftka RT, Nikolaidis E, Sensharma P, Cudney HH. Analytical and experimental comparison of probabilistic and deterministic optimization. AIAA Journal 1996;34(7):1512–8.
[306] Phadke M. Quality engineering using robust design. Englewood Cliffs, NJ: Prentice-Hall; 1989.
[307] Steinberg DM. Robust design: experiments for improving quality. In: Ghosh S, Rao CR, editors. Handbook of Statistics, vol. 13. Elsevier Science; 1996. p. 199–240.
[308] Bates RA, Kenett RS, Steinberg DM, Wynn HP. Robust design using computer experiments. In: Progress in industrial mathematics at ECMI 2004. Springer; 2004. p. 564–8.
[309] Ramakrishnan B, Rao SS. A robust optimization approach using Taguchi's loss function for solving nonlinear optimization problems. ASME Advances in Design Automation 1991;DE-32(1):241–8.
[310] Otto JN, Antonsson EK. Extensions to the Taguchi method of product design. Journal of Mechanical Design 1993;115(1):5–13.
[311] Lee K, Park G. Robust optimization considering tolerances of design variables. Computers and Structures 2001;79(1):77–86.
[312] Chen W, Sahai A, Messac A, Sundararaj GJ. Exploration of the effectiveness of physical programming in robust design. Journal of Mechanical Design 2000;122(2):155–63.
[313] Messac A, Ismail-Yahaya A. Multiobjective robust design using physical programming. Structural and Multidisciplinary Optimization 2002;23(5):357–71.


[314] Chen W, Wiecek MM, Zhang J. Quality utility: a compromise programming approach to robust design. Journal of Mechanical Design 1999;121(2):179–87.
[315] Govindaluri SM, Cho BR. Robust design modeling with correlated quality characteristics using a multicriteria decision framework. Journal of Advanced Manufacturing Technology 2007;32(5–6):423–33.
[316] Das I. Robustness optimization for constrained nonlinear programming problems. Engineering Optimization 2000;32(5):585–618.
[317] Rai MM. Robust optimal design with differential evolution. In: Proceedings of the 10th AIAA/ISSMO multidisciplinary analysis and optimization conference, 2004.
[318] Li M, Azarm S, Aute V. A multi-objective genetic algorithm for robust design optimization. In: Proceedings of the 2005 conference on genetic and evolutionary computation, 2005. p. 771–8.
[319] Rangavajhala S, Mullur AA, Messac A. Uncertainty visualization in multiobjective robust design optimization. In: Proceedings of the 47th AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics, and materials conference, 2006.
[320] Koch PN. Probabilistic design: optimizing for six sigma quality. In: Proceedings of the 43rd AIAA/ASME/ASCE/AHS structures, structural dynamics, and materials conference, 2002.
[321] Pyzdek T. The six sigma handbook. 2nd ed. New York: McGraw-Hill; 2003.
[322] Mattson C, Messac A. Handling equality constraints in robust design optimization. In: Proceedings of the 44th AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics and materials conference, 2003.
[323] Rangavajhala S, Mullur A, Messac A. The challenge of equality constraints in robust design optimization: examination and new approach. In: Proceedings of the 46th AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics & materials conference, 2005.
[324] Su J, Renaud JE. Automatic differentiation in robust optimization. AIAA Journal 1997;35(6):1072–9.
[325] Wang H, Kim NH. Robust design using stochastic response surface and sensitivities. In: Proceedings of the first AIAA/ISSMO multidisciplinary analysis and optimization conference, 2006.
[326] Lee Y, Hong K, Choi D. An efficient robust optimal design method for engineering systems with numerical noise. In: Proceedings of the 10th AIAA/ISSMO multidisciplinary analysis and optimization conference, 2004.
[327] Hacker K, Lewis K. Robust design through the use of a hybrid genetic algorithm. In: Proceedings of the DETC'02 ASME 2002 design engineering technical conferences and computers and information in engineering conference, 2002.
[328] Fuchs M, Neumaier A, Girimonte D. Uncertainty modeling in autonomous robust spacecraft system design. Proceedings in Applied Mathematics and Mechanics 2007;7(1):2060041–2.
[329] Zang C, Friswell MI, Mottershead JE. A review of robust optimal design and its application in dynamics. Computers and Structures 2005;83(4–5):315–26.
[330] Egorov I, Kretinin G, Leshchenko I. How to execute robust design optimization. In: Proceedings of the ninth AIAA/ISSMO symposium on multidisciplinary analysis and optimization, 2002.
[331] Koch PN, Wujek B, Golovidov O. A multi-stage, parallel implementation of probabilistic design optimization in an MDO framework. In: Proceedings of the eighth AIAA/USAF/NASA/ISSMO symposium on multidisciplinary analysis and optimization, 2000.
[332] Koch PN, Simpson TW, Allen JK, Mistree F. Statistical approximations for multidisciplinary design optimization: the problem of size. Journal of Aircraft 1999;36(1):275–86.
[333] Sues RH, Rhodes GS. Portable parallel stochastic optimization for the design of aeropropulsion components. NASA CR-195312. Lewis Research Center; 1994.
[334] Agarwal H, Renaud J, Lee J, Watson L. A unilevel method for reliability based design optimization. In: Proceedings of the 45th AIAA/ASME/ASCE/AHS structures, structural dynamics, and materials conference, 2004.
[335] Chen X, Hasselman TK, Neill DJ. Reliability based structural design optimization for practical applications. In: Proceedings of the 38th AIAA/ASME/ASCE/AHS structures, structural dynamics, and materials conference, 1997.
[336] Liang J, Mourelatos ZP. A single-loop method for reliability-based design optimisation. International Journal of Product Development 2008;5(1–2):76–92.
[337] Sues R, Cesare M. An innovative framework for reliability-based MDO. In: Proceedings of the 41st AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics, and materials conference and exhibit, 2000.
[338] Wu YT, Shin Y, Sues R, Cesare M. Safety-factor based approach for probability-based design optimization. In: 42nd AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics and materials conference and exhibit, 2001.
[339] Du X, Chen W. Sequential optimization and reliability assessment method for efficient probabilistic design. In: Proceedings of the ASME 2002 design engineering technical conference and computers and information in engineering conference, 2002.
[340] Agarwal H, Renaud JE. New decoupled framework for reliability-based design optimization. AIAA Journal 2006;44(7):1524–31.
[341] Zhang X, Huang H. Sequential optimization and reliability assessment for multidisciplinary design optimization under aleatory and epistemic

uncertainties. Journal of Structural and Multidisciplinary Optimization 2010;40(1):165–75.
[342] Zhang X, Huang H, Xu H. Multidisciplinary design optimization with discrete and continuous variables of various uncertainties. Journal of Structural and Multidisciplinary Optimization 2010, doi:10.1007/s00158-010-0513-y.
[343] Smith N, Mahadevan S. Integrating system-level and component-level designs under uncertainty. Journal of Spacecraft and Rockets 2005;42(4):752–60.
[344] Royset JO, Der Kiureghian A, Polak E. Reliability-based optimal structural design by the decoupling approach. Reliability Engineering and System Safety 2001;73(3):213–21.
[345] Zou T, Mahadevan S. A direct decoupling approach for efficient reliability-based design optimization. Journal of Structural and Multidisciplinary Optimization 2006;31(3):190–200.
[346] McDonald M, Mahadevan S. All-at-once multidisciplinary optimization with system and component-level reliability constraints. In: Proceedings of the 12th AIAA/ISSMO multidisciplinary analysis and optimization conference, 2008.
[347] Chiralaksanakul A, Mahadevan S. Decoupled approach to multidisciplinary design optimization under uncertainty. Optimization and Engineering 2007;8(1):21–42.
[348] Agarwal H, Renaud JE, Mack JD. A decomposition approach for reliability-based multidisciplinary design optimization. In: Proceedings of the 44th AIAA/ASME/ASCE/AHS structures, structural dynamics, and materials conference, 2003.
[349] Du X, Guo J, Beeram H. Sequential optimization and reliability assessment for multidisciplinary systems design. Journal of Structural and Multidisciplinary Optimization 2008;35(2):117–30.
[350] Ahn J, Kwon J. An efficient strategy for reliability-based multidisciplinary design optimization using BLISS. Journal of Structural and Multidisciplinary Optimization 2006;31(5):363–72.
[351] Yang RJ, Chuang C, Gu L, Li G. Numerical experiments of reliability-based optimization methods. In: 45th AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics & materials conference, 2004.
[352] Yang RJ, Chuang C, Gu L, Li G. Experience with approximate reliability-based optimization methods II: an exhaust system problem. Journal of Structural and Multidisciplinary Optimization 2005;29(6):488–97.
[353] Braun RD, Gage P, Kroo I, Sobieski I. Implementation and performance issues in collaborative optimization. In: Proceedings of the sixth AIAA/NASA/USAF/ISSMO symposium on multidisciplinary analysis and optimization, 1996.
[354] McAllister CD, Simpson TW. Multidisciplinary robust design optimization of an internal combustion engine. Journal of Mechanical Design 2003;125(1):124–30.
[355] Mistree F, Hughes OF, Bras BA. The compromise decision support problem and the adaptive linear programming algorithm. In: Kamat MP, editor. Structural optimization: status and promise. AIAA; 1993. p. 247–89.
[356] DeMiguel A, Murray W. An analysis of collaborative optimization methods. In: Proceedings of the eighth AIAA/USAF/NASA/ISSMO symposium on multidisciplinary analysis and optimization, Long Beach, California, 2000.
[357] Sobieszczanski-Sobieski J. Optimization by decomposition: a step from hierarchic to non-hierarchic systems. In: Second NASA/Air Force symposium on recent advances in multidisciplinary analysis and optimization, 1988.
[358] Sellar RS, Batill SM, Renaud JE. Response surface based, concurrent subspace optimization for multidisciplinary system design. In: Proceedings of the 34th AIAA aerospace sciences meeting and exhibit, 1996.
[359] Padmanabhan D, Batill S. Reliability based optimization using approximations with applications to multi-disciplinary system design. In: Proceedings of the 40th aerospace sciences meeting and exhibit, 2002.
[360] Yao W, Chen X, Wei Y, Gao S. A game theory based composite subspace uncertainty multidisciplinary design optimization procedure. In: Proceedings of the eighth world congress on structural and multidisciplinary optimization, 2009.
[361] Yao W, Guo J, Chen X, van Tooren M. Utilizing uncertainty multidisciplinary design optimization for conceptual design of space systems. In: Proceedings of the eighth conference on systems engineering research, 2010.
[362] Kim HM, Michelena NF, Papalambros PY. Target cascading in optimal system design. Journal of Mechanical Design 2003;125(3):474–80.
[363] Kokkolaras M, Mourelatos ZP, Papalambros PY. Design optimization of hierarchically decomposed multilevel systems under uncertainty. In: Proceedings of the ASME 2004 design engineering technical conference and computers and information in engineering conference, 2005.
[364] Wu YT, Millwater HR, Cruse TA. Advanced probabilistic structural analysis method for implicit performance functions. AIAA Journal 1990;28(19):1663–9.
[365] Liu H, Chen W, Kokkolaras M, et al. Probabilistic analytical target cascading: a moment matching formulation for multilevel optimization under uncertainty. Journal of Mechanical Design 2006;128(4):503–8.
[366] Xiong F, Yin X, Chen W, Yang S. Enhanced probabilistic analytical target cascading with application to multiscale design. In: Proceedings of the eighth world congress on structural and multidisciplinary optimization, 2009.