Computers and Structures 82 (2004) 1101–1112
www.elsevier.com/locate/compstruc
doi:10.1016/j.compstruc.2004.03.014

Epistemic uncertainty quantification techniques including evidence theory for large-scale structures

Ha-Rok Bae a, Ramana V. Grandhi a,*, Robert A. Canfield b

a Department of Mechanical and Materials Engineering, Wright State University, 209 RC, Dayton, OH 45435, USA
b Department of Aeronautics and Astronautics, Air Force Institute of Technology, WPAFB, OH 45433, USA

* Corresponding author. Tel.: +1-937-775-5090; fax: +1-937-775-5147. E-mail address: [email protected] (R.V. Grandhi).

Accepted 5 March 2004
Available online 12 April 2004

Abstract

Over the last decade, probability theory has been studied and embedded in engineering structural design through uncertainty quantification (UQ) analysis, instead of simply assigning safety factors. Recently, however, the scientific and engineering communities have recognized that a single framework (probability theory) is not always sufficient to quantify the uncertainty in a system, because the available data or knowledge may be imprecise. In this paper, evidence theory is proposed as an alternative to classical probability theory for handling imprecise data. The possibility of adopting evidence theory as a general tool of UQ analysis for large-scale built-up structures is investigated with an algorithm that alleviates the computational difficulties. © 2004 Elsevier Ltd. All rights reserved.

Keywords: Evidence theory; Possibility theory; Belief; Plausibility; Multi point approximation

1. Introduction

Probability theory has gained popularity over the last three decades in many applications, such as modeling and quantifying uncertainty in engineering systems. However, the complexity of modern engineering systems has increased with various requirements, such as high performance, efficiency, and cost reduction. Multiple types of uncertainty in a system must also be considered for a robust prediction of the target system performance. Since a probabilistic Uncertainty Quantification (UQ) analysis requires extensive information, the scientific and engineering communities have recently realized that there are limitations in applying the probabilistic framework to their systems. To represent the uncertainty in a system, Helton [1] and Oberkampf and Helton [2] classified uncertainties into two distinct types: aleatory uncertainty and epistemic uncertainty.


Aleatory uncertainty is also called irreducible or inherent uncertainty. Epistemic uncertainty is subjective and reducible uncertainty that stems from a lack of knowledge or data. The most appropriate mathematical representation of aleatory uncertainty is the probabilistic framework, provided the given information is perfect and complete. However, as a system becomes more complex and sophisticated, an accurate prediction cannot be expected from probability theory, because of the epistemic uncertainty induced by imprecise information. A formal theory that can be chosen to handle epistemic uncertainty is possibility theory, which was first introduced by Zadeh [3]. Since then, possibility theory has been applied in many areas. Structural design problems with fuzzy variables were investigated by Wood et al. [4], Antonsson and Otto [5], and Penmetsa and Grandhi [6]. Contrary to classical probability theory, which is best suited to aleatory uncertainty, possibility theory is usually used to quantify only epistemic uncertainty. Until now, when both kinds of uncertainty are present together in a system, UQ analyses have been performed by treating them separately, or by making assumptions to accommodate either a probabilistic framework or a possibilistic framework.



However, because of the flexibility of the basic axioms of evidence theory, not only epistemic uncertainty but also aleatory uncertainty can be tackled in its framework without any baseless assumptions. Even though the capability of evidence theory to handle both types of uncertainty was proposed [1], the use of evidence theory has barely been explored in engineering structural systems. One of the major difficulties in applying evidence theory to an engineering system is the computational cost. Unlike the probability density function (PDF) or the possibility distribution function (the membership function of a fuzzy variable), there is no explicit function representing the given imprecise information in evidence theory. Since many possibly discontinuous sets can be given for an uncertain variable instead of a smooth, continuous, explicit function, intensive computational cost might be inevitable in quantifying uncertainty with evidence theory. In this paper, the possibility of adopting evidence theory as a general tool of UQ analysis for an engineering structural system is investigated with a methodology that can alleviate the computational difficulties of using evidence theory. The technical terms of evidence theory are briefly introduced in Section 2, and the UQ problem definition for an engineering system using evidence theory is presented in Section 3 together with the methodology for reducing the computational cost. The flexibility of evidence theory is discussed in Section 4 with two numerical examples. Finally, summary remarks are presented in Section 5.

2. Evidence theory

Shafer [7] extended Dempster's work and presented evidence theory, also called Dempster–Shafer theory. The main concept of evidence theory is that our knowledge of a given problem can be inherently imprecise. Hence, a bound result, which consists of both belief and plausibility, is presented.

2.1. Frame of discernment (FD)

Any problem of likelihood takes some possible sets as given. These sets might be nested in one another or might partially overlap. The FD is defined by the finest possible subdivisions of these sets. Each finest possible subdivision is called an elementary proposition. The FD consists of all finite elementary propositions and can be viewed as the finite sample space of probability theory. The FD is denoted by X. For example, if the FD is given as X = \{x_1, x_2, x_3\}, then x_1, x_2, and x_3 are elementary propositions that are mutually exclusive of each other.

Various propositions can be expressed by applying connectives for negation, conjunction, and disjunction to elementary propositions. If we let 2^X denote the power set of X, then 2^X contains 2^n distinct propositions that indicate all the possible subset propositions of X, where n is the number of elementary propositions:

2^X = \{\emptyset, \{x_1\}, \{x_2\}, \{x_3\}, \{x_1, x_2\}, \{x_2, x_3\}, \{x_1, x_3\}, X\}

2.2. Basic belief assignment (BBA)

In evidence theory, the basic propagation of information is through the BBA. The BBA expresses the degree of belief in a proposition. The BBA is assigned by means of a mapping function m that expresses our belief as a number in the unit interval [0, 1]:

m : 2^X \to [0, 1]   (1)

The number m(A) represents the portion of total belief assigned exactly to proposition A. The measure m, the basic belief assignment function, must satisfy the following three axioms:

m(A) \ge 0 \quad \text{for any } A \in 2^X   (2)

m(\emptyset) = 0   (3)

\sum_{A \in 2^X} m(A) = 1   (4)

Though these three axioms of evidence theory look similar to those of probability theory, the axioms for the basic belief assignment function are less stringent than those for a probability measure.

2.3. Dempster's rule of combining

Two BBA structures, m_1 and m_2, given by different evidence sources, can be aggregated by the so-called Dempster's rule of combining in order to make a new combined BBA structure, as given by Eq. (5):

m(A) = \frac{\sum_{C_i \cap C_j = A} m_1(C_i) m_2(C_j)}{1 - \sum_{C_i \cap C_j = \emptyset} m_1(C_i) m_2(C_j)}, \quad A \ne \emptyset   (5)

where C_i and C_j denote propositions from each source (m_1 and m_2). In Eq. (5), \sum_{C_i \cap C_j = \emptyset} m_1(C_i) m_2(C_j) can be viewed as the contradiction, or conflict, among the information given by the independent knowledge sources. Even when some conflict is found among the information, Dempster's rule disregards every contradiction by normalizing with the complementary degree of contradiction in order to consider only consistent information. However, this normalization can cause a counterintuitive and numerically unstable combination of information when the information given by the different sources is in significant contradiction or conflict [8,9]. If there is a serious conflict, it is recommended to investigate the given information or to collect more information. Recently, Sentz and Ferson [10] investigated Dempster's rule of combining by comparing its algebraic properties with those of other combination rules and by defining various types of evidence. They concluded that Dempster's rule of combining can be considered reliable under situations of minimal conflict and various types of information sources.
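To make the combination rule of Eq. (5) concrete, the following minimal Python sketch aggregates two BBA structures defined over subsets of the frame of discernment. It is an illustration only: the frozenset representation of propositions, the function name, and the example BBA values are assumptions made here, not material from the paper.

from itertools import product

def combine_dempster(m1, m2):
    # Combine two BBA structures (dicts mapping frozenset propositions to BBAs)
    # with Dempster's rule of combining, Eq. (5).
    combined = {}
    conflict = 0.0
    for (c_i, w_i), (c_j, w_j) in product(m1.items(), m2.items()):
        intersection = c_i & c_j
        if intersection:
            combined[intersection] = combined.get(intersection, 0.0) + w_i * w_j
        else:
            conflict += w_i * w_j          # mass falling on contradictory evidence
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    # Normalize by the complementary degree of conflict, as in Eq. (5)
    return {prop: mass / (1.0 - conflict) for prop, mass in combined.items()}

# Two hypothetical expert opinions over the frame of discernment {x1, x2, x3}
m1 = {frozenset({"x1"}): 0.6, frozenset({"x1", "x2"}): 0.3, frozenset({"x1", "x2", "x3"}): 0.1}
m2 = {frozenset({"x2"}): 0.2, frozenset({"x1", "x3"}): 0.5, frozenset({"x1", "x2", "x3"}): 0.3}
print(combine_dempster(m1, m2))

The mass accumulated on empty intersections is the conflict term of Eq. (5); as it approaches one, the normalization becomes numerically unstable, which is the counterintuitive behavior noted above.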

2.4. Belief and plausibility functions

Due to the lack of information and the various possibilities in constructing a BBA structure, it is more reasonable to present a bound on the total degree of belief in a proposition, as opposed to the single value of probability given as a final result in probability theory. The total degree of belief in a proposition A is expressed within the bounds [Bel(A), Pl(A)], which lie in the unit interval [0, 1] as shown in Fig. 1, where Bel(·) and Pl(·) are given by

Bel(A) = \sum_{C \subseteq A} m(C) : \text{Belief function}   (6)

Pl(A) = \sum_{C \cap A \ne \emptyset} m(C) : \text{Plausibility function}   (7)

Bel(A) is obtained by summing the BBAs of propositions that are fully included in proposition A; it is the total degree of belief. The degree of plausibility Pl(A) is calculated by adding the BBAs of propositions whose intersection with proposition A is not an empty set. That is, every proposition consistent with proposition A, at least partially, is considered to imply proposition A, because the BBA of a proposition is not divided among its subsets. Briefly, Bel(A) is obtained by adding the BBAs of propositions that totally agree with proposition A, as a measure of belief, whereas Pl(A) is calculated by adding the BBAs of propositions that correspond to proposition A totally or partially. In a sense, these two measures constitute lower and upper probability bounds.

Fig. 1. Belief (Bel) and Plausibility (Pl): Bel(A) and Pl(A) bound the likelihood of proposition A; the gap between them is the uncertainty, and the remainder above Pl(A) corresponds to Bel(¬A).
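As a complement to Eqs. (6) and (7), the short Python sketch below evaluates Bel and Pl over a BBA structure stored, as in the previous sketch, as a dictionary from frozenset propositions to BBAs; the data values are hypothetical.

def belief(m, A):
    # Bel(A): sum the BBAs of propositions fully contained in A, Eq. (6)
    return sum(mass for prop, mass in m.items() if prop <= A)

def plausibility(m, A):
    # Pl(A): sum the BBAs of propositions that intersect A, Eq. (7)
    return sum(mass for prop, mass in m.items() if prop & A)

# Hypothetical BBA structure over the frame {x1, x2, x3}
m = {frozenset({"x1"}): 0.5,
     frozenset({"x1", "x2"}): 0.3,
     frozenset({"x1", "x2", "x3"}): 0.2}
A = frozenset({"x1", "x2"})
print(belief(m, A), plausibility(m, A))   # here 0.8 and 1.0, so Bel(A) <= Pl(A)

Because every proposition counted in Bel(A) is also counted in Pl(A), the pair always brackets the likelihood of A, which is the bound [Bel(A), Pl(A)] shown in Fig. 1.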

3. Problem definition in an imprecise information situation

The form of the mathematical model that describes the physical system can be expressed abstractly as

Y = f(X)   (8)

where Y = [y_1, y_2, \ldots, y_n] is a vector of system responses and X = [x_1, x_2, \ldots, x_n] is a vector of input data. In this work, only parametric uncertainty is considered; that is, there is no uncertainty in the defined mathematical model, system failure modes, and so on. When only parametric uncertainty is considered, the uncertainty of Y is determined by the uncertainty of X in the model. Once enough data for the parameters in X are obtained, the parametric uncertainties in X can be expressed by PDFs and probabilistic UQ techniques can be used. When the available data are not sufficient to construct a PDF, upper and lower bounds might be provided from experts' opinions. For such imprecise bound information (epistemic uncertainty) on an uncertain parameter, the Bayesian method can be used within probability theory under the assumption that the imprecise information is given for events which are mutually exclusive and exhaustive [11]; that is, the uncertain information consists of a probability density p on all finite elementary events of S, the universal set of events, such that

p : S \to [0, 1] \quad \text{and} \quad \sum_{s \in S} p(s) = 1   (9)

Hence, when the imprecise information is given for an arbitrary subset of S, the probability information for each elementary event must be reproduced by making some assumption about the probability mass distribution within the subset. On the other hand, in possibility theory, for given bound information a membership function is defined to represent the degree of belonging (or not belonging) to each leveled interval (membership) by taking the uncertain variable as a fuzzy variable. With different levels of the degree of membership (α-cuts), fuzzy subsets of the fuzzy variable are obtained. Since fuzzy sets were originally developed with the contention that meaning in natural language is a matter of degree [12], the fuzzy subsets are consonant sets with corresponding α-cuts. When the imprecise information is given by multiple non-consonant intervals with corresponding degrees of belief, a fuzzy membership function must be approximated in order to solve the problem with possibility theory [13]. In evidence theory, imprecise information expressed by any subset of the FD is assigned to a BBA structure without any additional assumption. The subsets (intervals of an uncertain variable) to which the bodies of information (BBAs) are assigned can be consonant or non-consonant and continuous or discrete. An interval can be the interval of a physical value or the interval of an imprecise statistic.

1104

H.-R. Bae et al. / Computers and Structures 82 (2004) 1101–1112

As mentioned previously, evidence theory gives a bounded result ([Bel, Pl]) due to the lack of information, and this bounded result includes the probability result that can be obtained by assuming some distribution over the given interval information. The measurements (Bel, Pl, and probability) eventually converge to a single value when the information is increased sufficiently. However, unlike a PDF in probability theory and a membership function in possibility theory, the BBA structure in evidence theory cannot be expressed with an explicit function. For multiple uncertain parameters, the joint BBA structure, which is analogous to the joint probability density function in probability theory, is defined for the UQ analysis of a structural system. The possible joint set, denoted by C, is constructed by using the Cartesian product of the propositions of each uncertain parameter. The joint BBA structure must follow the three axioms of a BBA structure. The Joint Frame of Discernment (JFD) is likewise defined by the finest elementary propositions of the possible joint set. For example, for only two uncertain parameters, the possible joint set is defined as

C = a_1 \times a_2 = \{c_k = [a_{1m}, a_{2n}] : a_{1m} \in a_1, a_{2n} \in a_2\}   (10)

where a_i is the proposition set of the ith uncertain parameter, and the BBA of the joint BBA structure is defined by

m(c_k) = m(a_{1m}) \, m(a_{2n})   (11)

Thus, every possible event must be checked in the evaluation of the belief and plausibility functions by finding the maximum and minimum responses,

[Y_{\min}, Y_{\max}] = [\min f(c_k), \max f(c_k)]   (12)

where c_k is an element of the possible joint set C. The methods for computing the response bounds include the optimization method [14], the sampling method [15], and the vertex method [16]. Since the simulation of a structural system is usually performed by numerically intensive procedures, such as Finite Element Analysis (FEA) and Computational Fluid Dynamics (CFD), the sampling method or the optimization method might be infeasible in practice due to high computational cost, even though these methods are robust. The vertex method can be used to reduce the computational cost by checking only the vertices of each joint event to find the maximum and minimum. However, the result of the vertex method is valid only for monotonic system responses. For non-monotonic and highly non-linear system responses, the vertex method can give erroneous results; moreover, its computational cost grows exponentially with the number of uncertain variables and the number of given intervals. To overcome these problems, a cost-effective algorithm with an approximation method is employed to reduce the cost without sacrificing accuracy.
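The following Python sketch illustrates Eqs. (10)-(12): it forms the joint BBA structure of two independent parameters by a Cartesian product and bounds the response over each joint focal element by vertex enumeration, which, as noted above, is exact only for monotonic responses. The interval data, the stand-in response function, and the function names are assumptions made for illustration, not the paper's data.

from itertools import product

def joint_structure(marginals):
    # Joint BBA structure from independent marginal BBA structures, Eqs. (10)-(11).
    # Each marginal is a list of (interval, bba) pairs; each joint focal element
    # is a box (one interval per parameter) carrying the product of the BBAs.
    joint = []
    for combo in product(*marginals):
        box = [interval for interval, _ in combo]
        mass = 1.0
        for _, bba in combo:
            mass *= bba
        joint.append((box, mass))
    return joint

def bel_pl_of_failure(joint, response, y_limit):
    # Bel and Pl of the failure set {Y >= y_limit}.  The response bounds of each
    # focal element, Eq. (12), are taken at the box vertices (monotonic responses);
    # an optimizer or sampler could be substituted for general responses.
    bel = pl = 0.0
    for box, mass in joint:
        corners = [response(*x) for x in product(*box)]
        y_min, y_max = min(corners), max(corners)
        if y_min >= y_limit:
            bel += mass          # the focal element lies entirely in the failure set
        if y_max >= y_limit:
            pl += mass           # the focal element at least touches the failure set
    return bel, pl

# Hypothetical two-parameter data and an explicit stand-in for an FEA response
e_intervals = [((0.8, 1.0), 0.4), ((1.0, 1.2), 0.6)]   # (interval, BBA) pairs
p_intervals = [((0.9, 1.1), 0.7), ((1.1, 1.3), 0.3)]
response = lambda e, p: 2.5 * p / e
print(bel_pl_of_failure(joint_structure([e_intervals, p_intervals]), response, 3.0))

In a real application each response evaluation is an FEA run, which is exactly why the number of joint focal elements, and hence of simulations, drives the cost that the algorithm of Section 3.1 is designed to reduce.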

After presenting the algorithm briefly, three different solution approaches (probability theory, possibility theory, and evidence theory) are discussed with numerical examples.

3.1. The cost-effective algorithm

The main computational cost of a UQ analysis is due to the large number of structural model simulations needed to explore the entire Joint Frame of Discernment (JFD) of the uncertain variables. However, in most UQ analyses of an engineering structural system, the failure region is small compared to the JFD. Therefore, in order to reduce the computational cost, the structural simulation effort can be devoted only to the failure region, instead of the entire JFD, in the evaluation of the belief and plausibility functions. In the algorithm, a sub-optimization is first performed to identify the failure surfaces, as shown in Fig. 2. In this optimization procedure, the exact optimum point (failure boundary point) is not required, so the computational cost of finding the failure boundary point can be reduced by relaxing the convergence criteria. In the next step, an approximation method is applied to construct a surrogate of the limit state function over the identified failure region by deploying approximation-constructing points, as shown in Fig. 3. In this work, the Multi Point Approximation (MPA) method [17] is used. The general formulation of MPA is

\tilde{F}(X) = \sum_{i=1}^{N} w_i(X) \, \tilde{F}_i(X)   (13)

where N is the number of local approximations, X is the vector of uncertain variables (X \in R^n), \tilde{F}_i(X) is a local approximation, and w_i(X) is a weighting function that determines the contribution of each local approximation through distance factors. The accuracy of MPA depends mainly on the quality of the local approximations.

Fig. 2. Identifying the failure region via optimization technique.


Fig. 3. Deploying approximation constructing points and constructed approximation.

In this work, the Two-Point Adaptive Non-linear Approximation (TANA2) method, developed by Wang and Grandhi [18], is employed as the local approximation method. The efficiency and accuracy of this method have been extensively discussed and demonstrated in many applications [17–20]. TANA2 is very efficient when dealing with highly non-linear implicit problems with a large number of design variables. TANA2 approximations are constructed as follows:

\tilde{F}(X) = F(X_2) + \sum_{j=1}^{n} \frac{\partial F(X_2)}{\partial x_j} \frac{x_{j,2}^{1-p_j}}{p_j} \left( x_j^{p_j} - x_{j,2}^{p_j} \right) + \frac{\epsilon}{2} \sum_{j=1}^{n} \left( x_j^{p_j} - x_{j,2}^{p_j} \right)^2   (14)

where X_2 is the expansion point. This approximation is a second-order Taylor series expansion in terms of the intervening variables y_j (y_j = x_j^{p_j}), in which the Hessian matrix has only diagonal elements of the same value \epsilon. Once the surrogate over the failure region is constructed, the computational cost is trivial because the surrogate is a closed-form expression. Thus, any robust computational method (sampling methods, optimization methods, and so on) can be applied without high computational cost by using the surrogate. The algorithm of UQ analysis using evidence theory and the approximation method is shown in Fig. 4.
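As a rough illustration of the surrogate idea behind Eq. (13), the Python sketch below blends local approximations with weights that decay with distance from their expansion points. It is only a schematic stand-in: the inverse-distance weighting and the linear local models are assumptions made here, whereas the paper uses the MPA weighting of Ref. [17] with TANA2 local approximations, Eq. (14).

import numpy as np

def mpa_surrogate(local_models, expansion_points, eps=1e-12):
    # Blend N local approximations as in Eq. (13), with weights w_i(X) that
    # sum to one and favor the model whose expansion point is closest to X.
    points = [np.asarray(p, dtype=float) for p in expansion_points]

    def surrogate(x):
        x = np.asarray(x, dtype=float)
        d2 = np.array([np.sum((x - p) ** 2) for p in points]) + eps
        w = (1.0 / d2) / np.sum(1.0 / d2)
        return float(sum(w_i * f(x) for w_i, f in zip(w, local_models)))

    return surrogate

# Hypothetical linear local models around two expansion points of a 2-D limit state
f1 = lambda x: 1.0 + 0.8 * (x[0] - 1.0) - 0.5 * (x[1] - 1.0)
f2 = lambda x: 1.2 + 0.6 * (x[0] - 1.3) - 0.4 * (x[1] - 0.8)
g_tilde = mpa_surrogate([f1, f2], [(1.0, 1.0), (1.3, 0.8)])
print(g_tilde([1.1, 0.9]))

Once such a closed-form surrogate is available, the repeated maximum/minimum searches required by the belief and plausibility evaluations no longer involve the finite element model.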

Fig. 4. UQ algorithm with an approximation method using evidence theory.

4. Numerical examples

4.1. Case study-I: three bar truss

Fig. 5 shows the structural model of a three bar truss. There are three truss elements, and a static load is applied at node 4.

Fig. 5. Three bar truss: three members (areas A1, A2, A1) joining support nodes 1–3, spaced 10 in. apart, to node 4, where the load P = (40000 lb, −40000 lb) is applied.

The finite element analysis (FEA) of this structure was performed using GENESIS 6.0 [21]. The displacement of node 4 is considered as the limit state response function. It is assumed that uncertainties exist in the independent parameters of elastic modulus (E) and applied force (P). The nominal values of the uncertain parameters are fixed, and the actual values are obtained by multiplying the nominal values by uncertain scale factors. The goal of this problem is to obtain an assessment of the likelihood that the displacement of node 4 is larger than the limit state value (d_limit = 3.0 in.); that is, the likelihood that the displacement lies in the set given by Eq. (15):

d_{fail} = \{d_{Node4} : d_{Node4} \ge d_{limit}\}   (15)

In this example, we consider the situation where an expert gives multiple interval information for the two uncertain parameters, as shown in Fig. 6. The different solution approaches (evidence theory, possibility theory, and probability theory) are investigated and discussed in the following subsections.

4.1.1. Possibility theory approach

Since only parametric uncertainties, which are characteristically aleatory, are considered in this example, it is possible to calculate bounds on the probability of system failure with a frequentist view of the fuzzy sets of possibility theory. A fuzzy set is characterized by a fuzzy membership grade (also called a possibility) that ranges in [0, 1], indicating a continuous increase from non-membership to full membership. A degree of membership is associated with every element x, and a fuzzy set A over the referential X is defined by means of a membership function \mu_F from X to [0, 1]. The referential X can be viewed as the frame of discernment of evidence theory and also as the sample space of probability theory. For any x in X, \mu_F(x) is the membership degree of x in A. The α-level cut of A is the subset defined by \{x : \mu_F(x) \ge \alpha\}. As a special case, a BBA structure can be interpreted as a fuzzy set when the intervals are consonant [22]. In this example, since the given intervals shown in Fig. 6 are not consonant, the possibility theory approach cannot be applied directly. When the given interval sets are not consonant, consonant interval information can be reproduced by performing an inclusion technique. The inclusion procedure proposed by Tonon et al. [13] is applied to the current problem. In the inclusion procedure, the consonant intervals are constructed to give a conservative result while reducing the loss of information. The intervals are ordered based on their effect on the reliability index and extended to include the other intervals. The BBAs of the obtained consonant intervals are corrected by introducing a correction mass b. We refer the reader to Ref. [13] for the details of the inclusion procedure. The reproduced consonant intervals and the plausibility function of the singletons are shown in Figs. 7 and 8. The plausibility function for the focal sets is accepted as the approximate membership function of the fuzzy set in this procedure. When multiple fuzzy variables are considered in a functional relationship, the corresponding fuzzy responses must be computed via Zadeh's extension principle. Based on Zadeh's extension principle, Dong and Wong [23] proposed the Level Interval Algorithm (LIA), also called the Fuzzy Weighted Average algorithm and the vertex method.

Fig. 6. Imprecise information for the scale factors of the uncertain parameters (E and P): six intervals (e1–e6 and p1–p6) over the scale-factor range 0.5–1.5 for each parameter, with the BBA assigned to each interval by the expert.


Fig. 7. Consonant intervals and an approximate membership function for the scale of uncertain parameter (E) using the inclusion technique.

Fig. 8. Consonant intervals and an approximate membership function for the scale of uncertain parameter (P ) using the inclusion technique.

LIA, which is basically the vertex method, is reliable only for a monotonic system response. Several variant methods were developed to improve the computational performance in the fuzzy set context by Liou and Wang [24], Guh et al. [25], and others. In this example, LIA is applied because of its simplicity of implementation. LIA simplifies the process of obtaining the fuzzy output by discretizing the membership functions of the input fuzzy variables into prescribed α-cuts. We refer the reader to Ref. [5] for the details of the LIA procedure.
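A minimal Python sketch of the α-cut propagation underlying LIA is given below, assuming triangular membership functions for the two scale factors, a monotonic stand-in response, and the vertex method at each α level; none of these choices is taken from the paper, which uses the approximate memberships of Figs. 7 and 8 and the FEA displacement.

from itertools import product

def lia_output_cuts(alpha_cut_funcs, response, alphas):
    # Propagate the alpha-cuts of the input fuzzy variables through the response
    # with the vertex method (valid for monotonic responses); each output alpha-cut
    # is the interval spanned by the vertex evaluations.
    output_cuts = {}
    for a in alphas:
        boxes = [cut(a) for cut in alpha_cut_funcs]        # one interval per input
        corners = [response(*x) for x in product(*boxes)]  # vertex evaluations
        output_cuts[a] = (min(corners), max(corners))
    return output_cuts

# Hypothetical triangular memberships for the scale factors e and p, peaking at 1.0
e_cut = lambda a: (0.5 + 0.5 * a, 1.5 - 0.5 * a)
p_cut = lambda a: (0.5 + 0.5 * a, 1.5 - 0.5 * a)
displacement = lambda e, p: 2.5 * p / e                    # stand-in for the FEA response
print(lia_output_cuts([e_cut, p_cut], displacement, [0.0, 0.25, 0.5, 0.75, 1.0]))

Stacking the output intervals over the α levels gives a discretized membership function of the response, from which a possibility of failure can be read against the limit state value.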


Fig. 9. System response (displacement) membership function for the three bar truss.

With the approximate membership functions of the uncertain variables (E and P) obtained from the inclusion technique, the fuzzy response (displacement) is obtained by LIA as shown in Fig. 9. From the response membership function, the possibility of failure is obtained as 0.1308 for the failure set defined in Eq. (15). Further discussion comparing the results of evidence theory and probability theory is presented later.

4.1.2. Probability theory approach

Since, in the probabilistic framework, probability should be assigned only to elementary events, the given imprecise information shown in Fig. 6 is not directly suitable for a probabilistic analysis. In probability theory, when a PDF for an uncertain variable is not available, the uniform distribution function is often used, justified by Laplace's Principle of Insufficient Reason [26]. This principle can be interpreted to mean that all simple events for which a PDF is unknown have equal probabilities. In this example, there is no further information with which to select or approximate a PDF within the given intervals; only the probability masses (BBAs) are assigned by the available evidence (expert opinion or experimental data). The approximate PDFs of the uncertain variables, shown in Figs. 10 and 11, are obtained by assuming that the probability mass in each interval is distributed uniformly. The popular sampling technique, Monte Carlo Simulation (MCS), with 100,000 samples is performed for the obtained PDFs of the uncertain variables (e and p). The resulting failure probability is 0.0058 for the current example. The discussion of this result is presented later.

Fig. 10. PDF of e (scale of elastic modulus) using uniform distribution assumption.

Fig. 11. PDF of p (scale of force) using uniform distribution assumption.
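A sketch of the sampling step described above is given below in Python: each interval receives a probability mass equal to its BBA and a uniform density within it, and plain Monte Carlo simulation estimates the failure probability. The interval data and the explicit response are placeholders, since the actual analysis samples the PDFs of Figs. 10 and 11 and evaluates the truss by FEA.

import random

def sample_piecewise_uniform(intervals):
    # Draw one value from a PDF that spreads each interval's probability mass
    # (taken equal to its BBA) uniformly over that interval.
    r, acc = random.random(), 0.0
    for (low, high), mass in intervals:
        acc += mass
        if r <= acc:
            return random.uniform(low, high)
    return random.uniform(*intervals[-1][0])      # guard against round-off

def mcs_failure_probability(e_intervals, p_intervals, response, limit, n_samples=100000):
    # Monte Carlo estimate of P(response >= limit) under the uniform-within-interval
    # assumption (an illustration, not the paper's code).
    failures = 0
    for _ in range(n_samples):
        e = sample_piecewise_uniform(e_intervals)
        p = sample_piecewise_uniform(p_intervals)
        if response(e, p) >= limit:
            failures += 1
    return failures / n_samples

# Hypothetical interval/BBA data and a stand-in response
e_intervals = [((0.8, 1.0), 0.4), ((1.0, 1.2), 0.6)]
p_intervals = [((0.9, 1.1), 0.7), ((1.1, 1.3), 0.3)]
print(mcs_failure_probability(e_intervals, p_intervals, lambda e, p: 2.5 * p / e, 3.0))

Whatever distribution is assumed inside each interval, the single number that results reflects that assumption, which is precisely the point made in the comparison of Section 4.1.4.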

4.1.3. Evidence theory approach

In evidence theory, unlike possibility theory and probability theory, there is no need to make any assumption or approximation about the given imprecise information, because the BBA structure can consist of any combination of the possible subsets of the FD (see the three axioms of the basic belief assignment). The given imprecise interval information is adopted as a BBA structure itself. For multiple independent uncertain parameters in a structural system, a joint BBA structure, which is analogous to the joint probability density function in probability theory, is defined by using the Cartesian product in the JFD. As a result, the belief and plausibility functions are evaluated and the bounded result [0.0039, 0.0345] is obtained with the cost-effective algorithm. From this result, we have a lower bound of 0.0039 and an upper bound of 0.0345 on the probability of system failure based on the given limit state function. It is intuitive and reasonable to obtain a bounded result instead of a single value such as a probability, because the given information is not precise.

4.1.4. Comparison and discussion of the different approaches

Table 1 shows the results from each approach and the corresponding computational cost. Possibility theory and evidence theory give bounded results, whereas probability theory gives a single-valued result. The necessity in possibility theory is zero because the interval for determining the measurements is set to [d_limit, +\infty). Fig. 12 shows the Complementary Cumulative Functions (CCFs) for each measurement. The CCFs are defined for the set d_fail with varying value of d_limit \in d, where d and d_fail are defined as in Eqs. (16) and (17):

d = \{d_{Tip} : d_{Tip} = f(x), \ x = (x_1, x_2, \ldots, x_n) \in X\}   (16)

d_{fail} = \{d_{Tip} : d_{Tip} \ge d_{limit}, \ d_{limit} \in d\}   (17)

The CCFs can be interpreted in the same way as the cumulative distribution function in probability theory. From this figure, useful insights into the confidence of the results of a UQ analysis with imprecise information can be obtained.


Table 1
Comparison of results and costs in the three bar truss example

UQ approach           UQ result           Solution technique/number of simulations
Possibility theory    [0.0000, 0.1308]    LIA/48
Probability theory    0.0058              MCS/100,000
Evidence theory       [0.0039, 0.0345]    Proposed algorithm/17

Fig. 12. Complementary cumulative measurements of possibility theory, probability theory and evidence theory for the three bar truss example.

Probability theory does not allow any impreciseness in the given information, so it gives a single-valued result, whereas possibility theory and evidence theory give bounded results. In particular, the difference between plausibility and belief in evidence theory can be defined as another Uncertainty (= Pl − Bel). This Uncertainty reflects the lack of confidence in a UQ analysis result. By increasing the available data and knowledge, this difference (Uncertainty) decreases to zero and our confidence in the resulting measurement increases to one. If Pl and Bel are the same for a certain limit state value, so that the Uncertainty is zero, it can be interpreted that there is no doubt about the resulting degree of belief of system failure. Regarding the computational cost, Table 1 shows that the cost-effective algorithm is very efficient. The computational performance of possibility theory and probability theory can be enhanced by using advanced techniques; however, the cost-effective algorithm offers the greatest efficiency and generality. Even in the possibilistic and probabilistic approaches, the algorithm can be incorporated to reduce the computational cost. Detailed discussions of the result of each approach follow.

(1) The result from possibility theory gives the most conservative value, essentially because of Zadeh's extension principle. In that principle, the degree of membership of the system response corresponds to the degree of membership of the overall most preferred set of fuzzy variables, as in Eq. (18):

\mu_F(y) = \sup_{x : y = f(x)} [\mu_F(x)]   (18)

where x can be viewed as a vector of fuzzy variables for a multi-dimensional problem. However, in the inclusion procedure used to reproduce consonant intervals, the location in the referential X where the reliability is maximized must be correctly identified to avoid an extremely conservative result. Hence, there are no unique consonant intervals, and the extension of intervals in the inclusion technique is not limited to only one side; that is, the constructed consonant intervals depend on the given limit state functions. For example, for a convex limit state function in which the reliability-maximizing location is at the middle of X, the original intervals are extended in both directions (right and left) to form the new inclusion intervals. However, for a concave limit state function, which gives the two boundary points of the referential X as the reliability-maximizing locations, the inclusion technique can give an extreme result (0 or 1) for the possibility and necessity measurements unless other assumptions or criteria are introduced. Thus, even though it is not clearly stated in Ref. [13], the inclusion technique can be applied only to a system whose limit state functions are known and monotonic. It should also be noted that, by expanding the intervals to include other intervals in the inclusion technique, the information given for an interval may lose its physical meaning. For example, the BBA of interval e1 in Fig. 7, which can be viewed as a probability mass of that interval, is assigned to a new interval that is the same as the referential X ([0.5, 1.5]) in order to include the other consonant intervals, with the given correction mass b. Moreover, since Zadeh's basic idea of fuzzy sets is that the transition between membership and non-membership of a location in the set is gradual [3], the sharp boundaries of the approximate membership function shown in Fig. 9 should be smoothed by introducing further assumptions. In this example, non-consonant multiple intervals are reproduced as a fuzzy membership function in order to apply the possibilistic approach. Conversely, a membership function can be modeled as a consonant BBA structure and analyzed within the evidence theory framework. When the membership function is modeled by a BBA structure, there is no need for additional techniques or assumptions, once each α-cut is accepted as a level of basic belief. The consonant BBA structure can then be constructed with discretized α-cuts.


(2) Contrary to possibility theory, probability theory gives the smallest prediction of system failure among the upper limits (possibility, plausibility, and probability). With assumptions other than the uniform distribution function, the resulting probability changes significantly. Hence, probability theory can seriously underestimate a possible event unless the additional assumption (here, the uniform distribution) is properly justified. In other words, once an assumption is introduced, the resulting probability is merely a reflection of that assumption on a target system with imprecise information. Moreover, since it gives only a single-valued result, additional techniques might be required to obtain supplementary measurements (expectation, variance, confidence bounds, and so on) that can be used in a decision-making situation.

(3) Evidence theory gives a bounded result ([Bel, Pl]) which always includes the probabilistic result; that is, lower and upper bounds on the probability based on the available information. Two main reasons that structural analysts have not been familiar with evidence theory are the high computational cost and a misunderstanding of its capability to incorporate pre-existing probabilistic information. As discussed throughout this paper, a BBA structure in evidence theory can be used to model both fuzzy sets and probability distribution functions because of its flexibility. That is, different types of information (fuzzy membership functions and PDFs) can be incorporated in one framework to quantify the uncertainty in a system. The bounded result of evidence theory, which tends to be less conservative than that of possibility theory and less marginal than the result of probability theory, can be viewed as the best estimate of system uncertainty, because the given imprecise information is propagated through the given limit state function without any unnecessary assumptions. As shown in Table 1, the computational cost of evidence theory can be reduced significantly by using the cost-effective algorithm. Even though there is no closed-form function for the given imprecise information, the belief and plausibility function evaluations can be performed efficiently by the proposed algorithm. As mentioned previously, the algorithm can also be employed in the possibilistic and probabilistic approaches to reduce the computational cost. Once the surrogate model is constructed, there is no additional cost for updating the result with increased information. For example, suppose that two exact normal PDFs exist for the scale factors e and p (with means of one and standard deviations of 0.2) in the current three bar example, but an imprecise information situation is assumed due to a lack of information or data.

Fig. 13. Discretized PDF (normal distribution) (N : the number of discretization).

Fig. 14. The convergence of Bel, Pl, and probability regarding the number of discretization.

To model this imprecision, discretized exclusive probability sets can be obtained as shown in Fig. 13 with different levels of discretization. As the number of discretization levels increases, Fig. 14 shows that the bound given by evidence theory narrows. This result shows that the three measurements (belief, probability, and plausibility) eventually converge to a single value as the data are increased sufficiently. The updated bounds in Fig. 14 are calculated without additional simulations, because the surrogate for the limit state function has already been constructed.
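The discretization of a precise normal PDF into an interval-valued BBA structure, as illustrated in Fig. 13, can be sketched as follows in Python; the three-sigma truncation and the equal-width intervals are assumptions of this sketch rather than a statement of the discretization used in the paper.

import math

def normal_cdf(x, mu, sigma):
    # Cumulative distribution function of the normal distribution
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def discretize_normal(mu, sigma, n, span=3.0):
    # Split mu +/- span*sigma into n exclusive intervals and assign each interval
    # a BBA equal to its (renormalized) probability mass.
    lo, hi = mu - span * sigma, mu + span * sigma
    edges = [lo + (hi - lo) * k / n for k in range(n + 1)]
    masses = [normal_cdf(edges[k + 1], mu, sigma) - normal_cdf(edges[k], mu, sigma)
              for k in range(n)]
    total = sum(masses)                    # renormalize away the truncated tails
    return [((edges[k], edges[k + 1]), masses[k] / total) for k in range(n)]

# BBA structures for the scale factors e and p (mean 1.0, standard deviation 0.2)
for n in (4, 8, 16):
    bba = discretize_normal(1.0, 0.2, n)
    print(n, round(sum(mass for _, mass in bba), 6))   # each structure sums to 1.0

Propagating such progressively finer BBA structures through the same surrogate illustrates the mechanism behind the narrowing [Bel, Pl] bound of Fig. 14, with no new finite element analyses required.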


4.2. Case study-II: intermediate complexity wing (ICW)

For the second numerical example, the structural model of an intermediate complexity wing is shown in Fig. 15. This is a representative wing-box structure for a fighter aircraft. There are 62 quadrilateral composite membrane elements ([0°/90°/±45°]) for the upper and lower skins and 55 shear elements for the ribs and spars. The root chord nodes are constrained as supports. Static loads, which represent aerodynamic moments and lifting forces, are applied along the surface nodes. The dominant frequency and the tip displacement at the marked point shown in Fig. 15 are considered as multiple limit state functions:

1. Displacement: \frac{Disp_{tip}}{2.0 \ \text{(in.)}} \le 1.0   (19)

2. Frequency: \frac{Freq}{20.0 \ \text{(Hz)}} \le 1.0   (20)

3. Combination: \left[ \frac{Disp_{tip}}{0.45 \ \text{(in.)}} \le 1.0 \right] \cap \left[ \frac{Freq}{6.5 \ \text{(Hz)}} \le 1.0 \right]   (21)

Fig. 15. Intermediate complexity wing structure model.
Fig. 16. Scale factor information for static force from different sources.
Fig. 17. Discretized intervals for elastic modulus with given interval statistics.

In this example, the uncertainties are expressed by intervals of the scale factor for the static loads and by an interval for the statistical mean value of the elastic modulus of the skin elements, given by two information sources as shown in Figs. 16 and 17. The force factor information from the two different sources is aggregated by Dempster's rule of combining, and the averaging discretization method [27] is used to obtain the BBA structure for the interval mean value of the normal distribution of the elastic modulus factor shown in Fig. 17. Surrogates are constructed for each limit state function. The results for the multiple limit state functions are given in Table 2.

Table 2
ICW results using the vertex and proposed methods

Method            Bel      Pl       Number of function evaluations
Vertex method     0.000    0.0101   512
Proposed method   0.000    0.0526   79

The result of the proposed method shows a plausibility of failure of the wing structure as high as 0.0526, which is determined by the third limit state function (the parallel system reliability given in Eq. (21)). When the limit state function is not monotonic, failure events can be missed and the plausibility underestimated by the vertex method, as shown in Table 2, unless other considerations, such as linear variation of the responses, are introduced. By using the proposed algorithm, however, the non-linearity and non-monotonicity are captured, giving more accurate Bel and Pl measures. The number of function evaluations also decreases by approximately 85% when the proposed method is used instead of the simple vertex method. The benefit of the proposed method is expected to grow as the scale of the problem increases.
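For completeness, a sketch of how the combined (parallel-system) event of Eq. (21) can be checked over a joint BBA structure is given below, reusing the list-of-(box, mass) format of the earlier joint-structure sketch. The monotonicity assumption of the vertex evaluation and the independent treatment of the two responses, which makes the plausibility an outer (conservative) estimate, are simplifications of this sketch; the stand-in responses and data are hypothetical.

from itertools import product

def parallel_event_bel_pl(joint, responses, conditions):
    # Bel and Pl of the event in which all limit-state conditions of Eq. (21) hold
    # simultaneously.  Response extremes on each focal element are taken at its
    # vertices; checking the responses separately can over-count Pl because the
    # responses share the same inputs.
    bel = pl = 0.0
    for box, mass in joint:
        vertex_values = [[resp(*x) for x in product(*box)] for resp in responses]
        certain = all(all(cond(v) for v in vals)
                      for cond, vals in zip(conditions, vertex_values))
        possible = all(any(cond(v) for v in vals)
                       for cond, vals in zip(conditions, vertex_values))
        bel += mass if certain else 0.0
        pl += mass if possible else 0.0
    return bel, pl

# Hypothetical joint focal elements and stand-ins for tip displacement and frequency
joint = [([(0.9, 1.1), (0.9, 1.1)], 0.7), ([(0.9, 1.1), (1.1, 1.3)], 0.3)]
disp = lambda e, p: 0.5 * p / e
freq = lambda e, p: 7.0 * e / p
conditions = [lambda d: d / 0.45 <= 1.0, lambda f: f / 6.5 <= 1.0]
print(parallel_event_bel_pl(joint, [disp, freq], conditions))

Because a focal element contributes to Bel only when every vertex satisfies both conditions, the belief of the combined event can never exceed the belief of either condition alone.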


5. Summary

Evidence theory basically originated from classical probability theory. However, because of its flexible framework, both epistemic and aleatory uncertainties can be modeled together. Different solution approaches (possibility theory, probability theory, and evidence theory) were investigated with a three bar truss problem and an aircraft wing structure, and several issues regarding the computational cost and the generality of the evidence theory framework were discussed. The vertex method can be inappropriate for handling imprecise information because of the non-linearity and non-monotonicity of the limit state functions of an engineering structural system. The cost-effective algorithm with MPA is accurate and efficient, and once the surrogate is constructed, there is no additional computational cost for updating the UQ results. This algorithm makes large-scale uncertainty quantification practical and accurate for problems with dozens of variables and implicit limit state functions.

Acknowledgements

This research has been sponsored by the Air Force Office of Scientific Research (AFOSR) through Grant F49620-00-1-0377.

References

[1] Helton JC. Uncertainty and sensitivity analysis in the presence of stochastic and subjective uncertainty. J Stat Comput Simul 1997;57:3–76.
[2] Oberkampf WL, Helton JC. Mathematical representation of uncertainty. Non-Deterministic Approaches Forum 2001, Seattle, WA, AIAA-2001-1645.
[3] Zadeh L. Fuzzy sets. Inf Control 1965;8:338–53.
[4] Wood LK, Otto NK, Antonsson KE. Engineering design calculations with fuzzy parameters. Fuzzy Sets Syst 1992;53:1–20.
[5] Antonsson EK, Otto NK. Improving engineering design with fuzzy sets. In: Dubois D, Prade H, Yager RR, editors. Fuzzy information engineering: a guided tour of applications. New York: John Wiley & Sons; 1997.
[6] Penmetsa RC, Grandhi RV. Efficient estimation of structural reliability for problems with uncertain intervals. Comput Struct 2002;80:1103–12.
[7] Shafer G. A mathematical theory of evidence. Princeton, NJ: Princeton University Press; 1976.
[8] Yager RR, Kacprzyk J, Fedrizzi M. Advances in the Dempster–Shafer theory of evidence. New York: John Wiley & Sons; 1994.
[9] Zadeh L. Review of Shafer's A mathematical theory of evidence. Artif Intell Mag 1984;5:81–3.
[10] Sentz K, Ferson S. Combination of evidence in Dempster–Shafer theory. SAND2002-0835 Report. Sandia National Laboratories; 2002.
[11] Cohen PR. Heuristic reasoning about uncertainty: an artificial intelligence approach. London: Morgan Kaufmann; 1985.
[12] Nguyen HT, Walker EA. A first course in fuzzy logic. Boca Raton: CRC Press; 1997.
[13] Tonon F, Bernardini A, Mammino A. Determination of parameters range in rock engineering by means of random set theory. Reliability Eng Syst Saf 2000;70:241–61.
[14] Arora JS. Introduction to optimum design. New York: McGraw-Hill; 1989.
[15] Walpole RE. Probability and statistics for engineers and scientists. New Jersey: Prentice Hall; 1998.
[16] Dong WM, Shah HC. Vertex method for computing functions of fuzzy variables. Fuzzy Sets Syst 1987;24:65–78.
[17] Xu S, Grandhi RV. Multi-point approximation development: thermal structural optimization case study. Int J Numer Methods Eng 2000;48:1151–64.
[18] Wang LP, Grandhi RV. Improved two-point function approximations for design optimization. AIAA J 1995;33(9):1720–7.
[19] Xu S, Grandhi RV. Structural optimization with thermal and mechanical constraints. J Aircr 1999;36(1):29–35.
[20] Wang LP, Grandhi RV. Multi-point approximations: comparisons using structural size, configuration and shape design. Struct Optimization 1996;12:177–85.
[21] GENESIS user manual. Colorado: Vanderplaats Research & Development; 2000.
[22] Dubois D, Prade H. Random sets and fuzzy interval analysis. Fuzzy Sets Syst 1990;38:308–12.
[23] Dong WM, Wong FS. Fuzzy weighted averages and implementation of the extension principle. Fuzzy Sets Syst 1987;21:183–99.
[24] Liou TS, Wang MJ. Fuzzy weighted average: an improved algorithm. Fuzzy Sets Syst 1992;49:307–15.
[25] Guh YY, Hon CC, Wang KM, Lee ES. Fuzzy weighted average: a max–min paired elimination method. Comput Math Appl 1996;32:115–23.
[26] Savage LJ. The foundations of statistics. New York: Dover Publications; 1972.
[27] Tonon F. Using random set theory to propagate epistemic uncertainty through a mechanical system. Reliability Eng Syst Saf [in press].