Evidence-theory-based model validation method for heat transfer system with epistemic uncertainty


Chong Wang a,∗, Hermann G. Matthies a, Menghui Xu b, Yunlong Li c

a Institute of Scientific Computing, Technische Universität Braunschweig, Braunschweig, 38106, Germany
b Faculty of Mechanical Engineering & Mechanics, Ningbo University, Ningbo, Zhejiang, 315211, PR China
c Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, IL, 61801, United States

Keywords: Model validation; Epistemic uncertainty with limited data; Evidence theory; BPA-based parameter calibration method; Sandia thermal challenge problem

∗ Corresponding author: Institute of Scientific Computing, Technische Universität Braunschweig, Braunschweig, 38106, Germany. E-mail address: [email protected] (C. Wang).

https://doi.org/10.1016/j.ijthermalsci.2018.07.006
Received 20 November 2017; Received in revised form 29 May 2018; Accepted 3 July 2018
1290-0729/ © 2018 Elsevier Masson SAS. All rights reserved.

Abstract

In numerical heat transfer, the model validation problem with respect to epistemic uncertainty, where only a small amount of experimental information is available, has been recognized as a challenging issue. To overcome the drawbacks of traditional probabilistic methods in dealing with limited data, this paper proposes a novel model validation approach based on evidence theory. First, evidence variables are adopted to characterize the uncertain input parameters, where the focal elements are expressed as mutually connected intervals with basic probability assignments (BPAs). For the subsequent prediction of the response focal elements, an interval collocation analysis method with small computational cost is presented. By combining the response BPAs from both experimental measurements and numerical predictions, a new parameter calibration method is then developed to further improve the accuracy of the computational model. Meanwhile, an evidence-theory-based model validation metric is defined to test the model credibility. Eventually, the famous Sandia thermal challenge problem is utilized to verify the feasibility of the presented model validation method in engineering application.

1. Introduction

In thermal engineering, experimental tests and computational simulations are the two major means of system analysis. Large numbers of experimental tests can provide intuitive and reliable results, but the expense is always considerable, especially for complex systems [1,2]. With the rapid development of modern computer technology, computational simulations play an increasingly important role in engineering due to their relatively small cost. However, the strong dependency on computational models raises the critical issue of quantifying the credibility of the simulation accuracy, which provides the decision-maker with necessary information [3–5]. Model validation, defined as the process of determining the degree to which a model is an accurate representation of the real world from the perspective of the intended uses of the model [6], is the general technology for characterizing this credibility of a computational model before practical application.

In recent years, the model validation problem has received considerable attention and intensive investigation from many professional societies and national laboratories [7–9]. The American Institute of Aeronautics and Astronautics (AIAA) and the American Society of Mechanical Engineers (ASME) published guidance documents for model validation in computational fluid dynamics and computational solid mechanics, respectively [10,11]. The U.S. Department of Energy (DoE) emphasized the importance of model validation in the Accelerated Strategic Computing Initiative program [12].

As is well known, uncertainties are widely involved in the real world due to unpredictable environmental factors, inevitable measurement errors and incomplete knowledge [13–15]. Thus, compared with traditional model validation activities in a deterministic framework, uncertainty-based model validation is more feasible and practical [16,17]. Generally speaking, uncertainties can be classified into two categories: aleatory uncertainty and epistemic uncertainty [18]. Given sufficient sample statistical information, aleatory uncertainty is usually quantified as a random variable or stochastic process by probability theory. Up to now, many investigations have been conducted on aleatory-uncertainty-based model validation [19–22]. Based on stochastic uncertainty propagation and data transformations, Chen et al. proposed a generic model validation method, where the number of required physical tests can be efficiently reduced [23]. In order to characterize the coherence between predictions and observations in random uncertain circumstances, four kinds of validation metrics, namely classical hypothesis testing, the Bayes factor, the frequentist's metric and the area metric, were examined on a group of mathematical examples [24]. Using Bayesian updates and prediction-related rejection criteria, Babuska et al. developed a systematic probabilistic approach for model validation [25]. Considering the high stochastic dimension of modeling uncertainties, a nonparametric probabilistic approach was investigated to construct the prior model in the validating process [26]. Besides, for the three famous challenge problems proposed by Sandia National Laboratories, many research results have been obtained in the probabilistic framework [27–30].

In contrast to aleatory uncertainty analysis with sufficient available information, epistemic uncertainty is more challenging because of the incomplete knowledge, especially in the case of limited data [31,32]. Several quantification methods, such as the convex model [33], fuzzy set [34], interval variable [35] and evidence theory [36], have been proposed to characterize epistemic uncertainty, among which evidence theory is considered to be the most capable. This is because the concepts in evidence theory, such as the focal element and the basic probability assignment, can be flexibly defined and utilized, which means that evidence theory can provide equivalent transformations to the other models by necessary extensions. Along with the widespread attention of the recent two decades, evidence theory has achieved much in uncertainty quantification and reliability analysis [37–40]. To describe imprecise data, Bae et al. developed a novel epistemic uncertainty quantification technique based on evidence theory, which can be considered an effective alternative to the classical probabilistic methods [41]. Based on the Jacobi polynomial, Yin et al. proposed an evidence-theory-based method for the response analysis of acoustic systems [42]. To improve the computational efficiency of epistemic uncertainty analysis, Xie et al. presented a radial point interpolation method in evidence theory [43]. In the research work of Helton and Oberkampf, the performance of evidence theory in reliability analysis was summarized by a simple algebraic function [44]. Using the experiment design technique, Zhang et al. presented an efficient response surface method to evaluate structural reliability in evidence theory [45]. From the overall perspective, evidence theory shows excellent superiority in epistemic uncertainty characterization and response prediction. Unfortunately, its application in model validation has not been reported up to now.

In this study, a novel model validation method based on evidence theory is proposed for engineering heat transfer systems under epistemic uncertainties, which can efficiently assess the credibility of the computational model. The structure of this paper is organized as follows. The fundamental concepts in evidence theory are first reviewed in Section 2. Subsequently, by using evidence variables to quantify the input uncertainties, an interval collocation analysis method is proposed in Section 3 to efficiently predict the response focal elements. In order to improve the prediction accuracy of the computational model, a parameter calibration framework is established in Section 4 by updating the response BPA. The famous Sandia thermal challenge problem is provided as the numerical example in Section 5 to verify the performance of the proposed method. Finally, we conclude the paper with a brief discussion in Section 6.


2. Fundamental concepts in evidence theory

Evidence theory, also known as Dempster-Shafer (DS) theory, was proposed by Dempster and Shafer to characterize epistemic uncertainty [36]. As the basis of evidence theory, some fundamental concepts, such as the frame of discernment (FD), the basic probability assignment (BPA) and the combination rules, are first reviewed in this section.

The frame of discernment (FD) Θ is defined as an exhaustive set consisting of a group of mutually exclusive propositions

$$\Theta = \{\theta_1, \theta_2, ..., \theta_n\} \quad (1)$$

where $\theta_i$ denotes the $i$th elementary proposition and $n$ is the number of elementary propositions. Subsequently, all subsets of the FD Θ construct a power set $2^\Theta$, which represents all possible propositions

$$2^\Theta = \{\Phi, \{\theta_1\}, \{\theta_2\}, ..., \{\theta_n\}, \{\theta_1, \theta_2\}, \{\theta_1, \theta_3\}, ..., \{\theta_1, \theta_n\}, ..., \Theta\} \quad (2)$$

where $\Phi$ stands for the empty set, and the total number of elements in $2^\Theta$ is $2^n$.

In evidence theory, probability can be assigned to any element of the power set $2^\Theta$. In other words, not only every elementary proposition but also every combination of propositions can obtain an independent probability, which makes it possible to represent imprecise probability information. This kind of probability description is called the basic probability assignment (BPA), denoted by a function $m: 2^\Theta \to [0, 1]$. Similar to the probability density function in probability theory, the BPA quantifies the elementary belief measure of each proposition in evidence theory. Thus, for any proposition $A \in 2^\Theta$, three BPA conditions must be satisfied:

$$\text{(i)}\ m(A) \geq 0\ \text{for any}\ A \in 2^\Theta; \qquad \text{(ii)}\ m(\Phi) = 0; \qquad \text{(iii)}\ \sum_{A \in 2^\Theta} m(A) = 1 \quad (3)$$

where a proposition $A$ with positive BPA $m(A) > 0$ is named a focal element.

In many cases, the evidential information may come from different sources, so it is crucial to combine the available information to update the BPA. For the same FD Θ, assume that two independent BPAs $m_1$ and $m_2$ have been derived from two different sources. Introducing $B$ and $C$ to express the corresponding propositions, the popular Dempster-based combination rule to update the BPA $m$ can be formulated as

$$m(A) = \begin{cases} 0 & \text{if}\ A = \Phi \\ \dfrac{1}{1-K} \sum_{B \cap C = A} m_1(B) \times m_2(C) & \text{if}\ A \neq \Phi \end{cases} \quad (4)$$

where

$$K = \sum_{B \cap C = \Phi} m_1(B) \times m_2(C) \quad (5)$$

stands for all the inconsistent information. Another combination rule, the Yager-based combination rule, is considered more suitable for problems with strongly inconsistent information; its updated BPA is calculated by

$$m(A) = \begin{cases} 0 & \text{if}\ A = \Phi \\ \sum_{B \cap C = A} m_1(B) \times m_2(C) & \text{if}\ A \neq \Phi\ \text{and}\ A \neq \Theta \\ \sum_{B \cap C = A} m_1(B) \times m_2(C) + K & \text{if}\ A = \Theta \end{cases} \quad (6)$$

where $K$ is the same as that in Eq. (5).
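Although the paper presents no code, the two rules are easy to make concrete. The following minimal Python sketch is our own illustration: it assumes focal elements are closed intervals stored as (lo, hi) tuples and a BPA is a dict from focal elements to masses; the sample masses are hypothetical.

```python
# Illustrative sketch (not from the paper): interval focal elements as
# (lo, hi) tuples, a BPA as a dict {focal element: mass}.

def intersect(b, c):
    """Intersection of two closed-interval focal elements; None if empty."""
    lo, hi = max(b[0], c[0]), min(b[1], c[1])
    return (lo, hi) if lo <= hi else None

def combine(m1, m2, rule="dempster", theta=None):
    """Combine two BPAs on the same FD by Eq. (4) or Eq. (6).

    `theta` is the interval representing the whole frame; it is needed only
    by the Yager rule, which attributes the conflict mass K to Theta.
    """
    combined, K = {}, 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            a = intersect(b, c)
            if a is None:
                K += mb * mc                      # inconsistent mass, Eq. (5)
            else:
                combined[a] = combined.get(a, 0.0) + mb * mc
    if rule == "dempster":
        return {a: v / (1.0 - K) for a, v in combined.items()}   # Eq. (4)
    combined[theta] = combined.get(theta, 0.0) + K               # Eq. (6)
    return combined

m1 = {(0.0, 0.4): 0.6, (0.6, 1.0): 0.4}   # hypothetical source 1
m2 = {(0.0, 0.5): 0.5, (0.5, 1.0): 0.5}   # hypothetical source 2
print(combine(m1, m2))                                  # Dempster update
print(combine(m1, m2, rule="yager", theta=(0.0, 1.0)))  # Yager update
```

In this example the conflict mass is K = 0.5; the Dempster rule renormalizes the surviving masses, while the Yager rule moves the conflict to the whole frame (0.0, 1.0).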

3. Response prediction with input evidence variables

In evidence theory, the elementary propositions in the FD shown in Eq. (1) can take various forms. However, in a practical heat transfer system, an epistemically uncertain parameter usually fluctuates around its nominal value. Thus, it is reasonable to adopt continuous intervals to denote the elementary propositions in the FD.

3.1. Input parameter characterization by evidence variables

First, the system input parameters with epistemic uncertainty can be modeled as $l$ independent evidence variables

$$\mathbf{X} = (X_1, X_2, ..., X_l) \quad (7)$$

where $X_i \in [\underline{X}_i, \overline{X}_i]$ stands for the $i$th evidence variable, the interval $[\underline{X}_i, \overline{X}_i]$ represents the FD range of $X_i$, and $\mathbf{X}$ is the evidence variable vector. For each evidence variable $X_i$, the elementary propositions in the FD can be expressed in subinterval form.


In practical application, due to conflicting information from different sources, some gaps or overlaps may exist between the subinterval-based elementary propositions [46]. For simplicity, however, many existing studies on evidence theory adopt connected subintervals with no gaps or overlaps to model the evidential uncertainty [37,40,42,45]. This paper continues to employ this simplified construction, where the elementary propositions in the FD are written as

$$(x_i)_j = \left[ \underline{X}_i + \sum_{k=1}^{j-1} \Delta_{k,i},\ \underline{X}_i + \sum_{k=1}^{j} \Delta_{k,i} \right], \quad j = 1, 2, ..., M_i \quad (8)$$

where $(x_i)_j$ stands for the $j$th elementary proposition subinterval of the $i$th evidence variable $X_i$, $M_i$ is the number of elementary proposition subintervals, and $\Delta_{k,i}$ denotes the width of the $k$th subinterval, with all the independent widths satisfying

$$\sum_{k=1}^{M_i} \Delta_{k,i} = \overline{X}_i - \underline{X}_i \quad (9)$$

From the construction of elementary proposition subintervals in Eq. (8), it is obvious that two adjacent subintervals $(x_i)_j$ and $(x_i)_{j+1}$ are connected but mutually exclusive, and that the union of all subintervals covers the entire FD range $[\underline{X}_i, \overline{X}_i]$. Without considering proposition combinations, the Bayesian structure of evidence theory [47] is investigated in this study, where the BPAs are assumed to be assigned only to the elementary propositions $(x_i)_j$, i.e.

$$\sum_{j=1}^{M_i} m((x_i)_j) = 1, \quad i = 1, 2, ..., l \quad (10)$$
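As a concrete illustration of this construction (our own sketch, with uniform widths and uniform BPAs chosen purely for simplicity; Eqs. (8)-(10) allow both to vary), one might write:

```python
# Sketch of Eqs. (8)-(10): split the FD range [x_lo, x_hi] into M connected
# subintervals with user-supplied widths and BPAs.
def focal_elements(x_lo, x_hi, widths, bpas):
    assert abs(sum(widths) - (x_hi - x_lo)) < 1e-12   # Eq. (9)
    assert abs(sum(bpas) - 1.0) < 1e-12               # Eq. (10)
    elems, left = [], x_lo
    for w, m in zip(widths, bpas):
        elems.append(((left, left + w), m))           # subinterval (x_i)_j, Eq. (8)
        left += w
    return elems

M = 4                                                 # illustrative division
elems = focal_elements(0.0, 1.0, widths=[1.0 / M] * M, bpas=[1.0 / M] * M)
for (lo, hi), m in elems:
    print(f"[{lo:.2f}, {hi:.2f}]  m = {m}")
```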

Subsequently, by using the Cartesian product operation, the joint FD for the evidence variable vector $\mathbf{X}$ can be defined as

$$\Theta_V = X_1 \times X_2 \times \cdots \times X_l = \{v_p = (x_{1,p}, x_{2,p}, ..., x_{l,p}) \mid x_{i,p} \in X_i,\ i = 1, 2, ..., l\} \quad (11)$$

where $\Theta_V$ represents the joint FD defined on the $l$-dimensional hypercube domain $[\underline{X}_1, \overline{X}_1] \times [\underline{X}_2, \overline{X}_2] \times \cdots \times [\underline{X}_l, \overline{X}_l]$, $v_p$ stands for the $p$th joint focal element of the evidence variable vector $\mathbf{X}$, and $x_{i,p}$ denotes the individual focal element of the $i$th evidence variable $X_i$ in the $p$th joint focal element. By taking one focal element out of each $X_i$, a total of $M = M_1 \times M_2 \times \cdots \times M_l$ joint focal elements are yielded in the joint FD, and the corresponding joint BPA for $v_p$ can be calculated by

$$m(v_p) = \prod_{i=1}^{l} m(x_{i,p}) = m(x_{1,p}) \times m(x_{2,p}) \times \cdots \times m(x_{l,p}), \quad p = 1, 2, ..., M \quad (12)$$
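The joint construction in Eqs. (11)-(12) is a Cartesian product of the per-variable focal elements with multiplied BPAs; a short sketch (with hypothetical two-variable inputs) follows:

```python
# Sketch of Eqs. (11)-(12): joint focal elements are l-dimensional boxes,
# and the joint BPA is the product of the individual BPAs.
from itertools import product

def joint_focal_elements(variables):
    """variables: one list of (interval, bpa) pairs per evidence variable."""
    for combo in product(*variables):
        box = tuple(interval for interval, _ in combo)   # hypercube v_p, Eq. (11)
        bpa = 1.0
        for _, m in combo:
            bpa *= m                                     # Eq. (12)
        yield box, bpa

X1 = [((0.0, 0.5), 0.5), ((0.5, 1.0), 0.5)]              # hypothetical inputs
X2 = [((1.0, 1.5), 0.3), ((1.5, 2.0), 0.7)]
joint = list(joint_focal_elements([X1, X2]))
print(len(joint), sum(bpa for _, bpa in joint))          # M = 4 boxes, BPAs sum to 1
```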

3.2. Response prediction by interval collocation analysis method

Different heat transfer systems have different types of mathematical governing equations [48]. Without loss of generality, the computational model for heat transfer systems with evidence variables can be universally expressed by an implicit function

$$f(\mathbf{X}, Y(\mathbf{X})) = 0 \quad (13)$$

where $\mathbf{X} = (X_1, X_2, ..., X_l)$ is the input evidence variable vector, and $Y(\mathbf{X})$ stands for the output temperature response, which becomes an evidence variable with respect to $\mathbf{X}$.

Based on the above interval assumption for the individual focal elements in Eq. (8), it can easily be seen from Eq. (11) that the joint focal element $v_p$ is an $l$-dimensional hypercube. Therefore, with respect to each joint focal element $v_p$, the temperature response $Y(v_p)$ will change in a certain range $Y_p^I$, whose lower bound $\underline{Y}(v_p)$ and upper bound $\overline{Y}(v_p)$ can be quantified by

$$\underline{Y}(v_p) = \min_{v} \{Y(v) \mid f(v, Y(v)) = 0,\ v \in v_p\}, \qquad \overline{Y}(v_p) = \max_{v} \{Y(v) \mid f(v, Y(v)) = 0,\ v \in v_p\} \quad (14)$$

By treating the intervals $Y_p^I = [\underline{Y}(v_p), \overline{Y}(v_p)],\ p = 1, 2, ..., M$ as the focal elements of the response evidence variable $Y(\mathbf{X})$, the response FD $\Theta_Y$ can be constructed as

$$\Theta_Y = \{Y_p^I \mid p = 1, 2, ..., M\} = \{Y_1^I, Y_2^I, ..., Y_M^I\} \quad (15)$$

where the computational response BPA $m_c$ for each $Y_p^I$ can be determined by

$$m_c(Y_p^I) = m(v_p), \quad p = 1, 2, ..., M \quad (16)$$

Apparently, the crucial issue for response prediction with input evidence variables is to calculate the interval in Eq. (14) under each joint focal element $v_p$. As is known, direct optimization methods can obtain the most accurate interval results [49], but the computational cost caused by the repetitive full-scale simulations in the optimization iterations is usually very large, especially for problems with a large number of evidence variables. Recently, the surrogate model technique has been successfully applied in optimization-based methods [50], but the repetitive surrogate model constructions for the various joint focal elements also cause a heavy computational burden. In this section, an interval collocation analysis method with relatively small computational cost is proposed as an alternative for response prediction.

Firstly, using the difference method, the monotonicity change of the output temperature response $Y(v_p)$ with respect to each individual focal element $x_{i,p} \in v_p$ in the interval range $[\underline{x}_{i,p}, \overline{x}_{i,p}]$ can be approximately prejudged by the following function

$$h(Y(v_p), x_{i,p}) = \left( Y(v_p)\big|_{x_{i,p}+\delta} - Y(v_p)\big|_{x_{i,p}} \right) \cdot \left( Y(v_p)\big|_{x_{i,p}} - Y(v_p)\big|_{x_{i,p}-\delta} \right), \quad \underline{x}_{i,p} < x_{i,p} < \overline{x}_{i,p} \quad (17)$$

where $\delta$ is a small quantity, and $Y(v_p)|_{x_{i,p}+\delta}$, $Y(v_p)|_{x_{i,p}}$, $Y(v_p)|_{x_{i,p}-\delta}$ denote the deterministic temperature responses at the points $x_{i,p}+\delta$, $x_{i,p}$, $x_{i,p}-\delta$, respectively. If there is no point at which the inequality $h(Y(v_p), x_{i,p}) < 0$ is satisfied, the response $Y(v_p)$ is globally monotonic in the interval range $[\underline{x}_{i,p}, \overline{x}_{i,p}]$. Thus, only the lower bound $\underline{x}_{i,p}$ and upper bound $\overline{x}_{i,p}$ are selected as the individual collocation points with respect to $x_{i,p}$, and the collocation point set $S_i$ is written as

$$S_i = \{\underline{x}_{i,p},\ \overline{x}_{i,p}\} \quad (18)$$

Conversely, if there are $N_i$ points at which the inequality $h(Y(v_p), x_{i,p}) < 0$ is satisfied, the monotonicity of $Y(v_p)$ changes on the two sides of each such point. These points can be approximated as extreme points and are added to the collocation point set as follows

$$S_i = \{\underline{x}_{i,p},\ \overline{x}_{i,p},\ x_{i,p}^{1,*},\ x_{i,p}^{2,*},\ ...,\ x_{i,p}^{N_i,*}\} \quad (19)$$
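A rough sketch of this prejudging step is given below; it scans a coarse grid over one focal element interval (the other coordinates being held fixed inside the response callable), with the grid step playing the role of the small quantity δ in Eq. (17). The grid density is an assumed tuning parameter, not a value prescribed by the paper.

```python
# Sketch of Eqs. (17)-(19): detect sign changes of successive differences to
# approximate the extreme points, then assemble the collocation point set S_i.
import numpy as np

def collocation_points(response, lo, hi, n_scan=101):
    xs = np.linspace(lo, hi, n_scan)
    ys = np.array([response(x) for x in xs])
    d = np.diff(ys)                               # finite differences, Eq. (17)
    pts = [lo, hi]                                # endpoints, Eq. (18)
    for j in range(len(d) - 1):
        if d[j] * d[j + 1] < 0:                   # monotonicity change
            pts.append(xs[j + 1])                 # approximate extremum, Eq. (19)
    return sorted(pts)

# A response with one interior minimum: the set contains both endpoints
# plus the detected monotonicity-changing point near x = 0.3.
print(collocation_points(lambda x: (x - 0.3) ** 2, 0.0, 1.0))
```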

where $x_{i,p}^{j,*}$ denotes the $j$th monotonicity changing point in the interval range $[\underline{x}_{i,p}, \overline{x}_{i,p}]$. According to the Cartesian product, the joint collocation point set $S$ for the joint focal element $v_p$ can be constructed as

$$S = S_1 \times S_2 \times \cdots \times S_l \quad (20)$$

where the total number of joint collocation points is

$$N = (N_1 + 2) \times (N_2 + 2) \times \cdots \times (N_l + 2) \quad (21)$$

As is known, the extreme values of a continuous function are attained at the variable endpoints or at the approximate extreme points. Therefore, the lower bound $\underline{Y}(v_p)$ and upper bound $\overline{Y}(v_p)$ with respect to the joint focal element $v_p$ can eventually be derived via the finite simulations at the $N$ joint collocation points

$$\underline{Y}(v_p) = \min_{i=1,...,N} \{Y(v_{p,i}) \mid f(v_{p,i}, Y(v_{p,i})) = 0\}, \qquad \overline{Y}(v_p) = \max_{i=1,...,N} \{Y(v_{p,i}) \mid f(v_{p,i}, Y(v_{p,i})) = 0\} \quad (22)$$

where $v_{p,1}, v_{p,2}, ..., v_{p,N}$ stand for the $N$ joint collocation points of $v_p$. Overall, by extracting the approximate extreme points, the proposed interval collocation analysis method can efficiently predict the response focal elements with relatively high computational accuracy. However, from Eqs. (12) and (22) it can be seen that the total computational cost caused by the $M$ joint focal elements and $N$ joint collocation points will be huge if the number of evidence variables $l$ and the numbers of individual collocation points $N_i$ are large enough. But as shown in Eq. (8), the original evidence variable has been divided into focal elements with relatively small subintervals, so it can be expected that the required individual collocation points in each focal element subinterval will not be too many. Besides, considering that a similar computing procedure is run for the various joint focal elements $v_p,\ p = 1, ..., M$, existing parallel simulation methods can be utilized to further reduce the executing time [51].
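Once the per-variable collocation sets are available, Eqs. (20)-(22) reduce the bound computation to deterministic simulations at their Cartesian product, as in the following sketch (the two-input response is hypothetical and monotonic, so only the four corners are needed):

```python
# Sketch of Eqs. (20)-(22): response bounds over a joint focal element from
# the finite simulations at the joint collocation points.
from itertools import product

def response_bounds(response, collocation_sets):
    values = [response(*pt) for pt in product(*collocation_sets)]  # Eq. (20)
    return min(values), max(values)                                # Eq. (22)

# Monotonic in both inputs on this box, hence N = (0+2)*(0+2) = 4 by Eq. (21).
lb, ub = response_bounds(lambda a, b: a / b, [(0.03, 0.07), (2.8, 5.1)])
print(lb, ub)
```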

4. Parameter calibration via BPA

Another important issue in model validation is to compare the experimental measurement results with the numerical prediction results for the system output responses. Introduce the notation $Y_i^{mv},\ i = 1, 2, ..., N_{mv}$ to represent the $N_{mv}$ experimental response measurements for model validation. In evidence theory, if every experimental measurement $Y_i^{mv}$ falls within the computational response FD, it can be declared that the computational model is consistent with the experimental results. In other words, the evidence-theory-based model validation metric (EMVM) requires every experimental measurement $Y_i^{mv}$ to be included in at least one computational response focal element $Y_p^I$, which can be formulated as

$$\forall\, i \leq N_{mv},\ \exists\, p \leq M,\ \text{s.t.}\ Y_i^{mv} \in Y_p^I \quad (23)$$
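Eq. (23) translates almost literally into code; the response focal elements and measurements below are hypothetical numbers for illustration.

```python
# Direct transcription of the EMVM in Eq. (23).
def emvm_satisfied(measurements, response_focal_elements):
    return all(any(lo <= y <= hi for lo, hi in response_focal_elements)
               for y in measurements)

Y_I = [(100.0, 150.0), (150.0, 220.0)]      # computational focal elements Y_p^I
print(emvm_satisfied([120.0, 180.0], Y_I))  # True: every Y_i^mv is covered
print(emvm_satisfied([120.0, 250.0], Y_I))  # False: 250 lies outside all Y_p^I
```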

where $M$ is the number of response focal elements in the computational model. However, it should be pointed out that a computational response FD obtained from overly conservative input evidence variables is still meaningless, even if the EMVM in Eq. (23) is satisfied. Therefore, before the model validation, it is necessary to calibrate the input evidence variables of the computational model based on the available experimental response information. In order to distinguish the experimental response measurements used in model validation from those used in parameter calibration, we denote the latter as $Y_i^{pc},\ i = 1, 2, ..., N_{pc}$, where $N_{pc}$ is the number of experimental responses in parameter calibration.

Firstly, under the response FD $\Theta_Y$ in Eq. (15), construct the experimental response BPA $m_e(Y_p^I)$ by using the available experimental data. Each deterministic experimental response $Y_i^{pc},\ i = 1, 2, ..., N_{pc}$ is approximately assigned the same belief degree $1/N_{pc}$. Furthermore, if it falls within $n_i$ response focal elements $Y_{i1}^I, Y_{i2}^I, ..., Y_{in_i}^I$, its belief degree is uniformly distributed over these $n_i$ focal elements. Consequently, the experimental response BPA $m_e(Y_p^I)$ can be accumulated as

$$m_e(Y_p^I) = \sum_{i=1}^{N_{pc}} \frac{1}{N_{pc} \cdot n_i} \cdot \mathrm{sign}(Y_i^{pc}, Y_p^I) \quad (24)$$

where $\mathrm{sign}(Y_i^{pc}, Y_p^I)$ is the sign function and satisfies

$$\mathrm{sign}(Y_i^{pc}, Y_p^I) = \begin{cases} 0 & \text{if}\ Y_i^{pc} \notin Y_p^I \\ 1 & \text{if}\ Y_i^{pc} \in Y_p^I \end{cases} \quad (25)$$
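The accumulation of Eqs. (24)-(25) can be sketched as follows; note how a measurement falling in n_i > 1 focal elements (here the shared endpoint 150.0) has its belief 1/N_pc split uniformly among them.

```python
# Sketch of Eqs. (24)-(25): experimental response BPA from N_pc measurements.
def experimental_bpa(measurements, focal_elements):
    m_e = {fe: 0.0 for fe in focal_elements}
    N_pc = len(measurements)
    for y in measurements:
        hits = [fe for fe in focal_elements if fe[0] <= y <= fe[1]]
        for fe in hits:                        # n_i = len(hits)
            m_e[fe] += 1.0 / (N_pc * len(hits))
    return m_e

Y_I = [(100.0, 150.0), (150.0, 220.0)]
print(experimental_bpa([120.0, 150.0, 180.0], Y_I))  # masses sum to 1
```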

Subsequently, by combining the experimental response BPA $m_e(Y_p^I)$ and the computational response BPA $m_c(Y_p^I)$ in Eq. (16), the updated response BPA can be derived via the Dempster-based combination rule or the Yager-based combination rule

$$m(Y_p^I) = \frac{\sum_{B \cap C = Y_p^I} m_c(B) \times m_e(C)}{1 - K} \ \text{(Dempster)}, \qquad m(Y_p^I) = \sum_{B \cap C = Y_p^I} m_c(B) \times m_e(C) + K \ \text{(Yager)}, \qquad p = 1, 2, ..., M \quad (26)$$

where

$$K = \sum_{B \cap C = \Phi} m_c(B) \times m_e(C) \quad (27)$$

and $B$, $C$ represent the computational focal elements and experimental focal elements, respectively.

Eventually, extract each response focal element $Y_q^I$ whose updated response BPA is zero, $m(Y_q^I) = 0$, and remove the corresponding focal elements $x_{i,q}$ from the original input evidence variables $X_i,\ i = 1, 2, ..., l$. The remaining focal elements constitute the updated input evidence variables $\tilde{X}_i,\ i = 1, 2, ..., l$. If the EMVM in Eq. (23) can still be satisfied under the updated framework, the evidence variables $\tilde{X}_i$ are taken as the eventual parameter calibration results.

Briefly speaking, based on the basic concepts of focal element and BPA in evidence theory, the epistemic uncertainties in model validation can be well quantified. The detailed procedure of the proposed evidence-theory-based model validation method is represented by the flowchart in Fig. 1. Compared with the initial estimation of the input evidence variables, the parameter calibration results derived via the response BPA will be closer to the practical cases, as the information in the available experimental responses is fully utilized.

Fig. 1. Flowchart of evidence-theory-based model validation method.
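To make the calibration step concrete, the following sketch (our illustration of Section 4, not code from the paper) performs the Dempster update of Eqs. (26)-(27) and discards the focal elements whose updated BPA vanishes. Since m_c and m_e are both assigned on the same partition {Y_p^I} of the response FD, distinct focal elements have empty intersection and the combination reduces to an elementwise product.

```python
# Sketch of the BPA-based calibration step, assuming both BPAs live on the
# same response focal elements {Y_p^I} (so B and C intersect only when B = C,
# and Eq. (26) reduces to an elementwise product).
def calibrate(response_elems, m_c, m_e, input_elems):
    """response_elems[p] is Y_p^I, generated by the joint input box input_elems[p]."""
    prod = [m_c[y] * m_e[y] for y in response_elems]
    K = 1.0 - sum(prod)                      # conflict mass, Eq. (27)
    m = [v / (1.0 - K) for v in prod]        # Dempster update, Eq. (26)
    kept = [p for p, v in enumerate(m) if v > 0.0]
    return ([response_elems[p] for p in kept],   # calibrated response FD
            [input_elems[p] for p in kept],      # calibrated input focal elements
            [m[p] for p in kept])                # updated response BPA

Y_I = [(100.0, 150.0), (150.0, 220.0), (220.0, 300.0)]
m_c = {Y_I[0]: 0.25, Y_I[1]: 0.5, Y_I[2]: 0.25}
m_e = {Y_I[0]: 0.5, Y_I[1]: 0.5, Y_I[2]: 0.0}   # no measurement fell in Y_3^I
print(calibrate(Y_I, m_c, m_e, input_elems=["box1", "box2", "box3"]))
```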


5. Numerical example

In this section, the famous Sandia thermal challenge problem [52] is used to demonstrate the efficiency of the proposed model validation method.

5.1. Problem statement

Fig. 2. Schematic of heat conduction problem.

For the one-dimensional transient heat conduction problem shown in Fig. 2, the Sandia Validation Challenge Workshop provides a mathematical model, three sets of experimental data, and a regulatory requirement. The mathematical model for the transient temperature response is written as

$$T(x, t) = T_i + \frac{qL}{k} \left[ \frac{(k/\rho C_p)\, t}{L^2} + \frac{1}{3} - \frac{x}{L} + \frac{1}{2}\left(\frac{x}{L}\right)^2 - \frac{2}{\pi^2} \sum_{n=1}^{6} \frac{1}{n^2}\, e^{-n^2 \pi^2 \frac{(k/\rho C_p)\, t}{L^2}} \cos\!\left(n\pi \frac{x}{L}\right) \right] \quad (28)$$

where $T$ stands for the temperature response, $x$ is the distance from the left surface, $t$ represents the time, $T_i$ denotes the initial temperature condition, $q$ is the applied heat flux, $L$ is the thickness, and $k$ and $\rho C_p$ stand for the material thermal conductivity and volumetric heat capacity, respectively.

The experimental data include three sets: material characterization, ensemble validation and accreditation validation. The material characterization data consist of several measurements of the material properties $k$ and $\rho C_p$, while the ensemble validation data and accreditation validation data consist of experimental observations of temperature responses under various design parameters $x$, $t$, $q$, $L$. For each set of experimental data, there are three levels ('low', 'medium' and 'high') with different sample numbers for flexible selection. The details of the experimental data are listed in Ref. [53] and will not be reiterated here.

Under the condition of heat flux q = 3500 W/m² and thickness L = 1.90 cm, the regulatory requirement states that the surface temperature $T_s = T(x = 0)$ at the time t = 1000 s is not to exceed a failure temperature $T_f = 900\,°\mathrm{C}$ in more than a specified fraction of the units ($p_f = 0.01$), i.e.

$$\mathrm{Poss}(T_s(t = 1000\ \mathrm{s}) > T_f) < p_f \quad (29)$$

where Poss stands for the possibility.
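Eq. (28) is explicit and can be implemented directly, as in the sketch below; the default initial temperature is an assumed placeholder, since the challenge data specify T_i per experiment.

```python
# Direct implementation of the transient temperature model of Eq. (28),
# with the series truncated at six terms as in the paper.
import numpy as np

def temperature(x, t, k, rho_cp, q=3500.0, L=0.019, Ti=25.0):
    """x in m, t in s, k in W/(m*degC), rho_cp in J/(m^3*degC);
    Ti is an assumed placeholder value."""
    fo = (k / rho_cp) * t / L**2
    series = sum(np.exp(-n**2 * np.pi**2 * fo) / n**2
                 * np.cos(n * np.pi * x / L) for n in range(1, 7))
    return Ti + q * L / k * (fo + 1.0 / 3.0 - x / L
                             + 0.5 * (x / L)**2 - 2.0 / np.pi**2 * series)

# Regulatory configuration: surface temperature at t = 1000 s.
print(temperature(x=0.0, t=1000.0, k=0.06, rho_cp=4.0e5))
```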

5.2. Material characterization by evidence variables

The material characterization experiments provide specific data for the thermal conductivity $k$ and the volumetric heat capacity $\rho C_p$ at given temperatures. In this study, the high-level experimental data with $N_c = 30$ samples are taken for instance, and the parameter uncertainties are characterized by evidence variables.

From the measurement data under different temperatures, it can be seen that the transient temperature has a significant influence on the thermal conductivity. Therefore, the thermal conductivity $k$ is considered as a temperature-dependent input variable, which can be approximated by a linear function

$$k = k_1 \cdot T + k_0 \quad (30)$$

The expansion coefficient $k_1$ is treated as a constant, and can be derived as $k_1 = 2.6313 \times 10^{-5}$ W/(m·°C)/°C by the linear regression analysis method [54], whereas the term $k_0$ is assumed to be an evidence variable, for which a conservative FD range $[\underline{k}_0, \overline{k}_0] = [0.0304, 0.0688]$ W/(m·°C) can be initialized from the available data. In contrast to the thermal conductivity, the change of the volumetric heat capacity $\rho C_p$ with respect to temperature is so small that it can be neglected. Thus, the volumetric heat capacity $\rho C_p$ is modeled as a normal evidence variable, whose FD range can be conservatively estimated as $[\underline{\rho C_p}, \overline{\rho C_p}] = [2.8125, 5.0655] \times 10^5$ J/(m³·°C). Based on the simplified construction in Eq. (8), the initial FD ranges of $k_0$ and $\rho C_p$ are divided into the following $n$ subintervals as the focal elements with the same BPA

$$(k_0)_j = \left[ \underline{k}_0 + \frac{(j-1)(\overline{k}_0 - \underline{k}_0)}{n},\ \underline{k}_0 + \frac{j(\overline{k}_0 - \underline{k}_0)}{n} \right],\quad m((k_0)_j) = \frac{1}{n},\quad j = 1, 2, ..., n \quad (31)$$

$$(\rho C_p)_j = \left[ \underline{\rho C_p} + \frac{(j-1)(\overline{\rho C_p} - \underline{\rho C_p})}{n},\ \underline{\rho C_p} + \frac{j(\overline{\rho C_p} - \underline{\rho C_p})}{n} \right],\quad m((\rho C_p)_j) = \frac{1}{n},\quad j = 1, 2, ..., n \quad (32)$$
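Reusing the focal_elements and joint_focal_elements helpers sketched in Section 3, the two input evidence variables can be instantiated as follows (n = 4 is an arbitrary illustrative division):

```python
# Instantiate the k0 and rho*Cp evidence variables per Eqs. (31)-(32);
# assumes the focal_elements / joint_focal_elements helpers from the
# Section 3 sketches are in scope.
n = 4
k0 = focal_elements(0.0304, 0.0688,
                    widths=[(0.0688 - 0.0304) / n] * n, bpas=[1.0 / n] * n)
rho_cp = focal_elements(2.8125e5, 5.0655e5,
                        widths=[(5.0655e5 - 2.8125e5) / n] * n, bpas=[1.0 / n] * n)
joint = list(joint_focal_elements([k0, rho_cp]))
print(len(joint))   # M = n * n = 16 joint focal elements, Eq. (12)
```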


5.3. Parameter calibration

Considering the above conservative estimation of the input evidence variables, the high-level experimental response data in ensemble validation are adopted for the further parameter calibration in this section. A group of experiments is implemented for two thicknesses and two heat flux magnitudes, as listed in Table 1. For each experimental configuration, the transient temperatures on the boundary x = 0 are measured from time t = 100 s to t = 1000 s.

Table 1. Four experimental configurations for parameter calibration.

    Exp configuration    Heat flux q (W/m²)    Thickness L (cm)
    1                    1000                  1.27
    2                    1000                  2.54
    3                    2000                  1.27
    4                    2000                  2.54

The main computational cost of parameter calibration is consumed in predicting the response focal elements. Under the first experimental configuration, we take the case with only one focal element (n = 1) as an instance to demonstrate the computational efficiency of the proposed interval collocation analysis method (ICAM). For this uncertainty propagation problem with two variables, the lower bound (LB) and upper bound (UB) of the transient temperature responses on the boundary x = 0 are listed in Table 2, where the traditional Monte Carlo method (MCM) with 100 samples is introduced as the reference approach. It can easily be seen that the results evaluated by the ICAM match the reference results very well. Besides, owing to the monotonicity of the temperature response with respect to the thermal conductivity and the volumetric heat capacity, only the lower bound and upper bound of each focal element interval are selected as the collocation points in the ICAM. Thus, the number of required simulations in the ICAM is only 2² = 4. Compared with the 100 simulations in the MCM, the computational cost of the ICAM is greatly reduced. For the general case with n focal elements, the total number of simulations in the MCM would be 100·n²; by comparison, the 2² × n² simulations in the ICAM are completely acceptable.

Table 2. Bounds of transient temperature responses on the boundary x = 0.

    Time (s)    Bound    MCM, 100 simulations (°C)    ICAM, 4 simulations (°C)
    200         LB       108.80                       108.80
    200         UB       185.25                       185.25
    400         LB       144.43                       144.43
    400         UB       248.77                       248.77
    600         LB       175.75                       175.75
    600         UB       302.86                       302.86
    800         LB       206.36                       206.36
    800         UB       355.40                       355.40
    1000        LB       236.86                       236.86
    1000        UB       407.88                       407.88
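The comparison behind Table 2 can be reproduced schematically with the temperature model sketched in Section 5.1, treating k and ρCp as the two uncertain inputs over their full FD ranges under configuration 1; this driver is illustrative only and is not tuned to match the table values exactly (in particular, it ignores the temperature dependence of k, which the paper's pipeline handles through Eq. (30)).

```python
# Schematic ICAM-versus-MCM driver for configuration 1 (q = 1000 W/m^2,
# L = 1.27 cm) with a single joint focal element (n = 1); assumes the
# `temperature` function sketched in Section 5.1 is in scope.
import numpy as np
from itertools import product

box = [(0.0304, 0.0688), (2.8125e5, 5.0655e5)]        # k and rho*Cp ranges
resp = lambda k, rc: temperature(0.0, 1000.0, k, rc, q=1000.0, L=0.0127)

corners = [resp(k, rc) for k, rc in product(*box)]    # ICAM: 2^2 = 4 runs
print("ICAM:", min(corners), max(corners))

rng = np.random.default_rng(0)                        # MCM: 100 runs give
samples = [resp(rng.uniform(*box[0]), rng.uniform(*box[1]))   # inner bounds
           for _ in range(100)]
print("MCM :", min(samples), max(samples))
```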

Subsequently, based on the response focal elements predicted by the ICAM, the experimental response BPAs are constructed by Eq. (24), and the Dempster-based combination rule is adopted to update the final response BPAs by combining the computational and experimental response BPAs. In the proposed parameter calibration method, the number of focal elements can be flexibly selected within the same FD range, but different choices lead to differences in computational accuracy and cost. For the different numbers of divided focal elements n, the lower bound (LB) and upper bound (UB) of the FD of the updated input evidence variables are listed in Table 3 and plotted in Fig. 3. It can be observed that the parameter calibration results converge quickly to stable values as the focal element number increases. A larger number of focal elements yields more accurate results, but the additional computational cost of the 2² × n² simulations is unavoidably introduced. Conversely, a smaller number of focal elements avoids the huge computational cost, but the computational accuracy is relatively low. Thus, in order to balance the accuracy requirement and the computational burden, it is necessary to assign an appropriate value to the focal element number n.

Table 3. Bounds of FD of updated input evidence variables.

    n (focal elements)    k0 LB (W/(m·°C))    k0 UB (W/(m·°C))    ρCp LB (J/(m³·°C))    ρCp UB (J/(m³·°C))
    1 (initial value)     0.0304              0.0688              2.8125 × 10⁵          5.0655 × 10⁵
    2                     0.0304              0.0688              2.8125 × 10⁵          5.0655 × 10⁵
    4                     0.0400              0.0592              3.3758 × 10⁵          4.5023 × 10⁵
    8                     0.0448              0.0544              3.3758 × 10⁵          4.5023 × 10⁵
    16                    0.0448              0.0544              3.3758 × 10⁵          4.5023 × 10⁵
    32                    0.0460              0.0544              3.3758 × 10⁵          4.4318 × 10⁵

Fig. 3. Bounds of FD of updated input evidence variables.

Under the above four experimental configurations, the FD of the transient temperature responses with respect to the input evidence variables can be calculated and constructed by combining the computational response focal elements. The bounds of the computational response FD are plotted in Fig. 4, from which it can easily be seen that all the experimental response measurements are enveloped by the FD bounds. When the focal element number is too small (such as n = 1 or 2), the evidence variables do not change in the parameter calibration process, and the conservative initial estimation causes the computational temperature responses to go far beyond the experimental data. Comparatively speaking, when the focal element number is set larger than 2, the computational temperature responses obtained by the updated input evidence variables match the experimental data much better. This indicates that using an appropriate focal element division and the experimental response information to calibrate the input parameters can further improve the prediction accuracy of the computational model.

Fig. 4. Bounds of computational response FD under four experimental configurations.

5.4. Accreditation validation

In this section, the high-level experimental response data in accreditation validation are used for the eventual model validation. Different from the above ensemble validation experiments in Section 5.3, there is only one experimental configuration in this accreditation validation, where the heat flux is q = 3000 W/m² and the thickness is L = 1.90 cm. However, the response measurements of the accreditation validation experiments are taken at three locations: the temperature responses at the surface (x = 0), in the middle (x = L/2) and at the back (x = L) are collected.

Similarly, the proposed interval collocation analysis method is adopted for the prediction of the computational response focal elements. Under the initial and updated input evidence variables, the bounds of the FD of the transient temperature responses at the three locations are illustrated in Fig. 5. It is obvious that the computational results are consistent with the experimental results, and all the experimental measurements in accreditation validation lie well within the FD bounds. According to the evidence-theory-based model validation metric (EMVM) defined in Section 4, the computational model in Eq. (28) is considered to successfully pass the model validation test, and can be used for the following prediction of the regulatory performance. The comparison between the computational results demonstrates again that the computational model updated with more divided focal elements achieves more accurate predictions. In addition, different temperature tendencies with respect to time can be observed at the three locations, which can easily be analyzed from the explicit expression of the transient temperature response in Eq. (28). Besides, since the parameter calibration in Section 5.3 is conducted based on the experimental measurements on the boundary x = 0, the temperature prediction at Location 1 appears more accurate than those at the other two locations in Fig. 5.

Fig. 5. Bounds of computational response FD at three locations.

5.5. Prediction of regulatory performance

After successfully validating the computational model, the eventual task is to assess the model performance at the given regulatory criterion. Under the configuration of heat flux q = 3500 W/m² and thickness L = 1.90 cm, the FD bounds of the transient temperature response at the surface (x = 0) are plotted in Fig. 6. Based on the concept of satisfaction degree [55], the possibilities that the temperature response FDs exceed the failure temperature $T_f = 900\,°\mathrm{C}$ can be approximately calculated as follows

$$\begin{aligned} &\mathrm{Poss}(T_{s,n=1} = T_{s,n=2} \in [645.52,\ 1059.52] > T_f) = \frac{1059.52 - 900}{1059.52 - 645.52} = 0.3853 \\ &\mathrm{Poss}(T_{s,n=4} \in [716.62,\ 915.39] > T_f) = \frac{915.39 - 900}{915.39 - 716.62} = 0.0774 \\ &\mathrm{Poss}(T_{s,n=8} = T_{s,n=16} \in [732.90,\ 894.30] > T_f) = 0 \\ &\mathrm{Poss}(T_{s,n=32} \in [738.92,\ 889.37] > T_f) = 0 \end{aligned} \quad (33)$$
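Under our reading of the satisfaction degree of Ref. [55], the exceedance possibility used in Eq. (33) is simply the fraction of the response FD lying above the failure temperature:

```python
# Interval-exceedance possibility behind Eq. (33).
def exceedance_possibility(lb, ub, T_f):
    if ub <= T_f:
        return 0.0
    if lb >= T_f:
        return 1.0
    return (ub - T_f) / (ub - lb)

print(exceedance_possibility(645.52, 1059.52, 900.0))  # 0.3853 (n = 1, 2)
print(exceedance_possibility(716.62, 915.39, 900.0))   # 0.0774 (n = 4)
print(exceedance_possibility(732.90, 894.30, 900.0))   # 0.0    (n = 8, 16)
```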

Under the initial input evidence variables, the derived failure possibility is 0.3853, greater than the failure index $p_f = 0.01$, which indicates that the regulatory criterion cannot be satisfied by the computational model with the overly conservative input estimation. As the parameter calibration accuracy increases, the failure possibility decreases below the failure index, until eventually the whole temperature response FD lies below the failure temperature $T_f = 900\,°\mathrm{C}$. It should be pointed out that this conclusion is not completely the same as in other studies [52], where the final failure possibility was calculated to range from 0.02 to 0.28; this is because different uncertainty quantification methods, experimental data of different levels, and various validation metrics are utilized. In this study, evidence theory is used for the first time to deal with the epistemic uncertainty in the Sandia thermal challenge problem. Compared with the existing probabilistic methods, the evidence-theory-based method proposed in this paper provides a new idea for model validation in engineering.

Fig. 6. Bounds of computational response FD in intended application.



6. Conclusions

Due to the lack of knowledge, the computational models in thermal engineering practice always contain some epistemic uncertainties. Based on evidence theory, this paper presents a novel approach for model validation with limited experimental data. As a supplement to the traditional probabilistic analysis methods, the proposed evidence-theory-based method helps complete the uncertainty analysis framework in model validation. From this study, the following conclusions can be drawn:

(1) The application of evidence theory can simplify the modeling process of uncertainty characterization, especially in cases with a small amount of data. Subinterval vectors with independent BPAs are used to represent the focal elements of the input parameters. To efficiently calculate the response focal elements, an interval collocation analysis method with small computational cost is developed, where a parallel simulation procedure can be introduced to further reduce the executing time for high-dimensional problems.

(2) In order to overcome the drawback of conservative estimation of the input evidence variables, a BPA-based parameter calibration framework is constructed by using the available measurement information, where the prediction accuracy of the computational model can be significantly improved by the updated response BPA. Meanwhile, an evidence-theory-based model validation metric is defined to qualitatively characterize the relationship between experimental measurements and numerical predictions.

(3) In order to verify the effectiveness of the proposed model validation method, its implementation on the Sandia thermal challenge problem has been described in detail. The excellent performance indicates that the evidence-theory-based method could be a reasonable alternative to current methods for evaluating uncertainties in model validation. The suggested numerical example is relatively simple, with only two uncertain variables, but with additional computational cost the proposed method can undoubtedly be extended to solve high-dimensional problems.

(4) The proposed evidence-theory-based model validation method is efficient for problems with continuous intervals, where the focal elements are described as mutually connected intervals with no gaps or overlaps. For more complex FDs with overlapping or disjoint focal elements, the proposed method still has some shortcomings, which will be the research emphasis of our future work.

Acknowledgements

This work was supported by the Alexander von Humboldt Foundation.

References

[1] Q. Zhu, A. Li, J. Xie, W. Li, X. Xu, Experimental validation of a semi-dynamic simplified model of active pipe-embedded building envelope, Int. J. Therm. Sci. 108 (2016) 70–80.
[2] D.F. Fletcher, D.D. McClure, J.M. Kavanagh, G.W. Barton, CFD simulation of industrial bubble columns: numerical challenges and model validation successes, Appl. Math. Model. 44 (2017) 25–42.
[3] R.G. Sargent, Verification and validation of simulation models, J. Simulat. 7 (1) (2013) 12–24.
[4] S. Liang, T. Wong, Experimental validation of model predictions on evaporator coils with an emphasis on fin efficiency, Int. J. Therm. Sci. 49 (1) (2010) 187–195.
[5] S.A. Billings, Q.M. Zhu, Nonlinear model validation using correlation tests, Int. J. Contr. 60 (6) (1994) 1107–1120.
[6] D. Sornette, A.B. Davis, K. Ide, K.R. Vixie, V. Pisarenko, R. Kamm, Algorithm for model validation: theory and applications, Proc. Natl. Acad. Sci. Unit. States Am. 104 (16) (2007) 6562–6567.
[7] C. Pecheur, S. Nelson, Survey of NASA V&V Processes/methods, National Aeronautics and Space Administration, 2002 NASA/CR-2002–211401.
[8] R.G. Hills, M. Pilch, K.J. Dowding, J. Red-Horse, T.L. Paez, I. Babuska, R. Tempone, Validation challenge workshop, Comput. Meth. Appl. Mech. Eng. 197 (29) (2008) 2375–2380.
[9] R.W. Logan, C.K. Nitta, Verification & Validation (V&V) Methodology and Quantitative Reliability at Confidence (QRC): Basis for an Investment Strategy, Lawrence Livermore National Laboratory, 2002 UCRL-ID-150874.
[10] W.L. Oberkampf, M.M. Sindir, A.T. Conlisk, Guide for the Verification and Validation of Computational Fluid Dynamics Simulations, American Institute of Aeronautics and Astronautics, 1998 G-077–1998.
[11] L.E. Schwer, Guide for Verification and Validation in Computational Solid Mechanics, American Society of Mechanical Engineers, 2006 PTC 60/V&V 10.
[12] Accelerated Strategic Computing Initiative (ASCI) Program Plan, Department of Energy, 2000 DOE/DP-99–000010592.
[13] C. Wang, Z. Qiu, Hybrid uncertain analysis for steady-state heat conduction with random and interval parameters, Int. J. Heat Mass Tran. 80 (2015) 319–328.
[14] D. Moens, D. Vandepitte, Recent advances in non-probabilistic approaches for non-deterministic dynamic finite element analysis, Arch. Comput. Meth. Eng. 13 (2006) 389–464.
[15] C. Wang, Z. Qiu, M. Xu, Y. Li, Novel reliability-based optimization method for thermal structure with hybrid random, interval and fuzzy parameters, Appl. Math. Model. 47 (2017) 573–586.
[16] S. Sankararaman, S. Mahadevan, Model validation under epistemic uncertainty, Reliab. Eng. Syst. Saf. 96 (9) (2011) 1232–1241.
[17] A. Deraemaeker, P. Ladeveze, T. Romeuf, Model validation in the presence of uncertain experimental data, Eng. Comput. 21 (8) (2004) 808–833.
[18] F.O. Hoffman, J.S. Hammonds, Propagation of uncertainty in risk assessments: the need to distinguish between uncertainty due to lack of knowledge and uncertainty due to variability, Risk Anal. 14 (5) (1994) 707–712.
[19] A. Halder, R. Bhattacharya, Probabilistic model validation for uncertain nonlinear systems, Automatica 50 (8) (2014) 2038–2050.
[20] M.C. Kennedy, A. O'Hagan, Bayesian calibration of computer models, J. R. Stat. Soc. Ser. B 63 (3) (2001) 425–464.
[21] W.L. Oberkampf, T.G. Trucano, C. Hirsch, Verification, validation, and predictive capability in computational engineering and physics, Appl. Mech. Rev. 57 (5) (2004) 345–384.
[22] C. Wang, Z. Qiu, D. Wu, Numerical analysis of uncertain temperature field by stochastic finite difference method, Sci. China Phys. Mech. 57 (4) (2014) 698–707.
[23] W. Chen, L. Baghdasaryan, T. Buranathiti, J. Cao, Model validation via uncertainty propagation and data transformations, AIAA J. 42 (7) (2004) 1406–1415.
[24] Y. Liu, W. Chen, P. Arendt, H. Huang, Toward a better understanding of model validation metrics, J. Mech. Des. 133 (7) (2011) 071005.
[25] I. Babuska, F. Nobile, R. Tempone, A systematic approach to model validation based on Bayesian updates and prediction related rejection criteria, Comput. Meth. Appl. Mech. Eng. 197 (29) (2008) 2517–2539.
[26] A. Batou, C. Soize, Stochastic modeling and identification of an uncertain computational dynamical model with random fields properties and model uncertainties, Arch. Appl. Mech. 83 (6) (2013) 831–848.
[27] R. Field, Overview of Sandia Validation Challenge Workshop, Sandia National Laboratories, 2008 SAND2008–3062C.
[28] R.G. Ghanem, A. Doostan, J. Red-Horse, A probabilistic construction of model validation, Comput. Meth. Appl. Mech. Eng. 197 (29) (2008) 2585–2595.
[29] R.G. Hills, K.J. Dowding, Multivariate approach to the thermal challenge problem, Comput. Meth. Appl. Mech. Eng. 197 (29) (2008) 2442–2456.
[30] M.D. Brandyberry, Thermal problem solution using a surrogate model clustering technique, Comput. Meth. Appl. Mech. Eng. 197 (29) (2008) 2390–2407.
[31] E. Hofer, M. Kloos, B. Krzykacz-Hausmann, J. Peschke, M. Woltereck, An approximate epistemic uncertainty analysis approach in the presence of epistemic and aleatory uncertainties, Reliab. Eng. Syst. Saf. 77 (3) (2002) 229–238.
[32] C. Wang, Z. Qiu, M. Xu, Collocation methods for fuzzy uncertainty propagation in heat conduction problem, Int. J. Heat Mass Tran. 107 (2017) 631–639.
[33] Y. Luo, Z. Kang, Z. Luo, A. Li, Continuum topology optimization with non-probabilistic reliability constraints based on multi-ellipsoid convex model, Struct. Multidiscip. Optim. 39 (3) (2009) 297–310.
[34] L.A. Zadeh, Fuzzy sets, Inf. Control 8 (1965) 338–353.
[35] C. Wang, Z. Qiu, Y. Yang, Collocation methods for uncertain heat convection-diffusion problem with interval input parameters, Int. J. Therm. Sci. 107 (2016) 230–236.
[36] G. Shafer, A Mathematical Theory of Evidence, Princeton University Press, Princeton, 1976.
[37] N. Chen, D. Yu, B. Xia, Evidence-theory-based analysis for the prediction of exterior acoustic field with epistemic uncertainties, Eng. Anal. Bound. Elem. 50 (2015) 402–411.
[38] J.C. Helton, J.D. Johnson, W.L. Oberkampf, C.B. Storlie, A sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory, Comput. Meth. Appl. Mech. Eng. 196 (37) (2007) 3980–3998.
[39] H. Agarwal, J.E. Renaud, E.L. Preston, D. Padmanabhan, Uncertainty quantification using evidence theory in multidisciplinary design optimization, Reliab. Eng. Syst. Saf. 85 (1–3) (2004) 281–294.
[40] C. Jiang, Z. Zhang, X. Han, J. Liu, A novel evidence-theory-based reliability analysis method for structures with epistemic uncertainty, Comput. Struct. 129 (2013) 1–12.
[41] H.R. Bae, R.V. Grandhi, R.A. Canfield, Epistemic uncertainty quantification techniques including evidence theory for large-scale structures, Comput. Struct. 82 (13) (2004) 1101–1112.
[42] S. Yin, D. Yu, H. Yin, B. Xia, A new evidence-theory-based method for response analysis of acoustic system with epistemic uncertainty by using Jacobi expansion, Comput. Meth. Appl. Mech. Eng. 322 (2017) 419–440.
[43] L. Xie, J. Liu, J. Zhang, X. Man, Evidence-theory-based analysis for structural-acoustic field with epistemic uncertainties, Int. J. Comput. Meth. 14 (2) (2017) 1750012.


[44] J.C. Helton, W.L. Oberkampf, Alternative representations of epistemic uncertainty, Reliab. Eng. Syst. Saf. 85 (1–3) (2004) 1–10.
[45] Z. Zhang, C. Jiang, X. Han, D. Hu, S. Yu, A response surface approach for structural reliability analysis using evidence theory, Adv. Eng. Software 69 (2014) 37–45.
[46] S. Salehghaffari, M. Rais-Rohani, E.B. Marin, D.J. Bammann, A new approach for determination of material constants of internal state variable based plasticity models and their uncertainty quantification, Comput. Mater. Sci. 55 (2012) 237–244.
[47] T. Ali, P. Dutta, Methods to obtain basic probability assignment in evidence theory, Int. J. Comput. Appl. 38 (4) (2012) 46–51.
[48] W. Tao, Numerical Heat Transfer, Xi'an Jiaotong University Press, Xi'an, 2001.
[49] Z.P. Mourelatos, J. Zhou, A design optimization method using evidence theory, J. Mech. Des. 128 (4) (2006) 901–908.
[50] S. Salehghaffari, M. Rais-Rohani, E.B. Marin, D.J. Bammann, Optimization of structures under material parameter uncertainty using evidence theory, Eng. Optim. 45 (9) (2013) 1027–1041.
[51] R.M. Fujimoto, Parallel and Distributed Simulation Systems, John Wiley & Sons, New York, 2000.
[52] R.G. Hills, K.J. Dowding, L. Swiler, Thermal challenge problem: summary, Comput. Meth. Appl. Mech. Eng. 197 (29) (2008) 2490–2495.
[53] K.J. Dowding, M. Pilch, R.G. Hills, Formulation of the thermal problem, Comput. Meth. Appl. Mech. Eng. 197 (29) (2008) 2385–2389.
[54] D.C. Montgomery, E.A. Peck, G.G. Vining, Introduction to Linear Regression Analysis, John Wiley & Sons, New Jersey, 2012.
[55] C. Jiang, X. Han, G. Liu, Optimization of structures with uncertain constraints based on convex model and satisfaction degree of interval, Comput. Meth. Appl. Mech. Eng. 196 (49) (2007) 4791–4800.
