A statistical methodology for quantification of uncertainty in best estimate code physical models


Annals of Nuclear Energy 34 (2007) 628–640, www.elsevier.com/locate/anucene

Paolo Vinai a,b,*, Rafael Macian-Juan a, Rakesh Chawla a,b

a Laboratory for Reactor Physics and Systems Behavior, Paul Scherrer Institute, 5232 Villigen PSI, Switzerland
b École Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne, Switzerland

Received 28 November 2006; accepted 1 March 2007; available online 24 April 2007

* Corresponding author. Address: École Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne, Switzerland. Tel.: +41 56 310 2046; fax: +41 56 310 2327. E-mail address: [email protected] (P. Vinai).

Abstract

A novel uncertainty assessment methodology, based on a statistical non-parametric approach, is presented in this paper. It achieves quantification of code physical model uncertainty by making use of model performance information obtained from studies of appropriate separate-effect tests. Uncertainties are quantified in the form of estimated probability density functions (pdfs), calculated with a newly developed non-parametric estimator. The new estimator objectively predicts the probability distribution of the model's 'error' (its uncertainty) from databases reflecting the model's accuracy on the basis of available experiments. The methodology is completed by applying a novel multi-dimensional clustering technique, based on the comparison of model error samples with the Kruskal–Wallis test. This takes into account the fact that a model's uncertainty depends on system conditions, since a best estimate code can give predictions whose accuracy is affected by the regions of the physical space in which the experiments occur. The final result is an objective, rigorous and accurate manner of assigning uncertainty to coded models, i.e. the input information needed by the code uncertainty propagation methodologies used for assessing the accuracy of best estimate codes in nuclear systems analysis. The new methodology has been applied to the quantification of the uncertainty in the RETRAN-3D void model, and then used in the analysis of an independent separate-effect experiment. This has clearly demonstrated the basic feasibility of the approach, as well as its advantages in yielding narrower uncertainty bands in quantifying the code's accuracy for void fraction predictions.

© 2007 Elsevier Ltd. All rights reserved.

1. Introduction

Conservative calculation tools have traditionally been used for the safety analysis of nuclear power plants (NPPs). These, in many cases, employ overestimated boundary and initial conditions and make use of simplified physical models containing overly conservative approximations that yield results which reduce the safety margins of the plants. Such an approach leads to predictions that can be quite unrealistic. In some cases, the NPPs may be forced to operate with very large safety margins, because of the cautious assumptions made in the physical models used and the conditions assumed during the analyzed transients.

More recently, the use of a best-estimate approach for safety analysis has been gaining acceptance in the nuclear safety community, i.e. among regulators, policy makers and utilities. A best-estimate assessment of transient behavior is usually carried out by using complex system analysis codes, such as RETRAN-3D (RETRAN-3D, 1999), RELAP5 (RELAP5/MOD3.3, 2001) and TRACE (Odar et al., 2003), in conjunction with a considerably more physically realistic description of the system. This description is in terms of the basic mass, momentum and energy equations, and the physical models that constitute their closure laws. The results of such simulations of NPP transients are much closer to the real behavior of the system under investigation than are the predictions of the traditional conservative codes.


However, the 'best estimate' character of the calculations reduces, by definition, the conservatisms of the results, so that an assessment of their uncertainty becomes crucial for their general acceptability. Thus, when the results of a best-estimate analysis have to be compared against specified safety limits that should not be reached or crossed, e.g. for the maximum cladding temperature during the transient, the corresponding prediction has to be accompanied by an estimate of its uncertainty, so that the probability of transgressing the specified safety limits can be explicitly computed.

In the above context, a number of statistical methodologies have been proposed during the last decade for quantifying the uncertainty in code results on the basis of estimates of the uncertainties attributed to individual sources, e.g. code input data, physical models, scaling effects, solution methods and user effects (Glaeser et al., 1998; Cacuci, 1981, 1988). Effectively, such methodologies are able to assess the uncertainty in the code prediction of a certain important safety-related parameter (e.g. peak clad temperature, maximum over-pressurization, or minimum core liquid level), provided appropriate uncertainty estimates can be made for the individual physical models used, the assumed plant conditions, etc.

For many of the individual sources of uncertainty, however, objective information is not readily available for the needed quantification. In the case of physical models, in particular, resort is made to input from subjective sources, e.g. expert opinion, documentation of previous experience, etc. This, in many cases, results in biases introduced by the experts, or in the assignment of a not easily justifiable large uncertainty to overcome a lack of knowledge while, at the same time, not violating any safety margins. In this context, it would clearly be advantageous to have methods that could yield objective uncertainty information, e.g. based on the evaluation of the performance of a best estimate code's physical models by comparison of their predictions with well documented experiments. The latter could be separate-effect tests and/or integral tests, depending on the type of physical model in question.

The current paper presents such a new methodology. The objective quantification of code physical model uncertainty is achieved by applying a statistical non-parametric approach for analyzing the model performance against a suitable experimental database. The methodology quantifies uncertainties in the form of estimated probability density functions (pdfs). By applying a novel multi-dimensional clustering technique based on the Kruskal–Wallis test, it also takes into account the fact that a model's uncertainty can depend on the regions of physical space involved. The model uncertainties computed in this way can then serve as input for code uncertainty propagation methodologies (replacing, for example, expert-opinion-based estimates). This, in turn, should enable a much more objective quantification of the accuracy of best estimate codes for simulating transient events in which the physical models adopted play an important role.


The following section presents the theoretical basis for the proposed methodology. Section 3 describes its illustrative application to estimating the uncertainties of a particular physical model, viz. one commonly used in system codes for void fraction predictions; this has been done on the basis of the model's qualification against a wide range of appropriate separate-effect tests. In Section 4, an application to a best estimate simulation of a specific, independent void fraction experiment is described, and in Section 5 the results are discussed. Conclusions are drawn in Section 6.

2. Methodology

As discussed above, the proposed uncertainty assessment methodology is based on the performance of a given code physical model in the analysis of separate-effect tests addressing the physical phenomena relevant to the model. For this purpose, the main quantity predicted by the given physical model (e.g. void fraction in the case of drift-flux models), or some other variable directly related to the main quantity, is compared with corresponding experimental results. The distribution of the 'errors' in the predictions, in absolute or relative terms, is assumed to behave statistically in the manner of a stochastic variable. In this way, a 'database of errors' can be statistically processed to quantify the uncertainty of the model in the prediction of the physical quantity of interest. This information can subsequently be used as basic uncertainty data for the application of most of the code uncertainty propagation methods developed to date (Wickett, 1998).

The current methodology makes use of non-parametric statistical methods to quantify the uncertainty of a given code physical model as a pdf, which statistically describes the distribution of the 'errors' of the model's predictions for the set of assessment studies included in the database. It is clear that the statistical significance of such a pdf will increase with the size of the database, i.e. with the number of experiments included. However, non-parametric methods can generally produce good estimates (as quantified by statistical measures (Efromovich, 1999)) even for small samples (database sizes). Moreover, these methods do not require a priori assumptions about the probability distribution of the uncertainty, which, in many cases, cannot simply be considered normal (Gaussian), i.e. characterized by a mean ('bias') and an uncertainty spread (variance or standard deviation).

The proposed methodology also accounts for the observed fact that, in general, a model's uncertainty depends on the region of the physical space, as defined by the independent variables on which the model depends (mass flux, heat flux, pressure, subcooling, void fraction, etc.), over which the model is assessed or applied. If the uncertainty quantification considers the entire physical space to define a single pdf, then the regions in which a code model performs well could be penalized by the regions in which it has difficulties, thus resulting in a poor (usually over-conservative) uncertainty quantification.



Subsequent use of such information as input to code uncertainty propagation analyses will then yield unnecessarily large uncertainty estimates for those code results that are significantly influenced by the model's predictions. On the other hand, when the physical model is employed for conditions for which its predictions are particularly poor, the uncertainty will be described by pdfs that may be too optimistic, and the consequent output uncertainty bands will reflect this fact.

To address this problem in a rigorous manner, the present methodology has entailed the development and application of a multi-dimensional clustering technique based on an extension of the non-parametric Kruskal–Wallis test. This results in the partition of the assessment state space into regions, in each of which an appropriate pdf is computed to quantify the model uncertainty. The final result is a set of estimated pdfs which quantify the uncertainty of a given code physical model over the entire assessment state space, but in a region-specific manner. This information can then be used as input to any code uncertainty propagation methodology which accounts for uncertainties in code input variables and physical models (McKay, 1988).

2.1. Non-parametric pdf estimation

The proposed procedure for estimating the pdfs that quantify the uncertainty of a given physical model is based on the development of a novel non-parametric estimator, suitable for the type of statistical samples (databases of model accuracy measures) obtained from assessment studies, usually a relatively small number of scattered sample points. The estimator is based on the combination of two frequently used non-parametric pdf estimators, viz. the universal orthogonal series estimator (OSE) and the kernel estimator (KE).

2.1.1. Universal orthogonal series estimator

The non-parametric universal OSE (Efromovich, 1999; Silverman, 1986) is based on the expansion of an a priori unknown, real pdf, f(x), in a Fourier series. The Fourier expansion of the unknown pdf that characterizes the distribution of a stochastic variable X (the comparison between the experimental and calculated values) can be expressed as

$$\tilde f_J(x; \{\omega_j\}) = \sum_{j=0}^{J} \omega_j \theta_j \varphi_j(x), \qquad (1)$$

where the functions $\varphi_j(x)$ form an orthonormal basis, $\theta_j$ are the Fourier coefficients and $\omega_j$ are the so-called smoothing coefficients. For $J = \infty$, the expansion reproduces the actual pdf. The estimation of the pdf for a given sample $X_1, X_2, \ldots, X_n$ of X (the sample values are the elements of the assessment database) is based on the determination, from the available sample data, of the Fourier coefficients $\{\theta_j\}$, the cutoff J and the smoothing coefficients $\{\omega_j\}$.

As an orthonormal basis for the estimator developed in this work, the cosine basis has been chosen, viz.

$$\varphi_0(x) = 1, \qquad \varphi_j(x) = \sqrt{2}\cos(\pi j x), \quad j = 1, 2, \ldots \qquad (2)$$

because it yields the empirical pdfs as relatively smooth functions. The Fourier coefficients are obtained by means of a probabilistic approach as the expected value of the basis functions

$$\theta_j = E\big[\, I_{\{X \in [0,1]\}}\, \varphi_j(X) \,\big], \qquad (3)$$

where E[·] denotes the expected value, and the indicator $I_{\{e\}}$ is equal to 1 when the event e occurs and 0 otherwise, i.e. it represents the probability of the random variable X taking values inside the support interval of the basis functions. A natural estimate for Eq. (3) is the sample mean

$$\hat\theta_j = \frac{1}{n} \sum_{l=1}^{n} I_{\{X_l \in [0,1]\}}\, \varphi_j(X_l). \qquad (4)$$

The cutoff J is chosen such that it minimizes the mean integrated squared error (MISE), a statistical quantification of how far the estimator is from the true pdf, f(x), defined as

$$\mathrm{MISE}(\hat f_J, f) = E\left[ \int_0^1 \big(\hat f_J(x) - f(x)\big)^2\, dx \right], \qquad (5)$$

where

$$\hat f_J(x) = \sum_{j=0}^{J} \hat\theta_j \varphi_j(x). \qquad (6)$$

By minimizing Eq. (5), the optimal cutoff J can be calculated as

$$\hat J = \operatorname*{arg\text{-}min}_{0 \le J \le J_n} \sum_{j=0}^{J} \left( \frac{2\hat d}{n} - \hat\theta_j^{\,2} \right), \qquad (7)$$

where $\hat d$ is a function of the sample size n (database size) and the variance of the estimate of the Fourier coefficients,

$$\hat d \cong n\, \mathrm{Var}(\hat\theta_j), \qquad (8)$$

and arg-min is defined as

$$m = \operatorname*{arg\text{-}min}_{0 \le m \le M}\, \{a_m\}, \qquad (9)$$

with $a_m = \min\{a_0, a_1, \ldots, a_M\}$. The upper bound $J_n$ is set as

$$J_n = \lfloor c_{j0} + c_{j1} \ln(n) \rfloor, \qquad (10)$$

where the notation $\lfloor\;\rfloor$ signifies

$$\lfloor x \rfloor = \text{rounded-down } x, \qquad (11)$$

and the constants $c_{j0}$ and $c_{j1}$ assume the values 4 and 0.5, respectively (Efromovich, 1999).

The smoothing coefficients $\omega_j$, determined also by minimizing the MISE for the current estimator, can be calculated as


$$\omega_j = \frac{\theta_j^2}{\theta_j^2 + E\{(\hat\theta_j - \theta_j)^2\}}. \qquad (12)$$

Eq. (12), after algebraic manipulation, gives the following estimators for $\omega_j$:

$$\hat\omega_0 = 1, \qquad \hat\omega_j = \left( 1 - \frac{\hat d}{n \hat\theta_j^{\,2}} \right)_{\!+}, \quad j > 0. \qquad (13)$$

The notation $(x)_+$ indicates the positive part of x, i.e. max(0, x). Finally, the universal OSE, a classical smoothed series estimator, can be written as

$$\hat f_{\mathrm{OSE}}(x) = \sum_{j=0}^{\hat J} \hat\omega_j \hat\theta_j \varphi_j(x). \qquad (14)$$

If the stochastic variable X is defined on a generic support [a, b], the set of observations $\{X_l\}$, with $l = 1, \ldots, n$, and the coordinate x representing the realization of X, can be rescaled by a linear translation as

$$X'_l = \frac{X_l - a}{b - a} \quad \text{and} \quad x' = \frac{x - a}{b - a}, \qquad (15)$$

with the entire development above remaining applicable.
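To make the recipe of Eqs. (2)–(15) concrete, the following is a minimal NumPy sketch of the universal OSE, not the authors' implementation: the function signature and grid resolution are invented here, and the estimate of d̂ (averaging the empirical variances of the basis functions, one plausible reading of Eq. (8)) is an assumption.

```python
import numpy as np

def universal_ose(sample, grid=200, cj0=4, cj1=0.5):
    """Sketch of the universal orthogonal series estimator, Eqs. (2)-(15)."""
    x = np.asarray(sample, dtype=float)
    a, b = x.min(), x.max()                     # generic support [a, b]
    xs = (x - a) / (b - a)                      # rescaling, Eq. (15)
    n = len(xs)

    def phi(j, t):                              # cosine basis, Eq. (2)
        return np.ones_like(t) if j == 0 else np.sqrt(2.0) * np.cos(np.pi * j * t)

    Jn = int(np.floor(cj0 + cj1 * np.log(n)))   # upper bound, Eq. (10)
    # sample-mean Fourier coefficients, Eq. (4); after rescaling the
    # indicator of Eq. (4) is identically 1
    theta = np.array([phi(j, xs).mean() for j in range(Jn + 1)])
    # d-hat ~ n Var(theta_j), Eq. (8): here estimated by averaging the
    # empirical variances of the basis functions (an assumption)
    d = np.mean([phi(j, xs).var() for j in range(1, Jn + 1)])
    # optimal cutoff, Eq. (7): J minimizing sum_{j<=J} (2 d/n - theta_j^2)
    J = int(np.argmin(np.cumsum(2.0 * d / n - theta ** 2)))
    # smoothing coefficients, Eq. (13)
    omega = np.array([1.0] + [max(0.0, 1.0 - d / (n * theta[j] ** 2))
                              for j in range(1, J + 1)])

    t = np.linspace(0.0, 1.0, grid)
    f = sum(omega[j] * theta[j] * phi(j, t) for j in range(J + 1))  # Eq. (14)
    return a + t * (b - a), np.maximum(f, 0.0) / (b - a)  # back to [a, b]
```

The returned grid and density can be fed directly to the bandwidth-selection step of the combined estimator described in Section 2.1.3.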

2.1.2. Kernel estimator

The kernel estimator defines a pdf for a sample of N observations of the stochastic variable X ($X_1, X_2, \ldots, X_N$) as (Silverman, 1986)

$$\hat f_{\mathrm{KE},h}(x) = \frac{1}{N} \sum_{i=1}^{N} K_h(X_i - x) = \frac{1}{Nh} \sum_{i=1}^{N} K\!\left( \frac{X_i - x}{h} \right), \qquad (16)$$

where the kernel $K: \mathbb{R} \to \mathbb{R}$ is an integrable function. A modified kernel includes the bandwidth h in the series expansion to 'smooth' the estimator in the following way:

$$K_h(x) = h^{-1} K(x h^{-1}), \quad x, h \in \mathbb{R}. \qquad (17)$$

The function K is defined as a kernel of order k when it has the following property:

$$\int_{\mathbb{R}} u^j K(u)\, du = 0 \ \text{ for } j < k, \quad \text{and} \quad \int_{\mathbb{R}} u^j K(u)\, du \ne 0 \ \text{ for } j \ge k. \qquad (18)$$

A second-order kernel, i.e. k = 2, has the same properties as a pdf and, therefore, such functions play an important role in non-parametric pdf estimation. Several second-order kernels are available, namely the Uniform, Triangle, Epanechnikov, Quartic, Triweight and Gaussian functions. The choice of the kernel function is not crucial for producing a good estimator, and it is usual to select a Gaussian kernel, since this is a smooth, well-known function which also satisfies the requirements of a pdf. The form of this kernel function, for a bandwidth h, is

$$K\!\left( \frac{x - x_i}{h} \right) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}\left( \frac{x - x_i}{h} \right)^2}. \qquad (19)$$


The fundamental point that has to be addressed when such estimators are used is the determination of the bandwidth h, because this is the strategic parameter that drastically affects the quality of the estimator (Silverman, 1986; Park and Marron, 1990; Sheather and Jones, 1991). In fact, small values of h produce an overly detailed estimate that loses the general shape of the distribution, whereas values of h which are too large generate an overly smooth estimator which neglects the fine, local details of the distribution.

2.1.3. Novel combined estimator

In order to obtain a more flexible estimator, applicable to any possible case with reasonable results, a new approach is proposed, combining the universal OSE's good behavior in dense data regions with the local accuracy of the KE for sparse data. The new combined estimator eliminates the difficulties of the universal OSE in sparse data regions, which contribute to the worsening of the overall quality of the estimator. The new estimator has the form of a Gaussian KE, but with its bandwidth h chosen to be the minimizer of the integrated square difference between the KE and the universal OSE. The numerical results discussed below demonstrate that this criterion for bandwidth selection is indeed quite appropriate. Thus, considering the integrated square error (ISE) of the KE, one has

$$\mathrm{ISE} = \int (\hat f_{\mathrm{KE},h} - f)^2\, dx = \int (\hat f_{\mathrm{KE},h} - \hat f_{\mathrm{OSE}})^2\, dx + 2\int (\hat f_{\mathrm{KE},h} - f)(\hat f_{\mathrm{OSE}} - f)\, dx - \int (\hat f_{\mathrm{OSE}} - f)^2\, dx. \qquad (20)$$

Since $\hat f_{\mathrm{KE},h}$ converges to f as $n \to \infty$, $\hat f_{\mathrm{KE},h}$ can approximate f globally (within the domain of x, as measured by the ISE) as closely as needed for an appropriately chosen h. Therefore, it can also be assumed to be globally close to $\hat f_{\mathrm{OSE}}$, and it is possible to write

$$\int (\hat f_{\mathrm{KE},h} - f)(\hat f_{\mathrm{OSE}} - f)\, dx \cong \int (\hat f_{\mathrm{OSE}} - f)^2\, dx. \qquad (21)$$

An approximate expression for the ISE is thus obtained as

$$\mathrm{ISE} = \int (\hat f_{\mathrm{KE},h} - f)^2\, dx \cong \int (\hat f_{\mathrm{OSE}} - f)^2\, dx + \int (\hat f_{\mathrm{KE},h} - \hat f_{\mathrm{OSE}})^2\, dx. \qquad (22)$$

By minimizing the last term in Eq. (22) with respect to h,

$$\min_h \int (\hat f_{\mathrm{KE},\hat h} - \hat f_{\mathrm{OSE}})^2\, dx \;\Rightarrow\; \hat h, \qquad (23)$$

an optimal value for the bandwidth $\hat h$ can be determined that makes the ISE for the KE close to that of the universal OSE. This then provides an estimator which is very satisfactory, since it has the favorable characteristics of both.
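A hypothetical implementation of the bandwidth search of Eq. (23) then reduces to a one-dimensional scan: build the Gaussian KE of Eqs. (16) and (19) on the same grid as the OSE and keep the h that minimizes the integrated squared difference. The search range below is an assumption made for illustration.

```python
import numpy as np

def gaussian_ke(sample, h, t):
    """Gaussian kernel estimator, Eqs. (16) and (19)."""
    u = (t[:, None] - np.asarray(sample)[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(sample) * h * np.sqrt(2.0 * np.pi))

def combined_estimator(sample, t, f_ose, n_h=60):
    """Combined estimator: the Gaussian KE whose bandwidth minimizes the
    integrated squared difference to the OSE, Eq. (23).  `t` and `f_ose`
    are the grid and density returned by universal_ose() above."""
    sample = np.asarray(sample, dtype=float)
    h_grid = np.linspace(0.05, 1.0, n_h) * sample.std()  # assumed search range
    dt = t[1] - t[0]                                     # uniform grid spacing
    ise = [((gaussian_ke(sample, h, t) - f_ose) ** 2).sum() * dt for h in h_grid]
    h_opt = h_grid[int(np.argmin(ise))]
    return h_opt, gaussian_ke(sample, h_opt, t)
```

Usage would then read, e.g., `t, f_ose = universal_ose(errors)` followed by `h, f_hat = combined_estimator(errors, t, f_ose)`.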



The performance of the new estimator can be evaluated from Fig. 1, in which the pdf for a random sample of a standard Gaussian distribution (μ = 0, σ = 1) has been estimated. The random samples used in the test have a relatively small size. However, it is clearly seen that, although the proposed estimator indicates local clusters to start with (e.g. size 20), its performance improves markedly with increasing sample size.

2.2. Non-parametric clustering technique

As mentioned earlier, an additional feature of the current methodology is a clustering technique based on applying an extended Kruskal–Wallis non-parametric test (Conover, 1980; Weerahandi, 1995) to the assessment database. The technique identifies clusters of physical model 'error' values which, because their probability distribution can be considered to be represented by a common pdf according to the test, are able to statistically quantify the uncertainty of a model in a given region of the assessment state space in which the model is applied. Such clusters can contain information from various separate-effect tests which have points within a common region of the assessment state space.

If a group of 'error' values in the database (grouped with the criterion of representing assessment results in a given region of the state space) is considered as an independent stochastic sample of the accuracy of a physical model (the values, as mentioned, may come from different, independent separate-effect tests), then one can divide the entire database into a series of k samples of different sizes $n_1, n_2, \ldots, n_k$, according to a partition of the state space into k regions. The j-th element of the i-th sample is denoted as $X_{ij}$, with $i = 1, 2, \ldots, k$ and $j = 1, 2, \ldots, n_i$, and the total number of experimental assessment values is $N = n_1 + n_2 + \cdots + n_k$.

The Kruskal–Wallis test permits one to compare these k samples and to decide whether they belong to the same statistical population: the null hypothesis is that the k distribution functions of the samples are identical. The alternative hypothesis states that at least one of the samples belongs to a different population from the others. The test statistic is a function of the ranks of the elements in the combined sample: the smallest value of all the N observations is assigned rank 1 and the largest one rank N. If $R_{ij}$ indicates the rank of $X_{ij}$, and $R_i$ is the sum of all the ranks from sample i, then the test statistic is defined as

$$w(X) = \frac{(N-1) \sum_{i=1}^{k} n_i \left( \frac{R_i}{n_i} - \frac{N+1}{2} \right)^2}{\sum_{i=1}^{k} \sum_{j=1}^{n_i} \left( R_{ij} - \frac{N+1}{2} \right)^2}. \qquad (24)$$

If no ties exist, i.e. if the variable X (the assessment 'errors') assumes a particular value not more than once, the following simplification is possible:

$$w(X) = \frac{12}{N(N+1)} \sum_i \frac{R_i^2}{n_i} - 3(N+1). \qquad (25)$$

If the number of ties is moderate, the difference between Eqs. (24) and (25) is negligible. This statistic tends asymptotically to the $\chi^2$ distribution with $k - 1$ degrees of freedom.

Fig. 1. Estimated pdf for a Gaussian Distribution. - - -: new estimator; —: Gaussian; ·: sample values.


Thus, by fixing the level of significance α, w(X) can be compared with the value assumed by the $\chi^2$ distribution with $k - 1$ degrees of freedom and $p = (1 - \alpha)$. If the calculated statistic of Eq. (24) or (25) yields a value smaller than the value of the $\chi^2$ distribution, then the null hypothesis is accepted. This means that it is not possible to exclude, at level of significance α or less, that all the samples considered belong to the same statistical population, and they can be considered as belonging to the same cluster.
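For completeness, here is a small sketch of the statistic of Eq. (24) in its tie-tolerant form, together with the χ² acceptance decision just described. SciPy's `scipy.stats.kruskal` offers an equivalent off-the-shelf test; the helper below simply mirrors the paper's notation.

```python
import numpy as np
from scipy.stats import chi2, rankdata

def kruskal_wallis(samples, alpha=0.05):
    """w(X) of Eq. (24) for k error samples; returns (w, same_population)."""
    pooled = np.concatenate([np.asarray(s, dtype=float) for s in samples])
    ranks = rankdata(pooled)                 # mid-ranks handle moderate ties
    N = len(pooled)
    # split the pooled ranks back into the k samples
    parts, i = [], 0
    for s in samples:
        parts.append(ranks[i:i + len(s)])
        i += len(s)
    num = (N - 1) * sum(len(r) * (r.mean() - (N + 1) / 2) ** 2 for r in parts)
    den = ((ranks - (N + 1) / 2) ** 2).sum()
    w = num / den
    accept = w < chi2.ppf(1 - alpha, df=len(samples) - 1)
    return w, accept    # accept: a common pdf cannot be excluded
```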

2.2.1. One-dimensional clustering

Considering just one independent variable to define the assessment state space, e.g. pressure or mass flux, the assessment data from each experiment in the database can be considered as a stochastic sample of the code model's performance and, since the experiments are independent of each other, so are the samples. For each sample, the assessment results contained in the database can then be ordered and ranked from the one with the smallest value of the chosen independent variable to that with the largest value. The aim is to test whether all these samples can be ascribed to the same statistical population defined by a common pdf.

For the application of the Kruskal–Wallis test described above, the state space is partitioned initially into a certain number of regions, as covered by the experiments. Then, the assessment data are assigned to these regions, forming a new sample set per region that contains information from different experiments performed with values of the independent variable within the limits of this region. After this 're-sampling' process, the Kruskal–Wallis test is applied to two samples belonging to contiguous state-space regions. If they can be represented by the same pdf, a third sample from the next contiguous state-space region is added to the cluster, and the test is repeated for the three samples. If the newly added sample cannot be considered as having the same pdf as the ones already included in the cluster, a new cluster is created with the rejected sample, and the process continues with the next contiguous sample. Each sample is compared only with samples from the adjacent cluster and not with those from regions that are not adjacent: it can then either be included in the already existing cluster or originate the next cluster. In fact, the aim is to study statistically how the model's accuracy changes as one varies the system conditions gradually in a continuous state space. Moreover, if two clusters are not contiguous but have the same statistical properties, application of the density estimator to the information contained in them will result in approximately the same pdf.

When all the samples have been processed in this way, the assessment state space defined by the independent variable will have been divided into a new set of regions, usually fewer than in the initial partition, each containing assessment data with a statistical distribution described by a similar pdf which characterizes the uncertainty of the model's predictions in that particular region of its physical space.

The approach described results in a more detailed and accurate analysis of the assessment information and rigorously divides the assessment state space into well defined regions, each with a particular pdf quantifying the uncertainty of the model in that region. These pdfs can then be sampled and used in a code uncertainty propagation methodology which takes uncertainty in code physical models as input.
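A minimal sketch of this one-dimensional merging loop, reusing the `kruskal_wallis` helper above (the greedy, contiguous-only merging is what the text prescribes; the data structures are assumptions):

```python
def cluster_1d(region_samples, alpha=0.05):
    """Merge contiguous state-space regions whose error samples pass the
    Kruskal-Wallis test; `region_samples` is ordered by the independent
    variable (e.g. increasing pressure). Returns lists of region indices."""
    clusters = [[0]]
    for i in range(1, len(region_samples)):
        trial = [region_samples[j] for j in clusters[-1]] + [region_samples[i]]
        _, same = kruskal_wallis(trial, alpha)
        if same:
            clusters[-1].append(i)   # same population: extend current cluster
        else:
            clusters.append([i])     # rejected: start a new cluster here
    return clusters
```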

This technique has been extended to the general case of a physical assessment space determined by n independent variables.

2.2.2. Extension to n-dimensional clustering

A physical space defined by n independent variables can be considered as an n-dimensional cartesian space. Then, in such a space, the distance d between two points with coordinates $(e_1, e_2, \ldots, e_n)$ and $(e_{r1}, e_{r2}, \ldots, e_{rn})$ is defined as

$$d \equiv \sqrt{(e_1 - e_{r1})^2 + (e_2 - e_{r2})^2 + \cdots + (e_n - e_{rn})^2}. \qquad (26)$$

In the current context, the first point is one of the experiments, identified as a point in the physical space through the coordinates $(e_1, e_2, \ldots, e_n)$ representing the values of the relevant independent variables. The second point, with coordinates $(e_{r1}, e_{r2}, \ldots, e_{rn})$, is chosen as a reference point. At the beginning, the reference is the origin of the assessment state space. To simplify the analysis, the reference coordinate system is chosen in such a way that the values of each of the coordinates have the same sign for all the experiments. For instance, in a 2-dimensional coordinate system, all the points should be in the same quadrant. If this does not occur for the natural coordinate system, it can be achieved with a transformation of the axes.

After calculating d for all the experiments, they are ranked from the one closest to the origin to the one furthest from it, and the samples of assessment 'errors' for each experiment can then be assigned, based on their values of the distance d, to multi-dimensional regions in the state space which 'radiate' from the origin. This choice of distance is a good compromise because it permits one to traverse the continuity of the state space, following the direction along which the coordinates increase, step by step, in a straightforward manner.

Now, it is possible to follow a procedure similar to that discussed for a single state variable, based on the partition of the multi-dimensional state space into 'concentric' regions. It starts with the samples of assessment 'errors' contained in the two regions closest to the origin $(e_{r1}, e_{r2}, \ldots, e_{rn})$. When one of the samples cannot be included in the previous cluster, to which the contiguous samples closer to the origin belong, a new cluster is created, and the values of the independent state-space variables which characterize this sample are used in the next step as a reference point for the rest of the samples. The remaining experiments are classified by their new distances, and the Kruskal–Wallis test is applied again. This operation is repeated until all the experiments have been considered. An application to a two-dimensional state space is discussed in Section 3.3.
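The n-dimensional procedure can be sketched in the same vein: experiments are ordered by the distance of Eq. (26) from a moving reference point, which jumps to the first experiment of each newly created cluster. Again a hypothetical transcription, not the authors' code:

```python
import numpy as np

def cluster_nd(points, samples, alpha=0.05):
    """Cluster experiments located at `points` (n-dimensional coordinates,
    all in the same orthant) with error `samples`, per Section 2.2.2."""
    points = np.asarray(points, dtype=float)
    remaining = list(range(len(points)))
    ref = np.zeros(points.shape[1])                 # start at the origin
    clusters = []
    while remaining:
        remaining.sort(key=lambda i: np.linalg.norm(points[i] - ref))  # Eq. (26)
        current = [remaining.pop(0)]
        while remaining:
            trial = [samples[j] for j in current] + [samples[remaining[0]]]
            _, same = kruskal_wallis(trial, alpha)
            if not same:
                break                               # open a new cluster here
            current.append(remaining.pop(0))
        clusters.append(current)
        if remaining:
            ref = points[remaining[0]]              # new reference point
    return clusters
```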



3. Quantification of the uncertainty of a void fraction model

The statistical methodology developed has been applied to the analysis of an experimental database available in the framework of the LRS-STARS project at PSI (Maier and Coddington, 1996, 1997, 1999), in order to extract the uncertainty information related to the physical models implemented in the best-estimate thermal-hydraulic code RETRAN-3D (RETRAN-3D, 1999) for void fraction prediction. The quantification of the uncertainty is based on the selection of a random variable e for the comparison between experiments and predictions; this variable is chosen as the difference between experimental and calculated results. An investigation and partition of the physical space in which the model is employed has been performed by making use of the algorithm developed in Section 2.2. The density estimators presented in Section 2.1 are then applied in order to quantify the uncertainty in terms of pdfs.

3.1. Database

A wide range of thermal-hydraulic experiments for the assessment of void fractions in heated rod bundles was considered, each experiment being characterized by certain values of subcooling, mass flux, heat flux, pressure, etc. The experiments were carried out to study the axial profile of the void fraction along heated channels in several different separate-effect facilities, under either steady-state or transient conditions, in saturated and subcooled boiling regimes. All the experiments were analyzed in simulations using RETRAN-3D, and the resulting predictions of the void fraction were then compared with the experimental values.

The database was chosen because of the large number of experimental points available. This makes it feasible to apply, in a meaningful manner, statistical techniques to extract uncertainty information for those physical models used by system codes which predict vapor production and distribution. Moreover, the measurement conditions in the experiments that form the database span a wide range of thermal-hydraulic conditions, thus allowing for an assessment of the void fraction models over a large state space, as defined by the most important physical variables of relevance, viz. pressure, mass flux, subcooling, heat flux and power.

The experimental facilities included in the database are Neptun III (Dreier et al., 1988), Pericles (Deruaz et al., 1985), TPTF (Kumamaru et al., 1994) and LSTF (Anoda et al., 1990). The data related to the Achilles facility (Pearson and Denham, 1989) were excluded, so as to be usable later for performing an independent uncertainty analysis based on the currently developed methodology (see Section 4). The original database also includes experiments related to 4 × 4 and 8 × 8 BWR bundles, but these were not used here because they are difficult to interpret on the basis of direct comparisons.

Since, as mentioned above, the chosen experiments represent a wide range of thermal-hydraulic conditions, the database can give a detailed picture of the behavior of a given void model over a large region of the physical space defined by pressure, mass flux, subcooling, heat flux and power.

3.1.1. The Neptun III experiments

The Neptun III test facility consisted of 37 heated rods and was used to simulate thermal-hydraulic phenomena in tight-pitch lattice LWRs (LWHCRs, or light water high conversion reactors) under reflooding and boil-off conditions. This series of experiments corresponds to low values of pressure, heat flux and subcooling, but quite high mass flux.

3.1.2. The Pericles experiments

The Pericles facility was used for experimental investigations of reflooding in a full-length, electrically heated rod bundle of standard PWR geometry. A significant part of the program was devoted to the study of two-dimensional effects during reflooding. These experiments are characterized by low pressure, high subcooling, and relatively low mass flux and heat flux.

3.1.3. The two-phase test facility (TPTF)

The two-phase test facility (TPTF) contained 24 electrically heated rods and 8 non-heated rods, simulating a full-length (3.7 m) 17 × 17 PWR rod bundle. The aim was to study phenomena involved in two-phase flows, with particular interest in void fractions at intermediate and high pressure, over quite a large range of mass fluxes, heat fluxes and subcoolings.

3.1.4. The large-scale test facility (LSTF)

The large-scale test facility (LSTF) was a non-nuclear, full-height but 1/48-volume-scale model of a Westinghouse four-loop PWR and was designed to simulate SB-LOCAs and operational transients. The facility consisted of two equally sized loops that differed only in the possible break geometries and in the presence of a pressurizer in one of the loops. These experiments correspond to intermediate to high pressure, and quite low mass flux and heat flux, without any subcooling.

All the different experimental facilities were modeled in RETRAN-3D as a single flow channel with a ''fill junction'' at the bottom and a ''time-dependent volume'' at the top. Heat conductors simulate the heated rods and the power input. The fill junction sets the inlet mass flux and subcooling, while the system pressure is defined by using the time-dependent volume. The channel nodalization was selected such that the node locations corresponded to the power distribution steps for those experiments with cosine-shaped power profiles, and to the locations of the differential pressure measurements for the experiments with uniformly distributed power.


3.2. Stochastically distributed variable

The void fraction $\alpha = \mathrm{Vol}_{\mathrm{vapor}}/\mathrm{Vol}_{\mathrm{cell}}$ was selected as the assessed variable for application of the current methodology, the database to be employed being generated from the comparisons of calculated ($\alpha_c$) and experimental ($\alpha_{ex}$) values of this quantity. The variable e representing the absolute assessment 'error' was then defined as

$$e \equiv \alpha_{ex} - \alpha_c. \qquad (27)$$

In each experiment, the void fraction was measured at several specific points along the test section. If the points at which the void fraction profile was computed coincided with the experimental measurements, the calculation of e is straightforward. Otherwise, values of the computed void fraction were produced at the points along the pipe where the measurements were made, by interpolating the calculated data with cubic splines.

The variable e can be assumed to be stochastically distributed for several reasons: first, the best-estimate system analysis is affected by simplifications and uncertainties in the model with respect to the real facility; second, the code physical models are a source of uncertainty because they are physical approximations used to represent the experimental phenomena; third, a contribution to the assessment uncertainty comes from the numerical methods and approximations used by the code; and, finally, the experimental data are affected by random errors, and these are a further source of uncertainty which is included in e.
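Assembling the error database then amounts to evaluating Eq. (27) at the measurement elevations. A small sketch (the variable names are assumptions) using SciPy's cubic spline:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def error_sample(z_meas, alpha_meas, z_calc, alpha_calc):
    """e = alpha_ex - alpha_c, Eq. (27), at the measurement locations,
    interpolating the computed axial void profile with a cubic spline."""
    alpha_c = CubicSpline(z_calc, alpha_calc)(np.asarray(z_meas))
    return np.asarray(alpha_meas) - alpha_c
```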


3.3. Application of the methodology to the database

The assessment of the RETRAN-3D void fraction predictions, based on the simulation of the experiments listed in Table 1, produced a large database of the variable e as defined by Eq. (27). Its analysis proceeded by first defining the assessment state space as a function of pressure and mass flux, the two variables considered most important in determining the performance of the void fraction prediction model. Power and heat flux only determine the amount of energy input to the flow, which, in a correct RETRAN-3D simulation, should be the same as in the experiments. The inlet subcooling influences the void fraction prediction in the subcooled boiling region but, by using the void data in the saturated boiling sections of the heated channel, the influence of the accuracy of the subcooled boiling model can be minimized. The accuracy of the prediction of void fraction at a given location is mainly a function of the accuracy of the code's drift-flux model.

The results of the application of the clustering technique are shown in Fig. 2, where the points represent experiments (with several e values each) included in the three different regions identified. In each of these, a pdf was estimated to quantify the uncertainty of the drift-flux model for the values of pressure and mass flux included in the region.

For a practical application, it is necessary to delimit reasonably continuous regions of the state space on the basis of the groups identified through the clustering technique. The boundaries for the first region are given by pressure values lower than (or equal to) 67 bar and mass fluxes lower than (or equal to) 30 kg/m²s. The second region corresponds to pressures between 0 and 67 bar with mass flux in the range 30–95 kg/m²s; in addition, for pressures between 67 and 70 bar, it covers mass fluxes between 14 and 26 kg/m²s and, for the 67–75 bar range, mass fluxes between 26 and 45 kg/m²s. The third region covers the remaining state space.

After the identification of the regions of the physical space that show the same statistical properties according to the values of e, a density estimator has been applied to the set of data in each region of the map to determine a reasonable, corresponding pdf. These pdfs for the three regions can then be used to sample the variable e, the sampled values being introduced, in turn, into the code to ''perturb'' the calculation. Estimates can thereby be obtained for the impact of the physical model's accuracy on the final predictions.
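For use inside an uncertainty propagation driver, the region boundaries quoted above can be transcribed into a simple lookup (a hypothetical helper; the treatment of points exactly on a boundary is an arbitrary choice here):

```python
def region(P, G):
    """Region of the P-G assessment state space (P in bar, G in kg/m2s),
    transcribing the boundaries of Fig. 2 quoted in the text."""
    if P <= 67 and G <= 30:
        return 1
    if (P <= 67 and 30 < G <= 95) \
            or (67 < P <= 70 and 14 <= G <= 26) \
            or (67 < P <= 75 and 26 <= G <= 45):
        return 2
    return 3
```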

Fig. 2. Identification of the regions of the physical space defined by pressure and mass flux from a statistical point of view. Region 1: triangles; Region 2: diamonds; Region 3: circles.

Table 1
Ranges of thermal-hydraulic variables covered by the assessment database

Facility      Power (kW)   Pressure (bar)   Mass flux (kg/m²s)   Heat flux (kW/m²)   ΔTsub (K)
TPTF          24–450       29.8–118         10.5–189.3           9.05–169.8          5–35
Neptun III    10.3–20.1    4.1–4.2          42.5–91.0            4.9–9.6             0.46–2.84
LSTF          500–5000     10–150           2.8–84.0             4.5–45.4            0
Pericles      540–1550     3.0–5.5          8.9–28.6             13.9–39.9           20–60



Fig. 3 compares the results for Region 1 obtained with the universal OSE, the KE with bandwidth selected by the Sheather–Jones plug-in method, and the new combined estimator. It can be seen that the OSE is affected by oscillations (on the right tail) and is truncated to zero on the left tail (because it assumes negative values there). The KE shows ''wiggly'' effects, thereby giving inappropriately detailed local estimates. Closer examination, comparing the distribution of the data with the results obtained with this estimator, shows that the measurements exhibit a certain clustering and that this is reflected in the positions of the peaks of the estimator. Notwithstanding this clustering, the high local resolution of the KE is not quite realistic, because the details of the distribution are affected by the relatively low number of points and probably occur by chance. In such a situation, it is more important that an estimator retains its ability to describe the information contained in the data correctly in a global sense, rather than following possibly fortuitous local variations. The advantages of the current, new estimator can be observed in Fig. 3: no zero values, no oscillations and appropriate ''smoothing'' of the data.

The estimated pdfs, for the entire assessment database of Table 1 and for the three separate regions identified in Fig. 2, are shown in Figs. 4 and 5, respectively. One can clearly see that the uncertainty in each region of the assessment state space is different, and that the use of the uncertainty information from the entire database (Fig. 4) would penalize the model's performance in Region 3, where the model appears to be more accurate (Fig. 5c). In addition, information on the different biases of the model, as shown in Fig. 5, is completely lost if the entire database is used to quantify the uncertainty, as is commonly done in assessment studies.

Fig. 4. Estimated pdf for the uncertainty of the RETRAN-3D Chexal– Lellouche drift-flux model. Assessment of the void fraction predictions over the entire database of Table 1.

Fig. 5a. Estimated pdf for the uncertainty of the RETRAN-3D Chexal– Lellouche drift-flux model. Assessment state-space: Region 1 (see Fig. 2).

In fact, the model appears to have a negative bias (overprediction) in Regions 1 and 2, while suffering from a small positive bias (underprediction) in Region 3. The methodology presented in this paper allows the identification of this type of model behaviour in a rigorous manner and, at the same time, offers a much more realistic quantification of the uncertainty.

4. Uncertainty assessment for Achilles Experiment No. 2

Fig. 3. Comparison between the kernel estimator (the bandwidth is selected with a Sheather–Jones Plug-in method) and the universal orthogonal series estimator. The data are those corresponding to Region 1 of the physical space (see Fig. 2).

The Achilles facility represented a model PWR fuel assembly with 69 fuel rods, a heated shroud surrounding them and a simulated downcomer component. Each fuel rod simulator was of PWR-representative dimensions, viz. 3.66 m heated length and 9.5 mm diameter.


Fig. 5b. Estimated pdf for the uncertainty of the RETRAN-3D Chexal– Lellouche drift-flux model. Assessment state space: Region 2 (see Fig. 2).

Fig. 5c. Estimated pdf for the uncertainty of the RETRAN-3D Chexal– Lellouche drift-flux model. Assessment state space: Region 3 (see Fig. 2).

The facility was used for single-phase flow, low flooding rate reflood and low pressure level swell experiments, conducted at low pressure, low mass flux, low heat flux and relatively high subcooling. In particular, Experiment No. 2, chosen here to illustrate the effectiveness of the current methodology, was carried out at a pressure of 2.06 bar with a mass flux of 8.3 kg/m²s. The experiment, which did not form part of the database used for generating the uncertainty pdfs, was subsequently simulated with RETRAN-3D. The analysis presented in this section thus serves as an independent test of the methodology's effectiveness.

4.1. Code uncertainty propagation

The most important model used in RETRAN-3D for the determination of the void fraction distribution along a heated channel is the drift-flux model. It is effectively


the uncertainty of this model for the prediction of void fraction that has been estimated with the new methodology (Sections 2 and 3). The propagation of this uncertainty to the RETRAN-3D results for Achilles Experiment No. 2 has been carried out on the basis of Monte-Carlo type sampling (McKay, 1988).

The uncertainty quantification of the code results (in this case, the void fraction distribution along the axial length of the facility) is given in terms of tolerance limits. The latter measure the uncertainty in the code's predictions by providing maximum and minimum values which bracket a certain fraction (e.g. 95%) of all possible output values that might result from the uncertainty in the input variables and code physical models, with a corresponding confidence level (e.g. 95%) for this statement. The size of the sample necessary for a given probability content and confidence level, as defined by the maximum and minimum values of the output sample, can be obtained from non-parametric statistical analysis (Conover, 1980). A rigorous approach to calculating the minimum sample size is based, for example, on the general formula obtained by Noether:

$$\alpha \ge \sum_{i=0}^{r+m-1} \binom{N}{i} (1-q)^i\, q^{N-i}, \qquad (28)$$

where $(1 - \alpha)$ is the confidence level, q is the proportion of the total population within the tolerance limits, N is the size of the sample (usually the unknown), and r and m are numbers related to the values of the sample that bound the tolerance limits. The most commonly used values are r + m = 1 (one-sided tolerance limits) and r + m = 2 (two-sided tolerance limits). The solution of this equation for N, when q and $(1 - \alpha)$ are known, yields the minimum sample size needed. Tables are available with the solutions for various $((1 - \alpha), q)$ pairs and for the cases r + m = 1 and r + m = 2. For r + m = 2, Eq. (28) is known as the Wilks formula, $\alpha \ge q^N + N q^{N-1}(1 - q)$ (Wickett, 1998; Conover, 1980). For instance, for two-sided tolerance limits of 95% for the population (q = 0.95) and for 95% confidence $(1 - \alpha = 0.95)$, the minimum sample size N is 93. The parameters q and $(1 - \alpha)$ are also usually referred to as β and γ, respectively, so that one speaks of tolerance limits with β = 0.95 and γ = 0.95.

It is important to note that the uncertainty measure based on tolerance limits is not the same as the variance (confidence interval) of the result of a typical Monte-Carlo calculation. The first is related to the expected variation (uncertainty) of a code's results due to uncertainties in the input and code models, whereas the variance measures the precision of the calculation (how certain the calculated average value is), assuming no uncertainties in input variables or code models. For this reason, the variance is inversely proportional to the number of calculations N, tending to 0 as N tends to ∞, whereas the size of the tolerance limits will never shrink to zero with an increased number of code executions, since the size of the sample does not reduce the uncertainties in the input variables and code models.
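Eq. (28) is easy to solve numerically for the minimum N; the short sketch below reproduces the quoted two-sided 95%/95% result of N = 93 (and N = 59 for the one-sided case):

```python
from math import comb

def min_sample_size(q=0.95, conf=0.95, r_plus_m=2):
    """Smallest N with sum_{i<r+m} C(N,i)(1-q)^i q^(N-i) <= alpha, Eq. (28)."""
    alpha = 1.0 - conf
    N = r_plus_m
    while sum(comb(N, i) * (1 - q) ** i * q ** (N - i)
              for i in range(r_plus_m)) > alpha:
        N += 1
    return N

print(min_sample_size())              # 93: two-sided 95%/95% (Wilks formula)
print(min_sample_size(r_plus_m=1))    # 59: one-sided 95%/95%
```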



4.2. Uncertainty in the RETRAN-3D drift-flux model

The pdfs obtained in Section 3.3 quantify the uncertainty of the drift-flux model in RETRAN-3D for the prediction of void fraction as a function of two assessment state-space variables, namely pressure P and mass flux G. These pdfs can be sampled, and the values thus obtained used as inputs for the code uncertainty propagation method discussed in Section 4.1. However, the uncertainties in the drift-flux model were obtained from void fraction assessments, while the code's drift-flux model produces, as its physical quantity, the slip velocity V_SL. This variable enters the flow equations whose solution yields the main thermal-hydraulic variables, including void fraction. Thus, the uncertainty information for the void fraction must be transferred to the value of V_SL predicted by the drift-flux model.

In RETRAN-3D, a specific subroutine calculates the slip velocity, and this calculation involves the void fraction. Perturbing the void fraction where it enters the calculation of the slip velocity thus permits one to alter the slip velocity; the ''modified'' slip velocity then passes through the main computational algorithm for the void fraction, affecting this final output. Appropriate changes have been made to the source code of RETRAN-3D in order to carry out this assignment of uncertainty at run-time, according to the values of P and G for a given computational volume, based on the pdfs calculated following the current methodology.
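The actual change lives inside RETRAN-3D's slip-velocity subroutine; purely as an illustration of the logic (all names and the discrete sampling scheme here are assumptions, not the authors' code), the run-time perturbation amounts to:

```python
import numpy as np

def perturbed_void_fraction(alpha_nominal, P, G, region_pdfs, rng):
    """Shift the void fraction seen by the slip-velocity calculation by an
    error e sampled from the pdf of the (P, G) region of the state space;
    since e = alpha_ex - alpha_c, alpha_c + e is the 'corrected' value."""
    t, f = region_pdfs[region(P, G)]        # grid and density for the region
    e = rng.choice(t, p=f / f.sum())        # discrete sampling of the pdf
    return float(np.clip(alpha_nominal + e, 0.0, 1.0))
```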

Fig. 6. RETRAN-3D results for uncertainty propagation of the Chexal– Lellouche drift-flux model. pdfs based on assessment state space partition.

5. Results and discussion

The results of the calculations to quantify the uncertainty in the void fraction profile in Achilles Experiment No. 2 are illustrated in Figs. 6–9. A total of 100 calculations was performed to generate the sample of output void fraction values necessary to obtain the desired tolerance limits. The entire process was carried out automatically with the help of a specially developed driver program that controls the modification of the RETRAN-3D input decks (for uncertainties in input variables) and also prepares the uncertainty information needed to modify the physical code models (in this case, the drift-flux model), to be read and used during execution. The driver also extracts the output information to generate a sample of output values (in this case, 100 values of the void fraction profile along the heated channel). The statistical processing of input and output uncertainty information is done by means of a specifically written program which can sample the generated non-parametric pdfs in the state space (Figs. 4 and 5), as well as the uncertainty in input variables, and compute tolerance limits and statistical sensitivity measures for the code's results. For the current application, no other uncertainties, i.e. in input variables or in additional code models, were considered, thus achieving a clearer illustration of the effectiveness of the proposed methodology.
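With 100 runs (N ≥ 93), the sample minimum and maximum at each axial node are themselves the two-sided β = 95%, γ = 95% tolerance limits (r + m = 2), so the post-processing can be as simple as the following sketch (assuming the output profiles are stacked row-wise):

```python
import numpy as np

def tolerance_limits(output_profiles):
    """Two-sided 95%/95% tolerance limits from N >= 93 code runs, where
    `output_profiles` has one row per run and one column per axial node."""
    out = np.asarray(output_profiles)
    assert out.shape[0] >= 93, "need at least 93 runs for 95%/95% coverage"
    return out.min(axis=0), out.max(axis=0)
```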

Fig. 7. RETRAN-3D results for uncertainty propagation of the Chexal– Lellouche drift-flux model. pdf from the entire assessment database.

Three cases are presented in the form of β = 95%, γ = 95% tolerance limits, together with the nominal RETRAN-3D case, the experimental measurements, and the maximum, minimum and mean of the calculations. The cases refer to uncertainty in the drift-flux model prediction based on, respectively, the set of region-specific pdfs covering the P–G state space as shown in Fig. 5, the single pdf estimated from the entire assessment database of Fig. 4, and 'Expert Opinion'. The uncertainty corresponding to the third case is based on a subjective assessment, viz. on extensive previous PSI experience with RETRAN-3D: essentially, a normal pdf with a standard deviation of 0.2 (absolute 'error') and a bias of 0 (mean) has been assumed for the uncertainty in the void fraction prediction. One can see that the pdf in Fig. 4 has a spread of roughly ±0.20, i.e. quite similar to 'Expert Opinion', although it has been truncated to eliminate its long 'tail' in the overprediction region (negative error).



Fig. 8. RETRAN-3D results for uncertainty propagation of the Chexal–Lellouche drift-flux model. 'Expert Opinion'.

Fig. 9. Tolerance limit comparisons for the three cases.

From the analysis of the results displayed in Fig. 9, it can be observed that the novel methodology provides narrower tolerance limits in comparison with those obtained from information based on the entire database or from an expert evaluation. The advantage of separately considering Region 1 of the database (where the current experiment is located), rather than the entire database as a whole, can be seen most clearly for the upper tolerance limit. This is as could be expected from the shorter left tail of the corresponding uncertainty pdf (cf. Figs. 4 and 5a). The lower tolerance limits for the estimates based on the partitioned and global state spaces are almost identical, reflecting the fact that the right-hand sides of the two pdfs are very similar.

Comparing the experimental data against the tolerance limits obtained with the different assumptions (Fig. 9), it can be seen that more accurate results are obtained by applying the current methodology than with 'Expert Opinion', except for the lower tolerance limit in the case of the 2 data points between 1.1 m and 1.7 m. Moreover, one sees that the tolerance limits are almost identical for all three sets of assumptions when the experimental void fraction is either lower than 0.2 (at the bottom of the facility) or has reached the value of 1 (at the top of the facility).

6. Summary and conclusions

An objective methodology has been presented for the quantification of uncertainty in best estimate code physical models, from the information provided by code model assessment using separate-effect test data. It is based on a new non-parametric estimator that quantifies the uncertainty by means of an estimated pdf of the statistical distribution of the model's accuracy. The methodology incorporates a multi-dimensional clustering technique that assigns uncertainty to the model while accounting for the particular region of the assessment state space in which the model is being applied.

Quantification of the uncertainty in void fraction predictions with the Chexal–Lellouche drift-flux model (normally used in the RETRAN-3D code) has been achieved by applying the new methodology to a wide database of steady-state void fraction measurements (see Section 3.1). The uncertainty in the RETRAN-3D model has been quantified on the basis of the information contained in the stochastically distributed variable e that measures the discrepancy between calculated and experimental data points. Three regions of the assessment state space (defined by pressure and mass flux) were identified according to the Kruskal–Wallis test (see Fig. 2). Each region includes the experiments of the database that give a 'random' sample of the variable e from the same statistical population, and is thus characterized by the same statistical properties, i.e. by the same pdf. With the code predictions showing different trends at low, intermediate and high values of pressure and mass flux, such statistical evidence clearly permits one to ''draw a map'' giving relevant indications of the different accuracies of the physical model across the state space.

In each of the separately considered state-space regions, the estimated pdf was found to be coherent and in agreement with the observed distribution of the analyzed data, with neither excessive oversmoothing or undersmoothing, nor any discontinuities. The pdfs can thus be used as a reliable basis for sampling e, on running a number of cases in which the nominal code predictions are modified by ascribing appropriate uncertainties to the model employed. The outputs of these code calculations can then be processed statistically in order to ascertain the corresponding uncertainty of the code predictions for any transient of interest.

In order to demonstrate the effectiveness of the current methodology, the propagation of model uncertainty to the results of a RETRAN-3D simulation has been studied for an independent separate-effect test, viz. Experiment No. 2 carried out at the Achilles facility. The analysis of the



results has clearly shown that a more detailed and accurate representation of model uncertainty is achieved with the proposed methodology than is possible on an 'Expert Opinion' basis. Moreover, the importance of considering the dependence of the model's accuracy on the relevant region of the assessment state space, rather than simply assuming a single pdf for the entire database, has been clearly brought out.

Acknowledgements

This research has been carried out in the framework of PSI's STARS project, which has the support of the Swiss Federal Nuclear Safety Inspectorate (HSK), a division of the Federal Office of Energy (BFE).

References

Anoda, Y., Kukita, Y., Tasaka, K., 1990. Void fraction distribution in rod bundle under high pressure conditions. In: Proceedings of the ASME Winter Annual Meeting for Advances in Gas–Liquid Flows, pp. 283–289.
Cacuci, D.G., 1981. Sensitivity theory for non-linear systems: non-linear functional analysis approach. Journal of Mathematical Physics 22, 2794–2802.
Cacuci, D.G., 1988. The forward and the adjoint methods of sensitivity analysis. In: Ronen, Y. (Ed.), Uncertainty Analysis. CRC Press, Inc., Boca Raton, FL, USA (Chapter 3).
Conover, W.J., 1980. Practical Non-Parametric Statistics. Wiley, New York.
Deruaz, R., Clement, P., Veteau, J.M., 1985. Study of two-dimensional effects in the core of a light water reactor during the ECC's phase following a loss of coolant accident. EUR-10076-EN, CEA, Centre d'Etudes Nucléaires de Grenoble.
Dreier, J., Analytis, J., Chawla, R., 1988. Neptun-III reflooding and boil-off experiments with an LWHCR fuel bundle simulator: experimental results and initial code assessment efforts. Nuclear Technology 80, 93–106.
Efromovich, S., 1999. Non-Parametric Curve Estimation. Springer-Verlag, New York.

Glaeser, H. et al., 1998. Uncertainty Methods Study for Advanced Best Estimate Thermal-Hydraulic Code Applications, vol. 2. Report from the Uncertainty Analysis Group, CSNI.
Kumamaru, H., Kondo, M., Murata, H., Kukita, Y., 1994. Void-fraction distribution under high-pressure boiloff conditions in rod bundle geometry. Nuclear Engineering and Design 150, 95–105.
Maier, D., Coddington, P., 1996. Validation of RETRAN-3D against a wide range of rod bundle void fraction data. ANS Transactions 75, 372–374.
Maier, D., Coddington, P., 1997. Review of wide range void correlations against an extensive database of rod bundle void measurements. In: Proceedings of ICONE5, Nice, France, Paper No. 2434.
Maier, D., Coddington, P., 1999. Evaluation of the slip options in RETRAN-3D. Nuclear Technology 128 (2), 153–168.
McKay, M.D., 1988. Sensitivity and uncertainty analysis using a statistical sample of input values. In: Ronen, Y. (Ed.), Uncertainty Analysis. CRC Press, Inc., Boca Raton, FL, pp. 145–186.
Odar, F., Murray, C. (U.S. NRC), Shumway, R., et al. (Information Systems Laboratories, Inc.), Mahaffy, J. (The Pennsylvania State University), 2003. TRACE v4.0, User's Manual. US Nuclear Regulatory Commission, Office of Nuclear Regulatory Research.
Park, B.U., Marron, J.S., 1990. Comparison of data-driven bandwidth selectors. Journal of the American Statistical Association 85, 66–72.
Pearson, K., Denham, M.K., 1989. Achilles unballooned cluster experiments, Part 4: low pressure level swell experiments. AEEW-R 2339, UK Atomic Energy Authority, Winfrith Technology Centre, Safety and Engineering Science Division, Winfrith.
RELAP5/MOD3.3, 2001. Code Manuals. Nuclear Safety Analysis Division, Information Systems Laboratories, Inc., Rockville, MD and Idaho Falls, Idaho, December 2001.
RETRAN-3D, 1999. A Program for Transient Thermal-Hydraulic Analysis of Complex Fluid Flow Systems. Vol. 1: Theory and Numerics. Computer Code Manual, July 1999.
Sheather, S.J., Jones, M.C., 1991. A reliable data-based bandwidth selection method for kernel density estimation. Journal of the Royal Statistical Society, Series B 53, 683–690.
Silverman, B.W., 1986. Density Estimation for Statistics and Data Analysis. Chapman and Hall, London.
Weerahandi, S., 1995. Exact Statistical Methods for Data Analysis. Springer-Verlag, New York.
Wickett, T. (Ed.), 1998. Report of the Uncertainty Methods Study, vols. 1 & 2. OECD/CSNI Report.