Annals of Nuclear Energy 137 (2020) 107146
Uncertainty and sensitivity analysis of PWR mini-core transients in the presence of nuclear data uncertainty using non-parametric tolerance limits

A. Aures (a,b,*), N. Berner (a), A. Pautz (b), W. Zwermann (a)

a Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) gGmbH, Boltzmannstrasse 14, 85748 Garching bei München, Germany
b École polytechnique fédérale de Lausanne, 1015 Lausanne, Switzerland
Article history: Received 2 August 2019; Received in revised form 23 September 2019; Accepted 17 October 2019

Keywords: Uncertainty analysis; Sensitivity analysis; TMI-1 mini-core transient; XSUSA; DYN3D-ATHLET; Wilks tolerance limit; Squared multiple correlation coefficient
Abstract

The impact of nuclear data uncertainties is studied for the reactor power and the reactivity during control rod withdrawal transients with reactivity insertions of 0.5$ and 0.97$, respectively, for a PWR mini-core model. Multi-group cross sections, the multiplicities of both prompt and delayed neutrons, and fission spectra are varied by the application of the random sampling-based method XSUSA with covariance data of SCALE 6.1 supplemented by JENDL-4.0. The varied multi-group data are used by TRITON/NEWT to generate varied 2-group cross sections, which are then applied in neutron-kinetic/thermal-hydraulic calculations with DYN3D-ATHLET. A significant impact on both the reactivity uncertainty and the power uncertainty is observed. Since the distributional properties of the output time series vary across the problem time, the distribution-free Wilks tolerance limit is applied as a robust uncertainty measure for complex time series patterns. The nuclide reactions contributing most to the power uncertainty are identified via sensitivity analysis.

© 2019 Elsevier Ltd. All rights reserved.
1. Introduction For design and safety analysis of nuclear reactors, the application of best-estimate codes in combination with established uncertainty evaluation methods has recently been pursued in order to propagate input uncertainties through the calculation sequence and thus to yield calculation results and their attributed uncertainties. Input uncertainties arise, for example, from nuclear data, geometrical parameters and material parameters, and their impact on the output uncertainty can be assessed through the application of linear perturbation theory or random sampling as the most commonly used approaches to date (Bostelmann et al., 2018). The linear perturbation theory approach is mainly used in stand-alone neutron transport calculations, e.g. criticality problems. The random sampling approach is beneficial in the field of coupled reactor core simulations which often consist of calculations that build on one another. In recent years, especially within the framework of the ‘‘Benchmark for Uncertainty Analysis in Modeling for Design, Operation
* Corresponding author at: Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) gGmbH, Boltzmannstrasse 14, 85748 Garching bei München, Germany. E-mail address: [email protected] (A. Aures).
https://doi.org/10.1016/j.anucene.2019.107146
0306-4549/© 2019 Elsevier Ltd. All rights reserved.
and Safety Analysis of LWRs, Phase I" (Ivanov et al., 2016) (LWR-UAM) of the OECD/NEA, uncertainty quantification and sensitivity analyses (UQ/SA) have been successfully performed for the successive steps of a typical light-water reactor calculation sequence, starting with cell physics, followed by lattice physics, and reaching core physics (Aures et al., 2018). In the last couple of years, first UQ/SA analyses with respect to nuclear data uncertainties have been performed on reactor transient simulations (Dokhane et al., 2018). Such analyses have also recently been carried out in Phase II of the LWR-UAM benchmark. This part of the benchmark focuses on UQ/SA analysis of various reactor kinetic and dynamic problems to assess the impact of input uncertainties on the outcome of coupled multi-physics calculations (Hou et al., 2014). In this study, we propagate nuclear data uncertainties through the analysis of two control rod withdrawal transients of a PWR mini-core model with reactivity insertions of 0.5$ and 0.97$, respectively (1$ denotes a reactivity insertion equal to the effective delayed neutron fraction). The transient simulations follow the 2-step approach: first, the generation of homogenised few-group cross sections and, second, the coupled multi-physics calculation. We discuss the development over time of the uncertainties of both the reactor power and the reactivity by means of statistical measures. In general,
the assumption of normality is valid for most output quantities of interest and the analysis of variance can be carried out via classical tolerance limits. However, in this study we aim to investigate complete output time series for particular quantities of interest in order to study the complex transient behaviour. The distributional patterns of the output sample may vary over the time series and the assumption of normality may be violated. For this reason, we apply the Wilks tolerance limits as a non-parametric (aka distribution-free) approach to analyse the uncertainty patterns of the output time series. In the following we outline how the random sampling-based XSUSA method in combination with the coupled neutron-kinetic/thermal-hydraulic code system DYN3D-ATHLET is applied for UQ/SA analysis of the PWR mini-core transients. By means of tolerance limits we explain the uncertainty analysis for data following a normal distribution and for data with an unknown underlying distribution. Moreover, we briefly describe the sampling-based sensitivity analysis via the squared multiple correlation coefficient. The investigated scenarios are specified in detail by the employed mini-core model and the considered transients. Finally, the results of the UQ/SA analysis for the reactor power and the reactivity are presented and discussed.
2. Methodology and tools used for the analysis The analysis involves several successive stages: i) the generation of N randomly sampled 2-group cross section sets along with kinetic parameters for K assembly types parametrised with L combinations of thermal-hydraulic grid points (Fig. 1), ii) the execution of N coupled neutron-kinetic/thermal-hydraulic (NK/TH) calculations (Fig. 2), and iii) the uncertainty and sensitivity analysis on the N resulting data sets.
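The bookkeeping of stage i) can be sketched as follows; the function name and the assembly-type labels are hypothetical stand-ins (the actual invocations of TRITON/NEWT and DYN3D-ATHLET are described below), and the grid-point count is only an example.

```python
from itertools import product

def plan_lattice_jobs(n_samples, assembly_types, n_th_points):
    """Stage i): one lattice calculation per combination of cross
    section sample, assembly type and thermal-hydraulic grid point,
    tabulating the varied 2-group cross sections."""
    return [(i, k, l) for i, k, l in product(range(n_samples),
                                             assembly_types,
                                             range(n_th_points))]

# e.g. N = 1000 samples, K = 3 assembly types, L = 27 grid points
jobs = plan_lattice_jobs(1000, ["UO2", "UO2_CR", "Refl"], 27)
```

Each resulting tuple identifies one lattice calculation; the N coupled NK/TH runs of stage ii) then consume one cross section set per sample index.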
Fig. 2. Execution of N NK/TH calculations with varied 2-group cross sections, and subsequent UQ/SA analysis of the N resulting data sets.
For the first step, we generate the varied 2-group cross sections with the random sampling method XSUSA (Cross Section Uncertainty and Sensitivity Analysis) (Zwermann et al., 2009) in combination with the lattice physics sequence TRITON/NEWT of the SCALE 6.1 code package (Oak Ridge National Laboratory, 2011). The transient analyses of the N sample cases are carried out with the NK/TH code system DYN3D-ATHLET (Rohde et al., 2016; Lerchl et al., 2016). The sample NK/TH calculations only differ in terms of the cross section library (red, Fig. 2) due to the variation of the cross sections, but not in terms of the calculation model (blue, Fig. 2), namely the neutron-kinetic description and the thermal–hydraulic description. In addition to the sample calculations, a nominal calculation is done with unperturbed cross section data. For the efficient application of the traditional 2-step approach for transient analysis, we use the core simulator KMACS (Zilly and Périn, 2018). 2.1. Uncertainty quantification with XSUSA
Fig. 1. Generation of varied 2-group cross sections (XS) by combining the lattice calculation sequence TRITON/NEWT of SCALE 6.1 (blue) and the cross section random sampling method XSUSA (red), with K: number of assembly types, L: number of thermal–hydraulic grid points, and N: sample size. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
XSUSA (Zwermann et al., 2009), developed by GRS, allows the quantification of output uncertainties of nuclear reactor calculations with respect to nuclear data uncertainties. XSUSA is based on random sampling, with the nuclear reaction data as input parameters. These are sampled taking into account the correlations between them; the variances and correlations are available in the covariance data accompanying the nominal values of the cross sections in evaluated nuclear data libraries. Since these libraries lack information about the distributional type, a normal distribution with appropriate cut-offs is assumed for the sampling. For each realisation of the input sample, a nuclear reactor calculation is performed in which the varied input parameters are propagated through the calculation sequence, leading to a corresponding individual set of output quantities. A statistical analysis of the output quantities of interest is performed in order to obtain appropriate uncertainty estimators. In this work, XSUSA is used in combination with the TRITON/NEWT sequence of the SCALE 6.1 code package for the generation of varied 2-group cross section sets. Correspondingly, the ENDF/B-VII.0 238-group cross section library and the 44-group covariance library of the SCALE 6.1 code package are used. It is noted that with SCALE 6.2, in addition to the 44-group covariance library, a more modern covariance library with 56 energy groups, mainly based on ENDF/B-VII.1, is available. One of the main differences is a
substantial reduction of the 239Pu multiplicity (henceforth, for brevity, we use the term "multiplicity" for the average number of neutrons per fission). Since the system under consideration in the present investigation does not contain plutonium, the uncertainties obtained with the older 44-group data can be regarded as representative. Within the XSUSA approach, nuclear data uncertainties are propagated through the complete reactor calculation sequence, with one exception: the nuclear data is varied after the self-shielding calculation, thus implicit effects are not taken into account. In many comparisons with models of a broad variety of spectral conditions, the neglect of implicit effects turned out to have only a minor influence on the investigated integral output uncertainties (Bostelmann et al., 2015); hence this approximation appears to be justified. Since, in contrast to a steady-state system, the time-dependent behaviour of the reactor core is influenced by the delayed neutron fraction, which in turn is strongly affected by the delayed neutron multiplicities, the uncertainties of the delayed neutron multiplicities should also be propagated through the 2-group cross section generation. The 44-group covariance library of the SCALE 6.1 code package does not include individual uncertainties of the prompt and the delayed neutron multiplicities, but only of the total multiplicity. In this study, the uncertainties of the total neutron multiplicity are applied only to the prompt neutron multiplicity, and uncertainties for the delayed neutron multiplicity are additionally taken from the covariance data of the JENDL-4.0 (Shibata et al., 2011) continuous-energy cross section library (Aures et al., 2017). For this purpose, the covariance data is processed and converted into a 44-group library with energy group boundaries according to the SCALE 6.1 covariance library.
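The correlated sampling step with cut-offs can be illustrated with a minimal sketch, assuming normal distributions truncated at a ±3σ band. The covariance values below are invented for illustration and are not taken from the SCALE or JENDL libraries.

```python
import numpy as np

def sample_nuclear_data(mean, cov, n_samples, cutoff_sd=3.0, seed=0):
    """Draw correlated normal samples of grouped nuclear data,
    rejecting any draw outside the +/- cutoff_sd band (a simple
    stand-in for the cut-offs mentioned in the text)."""
    rng = np.random.default_rng(seed)
    sd = np.sqrt(np.diag(cov))
    accepted = []
    while sum(len(a) for a in accepted) < n_samples:
        draw = rng.multivariate_normal(mean, cov, size=n_samples)
        keep = np.all(np.abs(draw - mean) <= cutoff_sd * sd, axis=1)
        accepted.append(draw[keep])
    return np.concatenate(accepted)[:n_samples]

# toy 2-group cross section with 5 % relative uncertainty and a
# strong positive correlation between the groups (invented numbers)
mean = np.array([10.0, 1.0])
corr = np.array([[1.0, 0.8], [0.8, 1.0]])
sd = 0.05 * mean
cov = corr * np.outer(sd, sd)
samples = sample_nuclear_data(mean, cov, 5000)
```

Each row of `samples` would correspond to one input realisation, i.e. one perturbed cross section library fed to the lattice calculation.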
2.2. Transient analysis with DYN3D-ATHLET

The NK/TH code system DYN3D-ATHLET consists of the reactor dynamics code DYN3D (Rohde et al., 2016), developed by HZDR, and the thermal-hydraulic system code ATHLET (Lerchl et al., 2016), developed by GRS. The code coupling is based on the internal coupling technique: the neutron-kinetics model of DYN3D is applied, and the thermal-hydraulics of either the core, the primary circuit, or the whole nuclear power plant is simulated by ATHLET. During a coupled simulation, ATHLET integrates over a time step and provides the distributions of the fuel temperature, the moderator density and the moderator temperature to DYN3D. Subsequently, DYN3D interpolates the few-group cross sections according to the actual thermal-hydraulic values, performs the time integration, and returns the power distribution to ATHLET. DYN3D allows steady-state and transient analysis of light water reactors covering square and hexagonal assembly geometries. Here, the diffusion solver is applied with 2-group cross sections. The suitability of this method for the analysis of small reactor cores is demonstrated in Knebel et al. (2016). The time step size for the neutron flux integration is automatically adapted in the range 10⁻⁵ s to 10⁻² s. Nevertheless, the coupling interface ensures that the neutron-kinetic time step size does not exceed that of the thermal-hydraulic calculation. ATHLET has been developed for the analysis of operational and abnormal conditions in nuclear power plants. In this study, the ATHLET model simulates only the thermal-hydraulics of the core. Each fuel assembly is modelled as a parallel, independent fluid channel with fixed boundary conditions, i.e. the inlet temperature, the outlet pressure and the mass flow are constant. The time step size is automatically adapted but limited to a maximum of 5·10⁻⁴ s in the power excursion phase.
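The data exchange between the two solvers can be illustrated with a deliberately simplified stand-in: a lumped fuel-temperature model and the prompt-jump approximation replace ATHLET and DYN3D, and all coefficients (Doppler coefficient, temperature rise per MW, delayed neutron fraction) are hypothetical round numbers; only the initial power of 0.1409 MW is taken from the model description below.

```python
def prompt_jump_power(p0, beta, rho):
    # prompt-jump approximation; valid only for rho < beta
    return p0 * beta / (beta - rho)

def coupled_fixed_point(p0=0.1409, beta=0.0065, rho_ext=0.5 * 0.0065,
                        alpha=-2.0e-5, dT_per_MW=400.0, n_iter=200):
    """Alternate toy 'TH' and 'NK' updates until the power with
    Doppler feedback settles; this mimics the coupling interface,
    not the DYN3D/ATHLET physics."""
    power = p0
    for _ in range(n_iter):
        dT = dT_per_MW * power        # "TH step": fuel temperature rise
        rho = rho_ext + alpha * dT    # feedback-corrected reactivity
        power = prompt_jump_power(p0, beta, rho)  # "NK step"
    return power
```

With these invented coefficients, a 0.5$ insertion settles to a power between the initial value and the uncontrolled prompt-jump level, illustrating how the negative Doppler feedback closes the loop between the two codes.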
2.3. Uncertainty analysis

The N NK/TH simulations are performed based on the N randomly sampled, varied 2-group cross section sets. In order to quantify the influence of the input uncertainties on the simulation model, a statistical analysis can be performed on the N resulting output data sets with respect to any quantity of interest (QOI). Let the variable Y represent an uncertain scalar QOI and let y₁, …, y_N denote a sample of independent output values (i.e. realisations) derived for Y via N simulations. A general and descriptive uncertainty measure is provided by the estimators of tolerance limits (TLs). The TLs are given by the lower limit L and the upper limit U of an interval [L, U] defined such that the TLs enclose a specific proportion β of the values of the variable Y at a specific statistical confidence level γ. The general definition of a two-sided TL (i.e. a tolerance interval) can be formulated as a nested probability expression via

P( P(Y ∈ [L, U]) ≥ β ) ≥ γ.   (1)
The variability of the TLs from sample to sample is taken into account by the confidence level γ: the confidence parameter γ is the probability, over all samples of size N, that the respective TLs cover a proportion of at least β of the values (β is also called the coverage probability or tolerance proportion). The definition can be adapted to one-sided TLs by setting L → −∞ for an upper TL or U → +∞ for a lower TL. Consequently, the TLs depend on the parameters sample size N, proportion β and confidence γ. In general, TLs are important in the context of complex, time-expensive simulation runs for which it is not feasible to perform a large set of Monte-Carlo (MC) simulations (Patel, 1986; Krishnamoorthy and Mathew, 2009). The TLs allow the determination of the minimally required sample size N for a desired proportion β and confidence γ, commonly (β, γ) = (95%, 95%). Thus, TLs offer an objective criterion to reduce the size of the required MC samples for uncertainty analyses of complex simulation codes. However, in this study we investigate a relatively large output sample (y₁, …, y_N) of size N = 1,000, and the distributional patterns of the QOI series can vary across the problem time. Hence, we are interested in another aspect of the TL framework: the Wilks TLs offer robust uncertainty estimators independent of the particular distribution underlying the output sample and allow an uncertainty analysis of QOIs via a distribution-free approach (Patel, 1986; Porter, 2019). In the following, we explain TLs for normally distributed data as well as the distribution-free Wilks TLs, in order to describe their intuitive meaning and their properties for a large number of considered simulations N.

2.3.1. Tolerance limits for normally distributed data

Often, it may be reasonable to assume that an output variable Y of a coupled NK/TH calculation follows a normal distribution.
For large sample sizes N ≥ 1,000, a suitable uncertainty estimator is commonly considered via the confidence interval

ȳ ± 2s   (2)

given by the sample mean

ȳ = (1/N) Σ_{i=1}^{N} y_i   (3)

as an unbiased estimator of the mathematical expectation, and the sample variance

s² = 1/(N−1) Σ_{i=1}^{N} (y_i − ȳ)²   (4)

indicating the estimation uncertainty. Due to the finite sample size of the MC simulations, this estimator comprises uncertainty itself.
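Eqs. (2)–(4) can be evaluated with the Python standard library; the synthetic normal data below merely stand in for the N power values of the actual simulations.

```python
import random
import statistics

random.seed(1)
# synthetic stand-in for N = 1000 reactor-power outputs (MW)
ys = [random.gauss(0.45, 0.02) for _ in range(1000)]

y_bar = statistics.fmean(ys)                 # Eq. (3): sample mean
s = statistics.stdev(ys)                     # square root of Eq. (4)
interval = (y_bar - 2 * s, y_bar + 2 * s)    # Eq. (2): 2s interval
```

Note that `statistics.stdev` uses the N−1 denominator of Eq. (4), i.e. the unbiased sample variance.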
In general, the confidence interval of the population mean indicates the uncertainty of the simulation response in the presence of uncertain input parameters. For the purpose of understanding the reformulation between the different uncertainty limits, we focus here on the upper limit defined as
ȳ + k·s.   (5)
Thus, the upper confidence limit of the population mean can be calculated via the factor
k = t_{(N−1, γ)} / √N   (6)
where t_{(N−1, γ)} refers to the quantile of the t-distribution parametrised by N−1 degrees of freedom and the confidence γ. The upper confidence limit of the population mean can be generalised to an upper TL, defined via the factor
k = t_{(N−1, γ, Φ⁻¹(β)√N)} / √N   (7)

using a non-central t-distribution with non-centrality parameter Φ⁻¹(β)√N, where Φ⁻¹ denotes the inverse standard normal distribution function evaluated at the coverage probability β (Wald and Wolfowitz, 1946; Howe, 1969; Krishnamoorthy and Mathew, 2009). The notion of generalisation can be exemplified by considering again the two-sided TL with (β, γ) = (95%, 95%) for different sample sizes N. For a large sample of N = 1,000, the tolerance interval is parametrised by k = 2.04, approaching the commonly used confidence interval of Eq. (2). For N → ∞, the tolerance interval is parametrised by k = 1.96, i.e. the confidence interval of the population mean at significance level γ = 0.95. The assumption of normality may not be valid for the development of the QOI realisations (y₁, …, y_N) over the complete problem time. This might particularly be expected for complex system behaviour, monitored via the QOI time series. Since we aim to perform a comprehensive uncertainty analysis in the case of complex system behaviour, we want to employ a general non-parametric uncertainty measure that enables the comparison of tolerance limits over time for multiple QOIs.

2.3.2. Distribution-free tolerance limits

The uncertainty analysis of complete output time series of a QOI is challenging, since the assumption of a particular distributional type is often not valid. Therefore, distribution-free (aka non-parametric) approaches to TLs have to be applied. Basically, two main strategies are used to overcome the parametric limitations: resampling-based methods such as bootstrapping of TLs (Rebafka et al., 2007), and analytical approaches such as the Wilks TLs (WTLs) (Wilks, 1942; Wald, 1943; Porter, 2019). The focus of this study lies on the investigation of potentially complex time series of multiple QOIs. In general, bootstrapping approaches exhibit a convergence behaviour of the estimated TLs that varies with the underlying distributional patterns of the output sample.
To avoid this inherent performance dependence, we consider the distribution-free WTLs as the appropriate uncertainty measures to analyse complex time series of the output realisations. The WTL is based on the idea that the simulations may be considered as independent trials of an experiment with an underlying unknown continuous distribution. Given the success probability, each trial results either in success or in failure, as captured by the Binomial distribution. In the context of WTLs, a success is interpreted as a simulation output falling, with a certain statistical confidence γ, into an interval with a success probability related to the coverage proportion β. Hence, let y₁, …, y_N be N independent output values of a scalar QOI Y and suppose that nothing is known about the probability distribution except that it is continuous.
The individual values y_i for i = 1, …, N are ranked, i.e. arranged in increasing order, and denoted by y(k), such that

y(1) = min_{1≤k≤N} y_k   (8)

and

y(N) = max_{1≤k≤N} y_k.   (9)
By defining the boundary values y(0) = −∞ and y(N+1) = +∞, it can be shown that the confidence level γ can be expressed analytically via the Binomial distribution as

γ = Σ_{j=0}^{s−r−1} C(N, j) β^j (1 − β)^{N−j}   (10)

where C(N, j) denotes the binomial coefficient, with ranks 0 ≤ r ≤ s ≤ N, lower TL L = y(r) and upper TL U = y(s). As a first-order approach to the two-sided WTL, the basic parameters are set to r = 1 and s = N. By using the formalism of series expansion, the functional dependency can be resolved to calculate the required number of samples N for a desired WTL specified by the set (β, γ) of tolerance proportion β and statistical confidence γ. Conversely, for a fixed sample size N and a demanded WTL specified via (β, γ), the lower TL L = y(r) and the upper TL U = y(s) can be derived. For instance, for sample size N = 1,000, the two-sided (95%, 95%) WTL is given by the 18th lowest and the 18th highest simulation output, i.e. with respect to the ranked QOIs by the interval [y(18), y(983)], with an actual confidence γ = 0.957. For the one-sided (95%, 95%) WTL, we obtain the 962nd ranked value y(962). Note that this ranking has to be carried out at each computed time step of the output time series sample. An important property of the WTLs is the dependency of the theoretical WTL behaviour on the sample size N, referred to as the conservativity profile. For clarity, the conservativity profile is explained via the one-sided, upper WTL for sample sizes N that provide precise TLs of (β, γ) = (95%, 95%), and presented via the complementary cumulative distribution function (ccdf) and the probability density function (pdf) (Fig. 3). The conservativity profile of the WTL illustrates at what confidence γ the respective
Fig. 3. Conservativity profiles of the one-sided, upper Wilks tolerance limit of tolerance proportion β = 0.95 and statistical confidence γ = 0.95 for varying sample sizes N. The profiles are presented via the complementary cumulative distribution function (ccdf) and via the probability density function (pdf).
WTL may exceed high quantiles β > 0.95 or may fall below low quantiles β < 0.95. For instance, for N = 59, the 0.99 quantile may be enclosed by the WTL with a statistical confidence of 0.45, while the WTL may fall below the 0.94 quantile with a statistical confidence of 0.03. For an increasing sample size N, the statistical confidence γ for the WTL to exceed the target β decreases, i.e. the conservativity is reduced. Meanwhile, the statistical confidence γ for the WTL to fall below the target β likewise decreases. In other words, for an increasing sample size N the WTLs become less conservative with respect to ensuring β > 0.95 for each output realisation (as visualised by the ccdf representation), but more accurate with respect to ensuring β ≈ 0.95 on average (as visualised by the pdf representation).
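Eq. (10) and the ranks and sample sizes quoted above can be reproduced with a few lines of standard-library Python; this sketch only covers the one-sided upper limit (r = 0) and the minimal-N search.

```python
from math import comb

def wilks_confidence(n, beta, r, s):
    # Eq. (10): confidence that [y(r), y(s)] covers at least a
    # proportion beta of the unknown continuous distribution
    return sum(comb(n, j) * beta**j * (1.0 - beta)**(n - j)
               for j in range(s - r))

def one_sided_rank(n, beta=0.95, gamma=0.95):
    # smallest order statistic y(s) giving a one-sided upper WTL,
    # i.e. r = 0 with the boundary value y(0) = -infinity
    for s in range(1, n + 1):
        if wilks_confidence(n, beta, 0, s) >= gamma:
            return s
    return None

def wilks_min_n(beta=0.95, gamma=0.95):
    # smallest N for which the sample maximum y(N) is an upper WTL:
    # its confidence is 1 - beta**N
    n = 1
    while 1.0 - beta**n < gamma:
        n += 1
    return n
```

For N = 59 the upper WTL is the sample maximum, so the conservativity figures above follow directly: the confidence of exceeding the 0.99 quantile is 1 − 0.99^59 ≈ 0.45, and that of falling below the 0.94 quantile is 0.94^59 ≈ 0.03.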
by non-linear effects: this can be assumed if the total R² deviates significantly from 1, since, although all input parameters are considered, the output variance cannot be fully explained by their joint contribution (Bostelmann et al., 2018). However, in the case of cross section uncertainty propagation, the total number of input parameters can be very large. For example, if 50 or more nuclides are involved, with 2–10 reactions per nuclide and about 44 energy groups per microscopic cross section, the number of input parameters may be about 20,000, and the number of independent parameters in the order of 10,000. Consequently, the analysis of the total R² would require a sample size that may be impractical for computationally intensive calculations.

3. Model and transients description
2.4. Sensitivity analysis

The major contributors among the varied input parameter groups, i.e. nuclide cross sections, to the output variance are identified by determining the squared multiple correlation coefficient R². The coefficient of each parameter group is calculated from correlations between the output quantity and the sampled input parameters of the respective group, while taking into account correlations between these input parameters. In this way, and under the assumption of linearity, R² indicates the relative amount of the output variance resulting from uncertainties of an input parameter group, including the fraction of uncertainties due to dependencies with other input parameter groups (Bostelmann et al., 2018; Glaeser et al., 2012). By determining the R² coefficients for all possible input parameter groups and subsequently ranking them, it is possible to identify the major contributors to the output variance. For instance, let an input parameter be represented as a cross section Σᵢ in multi-group representation for i = 1, …, k, where k is the number of independently sampled parameters of this group, i.e. k ≤ number of energy groups. The coefficient R²_j for the variance of the reactor power P resulting from an input parameter group Σ(j) is calculated as follows:
R²_j = ( ρ(P, Σ₁), …, ρ(P, Σ_k) ) C⁻¹_{Σ(j)} ( ρ(P, Σ₁), …, ρ(P, Σ_k) )ᵀ

where ρ(P, Σᵢ) refers to the sample Pearson correlation coefficient between the reactor power P and the particular cross section Σᵢ, and
This section provides a description of the mini-core model and the simulated transients.

3.1. Mini-core model

The reactor core model used in this study resembles a pressurised water reactor. It is based on the TMI-1 mini-core model specified as Case II-2b in Phase II of the LWR-UAM benchmark (Hou et al., 2014). Nine identical UO2 fuel assemblies are arranged in a 3×3 lattice and thus build the active part of the core (Fig. 4). The fuel lattice is surrounded by one row of reflector assemblies which are composed of water and a stainless steel shroud. Axial reflectors are not specified and therefore not modelled. The active core length is approximately 365 cm. The fuel assembly in the core center contains 16 AgInCd control rods. The UO2 fuel assemblies consist of 204 UO2 fuel rods, 4 UO2+Gd2O3 fuel rods, 16 guide tubes, and 1 instrumentation tube (Fig. 5). The UO2 fuel has a 235U enrichment of 4.85%. The UO2+Gd2O3 fuel has a 235U enrichment of 4.12% and contains 2.00 wt.% Gd. The cladding tubes, the guide tubes, and the instrumentation tube are made from Zircaloy-4. The fuel assemblies are modelled as infinite-lattice models. The reflector model consists of two parts, a reflector region and a fuel assembly; the purpose of the fuel assembly is to establish a fission source. Except for the outer boundary of the reflector region, which is specified with a vacuum boundary condition, all other outer boundaries are reflective.
C⁻¹_{Σ(j)} refers to the inverse of the sample correlation matrix of size k × k containing the sample Pearson correlation coefficients between the sampled input parameters Σᵢ of the input parameter group Σ(j). Note that the sample correlation matrices may be singular due to complete dependencies between the input parameters, in which case they cannot be inverted and an appropriate reduction of the correlation matrix is necessary. A further requirement for the calculation of R² of an individual parameter group is that the input sample size be significantly larger than the number of independent input parameters in the considered group. Moreover, since R² is a descriptive statistical measure, its interpretation should ideally be made with respect to its corresponding significance bound and confidence interval (Bostelmann et al., 2018). Besides the determination of the R²_j coefficients for each individual input parameter group Σ(j), in principle a total R² coefficient can be determined from a parameter group that contains all the input parameters of the model under investigation. From the total R², it can then be concluded whether the output variance is affected
Fig. 4. Core layout of the TMI-1 mini-core model composed of ’UO2’: uncontrolled fuel assemblies, ’UO2 CR’: controlled fuel assembly and ’Refl’: reflector assemblies (Hou et al., 2014).
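The R²_j computation of Section 2.4 can be sketched with NumPy on synthetic data; the linear toy model, its coefficients and its noise level are invented, so the resulting R² value has no physical meaning.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy parameter group: k = 3 correlated 'cross sections', and a
# 'power' response that is linear in them plus independent noise
n, k = 1000, 3
cov = 0.6 * np.eye(k) + 0.4 * np.ones((k, k))    # invented correlations
sigma = rng.multivariate_normal(np.zeros(k), cov, size=n)
power = sigma @ np.array([1.0, 0.5, 0.2]) + rng.normal(0.0, 0.5, size=n)

# correlations between the output and each input of the group
rho = np.array([np.corrcoef(power, sigma[:, i])[0, 1] for i in range(k)])
# inverse of the k x k sample correlation matrix of the inputs
C_inv = np.linalg.inv(np.corrcoef(sigma, rowvar=False))
r2 = rho @ C_inv @ rho    # squared multiple correlation coefficient
```

If the sample correlation matrix is singular (complete dependencies between inputs), the inversion fails and the matrix has to be reduced first, as noted in Section 2.4.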
whereas Transient 2 approaches prompt criticality. After a 2 s long zero transient, during which the power distribution is kept fixed, and 2 s of settling time, the control rod is withdrawn from its fully inserted position within 10 s. Afterwards, within the next 20 s, it is moved back to the fully inserted position. We set a fuel temperature of 560 K, a moderator density of 0.75206 g/cm³, and a reactor power of 0.1409 MW as initial conditions. Furthermore, we force the reactor core to be critical in the initial state by evaluating the critical boron concentration prior to the actual NK/TH calculation. Note that in the original benchmark specification (Hou et al., 2014), this exercise is defined as a purely neutron-kinetic calculation without thermal-hydraulic feedback and with an alteration of the control rod position of only 5 cm.

4. Results
Fig. 5. Layout of one quarter of the controlled UO2 assembly of the core layout of the TMI-1 mini-core model shown in Fig. 4.
The results of the NK/TH calculations with varied nuclear data for both Transient 1 and Transient 2 are statistically analysed with respect to the QOIs reactivity and reactor power. These results along with appropriate TLs are presented in the following sections. The UQ/SA analyses are done with a sample size of 1,000. Additionally, the results of the nominal calculations are presented. 4.1. Uncertainty analysis of the critical boron concentration
Table 1
Thermal-hydraulic grid points for the parametrisation of the 2-group cross section sets.

Fuel temperature [K] | Moderator density [g/cm3] | Boron concentration [ppm]
560                  | 0.66114                   | 0
900                  | 0.71187                   | 1000
1320                 | 0.75206                   | 2000
Table 1 lists the thermal-hydraulic grid points at which the 2-group cross sections are determined.
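During the coupled calculation, DYN3D interpolates the tabulated 2-group cross sections at the actual thermal-hydraulic state. Along one grid dimension this amounts to clamped piecewise-linear interpolation, sketched here with an invented absorption cross section tabulated at the fuel temperatures of Table 1 (the values 0.0100–0.0106 are purely illustrative, and the actual DYN3D interpolation scheme may differ).

```python
def interp1(x, xs, ys):
    """Piecewise-linear interpolation on a sorted 1-D grid,
    clamped to the end values outside the grid."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(len(xs) - 1):
        if x <= xs[i + 1]:
            w = (x - xs[i]) / (xs[i + 1] - xs[i])
            return (1 - w) * ys[i] + w * ys[i + 1]

# fuel-temperature grid points from Table 1, invented cross sections
fuel_T = [560.0, 900.0, 1320.0]
sigma_a = [0.0100, 0.0103, 0.0106]
val = interp1(730.0, fuel_T, sigma_a)
```

In the full scheme this lookup is carried out over all three parameters of Table 1 (fuel temperature, moderator density, boron concentration), e.g. by nesting the one-dimensional interpolation.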
The NK/TH calculation of each input sample realisation begins with the determination of the critical boron concentration to ensure that the transient starts from criticality. An individual critical boron concentration is obtained for each realisation as a consequence of the varied 2-group cross sections. For Transient 1, Fig. 7 shows the distribution of the critical boron concentration of the complete sample, together with the estimated normal distribution N(μ̂, σ̂²) with estimated mean μ̂ = 1252 ppm and estimated standard deviation σ̂ = 56 ppm. The critical boron concentration of the nominal calculation is 1251 ppm. Identical results are obtained for Transient 2, because the only difference from Transient 1 is the different alteration of the control rod position.
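The criticality search at the start of each sample calculation can be illustrated by a simple bisection on the boron concentration; `k_eff_of_boron` is a hypothetical stand-in for a full steady-state NK/TH solve, here a linear toy model tuned to be critical near the nominal 1251 ppm (the 10 pcm/ppm slope is invented).

```python
def critical_boron(k_eff_of_boron, lo=0.0, hi=3000.0, tol=0.1):
    """Bisection for the boron concentration (ppm) with k_eff = 1,
    assuming k_eff decreases monotonically with boron content."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if k_eff_of_boron(mid) > 1.0:
            lo = mid          # still supercritical: more boron needed
        else:
            hi = mid
    return 0.5 * (lo + hi)

# toy model: about 10 pcm of reactivity per ppm of boron
boron = critical_boron(lambda c: 1.0 + 1.0e-4 * (1251.0 - c))
```

In the sampled calculations, each perturbed cross section set shifts this curve, which is what produces the spread of critical boron concentrations shown in Fig. 7.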
3.2. Transients

In this work, we perform NK/TH calculations in order to analyse two transients that are driven by an alteration of the control rod position. In the first transient, here denoted as Transient 1, the control rod is moved by 54 cm, and in the second, denoted as Transient 2, by 60 cm (Fig. 6). The particular choices of the withdrawal distance result in two characteristically different transient evolutions: Transient 1 is sub-prompt critical
Fig. 6. Control rod position over time for the analysed transients.
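The rod movement plotted in Fig. 6 can be written as a piecewise function of time, using the timings given in the text (2 s zero transient, 2 s settling, 10 s withdrawal, 20 s reinsertion); linear motion between the end points is an assumption for this sketch.

```python
def rod_position(t, stroke=54.0):
    """Control rod withdrawal in cm from the fully inserted position;
    stroke = 54 cm for Transient 1, 60 cm for Transient 2."""
    if t < 4.0:                                # zero transient + settling
        return 0.0
    if t < 14.0:                               # withdrawal over 10 s
        return stroke * (t - 4.0) / 10.0
    if t < 34.0:                               # reinsertion over 20 s
        return stroke * (1.0 - (t - 14.0) / 20.0)
    return 0.0                                 # fully inserted again
```

The withdrawal ending at t = 14 s is consistent with the reactivity maximum reported near 14.005 s in Section 4.2.1.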
4.2. Uncertainty and sensitivity analysis of transients 4.2.1. Transient 1 The development over time of the reactivity triggered by the control rod movement is shown in Fig. 8 for both the nominal calculation and the sample calculations. Reactivity here refers to
Fig. 7. Distribution of the critical boron concentration at the initial state and the estimated normal distribution N(μ̂, σ̂²) with μ̂ = 1252 ppm and σ̂ = 56 ppm for Transient 1.
Fig. 8. Nominal reactivity and sample mean reactivity with 2σ-confidence interval for Transient 1. Note that the sample mean curve congruently covers the nominal curve.
the dynamic reactivity calculated with the adjoint neutron flux (Rohde et al., 2016). The reactivity of the nominal calculation reaches a maximum of 0.508$ at 14.005 s. The maximum of the sample mean occurs at the same time, reaching a value of 0.51$ with a relative uncertainty of 8.26%. Fig. 9 shows the development over time of the nominal reactor power, and of the mean and the median of the power output sample. The nominal power has a maximum of 0.452 MW at 14.595 s. Afterwards, it gradually decreases and reaches 0.221 MW at 65 s. The trend of the sample median is very similar to the nominal case. The sample mean lies above the sample median; it has a maximum of 0.462 MW at the same time as the nominal calculation and gradually decreases to 0.223 MW. For comparison purposes, the plot also shows the 2σ-confidence interval for normally distributed output values, as well as the one-sided, upper (95%, 95%) WTL and the two-sided (95%, 95%) WTL as distribution-free tolerance limits. The two-sided WTL encloses almost the same area as the 2σ-confidence interval, but it is shifted towards larger power values, particularly at the maximum. This behaviour of the two-sided WTL is consistent with the theoretical conservativity profile (Fig. 3), indicating a relatively high conservativity of the applied non-parametric tolerance limits. The one-sided WTL lies below the two-sided WTL, but still encloses the 2σ-confidence interval.
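The order statistic that realises a one-sided Wilks tolerance limit for a given sample size can be determined directly from the binomial distribution: the k-th smallest of n sampled outputs exceeds the γ-quantile with confidence B(k−1; n, γ), where B is the binomial CDF. The sketch below shows this standard construction for the one-sided case; the function name is ours, not from any particular code:

```python
from scipy.stats import binom

def wilks_order_index(n, coverage=0.95, confidence=0.95):
    """Smallest rank k such that the k-th smallest of n sampled outputs
    is a valid one-sided upper tolerance limit, i.e.
    P(X_(k) >= coverage-quantile) = binom.cdf(k - 1, n, coverage)
    is at least the requested confidence. Returns None if no rank of
    the sample suffices (n too small)."""
    for k in range(1, n + 1):
        if binom.cdf(k - 1, n, coverage) >= confidence:
            return k
    return None

# Classic first-order Wilks result: the sample maximum of n = 59 runs
# is a (95%, 95%) one-sided upper tolerance limit.
print(wilks_order_index(59))    # → 59
# With the sample size of 1,000 used in this study, a rank well below
# the maximum suffices, making the limit far less sensitive to outliers.
print(wilks_order_index(1000))
```

The two-sided limit follows the same logic with two symmetric order statistics; its higher conservativity relative to the 2σ interval is what the comparison in Fig. 9 illustrates.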
Fig. 9. Nominal power, and mean, median, 2σ-confidence interval and the one-sided (95%, 95%) WTL and the two-sided (95%, 95%) WTL of the power output sample for Transient 1. The marked time points t1 and t2 are used for the investigation of the distributional patterns of the power output sample.
However, the distributional properties of the power output values vary across the problem time, see Fig. 10. By visual inspection, the distribution of the power output values at t1 appears to follow a normal distribution, whereas at t2 the distribution is clearly right-skewed. To confirm any deviation from the normality assumption, the Anderson–Darling test statistic is applied to the power output sample at each time point of the time series. The hypothesis test provides statistical evidence that the power output values at individual time steps after about 13 s do not follow a normal distribution (Fig. 11). Hence, the 2σ-confidence interval may not be an appropriate uncertainty measure, and the one-sided or the two-sided WTL has to be applied instead. Note that for the studied transient the 2σ-confidence interval that relies on normally distributed data is in good agreement with the distribution-free WTL. For the time interval 4.515 s – 24.365 s enclosing the control rod movement, the development of the R² coefficients of the major contributors to the power uncertainty is shown in Fig. 12. During the withdrawal of the control rod, significant changes of the R² coefficients are observed, while they remain almost constant during the control rod reinsertion. From the beginning of the control rod withdrawal, the uncertainty of the 235U delayed neutron multiplicity dominates the power uncertainty, while gradually decreasing. At the same time, the contributions from the 238U scattering cross sections grow and, from approximately 11.5 s on, provide the dominant contributions. To provide a physical explanation of this pattern, further research with other methods would be necessary.
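The R² coefficients used here are squared multiple correlation coefficients: the fraction of the output variance explained by a linear regression of the output on a group of sampled input parameters. A minimal illustration with synthetic data (a toy stand-in, not the actual XSUSA sample):

```python
import numpy as np

def r_squared(X, y):
    """Squared multiple correlation coefficient of y on the columns of X:
    the fraction of output variance explained by a linear fit."""
    A = np.column_stack([np.ones(len(y)), X])     # add intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - resid.var() / y.var()

# Toy model: output driven by two 'nuclide reaction' parameters plus
# a small amount of noise.
rng = np.random.default_rng(2)
x1, x2 = rng.normal(size=(2, 1000))
y = 2.0 * x1 + 0.5 * x2 + 0.1 * rng.normal(size=1000)

r2_full = r_squared(np.column_stack([x1, x2]), y)  # close to 1
r2_x1 = r_squared(x1.reshape(-1, 1), y)            # smaller: x2 missing
print(r2_full, r2_x1)
```

Evaluating such per-group R² coefficients at every time step of the power output sample yields curves like those in Fig. 12; a drop in R² despite unchanged inputs signals that the linear model no longer explains the output variance, i.e. non-linear effects.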
While the 238U inelastic scattering cross section is the direct contributor to the uncertainty, the significant contribution of the 238U elastic scattering cross section can be explained by strong correlations between the inelastic and elastic scattering cross sections of 238U (Aures et al., 2017). The contributions from the 235U fission energy spectrum and the 238U capture cross section are minor and can therefore be considered unimportant.

4.2.2. Transient 2

The reactivity of the nominal calculation reaches a maximum of 0.967$, and the mean of the output sample reaches a maximum of 0.953$ with a relative uncertainty of 6.38% (Fig. 13). The maximum reactivities of both cases occur at 14.005 s. The reactivities of some realisations exceed the threshold of prompt criticality and therefore lead to pronounced power peaks. This results in an extremely asymmetrical distribution of the power curves at almost all time points of the entire time series. A simple calculation leads to a relative standard deviation of
Fig. 10. Distributions of the power output sample shown in Fig. 9 at the time points t1 = 12.005 s (top, yellow in Fig. 9) and t2 = 20.005 s (bottom, green in Fig. 9) for Transient 1. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
8
A. Aures et al. / Annals of Nuclear Energy 137 (2020) 107146
Fig. 11. Development of the control rod (CR) position and the Anderson–Darling test statistics of both the power output sample and the reactivity output sample, all for Transient 1, compared to the critical threshold for power and reactivity (significance level: 5%, normal distribution).
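The per-time-step normality check underlying Fig. 11 can be reproduced with a standard Anderson–Darling implementation. In this hedged sketch, two synthetic samples stand in for the power outputs at a near-normal and a right-skewed time point; the distribution parameters are illustrative only:

```python
import numpy as np
from scipy.stats import anderson

rng = np.random.default_rng(1)
normal_sample = rng.normal(0.45, 0.04, size=1000)    # near-normal case
skewed_sample = rng.lognormal(-0.8, 0.5, size=1000)  # right-skewed case

def rejects_normality(x, level_index=2):
    """True if the Anderson-Darling statistic exceeds the critical
    value; index 2 of critical_values is the 5% significance level."""
    res = anderson(x, dist='norm')
    return bool(res.statistic > res.critical_values[level_index])

print(rejects_normality(normal_sample))
print(rejects_normality(skewed_sample))  # → True
```

Applying such a test at every computed time step, as done here, flags the time region after about 13 s where the 2σ-confidence interval loses its justification and the distribution-free WTL must take over.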
133.72% at the maximum of the sample mean of the power. The strong asymmetry of the distribution, together with the large standard deviation, renders uncertainty measures that rely on normally distributed data useless in this case. For this reason, the one-sided, upper (95%, 95%) WTL is considered instead of the standard deviation. The two-sided WTL is not evaluated for this transient, because maximal values are typically of interest in the context of transient safety analysis. The development over time of the nominal reactor power, as well as the mean, the median and the WTL of the power output sample, is shown in Fig. 14. The nominal power rises to a maximum of 15.99 MW at 14.26 s and then drops to 1.85 MW during the power reversal phase. The maximum of the sample mean is 38.36 MW and that of the median is 16.7 MW. The fact that the mean is greater than the median, especially at the time of the maximum, indicates that outliers in terms of the power are present among the output realisations. In contrast to Transient 1, the maximum of the sample mean occurs earlier than that of the nominal case, namely at 14.045 s. The light blue area, with a lower limit of 0.0 MW and an upper limit given by the WTL, indicates the value range that covers 95% of the output realisations with a confidence level of 95%. The maximum of the tolerance limit is 189.20 MW at 14.015 s. The central plot of Fig. 15 again illustrates the course of the WTL within the variation of all output realisations (light grey) of the sample for the time interval 13.6 s – 14.6 s. Additionally, the distributions of the power maxima and of their respective occurrence times are shown via the histograms. The lower power region is more pronounced, indicating that many output realisations reach only a moderate maximal power. The distribution of the occurrence times of the power maxima is bimodal, with one mode occurring between the maximum of the nominal case and the maximum of the sample mean, and a second mode occurring around 14.6 s.

Fig. 12. Development of R² coefficients over time with confidence intervals and significance bounds of the major contributors to the power uncertainty for Transient 1.

Fig. 13. Nominal reactivity and sample mean reactivity with 2σ-confidence interval for Transient 2.

Fig. 14. Nominal power, and mean, median and one-sided, upper (95%, 95%) WTL of the power output sample for Transient 2.

By performing the sensitivity analysis, we find that the power uncertainty is sensitive to the same set of nuclide reactions as in Transient 1 (Fig. 16). The development over time of the R² coefficients for the 238U scattering cross sections, the 238U capture cross section, the 235U delayed neutron multiplicity, and the 235U fission energy spectrum is almost the same, with one exception: in contrast to Transient 1, the R² coefficients encounter strong dips in the time interval 13 s – 14 s, particularly for the 238U scattering cross sections, which are the dominant contributors during this phase. The occurrence of these dips in the trends of the R² coefficients of individual reactions again indicates that the power uncertainty is substantially affected by non-linear effects during this phase. This interpretation could be validated by unravelling such dips also in the trend of the total R² (cf. Section 2.4). However, in this analysis
the total R² cannot be evaluated because of the large number of input parameters and the limited sample size of 1,000. To overcome this limitation, we estimate the total R² on the basis of the reactions of only the most contributing nuclides 235U and 238U (Fig. 17). While during the time interval 13 s – 14 s the estimated total R² of Transient 1 decreases from 0.98 to 0.95, the estimated total R² of Transient 2 shows a strong dip with a minimum value of 0.71. This pattern may be explained by the presence of strong non-linear effects. Note that the estimated total R² coefficients of both transients are below 1 for the period examined as a consequence of the limited number of input parameter groups considered for this estimation.

5. Conclusion
Fig. 15. Transient 2: power curves (light grey) of the entire sample; mean, median and one-sided, upper (95%, 95%) WTL of the power output sample; histograms showing the power maxima and their respective occurrence times; power of the nominal case.
Fig. 16. Development of R² coefficients over time with confidence intervals and significance bounds of the major contributors to the power uncertainty for Transient 2.
Control rod withdrawal transients with nominal reactivity insertions of 0.5$ and 0.97$, respectively, for a PWR mini-core model were investigated in the presence of uncertainties of the nuclear cross sections, of the average numbers of both prompt and delayed neutrons per fission, and of the fission spectra. For this purpose, we varied the nuclear data with the random sampling-based method XSUSA and performed the neutron-kinetic/thermal-hydraulic calculations with DYN3D-ATHLET. The impact of the nuclear data uncertainties on the development over time of both the reactivity uncertainty and the reactor power uncertainty was analysed and discussed. As a major finding of our uncertainty analysis, we observed that the assumption of normally distributed output values was not valid for the entire problem time, in particular for the 0.97$ transient. In terms of the reactivity, the assumption was violated only at the end of the power reversal phase, and we therefore considered it appropriate to determine the relative uncertainty based on the normality assumption. The relative uncertainties of the reactivities of both transients are in the range of 6% – 9% at the time when the sample means reach their respective maxima. In contrast, the assumption of normality for the output values of the reactor power was violated already in the early phase of both transients due to the strong skewness of the output sample distribution. While the majority of the output realisations yielded moderate power values, some outliers with large power values occurred, particularly in the 0.97$ transient. For the purpose of a statistical analysis of such complex time series, i.e. time series with heterogeneous distributional patterns across time, we applied the distribution-free Wilks tolerance limits (WTL).
Given a fixed sample size of simulations, the one-sided and the two-sided WTL were calculated and used to determine, for each computed time step, the tolerance interval that covers the demanded proportion of output values at a certain confidence level. The major finding of our sensitivity analysis is the identification of the nuclide reactions that contribute most to the reactor power uncertainty over time. For both transients, the uncertainty was dominated by the 235U delayed neutron multiplicity in the initial phase, while at the peak time of the reactor power and beyond, the inelastic and elastic scattering cross sections of 238U dominated. In the case of the 0.97$ transient, a strong dip occurred in the squared multiple correlation coefficients R², especially of the 238U scattering cross sections, around the peak time of the reactor power. We interpret this pattern as an indication of a strong non-linear effect around the peak time that substantially affects the reactor power uncertainty.

Acknowledgment
Fig. 17. Estimated total R² coefficients for the power uncertainty of both Transient 1 and Transient 2 when only the most contributing nuclides 235U and 238U are considered.
This work was supported by the German Federal Ministry for Economic Affairs and Energy. The authors are grateful to Bernard
Krzykacz-Hausmann for fruitful discussions on the uncertainty and sensitivity analysis methods.

References

Bostelmann, F., Krzykacz-Hausmann, B., Aures, A., Zwermann, W., Velkov, K., 2018. Sensitivity indices for nuclear data uncertainty analysis with XSUSA and TSUNAMI. In: Proceedings of ANS Best Estimate Plus Uncertainty International Conference (BEPU2018), May 13–19, Lucca, Italy.

Ivanov, K., Avramova, M., Kamerow, S., Kodeli, I., Sartori, E., Ivanov, E., Cabellos, O., 2016. Benchmark for Uncertainty Analysis in Modelling (UAM) for Design, Operation and Safety Analysis of LWRs. Volume I: Specification and Support Data for the Neutronics Cases (Phase I), Version 2.1, NEA/NSC/DOC(2012)10.

Aures, A., Bernnat, W., Bostelmann, F., Bousquet, J., Krzykacz-Hausmann, B., Pautz, A., Velkov, K., Zwermann, W., 2018. Reactor simulations with nuclear data uncertainties. In: Proceedings of ANS Best Estimate Plus Uncertainty International Conference (BEPU2018), May 13–19, Lucca, Italy.

Dokhane, A., Grandi, G., Vasiliev, A., Rochman, D., Ferroukhi, H., 2018. Validation of PSI best estimate plus uncertainty methodology against SPERT-III reactivity initiated accident experiments. Ann. Nucl. Energy 118, 178–184. https://doi.org/10.1016/j.anucene.2018.04.022.

Hou, J., Blyth, T., Porter, N., Avramova, M., Ivanov, K., Royer, E., Sartori, E., Cabellos, O., Ferroukhi, H., Ivanov, E., 2014. Benchmark for Uncertainty Analysis in Modelling (UAM) for Design, Operation and Safety Analysis of LWRs – Volume II: Specification and Support Data for the Core Cases (Phase II). Version 1.9, NEA/NSC/DOC(2014).

Zwermann, W., Krzykacz-Hausmann, B., Gallner, L., Pautz, A., 2009. Influence of nuclear covariance data on reactor core calculations. In: Proceedings of 2nd International Workshop on Nuclear Data Evaluation for Reactor Applications, September 29 – October 2, France.
Oak Ridge National Laboratory, Oak Ridge, TN, USA, 2011. SCALE: A Comprehensive Modeling and Simulation Suite for Nuclear Safety Analysis and Design, Version 6.1. Available from Radiation Safety Information Computational Center at Oak Ridge National Laboratory as CCC-785.

Rohde, U., Kliem, S., Grundmann, U., Baier, S., Bilodid, Y., Duerigen, S., Fridman, E., Gommlich, A., Grahn, A., Holt, L., Kozmenkov, Y., Mittag, S., 2016. The reactor dynamics code DYN3D – models, validation and applications. Prog. Nucl. Energy 89, 170–190. https://doi.org/10.1016/j.pnucene.2016.02.013.

Lerchl, G., Austregesilo, H., Schöffel, P., von der Cron, D., Weyermann, F., 2016. ATHLET 3.1A User's Manual. Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) gGmbH, Germany, GRS-P-1/Vol. 1, Rev. 7.

Zilly, M., Périn, Y., 2018. KMACS Validation Report, GRS-P-8/Vol. 2, Rev. 0.
Bostelmann, F., Weiss, F.-P., Aures, A., Velkov, K., Zwermann, W., Rearden, B.T., Jessee, M.A., Williams, M.L., Wiarda, D., Wieselquist, W.A., 2015. Uncertainty and sensitivity analysis in criticality calculations with perturbation theory and sampling. In: Proceedings of ANS M&C 2015, April 19–23, Nashville, TN, USA.

Shibata, K., Iwamoto, O., Nakagawa, T., Iwamoto, N., Ichihara, A., Kunieda, S., Chiba, S., Furutaka, K., Otuka, N., Ohsawa, T., Murata, T., Matsunobu, H., Zukeran, A., Kamada, S., Katakura, J., 2011. JENDL-4.0: a new library for nuclear science and engineering. J. Nucl. Sci. Technol. 48 (1), 1–30. https://doi.org/10.1080/18811248.2011.9711675.

Aures, A., Bostelmann, F., Kodeli, I., Velkov, K., Zwermann, W., 2017. Uncertainty in the delayed neutron fraction in fuel assembly depletion calculations. In: Proceedings of ND2016: International Conference on Nuclear Data for Science and Technology, Bruges, Belgium, September 11–16. EPJ Web of Conferences, vol. 146.

Knebel, M., Mercatali, L., Sanchez, V., Stieglitz, R., Macian-Juan, R., 2016. Validation of the Serpent 2-DYNSUB code sequence using the Special Power Excursion Reactor Test III (SPERT III). Ann. Nucl. Energy 91, 79–91. https://doi.org/10.1016/j.anucene.2016.01.005.

Patel, J.K., 1986. Tolerance limits – a review. Commun. Stat. – Theory Methods 15 (9), 2719–2762. https://doi.org/10.1080/03610928608829278.

Krishnamoorthy, K., Mathew, T., 2009. Statistical Tolerance Regions: Theory, Applications, and Computation, vol. 744. John Wiley & Sons.

Porter, N., 2019. Wilks' formula applied to computational tools: a practical discussion and verification. Ann. Nucl. Energy 133, 129–137.

Wald, A., Wolfowitz, J., 1946. Tolerance limits for a normal distribution. Ann. Math. Stat. (2), 208–215. https://doi.org/10.1214/aoms/1177730981.

Howe, W.G., 1969. Two-sided tolerance limits for normal populations – some improvements. J. Am. Stat. Assoc. 64 (326), 610–620. https://doi.org/10.1080/01621459.1969.10500999.
Rebafka, T., Clémençon, S., Feinberg, M., 2007. Bootstrap-based tolerance intervals for application to method validation. Chemom. Intell. Lab. Syst. 89 (2), 69–81. https://doi.org/10.1016/j.chemolab.2007.06.001.

Wilks, S.S., 1942. Statistical prediction with special reference to the problem of tolerance limits. Ann. Math. Statist. (4), 400–409. https://doi.org/10.1214/aoms/1177731537.

Wald, A., 1943. An extension of Wilks' method for setting tolerance limits. Ann. Math. Statist. (1), 45–55. https://doi.org/10.1214/aoms/1177731491.

Glaeser, H., Graf, U., Herb, J., Krzykacz-Hausmann, B., Lerchl, G., Papadimitriou, P., Papukchiev, A., Ringer, F., Scheuerer, M., Schöffel, P., Skorek, T., von der Cron, D., Weyermann, F., 2012. Thermohydraulische Rechenmethoden zu Transienten und Störfällen im Reaktorkühlkreislauf unter besonderer Berücksichtigung mehrdimensionaler Strömungen (ATHLET, FLUBOX, CFX), GRS-A-3644.