Nuclear Data Sheets 109 (2008) 2785–2790 www.elsevier.com/locate/nds
Uncertainty Quantification, Sensitivity Analysis, and Data Assimilation for Nuclear Systems Simulation

H. Abdel-Khalik,∗ P. Turinsky, M. Jessee, J. Elkins, T. Stover, and M. Iqbal
North Carolina State University, Campus Box 7909, Raleigh, NC 27695-7909, USA

(Received July 28, 2008)

Reliable evaluation of nuclear data will play a major role in the reduction of nuclear systems simulation uncertainties via the use of advanced sensitivity analysis (SA), uncertainty quantification (UQ), and data assimilation (DA) methodologies. This follows since nuclear data have proven to constitute a major source of neutronics uncertainties. This paper overviews the use of the Efficient Subspace Method (ESM), developed at NCSU, to overcome one of the main deficiencies of existing methodologies for SA/UQ/DA, namely the ability to handle codes with large input and output (I/O) data streams, where neither the forward nor the adjoint approach alone is appropriate. We demonstrate the functionality of ESM for an LWR core, a boiling water reactor, and a fast reactor benchmark experiment, the ZPR6/7A assembly. This work demonstrates the capability of adjusting cross section data, thereby providing guidance to cross section evaluation efforts by identifying key cross sections and associated energy ranges that contribute most to the propagated core attribute uncertainties.

∗ Electronic address: [email protected]

I. INTRODUCTION
This paper represents an ongoing effort to establish an integrated framework to perform sensitivity analysis, uncertainty quantification, and data assimilation on a routine basis with best-estimate calculations for nuclear models with voluminous input and output data streams. The importance of performing these analyses on a routine basis has recently been emphasized by the nuclear community in response to the projected change of course in the design and evaluation strategies of complex nuclear systems, that is, shifting from heavy reliance on expensive experimental validation to highly accurate computer simulation. An effective simulation-based strategy for design and evaluation is expected to play a leading role in guiding the overall system design as well as optimizing the setup of the validating benchmark experiments. In order to optimally exploit the potential of a simulation-based strategy, the simulation must be able to perform not only best-estimate calculations, but also routine execution of sensitivity analysis, uncertainty quantification, and data assimilation. Knowledge of the sensitivities and uncertainties of macroscopic system attributes, originating from uncertainties in input model parameters, e.g., physical constants such as interaction cross sections, material thermal and mechanical properties, etc., and design data such as geometry, composition, and control parameters, can be used to quantify appropriate design margins that ensure a robust and safe design within the context of a probabilistic basis. Moreover, knowledge of the sensitivities of attributes whose uncertainties require costly design margins can be used to reduce these sensitivities via design modifications. Finally, data assimilation can be used to target the reduction of those model parameter uncertainties which lead to costly design margins. Given the scope of this workshop, this paper will focus only on uncertainties originating from basic nuclear data, e.g., neutron cross sections, resonance parameters, etc. Two nuclear reactor core models, one for a thermal reactor and another for a fast reactor, will be utilized to exemplify the applicability of the introduced framework. It is worth mentioning here that, despite the importance of quantifying uncertainties in key reactor performance metrics, e.g., thermal margins, reactivity coefficients, reaction rates, etc., originating from basic nuclear data, nuclear simulation codes have always lacked an integrated framework for the management of uncertainties. This is primarily due to: a) the complex nature of nuclear models, which are based on a multi-level modeling strategy where a number of models are linked together in a sequential or circular manner to account for the wide range of physical phenomena involved (e.g., neutron transport, thermal hydraulics, fuel performance), the large variations in energy and length scales, and the various forms of feedback mechanisms; b) the individual simulation codes requiring long execution times and being associated with voluminous I/O data streams; and c) the recent advances in uncertainty quantification algorithms have
been primarily demonstrated for modern software platforms; it is, however, hard to incorporate these advances into some of the legacy codes used extensively in the design process. Due to the above challenges, uncertainty quantification has been rendered computationally intractable for nuclear systems simulation. The need for an integrated framework for performing UQ/SA/DA has been emphasized by the nuclear engineering community via a series of workshops held in the past several years and attended by various sectors, including government, academia, and industry [1-4]. These workshops have asserted that the targeted predictive capabilities will utilize a simulation-based approach to the verification and experimental validation of simulation results. This approach will allow designers to utilize appropriate design margins, thus reducing unnecessary design conservatism; make rational decisions about the cost-benefit of reducing uncertainties; make simulation-assisted engineering decisions such as scoping-type design activities; optimize the design of the experimental program required for model validation; select the best design and operational scenarios to maximize nuclear systems performance; and generate regulatory-type information, where the ability to accurately and convincingly predict system behavior is of utmost importance. Such capabilities will be beneficial for improving the economics of existing as well as future reactors. This paper introduces a novel framework based on the Efficient Subspace Method (ESM) for the management of uncertainties for a general multi-level nuclear model [5]. The main advantage of ESM is that it represents an evolutionary hybrid approach that combines the strengths and addresses the weaknesses of existing UQ/SA/DA techniques. ESM achieves this by extracting minimal information from the individual simulations to quantify the variability of the system's macroscopic attributes. The information is transferred between the different simulations along subspaces with dimensions much smaller than the original I/O data streams. The framework places a premium on minimizing the computational overhead required to perform these analyses in order to meet the increased demand from the nuclear community to provide uncertainties on a routine basis with the best-estimate predictions generated in support of design, analysis, safety, and regulatory activities. In addition, it allows for the integration of modern nuclear simulation codes, currently being developed at various national and international institutions, as well as legacy codes, which are expected to persist as integral parts of nuclear design calculations. The result is an integrated framework that can take the problem of uncertainty management away from being computationally expensive and toward being genuinely effective, where 'effectiveness' denotes higher-fidelity simulation, fast execution times, and routine application during design and operation.
II. MATHEMATICAL DESCRIPTION
A reactor core model is employed here to exemplify the applicability of the introduced framework. Reactor calculations involve the determination of the ensemble-averaged neutron density as a function of spatial position, angular direction, energy, and time in a reactor core. Mathematically, this is well described by the Boltzmann transport equation, whose coefficients are the various experimentally evaluated neutron interaction cross sections. Obtaining a numerical solution for the entire reactor core in practical time, however, is not expected to be possible in the near future despite the startling growth in computer power and the emergence of high-performance parallel computing environments. This is because, in addition to the seven-dimensional phase space, the cross sections have strong, discontinuous behavior in space on a scale much smaller than the reactor core dimensions due to material heterogeneities, and extreme variations with energy due to resonance phenomena associated with compound nucleus formation. Currently, these difficulties are addressed via a two-level approach: a micro-level model and a macro-level model. The micro-level model, also referred to as lattice physics, determines the neutron density as a function of the seven-dimensional phase space over a 2-D radial slice of a nuclear reactor fuel assembly based on assumed boundary conditions (the 2-D slice is often referred to as a lattice, which represents the fundamental building block of a nuclear reactor core and, in our later discussion, the sub-region for the micro-level model). The lattice physics calculation often comprises more than one length and energy scale, e.g., 1-D spatial ultra-fine-group neutron transport calculations for each fuel pin in the lattice, 2-D spatial multi-group neutron transport calculations for each fuel lattice, and time-dependent nuclide transmutation calculations along different regions of each fuel pin. This series of calculations is repeated over a series of discrete time steps to model the time-dependent neutron transport and nuclide transmutation in a nuclear fuel lattice. From these calculations, effective few-group cross sections (representing the macro-level parameters, i.e., few energy groups and homogenized materials), appropriately averaged over angular direction, energy, and space, are determined so as to preserve neutron reaction rates, and correction factors, e.g., discontinuity factors, are introduced to preserve neutron leakage. The few-group cross sections are then input to a macro-level model, denoted the core simulator. There are approximately $10^6$ few-group cross sections that are fed to the core simulator model as input data. The core simulator uses the few-group cross sections to calculate approximately $10^5$ macroscopic attributes, including the neutron multiplication factor, safety margins (i.e., distance to design limits), and core-wide power distributions. The main sources of data errors in this approach are the cross sections, which are either experimentally measured or calculated from pre-processor models. In either
case, they contain uncertainties reflecting the incomplete knowledge about their true values. Given that it is important to assess the range of validity of the simulation, the changes in the system's macroscopic attributes should be quantified as the cross sections vary within their uncertainty ranges; this analysis is referred to as 'uncertainty quantification' (UQ).¹ To complete UQ, one must be able to calculate the changes in macroscopic attributes due to variations in cross section values; this is referred to as 'sensitivity analysis' (SA). Finally, making experimental observations of the system's performance and contrasting these observations with model predictions can provide a basis for improving the accuracy of the simulation by adjusting the cross sections; this process is referred to as 'data assimilation' (DA), also sometimes called 'inverse theory' or 'adaptive simulation'. Mathematically, the above concepts can be described as follows. Let H denote the entire reactor core region,
and $T$ and $\zeta$ vector functions defined on $H$. Modeling the reactor core reduces to finding the neutron density $u$ such that $T(\sigma, u) = \zeta(\sigma)$ on $H$, where $\sigma = \sigma(v)$ describes micro-level model parameters, e.g., basic nuclear data, as a function of domain variables $v$, e.g., position, energy, etc. As mentioned earlier, solving this equation directly is not computationally feasible; therefore, one resorts to a two-level strategy, in which the micro-level model is solved only over a sub-region (e.g., a fuel pin or a fuel assembly). Denoted mathematically, let $T_f^{(i)}$ and $\zeta_f^{(i)}$ be functions (contractions of the functions defined above) defined on $H_i$; the solution $u_f^{(i)}$ for the $i$th region solves the following equation:

$T_f^{(i)}\big(\sigma_f^{(i)}, u_f^{(i)}\big) = \zeta_f^{(i)}\big(\sigma_f^{(i)}\big)$ on $H_i$.    (1)

Next, the micro-level solution is integrated over the sub-regions to calculate the macro-level parameters (few-group cross sections)

$\sigma_c^{(i)} = \Pi_c^{(i)}\big(\sigma_f^{(i)}, u_f^{(i)}\big),$    (2)

where $\Pi_c^{(i)}$ is a coarsening operator for the $i$th sub-region. The macro-level solution for the entire problem domain solves the following equation:

$T_c(\sigma_c, u_c) = \zeta_c(\sigma_c),$    (3)

where $\sigma_c$ denotes a vector of the macro-level parameters for all sub-regions (few-group cross sections for all different fuel lattices). The macro-level solution $u_c$ of Eq. (3) is used to evaluate the macroscopic system attributes (e.g., power density, accumulated exposure, thermal margins, reactivity coefficients, etc.) according to

$\psi_c = \Gamma_c(\sigma_c, u_c).$    (4)

UQ involves characterizing uncertainties in the macroscopic system attributes due to uncertainties of the micro-level parameters, which are often represented by a covariance matrix $C_\sigma \in \mathbb{R}^{n\times n}$, where $n$ is the number of micro-level input parameters. The covariance matrix is symmetric, its diagonal elements measure the uncertainties of the parameter reference values, and the off-diagonal elements characterize the degree of correlation between these uncertainties. The propagated attribute uncertainties are also described by a covariance matrix, given by [6]

$C_\psi = \Theta C_\sigma \Theta^T \in \mathbb{R}^{m\times m},$    (5)

where $m$ is the number of attributes, and $\Theta$ is a sensitivity matrix that characterizes the first-order derivatives of the attributes with respect to the micro-level parameters, and can be shown to be symbolically given by

$\Theta = \left[\dfrac{\partial \Gamma_c}{\partial u_c}\left(\dfrac{\partial T_c}{\partial u_c}\right)^{-1}\left(\dfrac{\partial \zeta_c}{\partial \sigma_c} - \dfrac{\partial T_c}{\partial \sigma_c}\right) + \dfrac{\partial \Gamma_c}{\partial \sigma_c}\right] \times \left[\dfrac{\partial \Pi_c}{\partial u_f}\left(\dfrac{\partial T_f}{\partial u_f}\right)^{-1}\left(\dfrac{\partial \zeta_f}{\partial \sigma_f} - \dfrac{\partial T_f}{\partial \sigma_f}\right) + \dfrac{\partial \Pi_c}{\partial \sigma_f}\right].$    (6)
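As a purely illustrative complement to Eqs. (5) and (6), the following NumPy sketch propagates a parameter covariance matrix through a synthetic low-rank sensitivity matrix. The dimensions, singular value spectrum, and covariances are placeholders chosen for the example, not values taken from the reactor models discussed later.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, r = 2000, 500, 20        # inputs, attributes, effective rank (synthetic sizes)

# Synthetic rank-r sensitivity matrix Theta = U S V^T, standing in for Eq. (6)
U, _ = np.linalg.qr(rng.standard_normal((m, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
S = np.diag(np.logspace(0, -6, r))                 # rapidly decaying singular values
Theta = U @ S @ V.T

# Synthetic parameter covariance C_sigma (diagonal here for simplicity)
C_sigma = np.diag(rng.uniform(0.01, 0.05, n) ** 2)

# Eq. (5): propagated attribute covariance, full "sandwich" product
C_psi = Theta @ C_sigma @ Theta.T

# The same result using only the rank-r factors, which is what keeps the
# propagation cheap when n and m are large
C_psi_lowrank = U @ (S @ (V.T @ C_sigma @ V) @ S) @ U.T

print(np.allclose(C_psi, C_psi_lowrank))
print(np.sqrt(np.diag(C_psi))[:5])                 # one-sigma attribute uncertainties
```

Exploiting the factored form of the sensitivity matrix is what makes this propagation affordable when the numbers of parameters and attributes reach the sizes quoted above.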
Note that the derivative operators in Eq. (6) are typically dense, unavailable a priori, and computationally infeasible to calculate using conventional sensitivity analysis techniques, because their large dimensions necessitate large computer resources, including execution time, memory, and storage. DA is concerned with the adjustment of micro-level parameters within their prior uncertainty limits in order to enhance the agreement between model predictions and measurements for key macroscopic system attributes.
¹ Input parameters do not constitute the only source of simulation uncertainty; other sources include errors due to the numerical approximations introduced to manipulate the continuous system's equations on discrete digital computers, modeling errors originating from incomplete understanding of the physical phenomena governing system behavior, and the lack of knowledge of the boundary conditions used for the micro-level models. These sources of error will not be discussed here since the focus of this workshop is on neutron cross section uncertainties only.
Let the measured macroscopic attributes (e.g., reaction rates) be given by a vector $\psi_m$, and the associated measurement uncertainties by a matrix $C_{\psi_m}$. Data assimilation can be mathematically represented by the following minimization problem [20]:

$\min_{\Delta\sigma_f} \left\|\psi_m - \psi_c^0 - \Theta\,\Delta\sigma_f\right\|_{C_{\psi_m}}^2 + \lambda_\sigma^2 \left\|\Delta\sigma_f\right\|_{C_\sigma}^2,$
where the first term, referred to as the misfit term, is minimized by adjusting the micro-level parameters, and the second term is a Tikhonov regularization term weighted by the parameter covariance matrix [7, 8]. Regularization confines the adjustments to only those model parameters with both strong sensitivities and high uncertainties: parameters with low uncertainties are already accurately known and hence need no adjustment, and parameters with weak sensitivities have negligible impact on macroscopic attributes and hence need not be adjusted. Further, regularization ensures that only measurements with low uncertainties are utilized for the adjustments. This is important to ensure that measurement noise, described by the covariance matrix $C_{\psi_m}$ weighting the misfit term, does not affect the quality of the adjusted parameters. The regularization parameter $\lambda_\sigma$ is selected by means of an L-curve to achieve the above goals [9]. Although the details of this process are rather involved, we present a brief description here for completeness and encourage the reader to consult a previous publication for more details [10]. One can show that the adjusted parameters may be given by
$\Delta\sigma_f = \sum_{i=1}^{n} f_i \dfrac{\hat{\psi}_i}{s_i}\, w_i,$

where $\{s_i\}$ are the singular values of a triple matrix product involving the sensitivity matrix, the parameter covariance matrix, and the inverse of the measurements covariance matrix, and $\hat{\psi}_i$ is the $i$th component of the discrepancy vector $\hat{\psi} = \psi_m - \psi_c^0$ projected along a unique subspace that is defined by the above three matrices and is spanned by the vectors $\{w_i\}$. Qualitatively, a vector $w_i$ associated with a large singular value $s_i$ implies a direction in the micro-level parameter space that has strong sensitivity and high uncertainty, and can be related to a component in the measurement space with low uncertainty. The filter term $f_i$ ensures that only directions associated with relatively large singular values are utilized for the adjustment. A Tikhonov filter factor takes the form
$f_i = \dfrac{s_i^2}{s_i^2 + \lambda_\sigma^2}.$

Note that this factor goes to zero for $s_i \ll \lambda_\sigma$ and approaches 1.0 for $s_i \gg \lambda_\sigma$.
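The filtered-SVD structure of the adjustment can be illustrated with a small synthetic problem. The sketch below is a minimal example that assumes identity weighting in place of the covariance-weighted norms, so it is a schematic of the filter factors rather than the full generalized formulation of [10]; all dimensions and data are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

n, m = 400, 60                                    # parameters, measured attributes (synthetic)
Theta = rng.standard_normal((m, n)) @ np.diag(np.logspace(0, -4, n))  # ill-conditioned sensitivity

sigma_true = 0.02 * rng.standard_normal(n)        # "true" parameter perturbations
psi_m = Theta @ sigma_true + 1e-4 * rng.standard_normal(m)   # noisy virtual measurements
psi_c0 = np.zeros(m)                              # reference (unadjusted) prediction

# SVD of the sensitivity matrix; with identity weights this plays the role of the
# triple-product decomposition discussed in the text
U, s, Vt = np.linalg.svd(Theta, full_matrices=False)

lam = 1e-2                                        # regularization parameter, e.g., from an L-curve
f = s**2 / (s**2 + lam**2)                        # Tikhonov filter factors f_i
psi_hat = U.T @ (psi_m - psi_c0)                  # discrepancy projected on the singular directions

delta_sigma = Vt.T @ (f * psi_hat / s)            # sum_i f_i * (psi_hat_i / s_i) * w_i
residual = np.linalg.norm(Theta @ delta_sigma - (psi_m - psi_c0))
print(f"misfit after adjustment: {residual:.3e}")
```

Directions whose singular values fall well below the regularization parameter are filtered out, so noisy components of the discrepancy do not contaminate the adjusted parameters.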
III. EFFICIENT SUBSPACE METHODS
Central to all of the previous analyses is the need for a powerful SA capability that can calculate the derivatives for models with large I/O data streams. There are two general and well-established approaches, each with its own strengths and limitations [6]. The forward approach is appropriate when the number of input data is small and the number of output responses is large. This is because it involves random or systematic perturbations of the input data and repeated executions of the computational model, where the number of executions is equal to the number of input data. In each forward run, one calculates the derivatives of all output responses with respect to one input, or one linear combination of inputs. Alternatively, the adjoint approach is appropriate only when the number of output responses is small and the number of input data is large. Here, one calculates in a single adjoint model run the derivatives of one output response with respect to all inputs. The adjoint approach requires nontrivial model modifications to allow for the evaluation of the adjoint system, which is particularly difficult for models involving linked computational modules. The I/O data streams associated with a nuclear reactor core model are very large, often numbering in the millions, and therefore neither the forward nor the adjoint approach alone is appropriate for routine uncertainty quantification. To minimize the computational overhead, ESM computes the required sensitivity matrices by recognizing that the effective ranks of the parameter covariance matrix and the sensitivity matrices are many times smaller than the dimensions of these matrices. The implication is that much smaller matrices, obtained via rank-revealing matrix decompositions, i.e., $\Theta = U S V^T$, can be utilized to accurately represent the action of these matrices [11]. The columns of $U \in \mathbb{R}^{m\times r}$ and $V \in \mathbb{R}^{n\times r}$, where $r$ is the rank of the sensitivity matrix, span subspaces of size $r$, which is much smaller than the output stream of $m$ data and the input stream of $n$ data, respectively; $S \in \mathbb{R}^{r\times r}$ is a full-rank square matrix, and if $S$ is selected to be diagonal, its entries are the singular values. The unique aspect of ESM is that the decompositions are achieved without having to first evaluate the full sensitivity matrix being decomposed. This is very important for the sensitivity matrices, since they are intractable to calculate. Like the forward method, ESM involves perturbations of model data, implying that it can be implemented non-intrusively using existing codes. Unlike the forward method, ESM requires a number of perturbations equal to the effective rank $r$ rather than to the dimensions of the sensitivity matrix. Thus, ESM effectively addresses the weaknesses of both the forward and adjoint approaches.
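The following sketch illustrates the subspace idea in its simplest non-intrusive form: when the sensitivity matrix has a small effective rank, a handful of randomized forward perturbations recovers a basis for its range. The black-box `model` and all dimensions are synthetic stand-ins, not any of the codes referenced in this paper, and the finite-difference probing shown here is only one possible way to realize the matrix-free products that a subspace method relies on.

```python
import numpy as np

rng = np.random.default_rng(2)

n, m, true_rank = 1000, 300, 15
A = rng.standard_normal((m, true_rank)) @ rng.standard_normal((true_rank, n))  # hidden low-rank Jacobian

def model(x):
    """Black-box forward model; linear here, so its Jacobian is exactly A."""
    return A @ x

x0 = rng.standard_normal(n)                       # reference input (e.g., nominal parameters)
y0 = model(x0)
eps = 1e-6

# Probe the Jacobian with k random input perturbations, k only slightly above the rank
k = 25
Omega = rng.standard_normal((n, k))
Y = np.column_stack([(model(x0 + eps * Omega[:, j]) - y0) / eps for j in range(k)])

# Orthonormal basis for the range of the sensitivity matrix (the "active" output subspace)
Q, _ = np.linalg.qr(Y)

# Accuracy of the rank-k subspace; in a real application the second factor Q^T A would
# itself be obtained from additional model runs rather than from A directly
err = np.linalg.norm(A - Q @ (Q.T @ A)) / np.linalg.norm(A)
print(f"relative error of the rank-{k} approximation: {err:.2e}")
print(f"forward model runs used: {k + 1} (versus {n} for a full forward sweep)")
```

The number of model executions scales with the effective rank rather than with the size of the input data stream, which is the property exploited throughout the examples that follow.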
IV. DEMONSTRATIVE EXAMPLES
As explained in the previous section, any uncertainties in cross sections due to the measurement process will propagate through the micro-level calculations and eventually to the attributes calculated at the macro-level. At both the micro and macro-levels, the cross sections typically number in the millions to tens of millions, and the macroscopic attributes number in the hundreds of thousands to millions.
[FIG. 1: Priori and posteriori nodal power uncertainties (RMS of nodal power SD, ×10⁻³, vs. burnup in GWD/MTU; prior and posterior curves).]

[FIG. 2: Priori and posteriori core reactivity uncertainties (SD in ρ, pcm, vs. burnup in GWD/MTU; prior and posterior curves).]

[FIG. 3: Priori and posteriori FLPD uncertainties (SD in MFLPD, ×10⁻³, vs. burnup in GWD/MTU; prior and posterior curves).]

[FIG. 4: Data streams singular values spectra (relative singular value vs. singular value index for the ENDF, MG, MG2, FG, and CA data streams).]
The first study [12] employed the FORMOSA-B [13] core simulator to model a GE BWR/4 core design with 560 fuel assemblies. The basic nuclear data uncertainties from ENDF/B-VI [14] were propagated through PUFF [15], a multi-group covariance processing code, and TRITON [16], a lattice physics code. To perform data assimilation, in-core instrumentation readings, which are collected routinely during reactor operation, can be used as a basis for adjustment. In this work, however, a virtual data approach has been employed, in which measurements are simulated in the form of perturbed nodal power distributions, with the size of the perturbations selected to be representative of the actual discrepancies between real plant data and core simulator predictions. Representative Gaussian noise was also added to the virtual signal. Cross sections, the micro-level parameters, were adjusted at the multi-group level, i.e., at the input to TRITON. Figs. 1-3 plot the nodal power, core reactivity, and fraction of limiting power density (FLPD) margin uncertainties before and after the data assimilation step is completed, as predicted by FORMOSA-B. These figures show the significant reduction in core
attribute uncertainties that can be achieved by cross section adjustments. This work estimated the effective rank of the overall sensitivity matrix in Eq. (6), representing the various stages of calculations, to be on the order of $10^2$, implying that only about $10^2$ model executions are sufficient to predict core attribute reference values, along with their uncertainties and their optimum adjustments. Fig. 4 shows the decline in the singular values at the various data streams, i.e., the ENDF level, the multi-group level, the few-group level, and finally the core attributes. The second study employed the REBUS code [17] to analyze a model of the ZPR6-7/A assembly [18]. The nuclear data uncertainties, available in a 15-group format [19], were propagated through the REBUS code to key core attributes [20]; e.g., the node-wise reaction rates for the various isotopes are plotted in Fig. 5 before and after the adaptation. The rank of the associated sensitivity matrix was found to be on the order of 50, while the sizes of the input and output data streams numbered in the $10^5$ and $10^4$ range, respectively [21].
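For illustration, the following sketch shows one simple way an effective rank of the kind quoted above could be read off a singular value spectrum such as those in Fig. 4; the spectrum and the tolerance used here are synthetic and purely for demonstration.

```python
import numpy as np

def effective_rank(singular_values, rel_tol=1e-6):
    """Count singular values above rel_tol times the largest one."""
    sv = np.asarray(singular_values, dtype=float)
    return int(np.sum(sv > rel_tol * sv[0]))

# Synthetic, rapidly decaying spectrum loosely mimicking the shape of Fig. 4
s = np.logspace(0, -12, 500)

r = effective_rank(s)
print(f"effective rank: {r}")   # roughly the number of model runs a subspace method would need
```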
[FIG. 5: Priori and posteriori nodal reaction rate uncertainties before and after data assimilation (difference, %, vs. node index).]

V. CONCLUSIONS

This work has contributed an evolutionary approach for the management of uncertainties for multi-level nuclear systems simulation, which often involves a chain of linked modules, each representing aspects of the physics modeled at different levels of detail and associated with voluminous input/output data streams. The method addresses the weaknesses of existing approaches by exploiting the rank reduction that is characteristic of most multi-level-based computational methods. The method presents a potential approach to meet the increased demand for routine estimation of uncertainties for best-estimate calculations.

Acknowledgments
This work has been supported by the former Electric Power Research Center of NCSU, a fellowship from the Naval Nuclear Propulsion Graduate Program, a U.S. DOE NERI Research Grant, and an LDRD from Idaho National Laboratory. We are grateful to all the people that helped us with this work, including the Nuclear Science and Technology Division of Oak Ridge National Laboratory, and the Nuclear Engineering Division of the Argonne National Laboratory.
[1] Workshop on "Simulation and Modeling for Advanced Nuclear Energy Systems," DOE Offices of Nuclear Energy and Advanced Scientific Computing Research, August 15-17, 2006.
[2] Workshop on "Assessment of Sensitivity/Uncertainty Analysis Capabilities Applicable for the Nuclear Fuel Cycle," North Carolina State University, May 2006.
[3] Workshop on "Management of Numerical Errors in Nuclear Systems Modeling," North Carolina State University, September 2007.
[4] DOE GNEP Program Workshop on "Nuclear Energy Advanced Modeling and Simulation (NEAMS)," Argonne National Laboratory, May 2008.
[5] H.S. Abdel-Khalik, P.J. Turinsky, and M.A. Jessee, "Efficient Subspace Methods-Based Algorithms for Performing Sensitivity, Uncertainty, and Adaptive Simulation of Large-Scale Computational Models," Nucl. Sci. Eng. 159, 256-272 (2008).
[6] D. Cacuci, Sensitivity and Uncertainty Analysis Theory, Vol. I, CRC Press, Boca Raton, 2003.
[7] J. Hadamard, Bull. Univ. Princeton 13, 49 (1902).
[8] A.N. Tikhonov, "Regularization of incorrectly posed problems," Soviet Mathematics Doklady 4, 1624 (1963).
[9] H.W. Engl and W. Grever, "Using the L-Curve for determining optimal regularization parameters," Numerische Mathematik 69, 25 (1994).
[10] M.A. Jessee, H.S. Abdel-Khalik, and P.J. Turinsky, "Evaluation of BWR Core Attributes Uncertainties Due to Multi-Group Cross Section Uncertainties," Joint Int. Meeting on Mathematics, Computation, and Supercomputing in Nuclear Applications, April 2007.
[11] G. Golub and C. Van Loan, Matrix Computations, 3rd ed., Johns Hopkins University Press, Baltimore, 1996.
[12] M.A. Jessee, "Cross Section Adjustment Techniques for BWR Adaptive Simulation," PhD Dissertation, North Carolina State University, 2008.
[13] B.R. Moore, P.J. Turinsky, and A.A. Karve, "FORMOSA-B: A Boiling Water Reactor In-core Fuel Management Optimization Package," Nuclear Technology 126, 153 (1999).
[14] "ENDF-102 Data Formats and Procedures for the Evaluated Nuclear Data File ENDF-6," BNL-NCS-44945, Rev. 06/05 (ENDF/B-VI), Brookhaven National Laboratory, June 2005.
[15] M.E. Dunn, "PUFF-III: A Code for Processing ENDF Uncertainty Data into Multigroup Covariance Matrices," NUREG/CR-6650 (ORNL/TM-1999/235), U.S. Nuclear Regulatory Commission, Oak Ridge National Laboratory, 2006.
[16] M.D. DeHart, "TRITON: A Two-Dimensional Depletion Sequence for Characterization of Spent Nuclear Fuel," ORNL/TM-2005/39, part of SCALE Version 5.1, November 2006.
[17] Argonne National Laboratory, "REBUS-3/VARIANT8.0: Code System for Analysis of Fast Reactor Fuel Cycles," CCC-653, REBUS-3/Variant 8.0, 2001.
[18] M.A. Smith et al., "ZPR-6 Assembly 7: A Cylindrical Assembly with Mixed (Pu,U)-Oxide Fuel and Sodium with a Thick Depleted-Uranium Reflector," Nuclear Energy Agency, Handbook of Evaluated Criticality Benchmark Experiments, 2007.
[19] D. Rochman et al., "Preliminary Cross Section and ν-bar Covariances for WPEC Subgroup 26," BNL-77407-2007-IR, Brookhaven National Laboratory, 2007.
[20] M. Iqbal, private communication.
[21] J.R. Elkins, "Efficient Uncertainty Quantification for Fast-Spectrum Generation IV Reactor Systems," M.Sc. Thesis, North Carolina State University, 2008.