Progress in Nuclear Energy 114 (2019) 227–233
Non-linear, time dependent target accuracy assessment algorithm for multiphysics, high dimensional nuclear reactor calculations
Bassam A. Khuwaileh*, Paul J. Turinsky
Department of Nuclear Engineering, North Carolina State University, United States
*Corresponding author. E-mail addresses: [email protected] (B.A. Khuwaileh), [email protected] (P.J. Turinsky).
ARTICLE INFO

Article history: Received 6 November 2017; Received in revised form 15 January 2019; Accepted 23 January 2019.

Keywords: Target accuracy assessment; Uncertainty reduction; Model calibration; Surrogate modeling

ABSTRACT
Safety analysis and design optimization depend on the accurate prediction of various reactor core responses. Model predictions can be enhanced by reducing the uncertainty associated with the responses of interest. Accordingly, an inverse problem analysis can be designed to provide guidance in determining the optimum experimental program to reduce the uncertainties in model parameters, e.g. cross-sections and fuel pellet-clad thermal conductivity, so as to reduce the uncertainties in constrained reactor core responses. This process is referred to as target accuracy assessment. In this work a non-linear algorithm to determine an optimum experimental program has been developed and tested. The algorithm is based on the construction of a surrogate model to replace the original model used to predict the core responses and uncertainties, thereby enabling the target accuracy assessment to treat non-linearity at reasonable computational cost. Subspace-based projection techniques are used to identify the influential degrees of freedom, which are then used to construct the surrogate model. Once constructed, the new computationally efficient surrogate model is used to propagate uncertainties via Monte Carlo sampling. Moreover, this work replaces the classical objective function used for nuclear data target accuracy assessment with one that factors in the financial gains of the target accuracy assessment results, and replaces (or can supplement) differential experiments with far more readily available integral experiments. Finally, the proposed algorithm is applied to a 3-dimensional fuel assembly depletion problem with thermal-hydraulics feedback using the VERA-CS core simulator. Specifically, CASL Progression Problem Number 6, which resembles a pressurized water reactor fuel assembly, is the illustrative problem employed.
1. Introduction

Nuclear reactor core design and safety analysis require rigorous calculation of several important reactor attributes, such as the core reactivity and the reactor power and fuel temperature distributions. These calculations require high fidelity modeling accompanied by uncertainty estimation capabilities. Uncertainties can be reduced to targeted values (e.g. values that satisfy specified safety limits) by completing model calibration utilizing data from appropriately designed experiments. However, with the emerging complexity of high fidelity nuclear reactor core simulators, both uncertainty analysis and model calibration are becoming hindered by their high computational demands. Therefore, this work introduces efficient non-linear techniques for target accuracy assessment for nuclear reactor systems. Note that in the context of this manuscript the words accuracy, uncertainty and standard deviation are used interchangeably.

The uncertainties in predictions of responses of interest are an
inevitable consequence of the uncertainties introduced by the forms of the closure models, governing equations, numerics, and physical parameters. Our work addresses only the contribution of the physical parameters, hence assuming the other sources of uncertainty in the predicted responses of interest are relatively small. The uncertainties in the physical parameters used by models (such as nuclear data cross-sections) originate from the measurement (experimental) uncertainties of the experiments used to determine, via model calibration, their nominal values, and from uncertainties introduced during the calibration process. However, while uncertainty cannot be avoided in such calculations, it can be reduced via various techniques, as discussed in Aliberti et al. (2006); Cacuci (2010); Khuwaileh and Turinsky (2017); Proctor (2012). If parameter uncertainties can be reduced, resulting in reduced uncertainties in the responses of interest, the competitiveness of nuclear energy against other energy sources can be improved by utilizing the margin so gained to improve the design and/or operation, producing economic savings while maintaining or
enhancing nuclear safety.

In this work, nuclear data and various thermal-hydraulics parameters are considered to be the major contributors to the uncertainties in the calculated reactor core responses of interest. Therefore, it is natural to seek algorithms that identify the key uncertainty sources whose reduced uncertainties would have the highest impact on the uncertainties of the reactor core responses of interest. Once the key parameter contributors are identified, experiments can be designed such that the uncertainties in the parameters are reduced, resulting in improved model predictability and reduced uncertainties in the responses of interest. However, such experiments are likely to be expensive. Therefore, one must take into account both the cost of the experiments and the potential economic benefit resulting from uncertainty reductions on the responses of interest. For example, if a safety limit is limiting the reactor core power level, a reduced uncertainty could allow a power level upgrade with the resulting economic benefit. To provide guidance in determining the optimum experimental program, a constrained optimization problem can be defined that minimizes a cost function representing the cost of the experiments minus the economic benefits obtained, while being constrained by limits on the reduced uncertainties sought for the responses of interest. This problem has been tackled before and appears under the name nuclear data target accuracy assessment, initially developed by Usachev in the 1970s (Usachev and Bobkov, 1972). Recent developments and contributions to target accuracy assessment are best represented by the works in Aliberti et al. (2006); Salvatores and Jacqmin (2008); Rochman et al. (2017). Examples of target accuracy assessment in nuclear engineering applications include a variety of problems that are important to both the safety and the design of nuclear reactors.

Target accuracy assessment is among the most important examples of the so-called inverse uncertainty quantification problems (Khuwaileh and Turinsky, 2017; Proctor, 2012), which also include data assimilation and model calibration practices. However, there are basic differences between model calibration and target accuracy assessment. Model calibration is used to obtain best estimates of key parameters and their uncertainties (e.g. nuclear data cross-sections) with the aim of improving the simulation prediction accuracy. In other words, model calibration aims to find the true state, or the best estimate of the true state, of a certain set of key parameters utilizing experimental measurements of certain physical responses. For example, given a certain uncertainty in a measured response, model calibration can adjust the uncertainties of the simulation model parameters such that they are consistent with the measured response uncertainty. On the other hand, target accuracy assessment aims to estimate the requirements on the experiments such that the target accuracy of the prediction of the attributes of interest can be met. Specifically, target accuracy assessment uses uncertainty targets specified by safety, design and operational constraints to obtain the requirements on the uncertainty (e.g. covariance) library of the key uncertainty sources originating from the model's parameters (Usachev and Bobkov, 1972). Nevertheless, the two problems share a great deal of mathematical formulation.

Target accuracy assessment focused on nuclear data has been applied to current and future reactor cores (Aliberti et al., 2006; Rochman et al., 2017; Alhassan et al., 2016; Arbanas et al., 2015; Khuwaileh and Abdel-Khalik, 2015). These studies considered different integral quantities such as the multiplication factor, reactivity coefficients and various important reaction rates. Based on target uncertainties for the responses of interest, these studies have shown that the current nuclear data evaluations would benefit from further improvements in accuracy; see Aliberti et al. (2006) for an example of a comprehensive study.

Ideally, all parameters that might contribute to the overall uncertainty of a response of interest would be included in an inverse uncertainty quantification analysis; for example, nuclear fuel and structural material cross-sections, fission product yields, and any other potential sources of uncertainty would be included. Hence the number of parameters might grow very large, which increases the computational cost of the inverse uncertainty quantification analysis and weakens the identifiability of the problem's solution. Moreover, working with high fidelity models adds another degree of complexity and computational cost. These challenges are usually addressed by eliminating parameters that do not contribute significantly to the overall uncertainty of the responses of interest, based upon experience and/or linear sensitivity estimates. Using linear sensitivity estimation, the sensitivity of each response is calculated for a reference case composition; then, influential parameters are selected based on their contributions to the overall uncertainty of the response of interest. However, the sensitivity profile may not remain constant over a range of parameter values; this is the case for non-linear models, where the contribution may change as the parameters change value. This means that eliminating parameters that do not significantly contribute to the uncertainty relies on the assumption that the uncertainty contribution of each source is constant. Since this assumption cannot always be asserted and/or guaranteed, many sources of uncertainty should be retained in the inverse uncertainty quantification analysis.

In order to meet the challenges discussed above, this work introduces two improvements to target accuracy assessment. First, a new formulation of the optimization problem is proposed such that it reflects a more realistic cost function. Second, while previous work considered responses of interest to respond linearly to changes in model parameter values, this work introduces a non-linear, time dependent formulation. In order to introduce non-linearity, we propose to replace the original model by a more computationally efficient surrogate model that captures the non-linearity of the original model within certain error bounds, which should be much lower than the target accuracies (i.e. target uncertainties). The constructed surrogates are crafted to represent a reactor core's depletion cycle such that they capture the time dependency of the target accuracies of interest. While there is a plethora of methods that can be used in surrogate model construction (Queipo et al., 2005), this work proposes the use of reduced order modeling based surrogates. More specifically, polynomial surrogates will be used in conjunction with reduced order modeling algorithms developed earlier (Constantine et al., 2014; Khuwaileh and Abdel-Khalik, 2015; Khuwaileh, 2015).

This work is devoted to the development of a non-linear, surrogate based formulation for the inverse uncertainty quantification problem (the target accuracy assessment in the context of this manuscript) with applications to nuclear reactor core problems. The proposed formulation and surrogate based algorithm will be tested on a 3-dimensional multi-physics reactor core depletion problem with thermal-hydraulics feedback, simulated via the Virtual Environment for Reactor Applications-Core Simulator (VERA-CS) (Palmtag et al., 2014). Since nuclear reactor modeling and simulation involve neutronics, fuel thermo-mechanics and thermal-hydraulics, multi-physics coupling is needed to account for the feedback among these different effects. Therefore, this work will extend target accuracy assessment to multi-physics coupled models with non-linear time dependence, employing a gradient-free formulation, i.e. not assuming sensitivities are available or constant.
2. Methods and algorithms

2.1. Problem formulation

In this subsection, the classical formalism is overviewed, after which the new proposals are highlighted and discussed. The classical target accuracy assessment problem formulation used in Aliberti et al. (2006); Usachev and Bobkov (1972) is altered such that the cost (objective) function is modified. In addition, the target uncertainty constraints are evaluated non-linearly via a Monte Carlo technique, which is made possible by replacing the high fidelity model with a surrogate model of negligible computational cost, as discussed in the next subsection.
The target accuracy assessment problem is an inequality-non-linear-constrained optimization problem. The starting point is the prior covariance matrix $C$ of the parameters of interest, whose diagonal elements are the variances of the parameters. The target is to calculate the updated covariance matrix $C'$ which, when propagated through the model, yields uncertainties in the responses of interest that meet the targets pre-specified by safety, design or operational requirements. In the classical target accuracy assessment formalism the covariance elements are updated as follows (Aliberti et al., 2006):

$$C'_{ij} = d_i C_{ij} d_j \qquad (1)$$

where the $d_i$'s are the adjustment parameters to be determined by the optimization problem solution. More precisely one would write $C'_{ij} = d_{ij} C_{ij}$, with $d_{ji} = d_{ij}$ to preserve symmetry, which when compared to Eq. (1) reveals the assumption that $d_{ij} = d_i d_j$. This assumption implies that the posterior correlation of parameters $i$ and $j$ is unchanged from the prior correlation. To show this, note that the covariance terms can be expressed as $C_{ij} = \rho_{ij}\,\sigma_i\,\sigma_j$, where $\rho_{ij}$ is the correlation coefficient, $\sigma_i = \sqrt{C_{ii}}$ and $\sigma_j = \sqrt{C_{jj}}$. Substituting this expression into Eq. (1) produces

$$C'_{ij} = d_i C_{ij} d_j = \rho_{ij}\, d_i \sigma_i\, \sigma_j d_j = \rho_{ij}\, \sigma'_i\, \sigma'_j$$

where $\sigma'_i = d_i \sigma_i$ and $\sigma'_j = d_j \sigma_j$, showing that the correlation coefficient is unchanged. Note that this assumption can be removed by extending the adjustment parameters ($d_i$) to independently update the correlation terms $\rho_{ij}$. However, in this work, as in the reference formalism (Aliberti et al., 2006), the correlations (but not the covariance terms) are assumed to be constant so that the underlying physics that causes the correlation is preserved.

The classical formalism of the target accuracy assessment problem also often assumes that the experimental cost (objective function) to be minimized can be defined as

$$\mathrm{Cost}[C'] = \sum_i \frac{w_i}{(\sigma'_i)^2} = \sum_i \frac{w_i}{d_i\,(\sigma_i)^2\, d_i} \qquad (2)$$
where the $w_i$'s are user-defined weights representing the cost of measuring the $i$th parameter.
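To make the covariance update and the classical cost concrete, the following minimal numpy sketch (with invented parameter values, not data from this work) applies Eq. (1) to a small covariance matrix, verifies that the correlation matrix is preserved, and evaluates the classical cost of Eq. (2):

```python
import numpy as np

# Illustrative 3-parameter prior: standard deviations and correlations
# (invented values, not data from this study).
sigma = np.array([0.05, 0.10, 0.02])
rho = np.array([[1.0, 0.3, 0.0],
                [0.3, 1.0, -0.2],
                [0.0, -0.2, 1.0]])
C = rho * np.outer(sigma, sigma)        # C_ij = rho_ij * sigma_i * sigma_j

d = np.array([0.5, 0.8, 1.0])           # adjustment factors d_i
C_prime = np.outer(d, d) * C            # Eq. (1): C'_ij = d_i * C_ij * d_j

sigma_prime = np.sqrt(np.diag(C_prime))             # sigma'_i = d_i * sigma_i
rho_prime = C_prime / np.outer(sigma_prime, sigma_prime)
assert np.allclose(rho_prime, rho)                  # correlations are unchanged

w = np.ones_like(sigma)                             # unit weights w_i
classical_cost = np.sum(w / sigma_prime**2)         # Eq. (2)
```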
This cost function is to be minimized while the uncertainties of the responses of interest are constrained by the vector of target accuracies $\vec{\tau}$ (maximum allowed variances):

$$\vec{\sigma}_R^{\,2} = \operatorname{diag}\{S\, C'\, S^{T}\} \leq \vec{\tau} \qquad (3)$$

where $\vec{\sigma}_R^{\,2}$ denotes the response variance vector, the "diag" function forms a vector from the matrix diagonal, and $S$ is the sensitivity profile of the application's responses with respect to the parameters (e.g. cross-sections) at the reference case. Note that the expression in Eq. (3) includes the covariance terms. The covariance terms reflect the correlations between different parameters; hence they affect the magnitude of the responses' variance reduction. It has been shown that the correlation terms are important in targeted cross-section assessment (Palmiotti et al., 2010).
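The linear "sandwich rule" of Eq. (3) is equally compact; a sketch with placeholder sensitivities and covariance (illustrative values only):

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder sensitivity profile S (m responses x n parameters) and an
# updated covariance C' (any symmetric positive semi-definite matrix).
m, n = 2, 4
S = rng.standard_normal((m, n))
L = rng.standard_normal((n, n))
C_prime = L @ L.T

# Eq. (3): linear 'sandwich rule' for the response variances.
sigma_R2 = np.diag(S @ C_prime @ S.T)

tau = np.array([50.0, 50.0])            # illustrative target variances
feasible = np.all(sigma_R2 <= tau)      # constraint check of Eq. (3)
```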
The current work aims to replace two assumptions embedded in the classical formalism. First, that the cost of performing an experiment depends only on the parameter variances (inversely) and on how expensive the experiments are to achieve this reduction (the $w_i$ weights). Second, that the uncertainty propagates linearly via Eq. (3), which may be a weak assumption, especially when non-linear responses arising from thermal-hydraulics feedback are considered.

This work proposes to improve the formalism by taking into account the fact that the cost function should also reflect the financial gains achieved by employing target accuracy assessment. In other words, whenever a parameter's uncertainty is reduced, that modification could reduce the uncertainties in some of the target responses of interest, potentially resulting in increased margins that can be taken advantage of to achieve financial gains (e.g. an enlarged operating space or increased design freedom). Moreover, since in our application the parameters (e.g. cross-sections) are challenging to measure via differential experiments, this work argues that their improvement can also be achieved via measurement of certain integral responses (e.g. core responses such as the multiplication factor and in-core detector responses). The integral responses are then used in model calibration studies to achieve the actual improvement in the parameters. The formula below is a candidate for this type of cost (objective) function:

$$\mathrm{Cost}(\vec{d}\,) = \sum_{i=1}^{M} w_i \left( \frac{1}{\big(\sigma_i^{IR\prime}(\vec{d}\,)\big)^2} - \frac{1}{\big(\sigma_i^{IR}\big)^2} \right) - \sum_{j=1}^{m} \mu_j \Big( (\sigma_{R,j})^2 - \big(\sigma'_{R,j}(\vec{d}\,)\big)^2 \Big) \qquad (4)$$

subject to

$$\big(\sigma'_{R,j}(\vec{d}\,)\big)^2 \leq \tau_j \quad \text{for } j = 1, \dots, m \qquad (5)$$
where $\sigma_{R,j}$ is the initial uncertainty in the $j$th target response of interest, and $\sigma'_{R,j}(\vec{d}\,)$ is its updated uncertainty propagated through Monte Carlo sampling given an adjustment vector $\vec{d}$; $\sigma_i^{IR}$ is the current integral response measurement (experimental) uncertainty for measurement $i$, and $\sigma_i^{IR\prime}(\vec{d}\,)$ is the updated integral response measurement uncertainty required to update the parameters' uncertainties ($C$) via model calibration in order to achieve the desired response accuracies. In the context of this manuscript the measurement uncertainty refers to the overall experimental uncertainty in the actual measurement. The integral responses used to improve the uncertainties need not be, and generally are not, the same as the target responses of interest. $M$ is the number of integral responses, and $m$ is the number of target responses of interest. The $\{\mu_j\}$ and $\{w_i\}$ are factors inserted to give the user the freedom to bias certain experiments and variance reductions to have more or less importance in the objective function: $\mu_j$ reflects the financial gain per unit improvement in the variance of the $j$th response of interest, and $w_i$ reflects the cost of improving the $i$th experiment via reduced measurement uncertainty. Again, this cost function is just a candidate with dependencies on experimental costs and financial gains, replaceable by a different cost function without a need to modify the target accuracy assessment algorithm. As noted earlier, $\tau_j$ is the target variance to be achieved in the $j$th response of interest.

One final question is to be answered before moving to the next section: how does the optimization algorithm connect the integral response uncertainties $\sigma_i^{IR\prime}(\vec{d}\,)$ to those of the parameters? Using surrogate models, to be discussed in the coming subsection, it is straightforward to map the parameter uncertainties, at each optimization iteration, to the updated integral response uncertainties $\sigma_i^{IR\prime}(\vec{d}\,)$, which are needed to calculate the cost function.

A constrained optimization problem is solved to identify the set of optimum integral experiments. Our constrained minimization is the problem of finding a vector $\vec{d}$ (integral experiment adjustment factors) that is a local minimum of the scalar cost function, Eq. (4), subject to the constraints on the allowable $\vec{d}$, Eq. (5):

$$\min_{\vec{d}} \ \mathrm{Cost}(\vec{d}\,).$$

This is accomplished using the fmincon MATLAB routine (Lagarias et al., 1998; Han, 1977). To understand the trust-region reflective approach employed for the optimization solution, consider a minimization problem that minimizes $f(\vec{x})$ (in our case the cost function), where the function takes vector arguments and returns a scalar. Suppose that the algorithm is at point $\vec{x}$ (in our case $\vec{d}\,$) in the $M$-space. The algorithm moves to a point with a lower function value. The basic idea is to approximate $f$ with a simpler function $g$ that approximates the behavior of $f(\vec{x})$ in a neighborhood $N$ around the reference point $\vec{x}$. This neighborhood is the trust region. A trial step $\vec{x}\,'$ is computed by minimizing
(or approximately minimizing) over $N$. This is the trust-region subproblem,

$$\min_{\vec{x}\,'} \{\, g(\vec{x}\,') : \vec{x}\,' \in N \,\}.$$

The updated point is $\vec{x}\,'$ if $f(\vec{x}\,') < f(\vec{x})$; otherwise, the current point $\vec{x}$ is unchanged and $N$, the region of trust, is shrunk, and the trial step computation is repeated.
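As an illustrative sketch of this constrained minimization, and not the authors' fmincon implementation, the snippet below expresses Eqs. (4) and (5) with scipy's trust-region constrained solver; the hypothetical propagate_sigmas function stands in for the surrogate-based Monte Carlo propagation, and the numerical values are placeholders with magnitudes loosely motivated by the Section 3 case study:

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# Placeholder sizes and data; magnitudes loosely motivated by Section 3.
M, m = 2, 2                                  # integral responses, target responses
w, mu = np.ones(M), np.ones(m)               # weights in Eq. (4)
sigma_IR0 = np.array([6.0e-4, 0.01])         # prior integral-response uncertainties
sigma_R0 = np.array([98.0, 2.7])             # prior target-response uncertainties
tau = np.array([28.0, 1.0]) ** 2             # target variances of Eq. (5)

def propagate_sigmas(d):
    """Hypothetical stand-in for Monte Carlo sampling through the
    surrogates: maps adjustment factors d to updated integral-response
    and target-response uncertainties (a crude linear proxy here)."""
    return d * sigma_IR0, d * sigma_R0

def cost(d):
    sigma_IR, sigma_R = propagate_sigmas(d)
    expense = w @ (1.0 / sigma_IR**2 - 1.0 / sigma_IR0**2)  # experiment cost
    gain = mu @ (sigma_R0**2 - sigma_R**2)                  # financial gain
    return expense - gain                                   # Eq. (4)

def response_variances(d):
    return propagate_sigmas(d)[1] ** 2

# Eq. (5): updated response variances must not exceed the targets tau.
constraint = NonlinearConstraint(response_variances, -np.inf, tau)
result = minimize(cost, x0=np.full(M, 0.9),
                  bounds=[(1e-3, 1.0)] * M,       # only uncertainty reductions
                  constraints=[constraint], method='trust-constr')
d_opt = result.x                                   # optimum adjustment factors
```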
2.2. Non-linear surrogate based target accuracy assessment

In this subsection, the formulation of a surrogate model is discussed. The role of the surrogate model is vital to this work: the aim is to replace the original high fidelity, computationally expensive model with a surrogate model of negligible computational cost. Surrogate models can take many forms, such as polynomial surrogates, Gaussian processes, polynomial chaos expansions and more (Queipo et al., 2005). While these surrogates have great benefits, they suffer from a number of drawbacks. For example, in order to construct such surrogates the original model must provide a training set; therefore, in order to construct a high-order surrogate model, the original model must be run a number of times. Moreover, in order to make sure that the surrogate captures the physical behavior of the original model within the region of interest, rigorous error analysis must be performed.

To exemplify the process of using surrogates in target accuracy assessment, this work assumes a polynomial surrogate form. Moreover, in order to reduce the number of training runs needed to estimate the surrogate's fitting coefficients, reduced order modeling projection techniques are employed to reduce the number of fitting parameters. Refs. Constantine et al. (2014); Khuwaileh and Abdel-Khalik (2015) detail the use of reduced order modeling projection techniques to identify the important degrees of freedom in the input, state or response spaces. For example, assume that a model $f$ has $n$ parameters, where $n$ is a very large number; constructing a surrogate for such a model would require training it heavily. However, the reduced order modeling algorithms proposed in Ref. (Khuwaileh, 2015) enable the identification of the influential degrees of freedom based on the model physics (e.g. $r$ degrees of freedom, $r \leq n$) within the $n$-space. These degrees of freedom form a subspace of dimension $r$ whose basis vectors, representing the influential degrees of freedom, are the column vectors of a matrix $U$. Calculating the columns of $U$ requires $r$ model runs, and if $r \ll n$ the computational cost of this process is relatively small. Once this matrix is obtained, it can be used to project the parameter space of dimension $n$ onto the identified subspace of dimension $r$. To illustrate this process, consider the 3rd order polynomial surrogate below. Eq. (6) indicates that the matrix $U$ is used to project not only the parameters but also the surrogate coefficients onto that subspace:

$$f \approx \tilde{f} = \vec{\beta}_1^{\,T} U U^{T} \Delta\vec{x} + \big(\vec{\beta}_2^{\,T} U U^{T} \Delta\vec{x}\big)^2 + \big(\vec{\beta}_3^{\,T} U U^{T} \Delta\vec{x}\big)^3 = \vec{\beta}_{1,r}^{\,T}\, \Delta\vec{\alpha} + \big(\vec{\beta}_{2,r}^{\,T}\, \Delta\vec{\alpha}\big)^2 + \big(\vec{\beta}_{3,r}^{\,T}\, \Delta\vec{\alpha}\big)^3 \qquad (6)$$

Given that $U \in \mathbb{R}^{n \times r}$ and $\Delta\vec{\alpha} = U^{T} \Delta\vec{x} \in \mathbb{R}^{r}$, then $\vec{\beta}_{j,r} \in \mathbb{R}^{r}$ for $j = 1, 2, 3$. Hence, in order to determine the unknown elements of $\{\vec{\beta}_{j,r}\}$ the model needs to be run (at most) $3r$ times, in addition to the $r$ runs needed to calculate the subspace. The fitting coefficients are determined by generating $3r$ random samples of the input parameter vector $\Delta\vec{x}$. To minimize the computational cost of generating the subspace, Refs. Byrd et al. (2000) and Khuwaileh et al. (2015) introduced a very efficient method utilizing the non-converged iterates of the model. The following steps summarize the overall process (a code sketch follows at the end of this subsection):

1. Calculate the basis of the active subspace (the influential degrees of freedom, captured in the matrix $U$) for the parameters with regard to each response of interest and each integral response; for example, the gradient-free or gradient-based algorithms proposed in Ref. (Khuwaileh, 2015) can be used.
2. Restricting the parameter space using the matrix $U$, construct the surrogate model for each response of interest and each integral response, e.g. a surrogate of the form suggested in Eq. (6).
3. Set the target uncertainty limits $\vec{\tau}$ for the responses of interest.
4. Solve the optimization problem for the adjustment parameter vector $\vec{d}$ using the cost function defined by Eq. (4) and the constraint defined by Eq. (5). At each optimization step the uncertainties in the target responses ($\sigma_{R,j}^2$) are obtained by Monte Carlo sampling via the surrogate model, and the information carried by the adjustment parameter vector $\vec{d}$ is transformed into required uncertainties in the integral responses by further propagating the new parameter uncertainties via Monte Carlo sampling using the surrogate model.

The uncertainty is thus propagated non-linearly via Monte Carlo sampling through the computationally efficient surrogate models, thereby dropping the linearity assumption of the classical formalism. Moreover, the end result is a set of required uncertainties in the measurable integral responses to be measured so that the target accuracies in the responses of interest are met. By varying the integral response experiments being considered in Step 3, different experimental approaches can be evaluated. Note that the recommended integral response uncertainties assume that future analysts will not have a priori information about the parameters' uncertainties. Therefore, this algorithm results in conservative requirements. If future analysts (who will use these requirements) have a priori information and employ it in Bayesian inference, then the requirements produced by the proposed algorithm will result in uncertainties below the target accuracies.
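A minimal sketch of Steps 1 and 2 follows, under two stated assumptions: the active subspace basis U is taken from the dominant left singular vectors of a matrix of sampled finite-difference gradients (one option; the gradient-free variants of Ref. (Khuwaileh, 2015) are not reproduced here), and the Eq. (6) coefficients are fitted by non-linear least squares. The function expensive_model is a hypothetical stand-in for a single high-fidelity response evaluation:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
n, r = 500, 5                         # full parameter dimension, subspace rank

A = rng.standard_normal((r, n)) / np.sqrt(n)   # hidden low-dim structure (demo only)

def expensive_model(dx):
    """Hypothetical stand-in for one high-fidelity response evaluation."""
    t = A @ dx
    return t[0] + t[1]**2 + 0.1 * t[2]**3

# Step 1: active subspace from sampled gradients (finite differences are
# affordable only because the demo model is cheap; for real simulators the
# gradient-free constructions cited above would be used instead).
def gradient(dx, h=1e-5):
    f0 = expensive_model(dx)
    return np.array([(expensive_model(dx + h * e) - f0) / h for e in np.eye(n)])

G = np.stack([gradient(0.01 * rng.standard_normal(n)) for _ in range(2 * r)], axis=1)
U = np.linalg.svd(G, full_matrices=False)[0][:, :r]    # n x r basis, matrix U

# Step 2: fit the Eq. (6) cubic surrogate in the reduced variables.
X = 0.01 * rng.standard_normal((3 * r, n))             # 3r random training samples
alphas = X @ U                                         # alpha = U^T dx per sample
y = np.array([expensive_model(dx) for dx in X])

def residuals(beta):
    b1, b2, b3 = beta.reshape(3, r)
    return alphas @ b1 + (alphas @ b2)**2 + (alphas @ b3)**3 - y

beta = least_squares(residuals, x0=0.1 * np.ones(3 * r)).x

def surrogate(dx):
    b1, b2, b3 = beta.reshape(3, r)
    a = U.T @ dx
    return a @ b1 + (a @ b2)**2 + (a @ b3)**3
```

Monte Carlo propagation (Step 4) then amounts to evaluating surrogate on parameter samples drawn from a candidate covariance, at negligible cost per sample.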
3. Case study: 3-dimensional multi-physics assembly problem with thermal-hydraulics feedback

In this case study a 3-dimensional nuclear fuel assembly depletion problem with thermal-hydraulics feedback is the subject of a target accuracy assessment study. CASL Progression Problem Number 6 is used as the illustrative example. The fuel assembly is modeled using the VERA-CS (Palmtag et al., 2014) core simulator, depicted in Fig. 1, composed of the MPACT neutronics solution (MPACT Team, 2013) coupled with the COBRA-TF thermal-hydraulics solution (Avramova, 2009). The material composition is updated over user-specified time (burnup) steps via solution of a generalized form of the Bateman equations using ORIGEN (SCALE, 2009), integrated with MPACT employing a predictor-corrector method. In Fig. 1, X, Y and Z denote input parameter values, and a, b, d and l denote internally calculated quantities shared with other physics modules. Since MPACT employs a 2D MOC - 1D SPN methodology, in our case using 47 energy groups, COBRA-TF employs a subchannel method, and ORIGEN tracks several hundred isotopes, VERA-CS is both high fidelity and computationally demanding.

CASL Progression Problem Number 6 is a single pressurized water reactor 17 × 17 fuel assembly (Westinghouse fuel design) with uniform fuel enrichment, resembling fuel used in Watts Bar Unit 1 Cycle 1. The example used here assumes a soluble boron concentration of 1300 ppm and a 100% power level with no axial blankets. Overall, the assembly contains 264 fuel rods, with 24 guide tubes and a single instrument tube at the assembly's center. There are no control rods or removable
burnable absorber rods in this problem (Godfrey, 2013). For more descriptive information refer to Table 1.

Fig. 1. Coupling scheme for the VERA-CS core simulator.

Table 1
CASL Problem 6 specifications (Godfrey, 2013).

Parameter Name | Value
Fuel Pellet Radius | 0.4096 cm
Fuel Cladding Inner Radius | 0.4180 cm
Fuel Clad Outer Radius | 0.4750 cm
Guide Tube Inner Radius | 0.5610 cm
Guide Tube Outer Radius | 0.6020 cm
Rod Pitch | 1.26 cm
Instrument Tube Inner Radius | 0.5590 cm
Instrument Tube Outer Radius | 0.6050 cm
Outside Rod Height | 385.10 cm
Fuel Stack Height (active fuel) | 365.76 cm
Plenum Height | 16 cm
End Plug Heights (×2) | 1.67 cm
Fuel (composition, enrichment) | UO2, 3.1%
Clad/Caps/Guide Tube Material | Zircaloy-4
Inlet Temperature | 565 K
Pressure | 2250 psia
Rated Flow (100%) | 0.6824 Mlb/hr
Rated Power (100%) | 17.67 MWt

The problem described above is used in this case study to perform target accuracy assessment on a few measurable integral responses: the multiplication factor (k-eff) and the fission reaction rates (FR) of the movable in-core instruments as functions of assembly location and axial elevation. These integral responses are used to reduce the uncertainties of selected parameters (i.e. the cross-sections in the MPACT library (MPACT Team, 2013), the pellet-clad gap thermal conductivity and the grid spacer loss coefficient) via model calibration. It is important to emphasize here that the nuclear data cross-section covariance library used in this section, generated by SCALE, is a 44 energy group library, while VERA-CS uses a 47 group library. Therefore, the perturbations generated with the 44 group library must be mapped to the 47 group structure. This is achieved via a linear interpolation that is based on the assumption of constant lethargy intervals.
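Under the stated constant-lethargy assumption, the 44-to-47 group mapping can be sketched as a one-dimensional interpolation at group-center lethargies; this is an illustration of the idea, not the production mapping used in VERA-CS:

```python
import numpy as np

def regroup_perturbation(pert_44, e_min=1.0e-5, e_max=2.0e7):
    """Re-bin a 44-group relative perturbation onto a 47-group structure by
    linear interpolation at group-center lethargies, assuming both grids
    are uniform in lethargy (an illustration of the stated assumption)."""
    u_max = np.log(e_max / e_min)                   # total lethargy range
    centers_44 = (np.arange(44) + 0.5) * u_max / 44
    centers_47 = (np.arange(47) + 0.5) * u_max / 47
    return np.interp(centers_47, centers_44, pert_44)

pert_44 = np.random.default_rng(1).normal(0.0, 0.02, 44)  # sampled perturbation
pert_47 = regroup_perturbation(pert_44)                   # mapped to 47 groups
```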
In total, 295 nuclides, 47 energy groups and 5 reactions are treated (absorption, fission, nu-fission, transport, scattering), besides the fission spectrum and the (n,2n) reaction. In addition, an initial uncertainty of ±50% is assumed for the gap conductivity and ±4% for the grid spacer loss coefficient, each without correlation to any other parameters. The target responses are the maximum fuel pin power (Pmax) and the maximum fuel pin temperature (Tmax). The maximum pin power and maximum pin temperature are highly dependent on the nuclear cross-sections and the pellet-clad gap thermal conductivity. Moreover, the multiplication factor and the fission rate are both affected by the same parameters. Hence, reducing the measurement (experimental) uncertainties in the multiplication factor and the fission rate would reduce the predicted uncertainties in the parameters of interest through model calibration studies (Aliberti et al., 2006; Cacuci, 2010; Khuwaileh and Turinsky, 2017). Instead of using a linear constraint (defined by Eq. (3)), Monte Carlo samples from the computationally efficient surrogates are used to estimate the uncertainties in both the responses of interest (Tmax and Pmax) and the integral responses used for the improvement of the parameters' uncertainties (keff and FR). For this case study all benefits and experiments are considered equivalent; hence, {μj} and {wi} are set to 1.0 in Eq. (4). Also, the initial uncertainties appearing in Eq. (4) are set to σkeff = 60 pcm and σFR = 0.01.

First, the subspaces for the responses of interest and the integral responses are determined such that the errors so introduced are negligible versus the magnitude of the uncertainties in these quantities. This results in a dimensional reduction in the selected parameters from 69,327 (corresponding to 295 isotopes, 5 reactions and 47 energy groups, plus two thermal-hydraulics parameters: the gap conductivity and the grid spacer loss coefficient) to 53. Next, the 3rd order surrogates of the form defined in Eq. (6) are constructed and tested. Table 2 shows the accuracy tests of the 3rd order surrogates. To provide some perspective on the uncertainty reductions desired, Table 3 shows the initial and the tightest employed target accuracies of the responses of interest, where σTmax and σPmax denote the standard deviations of the maximum fuel temperature and maximum fuel pin power, and τTmax and τPmax denote the corresponding target accuracies (standard deviations). The values are reported in both absolute and relative senses.
Table 2
Target accuracy assessment: surrogate accuracy features. Surrogate order: 3rd order; construction data points (a): 150; validation points (b): 40.

Response | Residuals distribution (c) | Surrogate-related uncertainty (d) | RMS
k_eff | i.i.d. | ε_keff = 5 pcm | 14.7 pcm
P_max | i.i.d. | ε_Pmax = 0.0098 W/cm | 0.01 W/cm
T_max | i.i.d. | ε_Tmax = 7.4 °C | 9.1 °C
FR | i.i.d. | ε_FR = 0.00021 | 0.0009

(a) Used to construct the surrogate models. (b) Used to test the model's performance and error analysis. (c) Independent and identically distributed random variables (i.i.d.). (d) Standard deviation of the surrogate models (i.i.d.).

Table 3
Initial and target accuracies for the responses of interest.

Entry | Value at 0 GWd/MTU | Value at 30 GWd/MTU
σTmax | 98 °C [7%] | 129 °C [11.72%]
τTmax | 28 °C [2%] | 56 °C [4%]
σPmax | 2.7 W/cm [1.2%] | 3.5 W/cm [1.8%]
τPmax | 1 W/cm [0.37%] | 1.5 W/cm [0.54%]

Table 4
The target accuracies along with the required experimental uncertainties for the measurable integral parameters at 0 GWd/MTU.

τTmax % | τPmax % | σ′keff | σ′FR
6% | 1% | 432 pcm | 0.0063
4% | 0.5% | 369 pcm | 0.0031
2% | 0.37% | 314 pcm | 0.0020

Table 5
The target accuracies along with the required experimental uncertainties for the measurable integral parameters at 30 GWd/MTU.

τTmax % | τPmax % | σ′keff | σ′FR
10% | 1% | 398 pcm | 0.0041
8% | 0.75% | 311 pcm | 0.0019
4% | 0.54% | 268 pcm | 0.0009
Table 4 and Table 5 illustrate several target accuracies along with the required integral response measurement (experimental) uncertainties at 0 GWd/MTU and 30 GWd/MTU, respectively. For the same target accuracies, note that the requirement on the accuracies of the integral experiment responses is stricter when the fuel assembly is depleted. This is believed to occur because isotopes whose microscopic cross-sections have larger associated uncertainties are created with depletion, leading to uncertainties in these isotopes' number densities. The increasing abundance of isotopes with both larger microscopic cross-section and number density uncertainties increases the macroscopic cross-section uncertainties with depletion. This interpretation is supported by the initial uncertainties shown in Table 3.

The Root Mean Square values reported in Table 2 are calculated as follows. For the fission rate (for $S$ samples and $K$ axial nodes used in the model):

$$\varepsilon_{FR} = \sqrt{ \frac{1}{S} \sum_{s=1}^{S} \frac{ \sum_{k=1}^{K} \big( FR^{Surrogate}_{s,k} - FR^{Exact}_{s,k} \big)^2 }{ \big( \sum_{k=1}^{K} FR^{Exact}_{s,k} \big)^2 } }\,;$$

for the k-eff:

$$\varepsilon_{k\text{-}eff} = \sqrt{ \frac{1}{S} \sum_{s=1}^{S} \big( k^{\,s}_{eff,Surrogate} - k^{\,s}_{eff,Exact} \big)^2 }\,;$$

and for $T_{max}$ (and analogously $P_{max}$):

$$\varepsilon_{T_{max}} = \sqrt{ \frac{1}{S} \sum_{s=1}^{S} \big( T^{\,s}_{max,Surrogate} - T^{\,s}_{max,Exact} \big)^2 }\,.$$
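These validation metrics are straightforward to compute; a sketch assuming paired arrays of surrogate and high-fidelity ("exact") evaluations:

```python
import numpy as np

def rms_scalar(surrogate_vals, exact_vals):
    """RMS error for scalar responses such as k-eff, T_max or P_max."""
    surrogate_vals, exact_vals = np.asarray(surrogate_vals), np.asarray(exact_vals)
    return np.sqrt(np.mean((surrogate_vals - exact_vals) ** 2))

def rms_fission_rate(surrogate_fr, exact_fr):
    """RMS error for the fission-rate profile: per-sample squared errors
    summed over the K axial nodes, normalized by the squared sum of the
    exact profile, then averaged over the S samples (arrays of shape (S, K))."""
    surrogate_fr, exact_fr = np.asarray(surrogate_fr), np.asarray(exact_fr)
    num = ((surrogate_fr - exact_fr) ** 2).sum(axis=1)
    den = exact_fr.sum(axis=1) ** 2
    return np.sqrt(np.mean(num / den))
```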
4. Summary and conclusions

Target accuracy assessment (TAA) is a practice employed to determine the experimental requirements needed to achieve a certain level of prediction accuracy. This work has introduced the use of surrogate models to solve the TAA problem within nuclear engineering applications. The use of surrogate models facilitates the target accuracy analysis for large-scale, highly non-linear applications. This manuscript illustrated the proposed algorithm by performing a TAA on a depletion problem with thermal-hydraulics feedback.

First, a surrogate was constructed to replace a computationally expensive simulator (VERA-CS). Surrogate models take negligible computer time to execute versus the original physics models, since they are algebraic equations versus, most likely, the solution of partial differential equations. The computational burden is associated with generating the surrogate model. In this work, this has been done by first sampling the physics models until the subspaces that predict the quantities of interest with sufficient accuracy are identified, followed by using these samples, and additional samples as necessary, to build the surrogate models and to quantify their uncertainties. This results in a surrogate that is computationally efficient and has far fewer parameters. What needs to be compared, then, is the number of original physics model samples required to complete the TAA analysis without a surrogate model versus the number of original physics model samples required to build the surrogate model. Since the number of samples required to build the subspace depends upon the subspace rank, which for our application is much smaller than the original space's rank (53 versus 69,327), the reduction in original physics model samples is a factor of about 1308 for the current application. Therefore, the use of a surrogate model introduces significant computational savings and yet can optimize the experimental requirements to achieve a certain target accuracy. Moreover, the TAA for the depletion problem indicates that the more fuel depletion is involved in the TAA study, the stricter the experimental requirements become.

Acknowledgment

This research was supported by the Consortium for Advanced Simulation of Light Water Reactors (http://www.casl.gov), an Energy Innovation Hub (http://www.energy.gov/hubs) for Modeling and Simulation of Nuclear Reactors under U.S. Department of Energy Contract No. DE-AC05-00OR22725.
Appendix A. Supplementary data
Supplementary data to this article can be found online at https://doi.org/10.1016/j.pnucene.2019.01.023.

References

Alhassan, E., Sjöstrand, H., Helgesson, P., Österlund, M., Pomp, S., Koning, A.J., Rochman, D., 2016. On the use of integral experiments for uncertainty reduction of reactor macroscopic parameters within the TMC methodology. Prog. Nucl. Energy 88, 43–52.
Aliberti, G., Palmiotti, G., Salvatores, M., Kim, T.K., Taiwo, T.A., Anitescu, M., et al., 2006. Nuclear data sensitivity, uncertainty and target accuracy assessment for future nuclear systems. Ann. Nucl. Energy 33 (8), 700–733.
Arbanas, G., Williams, M.L., Leal, L.C., Dunn, M.E., Khuwaileh, B.A., Wang, C., Abdel-Khalik, H., 2015. Advancing inverse sensitivity/uncertainty methods for nuclear fuel cycle applications. Nucl. Data Sheets 123, 51–56.
Avramova, M.N., 2009. CTF: A Thermal Hydraulic Sub-channel Code for LWR Transient Analyses, User's Manual. Pennsylvania State University, Department of Nuclear Engineering.
Byrd, R.H., Gilbert, J.C., Nocedal, J., 2000. A trust region method based on interior point techniques for nonlinear programming. Math. Program. 89 (1), 149–185.
Cacuci, D.G. (Ed.), 2010. Handbook of Nuclear Engineering, Vol. 1: Nuclear Engineering Fundamentals; Vol. 2: Reactor Design; Vol. 3: Reactor Analysis; Vol. 4: Reactors of Generations III and IV; Vol. 5: Fuel Cycles, Decommissioning, Waste Disposal and Safeguards. Springer Science & Business Media.
Constantine, P.G., Dow, E., Wang, Q., 2014. Active subspace methods in theory and practice: applications to kriging surfaces. SIAM J. Sci. Comput. 36 (4), A1500–A1524.
Godfrey, A.T., 2013. VERA Core Physics Benchmark Progression Problem Specifications. Oak Ridge National Laboratory, CASL-U-2012-0131-004.
Han, S.P., 1977. A globally convergent method for nonlinear programming. J. Optim. Theor. Appl. 22 (3), 297–309.
Khuwaileh, B., 2015. Scalable Methods for Uncertainty Quantification, Data Assimilation and Target Accuracy Assessment for Multi-Physics Advanced Simulation of Light Water Reactors (Order No. 10110742). Available from ProQuest Dissertations & Theses Global (1798874988). Retrieved from: https://search.proquest.com/docview/1798874988?accountid=42604.
Khuwaileh, B.A., Abdel-Khalik, H.S., 2015. Subspace-based inverse uncertainty quantification for nuclear data assessment. Nucl. Data Sheets 123, 57–61.
Khuwaileh, B.A., Turinsky, P.J., 2017. Surrogate based model calibration for pressurized water reactor physics calculations. Nucl. Eng. Technol. 49 (6), 1219–1225.
Khuwaileh, B.A., Wang, C., Bang, Y., Abdel-Khalik, H.S., 2015. Efficient Subspace Construction for Reduced Order Modeling in Reactor Analysis (No. JAEA-CONF-2014-003).
Lagarias, J.C., Reeds, J.A., Wright, M.H., Wright, P.E., 1998. Convergence properties of the Nelder–Mead simplex method in low dimensions. SIAM J. Optim. 9 (1), 112–147.
MPACT Team, 2013. MPACT Theory Manual, Version 1.0. University of Michigan, Ann Arbor, Michigan, Oct. 2013.
Palmiotti, G., Salvatores, M., Assawaroongruengchot, M., Herman, M., Oblozinsky, P., Mattoon, C., Pigni, M., 2010. Nuclear data target accuracies for Generation-IV systems based on the use of new covariance data. In: Proc. Int. Conf. ND2010, International Conference on Nuclear Data for Science and Technology, April 2010.
Palmtag, S., Clarno, K., Davidson, G., Salko, R., Evans, T., Turner, J., Schmidt, R., 2014. Coupled neutronics and thermal-hydraulic solution of a full-core PWR using VERA-CS. In: Proceedings of the International Topical Meeting on Advances in Reactor Physics (PHYSOR), Kyoto, Japan, September 2014.
Proctor, W.C., 2012. Elements of High-Order Predictive Model Calibration Algorithms with Applications to Large-Scale Reactor Physics Systems. North Carolina State University.
Queipo, N.V., Haftka, R.T., Shyy, W., Goel, T., Vaidyanathan, R., Tucker, P.K., 2005. Surrogate-based analysis and optimization. Prog. Aero. Sci. 41 (1), 1–28.
Rochman, D., Leray, O., Hursin, M., Ferroukhi, H., Vasiliev, A., Aures, A., et al., 2017. Nuclear data uncertainties for typical LWR fuel assemblies and a simple reactor core. Nucl. Data Sheets 139, 1–76.
Salvatores, M., Jacqmin, R., 2008. Uncertainty and Target Accuracy Assessment for Innovative Systems Using Recent Covariance Data Evaluations. Nuclear Energy Agency Report.
SCALE, 2009. A Modular Code System for Performing Standardized Computer Analyses for Licensing Evaluation, ORNL-TM/2005/39, Version 6, Vols. I–III. Oak Ridge National Laboratory, Oak Ridge, Tenn.
Usachev, L.N., Bobkov, Y.G., 1972. Planning an Optimum Set of Microscopic Experiments and Evaluation to Obtain a Given Accuracy in Reactor Parameter Calculations (No. INDC (CCP)-19/U). International Nuclear Data Committee.