13 Verification and validation of system thermal-hydraulic computer codes, scaling and uncertainty evaluation of calculated code results

H. Glaeser, GRS, Garching, Germany
Abstract

The approximations of system thermal-hydraulic (SYS TH) codes necessitate proper verification and validation (V&V) processes. Validation against complex phenomena can only be performed once relevant experimental data are available. Since many experiments used to validate a code are at a small scale compared with a nuclear reactor, scaling effects have to be considered; addressing the scaling issues must therefore be part of the validation activity. Deviations between calculation results and experimental data are observed. Consequently, various sources of uncertainty have been identified, and uncertainty methods for their quantification have been developed. International programs have highlighted the validation, scaling, and qualification of uncertainty methods. The chapter discusses V&V, scaling, and the evaluation of uncertainty, including the effects of the code users.
Nomenclature

AEA/AEAT  Atomic Energy Agency/Technology (UK)
AEKI  Atomic Energy Agency (Hungary)
ANS  American Nuclear Society
BEMUSE  Best Estimate Methods—Uncertainty and Sensitivity Evaluation
BWR  boiling water reactor
CCFL  counter-current flow limitation
CEA  Commissariat à l'Energie Atomique (France)
CFD  computational fluid dynamics
CFR  Code of Federal Regulations
CIAU  code with the capability of internal assessment of uncertainty
CSAU  code scaling, applicability, and uncertainty
CSNI  Committee on the Safety of Nuclear Installations
DVI  direct vessel injection (line)
DSA  deterministic safety analysis
Thermal Hydraulics in Water-Cooled Nuclear Reactors. http://dx.doi.org/10.1016/B978-0-08-100662-7.00013-0 © 2017 Elsevier Ltd. All rights reserved.
ECC  emergency core cooling
ECCS  emergency core cooling systems
EMDAP  evaluation model development and assessment process
ENUSA  Empresa Nacional del Uranio, SA (Spain)
FCM  fractional change metric
FFTBM  fast Fourier transform-based method
FRC  fractional rate of change
FSA  fractional scaling analysis
GRS  Gesellschaft für Anlagen- und Reaktorsicherheit (Germany)
H2TS  hierarchical two-tiered scaling
IAEA  International Atomic Energy Agency
IPSN  Institut de Protection et de Sûreté Nucléaire (France)
IRSN  Institut de Radioprotection et de Sûreté Nucléaire (France)
IRWST  in-containment refuelling water storage tank
ISP  International Standard Problem
IT  integral test
ITF  integral test facility
I/O  input/output
JNES  Japan Nuclear Energy Safety Organization
KAERI  Korean Atomic Energy Research Institute
KINS  Korean Institute for Nuclear Safety
LB  large break
LOCA  loss of coolant accident
LWR  light water reactor
NPP  nuclear power plant
NRC  Nuclear Regulatory Commission (USA)
NRI  Nuclear Research Institute (Czech Republic)
OECD  Organization for Economic Cooperation and Development
PDF  probability density function
PIRT  phenomena identification and ranking table
PREMIUM  Post-BEMUSE REflood Model Input Uncertainty Methods
PSI  Paul Scherrer Institute (Switzerland)
PWR  pressurized water reactor
QAM  quantity accuracy matrix
SB  small break
SET  separate effects test
SYS TH  system thermal-hydraulics
UMAE  uncertainty method based on accuracy extrapolation
UMS  uncertainty method study
UNIPI  University of Pisa (Italy)
UPC  University of Barcelona (Spain)
USNRC  United States Nuclear Regulatory Commission (see also NRC, USA)
V&V  verification and validation
WWER  water-cooled and water-moderated energy reactor
Symbols

J*_X = ρ_X^(1/2) Ṁ_X / {ρ_X A_DC [g W (ρ_W − ρ_s)]^(1/2)}
    Wallis number = dimensionless superficial velocity based on downcomer annulus circumference

K*_X = ρ_X^(1/2) Ṁ_X / {ρ_X A_DC [g σ (ρ_W − ρ_s)]^(1/4)}
    Kutateladze number = dimensionless superficial velocity based on surface tension
A_DC  flow cross section of downcomer
Bo  Bond number
d_av  downcomer average diameter
d_outer  downcomer outer diameter
Fr  Froude number
g  gravity
L  characteristic geometrical length
M  mass flow rate
W  average downcomer annulus circumference
θ_ECC-BCL  angle between ECC injection leg and broken loop
ν  kinematic viscosity
ρ  density
σ  surface tension
Subscripts

g  vapor
l  liquid
s  steam
w  water
x  either steam or water
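As an illustration, the dimensionless superficial velocities defined above can be evaluated with a short helper. This is a minimal sketch: the function names and all numerical values below are illustrative assumptions, not data from any facility.

```python
import math

def wallis_number(m_dot, rho_x, a_dc, w, rho_w, rho_s, g=9.81):
    """Dimensionless superficial velocity J*_X based on the
    downcomer annulus circumference W (Wallis scaling)."""
    j_x = m_dot / (rho_x * a_dc)                      # superficial velocity of phase X
    return math.sqrt(rho_x) * j_x / math.sqrt(g * w * (rho_w - rho_s))

def kutateladze_number(m_dot, rho_x, a_dc, sigma, rho_w, rho_s, g=9.81):
    """Dimensionless superficial velocity K*_X based on surface
    tension (Kutateladze scaling)."""
    j_x = m_dot / (rho_x * a_dc)
    return math.sqrt(rho_x) * j_x / (g * sigma * (rho_w - rho_s)) ** 0.25

# Illustrative values only: steam flowing upward in a downcomer annulus
j_star = wallis_number(m_dot=50.0, rho_x=5.0, a_dc=1.5,
                       w=10.0, rho_w=750.0, rho_s=5.0)
k_star = kutateladze_number(m_dot=50.0, rho_x=5.0, a_dc=1.5,
                            sigma=0.02, rho_w=750.0, rho_s=5.0)
```

The two numbers differ only in the length scale used to nondimensionalize the superficial velocity: the annulus circumference W for the Wallis form, the capillary length (through σ) for the Kutateladze form.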
Chapter foreword

Validation (and verification, i.e., what is included in the acronym V&V), scaling, and uncertainty constitute crosscutting issues from development to application in system thermal-hydraulics (SYS TH). The unavoidable approximations introduced at the development level of any code model, including the lack of direct consideration of turbulence, the averaging, and the application of theorems from continuum mechanics to noncontinuum two-phase flows, impose a validation process. The difficulty of performing experiments (even those used for validation) at the same conditions (size, geometry, pressure, and power) as the prototype nuclear power plants (NPPs) makes it necessary to deal with the scaling issue. Differences between measured and calculated values of transient variables, and consequently expected deviations in predicting accident scenarios in NPPs, are the reason to deal with uncertainty. Thus, this chapter is connected with Chapters 5, 9, and 10 of the book, where modeling features are discussed, in addition to Chapters 7 and 8. The chapter is also connected with Chapter 11, where system code validation, scaling, and uncertainty
analysis are already mentioned. The validation procedures described in this chapter are based on the phenomena described in Chapter 6 of the book. Finally, validation, scaling, and uncertainty constitute very basic and important elements of the best estimate plus uncertainty (BEPU) approach described in Chapter 14 and should be considered when interpreting the results of calculations presented in Chapter 15 of the book. The reader may recognize that validation, scaling, and uncertainty are linked with each other and need suitable understanding prior to the application of thermal-hydraulic models and codes to the evaluation of the safety of nuclear installations.
13.1 Introduction
This chapter deals with important steps to be taken before performing safety analyses using thermal-hydraulic computer codes that calculate the physical behavior of the steam-supply system of a NPP. Verification and validation are the means to prove the quality of a computer code. Since many of the experiments used to validate a code are at a small scale compared with a nuclear reactor, scaling effects have to be considered. Since initial and boundary conditions, geometry, reactor plant operating parameters, fuel parameters, and scale effects may not be exactly known, and the SYS TH computer codes approximate the physical behavior, the code predictions are not exact but uncertain. Agreement of calculated results with experimental data is often obtained by choosing specific code input options or changing parameter values in the model equations. These selected parameter values usually have to be changed again for different experiments in the same facility, or for a similar experiment in a different facility, in order to obtain agreement with data. To make calculation results useful, for example for comparison with acceptance limits, the uncertainty in the predictions has to be calculated separately. Therefore, uncertainty analysis methods were developed to estimate safety margins when best estimate codes are used to justify reactor operation. These uncertainty analyses are used more and more instead of the former conservative analysis methods, which use conservative computer models and/or conservative initial and boundary conditions with the intention of covering all existing uncertainties. This chapter describes verification, validation, scaling, and uncertainty evaluation together because of their strong interrelation in performing safety analyses.
13.2 Verification and validation of thermal-hydraulic system computer codes
Evaluation of nuclear reactor safety is based on deterministic safety analysis (DSA) and probabilistic safety analysis (PSA). In order to perform DSA, properly qualified computational tools are needed. Verification deals with the question “are the correct
equations implemented and solved correctly?” while validation deals with the question “are the calculated results in agreement with experimental results?” Among the computational tools, the current emphasis is on the SYS TH codes. These codes consist of three major modules: the input/output (I/O) module, the governing equation solver, and closure models. The governing equation solver consists of the following three key modules:
– Thermal-hydraulic (or fluid) balance equations solved in the fluid domains
– Fourier equations for heat conduction in solid materials
– Neutron diffusion equations
Parts of the system code are the “special models,” which supplement the capabilities of the various sets of equations. Deviations in the prediction of the same scenario by the same code used by different users may come from the different options and input parameters selected by these users. This book focuses on water-cooled nuclear reactors, including research reactors, heavy water-cooled/moderated reactors, and the graphite-moderated boiling reactors designed and operated in Russia. Any calculation method or tool, including computer codes, applied within nuclear reactor safety shall undergo proper verification and validation (V&V). The quality of a calculation depends upon: (a) the quality of the code, (b) the quality of the nodalization, (c) the adequacy of the adopted computer-compiler installation, (d) the quality of the input data, and (e) the qualification level of the code user. Furthermore, any calculation by a best estimate SYS TH code should be supported by an uncertainty evaluation, specifically when results are relevant to safety or licensing. Care should also be taken that scaling-related qualification is performed as part of the validation. Qualitative and quantitative accuracy evaluation, including identification and ranking of thermal-hydraulic phenomena, is also considered in performing verification and validation.
13.2.1 Verification of computer codes

Verification of computer codes is performed within the code development process to show that the code behaves as intended, i.e.:
– that it is a proper mathematical representation of the conceptual model or the design
– that the equations are correctly encoded and solved
– that it corresponds to the description in the documentation
The verification is performed to demonstrate that the design of the code numerical algorithms conforms to the design requirements, that its logic is consistent with the design specification, and that the source code conforms to programming standards and language standards. Physical phenomena and processes to be simulated are represented by a set of equations and closure relations. Verification of computer codes consists of: (1) the verification of the code design or concept and (2) the verification of the source code (ANS, 1987; IAEA, 2009, 2015).
In general, the verification of the code design is performed to ensure that the numerical methods, the transformation of the numerical equations into a numerical scheme to provide solutions, and user options and their restrictions are appropriately implemented in accordance with the design requirements. The verification of the code design includes a review of the design concept, basic logic, flow diagrams, numerical methods, algorithms, and computational environment. If the code is run on a hardware or software platform other than that on which the verification process was carried out, the continued validity of the code verification should be assessed (portability). When the code design contains the integration or coupling of codes, then the verification of the code design should ensure that the links and/or interfaces between the codes are correctly designed and implemented to meet the design requirements. Verification of the source code is performed to demonstrate that it conforms to programming standards and language standards, and that its logic is consistent with the design specification. The comparison with measured values is not part of the verification; such a process is included in the validation process. Practical instruments to verify the source code are review, inspection, and audit. Checklists may be provided for review and inspection. Examples for checklists for reviews and inspections of verification of the code design as well as the source code are given in ANS (1987) and IAEA (2015). Comparisons with independent calculations are carried out where practicable to verify that the mathematical operations are performed correctly. Audits will be performed on selected items to ensure quality. A review and inspection of the entire code may not be practicable for verification purposes due to its large size. 
In such cases, verification of individual modules or parts of the code can be conducted, and this should include a careful inspection of all interfaces between the modules. Any noncompliance of the code or its documentation with the design requirements should be reported to and should be corrected by the code developer. The impact of such noncompliances on results of analyses that have been completed before the correction and used as part of the safety assessment for a plant should be assessed.
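One practical verification instrument mentioned above is comparison with independent or analytical solutions. The sketch below illustrates this idea for a numerical scheme the governing equation solver contains anyway: an explicit finite-difference solver for the one-dimensional Fourier heat conduction equation, checked against a known exact solution via a grid-refinement study. All names and values are illustrative assumptions, not part of any specific SYS TH code.

```python
import numpy as np

def solve_heat_ftcs(alpha, length, t_end, nx):
    """Explicit (FTCS) finite-difference solution of the 1D Fourier
    conduction equation u_t = alpha * u_xx, u = 0 at both ends."""
    dx = length / nx
    dt = 0.25 * dx ** 2 / alpha           # stable: dt <= 0.5 * dx^2 / alpha
    nt = int(np.ceil(t_end / dt))
    dt = t_end / nt                       # land exactly on t_end
    x = np.linspace(0.0, length, nx + 1)
    u = np.sin(np.pi * x / length)        # initial temperature profile
    for _ in range(nt):
        u[1:-1] += alpha * dt / dx ** 2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return x, u

def exact(alpha, length, t_end, x):
    """Analytical solution: the initial sine mode decays exponentially."""
    return np.exp(-alpha * (np.pi / length) ** 2 * t_end) * np.sin(np.pi * x / length)

def observed_order(alpha=1.0, length=1.0, t_end=0.1):
    """Grid-refinement study: the max-norm error should shrink about
    4x when dx is halved, i.e., the observed order should be near 2."""
    errors = []
    for nx in (20, 40):
        x, u = solve_heat_ftcs(alpha, length, t_end, nx)
        errors.append(np.max(np.abs(u - exact(alpha, length, t_end, x))))
    return float(np.log2(errors[0] / errors[1]))
```

An observed convergence order far from the theoretical one is exactly the kind of noncompliance with the design requirements that this verification step is meant to expose.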
13.2.2 Validation of computer codes

Validation is the process carried out by comparing code calculation results with experimental measurements or, if available, measurements in a reactor plant. A code or code model is considered validated when sufficient testing has been performed to ensure an acceptable level of predictive accuracy over the range of conditions over which the code may be applied. Accuracy is a measure of the difference between measured and calculated quantities, taking into account uncertainties and biases in both. Bias is a measure, usually expressed statistically, of the systematic difference between a true mean value and a predicted or measured mean. Uncertainty is a measure of the scatter in experimental or predicted data. The acceptable level of accuracy is judgmental and will vary depending on the specific problem or question to be addressed by the code. The procedure for specifying, qualitatively or quantitatively, the accuracy of code predictions is also called code qualification or code assessment.
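The distinction between bias and uncertainty drawn above can be made concrete with a short sketch: the systematic part of the calculated-minus-measured differences is the bias, and their scatter is the uncertainty. The function and the data below are illustrative assumptions, not part of any particular assessment procedure.

```python
import numpy as np

def bias_and_scatter(measured, calculated):
    """Bias = mean systematic difference (calculated - measured);
    scatter = sample standard deviation of those differences."""
    d = np.asarray(calculated, dtype=float) - np.asarray(measured, dtype=float)
    return d.mean(), d.std(ddof=1)

# Illustrative only: measured vs calculated peak cladding temperatures (K)
measured = [1050.0, 1120.0, 990.0, 1075.0]
calculated = [1068.0, 1150.0, 1002.0, 1081.0]
bias, scatter = bias_and_scatter(measured, calculated)  # bias = 16.5 K here
```

Judging whether such a bias and scatter are acceptable remains the judgmental step described in the text; the computation only supplies the numbers.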
13.2.2.1 Validation process

Validation is mandatory for all computer codes that are used for the DSA of NPPs. The purpose of validation is to provide confidence in the ability of a code to predict, realistically or conservatively, the values of the safety parameter or parameters of interest. Analysis methods that are used in safety demonstrations of the fulfillment of the regulatory acceptance criteria must be validated for their respective scope of application. If calculation methods are used for analyzing the effectiveness of preventive or mitigative accident management measures, these shall also be validated for their respective scope of application. The major sources of information that should be used to assess the quality of computer code predictions are analytical solutions, experimental data, NPP transients, and benchmark calculations, i.e., code-to-code comparisons. Such code-to-code comparisons can be used for validation purposes provided that at least one of the codes has been validated. For complex analyses, it is recommended to perform the validation process in two steps: the development phase, in which the assessment is performed by the code developer, and the independent validation phase, in which the validation is done by someone who is independent of the developer of the code. Both phases are necessary for an adequate validation procedure. If possible, it is recommended that the data used for the independent validation of the code and the data used for the validation by the code developers be selected from different experiments. Where possible, users should do validation exercises without having any prior knowledge of the experimental results, to preclude any deliberate tuning of code calculations to yield better agreement with experimental results. Validation exercises are a very good means for users of a code to gain experience in applying the code.
The validation process should ideally include four different types of test calculation:
(1) Basic tests. Basic tests are simple test cases that may not be directly related to a NPP. These tests may have analytical solutions or may use correlations or data derived from experiments.
(2) Separate effects tests (SETs). Separate effects tests address specific phenomena that may occur at a NPP but do not address other phenomena that may occur at the same time. SETs should ideally be performed at full scale. In the absence of analytical solutions or experimental data, other codes that are known to model accurately the limited physics represented in the test case may be used to determine the accurate solution.
(3) Integral tests (ITs). Integral tests are test cases that are directly related to the behavior of the system of a NPP. All or most of the relevant physical processes as well as their interactions are represented. However, these tests are usually carried out at a reduced scale, may use substitute materials, or may be performed at low pressure.
(4) NPP level tests and operational transients. Nuclear power plant level tests are performed on an actual NPP. Validation through operational transients, together with NPP tests, is an important means of qualifying the plant model.
The validation tests should ideally cover the entire range of values of parameters, conditions, and physical phenomena that the code is intended to cover. The scope of the independent validation exercise performed by the code user should be consistent with the intended purpose of the code and the range of experimental data available. The scope of validation should also be in accordance with the complexity of the code and the complexity of the physical processes that it represents. The code user should also evaluate, qualitatively or quantitatively, the accuracy of the results of the calculations. For complex applications, a validation matrix should be available for code validation, because a code may predict one set of test data with a high degree of accuracy but may be extremely inaccurate for other data sets. The validation matrix should include test data from different experimental facilities at different scales and from different sets of conditions in the same facility. It should ideally include basic tests, SETs, ITs, and NPP level tests. If sufficient data from full-scale experiments are not available, data from reduced-scale experiments will be used, with appropriate consideration of scaling (see Section 13.3). Internationally agreed integral test facility (ITF) matrices for the validation of realistic thermal-hydraulic system computer codes were established in the OECD/NEA/CSNI validation matrix (CSNI, 1996), containing a total of 177 PWR- and BWR-specific ITs selected as a potential source for thermal-hydraulic code validation. ITF development is mainly for pressurized water reactors (PWRs) and boiling water reactors (BWRs). A separate activity addressed Russian pressurized water-cooled and water-moderated energy reactors (WWER) (CSNI, 2001). First, the main physical phenomena that occur during the considered accidents are identified, test types are specified, and test facilities suitable for reproducing these aspects are selected.
Second, a list of selected experiments carried out in these facilities has been set down. The criteria to achieve the objectives are outlined. The SETs are designed to validate specific phenomena of the whole physical model independently from the others. SETs may address:
– A specific basic flow process (flashing, critical flow, direct contact condensation, counter-current flow limitation (CCFL), lower plenum voiding, etc.)
– A specific closure term for a given flow or for a given range of flow parameters
– The specific behavior of a specific reactor component (separator, break, valve, loop seal, etc.); this is called a component test
– A specific flow process occurring in a specific reactor component during a selected time span of an accidental transient, identified as a phenomenological phase
Due to the variety of models, flow regimes, and geometrical configurations, a SET validation matrix has a large number of tests. The OECD/NEA/CSNI SET matrix (CSNI, 1994) contains 67 thermal-hydraulic phenomena and 1094 tests from 185 test facilities as a potential source for thermal-hydraulic code validation. Further experiments have since been performed to investigate different reactor designs. The test matrix used for validating a specific computer code does not need to include all the experiments contained in the international validation matrices. The number of tests used should be justified as being sufficient for
the intended application of the code. The range of validity and the limitations of a computer code, which are established as a result of validation, should be documented in a validation report which should be referenced in licensing documentation. However, code users may perform additional validation to demonstrate that the code satisfies the objectives for their specific application. This, however, necessitates the availability of experimental data as well as experimental documentation or the results of NPP data, like start-up tests. When experiments are used for validation of the computer codes, care should be exercised when planning an experiment to ensure that the measured data are as suitable as possible for the purpose of code validation. The safety parameters that will be calculated using the code should be considered when the experiment and its instrumentation are planned. To ensure that the code is validated for conditions that are as close as possible to those in a NPP, it should be ensured that the boundary conditions and initial conditions of the test are appropriate. Consideration should be given to scaling laws. A scaled experimental facility cannot be used to represent all the phenomena that are relevant for a full size facility (see Section 13.3). Thus, for each scaled facility that is used in the assessment process, the phenomena that are correctly represented and those that are not correctly represented should be identified. The effects of phenomena that are not correctly represented should be addressed in other ways. The uncertainty in the experimental data should be reported in the documentation of the experiment. When performing a validation against experimental data, allowance for errors in the measurements should be included in the determination of the uncertainty of the computer code.
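A validation matrix is, in essence, a cross-reference between phenomena and the tests that exercise them, and the phenomena that a given scaled facility does not represent correctly show up as coverage gaps. The toy sketch below illustrates how such gaps can be flagged; the phenomenon and test names are made-up placeholders, not entries from the CSNI matrices.

```python
# Hypothetical mini validation matrix: phenomenon -> supporting tests.
validation_matrix = {
    "counter-current flow limitation": ["ITF-A test 12", "SET-B test 3"],
    "critical flow at the break": ["SET-C test 7"],
    "lower plenum voiding": [],  # no supporting test yet
}

def uncovered_phenomena(matrix):
    """Phenomena with no supporting test: gaps that must be justified
    or addressed in other ways before applying the code to scenarios
    in which they are relevant."""
    return sorted(p for p, tests in matrix.items() if not tests)
```

Scanning the matrix this way makes the justification demanded above ("sufficient for the intended application") an explicit, checkable statement rather than an implicit one.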
13.2.2.2 Qualification of input data

The input data for a computer code include a representation of all or parts of the NPP. There is usually a degree of flexibility in how the plant is modeled or nodalized. The intention is to be flexible in calculating different reactor types and different scenarios. The input data that are used to perform safety assessment calculations should conform to the best practice guidelines for using the computer code as described in the user manual and should be independently checked. The input data should be a compilation of actual information found in as-built and valid technical drawings, operating manuals, procedures, set point lists, pump performance charts, process diagrams and instrumentation diagrams, control diagrams, etc. Users who prepare input data to model validation tests should be suitably qualified and should make use of all available guides. These include the specific code user guide, generic best practice guides for the type of code, and guidance from experienced users. The validation process itself often enables the determination of best practices. This may include nodalization schemes, model options, solution methods, and mesh size. Those who generate input data sets for safety analysis calculations should use the best practice guidelines established during the validation process.
13.2.2.3 Qualitative and quantitative assessment of validation calculations

A realistic or “best estimate” computer code or analysis method may be deemed validated if the applicability and sufficient accuracy of the code have been demonstrated for the respective application within the framework of the validation performed and documented. Qualitative judgment of agreement between the code calculation and the data is generally divided into four categories:
– Excellent agreement: The calculation lies within or near the data uncertainty band at all times during the phase of interest.
– Reasonable agreement: The calculation sometimes lies within the data uncertainty band and shows the same trends as the data. Code deficiencies are minor.
– Minimal agreement: Significant code deficiencies exist. Some major trends and phenomena are not predicted. Incorrect conclusions may be drawn based on the calculation when data are not available.
– Unacceptable agreement: A comparison is unacceptable when a significant difference between calculation and data is present, and the difference is not understood. Such a difference could follow from errors in either the calculation or the presentation of the data, or from an inadequate code model of the phenomenon.
A code is considered to be of adequate applicability when it shows either excellent or reasonable agreement with the highly ranked phenomena, sometimes identified as the dominant phenomena, for a transient of interest. If the code gives minimal or unacceptable agreement, additional work must be performed, ranging from additional code development to additional analysis to understand the phenomena. Quantitative judgment of agreement between code results and data can be performed in different ways. One accuracy quantification study was performed by the Japan Atomic Energy Research Institute when comparing the submissions to International Standard Problem No. 26 (ISP-26), which calculated a 5% cold leg break experiment on the LSTF facility, LSTF-SB-CL-18. Seventeen time-averaged deviations between selected calculated and measured results were evaluated for the different participants:

|X_cal − X_exp| / X_exp < limit

The limits chosen for ISP-26 (LSTF-SB-CL-18) were, for example:
good agreement: limit = 0.1
acceptable agreement: limit = 0.2
poor agreement: limit > 0.2
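The criterion above can be sketched in a few lines. The time-averaging scheme below (a plain mean of the pointwise relative deviations) is an assumption for illustration, since the exact averaging used in the ISP-26 study is not given here; only the limit values come from the text.

```python
import numpy as np

def time_averaged_deviation(x_cal, x_exp):
    """Time-averaged relative deviation |Xcal - Xexp| / Xexp
    (simple mean over the sampled time points; illustrative)."""
    x_cal = np.asarray(x_cal, dtype=float)
    x_exp = np.asarray(x_exp, dtype=float)
    return float(np.mean(np.abs(x_cal - x_exp) / np.abs(x_exp)))

def classify(deviation):
    """Agreement categories using the ISP-26 (LSTF-SB-CL-18) limits."""
    if deviation < 0.1:
        return "good"
    if deviation < 0.2:
        return "acceptable"
    return "poor"
```

The paragraph that follows explains why such a simple metric never became a general standard: among other things, it is sensitive to which variables and which time span are included.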
Such an accuracy quantification method has not been used generally, because the results depend on the number and type of included variables and on the chosen time span, error compensation may not be detected, erroneous initial and boundary conditions may not be taken into account, and there is a lack of information about qualitatively correct trends in the calculation.
The University of Pisa has proposed another accuracy quantification method, the fast Fourier transform-based method (FFTBM) with imposed acceptability limits (Ambrosini et al., 1990). The FFTBM has been developed to perform quantitative accuracy evaluation of the results of SYS TH codes. The idea is to move the comparison between the measured and the calculated results from the time domain to the frequency domain, in order to obtain a figure of merit independent of the time duration of the scenario. The main FFTBM output is the average accuracy related to an individual variable or the total accuracy related to a set of variables. Kunz et al. (2002) discuss the comparison of the FFTBM with other methods suitable for quantifying accuracy, and Prosek et al. (2005) present application results. For conservative safety analysis methods used in licensing applications of NPPs, experimental results should be bounded in the direction of the regulatory acceptance criteria, like the peak cladding temperature (PCT) limit of 1200°C for loss of coolant accidents (LOCAs). “Conservative” is understood here as unfavorable with respect to the acceptance criteria for safety parameters; that means the result of a conservative code should always be closer to the acceptance criterion than the realistic value.
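The core of the FFTBM figure of merit can be sketched as below: the spectral magnitude of the prediction error, normalized by the spectral magnitude of the experimental signal. This is a simplified illustration only; the full method involves additional elements (such as a frequency cutoff and weighted combination over many variables) that are not reproduced here.

```python
import numpy as np

def average_amplitude(x_exp, x_cal):
    """Simplified FFTBM-style average amplitude: spectral magnitude of
    the error normalized by that of the experimental signal.
    0 for a perfect prediction; larger values mean worse accuracy."""
    x_exp = np.asarray(x_exp, dtype=float)
    x_cal = np.asarray(x_cal, dtype=float)
    err_spectrum = np.abs(np.fft.rfft(x_cal - x_exp))
    exp_spectrum = np.abs(np.fft.rfft(x_exp))
    return float(err_spectrum.sum() / exp_spectrum.sum())
```

Because the measure lives in the frequency domain, it does not depend on where in the transient a deviation occurs, only on its magnitude and frequency content, which is what makes the figure of merit independent of the scenario duration.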
13.2.2.4 International Standard Problems

A special and challenging case of validation activity occurs during an International Standard Problem (ISP), where comparison exercises are performed between different participants. Following the definitions generally used in the CSNI, two kinds of comparison exercises can be organized between participants for code calculations: benchmarks and ISPs. A benchmark exercise corresponds to calculations of a physical case defined theoretically, for which no experimental reference exists. The comparisons are performed between the participants' calculations, but there is no way to decide which one is really the best. In an ISP, the calculations are centered on experimental results. Comparisons between the participants are still part of the exercise, but the final quality of the predictions is judged on the basis of the experimental results themselves, which consequently play a key role. As a consequence, the specifications relative to the experimental results should follow detailed requirements in the ISP process. The mode of distribution of the experimental data also becomes very important. A specific report, CSNI Report 17 (CSNI, 2004), discusses all these points and provides recommendations on each of them. Of the 52 ISPs performed within the CSNI between 1973 and 2012, 26 were from primary and secondary SYS TH and six from containment thermal-hydraulics. The others were from fuel behavior during LOCA (2) and severe accidents (19); one proposal was cancelled. Brief descriptions of all CSNI Standard Problems up to ISP 43 can be found in CSNI (2000); further reports about CSNI Standard Problems are available on the CSNI web site www.oecd-nea.org/nsd/docs. The recommendations on the ISP specifications emphasized particularly important features, some of which are:
– The experimental facility description provided to the ISP participants should be sufficiently detailed, including the layout, drawings, and component characteristics that are necessary. Before proposing an ISP, one must be sure that no proprietary restriction applies to the required data.
– The instrumentation should be detailed: location, type, accuracy, data acquisition system, sampling, and response time should be supplied to the participants.
– The test conditions should be well known; initial and boundary conditions should be provided with sufficient detail for running the prediction calculations in a meaningful way.
One can observe that those recommendations are still topical and that partial nonobservance is, even now, sometimes the source of shortcomings in the results. The governing idea has been to put the ISP participant in the same situation as a code user performing a plant calculation to resolve a safety issue. Therefore, in the later ISPs, results from characterization tests giving, for example, pressure drops, pump characteristics, heat losses and their distributions, etc., have been added to the test-condition specifications. Furthermore, for ISPs related to actual plant transients, additional specifications were supplied, such as the timing and degree of operator interventions as well as plant auxiliary system conditions with their uncertainty bands. The distribution of test results is also a key point in the ISP performance. Different options have been used, which change the nature and the outcome of the ISP. Depending on those options, ISPs are classified as open, blind, semiblind, or double-blind. These categories are explained in more detail below:
– Open ISP: The participant gets all the experimental data from the very beginning; all information is open. The calculations are performed while the experimental results are known.
– Blind ISP: In contrast to an open ISP, the participants in a blind ISP have no access to the test results except the test boundary and initial conditions in the general meaning discussed above. Actual test results remain locked until the calculations are finished and sent to the ISP organizer. The participants perform their calculations in a blind situation, in the same way they perform calculations of plant accidents where no experimental data are available. Blind ISPs are consequently recommended because they better reflect the real conditions in which an analyst will find himself for plant analysis. Furthermore, this is reinforced by the very frequent observation that calculations are generally better when experimental results are available at the time of the calculation, due to adjusting of the computer model and/or of the input data to account for the specific test results. Calculations in a blind situation may avoid most of this kind of “tuning effect.”
In reality, even in blind situations, some tuning of the code may be obtained by using tests which have already been performed on the same experimental facility and which may be more or less similar to the one proposed for the ISP. For this reason, a distinction was made in ISPs between truly blind participants and participants who in fact had opportunities to perform extensive analyses of other tests and who could therefore not be considered as completely blind. This is also why the concept of the double-blind ISP was created.

– A double-blind ISP is an ISP for which the participant has no access to the results of the ISP test or to the results of any other tests performed on the experimental facility (except the characterization tests, considered as normal additions to the test conditions). This situation in fact represents the exact real situation of the analyst performing plant calculations. Consequently, double-blind ISPs are especially challenging to both the codes and the code users, and correspondingly more valuable in assessing code performance. However, such an ISP can only be performed in the very rare situations where no test results have been already
published, i.e., when starting a new facility. In this situation, meeting the high standard of quality requested for an ISP is considerably difficult for the organization running the facility, as it requires perfect control of the experimental procedure, which is in fact in contradiction with the learning phase generally experienced on a brand new experiment. Consequently, very few double-blind ISPs have actually been organized.
From the preceding discussion, it appears clearly that the "more blind" an ISP is, the more significant the conclusions may be. However, there are cases where the generalization of this statement is in fact an oversimplification.

– A semiblind ISP concept has been introduced, for example, when the prediction of the tests involves two types of interacting phenomena and when the uncertainties on one of these phenomena would preclude the conclusions that can be drawn on the other one. In these specific cases, it has often been decided, to preserve the ISP efficiency, that the results concerning the first phenomenon are opened, while the results concerning the second remain blind. This semiblind procedure has, for instance, been used for fuel behavior ISPs, where the thermal-hydraulic conditions were often open and the fuel results blind, so that the uncertainties in the thermal-hydraulics would not prevent drawing conclusions on the fuel.
As another example where blind ISPs do not necessarily provide the most significant conclusions, we can mention ISPs dealing with physical areas where the knowledge is at an early stage, or where the phenomena are particularly complex. In such cases, blind calculations would give unusable results, and valuable ISPs could only be open ones. In those specific areas, even if the ISP results look encouraging, it should be kept in mind that an open exercise was chosen because of the difficulties expected in blind calculations, and that consequently one could also expect similar difficulties in similar plant calculations, which will be performed by the analysts in a necessarily blind-like situation.

The comparisons between the calculations and the experiment, and between the calculations themselves, are the constitutive part of the ISPs. They provide the basis for drawing conclusions on the capabilities and the deficiencies of existing calculation tools in an international framework. It is also during this analysis that the most detailed discussions can take place between experts.
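The quantitative side of such comparisons is usually reduced to simple accuracy figures of merit for each participant's calculated time trend. As an illustrative sketch only (not a metric prescribed by the CSNI), a mean absolute error and a peak-value deviation between a measured and a calculated trend could be computed as follows; the function name and the data arrays are hypothetical:

```python
def compare_trends(t_exp, y_exp, y_calc):
    """Simple figures of merit for a calculated vs. measured time trend.

    y_exp and y_calc must be sampled at the same times t_exp.
    Returns the mean absolute error and the relative deviation of the
    peak value (positive means the calculation overpredicts the peak).
    """
    n = len(y_exp)
    mae = sum(abs(e - c) for e, c in zip(y_exp, y_calc)) / n
    peak_dev = (max(y_calc) - max(y_exp)) / max(y_exp)
    return mae, peak_dev

# Hypothetical rod-surface-temperature trends (K) at common sample times (s)
t = [0.0, 10.0, 20.0, 30.0]
exp = [600.0, 800.0, 750.0, 650.0]
calc = [600.0, 820.0, 760.0, 640.0]
mae, peak = compare_trends(t, exp, calc)
print(round(mae, 1), round(peak, 3))   # 10.0 0.025
```

In a blind ISP such figures can only be computed by the organizer after the calculations are submitted, which is precisely what makes the blind comparison meaningful.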
13.2.2.5 User effects

The user of a state-of-the-art SYS TH computer code must make many choices in setting up an input deck and running a calculation. This flexibility is intentional, so that such a code can be applied to different events and purposes. Experience from International Standard Problems shows, however, that large differences between the results of different code users applying the same code version and the same specifications (e.g., initial and boundary conditions, geometry) have been observed. These differences are considered as code user effects (CSNI, 1995). In other arenas, e.g., plant calculations, some of these elements may better be classified as code, scale, plant state, etc., uncertainties. Uncertainty analyses are supposed to quantify some of these user effects. The focus here is on quantifying the user effects of experienced code users with qualified input decks, but not exactly known values for several input parameters. Mistakes in the input deck and code deficiencies in calculating important phenomena
should not be included in these uncertainty quantifications. The main source for quantifying code model uncertainties is therefore the experience of experts during the validation process.

The development of advanced thermal-hydraulic codes was expected to decrease this effect of code users, but the last thermal-hydraulic ISPs have shown that there is still a significant "user effect" in applying these advanced codes. Detailed studies of this effect have been made on different ISPs. In addition to the identification of the user effect, ISPs have contributed largely to its understanding. ISPs provide data that are absolutely unique on this crucial subject. In recent comparisons of international applications of uncertainty evaluations, significant user effects have been seen, too.

Ways to reduce user effects have been proposed within the CSNI working groups (CSNI, 1995, 1998; Shotkin, 1997; D'Auria, 1998). User training by performing validation, mentoring, and instruction are the most effective and easiest ways to improve the quality of code predictions. Users need a good knowledge of the physical modeling and limitations of the codes, of the facility to be described, and an understanding of the major phenomena expected to occur during the transient. User guidelines that are based on the results of a systematic code validation program are also a way to reduce the user effect. They prevent an inadequate use of a code and avoid mistakes. However, these guidelines cannot give detailed recipes for all existing conditions, and they cannot substitute for a trained and experienced code user.

Another way to reduce code user effects is quality assurance. Preparation and testing of an input deck requires at least 6 months for an experienced code user. A clear quality assurance strategy is necessary; often, incomplete or error-ridden input decks are used. It is common practice to use existing input decks.
However, a warning should be issued against sharing existing decks between different users merely to save time, when those users then introduce only minor modifications without completely checking the major part of the deck and without understanding for what purpose the available input deck has been developed. Even though some ways to reduce the user effect have been proposed, we are still quite far from controlling it completely. The user effect has also appeared as a generic question, and not only in the thermal-hydraulics area where it was discovered. In particular, the several ISPs that have recently been performed in the severe accident area have shown the same importance of this effect.
13.2.2.6 Counterpart and similar tests

The development and the assessment of system codes used in the safety analysis of water-cooled reactors have to rely mainly on experimental data from scaled separate effects or integral system test facilities. Data from full-size plants are not usually available for code validation, except for mild operational transients. The typicality of the experimental data acquired in scaled test facilities is, however, often questioned due to inherent scaling distortions stemming from design and simulation constraints. In order to assure the adequacy of system codes in predicting a realistic system response, the code assessment process is thus diversified over a wide range of test data from test facilities having different scaling ratios and/or design concepts.
In addition, it is considered beneficial to verify the code scalability aspects against a set of data from tests performed in different facilities but under "similar" initial and boundary conditions. This partially separates the code assessment process from physical assumptions, emphasizing instead the influence of configurational and geometric scaling aspects. The word "similar" has, however, been the subject of many discussions in the past, and some authors tried to indicate which minimum set of initial and boundary conditions has to be preserved between the various test facilities in order to classify a series of tests as "counterpart tests." Counterpart tests are tests specified from the beginning as counterpart tests, to be performed in different test facilities with scaled initial and boundary conditions. It should be noted that some tests are more suitable than others to be the subject of a counterpart test: in particular, tests where the influence of facility distortions (e.g., heat losses) is less important. The difficulty in the specification of a counterpart test, combined with generally strict and fixed test programs, makes it difficult to have a large number of these test series. Nevertheless, looking at the integral facility test programs, it is possible to identify some tests which, although they do not meet exactly the conditions for a counterpart test, do maintain a great similarity in the initial and boundary conditions. These tests, here referred to as "similar tests", can reveal important scaling influences that would otherwise have been lost.

The most relevant counterpart test activity was performed during a time frame of about 20 years. Selecting the small break (SB) counterpart tests as an example, five ITFs and seven experiments are involved. The ITFs are:

– LOBI (simulator of a German four-loop PWR by two unequal loops)
– SPES (simulator of a US three-loop PWR by three equal loops)
– BETHSY (simulator of a French three-loop PWR by three equal loops)
– LSTF (simulator of a US four-loop PWR by two equal loops)
– PSB (simulator of a Russian four-loop VVER-1000 by four equal loops)
Four of the involved ITFs are equipped with vertical U-tube steam generators, and the fifth one, PSB, is equipped with horizontal-tube steam generators. The experiment concerned is an SB-LOCA simulating a break in one cold leg, without activation of the high-pressure injection system. A few remarkable results taken from the SB-LOCA counterpart test database (e.g., see D'Auria et al., 1994; Blinkov et al., 2005; D'Auria and Galassi, 2010) are reported in Figs. 13.1–13.3 for the system pressure, water mass inventory, and rod surface temperature. These figures were produced by the University of Pisa. The objective is to give an impression of the counterpart test database. Each of the three diagrams shows the comparison between seven experiments from five test facilities and seven calculated trends. This gives an impression of the capabilities of computational tools to calculate phenomena at different scales with a comparable level of accuracy. Some selected interesting aspects of these counterpart tests are:
– Occurrence of three dry-out conditions due to:
(a) pump seal blockage, consequential drop of the water level in the core region, and clearance of the pump seal with consequent recovery of the water level in the core region
Fig. 13.1 Upper plenum pressure of SB-LOCA counterpart test database and calculations. Courtesy of D’Auria, F., Galassi, G.M., 2010. Scaling in nuclear reactor system thermal-hydraulics. Nucl. Eng. Des. 240, 3267–3293.

Fig. 13.2 Normalized residual water mass of SB-LOCA counterpart test database and calculations. Courtesy of D’Auria, F., Galassi, G.M., 2010. Scaling in nuclear reactor system thermal-hydraulics. Nucl. Eng. Des. 240, 3267–3293.

Fig. 13.3 Heater rod surface temperature of SB-LOCA counterpart test database and calculations. Courtesy of D’Auria, F., Galassi, G.M., 2010. Scaling in nuclear reactor system thermal-hydraulics. Nucl. Eng. Des. 240, 3267–3293.
(b) water inventory loss via the SB and water inventory recovery by accumulator injection
(c) water inventory recovery by the Low Pressure Injection System (LPIS)
– For all these ITF experiments, the scaled initial power condition was considered in the design. However, only LOBI and SPES were able to simulate scaled full initial power; the others had reduced scaled initial power (10%–14%).
– Seven experiments in five ITFs are part of the SB counterpart test program. Measured data from the seven experiments fulfilled the objectives of the tests and are consistent, and discrepancies among the corresponding time trends could be explained.

However, in principle, all thermal-hydraulic phenomena are scale dependent with regard to geometry. For instance, two-phase pressure drop depends on diameter because flow regimes depend on diameter. Furthermore, a number of phenomena depending on the volume-to-surface ratio are scale dependent. Noticeable examples are found for liquid penetration (i.e., downflow) in the downcomer, or across the upper core plate in the case of hot leg injection. Counter-current flow strongly depends upon the geometric dimensions of the components; see also Sections 13.3.2 and 13.3.3 of this chapter as well as Glaeser and Karwat (1993). In small-scale experiments, liquid penetration is hindered by more intense steam-water interaction compared with large scale, where more separate steam-water flow passages are possible. Consequently, at large scale, higher ECC water penetration to the core region is possible for cooling the core. The accuracy of the predictions obtained with more than one code was independent of scaling, using as a measure the ratio of experimental and calculated values, Yexp/Ycalc, where "Y" is a generic parameter value representing a thermal-hydraulic phenomenon under investigation (D'Auria et al., 1988).
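The accuracy measure mentioned above can be sketched as a simple ratio evaluated per facility; the values below are hypothetical, for illustration only, and are not the results of D'Auria et al. (1988):

```python
def accuracy_ratio(y_exp, y_calc):
    """Ratio Y_exp / Y_calc for a generic parameter Y characterizing a
    thermal-hydraulic phenomenon; a value of 1.0 means perfect prediction."""
    return y_exp / y_calc

# Hypothetical peak cladding temperatures (K) at different facility scales
results = {
    "LOBI (1:712)":   accuracy_ratio(780.0, 800.0),
    "BETHSY (1:100)": accuracy_ratio(760.0, 750.0),
    "LSTF (1:48)":    accuracy_ratio(790.0, 785.0),
}

# Scale independence of the accuracy shows up as ratios clustering near 1
# over the whole range of facility scales.
for name, r in results.items():
    print(f"{name}: Y_exp/Y_calc = {r:.3f}")
```

A systematic drift of the ratio with facility scale would instead point to a scale-dependent modeling deficiency.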
Analyses of counterpart tests in PWR for scenarios different from SB-LOCA and performed in ITF, scaled according to time-preserving laws, can also be found in the literature (e.g., Bazin et al., 1992) dealing with natural circulation.
13.2.3 Conclusions

Verification and validation of a computer code are important activities to assure its quality. Moreover, performing validation calculations is a very good exercise for the users of a code to improve their understanding and handling of the code when setting up the input data. The results of a systematic code validation are an important basis for determining the uncertainty of the results obtained by a code calculation. In assessing the uncertainty of a code, user effects should be minimized. Different methods are available for deriving the uncertainty of code results from the validation, i.e., from comparing calculation results with experimental data. Depending on the method used for evaluating the uncertainty, the differences between calculation and experimental results are used directly, or uncertainty ranges and distributions of relevant model parameters are derived (see Section 13.4).
13.3 Scaling of thermal-hydraulic phenomena and relation to system code validation
In the last five decades, large efforts have been undertaken to provide reliable thermal-hydraulic system codes for the analyses of transients and accidents in NPPs. Many SETs and integral system tests were carried out to establish a database for code development and code validation. In this context, the question has to be answered to what extent the results of downscaled test facilities represent the thermal-hydraulic behavior expected in a full-scale nuclear reactor under accident conditions. Scaling principles, developed by many scientists and engineers, present a scientific-technical basis and give a valuable orientation for the design of test facilities. However, it is impossible for a downscaled facility to reproduce all physical phenomena in the correct temporal sequence and with the correct kind and strength of their occurrence. The designer needs to optimize a downscaled facility for the processes of primary interest. This inevitably leads to scaling distortions of other, less important processes. Due to such distortions, phenomena may dominate small-scale facility behavior that are less important at large scale. For example, due to the different surface-to-volume ratio, heat losses and wall effects are higher in small-scale facilities.

Therefore, the computer code to be applied for safety analyses is the necessary and important tool to scale physical phenomena. Consequently, a goal-oriented code validation strategy is required, based on the analyses of SETs and integral system tests as well as of transients that occurred in full-scale nuclear reactors. The CSNI validation matrices and international experimental programs are an excellent basis for fulfilling this task. Full-scale SETs, like those performed at the upper plenum test facility (UPTF), play an important role here. Because a full-scale integral test including physical interactions over the whole coolant system of an NPP is usually impossible, the scaling of the test facilities becomes a fundamental issue.
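The surface-to-volume argument can be made quantitative. Under power-to-volume scaling with preserved height, lateral dimensions shrink roughly with the square root of the volume scale, so the wetted surface per unit volume, and with it the relative heat loss, grows accordingly. A minimal sketch, assuming (purely for illustration) a simple cylindrical volume with preserved height:

```python
import math

def surface_to_volume_ratio(volume_scale):
    """Lateral surface-to-volume ratio of a cylinder, relative to the
    prototype, for power-to-volume scaling with preserved height.

    The volume shrinks by 1/volume_scale at fixed height, so the radius
    shrinks as 1/sqrt(volume_scale); lateral surface/volume = 2/r then
    grows as sqrt(volume_scale).
    """
    return math.sqrt(volume_scale)

# Relative increase of specific surface (and hence of relative heat losses)
for s in (48, 712, 2070):   # LSTF-, LOBI-, PMK-2-like volume scales
    print(f"1:{s} -> {surface_to_volume_ratio(s):.1f}x prototype S/V")
```

Even this crude estimate shows why heat-loss compensation and wall-effect corrections become dominant design concerns in strongly downscaled facilities.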
The question of scaling has to be considered not only in the design of the test facilities; it has also to be taken into account in the code development process.
The codes must describe the results from SETs and integral system tests of different scales (for example, the volume scaling of integral system tests ranges from 1:48 to 1:2070). A code model describing, for example, the counter-current flow in the downcomer of a PWR, which was developed mainly on the basis of results of the 1:1 scaled UPTF separate effects experiments, must also describe this phenomenon in down-scaled integral system tests (scale-down capability). For the full range of transients and accidents, the interactions of the code models can be checked only against the results of down-scaled integral system test facilities. On the other hand, code models developed mainly on the basis of down-scaled experiments must describe the thermal-hydraulic processes expected in the full-scale reactor plant (scale-up capability).
13.3.1 Scaling concepts

The transfer of experimental results obtained in a scaled-down test facility to reality in a large-scale industrial plant is a common challenge of engineering. A sufficient similarity of the parameters controlling the processes in the test rig and in the real plant is needed. This similarity can be geometric, mechanical, thermal-hydraulic, static, or dynamic. Several methods have been proposed to develop scaling laws. One of the best-known methods is based on the nondimensional form of the governing differential equations describing the thermal-hydraulic processes. In this method, the differential equations are made nondimensional by choosing proper reference values for the various physical quantities in the equations. From these equations, nondimensional numbers, or similarity groups, can be obtained. The derivation of the nondimensional Reynolds and Grashof numbers for steady-state single-phase flow is the simplest case (Glaeser and Wolfert, 2012). Compared with this simple example for single-phase flow, the development of scaling laws for two-phase processes is much more complicated. Not only the larger number of differential equations, but also the high number of constitutive equations describing the interfacial interactions and the interactions of the two phases with the channel wall, enormously hamper the derivation of scaling laws. Some examples of scaling laws used for the design of integral system test facilities and SET facilities can be found in Glaeser and Wolfert (2012).

To obtain information about the overall system behavior and about the interaction of the different system components of a nuclear steam-supply system, integral system tests have been performed for more than four decades. However, because a full-scale ITF is usually not feasible, the facility scaling becomes a fundamental issue. The Nahavandi scaling principle is the most popular for LOCA test facilities (Nahavandi et al., 1979, see Table 13.1).
Ishii and Kataoka (1983) presented similarity groups for two-phase flow, which can be obtained directly from the set of balance equations or from perturbation analyses. Similarity between test facility and prototype is assumed if these dimensionless numbers are identical for facility and prototype. Two typical similarity groups are shown below for illustration, where g
Table 13.2 Scaling ratios proposed by Nahavandi et al. (1979)

Quantity                        Ratio (model/prototype)
Length                          1
Time                            1
Area                            S
Volume                          S
Velocity                        1
Acceleration                    1
Material property               1
Heat generation rate/volume     1
Power                           S

S, area-model/area-prototype.
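The ratios in Table 13.1 can be collected in a small helper that converts prototype quantities to model (facility) values. A sketch only; the dictionary keys and the 3423 MWth example power are illustrative assumptions, not values from the chapter:

```python
# Nahavandi et al. (1979) scaling ratios (model/prototype), S = area ratio.
# Time, length, velocity, acceleration, material properties and power per
# unit volume are preserved; area, volume and power scale with S
# (power-to-volume scaling).
NAHAVANDI_RATIOS = {
    "length": lambda S: 1.0,
    "time": lambda S: 1.0,
    "area": lambda S: S,
    "volume": lambda S: S,
    "velocity": lambda S: 1.0,
    "acceleration": lambda S: 1.0,
    "material_property": lambda S: 1.0,
    "power_per_volume": lambda S: 1.0,
    "power": lambda S: S,
}

def to_model(quantity, prototype_value, S):
    """Scale a prototype quantity to the model facility for area ratio S."""
    return prototype_value * NAHAVANDI_RATIOS[quantity](S)

# Example: a (hypothetical) 3423 MWth prototype represented in a
# 1:48 volume-scaled facility; transient times are preserved.
print(to_model("power", 3423.0, 1.0 / 48.0))   # 71.3125
print(to_model("time", 100.0, 1.0 / 48.0))     # 100.0
```

The preserved time scale is the defining feature of this power-to-volume approach: transients in the facility evolve in real time, at the cost of distorted lateral dimensions.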
is gravitational acceleration, H is enthalpy, ρ is density, u is flow velocity, and l is a characteristic length.

Subcooling number (subcooling/latent heat):

N_sub = (ΔH_sub / ΔH_fg) · (Δρ / ρ_g)

Froude number (inertia/gravity force):

N_Fr = u² ρ / (g l Δρ)

Similarity criteria such as those given above cannot be fulfilled in total. Nevertheless, they present a scientific-technical basis and give a valuable orientation for the design of test facilities. Ishii and Kataoka (1983) presented scaling criteria specifically for the coolant loops of PWRs under single- and two-phase natural circulation conditions. Ishii et al. (1998) presented the Ishii scaling for the test facility PUMA, simulating the simplified BWR, by introducing scaling of the geometric length.

The hierarchical two-tiered scaling (H2TS) (Zuber, 1991) creates a hierarchy among scaling factors and the scaling design. The "first tier", or top-down scaling analysis, examines synergistic effects on the system caused by interactions between the constituents. A phenomena identification and ranking table (PIRT) process is applied, involving expert judgment. The "second tier", or bottom-up scaling analysis, provides similarity criteria for specific processes, such as flow pattern transitions and flow-dependent heat transfer. The focus is on developing similarity criteria to scale individual processes of importance to system behavior, as identified in the first tier. Closure equations addressing local phenomena are considered to obtain similarity criteria.

The fractional scaling analysis (FSA) proposed by Zuber et al. (2007) considers the fractional rate of change (FRC) and the fractional change metric (FCM). The first quantifies the intensity of each transfer process (agent of change), which changes a state variable by a fraction of its total change during a selected transient, and allows the ranking of the agents of change. The second is representative for scaling the fractional
change of a system state variable, like pressure. These scaled state-variable trajectories, representative for different transients, show similarities that do not show up when the same variables are represented versus time. FRC and FCM are similar to the "specific frequency" and "time ratio" in the H2TS, respectively. It is claimed that FSA can replace complex computer codes and establish a different framework for the analysis of complex scenarios. However, that claim obviously overburdens this method: FSA is a scaling analysis, not a prediction tool (Wulff et al., 2005). As stated in Wulff et al. (2005), FSA provides a quantitative hierarchy that may supersede the widely used, but subjectively generated, PIRT. FSA can be used to compare computer code simulations by an independent analytical infrastructure (D'Auria and Galassi, 2010).

To address the time dependency of scaling distortions, Reyes (2015) and Reyes et al. (2015) propose dynamical system scaling (DSS). The strategy is to convert the transport equations to a process space through a coordinate transform and to exploit the principle of covariance to derive similarity relationships between the prototype and the model. After the transformation, the target process can be expressed in process-time space as a three-dimensional phase curve, a geodesic. If similarity is established between model and prototype, these two phase curves will overlap at any moment of the transient. Any deviation of the process curves represents the deviation of the scaling as a function of time. By specifying the ratios of the conserved quantity and the process (called a 2-parameter transform), the generalized framework can be converted to a specific scaling method such as power-to-volume scaling. This generalized approach provides the benefit of identifying the distortion objectively and quantitatively at any moment of the transient.
DSS computes the scaling distortion as a function of dimensionless process time via the (flat-space) geodesic separation between the prototype and the test facility geodesic process curves. The same normalizing factor, the process action, is applied to all quantities when computing the dimensionless process time. This allows comparing the prototype and test facility process curve trajectories throughout the entire transient, which reveals that distortion is a dynamic quantity: it may increase and/or decrease as the responses evolve. The geodesic separation can then be integrated to yield a single measure which accounts for all distortion present between the two facilities, or between a code calculation and experimental results. DSS can make use of a simplified transient analysis model for optimization of an experiment over the entire duration of a transient process (Yurko et al., 2015).

An overview of different scaling approaches for the design of thermal-hydraulic integral test facilities (ITFs) is given in a published article by D'Auria and Galassi (2010). The importance of system computer codes for the scaling issue is emphasized there, too. A state-of-the-art report on scaling has collected extensive information on the topic (CSNI, 2016b).

It is impossible for a down-scaled facility to reproduce all the physical phenomena of a transient process in a real-scale plant. The designer needs to optimize a down-scaled facility for the processes of greatest interest. However, this inevitably leads to distortions of other, less important processes.
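Returning to the Ishii-type similarity groups introduced above, a similarity check between model and prototype amounts to evaluating the dimensionless groups for both and comparing them. A minimal sketch with illustrative, rounded property values (not data for any of the facilities discussed; the function names are mine):

```python
import math

def subcooling_number(dH_sub, dH_fg, rho_f, rho_g):
    """N_sub = (dH_sub / dH_fg) * ((rho_f - rho_g) / rho_g)."""
    return (dH_sub / dH_fg) * ((rho_f - rho_g) / rho_g)

def froude_number(u, rho, length, drho, g=9.81):
    """N_Fr = u**2 * rho / (g * length * drho)."""
    return u**2 * rho / (g * length * drho)

# Illustrative, rounded water/steam properties (sketch only)
rho_f, rho_g = 740.0, 36.0
drho = rho_f - rho_g

# Prototype loop vs. a reduced-height model with the same fluid: matching
# N_Fr then requires the velocity to scale with the square root of length.
l_proto, u_proto = 10.0, 2.0
l_model = 2.5
u_model = u_proto * math.sqrt(l_model / l_proto)

nfr_p = froude_number(u_proto, rho_f, l_proto, drho)
nfr_m = froude_number(u_model, rho_f, l_model, drho)
print(round(u_model, 3), abs(nfr_p - nfr_m) < 1e-9)   # 1.0 True
```

Since the model uses the same fluid at the same pressure, N_sub matches automatically here; in practice, the inability to match all groups simultaneously is exactly the distortion problem discussed above.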
H2TS, FSA, and DSS aim at the design of an experimental facility. A different objective is the application of SYS TH computer codes, with their scaling capability, to the process of determining the safety of NPPs.
13.3.2 Integral system tests

The LOBI Programme (Riebold et al., 1984) demonstrated the impact of test facility scaling very impressively. The LOBI facility was volume scaled 1:712. The tests were performed with two different gap widths of the annular downcomer. In the first test series, a downcomer gap width of 50 mm was used, which resulted in a 6.3 times too large downcomer volume and therefore in a strong distortion of the mass distribution in the scaled system. The intention was to preserve, as far as possible, counter-current flow as well as hot-wall-related phenomena during the refill period. In the second test series, a downcomer of 12 mm gap width was installed. The 12 mm was chosen as a compromise between the volume-scaled downcomer (7 mm gap width) and a downcomer which would yield the same pressure drop due to wall friction as in the reference reactor (25 mm gap width for the scaled facility). The influence of downcomer gap width and downcomer volume was shown by comparing the results. Both tests simulated a double-ended cold leg break with cold leg accumulator injection. The overall initial and boundary conditions were generally equal or directly comparable for the two tests.

No significant influence of the downcomer gap width on the SYS TH behavior occurred during the very first blowdown period, when subcooled fluid conditions persisted in the downcomer region. However, the course of the transient was strongly affected during the subsequent saturated blowdown and refill periods. Water evaporation had also started in the cold regions of the system due to the depressurization rate. The relatively higher-density fluid persisting near the core entrance and the reestablishment of positive mass flow through the core were much more pronounced in the case of the large downcomer, where the initial liquid inventory in the downcomer is about 3.6 times larger than in the case of the small downcomer.
This in turn ensured enhanced cooling of the heater rod bundle during the late blowdown. Therefore, completely different conditions existed in the primary system at the time when ECC injection from the accumulator started. Conversely, a smaller downcomer width tended to inhibit ECC water penetration and lower plenum refill, which led to near-stagnation conditions in the core and relatively poor core cooling. Extremely poor core cooling was observed in the smaller, 1:1600 scale SEMISCALE facility. In contrast to the results of the UPTF test facility with a geometrical scaling of 1:1, the SEMISCALE test results showed that penetration of ECC water into the downcomer during cold leg ECC injection was completely prevented by CCFL. The multidimensional flow in the annulus downcomer of a large PWR could not be reproduced by the small tubular downcomer of the SEMISCALE facility. No effect of cold leg ECC injection on core cooling was observed, which was misleading with respect to large-scale behavior. These examples show very clearly that the thermal-hydraulic behavior of a down-scaled ITF cannot be extrapolated directly to obtain nuclear plant behavior.
Verification and validation
853
Table 13.2 PWR integral system test facilities and main characteristics

Test facility               Country            Volume scaling      Max. power (MWth)  Pressure (MPa)
LSTF                        Japan              1:48                10.0a              16.0
LOFT                        The United States  1:50                50.0               15.5
BETHSY                      France             1:100               3.0a               17.2
PKL                         Germany            1:145               2.5a               4.5
APEX (AP1000)               The United States  1:192 (1/4 height)  1.0a               2.76
ATLAS (APR1400)             South Korea        1:288 (1/2 height)  1.96a              15.7
PSB-VVER (VVER-1000)        Russia             1:300               1.5a               20
PACTEL (VVER-440)           Finland            1:305               1.0a               8.0
PACTEL (EPR)                Finland            1:400               1.0a               8.0
SPES                        Italy              1:430               9.0                20.0
UMCP (2 hot, 4 cold legs)   The United States  1:500               0.2a               2.1
LOBI                        EU/Italy           1:712               5.4                15.5
MIST (2 hot, 4 cold legs)   The United States  1:840               0.34a              15.5
SEMISCALE                   The United States  1:1600              2.0                15.0
PMK-2 (VVER-440/213)        Hungary            1:2070              0.664              16.0

a Full power scaling not possible.
Although the facilities were designed with different volume scalings, ranging from 1:2070 to 1:48 (Table 13.2), the experimental results of these facilities do not solve the scaling problem. A combination of integral system tests with SETs at full reactor scale is indispensable. The computer codes shall be validated to calculate integral and separate effects experiments over the whole available scaling range in good agreement with the data. Therefore, the SYS TH computer code is an important scaling tool for performing safety analysis for power reactors. Another effect of scaling, concerning the geodetic height of a test facility for passive emergency core cooling systems (ECCS) relying on gravity, was observed in the APEX test facility. The design of the AP1000 reactor uses gravity-driven passive safety systems to provide emergency core cooling (ECC) and containment cooling in case of an accident for an extended period of time. The APEX test facility at Oregon State University in the USA simulates the accident behavior of an AP1000 reactor plant. The height scaling of APEX is 1:4 (see Table 13.2), which causes a drawback. The in-containment refuelling water
storage tank (IRWST) injection to the AP1000 reactor coolant system starts when the pressure in the primary system is lower than the pressure in the IRWST due to its water head. The scaling of APEX delays the IRWST injection start. The NUREG-1826 report "APEX-AP1000 Confirmatory Testing To Support AP1000 Design Certification" (nonproprietary) (NRC, 2005a) states that the IRWST injection start occurs for:
– AP1000 at reactor pressure 0.198 MPa (28.7 psia) = 10.0 m height of liquid level in the IRWST
– APEX at reactor pressure 0.125 MPa (18.2 psia) = 2.5 m (= ¼) height of liquid level in the IRWST
Such a delayed IRWST injection may have a negative effect on core cooling in the APEX tests. Core uncovery was observed in tests investigating failure of two ADS4 valves to open on the nonpressurizer side (a beyond-design-basis accident in the United States), in combination with, for example, either a double-ended direct vessel injection (DVI) line break or a 2 in. break at the bottom of a cold leg. The explanation for this behavior of the APEX test facility is its height scaling: APEX has a reduced IRWST head due to the ¼-scaled height. The timing of IRWST injection would be earlier with 1:1 height scaling.
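The quoted injection-start pressures are consistent with a simple hydrostatic balance; a minimal sketch, assuming round values for water density and atmospheric back-pressure (the liquid levels are the ones quoted from NUREG-1826):

```python
# Gravity injection from the IRWST can start once the primary pressure falls
# below atmospheric pressure plus the water column above the injection point.
# Water density and atmospheric pressure are assumed round values.

RHO_WATER = 1000.0   # kg/m^3, assumed cold water density
G = 9.81             # m/s^2
P_ATM = 0.1013e6     # Pa, assumed atmospheric back-pressure

def injection_start_pressure(level_m: float) -> float:
    """Primary pressure (MPa) below which gravity injection can start."""
    return (P_ATM + RHO_WATER * G * level_m) / 1e6

print(f"AP1000 (10.0 m level): {injection_start_pressure(10.0):.3f} MPa")
print(f"APEX   ( 2.5 m level): {injection_start_pressure(2.5):.3f} MPa")
```

The two results reproduce the 0.198 MPa and 0.125 MPa thresholds to within rounding, illustrating how the ¼-scaled height directly lowers the pressure at which passive injection begins.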
13.3.3 Separate effects tests

In addition to the integral system tests, SETs are performed to investigate particular phenomena. There are several reasons for the importance and high value of this test type. First, it has been recognized that the development of individual code models often requires some iteration, and that a model, although well conceived, may need refinement as the range of applications is widened. To establish a firm need for the modification or further development of a model, it is usually necessary to compare predictions with SET data rather than to rely on inferences from IT comparisons. Second, a key issue concerning the application of best estimate codes to LOCA and transient calculations is the quantification of the uncertainties in predicting safety-relevant parameters. Most methods for determining these uncertainties rely on assigning uncertainties to the modeling of individual phenomena. This concept has placed a new emphasis on SETs beyond that originally envisaged for model development. The advantages of SETs are:
– Clear boundary conditions
– Measurement instrumentation can be focused on a particular phenomenon
– Reduced possibility of compensating modeling errors during validation
– More systematic evaluation of the accuracy of a code model across a wide range of conditions up to full reactor plant scale
– Steady-state and transient observations are possible
A further incentive to conduct SETs in addition to experiments carried out on integral system test facilities is the difficulty encountered in the upscaling of predictions of phenomena from down-scaled ITs to real plant applications. Where a phenomenon is known to be highly scale dependent and difficult to model mechanistically, there is a strong need for conducting SETs at full scale.
Only a few selected results from experiments on the 1:1 scaled UPTF can be presented in the following sections to show some scaling effects and differences from observations in small-scale experiments.
13.3.3.1 UPTF tests

UPTF (Weiss et al., 1986, 1990; GRS, 1992, 1993) in Germany was a full-scale (1:1) representation of the primary system of a 1300 MWe PWR with a four-loop system. It was a fluid-dynamic facility especially designed for studying multidimensional two-phase flow effects in components of large volume like the downcomer, upper plenum, entrance region of the steam generator, and main coolant pipes. The steam generation in the core and the entrained water flow during emergency cooling were simulated by feeding steam and water via nozzles in the so-called core simulator. The behavior of the steam generators was simulated by a controlled removal or feed of steam and water from or to the steam generator simulator from outside. The required steam was provided by a fossil-fired power station.
13.3.3.2 Vertical heterogeneous counter-current steam-water flow in the downcomer

In order to illustrate the scaling dependence of thermal-hydraulic phenomena in the downcomer for heterogeneous vertical steam-water counter-current flow, some results are presented. That phenomenon is considered important for efficient core cooling by the ECC systems during a LOCA when a break of a pipe of the main coolant system occurs. The previous insight into downcomer phenomena during cold leg ECC injection had been developed particularly on the basis of the USNRC ECC Bypass Programme. It comprised steam-water tests performed at 1/30, 1/15, 2/15, and 1/5 length scale at the Battelle Columbus Laboratories and CREARE. Steady-state CCFL tests with steam upflow and ECC water downflow, as well as transient tests involving lower plenum flashing and two-phase upflow, were carried out. Empirical flooding correlations based on the experimental data of these down-scaled test facilities have been developed, using two dimensionless phase velocities: a modified Wallis parameter

J_X^* = \frac{\rho_X^{1/2} \dot{M}_X}{\rho_X A_{DC} \left[ g W (\rho_w - \rho_s) \right]^{1/2}}

containing W as the average downcomer annulus circumference, and the Kutateladze number

K_X = \frac{\rho_X^{1/2} \dot{M}_X}{\rho_X A_{DC} \left[ g \sigma (\rho_w - \rho_s) \right]^{1/4}}
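The two dimensionless phase velocities can be evaluated numerically; a minimal sketch, in which all geometry and fluid-property inputs are assumed round numbers (saturated steam/water at roughly 0.3 MPa), not data from a specific test:

```python
import math

# Dimensionless counter-current-flow velocities of phase X from its mass
# flow rate: the modified Wallis parameter uses the annulus circumference W
# as length scale, the Kutateladze number the capillary scale instead.

def wallis_jstar(m_dot, rho_x, a_dc, w, rho_w, rho_s, g=9.81):
    """Modified Wallis parameter J*_X."""
    j = m_dot / (rho_x * a_dc)          # superficial velocity of phase X
    return math.sqrt(rho_x) * j / math.sqrt(g * w * (rho_w - rho_s))

def kutateladze(m_dot, rho_x, a_dc, sigma, rho_w, rho_s, g=9.81):
    """Kutateladze number K_X."""
    j = m_dot / (rho_x * a_dc)
    return math.sqrt(rho_x) * j / (g * sigma * (rho_w - rho_s)) ** 0.25

# Assumed full-scale downcomer: flow area ~3 m^2, circumference ~15 m;
# steam upflow of 320 kg/s as in the UPTF test quoted later in the text.
jg = wallis_jstar(320.0, rho_x=1.65, a_dc=3.0, w=15.0, rho_w=930.0, rho_s=1.65)
kg = kutateladze(320.0, rho_x=1.65, a_dc=3.0, sigma=0.058, rho_w=930.0, rho_s=1.65)
print(f"J*_g = {jg:.2f}, K_g = {kg:.1f}")
```

Because the Wallis parameter carries the geometric length W while the Kutateladze number does not, the two parameters scale differently with facility size, which is the root of the extrapolation problem discussed next.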
In spite of the variation of scale of the test facilities between 1/30 and 1/5, the extrapolation to full-scale downcomer geometry was still not clear. To provide CCFL and ECC bypass data for full reactor geometry, tests on UPTF were carried out. The result of a UPTF experiment, simulating the downcomer
Fig. 13.4 Counter-current flow conditions in full-scale downcomer for strongly subcooled ECC water, distribution of subcooled temperatures (UPTF Test Run 131, downcomer CCFL; contour plot of fluid subcooling over downcomer height and azimuthal position).
behavior during the end-of-blowdown and the refill phases for a large cold leg break, is shown in Fig. 13.4. The test was carried out at a reactor-typical steam upflow of 320 kg/s with ECC water injection (subcooling 115 K) into the three intact loops. The contour plot shows isotherms of fluid temperatures (subcooling) in the downcomer projected onto a plane. The two-dimensional presentation shows strongly heterogeneous steam-water flow conditions which were not observed in small-scale experiments. The ECC water delivered from cold legs 2 and 3, which are located opposite the broken loop connection, penetrates the downcomer without being strongly affected by the upflowing steam. Most of the ECC water delivered from cold leg 1, which is located close to the broken loop, however, flows directly to the break, bypassing the core. To demonstrate the effect of facility scaling on downcomer CCFL, the data obtained from UPTF and from CREARE at 1/5 scale are compared in GRS (1992) and Glaeser and Wolfert (2012). Slightly subcooled ECC water was used in the UPTF test and saturated ECC water in CREARE. Due to the strongly heterogeneous flow conditions in the full-scale downcomer of UPTF, the water delivery curve of UPTF is significantly higher than that of CREARE. The main findings with respect to downcomer behavior during the end-of-blowdown and refill phases of a large cold leg break with cold leg or downcomer ECC injection can be summarized as follows:
– A significant scale effect on downcomer behavior can be observed
– Steam-water flow conditions in the downcomer are highly heterogeneous at full scale
– Heterogeneous or multidimensional behavior increases water delivery rates at full scale relative to previous tests on down-scaled facilities
– CCFL correlations developed from the down-scaled tests are not applicable to full-scale downcomers
– Downcomer CCFL correlations for cold leg ECC injection based on down-scaled test results underpredict the water penetration to the lower plenum at full scale
– Due to strongly heterogeneous flows in a real downcomer, a CCFL correlation has to account for the location of the ECC injection relative to the broken cold leg
In order to describe the vertical asymmetric heterogeneous gas/liquid counter-current flow in the full-scale downcomer, the Kutateladze-type flooding equation was extended by Glaeser (1992), correlating the local steam velocities of the multidimensional flow field with the superficial steam velocity. A geometrical lateral distance between the legs with ECC injection and the broken loop is introduced in the gas upflow momentum term. This term relates the local upward gas velocity at the water downflow locations to the superficial gas velocities. The superficial gas velocity can be calculated from the steam mass flow rate.

K_g^{1/2} \left( \frac{\nu_g^{2/3}}{g^{1/3} L} \right)^{1/2} + 0.011 \, K_l^{1/2} = 0.0245 \quad \text{for} \quad \frac{g^{1/3} L}{\nu_g^{2/3}} \geq 5400

L = 0.5 \, \pi \, d_{outer} \, \sin^2 (0.5 \, \Theta_{ECC\text{-}BCL})
If there is more than one ECC injection location, the arithmetic mean value of all distances L between the ECC injection legs and the broken leg has to be used in the correlation. However, only those injection locations can be considered where water can flow downward. This means that the modified dimensionless gas velocity obtained by using the value of L for the individual injection location has to be below the onset-of-penetration point. Otherwise, the respective ECC injection leg cannot be included in the arithmetic mean value of L. More details on the derivation and application of the Glaeser correlation can be found in Glaeser (1992) and Glaeser and Karwat (1993). The resulting lowest gas velocity for zero water penetration (onset of penetration) is shown in Fig. 13.5 (legend provided in Table 13.3) as a function of the downcomer circumference scale. Water downflow is impossible for gas velocities above the curve. Three different flooding correlations are applicable for three different scaling regions: the classical Wallis and Kutateladze types as well as the Glaeser correlation. The range of applicability is governed by the dimensionless annulus circumference. It can be seen that it is impossible to extrapolate counter-current flow correlations from small-scale data below one-ninth downcomer circumference scale (equivalent to 1/81 flow cross-section scale) to reactor scale. The full-scale UPTF data were needed to clarify the influence of scaling on the ECC flooding phenomenon.
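The three scaling regions can be sketched as a piecewise onset-of-penetration curve; a minimal sketch using the constants shown in Fig. 13.5 (the region boundaries follow from continuity of the curve, since 0.16·√400 = 3.2 and (0.0245)²·5400 ≈ 3.2):

```python
# Zero-liquid-penetration (total bypass) gas velocity in the downcomer as a
# piecewise function of scale: a Wallis-type region at small scale, a
# constant Kutateladze region at intermediate scale, and the Glaeser
# correlation at large scale. Constants as read from Fig. 13.5.

def kg_onset(bo: float, l_star: float) -> float:
    """Dimensionless gas velocity K_g above which no water can flow down.

    bo     : Bond number (dimensionless annulus circumference)
    l_star : g^(1/3) * L / nu_g^(2/3), dimensionless lateral distance
    """
    if bo < 400.0:             # small scale: Wallis-type region
        return 0.16 * bo ** 0.5
    if l_star < 5400.0:        # intermediate scale: constant Kutateladze number
        return 3.2
    return 0.0245 ** 2 * l_star   # full scale: Glaeser correlation
```

At full scale the curve rises linearly with the dimensionless lateral distance, which is why correlations fitted to small-scale data underpredict the water penetration at reactor scale.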
Fig. 13.5 Downcomer flooding correlation for zero penetration of liquid (total bypass): dimensionless gas velocity K_g versus downcomer circumference scale (CREARE 1/30, 1/15, and 1/5 up to full scale), with three regions — Wallis-type K_g = 0.16 Bo^{1/2} up to Bo = 400, constant Kutateladze number K_g = 3.2, and the Glaeser correlation K_g = (0.0245)^2 g^{1/3}L/ν_g^{2/3} from g^{1/3}L/ν_g^{2/3} = 5400, where Bo = π d_av [g(ρ_l − ρ_g)/σ]^{1/2} and L = 0.5 π d_outer for Θ = 180°.
Table 13.3 Legend for Fig. 13.5

A_DC         Flow cross section of downcomer
Bo           Bond number
d_av         Downcomer average diameter
d_outer      Downcomer outer diameter
Fr           Froude number
g            Gravity
L            Characteristic geometrical length
Ṁ            Mass flow rate
Θ_ECC-BCL    Angle between ECC injection leg and broken loop
ν            Kinematic viscosity
ρ            Density
σ            Surface tension

Subscripts
g            Vapor
l            Liquid
13.3.3.3 Calculated results for importance of downcomer counter-current flow on maximum cladding temperature and quench time

The downflow of ECC water injected into the cold legs or at the top of the downcomer, counter-current to steam flowing upward, was considered highly important in the PIRTs, especially for a large break (LB) in one cold leg scenario. As described before,
that phenomenon had a significant effect on the cooling of the heater rod bundles representing the core of a reactor in small-scale experimental facilities like SEMISCALE and LOBI. A statistical uncertainty and sensitivity analysis for a 2 × 100% cold leg break of the Zion NPP was performed using the ATHLET computer code (Glaeser et al., 2008). Of interest here was information on the importance of counter-current flow in the downcomer for the maximum cladding temperature. Such information can be obtained from the statistical GRS uncertainty and sensitivity method, see Section 13.4. Validation against differently scaled experiments, up to the 1:1 UPTF separate counter-current flow experiments in the downcomer, was performed. The interfacial shear multiplier was adjusted to calculate the counter-current flow in good agreement with the data. Interfacial shear had to be reduced significantly for the UPTF experiments during nondispersed flow in the downcomer and the core, compared with small-scale experiments, in order to calculate the downflow water mass flow in agreement with the data. The interfacial shear in the upper plenum and core region was adjusted in order to calculate the water flow coming locally via the hot legs from the steam generators and the pressurizer. These extended ranges for interfacial shear were used for the Zion calculations. The upper bound of the uncertainty ranges from small-scale experiments remained. The multipliers for the downcomer derived from 1:1 scale UPTF calculations are in the range of 0.05–3.0 for the reactor, instead of 0.33–3.0 used for the 1:50 scale LOFT experiments. The ranges for the core, tie plate, and upper plenum are 0.01–2.5 instead of 0.2–2.5 for LOFT. In contrast to earlier PIRTs, it turned out that no significant effect of downcomer counter-current flow on the cladding temperature is seen in the reactor calculation. Fig. 13.6 shows the uncertainty band (tolerance interval) of calculated maximum cladding temperature versus time and the time span of accumulator injection.
Fig. 13.6 Calculated uncertainty band (two-sided lower/upper tolerance limits and reference calculation) of maximum cladding temperature of the Zion reactor, 2 × 100% cold leg break; accumulator injection from approx. 12 s through 50–80 s (reference calculation: 60 s).
The importance measures are shown in Fig. 13.7 for the blowdown phase and in Fig. 13.8 for the reflood phase. The length of the bars indicates the importance; a positive sign means that the cladding temperature tends to increase with increasing input parameter values, and vice versa. A larger effect on the cladding temperature comes from the core and tie plate counter-current flow, and an even larger one on the quench time (Fig. 13.9). A reason may be that the time of limitation of water downflow through the
Fig. 13.7 Importance measures of 55 uncertain input parameters for maximum cladding temperature during blowdown phase.
Fig. 13.8 Importance measures of 55 uncertain input parameters for maximum cladding temperature during reflood phase.
Fig. 13.9 Importance measures of 55 uncertain input parameters for quench time.
downcomer, counter-current to significant steam upflow, is relatively short during such an LB transient at reactor scale.
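The importance measures of Figs. 13.7–13.9 are rank correlation coefficients between sampled uncertain input parameters and the output of interest. A minimal sketch of how such a measure is computed from a Monte Carlo sample; the toy response model, parameter ranges, and sample size are illustrative assumptions, not the actual ATHLET analysis:

```python
import random

# Spearman rank correlation as an importance measure: correlate the ranks of
# each sampled input parameter with the ranks of the computed output.

def ranks(values):
    """Rank of each value within the sample (1 = smallest); no ties assumed."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman rank correlation coefficient for samples without ties."""
    n = len(x)
    d2 = sum((rx - ry) ** 2 for rx, ry in zip(ranks(x), ranks(y)))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

random.seed(1)
shear = [random.uniform(0.05, 3.0) for _ in range(100)]   # interfacial shear multiplier
power = [random.uniform(0.95, 1.05) for _ in range(100)]  # hot-rod power factor
# Toy response: temperature rises with power, falls with shear, plus noise.
pct = [800 + 400 * p - 20 * s + random.gauss(0, 10) for p, s in zip(power, shear)]

print("importance of power factor:   ", round(spearman(power, pct), 2))
print("importance of shear multiplier:", round(spearman(shear, pct), 2))
```

Because ranks rather than raw values are correlated, the measure is robust against monotone nonlinearities in the code response, which is one reason rank correlation is used in such sensitivity studies.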
13.3.3.4 Vertical counter-current flow of saturated steam and water for homogeneous flow conditions at the upper core tie plate and in the upper plenum

To study the counter-current flow at the upper core tie plate and the liquid holdup above the tie plate in the case of saturated steam/water upflow, a series of UPTF tests was carried out. The core simulator, with controlled injection of steam and water, established reactor-typical steam/water upflow. Steam and entrained water upflow is of higher relevance for counter-current flow in the reactor than steam upflow alone. This effect had not been investigated in vertical counter-current SETs prior to UPTF. The test results indicate that:
– Steam/water upflow, the two-phase pool above the tie plate, and water fall-back through the tie plate are uniform across the core area
– Flooding curves for both full-scale and subscale test facilities are similar
– Water downflow to each fuel assembly is scale-invariant
– For homogeneous flow conditions at the tie plate, the flooding curve can be defined by applying the Kutateladze number as scaling parameter
Consequently, the effect of scaling on vertical counter-current steam-water flow depends on the two-phase flow distribution, either heterogeneous or homogeneous. Heterogeneous flows are highly scale dependent, but homogeneous flows are not.
13.3.3.5 Flow conditions in the hot leg during reflux condenser mode

In the reflux condenser mode, heat is transferred from the core to the secondary side of the steam generators by evaporation of water in the core and subsequent condensation of the steam in the U-tubes of the steam generators. A portion of the condensate flows back from the steam generator through the hot leg to the upper plenum, counter-currently to the steam. That condensate serves to cool the core. Due to momentum exchange between the steam flow and the water flow in the hot legs, flooding may occur, which could prevent or at least deteriorate the water flow back to the core. The Kutateladze-type equation considering instabilities of the gas-liquid interface in horizontal counter-current flow conditions can be transferred into the Wallis-type equation (Glaeser, 1992). This is possible for horizontal or inclined flow since the gravity force on unstable waves and droplets acts perpendicular to the main flow directions of steam and water and counters the pressure difference between the bottom of such a wave and the crest, caused by different steam velocities at the wave crest and bottom (Glaeser and Karwat, 1993). Hence, the Wallis correlation is applicable to horizontal counter-current flow over the whole scaling region, which is different from vertical heterogeneous counter-current flow. The UPTF tests have demonstrated that a substantial margin exists between the flooding limit and the typical conditions expected in a PWR during the reflux condenser mode of an SB-LOCA. This is the case in a full-reactor-scale hot leg (Glaeser and Karwat, 1993; see Fig. 13.10). The relevant steam mass flow results from a 2% core decay power, scaled from 8 to 0.3 MPa, the pressure of UPTF. GRS used these full-scale UPTF test series data on counter-current flow in the hot leg for the development and validation of their ATHLET computer code when the data became available in 1984.
Prior to that, the code was developed and validated using small-scale LOBI and LOFT experiments. A new correlation for the geometry of the horizontal tube, including the inclined tube section at the steam generator inlet, was developed and validated against a variety of UPTF experiments. For example, in UPTF Test 11 saturated water was supplied from the steam-generator side and, simultaneously, saturated steam from the vessel side, i.e., from the upper plenum. The steam mass flow rate was increased gradually until occurrence of the CCFL. Good correspondence for a horizontal tube with original diameter demonstrated that the velocities of the two phases, including the CCFL, are determined realistically by the new correlation. With the new correlation, further experiments were analyzed successfully, e.g., the depletion of the steam generator U-tubes during a small-break experiment in the LOBI test facility. After completion of the model validation calculations, the new correlation was included in the ATHLET code. Based on the same findings, the vendor made similar improvements in its version of SRELAP5. Some licensing calculations were repeated in order to determine the effect of the changed correlation in ATHLET. A medium break size of 160 cm² (3.6%) in the cold leg of the KONVOI plants was selected for demonstration. It turned out that no heat-up was calculated, in contrast to the old model, due to the improved back-flow of condensate from the steam generator toward the upper plenum, which fills the core region and provides better cooling of the core (see Fig. 13.11).
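A hedged sketch of a Wallis-type flooding check for the hot leg: the correlation constants C and m are assumed generic textbook values, not the correlation actually implemented in ATHLET, and the steam conditions and pipe diameter are round numbers motivated by the text (D = 0.75 m, ~0.3 MPa):

```python
import math

# Wallis-type flooding limit sqrt(J*_g) + m*sqrt(J*_l) = C for horizontal
# counter-current flow in the hot leg. C and m are assumed generic values.

C, M = 0.7, 1.0

def jstar(m_dot, rho, d, rho_l, rho_g, g=9.81):
    """Dimensionless superficial velocity of a phase in a pipe of diameter d."""
    area = math.pi * d * d / 4.0
    j = m_dot / (rho * area)
    return math.sqrt(rho) * j / math.sqrt(g * d * (rho_l - rho_g))

def max_condensate_downflow(jg_star):
    """Largest dimensionless liquid downflow J*_l still below the flooding limit."""
    root = C - M * math.sqrt(jg_star)
    return root * root if root > 0.0 else 0.0

# Assumed steam flow of 8 kg/s in a 0.75 m hot leg at ~0.3 MPa saturation.
jg_hot_leg = jstar(8.0, rho=1.65, d=0.75, rho_l=930.0, rho_g=1.65)
margin = max_condensate_downflow(jg_hot_leg)
print(f"J*_g = {jg_hot_leg:.3f}, admissible J*_l = {margin:.3f}")
```

A positive admissible liquid downflow illustrates, qualitatively, the margin between reflux-condenser conditions and the flooding limit discussed in the text; quantitative statements require the validated correlation itself.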
Fig. 13.10 Counter-current flow limitation compared with reactor steam mass flow in the hot leg (water delivery curves of Richter and Ohnuki versus scaling D_facility/D_reactor, with UPTF Test 11 data; steam mass flow rate shown for D = 0.75 m).
Fig. 13.11 Application of counter-current flow results of UPTF to ATHLET calculations for small-break LOCA (160 cm² break in cold leg of a 1300 MWe PWR) with respect to maximum cladding temperature in the core (previous versus recent analysis).
13.3.4 Conclusions and future trends

Some information about scaling in nuclear reactor SYS TH has been presented. Some phenomena depend strongly on geometrical scale, like heterogeneous vertical steam-water CCFL. The SYS TH computer code is an important scaling tool to
perform safety analysis for power reactors. There are therefore strong connections between scaling, code validation, setting up the nodalization and uncertainty evaluation, as well as code user qualification and expertise. Power and volume scaling is the dominant approach. The concept of scaling hierarchy by Zuber may be applied. Full height is often used for test facilities. The alternative Ishii scaling makes it possible to reduce the cost of an experimental facility and to reduce the influence of heat release from structures, or the distortion due to a different surface-to-volume ratio, on the course of a transient. However, the Ishii scaling may need confirmation by a time-preserving scaling in order to provide confidence in the simulation of important phenomena and the timing of the sequences of an accident scenario. The examples presented show very clearly that the thermal-hydraulic behavior of a down-scaled ITF cannot be extrapolated directly to obtain nuclear plant behavior. Although the ITFs have been designed with different volume scalings, ranging from 1:2070 to 1:48, the experimental results of these facilities do not solve the scaling problem. A combination of integral system tests with separate effect tests at full reactor scale is indispensable. The computer codes shall be validated to calculate integral and separate effects experiments over the whole available scaling range in good agreement with the data. Therefore, the SYS TH computer code is an important scaling tool for performing safety analysis for power reactors. The refinement of the thermal-hydraulic system codes, the completion of the validation, and the quantification of the uncertainties in the simulation of full-size plant accidents are tasks for the coming years. Another important effort in the analytical work will be focused on the development of two-phase flow computational fluid dynamics (CFD) codes.
Due to the high resolution in space and the possibility of a three-dimensional description of single- and two-phase flow phenomena, the scaling issue will become less important. In parallel with the progress in computer hardware toward reduced calculation time, the application area of these codes will be extended. The development of two-phase flow CFD codes is a great challenge for the analytical teams in the coming decades and may contribute to the motivation of young scientists working in the field of nuclear technology.
13.4 Current uncertainty methods
13.4.1 Introduction

The safety analysis used to justify the design and construction of water-cooled NPPs demonstrates, in a deterministic thermal-hydraulic safety evaluation, that the plants are designed to respond safely to various postulated accidents. These postulated accidents include LOCAs, in which the pressure boundary containing the water that takes heat away from the reactor core is breached, and a range of other transients in which conditions deviate from those in normal operation. The equations of state and heat transfer properties of water, including relevant two-phase phenomena, vary very substantially over the conditions that occur in such accidents. These events can therefore only be simulated by computer codes.
Initial and boundary conditions, geometry, reactor plant operating parameters, fuel parameters, and scale effects may not be exactly known. The models of the thermal-hydraulic computer codes approximate the physical behavior, and the solution methods in the codes are approximate. Therefore, the code predictions are not exact but uncertain. Agreement of calculated results with experimental data is often obtained by choosing specific code input options or changing parameter values in the model equations. These selected parameter values usually have to be changed again for different experiments in the same facility, or for a similar experiment in a different facility, in order to obtain agreement with data. A single "best estimate" calculation with an advanced realistic code gives results of unknown accuracy. To make such results useful, for example, if they are to be compared with acceptance limits, the uncertainty in the predictions has to be quantified separately. Uncertainty analysis methods were therefore developed to estimate safety margins when best estimate codes are used to justify reactor operation. A particular stimulus was the USNRC's revision of its regulations in 1989 to permit the use of realistic models with quantification of uncertainty in licensing submissions for ECCS (CFR, 1989). In addition, uncertainty analysis can be used to assist research prioritization. It can help to identify models that need improvement and areas where more data are needed. It can make code development and validation more cost-effective. The alternative is to use conservative computer codes and/or conservative initial and boundary conditions, introducing unquantified conservatisms intended to bound uncertainties. This may lead to unrealistic results. A list of ECCS evaluation models following the conservative approach in the United States is provided in Appendix C of CFR (1989).
That conservative approach is not part of this section, which deals with uncertainty evaluation rather than with bounding computational results for safety parameters conservatively toward the regulatory acceptance criteria (CFR, 1989). An uncertainty analysis quantifies the uncertainty of a calculated safety parameter, and the range toward the acceptance criterion determines the margin. Such a procedure is not nonconservative: it quantifies the conservatism of a calculation result.
13.4.2 Common methods

Within the uncertainty methods considered, uncertainties are evaluated using either (1) propagation of input uncertainties: the code scaling, applicability, and uncertainty (CSAU) method and the GRS (Gesellschaft für Anlagen- und Reaktorsicherheit) method, or (2) extrapolation of output uncertainties: the uncertainty methodology based on accuracy extrapolation and the code with the capability of internal assessment of uncertainty (UMAE/CIAU).
For the “propagation of input uncertainties,” uncertainty is obtained following the identification of uncertain input parameters with specified ranges and/or probability distributions of these parameters, and performing calculations varying these parameters. The propagation of input uncertainties can be performed by either deterministic or statistical methods. For the “extrapolation of output uncertainty” approach,
uncertainty is obtained from the output uncertainty, based on the comparison between calculation results and relevant experimental data. A short description of these methods is provided in the following sections; they have been described in several publications. Detailed, comprehensive information on the different methods and their applications can be obtained from the IAEA Safety Series No. 52 (IAEA, 2008).
13.4.2.1 CSAU method

The CSAU methodology was proposed to investigate the uncertainty of safety-related output parameters. It supplies a complete framework for the whole process of an uncertainty analysis. The output parameters in the demonstration cases of this method were only single-valued parameters, such as PCT or minimum water inventory; no time-dependent values were provided. First, a procedure is used to evaluate the code applicability to a selected plant scenario. Experts identify all the relevant phenomena. In the following step, the most important phenomena are identified and listed, based on an examination of experimental data and code predictions of the scenario under investigation, and are ranked as highly important. In the resulting PIRT, ranking is accomplished by expert judgment. The PIRT and code documentation are evaluated, and it is decided whether the code is applicable to the plant scenario. The CSAU methodology is described in detail by Boyack et al. (1990). Demonstration applications have been performed for a LB-LOCA and a SB-LOCA for a PWR (Boyack et al., 1990; Ortiz and Ghan, 1992).

An optimized nodalization to capture the important physical phenomena should be used for all calculations. This nodalization represents a compromise between accuracy and cost, based on experience obtained by analyzing SETs and integral experiments. No particular method or criteria are prescribed for this task. Only parameters important for the highly ranked phenomena are selected for consideration as uncertain input parameters. The selection is based on a judgment of their influence on the output parameters. Additional output biases are introduced to account for the uncertainty of parameters not included in the variation calculations.
Information from the vendor of NPP components, as well as from experiments and previous calculations, was used to define the mean value and probability distribution or standard deviation of uncertain input parameters for both the LB- and the SB-LOCA analyses. Additional biases can be introduced in the output uncertainties. Uniform and normal distributions of the uncertain input parameters were used in the two applications performed for demonstration. Output uncertainty is the result of the propagation of input uncertainties through a number of code calculations. A statistical method for uncertainty evaluation was not proposed in the CSAU, but such methods have since been widely used. A response surface approach was used in the demonstration applications. The response surface fits the code predictions obtained for selected parameters and is used in place of the original computer code for performing the variation calculations. Such an approach then entails the use of
a limited number of uncertain parameters in order to reduce the number of code runs and the cost of analysis, but was limited to results of point values, like PCT, in the demonstrations. However, within the CSAU framework the response surface approach is not prescribed, and other methods can be applied. Scaling is considered by the CSAU by identifying several issues based on test facilities and code assessment. The effect of scale distortions on main processes, the applicability of the existing database to the given NPP, the scale-up capability of closure relationships, and their applicability to the NPP range are evaluated at a qualitative level. Biases are introduced if the scaling capability is not provided.
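The response-surface idea can be illustrated with a minimal sketch. Everything below is invented for illustration: the quadratic `code_run` is a stand-in for an expensive SYS TH calculation, not the actual TRAC/RELAP models fitted in the CSAU demonstrations, and a simple interpolation replaces the regression fits used there.

```python
def code_run(x):
    # Stand-in for one expensive code calculation: e.g., PCT (K) as a
    # function of one normalized uncertain input parameter x (invented).
    return 1000.0 + 150.0 * x - 40.0 * x * x

def response_surface(points):
    # Lagrange interpolation through the sampled (input, output) points;
    # with three points this yields an exact quadratic surrogate.
    def surrogate(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if i != j:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return surrogate

# Three "code runs" at selected input values are enough to build the surrogate.
samples = [(x, code_run(x)) for x in (-1.0, 0.0, 1.0)]
pct = response_surface(samples)
# Variation calculations can now be performed cheaply, e.g. pct(0.37),
# instead of rerunning the original computer code.
```

This mirrors the trade-off described above: the surrogate is cheap to evaluate many times, but it is only trustworthy for the few parameters and point-valued outputs it was fitted to.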
13.4.2.2 Statistical (GRS) method

The statistical method again propagates potentially uncertain input parameters to uncertain output parameters by code calculations. The main advantage of using these statistical tools is that the number of calculations is independent of the number of uncertain parameters to be investigated. The number of calculations depends only on the chosen tolerance limits or intervals of the uncertainty statements of the results. Not using these statistical tools necessitates a drastic increase in the number of code calculations with the number of uncertain parameters when uncertain values of the selected input parameters have to be combined. Choosing combinations of parameter values at will by the expert and performing the respective code runs does not allow one to make quantitative tolerance/confidence statements about the combined influence of the identified uncertainties. Such statements could not be made without the tools of statistics, not even after a very large number of code runs. It is simply a matter of efficiency to exploit what is known from statistics in order to reach the target uncertainty coverage and confidence at minimum cost, or to extract the most information from the expended number of runs.

The features of the statistical method are:
(1) The uncertainty space of input parameters (computer code models, initial and boundary conditions, geometry, reactor plant operating parameters, fuel parameters, scale effects, and solution algorithms) is defined by uncertainty ranges and probability distributions. These are sampled at random according to the probability distributions of the uncertain parameters, and code calculations are performed with the sampled sets of parameters.
(2) The number of code calculations is determined by the requirement to estimate a tolerance and confidence interval for the quantity of interest, such as PCT or cladding temperature versus time.
Following a proposal by GRS, Wilks' formula (Wilks, 1941, 1942) is used to determine the number of calculations needed to obtain the uncertainty bands.
(3) Statistical evaluations are performed to determine the sensitivities or importance of input parameter uncertainties on the uncertainties of key results, i.e., a parameter importance analysis.
The smallest number n of code runs to be performed according to Wilks' formula (Wilks, 1941, 1942) satisfies:

1 − a^n ≥ b
which is the size of a random sample (number of calculations) such that the maximum calculated value in the sample is an upper statistical tolerance limit. The required number n of code runs for the upper a = 95% fractile at the b = 95% confidence level is 59. For two-sided statistical tolerance intervals (investigating the output parameter distribution within an interval) the formula is:

1 − a^n − n(1 − a)a^(n−1) ≥ b

The minimum number for a 95%/95% two-sided tolerance interval is 93. The minimum numbers of calculations can be found in Table 13.4. Upper statistical tolerance limits are the upper b confidence limits for the chosen a fractile. The fractile indicates the probability content of the probability distribution of the code results (e.g., a = 95% means that PCT is below the tolerance limit with at least 95% probability). One can be b% confident that at least a% of the combined influence of all the characterized uncertainties is below the tolerance limit. The confidence level is specified because the probability is not determined analytically; it accounts for the possible influence of the sampling error due to the fact that the statements are obtained from a random sample of limited size. The 95% probability for the uncertainty evaluation comes from US NRC Regulatory Guide 1.157, "Best Estimate Calculations of Emergency Core Cooling System Performance," which requires that an acceptance criterion not be exceeded with a probability of 95% or more. The same value is stated in the IAEA Specific Safety Guide No. SSG-2, "Deterministic Safety Analysis for Nuclear Power Plants" (IAEA, 2009). In addition it says: "Techniques may be applied that use additional confidence levels, for example, 95% confidence levels." For regulatory purposes, where the margin to licensing criteria is of primary interest, the one-sided tolerance limit may be applied; that is, for a 95th/95th percentile, a minimum of 59 calculations should be performed.
As a consequence, the number n of code runs is independent of the number of selected input uncertain parameters, only depending on the percentage of the fractile and the desired confidence level percentage. The number of code runs for obtaining sensitivity measures is also independent of the number of uncertain input parameters. As an example, 100 runs were carried out in the analysis of a reference reactor, using
Table 13.4 Minimum number of calculations n for one- and two-sided statistical tolerance limits

One-sided statistical tolerance limit:

  a       b = 0.50   b = 0.90   b = 0.95   b = 0.99
  0.50        1          4          5          7
  0.90        7         22         29         44
  0.95       14         45         59         90
  0.99       69        230        299        459

Two-sided statistical tolerance limit:

  a       b = 0.50   b = 0.90   b = 0.95   b = 0.99
  0.50        3          7          8         11
  0.90       17         38         46         64
  0.95       34         77         93        130
  0.99      168        388        473        662
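The entries of Table 13.4 follow directly from the two formulas above. A short illustrative script (not part of any licensed methodology) reproduces the minimum numbers of runs:

```python
def wilks_one_sided(a, b):
    # Smallest n with 1 - a**n >= b: the maximum of n runs is then an
    # upper statistical tolerance limit for the a-fractile at confidence b.
    n = 1
    while 1.0 - a ** n < b:
        n += 1
    return n

def wilks_two_sided(a, b):
    # Smallest n with 1 - a**n - n*(1 - a)*a**(n - 1) >= b: the minimum and
    # maximum of n runs then form a two-sided tolerance interval.
    n = 2
    while 1.0 - a ** n - n * (1.0 - a) * a ** (n - 1) < b:
        n += 1
    return n

print(wilks_one_sided(0.95, 0.95))  # 59, the 95%/95% one-sided limit
print(wilks_two_sided(0.95, 0.95))  # 93, the 95%/95% two-sided interval
```

Note that neither function takes the number of uncertain input parameters as an argument, which is precisely the advantage of the statistical approach described above.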
50 parameters. More description can be found in Hofer (1993) and Glaeser (2008). The total number n of code runs is performed varying simultaneously the values of all uncertain input parameters according to their distributions. For each instant of time, the n values of the considered output parameters are ordered from the lowest to the highest value:

Y(1) < Y(2) < ⋯ < Y(n−1) < Y(n)

This "order statistics" is used when Wilks' formula is applied. On the basis of the ordering of the output values, the tolerance limits are obtained at a confidence level of 95% by selecting the output values given in Table 13.5. For the selected plant transient, the method can be applied to an integral effects test simulating the same scenario prior to the plant analysis. If the experimental data are not bounded, the set of uncertain input parameters has to be modified, or ranges and distributions changed. Experts identify significant uncertainties to be considered in the analysis, including the modeling uncertainties and the related parameters, and identify and quantify dependencies between uncertain parameters, if present. Probability density functions (PDFs) are used to quantify the state of knowledge of uncertain parameters for the specific scenario. Uncertainties of code model parameters are derived based on validation experience. Therefore, the experts performing the validation should determine the probability distributions of model uncertainties. The scaling effect is quantified as a model uncertainty. Additional uncertain model parameters can be included, or PDFs can be modified, to account for results from differently scaled SET analyses. Input parameter values are simultaneously varied by random sampling according to the PDFs and according to dependencies between them, if dependencies are relevant. A set of parameters is provided to perform the required number n of code runs. For example, the 95% fractile and 95% confidence limit of the resulting distribution of
Table 13.5 Selection of tolerance limits at the 95% confidence level depending on the number of code runs performed

  Number of code    One-sided 95th       One-sided 5th        Two-sided 95th
  runs (samples)    percentile           percentile           percentile
                    tolerance limit      tolerance limit      tolerance interval
  59                Y(n)                 Y(1)                 –
  93                Y(n−1)               Y(2)                 Y(1) and Y(n)
  124               Y(n−2)               Y(3)                 Y(1) and Y(n)
  153               Y(n−3)               Y(4)                 Y(2) and Y(n−1)
  181               Y(n−4)               Y(5)                 Y(3) and Y(n−2)
the selected output quantities are directly obtained from the n code results, without assuming any specific distribution. No response surface is used. Statistical uncertainty and sensitivity analysis provides statements on:
- The uncertainty range of the code results, which makes it possible to determine the margin between the bound of the uncertainty range closest to an acceptance criterion and the acceptance criterion itself. The 95% fractile, 95% confidence limits for output parameters versus time are provided.
- Sensitivity measures of the influence of the input parameters on the calculation results, i.e., a ranking of importance versus time. This ranking of input parameters with respect to output uncertainty is a result of the analysis instead of a prior PIRT; it guides further code development and prioritizes experimental investigations to obtain more detailed information.
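The selection of tolerance limits from the ordered output values per Table 13.5 can be sketched in a few lines; the sampled "code results" below are random placeholders, not actual SYS TH outputs:

```python
import random

# Index shift for the one-sided 95%/95% upper tolerance limit as a function
# of the number of runs n, following Table 13.5: Y(n), Y(n-1), Y(n-2), ...
SHIFT = {59: 0, 93: 1, 124: 2, 153: 3, 181: 4}

def upper_95_95(outputs):
    # Order the n output values, Y(1) < Y(2) < ... < Y(n), and select the
    # order statistic that serves as the one-sided 95%/95% tolerance limit.
    n = len(outputs)
    ordered = sorted(outputs)
    return ordered[n - 1 - SHIFT[n]]

random.seed(0)
runs = [random.gauss(900.0, 40.0) for _ in range(93)]  # placeholder PCTs (K)
limit = upper_95_95(runs)  # for n = 93 this is Y(n-1), the second-largest value
```

Performing more runs than the minimum 59 sharpens the limit: with 93 runs the second-largest value already suffices, so a single outlier no longer dictates the quoted bound.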
The sensitivity or importance measures give useful information about those input parameters that influence the uncertainty of the computer code results most. That information can be used to find out which ranges and distributions of input uncertainties should be determined more accurately. These sensitivity measures, obtained using regression or correlation techniques applied to the sets of input parameters and the corresponding output values, allow a ranking of the uncertain input parameters in relation to their contribution to output uncertainty. The ranking of parameters is therefore a result of the analysis, not of prior expert judgment like a PIRT. Nevertheless, a PIRT may be performed to define uncertainty ranges and distributions for a limited number of input parameters considered to be important, or to identify important code models to be validated. The same ordered statistical method using Wilks' formula for setting the number of code calculations was later followed by the AREVA method (Martin and Dunn, 2004), the ASTRUM (Automatic Statistical TReatment of Uncertainty) method of Westinghouse (Muftuoglu et al., 2004), KREM in Korea, ESM-3D in France, and several more. The AREVA method was licensed by the USNRC in 2003 and the ASTRUM method in 2004.
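One such correlation-based sensitivity measure is the Spearman rank correlation between each sampled input and the output. The sketch below is purely illustrative: the two inputs, their ranges, and the linear toy "output" are invented, and real analyses typically use more elaborate measures such as standardized rank regression coefficients.

```python
import random

def ranks(values):
    # Rank position of each value (no tie handling in this sketch).
    order = sorted(range(len(values)), key=values.__getitem__)
    r = [0] * len(values)
    for pos, idx in enumerate(order):
        r[idx] = pos
    return r

def spearman(x, y):
    # Spearman rank correlation = Pearson correlation of the ranks.
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

random.seed(1)
x1 = [random.uniform(0.8, 1.2) for _ in range(100)]  # e.g., a discharge coefficient
x2 = [random.uniform(0.5, 2.0) for _ in range(100)]  # e.g., a drift parameter
pct = [900.0 + 200.0 * a + 5.0 * b for a, b in zip(x1, x2)]  # toy output (K)

# The absolute rank correlations rank x1 as the dominant contributor here.
print(abs(spearman(x1, pct)) > abs(spearman(x2, pct)))  # True
```

The magnitudes of such measures across all sampled inputs produce exactly the kind of a posteriori importance ranking described in the text.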
Number of calculations for statistical methods to meet more than one regulatory acceptance limit

A controversial international discussion took place during the years 2003–2005 about the number of calculations to be performed using ordered statistical methods. The issue arises mainly when more than one regulatory acceptance criterion or limit has to be met, such as PCT, local oxidation of the fuel rods, and core average zirconium-water reaction for LOCAs. Wald (1943) extended Wilks' formula to multidimensional joint/simultaneous tolerance limits or intervals. However, a direct extension of the concept of tolerance limits turns out not to be necessary for safety-relevant applications in nuclear safety. A slightly modified concept has therefore been proposed by Krzykacz-Hausmann from GRS, introducing a lower confidence limit (Glaeser et al., 2008). The lower confidence limit according to Clopper-Pearson (Brown et al., 2001) is computed for the binomial parameter, which is here the unknown probability that a result is lower than a regulatory acceptance limit. Instead of direct joint tolerance limits for the outputs of interest, one considers the lower confidence limit for the probability of "complying with the safety limits for all outputs,"
i.e., "meeting the regulatory acceptance criteria." The basis is that the following two statements are equivalent:
1. The Wilks (probability a = 95% and confidence b = 95%) limit for the results is below the regulatory acceptance limit.
2. The lower b = 95% confidence limit for the probability that the value of the result stays below the regulatory acceptance limit is greater than or equal to a = 95%.
The regulatory acceptance limits are incorporated into the probabilistic statements. It turns out that:
(1) in the one-dimensional case, i.e., for a single output parameter, this concept is equivalent to the one-sided upper tolerance limit concept;
(2) the necessary number of model runs is also the same in the general case, i.e., independent of the number of outputs or criteria involved and of the type of interrelationships between these outputs or criteria.
Therefore, the number of necessary model runs is the same as in the one-dimensional tolerance limit case, even if several output parameters are involved. In the one-dimensional case the lower 95% confidence limit for the probability of "complying with the regulatory limit" corresponds to the two-step procedure: (1) compute the tolerance limit as usual and (2) compare this tolerance limit with the given regulatory limit. In other words, the statement "there is 95% confidence that the probability of complying with the regulatory limit xreg exceeds 95%" is equivalent to the statement "the computed 95%/95% tolerance limit xTL lies below the regulatory limit xreg." In the multidimensional case there is no such direct correspondence or equivalence.
The principal advantage of the confidence interval or limit (or "sign-test") approach is that it can be used directly in the multidimensional case, i.e., for multiple outputs or several output variables, too. The multidimensional extensions of the tolerance limit approach suffer from:
(1) not being unique, because the runs with the highest value used for checking the first limit have to be eliminated for comparison with the next limit, and so on;
(2) requiring substantially more calculation runs; and
(3) being in most cases unnecessary, since functions of several variables can be reduced to the one-dimensional case.
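In the special case where every one of the n runs complies with all acceptance limits, the Clopper-Pearson lower confidence bound has a closed form, and a few lines suffice to check the equivalence claimed above (illustrative sketch):

```python
def lower_confidence_all_success(n, gamma=0.05):
    # Clopper-Pearson lower (1 - gamma) confidence bound on the compliance
    # probability p when all n runs meet every regulatory limit: the bound
    # solves p**n = gamma, i.e. p = gamma**(1/n), regardless of how many
    # output criteria are checked jointly.
    return gamma ** (1.0 / n)

# 59 compliant runs: with 95% confidence the probability of meeting all
# limits is at least ~0.9505, i.e. >= 0.95, matching the one-sided Wilks case.
print(lower_confidence_all_success(59) >= 0.95)  # True
print(lower_confidence_all_success(58) >= 0.95)  # False: 58 runs do not suffice
```

This makes concrete why the required number of runs (59 at 95%/95%) does not grow with the number of acceptance criteria: only the joint event "all limits met in a run" enters the binomial count.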
The specified input uncertainty ranges have much more influence on the uncertainty range of the computational results than the distributions assigned to these input uncertainties. Therefore, high requirements are placed on the specification and justification of these ranges. Investigations are underway to transform data measured in experiments and post-test calculations into thermal-hydraulic model parameters with uncertainties. Care must be taken to select suitable experimental and analytical information to specify uncertainty distributions. The selection of suitable experiments is important for the UMAE/CIAU method as well.
13.4.2.3 UMAE/CIAU method The CIAU method is based on the principle that it is reasonable to extrapolate code output deviations of relevant experimental tests to real plants, as discussed by D’Auria et al. (1995) and D’Auria and Giannotti (2000). The development of the method implies the availability of qualified experimental data. A first step is to check the
quality of code results with respect to experimental data by using a procedure based on the fast Fourier transform method. Then a method (UMAE) is applied to determine both a quantity accuracy matrix (QAM) and a time accuracy matrix (TAM). Considering ITFs of a reference light water reactor (LWR) and qualified computer codes based on advanced models, the method relies on code capability qualified by application to facilities of increasing scale. Direct data extrapolation from small-scale experiments to the reactor scale is difficult due to the imperfect scaling criteria adopted in the design of each scaled-down facility. Only the accuracy (i.e., the difference between measured and calculated quantities) is therefore extrapolated. Experimental and calculated data in differently scaled facilities are used to demonstrate that physical phenomena and code predictive capabilities of important phenomena do not change when increasing the dimensions of the facilities. However, the available ITF scales are far from the reactor scale. One basic assumption is that phenomena and transient scenarios in larger-scale facilities are close enough to plant conditions. The influence of the user and the nodalization upon the output uncertainty is minimized in the methodology. However, user and nodalization inadequacies affect the comparison between measured and calculated trends; that deviation is considered in the extrapolation process and contributes to the overall uncertainty. Calculations of both ITs and plant transients are used to obtain uncertainty from accuracy. Nodalizations are set up and qualified against experimental data by an iterative procedure, requiring that a reasonable level of accuracy be satisfied. Similar criteria are adopted in developing the plant nodalization and performing plant transient calculations.
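The FFT-based accuracy check can be illustrated by the average amplitude figure of merit commonly used in the fast Fourier transform based method (FFTBM), AA = Σ|F(calc − exp)| / Σ|F(exp)|. The plain discrete Fourier transform below is a slow illustrative stand-in for an FFT, and the pressure traces are invented; an actual FFTBM application uses many more sampled points and defined frequency cut-offs.

```python
import cmath

def dft_magnitudes(signal):
    # Discrete Fourier transform magnitudes (O(n^2) stand-in for an FFT).
    n = len(signal)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                    for i, x in enumerate(signal)))
            for k in range(n)]

def average_amplitude(experiment, calculation):
    # FFTBM-style figure of merit: AA = sum|F(err)| / sum|F(exp)|.
    # AA = 0 means perfect agreement; larger AA means larger deviation.
    error = [c - e for c, e in zip(calculation, experiment)]
    return sum(dft_magnitudes(error)) / sum(dft_magnitudes(experiment))

exp_trace = [15.0, 14.2, 12.8, 10.1, 7.5, 6.0, 4.4, 4.0]   # invented data (MPa)
calc_trace = [15.0, 14.0, 13.1, 10.5, 7.9, 5.6, 4.3, 4.1]  # invented calculation
aa = average_amplitude(exp_trace, calc_trace)  # small AA: good agreement
```

Thresholds on such a figure of merit are what allow the "reasonable level of accuracy" of a nodalization to be judged quantitatively rather than by eye.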
The demonstration of the similarity of the phenomena exhibited in test facilities and plant calculations, taking scaling laws into consideration, leads to a qualified nodalization of the plant. It is not possible to establish a correspondence between each input and each output parameter without performing additional specific calculations. That is, however, beyond the scope of the UMAE. The process starts with the experimental and calculated database. Following the identification (e.g., from the CSNI validation matrix) of the physical phenomena involved in the selected transient scenario, relevant thermal-hydraulic aspects are used to evaluate the acceptability of code calculations, the similarity among experimental data, and the similarity between plant calculation results and available data. Statistical treatment is pursued in order to process accuracy values calculated for the various test facilities and to obtain uncertainty ranges with a 95% probability level. These are superimposed as uncertainty bands bracketing the qualified base calculation using the above mentioned qualified nodalization. The scaling of both experimental and calculated data is claimed to be explicitly assessed within the framework of the analysis. In fact, the demonstration of phenomena scalability is necessary for the application of the method and the evaluation of the uncertainty associated with the prediction of the NPP scenario. Comparison of thermal-hydraulic data from experimental facilities of a different scale constitutes the basis of the UMAE checking whether the nodalization and code calculation results are acceptable. An adequate experimental database including the same phenomena as in the selected test scenario of the NPP is needed for the
application of this method. For a successful application it is necessary that the accuracy of the calculations does not dramatically decrease with increasing scale of the experimental facilities. The demonstration that accuracy improves when the dimensions of the facility in question increase would need a sufficiently large database, which is not fully available at present. The extension of the method to the internal assessment of uncertainty (CIAU) can be summarized in two parts (D'Auria and Giannotti, 2000):
(1) Consideration of the plant state: each state is characterized by the values of six relevant quantities (i.e., a hypercube) and the value of the time since the transient start.
(2) Association of an uncertainty with each plant state.
In the case of a PWR the six quantities are: (a) upper plenum pressure; (b) primary loop mass inventory (including the pressurizer); (c) steam generator secondary side pressure; (d) cladding surface temperature at 2/3 of the core active height (measured from the bottom of the active fuel), where the maximum cladding temperature in one horizontal core cross section is expected; (e) core power; and (f) steam generator downcomer collapsed liquid level. If the levels are different in the various steam generators, the largest value is considered. A hypercube and a time interval characterize a unique plant state for the purpose of uncertainty evaluation. All plant states are characterized by a matrix of hypercubes and a vector of time intervals. Let Y be a generic thermal-hydraulic code output plotted versus time. Each point of the curve is affected by a quantity uncertainty and a time uncertainty. Owing to the uncertainty, each point may take any value within the rectangle identified by the quantity and time uncertainties. The value of uncertainty, corresponding to each edge of the rectangle, can be defined in probabilistic terms. This satisfies the requirement of a 95% probability level, acceptable to NRC staff for comparing best estimate predictions of postulated transients with the licensing limits in 10 CFR 50. The basic assumption of the CIAU is that the uncertainty in the code prediction is the same for each plant state. A quantity uncertainty matrix (QUM) and a time uncertainty vector (TUV) can be set up by means of an uncertainty methodology, like UMAE. Possible compensating errors of the computer code used are not taken into account by UMAE and CIAU, and error propagation from input uncertainties to output uncertainties is not performed; that is not the purpose of the method. A high effort is needed to provide the database of deviations between experiment and calculation results in CIAU.
That time- and resource-consuming process has so far been performed only by the University of Pisa, for the codes CATHARE and RELAP5, and the database is available only there. This is the reason why the method has so far been used only by the University of Pisa.
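The plant-state bookkeeping behind the CIAU can be sketched as follows. Everything here is invented for illustration: the interval edges, the two quantities shown (of the six), and the stored uncertainty values are hypothetical and are not taken from the actual Pisa database.

```python
import bisect

# Hypothetical interval edges for two of the six CIAU driving quantities.
EDGES = {
    "upper_plenum_pressure": [2.0, 5.0, 10.0, 16.0],  # MPa, invented
    "primary_mass_inventory": [20.0, 50.0, 80.0],     # % of nominal, invented
}

def hypercube_index(state):
    # Each quantity maps to the interval containing its value; the tuple of
    # interval numbers identifies the hypercube characterizing the plant state.
    return tuple(bisect.bisect(EDGES[q], state[q]) for q in sorted(EDGES))

# Invented quantity uncertainty matrix (QUM): hypercube -> relative uncertainty.
QUM = {(1, 2): 0.08, (2, 1): 0.12}

state = {"upper_plenum_pressure": 7.0, "primary_mass_inventory": 40.0}
idx = hypercube_index(state)  # (1, 2): inventory in interval 1, pressure in interval 2
band = QUM[idx]               # uncertainty associated with this plant state
```

In the full method the index runs over all six quantities plus the time interval, and the stored entries are the accuracy-derived uncertainties extrapolated via the UMAE, which is what makes assembling the database so expensive.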
13.4.3 International validation of uncertainty methods Several international programs have been performed within OECD/CSNI in order to apply uncertainty methods, to compare their applications by different participants and derive conclusions and recommendations.
13.4.3.1 Uncertainty method study

International validation and comparison exercises in applying uncertainty methods have been performed. The first was the uncertainty method study (UMS), with the objectives:
1. To gain insights into differences between features of the methods by:
   - comparing the different methods, step by step, when applied to the same problem;
   - comparing the uncertainties predicted for specified output quantities of interest;
   - comparing the uncertainties predicted with measured values, allowing conclusions to be drawn about the suitability of methods.
2. To inform those who will take decisions on conducting uncertainty analyses, for example, in the light of licensing requirements.
The United Kingdom was given the task of leading the study. The study was performed from May 1995 to June 1997.
Methods compared

The methods compared in the UMS are summarized in Table 13.6. The methods may be divided into three groups according to their basic principles (Wickett et al., 1998):
- The University of Pisa method, the uncertainty method based on accuracy extrapolation (UMAE), extrapolates the accuracy of predictions from a set of integral experiments to the reactor case or experiment being assessed; see the previous Section 13.4.2.
- The other methods rely on identifying uncertain models and data and quantifying and combining the uncertainties in them. They fall into two kinds:
  - The AEA Technology method, which characterizes the uncertainties by "reasonable uncertainty ranges" and attempts to combine these ranges with a bounding analysis.
  - Methods which assign probability distributions to uncertainty ranges of uncertain input parameters and sample the resulting probability density at random in the space defined by the uncertainty ranges (see Section 13.4.2.2). In the UMS, the GRS, IPSN, and ENUSA methods are of this kind. The probability used here expresses imprecise knowledge, not stochastic or random variability.
Chosen experiment

The International Standard Problem (ISP) 26 experiment, LSTF-SB-CL-18, was used. LSTF-SB-CL-18 is a 5% cold leg SB-LOCA experiment conducted in the ROSA-IV Large-Scale Test Facility (LSTF). The LSTF is located at the Tokai Research Establishment of JAERI and is a 1/48 volumetrically scaled, full-height, full-pressure simulator of a Westinghouse type 3423 MWth PWR. The experiment simulates a loss of off-site power with no high-pressure injection (HPIS); the accumulator system initiates coolant injection into the cold legs at a pressure of 4.51 MPa, and the low-pressure injection system initiates at 1.29 MPa. Thus, the experiment represents a beyond-design-basis accident. Because of the ISP, the experiment was already well documented. Although ISP 26 was an open ISP, the participants in the UMS did not use experimental measurements from the test, for example, the break flow or the secondary pressure. In the same way, other experiments performed in the LSTF were not used. This
Table 13.6 Summary of methods compared in the UMS study

Institution | Code version | Method name and type
AEA Technology, UK | RELAP5/MOD3.2 | AEAT method. Phenomena uncertainties selected, quantified by ranges and combined
University of Pisa, Italy | RELAP5/MOD2 cycle 36.04 (IBM version) and CATHARE 2 version 1.3U rev 5 | Uncertainty method based on accuracy extrapolation (UMAE). Accuracy in calculating similar integral tests is extrapolated to the plant
Gesellschaft für Anlagen- und Reaktorsicherheit (GRS), Germany | ATHLET Mod 1.1 Cycle A | GRS method. Phenomena uncertainties quantified by ranges and probability distributions (PDs) and combined
Institut de Protection et de Sûreté Nucléaire (IPSN), France | CATHARE 2 version 1.3U rev 5 | IPSN method. Phenomena uncertainties quantified by ranges and PDs and combined
Empresa Nacional del Uranio, SA (ENUSA), Spain | RELAP5/MOD3.2 | ENUSA method. Phenomena uncertainties quantified by ranges and PDs and combined
means not allowing the related measurements to influence the choice of input uncertainty ranges and probability distributions of the input uncertainties. Submissions included a written statement of the justification used for the input uncertainty ranges and distributions, and were reviewed at workshops. All other assumptions made were listed and reviewed.

Comparison of calculated uncertainty ranges

The participants were asked to make uncertainty statements, based on their calculation results, for:
(1) Functions of time:
   - pressurizer pressure;
   - primary circuit mass inventory;
   - rod surface temperature for B18 rod (4, 4) position 8 (TW359 TWE-B18448).
(2) Point quantities:
   - first peak clad temperature;
   - second peak clad temperature;
   - time of overall peak clad temperature (the higher of the two);
   - minimum core pressure difference at DP 50 DPE300-PV.

As a result of the short time scale of the UMS and limited funding, some of the uncertainty ranges derived from experimental data for the calculations were less well refined and, as a consequence, wider than would have been the case, for example, if the calculations were for a plant safety analysis. This affected the AEAT and ENUSA CCFL ranges, for example.
Functions of time

The uncertainty ranges calculated for pressurizer pressure, primary inventory, and hot rod surface temperature for the LSTF B18 rod (4, 4) at position 8 are shown in Figs. 13.12–13.14.
Clad temperature ranges

The AEAT and ENUSA predictions are remarkably similar. The same code (RELAP5), the same base input deck, and mostly the same input uncertainties were used. However, they applied different methods, i.e., bounding (AEAT) versus statistical treatment (ENUSA) of the uncertainties, and the numbers of input uncertainties are different (7 by AEAT and 25 by ENUSA). The two applications of the Pisa (UMAE) method using different computer codes (RELAP and CATHARE) are broadly similar. The clad temperature ranges for the second dry-out are similar. The ranges for the first dry-out calculated by CATHARE are narrower because the base case calculation did not predict the dry-out. However, the UMAE still generated a successful uncertainty range for this quantity because of the first dry-out measured in those experiments considered for accuracy extrapolation to the LSTF experiment. Of the probabilistic methods, IPSN gave the narrowest ranges. This is attributed to the choice of parameters made and the parameter ranges used. An actuation of the CCFL option in the CATHARE code (at the upper core plate and the inlet plena of the steam generators) was needed to predict the first dry-out. The large uncertainty ranges for clad temperature calculated by AEAT, ENUSA, and GRS prompted discussion among the participants about their reasons. In the case of AEAT and ENUSA this may be partly due to unrealistically large uncertainty ranges for CCFL parameters, agreed upon in order to complete the study on time. When AEAT discounted CCFL uncertainty completely, the calculated uncertainty range for peak clad temperature changed from 584-1142 K to 584-967 K. Another contribution may come from the RELAP5 version used (MOD 3.2), which calculates higher peak clad temperatures than MOD 2 and MOD 3.2.2; see under "follow-on activity." The measured maximum clad temperature during the second heat-up is well below that of the first.
In the GRS calculations, many of the 99 calculated time histories exhibit the contrary, namely a maximum during the second heat-up. The uncertainty of the maximum temperature during the second heat-up is about twice that during the first. In some code runs a partial dry-out occurs due to fluid stagnation in the rod bundle, which causes an earlier second heat-up and, as a consequence, a larger increase of the clad temperature. The large uncertainty is striking, and the sensitivity measures indicate which of the input uncertainties are predominantly important. These sensitivities are presented in Volume 2 of Wickett et al. (1998). The main contributions to uncertainty come from the critical discharge model and the drift in the heater rod bundle. Another contribution worth mentioning comes from the cross section of the bypass between upper downcomer and upper plenum. As a consequence, an improved state of knowledge about these parameters would most effectively reduce the striking uncertainty of the computed rod surface temperature in the range of the second core heat-up.

[Figure: six panels (AEAT, ENUSA, IPSN, GRS, Pisa (RELAP), Pisa (CATHARE)) of pressure (MPa) versus time (s), each showing the experimental data (Exp) with lower (LB) and upper (UB) uncertainty bounds.]
Fig. 13.12 Uncertainty ranges and data for pressurizer pressure. From Wickett, T., Sweet, D., Neill, A., D'Auria, F., Galassi, G., Belsito, S., Ingegneri, M., Gatta, P., Glaeser, H., Skorek, T., Hofer, E., Kloos, M., Chojnacki, E., Ounsy, M., Lage Perez, C., Sánchez Sanchis, J.I., 1998. Report of the uncertainty methods study for advanced best estimate thermal hydraulic code applications, Volume 1 (Comparison) and Volume 2 (Report by the participating institutions). NEA/CSNI/R(97)35, Paris, France.

[Figure: six panels (AEAT, ENUSA, IPSN, GRS, Pisa (RELAP), Pisa (CATHARE)) of mass (kg) versus time (s), each showing the experimental data (Exp) with lower (LB) and upper (UB) uncertainty bounds.]
Fig. 13.13 Uncertainty ranges and data for primary mass. From Wickett et al. (1998), NEA/CSNI/R(97)35, Paris, France.

[Figure: six panels (AEAT, ENUSA, IPSN, GRS, Pisa (RELAP), Pisa (CATHARE)) of temperature (K) versus time (s), each showing the experimental data (Exp) with lower (LB) and upper (UB) uncertainty bounds.]
Fig. 13.14 Uncertainty ranges and data for hot rod temperature. From Wickett et al. (1998), NEA/CSNI/R(97)35, Paris, France.
UMS follow-on activity
After the closure of the UMS, and after the report by Wickett et al. (1998) was issued, the University of Pisa performed comparison calculations of experiment LSTF-SB-CL-18 using different versions of the RELAP5 code, i.e., MOD 2, MOD 3.2, and MOD 3.2.2. MOD 2 was used by the University of Pisa, and MOD 3.2 by AEA Technology as well as ENUSA in this study. It turned out that MOD 3.2 calculated a 170 K higher peak clad temperature than MOD 2 and MOD 3.2.2 using the same input deck. This may contribute to the relatively high upper limit of the uncertainty ranges calculated by AEAT and ENUSA. That explanation is also in agreement with the AEAT peak clad temperature of 787 K at 300 s for their reference calculation using nominal values for the input parameters, without calculating the first heat-up; the measured second peak is 610 K at 500 s. A new application was performed by GRS using revised probability distributions for the most important uncertain parameters identified by the importance measures in the UMS analysis: the contraction coefficient of critical discharge flow and the drift in the heater rod bundle. A newer ATHLET version, Mod 1.2 Cycle A, was used instead of Mod 1.1 Cycle A. The result was a shorter duration of the second heat-up and lower peak clad temperatures during the second heat-up (Fig. 13.15). Due to these findings, smaller differences between the uncertainty ranges of different participants are expected.
[Figure: cladding temperature at level 8 (K) versus time (s), showing the two-sided lower and upper tolerance limits, the reference calculation, and the measured minimum and maximum values.]
Fig. 13.15 Revised calculated uncertainty range compared with measured minimum and maximum values of rod clad temperature by GRS.
Conclusions and recommendations of UMS study
Five uncertainty methods for advanced best estimate thermal hydraulic codes have been compared. Most of the methods identify and combine input uncertainties. Three of these, the GRS, IPSN, and ENUSA methods, use probability distributions, and one, the AEAT method, performs a bounding analysis. One of the methods, the University of Pisa method, is based on accuracy extrapolation from integral experiments. To use this method, stringent criteria on the code and the experimental database must be met. Where the predictions of the methods differ, the differences have been accounted for in terms of the assumptions of the methods and the input data used. The calculated ranges bound the experimental results, with some exceptions; the possible causes of these discrepancies have been identified.

Comparison between the calculations
The differences between the predictions of the methods come from a combination of:
– the method used;
– the underlying accuracy of the reference calculation and the modeling used;
– the completeness of the identification and selection of uncertainties;
– the conservatism of the input (e.g., uncertainty ranges, probability distributions) used;
– in the Pisa method, the optimization of the nodalization and possibly the different number of experiments investigated (5 experiments with CATHARE, 10 experiments with RELAP).
Of these, differing degrees of conservatism of the input, particularly where data are sparse, probably account for the major differences between the specified uncertainty ranges. For example, critical two-phase flow data from LSTF were not used, and data for the geometry of the LSTF orifice were not available. The ideal is a specified uncertainty range that fully reflects the accuracy of the underlying modeling. The input uncertainties mainly responsible for the differences are probably those identified as important by the sensitivity studies of the methods: choked flow and CCFL (AEAT and ENUSA), and choked flow and drift in the heater rod bundle (GRS). Further work to refine the input uncertainty ranges of these parameters will affect the calculated uncertainty ranges; an example performed by GRS is presented here. It follows that how the methods are applied is very important. In light of these findings, smaller differences between the uncertainty ranges of different participants are expected in the future. In all cases appropriate knowledge, skill, experience, and quality standards must be applied.
Other conclusions of UMS
For some of the methods described here, the present study was the first full application. For all of them it constituted a pilot study that allowed a better understanding of the basic assumptions of the method. The information gained in the UMS about the methods and how they can be applied can be used to inform decisions on the conduct of uncertainty analyses, for example, in the light of licensing requirements.
13.4.3.2 Best estimate methods—uncertainty and sensitivity evaluation (BEMUSE) programme

Objectives of BEMUSE
The high-level objectives of the work are:
– to evaluate the practicability, quality, and reliability of best-estimate (BE) methods including uncertainty and sensitivity evaluation in applications relevant to nuclear reactor safety;
– to develop common understanding from the use of those methods;
– to promote and facilitate their use by the regulatory bodies and the industry.
Operational objectives include an assessment of the applicability of best estimate and uncertainty and sensitivity methods to ITs and their use in reactor applications. The justification for such an activity is that uncertainty methods applied to BE codes exist and are used by research organizations, vendors, technical safety organizations, and regulatory authorities. In recent years, the increased use of BE codes with uncertainty and sensitivity evaluation for design basis accident (DBA) analysis by itself shows the safety significance of the activity. Uncertainty methods are used worldwide in the licensing of LOCAs for power uprates of existing plants, for new reactors, and for new reactor developments. The expected end users of the results are industry, safety authorities, and technical safety organizations.
Main steps of BEMUSE
The programme was divided into two main steps, each consisting of three phases. The first step was an uncertainty and sensitivity analysis related to the LOFT L2-5 test, and the second step was the same analysis for a NPP LB-LOCA. The programme started in January 2004 and was finished in September 2010.
– Phase 1: Presentation "a priori" of the uncertainty evaluation methodology to be used by the participants; lead organization: IRSN, France.
– Phase 2: Reanalysis of the International Standard Problem ISP-13 exercise, posttest analysis of the LOFT L2-5 large cold leg break test calculation; lead organization: University of Pisa, Italy (CSNI, 2006).
– Phase 3: Uncertainty evaluation of the L2-5 test calculations, first conclusions on the methods and suggestions for improvement; lead organization: CEA, France (CSNI, 2007).
– Phase 4: Best-estimate analysis of a NPP LB-LOCA; lead organization: UPC Barcelona, Spain (CSNI, 2008).
– Phase 5: Sensitivity analysis and uncertainty evaluation for the NPP LB-LOCA, with or without methodology improvements resulting from phase 3; lead organization: UPC Barcelona, Spain (CSNI, 2009).
– Phase 6: Status report on the area, classification of the methods, conclusions and recommendations; lead organization: GRS, Germany (CSNI, 2011).
The participants of the different phases of the programme and the computer codes used are given in Table 13.7.
Used methods
Two classes of uncertainty methods were applied. One propagates "input uncertainties," and the other extrapolates "output uncertainties."
Table 13.7 BEMUSE participants and used codes

No.  Organization        Country          Code                 Participation in phases
1    AEKI                Hungary          ATHLET 2.0A          1, 2, 4, 5
2    CEA                 France           CATHARE2 V2.5_1      1, 2, 3, 4, 5
3    EDO "Gidropress"    Russia           TECH-M-97            2, 4, 5
4    GRS                 Germany          ATHLET 1.2C/2.1B     1, 2, 3, 4, 5
5    IRSN                France           CATHARE2 V2.5_1      1, 2, 3, 4, 5
6    JNES                Japan            TRACE ver4.05        1, 2, 3, 4, 5
7    KAERI               South Korea      MARS 2.3/3.1         2, 3, 4, 5
8    KINS                South Korea      RELAP5 mod3.3        1, 2, 3, 4, 5
9    NRI-1               Czech Republic   RELAP5 mod3.3        2, 3, 4, 5
10   NRI-2               Czech Republic   ATHLET 2.0A/2.1A     1, 2, 3, 5
11   PSI                 Switzerland      TRACE v4.05 5rc3     1, 2, 3, 4, 5
12   UNIPI-1             Italy            RELAP5 mod3.2        1, 2, 3, 4, 5
13   UNIPI-2             Italy            CATHARE2 V2.5_1      4, 5
14   UPC                 Spain            RELAP5 mod3.3        2, 3, 4, 5

From CSNI, 2011. BEMUSE phase VI report, status report on the area, classification of the methods, conclusions and recommendations. NEA/CSNI/R(2011)4, Paris, France.
The main characteristic of the statistical methods based upon the propagation of input uncertainties is to assign probability distributions to these input uncertainties and to sample values from these distributions for each code calculation to be performed. The number of code calculations is independent of the number of input uncertainties; it depends only on the defined probability content (percentile) and confidence level, and is given by Wilks' formula (see Section 13.4.2.2). By performing code calculations with varied values of the uncertain input parameters, and consequently calculating results that depend on these variations, the uncertainties are propagated through the calculations up to the results. The uncertainties are due to imprecise knowledge and to the approximations of the computer codes simulating thermal-hydraulic physical behavior. The methods based upon extrapolation of output uncertainties need relevant experimental data to be available and extrapolate the differences between code calculations and experimental data at different reactor scales (see Section 13.4.2.3). The main difference of this method compared with the statistical methods is that there is no need
to select a reasonable number of uncertain input parameters and provide uncertainty ranges (or distribution functions) for each of these variables. The uncertainty is determined only at the level of the calculation results, by extrapolation of the deviations between measured data and calculation results. The two principles have advantages and drawbacks. The first method, propagating input uncertainties, is associated with order statistics. It requires selecting a reasonable number of variables and, for each one, an associated range of variation and possibly a distribution function. The selection of parameters and their distributions must be justified. Uncertainty propagation occurs through calculations of the code under investigation. The "extrapolation on the outputs" method needs "relevant experimental data" to be available in order to derive uncertainties. The sources of the deviations between calculation and data cannot be identified by this method. The method seeks to avoid engineering judgment as much as possible. In BEMUSE, the majority of participants used the statistical approach associated with Wilks' formula. Only the University of Pisa used its method extrapolating output uncertainties, the CIAU method, Code with (the capability of) Internal Assessment of Uncertainty. The reason why other participants do not use this method is the high effort needed to build the CIAU database of deviations between experiment and calculation results. That time- and resource-consuming process has so far been performed only by the University of Pisa, for the codes CATHARE and RELAP5, and the database is available only there.
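The run counts prescribed by Wilks' formula can be checked with a few lines of code. The sketch below is illustrative (not taken from the cited reports); it covers one-sided tolerance limits of arbitrary order and the first-order two-sided tolerance interval:

```python
from math import comb

def wilks_confidence(n, beta=0.95, order=1, two_sided=False):
    """Confidence that Wilks-type tolerance limits from n code runs cover
    at least the probability content beta of the output distribution.
    One-sided: the limit is the order-th largest of the n sampled outputs.
    Two-sided (first order): the limits are the sample minimum and maximum."""
    if two_sided:
        # First-order two-sided formula: 1 - beta^n - n*(1-beta)*beta^(n-1)
        return 1.0 - beta**n - n * (1.0 - beta) * beta**(n - 1)
    # The order-th largest run bounds the beta-quantile iff at least
    # `order` runs exceed it; that count is Binomial(n, 1 - beta).
    return 1.0 - sum(comb(n, k) * (1.0 - beta)**k * beta**(n - k)
                     for k in range(order))

def wilks_min_runs(beta=0.95, gamma=0.95, order=1, two_sided=False):
    """Smallest number of code runs reaching confidence level gamma."""
    n = 2 if two_sided else max(order, 1)
    while wilks_confidence(n, beta, order, two_sided) < gamma:
        n += 1
    return n
```

With beta = gamma = 0.95 this reproduces the familiar values: 59 runs for a one-sided tolerance limit and 93 runs for a first-order two-sided tolerance interval.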
Selected results

Application to LOFT L2-5 experiment
Based on procedures developed at the University of Pisa, a systematic qualitative and quantitative accuracy evaluation of the code results was applied to the calculations performed by the participants for LOFT test L2-5 in BEMUSE phase 2. The test simulated a 2 × 100% (double-ended) cold leg break; LOFT was an experimental facility with a nuclear core. An FFTBM analysis was performed to quantify the deviations between code predictions and measured experimental data (CSNI, 2006). The participants carefully followed the proposed criteria for qualitative and quantitative evaluation at the different steps of the code assessment process: the development of the nodalization, the evaluation of the steady-state results, and the comparison of measured and calculated time trends. All participants fulfilled the criteria with regard to agreement of geometry data and calculated steady-state values. The uncertainty bands calculated for the four single-valued output parameters, first PCT, second PCT, time of accumulator injection, and time of complete quenching, for the LOFT L2-5 test are presented in Fig. 13.16 (CSNI, 2007). It was agreed to submit the 5/95 and 95/95 estimations of the one-sided tolerance limits, that is, to determine both tolerance limits with a 95% confidence level each. They are ranked by increasing band width. It was up to the participants to select their uncertain input parameters.
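The FFTBM condenses the deviation between calculation and data into the average amplitude AA = Σ|F(err)| / Σ|F(exp)|, the error spectrum normalized by the experimental spectrum; AA = 0 means perfect agreement and smaller values mean better accuracy. A stdlib-only sketch (a plain O(N²) DFT stands in for the FFT, and the sample signal is illustrative, not LOFT data):

```python
import cmath
import math

def _dft_magnitudes(x):
    # Discrete Fourier transform magnitudes |F(f)| of a sampled signal
    # (an FFT would be used in practice, hence the method's name).
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * f * t / n)
                    for t in range(n)))
            for f in range(n)]

def average_amplitude(exp, calc):
    """FFTBM-style average amplitude of the calculation error:
    AA = sum_f |F(calc - exp)| / sum_f |F(exp)|."""
    err = [c - e for c, e in zip(calc, exp)]
    return sum(_dft_magnitudes(err)) / sum(_dft_magnitudes(exp))

# Illustrative: a calculation with a small constant offset from the "data"
measured = [1.0, 2.0, 3.0, 2.0]
print(average_amplitude(measured, measured))                     # -> 0.0
print(average_amplitude(measured, [m + 0.1 for m in measured]))
```

Because the measure is linear in the error, doubling the offset doubles AA; the full FFTBM additionally weights contributions per variable and applies acceptability thresholds, which are omitted here.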
[Figure: four panels, one per single-valued output (first PCT, second PCT, time of accumulator injection, time of complete quenching); for each organisation the lower uncertainty bound, reference calculation, upper uncertainty bound, and experimental value are plotted, ranked by increasing band width.]
Fig. 13.16 Uncertainty analysis results of LOFT L2-5 test calculations for four single-valued output parameters, first PCT, second PCT, time of accumulator injection and time of complete quenching compared with experimental data. From CSNI, 2007. BEMUSE phase III report, uncertainty and sensitivity analysis of the LOFT L2-5 test. NEA/CSNI/R(2007)4, Paris, France; CSNI, 2011. BEMUSE phase VI report, status report on the area, classification of the methods, conclusions and recommendations. NEA/CSNI/R(2011)4, Paris, France.
The following observations can be made:
– First PCT: The spread of the uncertainty bands is within 138–471 K. The difference among the upper 95%/95% uncertainty bounds, which is the important quantity to compare with the regulatory acceptance criterion, is up to 150 K, and all but one participant cover the experimental value. One participant (UPC) does not envelop the experimental PCT because its lower bound is too high. Two reasons explain this result: among all the participants UPC has the highest reference value, and its band width is among the narrowest. KINS attributes its low lower uncertainty bound to a too high value of the maximum gap conductance of the fuel rod.
– Second PCT: In this case one participant (PSI) does not envelop the experimental PCT, due to a too low upper bound. The reasons are roughly similar to those given for the first PCT: PSI, like several participants, calculates a too low reference value, and in addition its upper uncertainty band is extremely narrow. The spread of the uncertainty bands is within 127–599 K. The difference among the upper 95%/95% uncertainty bounds is up to 200 K.
– Time of accumulator injection: Four participants out of 10 calculate too low upper bounds (KINS, PSI, KAERI, and UPC), whereas CEA finds an upper bound just equal to the experimental value. These results are related to the prediction of when the cold leg pressure reaches the accumulator pressure of 4.29 MPa. The band widths vary within the range 0.7–5.1 s for all participants except UNIPI, which finds a much larger band, equal to 15.5 s. This is mainly due to the consideration of the time error of the pressure transient calculated by UNIPI.
– Time of complete quenching: All the uncertainty bands envelop the experimental value, even if the upper bound is close to the experimental value for one participant. The width of the uncertainty range varies from 10 s to more than 78 s.
If the core is not yet quenched at the end of the calculation, as is the case for two participants (KAERI, KINS), or if there are several code failures before complete quenching (IRSN), the upper bound is plotted at 120 s in Fig. 13.16.
Suggestions for improvement of the methods were not proposed as a result of the exercise; however, recommendations for the proper application of the statistical method were given, see under "Conclusions and recommendations of BEMUSE programme."

Application to Zion NPP
The scope of phase 4 was the simulation of a LB-LOCA in a NPP using the experience gained in phase 2. The reference calculation results were the basis for the uncertainty evaluation to be performed in the next phase. The objectives of the activity were (1) to simulate a LB-LOCA, reproducing the phenomena associated with the scenario, and (2) to establish a common, well-known basis for the subsequent comparison of uncertainty evaluation results among different methodologies and codes (CSNI, 2008). The activity for the Zion NPP was similar to the previous phase 2 for the LOFT experiment. The UPC team together with UNIPI provided the database for the plant, including RELAP5 and TRACE input decks. Geometrical data, material properties, pump information, steady-state values, initial and boundary conditions, as well as the sequence of events were provided. The nodalization generally comprised more hydraulic nodes, and more axial nodes in the core, than the LOFT applications.
[Figure: maximum cladding temperature (K) versus time (s) for the reference calculations of all participants: AEKI (ATHLET 2.0A), CEA (CATHARE V2.5_1 mod6.1), EDO (Tech-M-97), GRS (ATHLET 2.1A), IRSN (CATHARE V2.5_1 mod6.1), JNES (TRACE ver4.05), KAERI (MARS), KINS (RELAP5/MOD3.3), NRI-1 (RELAP5/MOD3.3), PSI (TRACE v5.0rc3), UNIPI-1 (RELAP5/MOD3.3), UNIPI-2 (CATHARE V2.5_1), UPC (RELAP5/MOD3.3).]
Fig. 13.17 Calculated maximum cladding temperature versus time for the Zion NPP. From CSNI, 2008. BEMUSE phase IV report, simulation of a LB-LOCA in Zion nuclear power plant. NEA/CSNI/R(2008)6, Paris, France; CSNI, 2011. BEMUSE phase VI report, status report on the area, classification of the methods, conclusions and recommendations. NEA/CSNI/R(2011)4, Paris, France.
Results of reference calculations
The base case or reference calculations are the basis for the uncertainty evaluation. Fig. 13.17 shows the calculated maximum cladding temperatures versus time. The highest difference in calculated maximum PCT between the participants is 167 K (EDO "Gidropress": 1326 K, KAERI: 1159 K), which is lower than the differences in the BEMUSE phase 2 calculations of the LOFT test.

Single parameter sensitivity
A list of sensitivity calculations was proposed by UPC and performed by the participants to study the influence of different parameters, such as material properties and initial and boundary conditions, on the relevant output parameters of the scenario under investigation, i.e., a LB-LOCA of a cold leg (CSNI, 2008). Fig. 13.18 contains the differences in PCT as well as the calculated differences in reflood time ΔtReflood, meant as the difference in total quench time, for the minimum and maximum values of the input parameter range. The figure shows the largest difference in calculated cladding temperature for the same variation of an input, the maximum linear power, varied by 7.6% of the nominal value. The differences are up to 100 K, mainly between different codes, with the exception of UPC using RELAP5 mod 3.3.
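The one-at-a-time procedure behind Fig. 13.18 (set a single input to the lower and upper end of its range, rerun, record the change in the response) can be sketched as follows; `toy_run` is a hypothetical stand-in for a SYS TH code run, and the response surface is illustrative only:

```python
def one_at_a_time(run_case, nominal, ranges, response="pct"):
    """For each parameter, rerun the case at the lower and the upper end
    of its range, all other inputs at nominal, and record the change in
    the selected response relative to the nominal case."""
    base = run_case(nominal)[response]
    table = {}
    for name, (lo, hi) in ranges.items():
        deltas = []
        for value in (lo, hi):
            inputs = dict(nominal)
            inputs[name] = value
            deltas.append(run_case(inputs)[response] - base)
        table[name] = tuple(deltas)  # (delta at lower end, delta at upper end)
    return table

def toy_run(inputs):
    # Hypothetical response surface: PCT rises with maximum linear power.
    return {"pct": 1200.0 + 800.0 * (inputs["max_linear_power"] - 1.0)}

# Vary the maximum linear power by +/-7.6% of nominal, as in Fig. 13.18
sens = one_at_a_time(toy_run,
                     nominal={"max_linear_power": 1.0},
                     ranges={"max_linear_power": (0.924, 1.076)})
```

In BEMUSE this driver would wrap a full plant calculation per variation; only the bookkeeping is shown here.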
Results of uncertainty analysis
Phase 5 dealt with a power plant (CSNI, 2009), like phase 4. No documentation was available concerning the uncertainties of the state of the plant, initial and boundary conditions, fuel properties, etc. To resolve this situation, it was agreed to provide
[Figure: for each participant (CEA, EDO, GRS, IRSN, JNES, KAERI, KINS, NRI-1, PSI, UNIPI-1, UNIPI-2, UPC), bars of ΔPCT (K) and ΔtReflood (s) at the lower (lr) and upper (ur) end of the maximum linear power range; annotated spreads of ΔPCT = 60 K and ΔPCT = 103 K appear in the chart.]
Fig. 13.18 Largest effect of same variation of maximum linear power by 7.6% on differences of participant's calculated PCT and ΔtReflood for NPP Zion. From CSNI, 2011. BEMUSE phase VI report, status report on the area, classification of the methods, conclusions and recommendations. NEA/CSNI/R(2011)4, Paris, France.
common information about geometry, core power distribution, and modeling. In addition, a list of common input parameters with their uncertainties was prepared. This was done because the phase 3 results, calculating the LOFT experiment, had shown quite a significant dispersion of the uncertainty ranges among the participants. The CEA, GRS, and UPC teams prepared this list of common uncertain input parameters, with their distribution types and ranges, for the NPP. The use of these parameters was strongly recommended whenever a statistical approach was followed. Not all participants used all proposed parameters; on the other hand, some considered only these given parameters, without any model uncertainty. The list is shown in Table 13.8. The main results, the calculated uncertainty bands of the single-valued code result maximum PCT, can be seen in Fig. 13.19. This temperature is defined as the maximum fuel cladding temperature value, independently of the axial or radial location in the active core, during the whole transient. It is the main parameter to be compared with its regulatory acceptance limit in LOCA licensing analyses. For comparison purposes it was agreed to submit the 5%/95% and 95%/95% estimations of the one-sided tolerance limits, that is, to determine both tolerance limits with a 95% confidence level each. Comparing the results for the maximum PCT, there is an overlap region of only 17 K (i.e., between 1221 and 1238 K); this region is very small. EDO, JNES, and PSI considered only the proposed common input parameters from Table 13.8, without model uncertainties.
Table 13.8 Common input parameters associated with a specific uncertainty, range of variation, and type of probability density function

Phenomenon: Flow rate at the break
– Containment pressure: [0.85; 1.15]; uniform; multiplier.

Phenomenon: Fuel thermal behavior
– Initial core power: [0.98; 1.02]; normal; multiplier affecting both nominal power and the power after scram.
– Peaking factor (power of the hot rod): [0.95; 1.05]; normal; multiplier.
– Hot gap size (whole core except hot rod): [0.8; 1.2]; normal; multiplier, includes uncertainty on gap and cladding conductivities.
– Hot gap size (hot rod): [0.8; 1.2]; normal; multiplier, includes uncertainty on gap and cladding conductivities.
– Power after scram: [0.92; 1.08]; normal; multiplier.
– UO2 conductivity: [0.9; 1.1] (Tfuel < 2000 K), [0.8; 1.2] (Tfuel > 2000 K); normal; multiplier, uncertainty depends on temperature.
– UO2 specific heat: [0.98; 1.02] (Tfuel < 1800 K), [0.87; 1.13] (Tfuel > 1800 K); normal; multiplier, uncertainty depends on temperature.

Phenomenon: Pump behavior
– Rotation speed after break for intact loops: [0.98; 1.02]; normal; multiplier.
– Rotation speed after break for broken loop: [0.9; 1.1]; normal; multiplier.

Phenomenon: Data related to injections
– Initial accumulator pressure: [−0.2; +0.2] MPa; normal.
– Friction form loss in the accumulator line: [0.5; 2.0]; log-normal.
– Accumulators initial liquid temperature: [−10; +10] °C; normal.
– Flow characteristic of LPIS: [0.95; 1.05]; normal; multiplier.

Phenomenon: Pressurizer
– Initial level: [−10; +10] cm; normal.
– Initial pressure: [−0.1; +0.1] MPa; normal.
– Friction form loss in the surge line: [0.5; 2]; log-normal; multiplier.

Phenomenon: Initial conditions, primary system
– Initial intact loop mass flow rate: [0.96; 1.04]; normal; multiplier (this parameter can be changed through the pump speed or through pressure losses in the system).
– Initial intact loop cold leg temperature: [−2; +2] K; normal (this parameter can be changed through the secondary pressure, heat transfer coefficient, or area in the U-tubes).
– Initial upper-head mean temperature: [Tcold; Tcold + 10 K]; uniform (this parameter refers to the "mean temperature" of the volumes of the upper plenum).

From CSNI, 2009. BEMUSE phase V report, uncertainty and sensitivity analysis of a LB-LOCA in Zion nuclear power plant. NEA/CSNI/R(2009)13, Paris, France; CSNI, 2011. BEMUSE phase VI report, status report on the area, classification of the methods, conclusions and recommendations. NEA/CSNI/R(2011)4, Paris, France.
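For the statistical methods, each code run uses one random draw from every distribution in Table 13.8. The sketch below samples three of the parameters; the translation of a normal range [a, b] into distribution parameters (here: read as the central 95% interval) is an assumption for illustration, not a prescription from the report:

```python
import math
import random

def sample_inputs(n_runs, seed=1):
    """Draw n_runs input sets for a few Table 13.8 parameters.
    Assumed conventions: a normal range [a, b] is read as the central
    95% interval (mean (a+b)/2, sigma (b-a)/3.92); the log-normal range
    [0.5, 2.0] is read the same way on the logarithmic scale."""
    rng = random.Random(seed)
    runs = []
    for _ in range(n_runs):
        runs.append({
            # Containment pressure: uniform multiplier on [0.85, 1.15]
            "containment_pressure": rng.uniform(0.85, 1.15),
            # Initial core power: normal multiplier, range [0.98, 1.02]
            "initial_core_power": rng.gauss(1.0, 0.04 / 3.92),
            # Accumulator line form loss: log-normal multiplier, [0.5, 2.0]
            "acc_line_form_loss": rng.lognormvariate(0.0, math.log(2.0) / 1.96),
        })
    return runs

# 59 runs give one-sided 95%/95% tolerance limits by Wilks' formula
inputs = sample_inputs(59)
```

Each of the 59 dictionaries would then be written into a code input deck and run; the tolerance limits are read off the ordered sample of the resulting outputs.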
[Figure: for each participant (JNES, NRI-1, KAERI, PSI, GRS, EDO, UNIPI-2, CEA, KINS, UPC, IRSN, UNIPI-1, AEKI, NRI-2), the lower limit (5/95), reference case, and upper limit (95/95) of the maximum PCT (K).]
Fig. 13.19 Calculated uncertainty bands of the maximum PCT of Zion NPP LB-LOCA. From CSNI, 2009. BEMUSE phase V report, uncertainty and sensitivity analysis of a LB-LOCA in Zion nuclear power plant. NEA/CSNI/R(2009)13, Paris, France; CSNI, 2011. BEMUSE phase VI report, status report on the area, classification of the methods, conclusions and recommendations. NEA/CSNI/R(2011)4, Paris, France.
Results of sensitivity analysis
Sensitivity analysis is here a statistical procedure to determine the influence of the uncertain input parameters on the uncertainty of the output parameters (the results of the code calculations). Each participant using the statistical approach provided a table of the most relevant parameters for four single-valued output parameters and two time trends (maximum cladding temperature and upper-plenum pressure), based on their influence measures. To synthesize and compare these influences, the results are grouped into two main "macro" responses. The macro response for core cladding temperature comprises the first, second, and maximum PCT, the maximum cladding temperature as a function of time before quenching, and the time of complete core quenching. The summary of the total ranking by the participants is shown in Fig. 13.20. Such information is useful for further uncertainty analyses of a LB-LOCA. Highly ranked parameters are fuel pellet heat conductivity, containment pressure, power after scram, critical heat flux, and film boiling heat transfer.
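A common influence measure behind such rankings is a rank correlation coefficient between the sampled values of an input and the resulting output; the participants used various measures, so the Spearman-type sketch below is only one illustrative choice (stdlib only, with the usual average-rank handling of ties):

```python
def _ranks(values):
    # Ranks starting at 1; tied values receive their average rank.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2.0 + 1.0
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Values near +1 or -1 indicate a strongly monotonic relation."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical samples: break discharge coefficient vs. resulting PCT
cc = [0.92, 1.05, 0.99, 1.10, 0.95]
pct = [1185.0, 1224.0, 1201.0, 1238.0, 1193.0]
influence = spearman(cc, pct)  # -> 1.0 (perfectly monotonic toy sample)
```

In an actual analysis this coefficient would be computed between each sampled input column and each output of interest over the full set of code runs, and the parameters ranked by its magnitude.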
Conclusions and recommendations of BEMUSE programme
The methods used in this activity are considered to be mature for application, including in licensing processes. Differences are observed in the application of the methods; consequently, uncertainty analyses of the same task lead to different results. These differences raise concerns about the validity of the results obtained when applying uncertainty methods to system analysis codes. The differences may stem from the application of different codes and uncertainty methods. In addition, differences between applications of statistical methods may mainly be due to different input
Fig. 13.20 Total ranking of the influence of input uncertainties on cladding temperature per uncertain input parameter for Zion NPP LB-LOCA. From CSNI, 2009. BEMUSE phase V report, uncertainty and sensitivity analysis of a LB-LOCA in Zion nuclear power plant. NEA/CSNI/R(2009)13, Paris, France; CSNI, 2011. BEMUSE phase VI report, status report on the area, classification of the methods, conclusions and recommendations. NEA/ CSNI/R(2011)4, Paris, France.
uncertainties, their ranges, and distributions. Differences between CIAU applications may stem from the different databases used for the analysis. However, as was shown by all BEMUSE phases from 2 to 5, significant differences were observed between the base or reference calculation results. Furthermore, differences were seen in the results using the same values of single input parameter variations (e.g., in BEMUSE phase 4, see Fig. 13.18). When a conservative safety analysis method is used, it is claimed that all uncertainties are bounded by conservative assumptions. Differences in the calculation results of conservative codes would also be seen, due to the user effect, such as different nodalizations and code options, as for the best estimate codes used in the BEMUSE programme. Differences of code calculation results have been observed for a long time and have been experienced in all International Standard Problems where different participants calculated the same experiment or a reactor event. The main reason is that the user of a computer code has a big influence on how the code is used. The objective of an uncertainty analysis is to quantify the uncertainties of a code result. An uncertainty analysis cannot compensate for code deficiencies. A necessary precondition is that the code is suitable to calculate the scenario under investigation. Consequently, before performing an uncertainty analysis, one should concentrate first of all on the reference calculation. Its quality is decisive for the quality of the uncertainty analysis. More lessons were learnt from the BEMUSE results. These are:
– The number of code runs may be increased to 150–200 instead of the 59 code runs needed when using Wilks’ formula at the first order for the estimation of a one-sided 95%/95% tolerance limit. More precise results are obtained, which is especially advisable if the upper tolerance limit approaches regulatory acceptance criteria (e.g., 1200°C PCT).
– For a proper use of Wilks’ formula, the sampling of the input parameters should be simple random sampling (SRS). Other types of parameter selection procedures like Latin hypercube sampling or importance sampling may therefore not be appropriate for tolerance limits.
– Another important point is that all the code runs should be successful. If a number of code runs fail, the number of code runs should be increased so that applying Wilks’ formula is still possible. That is the case supposing that the failed code runs correspond to the highest values of the output (e.g., PCT).
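The run counts quoted above follow from Wilks’ formula for distribution-free tolerance limits. As an illustration (a standard order-statistics result, not code from the BEMUSE reports), the minimum number of runs can be computed as:

```python
from math import comb

def wilks_n(coverage=0.95, confidence=0.95, order=1):
    """Smallest number of runs N such that the `order`-th largest output of N
    random code runs bounds the `coverage` quantile with probability at least
    `confidence` (one-sided, distribution-free tolerance limit)."""
    n = order
    while True:
        # P(at least `order` of the n samples exceed the coverage quantile)
        conf = 1.0 - sum(comb(n, j) * (1.0 - coverage) ** j * coverage ** (n - j)
                         for j in range(order))
        if conf >= confidence:
            return n
        n += 1

print(wilks_n())         # 59: the largest of 59 runs is a one-sided 95%/95% limit
print(wilks_n(order=2))  # 93: using the second-largest value as the limit
```

If a failed run is conservatively assumed to hide the highest output, the largest surviving run acts as the second-largest of the full sample, so the total must grow to `wilks_n(order=2)`, and so on for further failures. The same expression with `order = p` also gives the sample size when p acceptance criteria must be met simultaneously with one-sided limits (59, 93, and 124 runs for p = 1, 2, 3), a point taken up again in the conclusions below.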
In addition to the above recommendations, the most outstanding outcome of the BEMUSE programme is that a user effect can also be seen in the application of uncertainty methods, as in the application of computer codes. In uncertainty analysis, the emphasis is on the quantification of a lack of precise knowledge by defining appropriate uncertainty ranges of input parameters, which could not be achieved in all cases in BEMUSE. For example, some participants specified too narrow uncertainty ranges for important input uncertainties based on expert judgment, and not on sufficient code validation experience. Therefore, the skill, experience, and knowledge of the users, about both the applied computer code and the uncertainty method used, are important for the quality of the results. When using a statistical method, it is very important to include the influential parameters and to provide distributions of the uncertain input parameters, mainly their ranges. These
assumptions must be well justified. This has already been the experience from the UMS study. An important basis for determining code model uncertainties is the experience from code validation. This is mainly provided by experts performing the validation. Appropriate experimental data are needed. More effort, specific procedures, and judgment should be focused on the determination of input uncertainties. This last point has been a recommendation for further work. In particular, the methods used to select and quantify computer code model uncertainties, and the comparison of their effects on the uncertainty of the results, are studied in a common international investigation using different computer codes: the CSNI Post-BEMUSE REflood Model Input Uncertainty Methods (PREMIUM) programme (CSNI, 2016a).
13.4.3.3 Post-BEMUSE REflood Model Input Uncertainty Methods Programme
The objective of the PREMIUM exercise was to find a better means than coarse expert judgment to quantify the uncertainty ranges and distributions of relevant model input parameters. The PREMIUM benchmark application was focused on the determination of uncertainties of the physical models involved in the prediction of core reflood, which is an important phase during a loss-of-coolant accident (LOCA) scenario (CSNI, 2016a). PREMIUM generally addresses the solution of inverse problems in modeling and simulation, where model input parameters, including their uncertainty, are estimated from values of model output magnitudes. The solution of the inverse problem includes the processes of model calibration (adapting a model by a correction) and model uncertainty quantification. PREMIUM is focused on the latter. PREMIUM has been organized in five consecutive phases. In Phase I (coordinated by UPC and CSN, Spain), the participants presented and described their methods of model uncertainty quantification. Two methods were offered to the participants (including assistance for their application): CIRCE, developed by CEA, France, and FFTBM, developed by the University of Pisa, Italy. Several participants used these methods, and others used their own quantification methods, like iteration of several input model parameters to bound relevant experimental parameters. The features of the two series of reflood experiments, FEBA and PERICLES, were also described in this phase. In Phase II (coordinated by the University of Pisa, Italy) participants identified influential code input parameters, from the point of view of reflood, and made a preliminary quantification of their variation range. The Phase II coordinators proposed a methodology for the identification of influential parameters, based on a set of quantitative criteria, and presented a preliminary list of possibly influential parameters on reflood.
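As a purely illustrative sketch of such an inverse quantification (with invented numbers; CIRCE itself uses an expectation-maximization maximum-likelihood scheme, and FFTBM a spectral accuracy measure), a model multiplier and its range can be estimated from calculation/experiment discrepancies under a linearity assumption:

```python
import statistics

# Drastically simplified, hypothetical inverse quantification. Assume each
# response depends roughly linearly on a single model multiplier alpha around
# its nominal value 1:
#   t_exp[i] ~ t_calc[i] + sens[i] * (alpha - 1)
# where sens[i] is the sensitivity of response i to alpha, obtained from a
# perturbed code run.
def quantify_multiplier(t_exp, t_calc, sens):
    alphas = [1.0 + (e - c) / s for e, c, s in zip(t_exp, t_calc, sens)]
    mean = statistics.mean(alphas)
    sd = statistics.stdev(alphas)
    return mean, (mean - 2.0 * sd, mean + 2.0 * sd)  # crude ~95% range

# Hypothetical cladding temperatures (degC) and sensitivities (degC per unit alpha).
t_exp = [812.0, 845.0, 876.0, 900.0]
t_calc = [800.0, 830.0, 860.0, 885.0]
sens = [120.0, 110.0, 125.0, 115.0]
mean, rng = quantify_multiplier(t_exp, t_calc, sens)
print(mean, rng)  # a multiplier slightly above 1 with a narrow range
```

A mean different from 1 is precisely the “calibration” discussed later for Phase IV: using it shifts the nominal calculation toward the quantification experiment, which is why PREMIUM distinguishes calibration from uncertainty quantification.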
Some participants modified the proposed criteria or established their own criteria. Each participant obtained a list of influential model parameters to be quantified, assigning ranges of variation to the parameters, based on sensitivity calculations of test 216 of FEBA, and using as responses cladding temperatures and quench front propagation. The influential model parameters obtained by the participants were
typically wall and interfacial heat transfer coefficients, the interfacial friction coefficient, heat transfer enhancement at the quench front, and droplet diameter. In Phase III (coordinated by GRS, Germany), the uncertainty of the influential input parameters (identified in Phase II) was quantified. Participants obtained uncertainties in the form of ranges or probability distributions. The uncertainties obtained depend strongly on the quantification method. Additionally, they depend on:
– the responses used for quantification of input uncertainties,
– the set of input parameters being quantified,
– the TH code and the specific model being used.
The dependency on the quantification method was stronger than that on the TH code used. The results exhibited a significant user effect and a large variability and discrepancy among participants. In some cases, extremely small uncertainty ranges were found for model parameters, which are physically unrealistic. They were obtained by participants who performed a calibration of the code models in addition to the uncertainty quantification, i.e., by a majority of the users of the CIRCE method. In Phase IV (coordinated by CEA and IRSN, France), the uncertainties calculated for model parameters in Phase III were propagated through the code simulation for the selected tests of the FEBA and PERICLES experiments and compared to the experimental results. The objective was to confirm (using FEBA) and validate (using PERICLES) the model uncertainties. For the FEBA tests, most participants obtained, by propagation, uncertainty bands enveloping the experimental values. The width of the bands varied significantly among the participants. The uncertainty bands were highly influenced by the responses used in the quantification and the selected input parameters. For the PERICLES tests, the results were less satisfactory. Considering all the contributions, the fraction of experimental values which were enveloped by uncertainty bands was clearly lower than for FEBA, and far from the expected value. Therefore, the direct extrapolation of the model uncertainties calculated for FEBA to PERICLES gave poor results. This result can be explained in terms of significant differences between the two facilities and the analyzed tests. The quality of the nominal calculation was found to be important for the final results of the propagation calculations. A good nominal calculation facilitated enveloping the experimental data by the calculated uncertainty bands. Reasons for deviations between nominal calculation and experimental values should be clarified.
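The envelopment check used in Phase IV reduces to a simple coverage metric: the fraction of measured points lying inside the calculated band. A sketch with invented numbers (not FEBA or PERICLES data):

```python
# Hypothetical helper: fraction of measured points enveloped by the calculated
# two-sided uncertainty band.
def enveloped_fraction(measured, lower, upper):
    inside = sum(lo <= m <= up for m, lo, up in zip(measured, lower, upper))
    return inside / len(measured)

# Toy cladding-temperature trace (degC) with lower/upper uncertainty bounds.
measured = [350, 600, 820, 900, 650, 400]
lower    = [330, 560, 780, 850, 600, 380]
upper    = [380, 640, 880, 960, 700, 430]
print(enveloped_fraction(measured, lower, upper))  # 1.0: fully enveloped here
```

For a well-quantified 95% band, this fraction is expected to be near 0.95 or above; the PERICLES extrapolation fell clearly short of that expectation.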
The results were highly dependent on the parameter uncertainties derived in Phase III, and therefore were strongly related to the quantification method rather than to the TH code. Quantification methods may have the option of performing calibration as well as quantification of the models. Applying such an option means calibrating the code models. Some participants in Phase IV compared results with and without the calibration option for the FEBA application, concluding that the results for PERICLES improve when there is no calibration. This outcome is completely in line with the validation strategy of “best estimate” TH codes to include several experiments of one test facility, and of differently scaled test facilities.
As in Phase III, a user effect was observed in the Phase IV results. Participants using the same method and the same version of the same system code obtained uncertainty bands which were significantly different. Differences in the input deck, influential on the nominal calculations, and in the choice of input parameters and responses, could account for the discrepancies. In Phase V (coordinated by CSN and UPC, Spain), the main conclusions, lessons learned, and recommendations have been collected, and the final report of PREMIUM has been produced (CSNI, 2016a). A majority of CIRCE users obtained envelope calculations which were globally satisfactory for the FEBA results, but not for the PERICLES results. For the nominal calculation, most participants predict a too rapid quench front progression for FEBA. However, a majority of participants predict a too slow quench front progression for PERICLES. CIRCE users estimate median values for their input parameters that differ from the nominal values. The median values produce what is called a “calibrated calculation.” This corresponds to a “calibration” of models of the system code. Theoretically, this action corrects systematic deviations of the nominal calculation with respect to the experiment used for the quantification. This effect can be especially observed for the prediction of the quench front progression. The calibrated values of the input parameters found for FEBA were applied to PERICLES. But the prediction of the quench front progression is often different for FEBA and PERICLES. Therefore, the calibrated calculation leads to a deviation from the measured quench front on PERICLES. In addition, the intention of CIRCE is to derive narrow uncertainty bands. In order to get more representative ranges and distributions, more suitable experiments should be included for deriving the input uncertainties.
The intention of PREMIUM was to use one or a series of six experiments of the same test facility (FEBA) to derive model input uncertainties and use these for the uncertainty evaluation of another experiment (PERICLES). Using only one experimental facility is not recommended as a validation strategy, nor for deriving input model uncertainties based on the experience from that validation. In the case of PREMIUM, the objective was an application of methods to derive input uncertainties inversely from output magnitudes, and it was considered that the result could be extrapolated to a similar experiment, PERICLES. That assumption obviously failed because of differences between the two experimental facilities. In order to get more representative ranges and distributions of model input parameters, more suitable experiments should be considered for deriving the input uncertainties. Well-known BEPU frameworks like the CSAU (Boyack et al., 1990) or the evaluation model development and assessment process (EMDAP) (NRC, 2005b) should be taken into account in this task, see also Section 13.4.2.1 of this chapter. Two issues are important here:
– a strategy to collect experimental databases for TH code model input uncertainty evaluation,
– combining the results of input uncertainty quantification from several experiments.
13.4.4 Applications in licensing
Best estimate computer codes plus uncertainty evaluation are used more and more in the licensing of NPPs. Mostly the ordered statistics method has been used. Applications of uncertainty analyses in licensing cases have been performed up to now in the following countries: Argentina (CIAU), Belgium (ordered statistics), Brazil (output uncertainties and statistical method as well as CIAU), China, France (ordered statistics), Great Britain (ordered statistics), Korea (ordered statistics), Lithuania (ordered statistics), Netherlands (statistical method), Spain (ordered statistics), Taiwan (ordered statistics), and the United States (ordered statistics). Significant activities toward use in licensing are performed in these countries: Canada, Czech Republic, Germany, Hungary, Japan, Russia, Slovak Republic, and Ukraine. The change from conservative analysis to BEPU evaluation according to the change of 10 CFR §50.46, “Acceptance criteria for emergency core cooling systems for light water nuclear power reactors,” in the United States had a big economic effect. The demonstration of the CSAU for a large cold leg break resulted in much lower PCTs than the conservative calculation applying Appendix K for the ECC evaluation models. The total 95% probability PCT of the CSAU demonstration in the year 1988 was 856°C (the second peak was the highest). Uncertainties were determined for the single values blowdown PCT, first reflood PCT, and second reflood PCT. The calculation shows a large margin to the acceptance criterion of 1200°C. Consequently, the utilities applied for power uprates. Increasing the maximum linear heat generation rate from 299 W/cm to 495 W/cm resulted in a total 95% probability PCT of 1081°C, still below the acceptance criterion. Power uprates were applied for and approved for a significant number of plants, which made it unnecessary to build new plants in the United States.
The CSAU demonstration was performed for a generic Westinghouse four-loop plant with a normal full thermal power of 3411 MW.
13.4.5 Conclusions of uncertainty methods
The safety demonstration method “uncertainty analysis” is becoming common practice worldwide, mainly based on ordered statistics. The basis for applications of statistical uncertainty evaluation methods is the development of the GRS method. Several activities are carried out at an international level, for example in the OECD and the IAEA. Comparisons of applications of existing uncertainty methods have been performed in the frame of OECD/CSNI programmes. Differences were observed in the results of uncertainty analyses of the same task. These differences sometimes give rise to arguments about the suitability of the applied methods. Differences of results may come from different methods. When statistical methods are used, differences may be due to different input uncertainties, their ranges, and distributions. However, differences are already seen in the basic or reference calculations.
When a conservative method is used, it is claimed that all uncertainties which are considered by an uncertainty analysis are bounded by conservative assumptions. That may only be right when conservative code models or conservative input parameters for relevant code models are used in addition to conservative initial and boundary conditions. In many countries, however, a best estimate code plus conservative initial and boundary conditions are accepted for conservative analysis in licensing. Differences in the calculation results of best estimate and even conservative codes would also be seen when comparing results of different users of the same computer code, due to the different nodalizations and code options the user selects. That was observed in all International Standard Problems where code users calculated the same experiment or a reactor event. The main reason is that the user of a computer code has a big influence on how the code is applied. A user effect can also be seen in applications of uncertainty methods. Another international discussion took place about the number of calculations to be performed using ordered statistics methods. That issue was mainly brought up when more than one acceptance criterion has to be met. However, the specification of the uncertainty ranges of the input parameters has a much greater influence on the results. Therefore, high requirements are placed on their specification and on the justification of these ranges. The international OECD/NEA/CSNI PREMIUM (Post-BEMUSE REflood Model Input Uncertainty Methods) project deals with this issue.
13.5 Summary remarks
The verification and validation of an analysis method, here a computer code, is necessary before it is applied for safety analyses. The SYS TH computer code is an important scaling tool to perform safety analysis for power reactors. The question has to be answered to what extent the results of down-scaled test facilities represent the thermal-hydraulic behavior expected in a full-scale nuclear reactor under accident conditions. It was shown very clearly that the thermal-hydraulic behavior of a down-scaled ITF cannot be extrapolated directly to obtain nuclear plant behavior. Although the ITFs have been designed at different volume scales, ranging from 1:2070 to 1:48, the experimental results of these facilities do not by themselves solve the scaling problem. A combination of integral system tests with SETs at full reactor scale is indispensable. The computer codes shall be validated to calculate integral and separate effects experiments over the whole available scaling range in good agreement with the data. Strong connections therefore exist between scaling, code validation, setting up the nodalization and uncertainty evaluation, as well as code user qualification and expertise. A future important effort in the analytical work will be focused on the development of two-phase flow CFD codes. Due to the high resolution in space and the possibility of a three-dimensional description of the single- and two-phase flow phenomena, the scaling issue may become less important. Parallel to the progress
in the development of computer hardware with regard to reducing calculation time, the application area of these codes will be extended. The development of two-phase flow CFD codes is a great challenge for the analytical teams in the next decades and may contribute to the motivation of young scientists working in the field of nuclear technology. Because of the long calculation time of CFD codes, uncertainty analysis using statistical methods will be very time consuming. Other methods may be necessary, or response surface methods may be used for uncertainty analysis of CFD codes. When no BEPU evaluation is performed, a conservative safety analysis approach anticipates that all unquantified uncertainties are bounded by conservative assumptions. An uncertainty analysis quantifies the uncertainty of a result of an assigned safety parameter, and the bound toward the acceptance criterion determines the margin. Such a procedure cannot be considered as nonconservative. Rather, it quantifies the conservatism of a calculation result. In the future, uncertainty analysis will be used more in licensing applications beyond the early application to LB-LOCA scenarios. That is promoted by the USNRC, for example. Other institutions prefer to continue to follow the conservative approach with settled conservative assumptions and fewer calculations to be performed. However, power uprates and optimized fuel strategies may require greater use of the BEPU approach for safety analysis. Wide-ranging cooperative international activities have been performed connected with V&V, scaling, and uncertainty. Selected international activities and programs received proper emphasis in the chapter. The following additional specific topics, objectives of international activities, are linked with V&V, scaling, and uncertainty:
– User effects: user effects are expected in the application of codes and in performing uncertainty analyses. The skill of the users of a code or an uncertainty method determines to a great extent the quality of the results. Aksan et al. (1993) identify user effects and qualification, and Ashley et al., CSNI (1998) and D’Auria (1998) discuss training in order to reduce the user effect.
– International Standard Problems have been performed within the time frame from 1973 to 2015, and about 32 “problems” (each one constituting an international project) deal with thermal-hydraulics directly related to the issues of validation and scaling. Useful references are Reocreux (2012) and Choi et al., CSNI (2012), the latter documenting the latest ISP in the area of SYS THs.
– Experimental programs involving ITFs and SETs have been performed addressing scaling, which are necessary for code validation and the assessment of uncertainty methods, as discussed in this chapter. Mull et al. (2007) provide one example of a final report of an international activity. Counterpart tests are also carried out within the framework of international cooperation, based on experiments in ITFs (e.g., Belsito et al., 1994).
References
Aksan, S.N., D’Auria, F., Staedtke, H., 1993. User effects on the thermal-hydraulic transient system codes calculations. Nucl. Eng. Des. 145 (1–2). Also OECD ‘Status’ Report NEA/CSNI R(94)35, Paris, France, January 1995. Ambrosini, W., Bovalini, R., D’Auria, F., 1990. Evaluation of accuracy of thermal-hydraulics codes calculations. Energia Nucl. 7, 2.
ANS, 1987. American National Standard Guidelines for the verification and validation of scientific and engineering computer programs for the nuclear industry. ANSI/ANS-10.4-1987. Bazin, P., Deruaz, R., Yonomoto, T., Kukita, Y., 1992. BETHSY-LSTF counterpart test on natural circulation in a pressurized water reactor. In: ANS National Heat Transfer Conference, San Diego, USA. Belsito, S., D’Auria, F., Galassi, G.M., 1994. Application of a statistical model to the evaluation of counterpart test data base. Kerntechnik 59 (3). Blinkov, V.N., Melikhov, O.I., Kapustin, A.V., Lipatov, I.A., Dremin, G.I., Nikonov, S.M., Rovnov, A.A., Elkin, I.V., 2005. PSB-VVER counterpart experiment simulating a small cold leg break LOCA. In: Proceedings of the International Topical Meeting on Nuclear Reactor Thermal Hydraulics (NURETH-11), Avignon, France. Boyack, B.E., Catton, I., Duffey, R.B., Griffith, P., Katsma, K.R., Lellouche, G.S., Levy, S., May, R., Rohatgi, U.S., Shaw, R.A., Wilson, G.E., Wulff, W., Zuber, N., 1990. Quantifying reactor safety margins. Nucl. Eng. Des. 119, 1–117. Brown, L.D., Cai, T.T., DasGupta, A., 2001. Interval estimation for a binomial proportion. Stat. Sci. 16 (2), 101–133. CFR, 1989. 10 CFR 50.46, Acceptance criteria for emergency core cooling systems for light water nuclear power reactors, to 10 CFR Part 50, Code of Federal Regulations, US Office of the Federal Register. CSNI, 1994. Aksan, N., D’Auria, F., Glaeser, H., Pochard, R., Richards, C., Sjöberg, A. Separate effects test matrix for thermal-hydraulic code validation, (a) Volume I: phenomena characterization and selection of facilities and tests, (b) Volume II: facility and experiment characteristics. NEA/CSNI/R(93)14/Part 1 and Part 2, Paris, France. CSNI, 1995. Aksan, S.N., D’Auria, F., Städtke, H. User effects on the transient system code calculations. NEA/CSNI/R(94)35, Paris. CSNI, 1996. Annunziato, A., Glaeser, H., Lillington, J., Marsili, P., Renault, C., Sjöberg, A.
CSNI integral test facility validation matrix for the assessment of thermal-hydraulic codes for LWR LOCA and Transients. NEA/CSNI/R(96)17, Paris, France. CSNI, 1998. Ashley, R., El-Shanawany, M., Eltawila, F., D’Auria, F. Good practices for user effect reduction. NEA/CSNI/R(98)22, Paris. CSNI, 2000. CSNI International Standard Problems (ISP), brief descriptions (1975–1999). NEA/CSNI/R(2000)5. CSNI, 2001. Validation matrix for the assessment of thermal-hydraulic codes for VVER LOCA and transients. NEA/CSNI/R(2001)4, Paris, France. CSNI, 2004. International standard problem procedure. CSNI Report 17 Revision 4, NEA/CNI/ R(2004)8, Paris. CSNI, 2006. BEMUSE phase II report, re-analysis of the ISP-13 exercise, post-test analysis of the LOFT L2-5 test calculation. NEA/CSNI/R (2006)2, Paris, France. CSNI, 2007. BEMUSE phase III report, uncertainty and sensitivity analysis of the LOFT L2-5 test. NEA/CSNI/R(2007)4, Paris, France. CSNI, 2008. BEMUSE phase IV report, simulation of a LB-LOCA in Zion nuclear power plant. NEA/CSNI/R(2008)6, Paris, France. CSNI, 2009. BEMUSE phase V report, uncertainty and sensitivity analysis of a LB-LOCA in Zion nuclear power plant. NEA/CSNI/R(2009)13, Paris, France. CSNI, 2011. BEMUSE phase VI report, status report on the area, classification of the methods, conclusions and recommendations. NEA/CSNI/R(2011)4, Paris, France. CSNI, 2012. Choi, K.Y., Baek, W.P., Cho, S., Park, H.S., Kang, K.H., Kim, Y.S., Kim, H.T. International Standard Problem No. 50, Volume 1, Analysis of blind/open, calculations. NEA/CSNI/R(2012)6, Paris, France.
CSNI, 2016a. OECD/NEA Post-BEMUSE reflood model input uncertainty methods (PREMIUM) Benchmark. Final Report, NEA/CSNI/R(2016)18, Paris, France. CSNI, 2016b. Scaling in system thermal-hydraulics applications to nuclear reactor safety and design: a state-of-the-art report. NEA/CSNI/R(2016)14, Paris, France. D’Auria, F., Galassi, G.M., 2010. Scaling in nuclear reactor system thermal-hydraulics. Nucl. Eng. Des. 240, 3267–3293. D’Auria, F., Giannotti, W., 2000. Development of code with capability of internal assessment of uncertainty. Nucl. Technol. 131 (1), 159–196. D’Auria, F., Karwat, H., Mazzini, M., 1988. Planning of counterpart tests in LWR experimental simulators. In: Proceedings of the ANS National Heat Transfer Conference, Houston, USA. D’Auria, F., Galassi, G.M., Ingegneri, M., 1994. Evaluation of the data base from high power and low power small break LOCA counterpart tests performed in LOBI, SPES, BETHSY and LSTF facilities. University of Pisa Report, DCMN NT 237(94), Pisa, Italy. D’Auria, F., 1998. Proposal for training of thermal-hydraulic system codes users. In: IAEA Specifications Meeting on User Qualification and User Effects on Accident Analysis for Nuclear Power Plants—Vienna (A), 31 August–4 September. D’Auria, F., Leonardi, M., Glaeser, H., Pochard, R., 1995. Current status of methodologies evaluating the Uncertainties in the prediction of thermal-hydraulic phenomena in nuclear reactors. In: International Symposium on Two Phase Flow Modelling and Experimentation, Rome, Italy, October 9–11. Edizioni ETS, Pisa, pp. 501–509. Glaeser, H., 1992. Downcomer and tie plate countercurrent flow in the Upper Plenum Test Facility (UPTF). Nucl. Eng. Des. 113, 259–283. Glaeser, H., 2008. GRS method for uncertainty and sensitivity evaluation of code results and applications. Sci. Technol. Nucl. Install. (Q1), 1–7 Article ID 798901. Glaeser, H., Karwat, H., 1993. 
The contribution of UPTF experiments to resolve some scale-up uncertainties in countercurrent two phase flow. Nucl. Eng. Des. 145, 63–84. Glaeser, H., Wolfert, K., 2012. Scaling of thermal-hydraulic phenomena and system code assessment. In: Seminar on the Transfer of Competence, Knowledge and Experience Gained Through CSNI Activities in the Field of Thermal-Hydraulics, OECD/NEA Headquarters, Issy-les-Moulineaux, France, 25–29 June. Glaeser, H., Krzykacz-Hausmann, B., Luther, W., Schwarz, S., Skorek, T., 2008. Methodenentwicklung und exemplarische Anwendungen zur Bestimmung der Aussagesicherheit von Rechenprogrammergebnissen. GRS-A-3443, Garching, Germany. GRS, 1992. 2D/3D program work summary report. Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) mbH, GRS-100, Garching, Germany. GRS, 1993. Reactor safety issues resolved by the 2D/3D Program. Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) mbH, GRS-101, Garching, Germany. Hofer, E., 1993. Probabilistische Unsicherheitsanalyse von Ergebnissen umfangreicher Rechenmodelle. GRS-A-2002, Garching, Germany. IAEA, 2008. Best estimate safety analysis for nuclear power plants: uncertainty evaluation, Safety Reports Series 52. International Atomic Energy Agency, Vienna, Austria. IAEA, 2009. IAEA safety standards series: deterministic safety analysis for nuclear power plants, Specific Safety Guide No. SSG-2. International Atomic Energy Agency, Vienna, Austria. IAEA, 2015. Verification and validation of thermal-hydraulic system codes for nuclear safety analyses. IAEA Consultancy, unpublished report, Vienna, Austria, 2015. Ishii, M., Kataoka, I., 1983. Scaling criteria for LWR’s under single-phase and two-phase natural circulation. ANL-83-32, NUREG/CR-3267, Argonne National Laboratory, Chicago, IL, USA.
Ishii, M., Revankar, S.T., Leonardi, T., Dowlati, A., Bertodano, M.L., Babelli, I., Wang, W., Pokharna, H., Ransom, V.H., Viskanta, R., Han, J.T., 1998. The three-level scaling approach with application to the Purdue University Multi-Dimensional Integral Test Assembly (PUMA). Nucl. Eng. Des. 186, 177–211.
Kunz, R.F., Kasmala, G.F., Mahaffy, J.H., Murray, C.J., 2002. On the automated assessment of nuclear reactor systems code accuracy. Nucl. Eng. Des. 211 (2–3), 245–272.
Martin, R.P., Dunn, B.M., 2004. Application and licensing requirements of the Framatome ANP RLBLOCA methodology. In: International Meeting on Updates in Best Estimate Methods in Nuclear Installation Safety Analysis (BE-2004), Washington, DC, USA.
Muftuoglu, K., Ohkawa, K., Frepoli, C., Nissley, M., 2004. Comparison of realistic large break LOCA analyses of a 3-loop Westinghouse plant using response surface and statistical sampling techniques. In: Proceedings of ICONE12, Arlington, VA, USA, 25–29 April.
Mull, T., Umminger, K., Bucalossi, A., D’Auria, F., Monnier, P., Toth, I., Schwarz, W., 2007. Final report of the OECD-PKL Project. AREVA NP GmbH, NTCTP-G/2007/en/0009, Erlangen, Germany, November.
Nahavandi, A.N., Castellana, F.S., Moradkhanian, E.N., 1979. Scaling laws for modeling nuclear reactor systems. Nucl. Sci. Eng. 72, 75.
NRC, 2005a. APEX—AP1000 confirmatory testing to support AP1000 design certification (non-proprietary). Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission, Washington, DC, USA.
NRC, 2005b. USNRC Regulatory Guide 1.203, transient and accident analysis methods. U.S. Nuclear Regulatory Commission, Office of Nuclear Regulatory Research, Washington, DC, USA.
Ortiz, M.G., Ghan, L.S., 1992. Uncertainty analysis of minimum vessel liquid inventory during a small break LOCA in a Babcock and Wilcox plant: an application of the CSAU methodology using the RELAP5/MOD3 computer code. NUREG/CR-5818, EGG-2665, Idaho National Engineering Laboratory, Idaho Falls, ID, USA.
Prosek, A., D’Auria, F., Richards, D.J., Mavko, B., 2005. Quantitative assessment of thermal-hydraulics codes used for heavy water reactor calculations. Nucl. Eng. Des. 236, 295–308.
Reocreux, M., 2012. Historical perspective on the role and the achievements of CSNI in the field of thermal-hydraulics. In: THICKET-3: Seminar on the Transfer of Competence, Knowledge and Experience Gained Through CSNI in the Field of Thermal-Hydraulics, OECD/NEA Headquarters, Issy-les-Moulineaux, France, 25–29 June.
Reyes, J.N., 2015. The dynamical system scaling methodology. In: International Topical Meeting on Nuclear Reactor Thermal Hydraulics (NURETH-16), Chicago, IL, USA.
Reyes, J.N., Frepoli, C., Yurko, J., 2015. The dynamical system scaling methodology: comparing dimensionless governing equations with the H2TS and FSA methodologies. In: International Topical Meeting on Nuclear Reactor Thermal Hydraulics (NURETH-16), Chicago, IL, USA.
Riebold, W.L., Addabbo, C., Piplies, L., Staedtke, H., 1984. LOBI Project: influence of primary loops on blowdown, part 1. EU Joint Research Centre, Ispra, Italy.
Shotkin, L.M., 1997. How the code analyst can influence the calculation and use good practices to minimize pitfalls. Nucl. Technol. 117, 40–48.
Wald, A., 1943. An extension of Wilks’ method for setting tolerance limits. Ann. Math. Stat. 14, 45–55.
Weiss, P., Sawitzki, M., Winkler, F., 1986. UPTF, a full-scale PWR loss-of-coolant accident experiment program. Atomkernenerg. Kerntech. 49.
Weiss, P., Watzinger, H., Hertlein, R., 1990. UPTF experiment: a synopsis of full-scale test results. Nucl. Eng. Des. 121, 219–234.
Wickett, T., Sweet, D., Neill, A., D’Auria, F., Galassi, G., Belsito, S., Ingegneri, M., Gatta, P., Glaeser, H., Skorek, T., Hofer, E., Kloos, M., Chojnacki, E., Ounsy, M., Lage Perez, C., Sánchez Sanchis, J.I., 1998. Report of the uncertainty methods study for advanced best estimate thermal hydraulic code applications, Volume 1 (Comparison) and Volume 2 (Report by the participating institutions). NEA/CSNI/R(97)35, Paris, France.
Wilks, S.S., 1941. Determination of sample sizes for setting tolerance limits. Ann. Math. Stat. 12, 91–96.
Wilks, S.S., 1942. Statistical prediction with special reference to the problem of tolerance limits. Ann. Math. Stat. 13, 400–409.
Wulff, W., Zuber, N., Rohatgi, U.S., Catton, I., 2005. Application of fractional scaling analysis (FSA) to loss of coolant accidents (LOCA)—Part 2: system level scaling for system depressurization. In: International Topical Meeting on Nuclear Reactor Thermal Hydraulics (NURETH-11), Avignon, France.
Yurko, J., Frepoli, C., Reyes, J.N., 2015. Demonstration of test facility design optimization with the dynamical system scaling methodology. In: International Topical Meeting on Nuclear Reactor Thermal Hydraulics (NURETH-16), Chicago, IL, USA.
Zuber, N., 1991. A hierarchical, two-tiered scaling analysis. NUREG/CR-5809, U.S. Nuclear Regulatory Commission, Washington, DC, USA.
Zuber, N., Rohatgi, U.S., Wulff, W., Catton, I., 2007. Application of fractional scaling analysis (FSA) to loss of coolant accidents (LOCA) methodology development. Nucl. Eng. Des. 237, 1593–1607.