CTF is a thermal-hydraulic subchannel code developed to predict light water reactor (LWR) core behavior. It is a version of Coolant Boiling in Rod Arrays (COBRA) developed by Oak Ridge National Laboratory (ORNL) and North Carolina State University (NCSU) and used in the Consortium for Advanced Simulation of LWRs (CASL). In this work, the existing CTF code verification matrix is expanded, which helps ensure that the code is a faithful representation of the underlying mathematical model. The suite of code verification tests is mapped to the underlying conservation equations of CTF, and significant gaps are addressed. To close these gaps, five new problems are incorporated: isokinetic advection, conduction, pressure drop, convection, and pipe boiling. Convergence behavior and numerical errors are quantified for each test, and all tests converge at the correct rate to their corresponding analytic solutions. A new verification utility that generalizes the code verification process is used to incorporate these problems into the CTF automated test suite.
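For context, the observed order of convergence reported in such studies is typically estimated from the error norm on successively refined meshes. The sketch below (plain Python with illustrative error values, not CTF output) shows the standard calculation for a refinement ratio of two:

```python
import math

def observed_order(e_coarse, e_fine, r=2.0):
    """Observed order of accuracy from errors on two grids refined by ratio r."""
    return math.log(e_coarse / e_fine) / math.log(r)

# Hypothetical error norms from a first-order test problem on
# successively halved meshes; illustrative values, not CTF results.
errors = [2.10e-2, 1.08e-2, 5.50e-3, 2.77e-3]
for e_coarse, e_fine in zip(errors, errors[1:]):
    print(f"observed order = {observed_order(e_coarse, e_fine):.3f}")
```

A first-order scheme passes when these values approach 1.0 under refinement, as they do here.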
In 2010, the U.S. Department of Energy created its first Energy Innovation Hub, which is focused on developing high-fidelity, high-resolution Modeling and Simulation (M&S) tools for Light Water Reactors (LWRs). This hub, the Consortium for Advanced Simulation of LWRs (CASL), has developed an LWR simulation tool called the Virtual Environment for Reactor Applications (VERA). The multi-physics capability of VERA is achieved through the coupling of single-physics codes, including BISON, CTF, MPACT, and MAMBA. BISON is a fuel performance code that models the thermo-mechanical behavior of nuclear fuel using high-performance M&S. It is capable of modeling traditional LWR fuel rods, fuel plates, and TRi-structural ISOtropic (TRISO) fuel particles, and it can employ three-dimensional Cartesian, two-dimensional axisymmetric cylindrical, or one-dimensional radial spherical geometry. It includes empirical models for a wide variety of fuel physics: temperature- and burnup-dependent thermal properties, fuel swelling and densification, fission gas production, cladding creep, fracture, cladding plasticity, and gap/plenum behavior. This document details a series of code verification test problems that are used to test BISON. These problems add confidence that the BISON code is a faithful representation of its underlying mathematical model. The suite of verification tests is mapped to the underlying conservation equations solved by the code: heat conduction, mechanics, and species conservation. Twenty-two problems are added for the heat conduction solution, two for the mechanics solution, and none for species conservation. Method of Manufactured Solutions (MMS) capability is demonstrated with three problems, and temperature drops across the fuel gap are tested.
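As a concrete illustration of MMS (a generic sketch, not a problem from the BISON suite), one chooses an exact solution, substitutes it into the governing equation to derive a compensating source term, and then checks that the code reproduces the chosen solution at the expected rate. For one-dimensional steady heat conduction, the symbolic step looks like this:

```python
import sympy as sp

x, k = sp.symbols("x k", positive=True)

# Manufactured temperature field, chosen arbitrarily for illustration.
T = sp.sin(sp.pi * x) + 1

# Substituting T into the steady heat equation -k*T'' = q yields the
# source term that makes T an exact solution of the forced problem.
q = -k * sp.diff(T, x, 2)
print(sp.simplify(q))  # pi**2*k*sin(pi*x)
```

The derived source q is imposed in the code along with boundary values taken from T, and the discretization error against T is measured on a sequence of meshes.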
In 2010, the U.S. Department of Energy created its first Energy Innovation Hub, which focuses on improving Light Water Reactors (LWRs) through Modeling and Simulation. This hub, the Consortium for Advanced Simulation of LWRs (CASL), seeks to characterize and understand LWR behavior under normal operating conditions and to use the resulting insights to improve reactor efficiency. In collaboration with North Carolina State University (NCSU), CASL has worked extensively on the thermal-hydraulic subchannel code Coolant Boiling in Rod Arrays—Three Field (COBRA-TF). The NCSU/CASL version of COBRA-TF has been rebranded as CTF. This document focuses on code verification test problems that help ensure CTF converges to the correct answer for its intended applications. The suite of code verification tests is mapped to the underlying conservation equations of CTF, and significant gaps are addressed. Convergence behavior and numerical errors are quantified for each of the tests. Tests that converge at the correct rate to the corresponding analytic solution are incorporated into the CTF automated regression suite; a new verification utility that generalizes the code verification process is created for this purpose. For problems that do not behave correctly, the results are reported but the problem is not included in the regression suite. In addition to the verification studies, this document also quantifies the existing test coverage of constitutive models, and a few gaps are addressed by adding new unit tests.
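A utility of this kind typically reduces each verification study to an automated pass/fail gate on the observed convergence order. A minimal sketch of the pattern (hypothetical tolerance and orders, not the actual CTF utility):

```python
def passes_verification(observed_orders, expected_order, tol=0.1):
    """Accept a test only if the observed order on the finest grid
    pair matches the theoretical order to within tol."""
    return abs(observed_orders[-1] - expected_order) <= tol

# Hypothetical observed orders from a refinement study of a
# first-order-accurate problem; the last value is the most asymptotic.
assert passes_verification([0.96, 0.97, 0.99], expected_order=1.0)
print("test admitted to the regression suite")
```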
The Consortium for Advanced Simulation of Light Water Reactors (CASL) is an Energy Innovation Hub of the U.S. Department of Energy whose mission is stated as follows: "CASL is a collaboration of the nation's leading scientists, institutions, and supercomputers, with an aggressive 10-year mission to confidently predict the performance of existing and next-generation commercial nuclear reactors through comprehensive, science-based modeling and simulation." To date, the CASL program has focused on developing the necessary predictive capability and, rightly so, is characterized by many as a research project. By design, the first six years of CASL focused on developing and demonstrating the prediction capability of a suite of independent physics codes: MPACT (neutronics), CTF (thermal hydraulics in the core), BISON (fuel performance), and MAMBA (CRUD and boron uptake on fuel rod surfaces). The last four years have focused on initial attempts to couple the codes and to demonstrate those capabilities through a series of challenge problems aligned to three key issues of interest to the nuclear power industry.
Weak link (WL)/strong link (SL) systems are important parts of the overall operational design of high-consequence systems. In such designs, the SL system is very robust and is intended to permit operation of the entire system under, and only under, intended conditions. In contrast, the WL system is intended to fail in a predictable and irreversible manner under accident conditions and render the entire system inoperable before an accidental operation of the SL system. The likelihood that the WL system will fail to deactivate the entire system before the SL system fails (i.e., degrades into a configuration that could allow an accidental operation of the entire system) is referred to as probability of loss of assured safety (PLOAS). Representations for PLOAS for situations in which both link physical properties and link failure properties are time-dependent are derived and numerically evaluated for a variety of WL/SL configurations, including PLOAS defined by (i) failure of all SLs before failure of any WL, (ii) failure of any SL before failure of any WL, (iii) failure of all SLs before failure of all WLs, and (iv) failure of any SL before failure of all WLs. The effects of aleatory uncertainty and epistemic uncertainty in the definition and numerical evaluation of PLOAS are considered.
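As an illustration of how such a quantity can be evaluated (a Monte Carlo sketch under an assumed exponential failure-time model, not the time-dependent representations derived in the work), definition (ii) for a two-WL/two-SL configuration can be estimated as follows:

```python
import random

def ploas_any_sl_before_any_wl(n_wl, n_sl, wl_rate, sl_rate, samples=100_000):
    """Monte Carlo estimate of PLOAS definition (ii): any SL fails
    before any WL fails.  Failure times are drawn from assumed
    exponential distributions for illustration only."""
    count = 0
    for _ in range(samples):
        first_wl = min(random.expovariate(wl_rate) for _ in range(n_wl))
        first_sl = min(random.expovariate(sl_rate) for _ in range(n_sl))
        if first_sl < first_wl:
            count += 1
    return count / samples

# Two weak links designed to fail ten times faster than two strong links;
# the exact answer for this toy model is 0.2/2.2, or about 0.0909.
print(ploas_any_sl_before_any_wl(n_wl=2, n_sl=2, wl_rate=1.0, sl_rate=0.1))
```

The other three definitions change only the comparison logic, e.g. taking the max over the SL failure times for "failure of all SLs."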
Prediction is defined in the American Heritage Dictionary as follows: 'To state, tell about, or make known in advance, especially on the basis of special knowledge.' What special knowledge do we demand of modeling and simulation to assert that we have a predictive capability for high-consequence applications? The 'special knowledge' question can be answered along two dimensions: the process and rigor by which modeling and simulation is executed, and the assessment results for the specific application. Here we focus on the process-and-rigor dimension and address predictive capability in terms of six attributes: (1) geometric and representational fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) validation, and (6) uncertainty quantification. This presentation will demonstrate, through mini-tutorials, simple examples, and numerous case studies, how each attribute creates opportunities for errors, biases, or uncertainties to enter into simulation results. The demonstrations will motivate a set of practices that minimize the risk of using modeling and simulation for high-consequence applications while defining important research directions. It is recognized that there are cultural, technical, infrastructure, and resource barriers that prevent analysts from performing all analyses at the highest levels of rigor. Consequently, the audience for this talk is (1) analysts, so they can know what is expected of them; (2) decision makers, so they can know what to expect from modeling and simulation; and (3) the R&D community, so they can address the technical and infrastructure issues that prevent analysts from executing analyses in a practical, timely, and quality manner.
The purpose of the Sandia National Laboratories (SNL) Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. Quality is defined in the US Department of Energy/National Nuclear Security Administration (DOE/NNSA) Quality Criteria, Revision 10 (QC-1) as 'conformance to customer requirements and expectations'. This quality plan defines the SNL ASC Program software quality engineering (SQE) practices and provides a mapping of these practices to the SNL Corporate Process Requirement (CPR) 001.3.6, 'Corporate Software Engineering Excellence'. This plan also identifies ASC management's and the software project teams' responsibilities in implementing the software quality practices and in assessing progress towards achieving their software quality goals. This SNL ASC Software Quality Plan establishes the signatories' commitments to improving software products by applying cost-effective SQE practices. This plan enumerates the SQE practices that comprise the development of SNL ASC's software products and explains the project teams' opportunities for tailoring and implementing the practices.
This paper presents the conceptual framework that is being used to define quantification of margins and uncertainties (QMU) for application in the nuclear weapons (NW) work conducted at Sandia National Laboratories. The conceptual framework addresses the margins and uncertainties throughout the NW life cycle and includes the definition of terms related to QMU and to figures of merit. Potential applications of QMU consist of analyses based on physical data and on modeling and simulation. Appendix A provides general guidelines for addressing cases in which significant and relevant physical data are available for QMU analysis. Appendix B gives the specific guidance that was used to conduct QMU analyses in cycle 12 of the annual assessment process. Appendix C offers general guidelines for addressing cases in which appropriate models are available for use in QMU analysis. Appendix D contains an example that highlights the consequences of different treatments of uncertainty in model-based QMU analyses.
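For orientation, the figure of merit most commonly associated with QMU is the ratio of margin to uncertainty; the toy arithmetic below (illustrative numbers only, not drawn from any assessment) shows the basic calculation:

```python
# Hypothetical QMU bookkeeping: the margin-to-uncertainty ratio M/U.
threshold     = 100.0  # performance requirement (arbitrary units)
best_estimate = 70.0   # predicted system response
margin        = threshold - best_estimate  # M = 30.0
uncertainty   = 12.0   # combined uncertainty in estimate and threshold
# M/U = 2.5; a ratio above 1 indicates margin exceeding uncertainty.
print(f"M/U = {margin / uncertainty:.1f}")
```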
The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution of partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the Technology Readiness Levels used by the National Aeronautics and Space Administration and the Department of Defense, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies specified application requirements.