Publications

19 Results


Verification and validation benchmarks

Nuclear Engineering and Design

Oberkampf, William L.; Trucano, Timothy G.

Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. 
It is argued that the understanding of predictive capability of a computational model is built on the level of achievement in V&V activities, how closely related the V&V benchmarks are to the actual application of interest, and the quantification of uncertainties related to the application of interest. © 2007 Elsevier B.V. All rights reserved.
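The method of manufactured solutions recommended above can be illustrated with a small sketch. Assuming a 1-D Poisson problem -u'' = f with homogeneous boundary conditions, choosing the manufactured solution u(x) = sin(πx) forces f(x) = π² sin(πx); a second-order finite-difference code should then exhibit an observed convergence order near 2. The solver and model problem here are illustrative choices, not taken from the paper.

```python
import math

def solve_poisson(n):
    """Second-order finite-difference solve of -u'' = f on [0, 1] with
    u(0) = u(1) = 0, where f comes from the manufactured solution
    u(x) = sin(pi x). Returns the max nodal error against that solution."""
    h = 1.0 / n
    m = n - 1  # number of interior unknowns
    # Tridiagonal system -u[i-1] + 2 u[i] - u[i+1] = h^2 f_i at interior nodes
    d = [h * h * math.pi ** 2 * math.sin(math.pi * i * h) for i in range(1, n)]
    # Thomas algorithm with constant coefficients a = -1, b = 2, c = -1
    cp, dp = [0.0] * m, [0.0] * m
    cp[0], dp[0] = -0.5, d[0] / 2.0
    for i in range(1, m):
        denom = 2.0 + cp[i - 1]
        cp[i] = -1.0 / denom
        dp[i] = (d[i] + dp[i - 1]) / denom
    u = [0.0] * m
    u[m - 1] = dp[m - 1]
    for i in range(m - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return max(abs(u[i] - math.sin(math.pi * (i + 1) * h)) for i in range(m))

def observed_order(n):
    """Observed convergence order from the errors on grids n and 2n."""
    return math.log(solve_poisson(n) / solve_poisson(2 * n)) / math.log(2.0)
```

In a real code-verification exercise the manufactured solution is chosen to exercise every term of the governing equations, and the corresponding source term is derived symbolically; agreement between the observed and formal orders of accuracy is the pass criterion.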


Predictive Capability Maturity Model for computational modeling and simulation

Pilch, Martin P.; Oberkampf, William L.; Trucano, Timothy G.

The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements of M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies specified application requirements.
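As a rough sketch of how a six-element PCMM assessment might be recorded, the snippet below stores a maturity level (0-3) for each element and reports the weakest one. Treating the minimum level as a bound on the overall maturity claim is an illustrative roll-up choice, not something the PCMM itself prescribes.

```python
PCMM_ELEMENTS = (
    "representation and geometric fidelity",
    "physics and material model fidelity",
    "code verification",
    "solution verification",
    "model validation",
    "uncertainty quantification and sensitivity analysis",
)

def pcmm_summary(scores):
    """Validate a PCMM assessment (one level, 0-3, per element) and
    report the weakest element, which bounds any overall maturity claim."""
    for name in PCMM_ELEMENTS:
        level = scores[name]
        if level not in (0, 1, 2, 3):
            raise ValueError(f"level for {name!r} must be 0-3, got {level}")
    weakest = min(PCMM_ELEMENTS, key=lambda name: scores[name])
    return weakest, scores[weakest]
```

In practice the value of the PCMM lies in the full table of levels per element, not in any single aggregate number.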



Verification, validation, and predictive capability in computational engineering and physics

Applied Mechanics Reviews

Oberkampf, William L.; Trucano, Timothy G.; Hirsch, Charles

This paper discusses views on the state of the art in verification and validation (V&V) in computational physics. These views are framed in terms of how predictive capability relies on V&V, along with the other factors that affect predictive capability. Research topics addressed include the development of improved procedures for using the phenomena identification and ranking table (PIRT) to prioritize V&V activities, the method of manufactured solutions for code verification, the development and use of hierarchical validation diagrams, and the construction and use of validation metrics incorporating statistical measures.
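One way to make the PIRT prioritization concrete: rank phenomena so that high importance and low adequacy of current knowledge come first. The numeric scoring scheme, field names, and example phenomena below are illustrative assumptions, not taken from the paper.

```python
def pirt_priorities(phenomena):
    """Order phenomena for V&V effort: most important first, and among
    equally important ones, least adequately characterized first."""
    return sorted(phenomena, key=lambda p: (-p["importance"], p["adequacy"]))

# Hypothetical PIRT rows: importance and adequacy scored 1 (low) to 3 (high)
table = [
    {"name": "turbulent mixing",   "importance": 3, "adequacy": 1},
    {"name": "wall heat transfer", "importance": 3, "adequacy": 2},
    {"name": "radiation",          "importance": 1, "adequacy": 1},
]
```

Under this scheme `pirt_priorities(table)` puts "turbulent mixing" first: it is both highly important and poorly characterized, so it would attract V&V effort before the others.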


On the role of code comparisons in verification and validation

Trucano, Timothy G.; Pilch, Martin P.; Oberkampf, William L.

This report presents a perspective on the role of code comparison activities in verification and validation. We formally define the act of code comparison as the Code Comparison Principle (CCP) and investigate its application in both verification and validation. One of our primary conclusions is that the use of code comparisons for validation is improper and dangerous. We also conclude that while code comparisons may be argued to provide a beneficial component in code verification activities, there are higher quality code verification tasks that should take precedence. Finally, we provide a process for application of the CCP that we believe is minimal for achieving benefit in verification processes.


General Concepts for Experimental Validation of ASCI Code Applications

Trucano, Timothy G.; Pilch, Martin P.; Oberkampf, William L.

This report presents general concepts in a broadly applicable methodology for validation of Accelerated Strategic Computing Initiative (ASCI) codes for Defense Programs applications at Sandia National Laboratories. The concepts are defined and analyzed within the context of their relative roles in an experimental validation process. Examples of applying the proposed methodology to three existing experimental validation activities are provided in appendices, using an appraisal technique recommended in this report.


Verification and validation in computational fluid dynamics

Progress in Aerospace Sciences

Oberkampf, William L.; Trucano, Timothy G.

Verification and validation (V&V) in computational fluid dynamics are presented, along with methods and procedures for assessing them. Issues discussed include code verification versus solution verification, model validation versus solution validation, the distinction between error and uncertainty, conceptual sources of error and uncertainty, and the relationship between validation and prediction. Methods for determining the accuracy of numerical solutions are presented, and the importance of software testing during verification activities is emphasized.
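One standard method for determining the accuracy of a numerical solution without knowing the exact answer, in the spirit of the review above, is Richardson extrapolation from solutions on three systematically refined grids. The sketch below assumes monotone convergence in the asymptotic range and a constant refinement ratio; the numeric values in the test are synthetic.

```python
import math

def observed_order(f_fine, f_med, f_coarse, r=2.0):
    """Observed order of accuracy p from a scalar quantity computed on
    three grids refined by ratio r (f_fine on the finest grid); assumes
    monotone convergence in the asymptotic range."""
    return math.log((f_coarse - f_med) / (f_med - f_fine)) / math.log(r)

def extrapolated_value(f_fine, f_med, p, r=2.0):
    """Richardson estimate of the grid-converged value from the two
    finest solutions; (f_fine - estimate) approximates the fine-grid
    discretization error."""
    return f_fine + (f_fine - f_med) / (r ** p - 1.0)
```

For a quantity behaving as f(h) = 1 + 0.5 h² on grids with h = 0.1, 0.2, 0.4, the observed order is 2 and the extrapolated value recovers 1, the h → 0 limit.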


Methodology for characterizing modeling and discretization uncertainties in computational simulation

Alvin, Kenneth F.; Oberkampf, William L.; Rutherford, Brian M.; Diegert, Kathleen V.

This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.
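The Bayesian model-uncertainty assessment mentioned above can be sketched, in its simplest form, as a grid-based posterior update for a single model parameter. The Gaussian likelihood and the identity toy model used in the test are illustrative assumptions, not the methodology of the report.

```python
import math

def posterior_on_grid(grid, prior, data, sigma, model):
    """Discrete Bayes update: posterior(theta) is proportional to
    prior(theta) times a Gaussian likelihood of the observed data given
    model(theta), normalized over the parameter grid."""
    weights = []
    for theta, p in zip(grid, prior):
        loglike = sum(-0.5 * ((d - model(theta)) / sigma) ** 2 for d in data)
        weights.append(p * math.exp(loglike))
    total = sum(weights)
    return [w / total for w in weights]
```

Given a uniform prior and observations scattered around 2.0, the posterior mass concentrates near theta = 2.0; in a real analysis the posterior over model parameters (or over competing model forms) is then propagated forward to obtain total prediction uncertainty.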


Validation methodology in computational fluid dynamics

Fluids 2000 Conference and Exhibit

Oberkampf, William L.; Trucano, Timothy G.

Verification and validation are the primary means to assess accuracy and reliability in computational simulations. This paper presents an extensive review of the literature in computational validation and develops a number of extensions to existing ideas. We discuss the early work in validation by the operations research, statistics, and CFD communities. The emphasis in our review is to bring together the diverse contributors to validation methodology and procedures. The disadvantages of the standard practice of qualitative graphical validation are pointed out, and the arguments for, and the literature on, validation quantification are presented. We discuss the attributes of a beneficial validation experiment hierarchy and then give an example for a complex system: a hypersonic cruise missile. We present six recommended characteristics for how a validation experiment should be designed, executed, and analyzed. Since one of the key features of a validation experiment is a careful experimental uncertainty estimation analysis, we discuss a statistical procedure that has been developed for improving the estimation of experimental uncertainty. One facet of code verification, the estimation of computational error and uncertainty, is discussed in some detail, but we do not address many other important issues in code verification. We argue for the separation of the concepts of error and uncertainty in computational simulations. Error estimation, primarily that due to numerical solution error, is discussed with regard to its importance in validation. In the same vein, we explain the need to move toward nondeterministic simulations in CFD validation, that is, the propagation of input quantity uncertainty in CFD simulations to yield probabilistic output quantities. We discuss the relatively new concept of validation quantification, also referred to as validation metrics. The inadequacy, in our view, of hypothesis testing in computational validation is discussed.
We close the paper by presenting our ideas on validation metrics and we apply them to two conceptual examples. © 2000 The American Institute of Aeronautics and Astronautics Inc.
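The move toward nondeterministic simulation described above, propagating input uncertainty to obtain probabilistic outputs, can be sketched with plain Monte Carlo sampling. The normal input distribution and the quadratic toy model in the test are illustrative assumptions, not from the paper.

```python
import random
import statistics

def propagate(model, mean, std, n=20000, seed=1):
    """Monte Carlo propagation: sample an uncertain input from a normal
    distribution and push each draw through the deterministic model,
    yielding a sample of the output quantity of interest."""
    rng = random.Random(seed)
    return [model(rng.gauss(mean, std)) for _ in range(n)]
```

For the toy model y = x² with x ~ N(3, 0.5), the exact output mean is 3² + 0.5² = 9.25, which the sample mean approaches; the resulting output distribution (not a single deterministic number) is what a validation metric then compares against experimental measurements carrying their own uncertainty.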
