The Dakota toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
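To make the interface between a simulation code and Dakota's iterative methods concrete, the following is a minimal, hypothetical sketch of a file-based analysis driver of the kind Dakota can launch for each evaluation: it reads a simplified parameters file, evaluates a stand-in response in place of a real simulation, and writes a results file for Dakota to read back. The file layouts and the response function are illustrative assumptions, not Dakota's exact formats.

```python
#!/usr/bin/env python3
"""Illustrative stand-in for a Dakota analysis driver (sketch only).

Dakota's file-based interface passes a parameters file to a driver and
expects a results file in return; the formats below are simplified
relative to Dakota's full specification.
"""
import sys


def read_parameters(path):
    """Read a '<n> variables' header followed by '<value> <label>' lines."""
    with open(path) as f:
        n_vars = int(f.readline().split()[0])
        values = {}
        for _ in range(n_vars):
            value, label = f.readline().split()[:2]
            values[label] = float(value)
    return values


def simulate(x):
    """Hypothetical response used here in place of a real simulation code."""
    return sum((v - 1.0) ** 2 for v in x.values())


def write_results(path, response):
    """Write one response value per line, tagged with a descriptor."""
    with open(path, "w") as f:
        f.write(f"{response:.10e} obj_fn\n")


if __name__ == "__main__":
    params_file, results_file = sys.argv[1], sys.argv[2]
    write_results(results_file, simulate(read_parameters(params_file)))
```

An iterative method (for example, a parameter study or sampling-based uncertainty quantification) would then repeatedly invoke such a driver, treating the simulation purely as an input/output mapping.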
A discussion of the five responses to the 2014 Sandia Verification and Validation (V&V) Challenge Problem, presented within this special issue, is provided hereafter. Overviews of the challenge problem workshop, the workshop participants, and the problem statement are also included, along with brief summaries of the teams' responses to the challenge problem. Issues that arose throughout the responses and that are deemed applicable to the general verification, validation, and uncertainty quantification (VVUQ) community are the main focus of this paper. The discussion is organized into a big-picture comparison of data and model usage, the VVUQ activities, and the differentiating conceptual themes behind the teams' VVUQ strategies. Significant differences are noted in the teams' approaches toward all VVUQ activities, and those deemed most relevant are discussed. Beyond the specific details of the VVUQ implementations, thematic concepts are found to create differences among the approaches; some of the major themes are discussed. Lastly, an encapsulation of the key contributions, the lessons learned, and advice for the future is presented.
It is now commonplace for engineers to build mathematical models of the systems they are designing, building, or testing, and it is nearly universally accepted that phenomenological models of physical systems must be validated prior to use for prediction in consequential scenarios. Yet there are certain situations in which testing only, or neither testing nor modeling, may be economically viable alternatives to modeling and its associated testing. This paper develops an economic framework within which the benefit–cost of modeling and model validation can be evaluated relative to the other options. The development is presented in terms of a challenge problem. Within this framework, we provide a numerical example that quantifies when modeling, calibration, and validation yield a higher benefit–cost than a testing-only or a no-modeling-and-no-testing option.
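As a purely illustrative sketch of the kind of comparison such an economic framework supports (all costs, failure probabilities, and consequence values below are hypothetical and not taken from the paper), the expected net benefit of each option can be tabulated as follows:

```python
# Illustrative benefit-cost comparison (hypothetical numbers only).
# Each option has an up-front cost and a residual probability of a costly
# in-service failure; "net benefit" is the avoided expected loss relative
# to doing nothing, minus the option's cost.

FAILURE_COST = 5.0e6          # consequence of an in-service failure
BASELINE_P_FAIL = 0.10        # failure probability with no modeling, no testing

options = {
    "no modeling, no testing":      {"cost": 0.0,    "p_fail": BASELINE_P_FAIL},
    "testing only":                 {"cost": 4.0e5,  "p_fail": 0.04},
    "model + calibrate + validate": {"cost": 2.5e5,  "p_fail": 0.03},
}

baseline_loss = BASELINE_P_FAIL * FAILURE_COST
for name, opt in options.items():
    expected_loss = opt["p_fail"] * FAILURE_COST
    net_benefit = (baseline_loss - expected_loss) - opt["cost"]
    print(f"{name:32s} net benefit = {net_benefit:12,.0f}")
```

With these assumed numbers the modeling option comes out ahead; different cost and risk assumptions can of course reverse the ranking, which is exactly the trade-off the framework is meant to expose.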
In this discussion paper, we explore different ways to assess the value of verification and validation (V&V) of engineering models. We first present a literature review on the value of V&V and then use value chains and decision trees to show how value can be assessed from a decision maker’s perspective. In this context, the value is what the decision maker is willing to pay for V&V analysis with the understanding that the V&V results are uncertain. The 2014 Sandia V&V Challenge Workshop is used to illustrate these ideas.
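A minimal sketch of how a decision tree can price V&V as the expected value of imperfect information is given below; the prior, signal accuracies, and payoffs are hypothetical and serve only to illustrate the willingness-to-pay idea.

```python
# Toy decision-tree calculation of the value of V&V, framed as the expected
# value of imperfect information (all probabilities and payoffs hypothetical).

P_GOOD = 0.7                 # prior probability the model/design is adequate
PAYOFF = {("deploy", True): 100.0, ("deploy", False): -300.0,
          ("hold", True): 0.0,     ("hold", False): 0.0}


def expected(action, p_good):
    """Expected payoff of an action given a probability of adequacy."""
    return p_good * PAYOFF[(action, True)] + (1 - p_good) * PAYOFF[(action, False)]


# Without V&V: choose the action with the best prior expected payoff.
v_without = max(expected(a, P_GOOD) for a in ("deploy", "hold"))

# With V&V: an imperfect pass/fail signal, characterized by its hit and
# false-alarm rates, updates the probability of adequacy before deciding.
P_PASS_GIVEN_GOOD, P_PASS_GIVEN_BAD = 0.9, 0.2
p_pass = P_PASS_GIVEN_GOOD * P_GOOD + P_PASS_GIVEN_BAD * (1 - P_GOOD)
p_good_given_pass = P_PASS_GIVEN_GOOD * P_GOOD / p_pass
p_good_given_fail = (1 - P_PASS_GIVEN_GOOD) * P_GOOD / (1 - p_pass)

v_with = (p_pass * max(expected(a, p_good_given_pass) for a in ("deploy", "hold"))
          + (1 - p_pass) * max(expected(a, p_good_given_fail) for a in ("deploy", "hold")))

print(f"value of V&V (maximum willingness to pay): {v_with - v_without:.1f}")
```

The difference between the two expected payoffs is an upper bound on what a rational decision maker would pay for the V&V analysis, given that its results remain uncertain.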
The 2014 Sandia Verification & Validation Challenge Workshop was held at the 3rd ASME Verification & Validation Symposium in Las Vegas on May 5-8, 2014. The workshop was built around a challenge problem, formulated as an engineering investigation that required integration of experimental data, modeling and simulation, and verification and validation. The challenge problem served as a common basis for participants to both demonstrate methodology and explore a critical aspect of the field: the role of verification and validation in establishing credibility and supporting decision making. Ten groups presented responses to the challenge problem at the workshop, and the follow-on efforts are documented in this special issue of the ASME Journal of Verification, Validation, and Uncertainty Quantification.
This paper describes the challenge problem associated with the 2014 Sandia Verification and Validation (V&V) Challenge Workshop. The problem was developed to highlight core issues in the V&V of engineering models. It is intended as an analog to projects currently underway at Sandia National Laboratories; in other words, it is a realistic case study in applying V&V methods and integrating information from experimental data and simulations to support decisions. The problem statement includes the data, the model, and directions for participants in the challenge. In addition, the workings of the provided code and the “truth model” used to create the data are revealed. The code, data, and truth model are available in this paper.
As more and more high-consequence applications, such as aerospace systems, leverage computational models to support decisions, assessing the credibility of these models becomes a high priority. Two elements in the credibility assessment are verification and validation. The former focuses on convergence of the solution (i.e., solution verification) and the “pedigree” of the codes used to evaluate the model. The latter assesses the agreement of the model predictions with real data. The outcome of these elements should map to a statement of credibility regarding the predictions, and this credibility should be integrated into the decision-making process. In this paper, we present a perspective on how to integrate these elements into a decision-making process. The key challenge is to span the gap between physics-based codes, quantitative capability assessments (V&V/UQ), and qualitative risk-mitigation concepts.
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.
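To illustrate one of the topics the theory manual covers, the following is a simplified, hypothetical one-dimensional surrogate-based optimization loop with a trust region; Dakota's actual surrogate-based methods are far more general, and this sketch only conveys the basic build-optimize-verify pattern.

```python
# Minimal 1-D surrogate-based optimization loop (illustrative sketch only).
import numpy as np


def expensive_f(x):
    """Stand-in for an expensive simulation response (hypothetical)."""
    return np.sin(3.0 * x) + 0.1 * x ** 2


x_center, radius = 2.0, 1.0
for _ in range(10):
    # Sample the "truth" at a few points in the current trust region,
    # fit a quadratic surrogate, and minimize the surrogate on a fine grid.
    xs = np.linspace(x_center - radius, x_center + radius, 5)
    coeffs = np.polyfit(xs, expensive_f(xs), 2)
    candidates = np.linspace(x_center - radius, x_center + radius, 201)
    x_new = candidates[np.argmin(np.polyval(coeffs, candidates))]

    # Accept or reject the candidate and resize the trust region based on
    # the true (expensive) function value.
    if expensive_f(x_new) < expensive_f(x_center):
        x_center, radius = x_new, radius * 1.5
    else:
        radius *= 0.5

print(f"approximate minimizer: x = {x_center:.4f}, f = {expensive_f(x_center):.4f}")
```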