Publications

Dakota, A Multilevel Parallel Object-Oriented Framework for Design Optimization, Parameter Estimation, Uncertainty Quantification, and Sensitivity Analysis: Version 6.12 Theory Manual

Dalbey, Keith D.; Eldred, Michael S.; Geraci, Gianluca G.; Jakeman, John D.; Maupin, Kathryn A.; Monschke, Jason A.; Seidl, Daniel T.; Swiler, Laura P.; Laros, James H.; Menhorn, Friedrich; Zeng, Xiaoshu

The Dakota toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.
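
As a minimal sketch of the sampling-based uncertainty quantification named above, written in plain Python rather than as a Dakota study, the loop below propagates Latin hypercube samples through a toy model; the model function, input bounds, and sample count are invented for illustration:

```python
# Sketch of sampling-based UQ: propagate Latin hypercube samples
# through a toy response function and summarize the output.
# Dakota wraps a real simulation code behind a loop like this one.
import numpy as np
from scipy.stats import qmc  # Latin hypercube sampler

def model(x):
    """Hypothetical simulation response; stands in for a real code."""
    return x[:, 0] ** 2 + np.sin(3.0 * x[:, 1])

sampler = qmc.LatinHypercube(d=2, seed=1234)
unit_samples = sampler.random(n=200)                  # samples in [0, 1)^2
x = qmc.scale(unit_samples, [0.0, -1.0], [2.0, 1.0])  # map to input bounds

y = model(x)
print(f"mean = {y.mean():.4f}, std = {y.std(ddof=1):.4f}")
```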


Dakota, A Multilevel Parallel Object-Oriented Framework for Design Optimization, Parameter Estimation, Uncertainty Quantification, and Sensitivity Analysis: Version 6.12 User's Manual

Adams, Brian M.; Bohnhoff, William J.; Dalbey, Keith D.; Ebeida, Mohamed S.; Eddy, John P.; Eldred, Michael S.; Hooper, Russell H.; Hough, Patricia D.; Hu, Kenneth H.; Jakeman, John D.; Khalil, Mohammad K.; Maupin, Kathryn A.; Monschke, Jason A.; Ridgway, Elliott M.; Rushdi, Ahmad R.; Seidl, Daniel T.; Stephens, John A.; Swiler, Laura P.; Winokur, Justin W.

The Dakota toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
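
Likewise, the nonlinear least squares calibration capability mentioned in the abstract can be sketched in a few lines; the exponential model, synthetic data, and starting point below are hypothetical stand-ins for a real simulation and real measurements:

```python
# Sketch of parameter estimation via nonlinear least squares,
# analogous to Dakota's calibration capability (toy problem only).
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 4.0, 25)
# Synthetic "experimental" data from known parameters plus noise.
rng = np.random.default_rng(0)
data = 2.5 * np.exp(-1.3 * t) + 0.05 * rng.standard_normal(t.size)

def residuals(theta):
    """Misfit between the model a*exp(-b*t) and the observations."""
    a, b = theta
    return a * np.exp(-b * t) - data

fit = least_squares(residuals, x0=[1.0, 1.0])
print("estimated parameters:", fit.x)  # should land near (2.5, 1.3)
```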


Validation metrics for deterministic and probabilistic data

Journal of Verification, Validation and Uncertainty Quantification

Maupin, Kathryn A.; Swiler, Laura P.; Porter, Nathan W.

Computational modeling and simulation are paramount to modern science. Computational models often replace physical experiments that are prohibitively expensive, dangerous, or occur at extreme scales. Thus, it is critical that these models accurately represent reality and can be used in its place. This paper provides an analysis of metrics that may be used to determine the validity of a computational model. While some metrics have a direct physical meaning and a long history of use, others, especially those that compare probabilistic data, are more difficult to interpret. Furthermore, the process of model validation is often application-specific, making the procedure itself challenging and the results difficult to defend. We therefore provide guidance and recommendations as to which validation metric to use, as well as how to use and decipher the results. An example is included that compares interpretations of various metrics and demonstrates the impact of model and experimental uncertainty on validation processes.
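
To make the deterministic/probabilistic contrast concrete, here is a small sketch, with toy data only (the distributions and sample sizes are invented), comparing one metric of each kind along the lines the abstract describes:

```python
# Sketch of two validation metrics: a deterministic relative error
# and a probabilistic comparison of model vs. experimental samples.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
experiment = rng.normal(loc=10.0, scale=1.0, size=50)   # measured QoI
model      = rng.normal(loc=10.3, scale=1.2, size=500)  # simulated QoI

# Deterministic metric: relative error between mean predictions.
rel_err = abs(model.mean() - experiment.mean()) / abs(experiment.mean())

# Probabilistic metric: Kolmogorov-Smirnov distance between the two
# empirical distributions (one option among several CDF-based metrics).
ks_stat, p_value = ks_2samp(model, experiment)

print(f"relative error = {rel_err:.3f}")
print(f"KS statistic = {ks_stat:.3f}, p = {p_value:.3f}")
```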


Adaptive selection and validation of models of complex systems in the presence of uncertainty

Research in Mathematical Sciences

Maupin, Kathryn A.; Oden, John T.

This paper describes versions of OPAL, the Occam-Plausibility Algorithm (Farrell et al., J Comput Phys 295:189–208, 2015), in which the use of Bayesian model plausibilities is replaced with information-theoretic methods, such as the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Applications to complex systems of coarse-grained molecular models approximating atomistic models of polyethylene materials are described. All of these model selection methods take into account uncertainties in the model, the observational data, the model parameters, and the predicted quantities of interest. A comparison of the models chosen by Bayesian model selection criteria and those chosen by the information-theoretic criteria is given.
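
The information-theoretic criteria mentioned here have simple closed forms, AIC = 2k - 2 ln L and BIC = k ln(n) - 2 ln L, where k is the number of model parameters, n the number of observations, and L the maximized likelihood. A minimal selection loop over hypothetical candidates (the log-likelihoods and parameter counts below are invented) might look like:

```python
# Sketch of information-theoretic model selection with AIC and BIC.
import math

n = 100  # number of observations used to fit each candidate
# Invented (max log-likelihood, parameter count) pairs for three candidates;
# in practice each pair comes from fitting a candidate model to the data.
candidates = {
    "model_A": (-210.4, 3),
    "model_B": (-205.1, 6),
    "model_C": (-204.8, 12),
}

scores = {}
for name, (log_like, k) in candidates.items():
    aic = 2 * k - 2 * log_like
    bic = k * math.log(n) - 2 * log_like
    scores[name] = (aic, bic)
    print(f"{name}: AIC = {aic:.1f}, BIC = {bic:.1f}")

# Select the candidate minimizing each criterion; note that BIC's
# k*ln(n) term penalizes extra parameters more heavily than AIC's 2k.
best_aic = min(scores, key=lambda s: scores[s][0])
best_bic = min(scores, key=lambda s: scores[s][1])
print("selected by AIC:", best_aic, "| selected by BIC:", best_bic)
```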


Validation Metrics for Deterministic and Probabilistic Data

Maupin, Kathryn A.; Swiler, Laura P.

The purpose of this document is to compare and contrast metrics that may be considered for use in validating computational models. Metrics suitable for use in one application, scenario, and/or quantity of interest may not be acceptable in another; these notes merely provide information that may be used as guidance in selecting a validation metric.
