Publications

Deployment of Multifidelity Uncertainty Quantification for Thermal Battery Assessment Part I: Algorithms and Single Cell Results

Eldred, Michael S.; Adams, Brian M.; Geraci, Gianluca G.; Portone, Teresa P.; Ridgway, Elliott M.; Stephens, John A.; Wildey, Timothy M.

This report documents the results of an FY22 ASC V&V level 2 milestone demonstrating new algorithms for multifidelity uncertainty quantification. Part I of the report describes the algorithms, studies their performance on a simple model problem, and then deploys the methods to a thermal battery example from the open literature. Part II (restricted distribution) applies the multifidelity UQ methods to specific thermal batteries of interest to the NNSA/ASC program.


Multi-fidelity information fusion and resource allocation

Jakeman, John D.; Eldred, Michael S.; Geraci, Gianluca G.; Seidl, Daniel T.; Smith, Thomas M.; Gorodetsky, Alex A.; Pham, Trung P.; Narayan, Akil N.; Zeng, Xiaoshu Z.; Ghanem, Roger G.

This project created and demonstrated a framework for the efficient and accurate prediction of complex systems with only a limited amount of highly trusted data. These next generation computational multi-fidelity tools fuse multiple information sources of varying cost and accuracy to reduce the computational and experimental resources needed for designing and assessing complex multi-physics/scale/component systems. These tools have already been used to substantially improve the computational efficiency of simulation aided modeling activities from assessing thermal battery performance to predicting material deformation. This report summarizes the work carried out during a two year LDRD project. Specifically we present our technical accomplishments; project outputs such as publications, presentations and professional leadership activities; and the project’s legacy.


Adaptive experimental design for multi-fidelity surrogate modeling of multi-disciplinary systems

International Journal for Numerical Methods in Engineering

Jakeman, John D.; Friedman, Sam; Eldred, Michael S.; Tamellini, Lorenzo; Gorodetsky, Alex A.; Allaire, Doug

We present an adaptive algorithm for constructing surrogate models of multi-disciplinary systems composed of a set of coupled components. With this goal we introduce “coupling” variables with a priori unknown distributions that allow surrogates of each component to be built independently. Once built, the surrogates of the components are combined to form an integrated-surrogate that can be used to predict system-level quantities of interest at a fraction of the cost of the original model. The error in the integrated-surrogate is greedily minimized using an experimental design procedure that allocates the amount of training data, used to construct each component-surrogate, based on the contribution of those surrogates to the error of the integrated-surrogate. The multi-fidelity procedure presented is a generalization of multi-index stochastic collocation that can leverage ensembles of models of varying cost and accuracy, for one or more components, to reduce the computational cost of constructing the integrated-surrogate. Extensive numerical results demonstrate that, for a fixed computational budget, our algorithm is able to produce surrogates that are orders of magnitude more accurate than methods that treat the integrated system as a black-box.
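
A minimal sketch of the coupling-variable idea described above for a two-component feed-forward system (an editor's illustration, not code from the paper): each component surrogate is built independently over its own inputs, including an assumed a priori range for the coupling variable, and the surrogates are then composed into an integrated surrogate. The components, ranges, and polynomial degrees are invented.

```python
# Sketch: independent component surrogates over a coupling variable, then composed.
import numpy as np

def component_1(x):      # upstream component: design variable x -> coupling variable y
    return np.exp(-x) + 0.1 * x

def component_2(y):      # downstream component: coupling variable y -> system QoI q
    return np.cos(3 * y)

x_train = np.linspace(0, 2, 8)
y_train = np.linspace(0.1, 1.2, 8)        # assumed a priori range for the coupling variable

surr_1 = np.polynomial.Polynomial.fit(x_train, component_1(x_train), deg=5)
surr_2 = np.polynomial.Polynomial.fit(y_train, component_2(y_train), deg=5)

def system_surrogate(x):                  # integrated surrogate: compose the two pieces
    return surr_2(surr_1(x))

x_test = np.linspace(0, 2, 5)
print(np.max(np.abs(system_surrogate(x_test) - component_2(component_1(x_test)))))
```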


srMO-BO-3GP: A sequential regularized multi-objective Bayesian optimization for constrained design applications using an uncertain Pareto classifier

Journal of Mechanical Design

Tran, Anh; Eldred, Michael S.; McCann, Scott M.; Wang, Yan W.

Bayesian optimization (BO) is an efficient and flexible global optimization framework that is applicable to a very wide range of engineering applications. To leverage the capability of the classical BO, many extensions, including multi-objective, multi-fidelity, parallelization, and latent-variable modeling, have been proposed to address the limitations of the classical BO framework. In this work, we propose a novel multi-objective BO formalism, called srMO-BO-3GP, to solve multi-objective optimization problems in a sequential setting. Three different Gaussian processes (GPs) are stacked together, where each of the GPs is assigned a different task. The first GP is used to approximate a single-objective computed from the multi-objective definition, the second GP is used to learn the unknown constraints, and the third one is used to learn the uncertain Pareto frontier. At each iteration, a multi-objective augmented Tchebycheff function is adopted to convert the multiple objectives to a single objective, and a regularized ridge term is introduced to smooth the single-objective function. Finally, we couple the third GP with the classical BO framework to explore the convergence and diversity of the Pareto frontier through an acquisition function that balances exploitation and exploration. The proposed framework is demonstrated using several numerical benchmark functions, as well as a thermomechanical finite element model for flip-chip package design optimization.
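
As an editor's sketch of the scalarization step described above (not code from the paper), the function below evaluates an augmented Tchebycheff scalarization with an added ridge penalty. The weights, rho, lam, and the exact placement of the regularization term are assumptions for illustration only.

```python
# Sketch: augmented Tchebycheff scalarization of multiple objectives plus a ridge term.
import numpy as np

def aug_tchebycheff(f, z_star, w, rho=0.05, lam=1e-3, x=None):
    """Scalarize objective values f relative to an (approximate) ideal point z_star."""
    f = np.asarray(f, dtype=float)
    diff = w * np.abs(f - z_star)
    value = diff.max() + rho * diff.sum()       # augmented Tchebycheff term
    if x is not None:                           # optional ridge smoothing term (assumed form)
        value += lam * np.dot(x, x)
    return value

# Example: two objectives evaluated at a candidate design x
x = np.array([0.4, -0.2])
f = np.array([1.3, 0.7])                        # objective values at x
z_star = np.array([1.0, 0.5])                   # ideal point estimate
w = np.array([0.6, 0.4])                        # preference weights
print(aug_tchebycheff(f, z_star, w, x=x))
```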


Adaptive resource allocation for surrogate modeling of systems comprised of multiple disciplines with varying fidelity

Friedman, Sam F.; Jakeman, John D.; Eldred, Michael S.; Tamellini, Lorenzo T.; Gorodetsky, Alex A.; Allaire, Doug A.

We present an adaptive algorithm for constructing surrogate models for integrated systems composed of a set of coupled components. With this goal we introduce ‘coupling’ variables with a priori unknown distributions that allow approximations of each component to be built independently. Once built, the surrogates of the components are combined and used to predict system-level quantities of interest (QoI) at a fraction of the cost of interrogating the full system model. We use a greedy experimental design procedure, based upon a modification of Multi-Index Stochastic Collocation (MISC), to minimize the error of the combined surrogate. This is achieved by refining each component surrogate in accordance with its relative contribution to error in the approximation of the system-level QoI. Our adaptation of MISC is a multi-fidelity procedure that can leverage ensembles of models of varying cost and accuracy, for one or more components, to produce estimates of system-level QoI. Several numerical examples demonstrate the efficacy of the proposed approach on systems involving feed-forward and feedback coupling. For a fixed computational budget, the proposed algorithm is able to produce approximations that are orders of magnitude more accurate than approximations that treat the integrated system as a black-box.


A generalized approximate control variate framework for multifidelity uncertainty quantification

Journal of Computational Physics

Gorodetsky, Alex A.; Geraci, Gianluca G.; Eldred, Michael S.; Jakeman, John D.

We describe and analyze a variance reduction approach for Monte Carlo (MC) sampling that accelerates the estimation of statistics of computationally expensive simulation models using an ensemble of models with lower cost. These lower cost models — which are typically lower fidelity with unknown statistics — are used to reduce the variance in statistical estimators relative to a MC estimator with equivalent cost. We derive the conditions under which our proposed approximate control variate framework recovers existing multifidelity variance reduction schemes as special cases. We demonstrate that existing recursive/nested strategies are suboptimal because they use the additional low-fidelity models only to efficiently estimate the unknown mean of the first low-fidelity model. As a result, they cannot achieve variance reduction beyond that of a control variate estimator that uses a single low-fidelity model with known mean. However, there often exists about an order-of-magnitude gap between the maximum achievable variance reduction using all low-fidelity models and that achieved by a single low-fidelity model with known mean. We show that our proposed approach can exploit this gap to achieve greater variance reduction by using non-recursive sampling schemes. The proposed strategy reduces the total cost of accurately estimating statistics, especially in cases where only low-fidelity simulation models are accessible for additional evaluations. Several analytic examples and an example with a hyperbolic PDE describing elastic wave propagation in heterogeneous media are used to illustrate the main features of the methodology.
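A minimal sketch of the control variate idea described above (an editor's illustration, not the paper's generalized approximate control variate implementation): a single low-fidelity model with estimated, rather than known, mean is used to reduce the variance of a Monte Carlo mean estimate. The models f_hi and f_lo and the sample sizes are invented.

```python
# Sketch: single-low-fidelity approximate control variate Monte Carlo estimator.
import numpy as np

rng = np.random.default_rng(0)

def f_hi(x):          # hypothetical "high-fidelity" model
    return np.sin(x) + 0.05 * x**2

def f_lo(x):          # hypothetical cheaper, correlated "low-fidelity" model
    return np.sin(x)

N = 200                                    # high-fidelity samples
M = 20000                                  # extra low-fidelity samples (cheap)
x_shared = rng.uniform(-np.pi, np.pi, N)
x_extra = rng.uniform(-np.pi, np.pi, M)

y_hi = f_hi(x_shared)
y_lo = f_lo(x_shared)

# Control variate weight estimated from the shared samples
alpha = np.cov(y_hi, y_lo)[0, 1] / np.var(y_lo, ddof=1)

# Approximate control variate: the low-fidelity mean is itself estimated,
# here from the larger (shared + extra) low-fidelity sample set.
mu_lo_hat = np.mean(np.concatenate([y_lo, f_lo(x_extra)]))

est_mc = y_hi.mean()                                  # plain Monte Carlo
est_acv = y_hi.mean() - alpha * (y_lo.mean() - mu_lo_hat)

print(f"MC estimate  : {est_mc:.4f}")
print(f"ACV estimate : {est_acv:.4f}")
```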


Towards an integrated and efficient framework for leveraging reduced order models for multifidelity uncertainty quantification

AIAA Scitech 2020 Forum

Blonigan, Patrick J.; Geraci, Gianluca G.; Rizzi, Francesco N.; Eldred, Michael S.

Truly predictive numerical simulations can only be obtained by performing Uncertainty Quantification. However, many realistic engineering applications require extremely complex and computationally expensive high-fidelity numerical simulations for their accurate performance characterization. Very often the combination of complex physical models and extreme operating conditions can easily lead to hundreds of uncertain parameters that need to be propagated through high-fidelity codes. Under these circumstances, a single-fidelity uncertainty quantification approach, i.e., a workflow that uses only high-fidelity simulations, is infeasible due to its prohibitive overall computational cost. To overcome this difficulty, in recent years multifidelity strategies emerged and gained popularity. Their core idea is to combine simulations with varying levels of fidelity/accuracy in order to obtain estimators or surrogates that can yield the same accuracy as their single-fidelity counterparts at a much lower computational cost. This goal is usually accomplished by defining a priori a sequence of discretization levels or physical modeling assumptions that can be used to decrease the complexity of a numerical model realization and thus its computational cost. Less attention has been dedicated to low-fidelity models that can be built directly from a small number of available high-fidelity simulations. In this work we focus our attention on reduced order models (ROMs). Our main goal is to investigate the combination of multifidelity uncertainty quantification and ROMs in order to evaluate the possibility of obtaining an efficient framework for propagating uncertainties through expensive numerical codes. We focus our attention on sampling-based multifidelity approaches, like the multifidelity control variate, and we consider several scenarios for a numerical test problem, namely the Kuramoto-Sivashinsky equation, for which the efficiency of the multifidelity-ROM estimator is compared to the standard (single-fidelity) Monte Carlo approach.


srMO-BO-3GP: A sequential regularized multi-objective constrained Bayesian optimization for design applications

Proceedings of the ASME Design Engineering Technical Conference

Tran, Anh; Eldred, Michael S.; McCann, Scott; Wang, Yan

Bayesian optimization (BO) is an efficient and flexible global optimization framework that is applicable to a very wide range of engineering applications. To leverage the capability of the classical BO, many extensions, including multi-objective, multi-fidelity, parallelization, and latent-variable modeling, have been proposed to address the limitations of the classical BO framework. In this work, we propose a novel multi-objective (MO) extension, called srMO-BO-3GP, to solve MO optimization problems in a sequential setting. Three different Gaussian processes (GPs) are stacked together, where each of the GPs is assigned a different task: the first GP is used to approximate a single-objective computed from the MO definition, the second GP is used to learn the unknown constraints, and the third GP is used to learn the uncertain Pareto frontier. At each iteration, an MO augmented Tchebycheff function converting MO to single-objective is adopted and extended with a regularized ridge term, where the regularization is introduced to smooth the single-objective function. Finally, we couple the third GP with the classical BO framework to explore the richness and diversity of the Pareto frontier through an acquisition function that balances exploitation and exploration. The proposed framework is demonstrated using several numerical benchmark functions, as well as a thermomechanical finite element model for flip-chip package design optimization.


Multifidelity optimization under uncertainty for a scramjet-inspired problem

Proceedings of the 6th European Conference on Computational Mechanics: Solids, Structures and Coupled Problems, ECCM 2018 and 7th European Conference on Computational Fluid Dynamics, ECFD 2018

Menhorn, Friedrich M.; Geraci, Gianluca G.; Eldred, Michael S.; Marzouk, Youssef M.

SNOWPAC (Stochastic Nonlinear Optimization With Path-Augmented Constraints) is a method for stochastic nonlinear constrained derivative-free optimization. For such problems, it extends the path-augmented constraints framework introduced by the deterministic optimization method NOWPAC and uses a noise-adapted trust region approach and Gaussian processes for noise reduction. In recent developments, SNOWPAC is available in the DAKOTA framework which offers a highly flexible interface to couple the optimizer with different sampling strategies or surrogate models. In this paper we discuss details of SNOWPAC and demonstrate the coupling with DAKOTA. We showcase the approach by presenting design optimization results of a shape in a 2D supersonic duct. This simulation is supposed to imitate the behavior of the flow in a SCRAMJET simulation but at a much lower computational cost. Additionally different mesh or model fidelities can be tested. Thus, it serves as a convenient test case before moving to costly SCRAMJET computations. Here, we study deterministic results and results obtained by introducing uncertainty on inflow parameters. As sampling strategies we compare classical Monte Carlo sampling with multilevel Monte Carlo approaches for which we developed new error estimators. All approaches show a reasonable optimization of the design over the objective while maintaining or seeking feasibility. Furthermore, we achieve significant reductions in computational cost by using multilevel approaches that combine solutions from different grid resolutions.


Multilevel uncertainty quantification of a wind turbine large eddy simulation model

Proceedings of the 6th European Conference on Computational Mechanics: Solids, Structures and Coupled Problems, ECCM 2018 and 7th European Conference on Computational Fluid Dynamics, ECFD 2018

Maniaci, David C.; Frankel, Ari L.; Geraci, Gianluca G.; Blaylock, Myra L.; Eldred, Michael S.

Wind energy is stochastic in nature; the prediction of aerodynamic quantities and loads relevant to wind energy applications involves modeling the interaction of a range of physics over many scales for many different cases. These predictions require a range of model fidelity, as predictive models that include the interaction of atmospheric and wind turbine wake physics can take weeks to solve on institutional high performance computing systems. In order to quantify the uncertainty in predictions of wind energy quantities with multiple models, researchers at Sandia National Laboratories have applied Multilevel-Multifidelity methods. A demonstration study was completed using simulations of an NREL 5MW rotor in an atmospheric boundary layer with wake interaction. The flow was simulated with two models of disparate fidelity: an actuator line wind plant large-eddy scale model, Nalu, using several mesh resolutions in combination with a lower-fidelity model, OpenFAST. Uncertainties in the flow conditions and actuator forces were propagated through the model using Monte Carlo sampling to estimate the velocity defect in the wake and forces on the rotor. Coarse-mesh simulations were leveraged along with the lower-fidelity flow model to reduce the variance of the estimator, and the resulting Multilevel-Multifidelity strategy demonstrated a substantial improvement in estimator efficiency compared to the standard Monte Carlo method.


MFNets: Multi-fidelity data-driven networks for Bayesian learning and prediction

International Journal for Uncertainty Quantification

Gorodetsky, Alex A.; Jakeman, John D.; Geraci, Gianluca G.; Eldred, Michael S.

This paper presents a Bayesian multifidelity uncertainty quantification framework, called MFNets, which can be used to overcome three of the major challenges that arise when data from different sources are used to enhance statistical estimation and prediction with quantified uncertainty. Specifically, we demonstrate that MFNets can (1) fuse heterogeneous data sources arising from simulations with different parameterizations, e.g., simulation models with different uncertain parameters or data sets collected under different environmental conditions; (2) encode known relationships among data sources to reduce data requirements; and (3) improve the robustness of existing multifidelity approaches to corrupted data. In this paper we use MFNets to construct linear-subspace surrogates and estimate statistics using Monte Carlo sampling. In addition to numerical examples highlighting the efficacy of MFNets we also provide a number of theoretical results. Firstly we provide a mechanism to assess the quality of the posterior mean of a MFNets Monte Carlo estimator as a frequentist estimator. We then use this result to compare MFNets estimators to existing single fidelity, multilevel, and control variate Monte Carlo estimators. In this context, we show that the Monte Carlo-based control variate estimator can be derived entirely from the use of Bayes rule and linear-Gaussian models—to our knowledge the first such derivation. Finally, we demonstrate the ability to work with different uncertain parameters across different models.


Interatomic Potentials Models for Cu-Ni and Cu-Zr Alloys

Safta, Cosmin S.; Geraci, Gianluca G.; Eldred, Michael S.; Najm, H.N.; Riegner, David R.; Windl, Wolfgang W.

This study explores a Bayesian calibration framework for the RAMPAGE alloy potential model for Cu-Ni and Cu-Zr systems. In RAMPAGE potentials, it is proposed that once calibrated potentials for individual elements are available, the inter-species interactions can be described by fitting a Morse potential for pair interactions with three parameters, while densities for the embedding function can be scaled by two parameters from the elemental densities. Global sensitivity analysis tools were employed to understand the impact each parameter has on the MD simulation results. A transitional Markov Chain Monte Carlo algorithm was used to generate samples from the multimodal posterior distribution consistent with the discrepancy between MD simulation results and DFT data. For the Cu-Ni system the posterior predictive tests indicate that the fitted interatomic potential model agrees well with the DFT data, justifying the basic RAMPAGE assumptions. For the Cu-Zr system, where the phase diagram suggests more complicated atomic interactions than in the case of Cu-Ni, the RAMPAGE potential captured only a subset of the DFT data. The resulting posterior distribution for the 5 model parameters exhibited several modes, with each mode corresponding to specific simulation data and a suboptimal agreement with the DFT results.


Leveraging Intrinsic Principal Directions for Multifidelity Uncertainty Quantification

Geraci, Gianluca G.; Eldred, Michael S.

In this work we propose an approach for accelerating Uncertainty Quantification (UQ) analysis in the context of multifidelity applications. In the presence of complex multiphysics applications, which often require a prohibitive computational cost for each evaluation, multifidelity UQ techniques try to accelerate the convergence of statistics by leveraging the information collected from a larger number of lower-fidelity model realizations. However, at the state of the art, the performance of virtually all multifidelity UQ techniques is tied to the correlation between the high- and low-fidelity models. In this work we propose to design a multifidelity UQ framework based on the identification of independent important directions for each model. The main idea is that if the responses of each model can be represented in a common space, this space can be shared to enhance the correlation when the samples are drawn with respect to it instead of the original variables. There are also two main additional advantages that follow from this approach. First, the models might be correlated even if their original parametrizations are chosen independently. Second, if the shared space between models has a lower dimensionality than the original spaces, the UQ analysis might benefit from a dimension-reduction standpoint. In this work we design this general framework and test it on several problems, ranging from analytical functions for verification purposes to more challenging applications such as an aero-thermo-structural analysis and a scramjet flow analysis.


Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

AIAA Journal

Huan, Xun H.; Safta, Cosmin S.; Sargsyan, Khachik S.; Geraci, Gianluca G.; Eldred, Michael S.; Vane, Zachary P.; Lacaze, Guilhem L.; Oefelein, Joseph C.; Najm, H.N.

The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system’s stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.


Global sensitivity analysis and estimation of model error, toward uncertainty quantification in scramjet computations

AIAA Journal

Huan, Xun H.; Safta, Cosmin S.; Geraci, Gianluca G.; Eldred, Michael S.; Vane, Zachary P.; Lacaze, Guilhem M.; Oefelein, Joseph C.; Najm, H.N.

The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. These methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.


Multifidelity uncertainty quantification using spectral stochastic discrepancy models

Handbook of Uncertainty Quantification

Eldred, Michael S.; Ng, Leo W.T.; Barone, Matthew F.; Domino, Stefan P.

When faced with a restrictive evaluation budget that is typical of today's high-fidelity simulation models, the effective exploitation of lower-fidelity alternatives within the uncertainty quantification (UQ) process becomes critically important. Herein, we explore the use of multifidelity modeling within UQ, for which we rigorously combine information from multiple simulation-based models within a hierarchy of fidelity, in seeking accurate high-fidelity statistics at lower computational cost. Motivated by correction functions that enable the provable convergence of a multifidelity optimization approach to an optimal high-fidelity point solution, we extend these ideas to discrepancy modeling within a stochastic domain and seek convergence of a multifidelity uncertainty quantification process to globally integrated high-fidelity statistics. For constructing stochastic models of both the low-fidelity model and the model discrepancy, we employ stochastic expansion methods (non-intrusive polynomial chaos and stochastic collocation) computed by integration/interpolation on structured sparse grids or regularized regression on unstructured grids. We seek to employ a coarsely resolved grid for the discrepancy in combination with a more finely resolved grid for the low-fidelity model. The resolutions of these grids may be defined statically or determined through uniform and adaptive refinement processes. Adaptive refinement is particularly attractive, as it has the ability to preferentially target stochastic regions where the model discrepancy becomes more complex, i.e., where the predictive capabilities of the low-fidelity model start to break down and greater reliance on the high-fidelity model (via the discrepancy) is necessary. These adaptive refinement processes can either be performed separately for the different grids or within a coordinated multifidelity algorithm. In particular, we present an adaptive greedy multifidelity approach in which we extend the generalized sparse grid concept to consider candidate index set refinements drawn from multiple sparse grids, as governed by induced changes in the statistical quantities of interest and normalized by relative computational cost. Through a series of numerical experiments using statically defined sparse grids, adaptive multifidelity sparse grids, and multifidelity compressed sensing, we demonstrate that the multifidelity UQ process converges more rapidly than a single-fidelity UQ in cases where the variance of the discrepancy is reduced relative to the variance of the high-fidelity model (resulting in reductions in initial stochastic error), where the spectrum of the expansion coefficients of the model discrepancy decays more rapidly than that of the high-fidelity model (resulting in accelerated convergence rates), and/or where the discrepancy is more sparse than the high-fidelity model (requiring the recovery of fewer significant terms).
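
The additive discrepancy idea described above can be sketched in one dimension as follows (an editor's illustration, not code from the chapter): a finely resolved surrogate of a cheap low-fidelity model is combined with a coarsely resolved surrogate of the high-minus-low discrepancy built from only a few high-fidelity runs. The models, point counts, and polynomial degrees are invented and stand in for the chapter's stochastic expansions and sparse grids.

```python
# Sketch: multifidelity surrogate = low-fidelity surrogate + discrepancy surrogate.
import numpy as np

def lo(x):  return np.sin(2 * x)                      # cheap model
def hi(x):  return np.sin(2 * x) + 0.3 * x**2 - 0.1   # expensive model

x_lo = np.linspace(-1, 1, 40)          # many low-fidelity evaluations
x_hi = np.linspace(-1, 1, 5)           # few high-fidelity evaluations

lo_surr = np.polynomial.Polynomial.fit(x_lo, lo(x_lo), deg=9)
disc_surr = np.polynomial.Polynomial.fit(x_hi, hi(x_hi) - lo(x_hi), deg=2)

def mf_surrogate(x):
    return lo_surr(x) + disc_surr(x)   # multifidelity prediction

x_test = np.linspace(-1, 1, 7)
print(np.max(np.abs(mf_surrogate(x_test) - hi(x_test))))   # small discrepancy-model error
```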


A multifidelity multilevel Monte Carlo method for uncertainty propagation in aerospace applications

19th AIAA Non-Deterministic Approaches Conference, 2017

Geraci, Gianluca G.; Eldred, Michael S.; Iaccarino, Gianluca

The accurate evaluation of the performance of complex engineering devices needs to rely on high-fidelity numerical simulations and the systematic characterization and propagation of uncertainties. Several sources of uncertainty may impact the performance of an engineering device through operating conditions, manufacturing tolerances, and even physical models. In the presence of multiphysics systems the number of uncertain parameters can be fairly large, and their propagation through the numerical codes remains prohibitive because the overall computational budget often allows for only a handful of such high-fidelity realizations. On the other hand, common engineering practice can take advantage of a solid history of development and assessment of so-called low-fidelity models, which, albeit less accurate, are often capable of at least capturing the overall trends and parameter dependencies of the system. In this contribution we address the forward propagation of uncertain parameters by relying on statistical estimators built on sequences of numerical and physical discretizations which are provably convergent to the high-fidelity statistics, while exploiting low-fidelity computational models to increase the reliability and confidence in the numerical predictions. The performance of the approaches is demonstrated by means of two fairly complicated aerospace problems, namely the aero-thermo-structural analysis of a turbofan engine nozzle and a flow through a scramjet-like device.
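
A minimal sketch of the multilevel idea described above (an editor's illustration, not the paper's estimators): a two-level telescoping Monte Carlo estimator that combines many cheap coarse-level samples with a few samples of the fine-minus-coarse correction. The models and sample counts are invented.

```python
# Sketch: two-level multilevel Monte Carlo estimate of E[Q].
import numpy as np

rng = np.random.default_rng(4)

def q_coarse(x):   # coarse-discretization approximation of a QoI (cheap)
    return np.sin(x)

def q_fine(x):     # fine-discretization QoI (expensive in practice)
    return np.sin(x) + 0.02 * np.cos(5 * x)

N0, N1 = 50000, 200                         # per-level sample sizes
x0 = rng.uniform(0, np.pi, N0)
x1 = rng.uniform(0, np.pi, N1)

# Telescoping sum: E[Q_fine] = E[Q_coarse] + E[Q_fine - Q_coarse]
est = q_coarse(x0).mean() + (q_fine(x1) - q_coarse(x1)).mean()
print(f"two-level MLMC estimate of E[Q]: {est:.4f}")
```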


Uncertainty Quantification in LES Computations of Turbulent Multiphase Combustion in a Scramjet Engine (ScramjetUQ)

Najm, H.N.; Debusschere, Bert D.; Safta, Cosmin S.; Sargsyan, Khachik S.; Huan, Xun H.; Oefelein, Joseph C.; Lacaze, Guilhem M.; Vane, Zachary P.; Eldred, Michael S.; Geraci, Gianluca G.; Knio, Omar K.; Sraj, I.S.; Scovazzi, G.S.; Colomes, O.C.; Marzouk, Y.M.; Zahm, O.Z.; Menhorn, F.M.; Ghanem, R.G.; Tsilifis, P.T.

Abstract not provided.

Uncertainty Quantification in LES Computations of Turbulent Multiphase Combustion in a Scramjet Engine

Najm, H.N.; Debusschere, Bert D.; Safta, Cosmin S.; Sargsyan, Khachik S.; Huan, Xun H.; Oefelein, Joseph C.; Lacaze, Guilhem M.; Vane, Zachary P.; Eldred, Michael S.; Geraci, G.G.; Knio, O.K.; Sraj, I.S.; Scovazzi, G.S.; Colomes, O.C.; Marzouk, Y.M.; Zahm, O.Z.; Augustin, F.A.; Menhorn, F.M.; Ghanem, R.G.; Tsilifis, P.T.

Abstract not provided.

Enhancing ℓ1-minimization estimates of polynomial chaos expansions using basis selection

Journal of Computational Physics

Jakeman, J.D.; Eldred, Michael S.; Sargsyan, Khachik S.

In this paper we present a basis selection method that can be used with ℓ1-minimization to adaptively determine the large coefficients of polynomial chaos expansions (PCE). The adaptive construction produces anisotropic basis sets that have more terms in important dimensions and limits the number of unimportant terms that increase mutual coherence and thus degrade the performance of ℓ1-minimization. The important features and the accuracy of basis selection are demonstrated with a number of numerical examples. Specifically, we show that for a given computational budget, basis selection produces a more accurate PCE than would be obtained if the basis were fixed a priori. We also demonstrate that basis selection can be applied with non-uniform random variables and can leverage gradient information.
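
A minimal sketch of the ℓ1-minimization step described above, without the paper's adaptive basis-selection loop (an editor's illustration): a fixed 1D Legendre basis is assumed, the true response is sparse in that basis, and scikit-learn's Lasso is used as the ℓ1 solver. The sample size, degree, and tolerance are illustrative.

```python
# Sketch: l1-regularized recovery of sparse Legendre PCE coefficients.
import numpy as np
from numpy.polynomial import legendre
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)

def model(x):                    # sparse in the Legendre basis: P0, P3, P7
    c = np.zeros(8)
    c[0], c[3], c[7] = 1.0, 0.5, 0.25
    return legendre.legval(x, c)

n_samples, max_degree = 25, 30   # fewer samples than basis terms
x = rng.uniform(-1, 1, n_samples)
A = legendre.legvander(x, max_degree)      # basis/measurement matrix
y = model(x)

coef = Lasso(alpha=1e-4, fit_intercept=False, max_iter=50000).fit(A, y).coef_
print(np.round(coef[:9], 3))     # the large entries should appear at indices 0, 3, 7
```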


Overview of selected DOE/NNSA predictive science initiatives: The predictive science academic alliance program and the DAKOTA project

53rd AIAA Aerospace Sciences Meeting

Eldred, Michael S.; Swiler, Laura P.; Adams, Brian M.; Jakeman, J.D.

This paper supports a special session on "Frontiers of Uncertainty Management for Complex Aerospace Systems" with the intent of summarizing two aspects of the DOE/NNSA Accelerated Strategic Computing (ASC) program, each of which is focused on predictive science using complex simulation models. The first aspect is academic outreach, as enabled by the Predictive Science Academic Alliance Program (PSAAP). The second aspect is the Dakota project at Sandia National Laboratories, which develops and deploys uncertainty quantification capabilities focused on high fidelity modeling and simulation on large-scale parallel computers.


Probabilistic methods for sensitivity analysis and calibration in the NASA challenge problem

Journal of Aerospace Information Systems

Safta, Cosmin S.; Sargsyan, Khachik S.; Najm, H.N.; Chowdhary, Kenny; Debusschere, Bert D.; Swiler, Laura P.; Eldred, Michael S.

In this paper, a series of algorithms are proposed to address the problems in the NASA Langley Research Center Multidisciplinary Uncertainty Quantification Challenge. A Bayesian approach is employed to characterize and calibrate the epistemic parameters based on the available data, whereas a variance-based global sensitivity analysis is used to rank the epistemic and aleatory model parameters. A nested sampling of the aleatory-epistemic space is proposed to propagate uncertainties from model parameters to output quantities of interest.
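
A minimal sketch of the nested aleatory-epistemic sampling idea described above (an editor's illustration with an invented toy model and arbitrary sample sizes, not the NASA challenge problem itself): the outer loop samples the epistemic parameter over its interval, the inner loop performs Monte Carlo over the aleatory parameter, and the spread of inner-loop statistics gives epistemic bounds.

```python
# Sketch: nested (outer epistemic, inner aleatory) uncertainty propagation.
import numpy as np

rng = np.random.default_rng(2)

def model(theta_e, theta_a):
    return theta_e * np.exp(-theta_a) + theta_a**2

n_outer, n_inner = 50, 2000
means = []
for theta_e in rng.uniform(0.5, 1.5, n_outer):     # epistemic: interval only
    theta_a = rng.normal(0.0, 1.0, n_inner)        # aleatory: known density
    means.append(model(theta_e, theta_a).mean())   # inner Monte Carlo statistic

print(f"epistemic bounds on E[Q]: [{min(means):.3f}, {max(means):.3f}]")
```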


Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis

Adams, Brian M.; Jakeman, John D.; Swiler, Laura P.; Stephens, John A.; Vigil, Dena V.; Wildey, Timothy M.; Bauman, Lara E.; Bohnhoff, William J.; Dalbey, Keith D.; Eddy, John P.; Ebeida, Mohamed S.; Eldred, Michael S.; Hough, Patricia D.; Hu, Kenneth H.

The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.


Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis version 6.0 theory manual

Adams, Brian M.; Jakeman, John D.; Swiler, Laura P.; Stephens, John A.; Vigil, Dena V.; Wildey, Timothy M.; Bauman, Lara E.; Bohnhoff, William J.; Dalbey, Keith D.; Eddy, John P.; Ebeida, Mohamed S.; Eldred, Michael S.; Hough, Patricia D.; Hu, Kenneth H.

The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.


Sensitivity analysis techniques applied to a system of hyperbolic conservation laws

Reliability Engineering and System Safety

Weirs, V.G.; Kamm, James R.; Swiler, Laura P.; Tarantola, Stefano; Ratto, Marco; Adams, Brian M.; Rider, William J.; Eldred, Michael S.

Sensitivity analysis is comprised of techniques to quantify the effects of the input variables on a set of outputs. In particular, sensitivity indices can be used to infer which input parameters most significantly affect the results of a computational model. With continually increasing computing power, sensitivity analysis has become an important technique by which to understand the behavior of large-scale computer simulations. Many sensitivity analysis methods rely on sampling from distributions of the inputs. Such sampling-based methods can be computationally expensive, requiring many evaluations of the simulation; in this case, the Sobol method provides an easy and accurate way to compute variance-based measures, provided a sufficient number of model evaluations are available. As an alternative, meta-modeling approaches have been devised to approximate the response surface and estimate various measures of sensitivity. In this work, we consider a variety of sensitivity analysis methods, including different sampling strategies, different meta-models, and different ways of evaluating variance-based sensitivity indices. The problem we consider is the 1-D Riemann problem. By a careful choice of inputs, discontinuous solutions are obtained, leading to discontinuous response surfaces; such surfaces can be particularly problematic for meta-modeling approaches. The goal of this study is to compare the estimated sensitivity indices with exact values and to evaluate the convergence of these estimates with increasing sample sizes and under an increasing number of meta-model evaluations. © 2011 Elsevier Ltd. All rights reserved.
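
A minimal sketch of sampling-based first-order Sobol index estimation in the pick-freeze (Saltelli) form described above (an editor's illustration using a toy linear model with known indices S1 = 0.2 and S2 = 0.8, not the Riemann problem studied in the paper):

```python
# Sketch: pick-freeze estimator of first-order Sobol indices.
import numpy as np

rng = np.random.default_rng(3)

def f(x):                         # x has shape (N, 2), inputs ~ U(0, 1)
    return x[:, 0] + 2.0 * x[:, 1]

N, d = 100000, 2
A = rng.uniform(0, 1, (N, d))
B = rng.uniform(0, 1, (N, d))
fA, fB = f(A), f(B)
var = np.var(np.concatenate([fA, fB]), ddof=1)

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]           # replace only column i ("pick-freeze")
    Si = np.mean(fB * (f(ABi) - fA)) / var
    print(f"S{i + 1} ~ {Si:.3f}")   # approaches 0.2 and 0.8
```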


DAKOTA : a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis

Adams, Brian M.; Bohnhoff, William J.; Dalbey, Keith D.; Eddy, John P.; Eldred, Michael S.; Hough, Patricia D.; Lefantzi, Sophia L.; Swiler, Laura P.; Vigil, Dena V.

The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the DAKOTA software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of DAKOTA-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of DAKOTA's iterative analysis capabilities.


Investigation of advanced UQ for CRUD prediction with VIPRE

Eldred, Michael S.

This document summarizes the results from a level 3 milestone study within the CASL VUQ effort. It demonstrates the application of 'advanced UQ,' in particular dimension-adaptive p-refinement for polynomial chaos and stochastic collocation. The study calculates statistics for several quantities of interest that are indicators for the formation of CRUD (Chalk River unidentified deposit), which can lead to CIPS (CRUD induced power shift). Stochastic expansion methods are attractive methods for uncertainty quantification due to their fast convergence properties. For smooth functions (i.e., analytic, infinitely-differentiable) in L^2 (i.e., possessing finite variance), exponential convergence rates can be obtained under order refinement for integrated statistical quantities of interest such as mean, variance, and probability. Two stochastic expansion methods are of interest: nonintrusive polynomial chaos expansion (PCE), which computes coefficients for a known basis of multivariate orthogonal polynomials, and stochastic collocation (SC), which forms multivariate interpolation polynomials for known coefficients. Within the DAKOTA project, recent research in stochastic expansion methods has focused on automated polynomial order refinement ('p-refinement') of expansions to support scalability to higher dimensional random input spaces [4, 3]. By preferentially refining only in the most important dimensions of the input space, the applicability of these methods can be extended from O(10^0)-O(10^1) random variables to O(10^2) and beyond, depending on the degree of anisotropy (i.e., the extent to which random input variables have differing degrees of influence on the statistical quantities of interest (QOIs)). Thus, the purpose of this study is to investigate the application of these adaptive stochastic expansion methods to the analysis of CRUD using the VIPRE simulation tools for two different plant models of differing random dimension, anisotropy, and smoothness.


Reliability-based design optimization using efficient global reliability analysis

Eldred, Michael S.

Finding the optimal (lightest, least expensive, etc.) design for an engineered component that meets or exceeds a specified level of reliability is a problem of obvious interest across a wide spectrum of engineering fields. Various methods for this reliability-based design optimization problem have been proposed. Unfortunately, this problem is rarely solved in practice because, regardless of the method used, solving the problem is too expensive or the final solution is too inaccurate to ensure that the reliability constraint is actually satisfied. This is especially true for engineering applications involving expensive, implicit, and possibly nonlinear performance functions (such as large finite element models). The Efficient Global Reliability Analysis method was recently introduced to improve both the accuracy and efficiency of reliability analysis for this type of performance function. This paper explores how this new reliability analysis method can be used in a design optimization context to create a method of sufficient accuracy and efficiency to enable the use of reliability-based design optimization as a practical design tool.


DAKOTA : a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis. Version 5.0, user's manual

Adams, Brian M.; Dalbey, Keith D.; Eldred, Michael S.; Swiler, Laura P.; Bohnhoff, William J.; Eddy, John P.; Haskell, Karen H.; Hough, Patricia D.

The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.


DAKOTA : a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis. Version 5.0, user's reference manual

Adams, Brian M.; Dalbey, Keith D.; Eldred, Michael S.; Swiler, Laura P.; Bohnhoff, William J.; Eddy, John P.; Haskell, Karen H.; Hough, Patricia D.

The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.


DAKOTA : a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis. Version 5.0, developers manual

Adams, Brian M.; Dalbey, Keith D.; Eldred, Michael S.; Swiler, Laura P.; Bohnhoff, William J.; Eddy, John P.; Haskell, Karen H.; Hough, Patricia D.

The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.


Efficient algorithms for mixed aleatory-epistemic uncertainty quantification with application to radiation-hardened electronics. Part I, algorithms and benchmark results

Eldred, Michael S.; Swiler, Laura P.

This report documents the results of an FY09 ASC V&V Methods level 2 milestone demonstrating new algorithmic capabilities for mixed aleatory-epistemic uncertainty quantification. Through the combination of stochastic expansions for computing aleatory statistics and interval optimization for computing epistemic bounds, mixed uncertainty analysis studies are shown to be more accurate and efficient than previously achievable. Part I of the report describes the algorithms and presents benchmark performance results. Part II applies these new algorithms to UQ analysis of radiation effects in electronic devices and circuits for the QASPR program.


Recent advances in non-intrusive polynomial chaos and stochastic collocation methods for uncertainty analysis and design

Collection of Technical Papers - AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference

Eldred, Michael S.

Non-intrusive polynomial chaos expansion (PCE) and stochastic collocation (SC) methods are attractive techniques for uncertainty quantification (UQ) due to their strong mathematical basis and ability to produce functional representations of stochastic variability. PCE estimates coefficients for known orthogonal polynomial basis functions based on a set of response function evaluations, using sampling, linear regression, tensor-product quadrature, or Smolyak sparse grid approaches. SC, on the other hand, forms interpolation functions for known coefficients, and requires the use of structured collocation point sets derived from tensor product or sparse grids. When tailoring the basis functions or interpolation grids to match the forms of the input uncertainties, exponential convergence rates can be achieved with both techniques for a range of probabilistic analysis problems. In addition, analytic features of the expansions can be exploited for moment estimation and stochastic sensitivity analysis. In this paper, the latest ideas for tailoring these expansion methods to numerical integration approaches will be explored, in which expansion formulations are modified to best synchronize with tensor-product quadrature and Smolyak sparse grids using linear and nonlinear growth rules. The most promising stochastic expansion approaches are then carried forward for use in new approaches for mixed aleatory-epistemic UQ, employing second-order probability approaches, and design under uncertainty, employing bilevel, sequential, and multifidelity approaches. Copyright © 2009 by the American Institute of Aeronautics and Astronautics, Inc.
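
A minimal 1D sketch of the non-intrusive PCE projection step described above, using Gauss-Legendre quadrature for a uniform input (an editor's illustration; the response function and expansion order are invented, and the tensor/sparse-grid and mixed aleatory-epistemic extensions discussed in the paper are not shown):

```python
# Sketch: 1D non-intrusive PCE via spectral projection with Gauss-Legendre quadrature.
import numpy as np
from numpy.polynomial import legendre

def response(x):                      # toy response, input x ~ U(-1, 1)
    return np.exp(0.7 * x)

order = 6
pts, wts = legendre.leggauss(order + 1)        # quadrature points and weights on [-1, 1]

coeffs = np.zeros(order + 1)
for k in range(order + 1):
    Pk = legendre.legval(pts, np.eye(order + 1)[k])         # evaluate P_k at the points
    # c_k = <f, P_k> / <P_k, P_k> with uniform density 1/2 on [-1, 1]
    coeffs[k] = (2 * k + 1) / 2.0 * np.sum(wts * response(pts) * Pk)

mean = coeffs[0]                                             # E[f] = c_0
variance = np.sum(coeffs[1:] ** 2 / (2 * np.arange(1, order + 1) + 1))
print(f"PCE mean {mean:.5f}, variance {variance:.5f}")
```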


LDRD Final Report: Capabilities for Uncertainty in Predictive Science

Phipps, Eric T.; Eldred, Michael S.; Salinger, Andrew G.

Predictive simulation of systems comprised of numerous interconnected, tightly coupled components promises to help solve many problems of scientific and national interest. However, predictive simulation of such systems is extremely challenging due to the coupling of a diverse set of physical and biological length and time scales. This report investigates uncertainty quantification methods for such systems that attempt to exploit their structure to gain computational efficiency. The traditional layering of uncertainty quantification around nonlinear solution processes is inverted to allow for heterogeneous uncertainty quantification methods to be applied to each component in a coupled system. Moreover, this approach allows stochastic dimension reduction techniques to be applied at each coupling interface. The mathematical feasibility of these ideas is investigated in this report, and mathematical formulations for the resulting stochastically coupled nonlinear systems are developed.


Solution-verified reliability analysis and design of bistable MEMS using error estimation and adaptivity

Adams, Brian M.; Wittwer, Jonathan W.; Bichon, Barron J.; Carnes, Brian C.; Copps, Kevin D.; Eldred, Michael S.; Hopkins, Matthew M.; Neckels, David C.; Notz, Patrick N.; Subia, Samuel R.

This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.


DAKOTA, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis: version 4.0 user's manual

Swiler, Laura P.; Giunta, Anthony A.; Hart, William E.; Watson, Jean-Paul W.; Eddy, John P.; Griffin, Joshua G.; Hough, Patricia D.; Kolda, Tamara G.; Martinez-Canales, Monica L.; Williams, Pamela J.; Eldred, Michael S.; Brown, Shannon L.; Adams, Brian M.; Dunlavy, Daniel D.

The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.


The surfpack software library for surrogate modeling of sparse irregularly spaced multidimensional data

Collection of Technical Papers - 11th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference

Giunta, Anthony A.; Swiler, Laura P.; Brown, Shannon L.; Eldred, Michael S.; Richards, Mark D.; Cyr, Eric C.

Surfpack is a general-purpose software library of multidimensional function approximation methods for applications such as data visualization, data mining, sensitivity analysis, uncertainty quantification, and numerical optimization. Surfpack is primarily intended for use on sparse, irregularly-spaced, n-dimensional data sets where classical function approximation methods are not applicable. Surfpack is under development at Sandia National Laboratories, with a public release of Surfpack version 1.0 in August 2006. This paper provides an overview of Surfpack's function approximation methods along with some of its software design attributes. In addition, this paper provides some simple examples to illustrate the utility of Surfpack for data trend analysis, data visualization, and optimization. Copyright © 2006 by the American Institute of Aeronautics and Astronautics, Inc.


Perspectives on optimization under uncertainty: Algorithms and applications

Giunta, Anthony A.; Eldred, Michael S.; Swiler, Laura P.; Trucano, Timothy G.

This paper provides an overview of several approaches to formulating and solving optimization under uncertainty (OUU) engineering design problems. In addition, the topic of high-performance computing and OUU is addressed, with a discussion of the coarse- and fine-grained parallel computing opportunities in the various OUU problem formulations. The OUU approaches covered here are: sampling-based OUU, surrogate model-based OUU, analytic reliability-based OUU (also known as reliability-based design optimization), polynomial chaos-based OUU, and stochastic perturbation-based OUU.


DAKOTA, A Multilevel Parallel Object-Oriented Framework for Design Optimization, Parameter Estimation, Uncertainty Quantification, and Sensitivity Analysis Version 3.0 Developers Manual (title change from electronic posting)

Eldred, Michael S.; Giunta, Anthony A.; van Bloemen Waanders, Bart G.; Wojtkiewicz, Steven F.; Hart, William E.

The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, analytic reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.


DAKOTA, A Multilevel Parallel Object-Oriented Framework for Design Optimization, Parameter Estimation, Uncertainty Quantification, and Sensitivity Analysis Version 3.0

Eldred, Michael S.; Giunta, Anthony A.; van Bloemen Waanders, Bart G.; Wojtkiewicz, Steven F.; Hart, William E.

The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, analytic reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.


DAKOTA, A Multilevel Parallel Object-Oriented Framework for Design Optimization, Parameter Estimation, Uncertainty Quantification, and Sensitivity Analysis Version 3.0 Reference Manual

Eldred, Michael S.; Giunta, Anthony A.; van Bloemen Waanders, Bart G.; Wojtkiewicz, Steven F.; Hart, William E.

The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, analytic reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.
