Publications



Improving Multi-Model Trajectory Simulation Estimators using Model Selection and Tuning

AIAA Science and Technology Forum and Exposition, AIAA SciTech Forum 2022

Bomarito, Geoffrey F.; Geraci, Gianluca; Warner, James E.; Leser, Patrick E.; Leser, William P.; Eldred, Michael; Jakeman, John D.; Gorodetsky, Alex A.

Multi-model Monte Carlo methods have been shown to be an efficient and accurate alternative to standard Monte Carlo (MC) for the model-based propagation of uncertainty in entry, descent, and landing (EDL) applications. These multi-model MC methods fuse predictions from low-fidelity models with the high-fidelity EDL model of interest to produce unbiased statistics at a fraction of the computational cost. The accuracy and efficiency of multi-model MC methods depend not only on how strongly the low-fidelity models correlate with the high-fidelity model, but also on the correlations among the low-fidelity models and on their relative computational costs. Because of this layer of complexity, the question of how to optimally select the set of low-fidelity models has remained open. In this work, methods for optimal model construction and tuning are investigated as a means to increase the speed and precision of trajectory simulation for EDL. Specifically, the focus is on including low-fidelity model tuning within the sample allocation optimization that accompanies multi-model MC methods. Results indicate that low-fidelity model tuning can significantly improve the efficiency and precision of trajectory simulations and widen the advantage of multi-model MC methods over standard MC.
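
The tuning-within-allocation scheme studied in the paper is not reproduced here, but the baseline sample-allocation problem it extends can be sketched with the classic multilevel Monte Carlo rule, in which per-model sample counts are proportional to sqrt(V/C). The variances, costs, and budget below are made-up illustrative numbers.

```python
import numpy as np

def mlmc_allocation(variances, costs, budget):
    """Classic multilevel Monte Carlo sample allocation: minimize estimator
    variance sum(V_l / N_l) subject to the cost budget sum(N_l * C_l).
    The Lagrange-multiplier solution gives N_l proportional to sqrt(V_l / C_l)."""
    variances = np.asarray(variances, dtype=float)
    costs = np.asarray(costs, dtype=float)
    weights = np.sqrt(variances / costs)
    n = budget * weights / np.sum(np.sqrt(variances * costs))
    return np.maximum(1, np.floor(n)).astype(int)

# hypothetical three-model ensemble: decreasing variance, increasing cost
alloc = mlmc_allocation([1.0, 0.1, 0.01], [1.0, 10.0, 100.0], budget=1000.0)
```

Most samples go to the cheap, high-variance model; the allocation the paper optimizes additionally treats low-fidelity model parameters as tunable within this optimization.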

Reverse-mode differentiation in arbitrary tensor network format: with application to supervised learning

Journal of Machine Learning Research

Safta, Cosmin; Jakeman, John D.; Gorodetsky, Alex A.

This paper describes an efficient reverse-mode differentiation algorithm for contraction operations for arbitrary and unconventional tensor network topologies. The approach leverages the tensor contraction tree of Evenbly and Pfeifer (2014), which provides an instruction set for the contraction sequence of a network. We show that this tree can be efficiently leveraged for differentiation of a full tensor network contraction using a recursive scheme that exploits (1) the bilinear property of contraction and (2) the property that trees have a single path from root to leaves. While differentiation of tensor-tensor contraction is already possible in most automatic differentiation packages, we show that exploiting these two additional properties in the specific context of contraction sequences can improve efficiency. Following a description of the algorithm and computational complexity analysis, we investigate its utility for gradient-based supervised learning for low-rank function recovery and for fitting real-world unstructured datasets. We demonstrate improved performance over alternating least-squares optimization approaches and the capability to handle heterogeneous and arbitrary tensor network formats. When compared to alternating minimization algorithms, we find that the gradient-based approach requires a smaller oversampling ratio (number of samples relative to the number of model parameters) for recovery. This increased efficiency extends to fitting unstructured data of varying dimensionality and when employing a variety of tensor network formats. Here, we show improved learning using the hierarchical Tucker method over the tensor-train in high-dimensional settings on a number of benchmark problems.
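
The algorithm itself operates on general contraction trees, but the core identity it exploits — that contraction is linear in each operand, so the adjoint with respect to one tensor is simply a contraction of the remaining tensors with the output cotangent — can be illustrated on a tiny three-tensor network (not the paper's implementation):

```python
import numpy as np

def contract(A, B, C):
    # scalar contraction f = sum_ijk A_ij B_jk C_ki (a tiny "tensor network")
    return np.einsum('ij,jk,ki->', A, B, C)

def grad_B(A, B, C):
    # Reverse mode: because the contraction is (multi)linear in each operand,
    # df/dB is the contraction of the remaining operands with the output
    # cotangent (1.0 here, since f is a scalar loss).
    return np.einsum('ij,ki->jk', A, C)

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((4, 4)) for _ in range(3))
g = grad_B(A, B, C)

# central finite-difference check of one entry of the gradient
eps = 1e-6
E = np.zeros_like(B); E[1, 2] = eps
fd = (contract(A, B + E, C) - contract(A, B - E, C)) / (2 * eps)
```

In the paper this observation is applied recursively down a contraction tree, so each intermediate is differentiated without re-contracting the whole network.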

Assessing the predictive impact of factor fixing with an adaptive uncertainty-based approach

Environmental Modelling and Software

Wang, Qian; Guillaume, Joseph; Jakeman, John D.; Yang, Tao; Iwanaga, Takuya; Croke, Barry; Jakeman, Tony

Despite widespread use of factor fixing in environmental modeling, its effect on model predictions has received little attention and is commonly presumed to be negligible. We propose a proof-of-concept adaptive method for systematically investigating the impact of factor fixing. The method uses Global Sensitivity Analysis methods to identify groups of sensitive parameters, then quantifies which groups can be safely fixed at nominal values without exceeding a maximum acceptable error, demonstrated using the 21-dimensional Sobol’ G-function. Furthermore, three error measures are considered for quantities of interest, namely Relative Mean Absolute Error, Pearson Product-Moment Correlation and Relative Variance. Results demonstrate that factor fixing may unexpectedly cause large errors in model results even when preliminary analysis suggests otherwise, and that the choice of default value affects the number of factors that can safely be fixed. To improve the applicability and methodological development of factor fixing, a new research agenda encompassing five opportunities is discussed for further attention.
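
The screening step that precedes factor fixing can be sketched with total-order Sobol' indices on the G-function the paper uses as a benchmark; this sketch uses a 4-dimensional instance rather than the paper's 21-dimensional one, and Jansen's estimator as one common choice, not necessarily the paper's.

```python
import numpy as np

def g_function(X, a):
    # Sobol' G-function: factors with large a_i are nearly inert
    return np.prod((np.abs(4 * X - 2) + a) / (1 + a), axis=1)

def total_order_indices(f, d, n, rng):
    # Jansen's estimator: ST_i = E[(f(A) - f(A with column i from B))^2] / (2 Var)
    A = rng.random((n, d)); B = rng.random((n, d))
    fA = f(A)
    var = np.var(np.concatenate([fA, f(B)]))
    ST = np.empty(d)
    for i in range(d):
        ABi = A.copy(); ABi[:, i] = B[:, i]
        ST[i] = np.mean((fA - f(ABi)) ** 2) / (2 * var)
    return ST

a = np.array([0.0, 1.0, 9.0, 99.0])   # decreasing sensitivity by construction
rng = np.random.default_rng(1)
ST = total_order_indices(lambda X: g_function(X, a), d=4, n=20000, rng=rng)
```

Factors with negligible total-order index (here the fourth) are the candidates for fixing; the paper's point is that this screening alone does not bound the resulting prediction error, which must be checked adaptively.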

Risk-Adaptive Experimental Design for High-Consequence Systems: LDRD Final Report

Kouri, Drew P.; Jakeman, John D.; Huerta, Jose G.; Walsh, Timothy; Smith, Chandler; Uryasev, Stan

Constructing accurate statistical models of critical system responses typically requires an enormous amount of data from physical experiments or numerical simulations. Unfortunately, data generation is often expensive and time consuming. To streamline the data generation process, optimal experimental design determines the 'best' allocation of experiments with respect to a criterion that measures the ability to estimate some important aspect of an assumed statistical model. While optimal design has a vast literature, few researchers have developed design paradigms targeting tail statistics, such as quantiles. In this project, we tailored and extended traditional design paradigms to target distribution tails. Our approach included (i) the development of new optimality criteria to shape the distribution of prediction variances, (ii) the development of novel risk-adapted surrogate models that provably overestimate certain statistics including the probability of exceeding a threshold, and (iii) the asymptotic analysis of regression approaches that target tail statistics such as superquantile regression. To accompany our theoretical contributions, we released implementations of our methods for surrogate modeling and design of experiments in two complementary open source software packages, the ROL/OED Toolkit and PyApprox.

MFNets: data efficient all-at-once learning of multifidelity surrogates as directed networks of information sources

Computational Mechanics

Gorodetsky, Alex A.; Jakeman, John D.; Geraci, Gianluca

We present an approach for constructing a surrogate from ensembles of information sources of varying cost and accuracy. The multifidelity surrogate encodes connections between information sources as a directed acyclic graph, and is trained via gradient-based minimization of a nonlinear least squares objective. While the vast majority of state-of-the-art approaches assume hierarchical connections between information sources, our approach works with flexibly structured information sources that may not admit a strict hierarchy. The formulation has two advantages: (1) increased data efficiency due to parsimonious multifidelity networks that can be tailored to the application; and (2) no constraints on the training data, as we can combine noisy, non-nested evaluations of the information sources. Finally, numerical examples ranging from synthetic to physics-based computational mechanics simulations indicate that the error in our approach can be orders of magnitude smaller, particularly in the low-data regime, than that of single-fidelity and hierarchical multifidelity approaches.
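
The all-at-once idea can be sketched on a drastically simplified two-node network: a low-fidelity surrogate and a high-fidelity surrogate linked by a scalar scaling plus a discrepancy term, with all parameters fit jointly by Gauss-Newton from non-nested data. The polynomial basis, the data, and the scalar coupling are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

# two-node network: y_lo ~ a.phi(x), and y_hi ~ rho * (a.phi(x)) + b.phi(x);
# parameters theta = (a, rho, b) are fit at once by nonlinear least squares
phi = lambda x: np.stack([np.ones_like(x), x, x ** 2], axis=1)

x_lo = np.linspace(-1, 1, 20); y_lo = 1 + 2 * x_lo              # low-fi data
x_hi = np.linspace(-0.9, 0.9, 6); y_hi = 2 + 4 * x_hi + 0.5 * x_hi ** 2

P_lo, P_hi = phi(x_lo), phi(x_hi)
theta = np.ones(7)                      # [a (3), rho (1), b (3)]

for _ in range(100):                    # Gauss-Newton on the joint residual
    a, rho, b = theta[:3], theta[3], theta[4:]
    r = np.concatenate([P_lo @ a - y_lo,
                        rho * (P_hi @ a) + P_hi @ b - y_hi])
    J = np.block([[P_lo, np.zeros((20, 1)), np.zeros((20, 3))],
                  [rho * P_hi, (P_hi @ a)[:, None], P_hi]])
    theta = theta - np.linalg.lstsq(J, r, rcond=None)[0]

a, rho, b = theta[:3], theta[3], theta[4:]
x_new = np.linspace(-1, 1, 11)
hi_pred = rho * (phi(x_new) @ a) + phi(x_new) @ b
```

The low-fidelity data constrain the shared node, so the scarce high-fidelity data only need to identify the coupling and discrepancy, which is the data-efficiency argument the abstract makes.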

Surrogate Modeling for Efficiently, Accurately, and Conservatively Estimating Measures of Risk

Jakeman, John D.; Kouri, Drew P.; Huerta, Jose G.

We present a surrogate modeling framework for conservatively estimating measures of risk from limited realizations of an expensive physical experiment or computational simulation. We adopt a probabilistic description of risk that assigns probabilities to consequences associated with an event and use risk measures, which combine objective evidence with the subjective values of decision makers, to quantify anticipated outcomes. Given a set of samples, we construct a surrogate model that produces estimates of risk measures that are always greater than their empirical estimates obtained from the training data. These surrogate models not only limit over-confidence in reliability and safety assessments, but produce estimates of risk measures that converge much faster to the true value than purely sample-based estimates. We first detail the construction of conservative surrogate models that can be tailored to the specific risk preferences of the stakeholder and then present an approach, based upon stochastic orders, for constructing surrogate models that are conservative with respect to families of risk measures. The surrogate models introduce a bias that allows them to conservatively estimate the target risk measures. We provide theoretical results that show that this bias decays at the same rate as the L2 error in the surrogate model. Our numerical examples confirm that risk-aware surrogate models do indeed over-estimate the target risk measures while converging at the expected rate.
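
A minimal sketch of the conservatism requirement, using empirical CVaR (the superquantile) as the risk measure: the paper constructs tailored biased surrogates, whereas this toy example uses a crude constant shift, chosen so the surrogate dominates the training data pointwise and therefore over-estimates any monotone risk measure on it.

```python
import numpy as np

def cvar(samples, alpha):
    """Empirical CVaR_alpha (superquantile): mean of the worst (1 - alpha) tail."""
    s = np.sort(samples)
    k = int(np.ceil(alpha * len(s)))
    return s[k:].mean() if k < len(s) else s[-1]

# toy data: y = x^2 approximated by a deliberately crude linear surrogate
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 50)
y = x ** 2
coef = np.polyfit(x, y, 1)
surrogate = lambda t: np.polyval(coef, t)

# simple conservative correction (not the paper's construction): shift the
# surrogate up by its worst training under-prediction, so it dominates the
# data pointwise and hence over-estimates any monotone risk measure
shift = max(0.0, np.max(y - surrogate(x)))
conservative = lambda t: surrogate(t) + shift

risk_data = cvar(y, 0.9)
risk_surr = cvar(conservative(x), 0.9)
```

The paper replaces this blunt shift with risk-adapted constructions whose bias decays at the same rate as the surrogate's L2 error.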

Adaptive resource allocation for surrogate modeling of systems comprised of multiple disciplines with varying fidelity

Friedman, Sam; Jakeman, John D.; Eldred, Michael; Tamellini, Lorenzo; Gorodetsky, Alex A.; Allaire, Doug

We present an adaptive algorithm for constructing surrogate models for integrated systems composed of a set of coupled components. With this goal we introduce ‘coupling’ variables with a priori unknown distributions that allow approximations of each component to be built independently. Once built, the surrogates of the components are combined and used to predict system-level quantities of interest (QoI) at a fraction of the cost of interrogating the full system model. We use a greedy experimental design procedure, based upon a modification of Multi-Index Stochastic Collocation (MISC), to minimize the error of the combined surrogate. This is achieved by refining each component surrogate in accordance with its relative contribution to error in the approximation of the system-level QoI. Our adaptation of MISC is a multi-fidelity procedure that can leverage ensembles of models of varying cost and accuracy, for one or more components, to produce estimates of system-level QoI. Several numerical examples demonstrate the efficacy of the proposed approach on systems involving feed-forward and feedback coupling. For a fixed computational budget, the proposed algorithm is able to produce approximations that are orders of magnitude more accurate than approximations that treat the integrated system as a black-box.

Data-driven learning of nonautonomous systems

SIAM Journal on Scientific Computing

Qin, Tong; Chen, Zhen; Jakeman, John D.; Xiu, Dongbin

We present a numerical framework for recovering unknown nonautonomous dynamical systems with time-dependent inputs. To circumvent the difficulty presented by the nonautonomous nature of the system, our method transforms the solution state into piecewise integration of the system over a discrete set of time instances. The time-dependent inputs are then locally parameterized by using a proper model, for example, polynomial regression, in the pieces determined by the time instances. This transforms the original system into a piecewise parametric system that is locally time invariant. We then design a deep neural network structure to learn the local models. Once the network model is constructed, it can be iteratively used over time to conduct global system prediction. We provide theoretical analysis of our algorithm and present a number of numerical examples to demonstrate the effectiveness of the method.
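
The transformation step described above can be sketched numerically: on each time slice the input is replaced by a local polynomial model, making the slice dynamics depend only on a few local parameters. This sketch uses a linear ODE and a local linear fit of the input (the paper learns the resulting local flow map with a deep network, which is omitted here).

```python
import numpy as np

def rk4_step(f, x, t, dt):
    k1 = f(t, x); k2 = f(t + dt / 2, x + dt / 2 * k1)
    k3 = f(t + dt / 2, x + dt / 2 * k2); k4 = f(t + dt, x + dt * k3)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

u = np.sin                          # time-dependent input
f = lambda t, x: -x + u(t)          # nonautonomous system x' = -x + u(t)

# piecewise-local parameterization: on each slice, replace u(t) by a local
# linear fit, so the slice dynamics depend only on two local parameters
dt, x, t = 0.1, 1.0, 0.0
for _ in range(100):
    ts = np.linspace(t, t + dt, 3)
    c = np.polyfit(ts, u(ts), 1)               # local model of the input
    f_loc = lambda s, z, c=c: -z + np.polyval(c, s)
    x = rk4_step(f_loc, x, t, dt)
    t += dt

# reference solution with the exact input, on a much finer grid
x_ref, t_ref = 1.0, 0.0
for _ in range(1000):
    x_ref = rk4_step(f, x_ref, t_ref, 0.01); t_ref += 0.01
```

The piecewise solution tracks the reference closely, which is the property that lets a single locally-parameterized model be iterated for global prediction.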

The Future of Sensitivity Analysis: An essential discipline for systems modeling and policy support

Environmental Modelling and Software

Razavi, Saman; Jakeman, Anthony; Saltelli, Andrea; Iooss, Bertrand; Borgonovo, Emanuele; Plischke, Elmar; Lo Piano, Samuele; Iwanaga, Takuya; Becker, William; Tarantola, Stefano; Guillaume, Joseph H.A.; Jakeman, John D.; Gupta, Hoshin; Melillo, Nicola; Rabitti, Giovanni; Chabridon, Vincent; Duan, Qingyun; Sun, Xifu; Smith, Stefan; Sheikholeslami, Razi; Hosseini, Nasim; Asadzadeh, Masoud; Puy, Arnald; Kucherenko, Sergei; Maier, Holger R.

Sensitivity analysis (SA) is en route to becoming an integral part of mathematical modeling. The tremendous potential benefits of SA are, however, yet to be fully realized, both for advancing mechanistic and data-driven modeling of human and natural systems, and in support of decision making. In this perspective paper, a multidisciplinary group of researchers and practitioners revisit the current status of SA, and outline research challenges in regard to both theoretical frameworks and their applications to solve real-world problems. Six areas are discussed that warrant further attention, including (1) structuring and standardizing SA as a discipline, (2) realizing the untapped potential of SA for systems modeling, (3) addressing the computational burden of SA, (4) progressing SA in the context of machine learning, (5) clarifying the relationship and role of SA to uncertainty quantification, and (6) evolving the use of SA in support of decision making. An outlook for the future of SA is provided that underlines how SA must underpin a wide variety of activities to better serve science and society.

Cholesky-based experimental design for Gaussian process and kernel-based emulation and calibration

Communications in Computational Physics

Harbrecht, Helmut; Jakeman, John D.; Zaspel, Peter

Gaussian processes and other kernel-based methods are used extensively to construct approximations of multivariate data sets. The accuracy of these approximations is dependent on the data used. This paper presents a computationally efficient algorithm to greedily select training samples that minimize the weighted Lp error of kernel-based approximations for a given number of data points. The method successively generates nested samples, with the goal of minimizing the error in high probability regions of densities specified by users. The algorithm presented is extremely simple and can be implemented using existing pivoted Cholesky factorization methods. Training samples are generated in batches which allows training data to be evaluated (labeled) in parallel. For smooth kernels, the algorithm performs comparably with the greedy integrated variance design but has significantly lower complexity. Numerical experiments demonstrate the efficacy of the approach for bounded, unbounded, multi-modal and non-tensor product densities. We also show how to use the proposed algorithm to efficiently generate surrogates for inferring unknown model parameters from data using Bayesian inference.
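
The greedy selection can be sketched with a standard partial pivoted Cholesky factorization: at each step the candidate with the largest remaining conditional variance (the diagonal of the Schur complement) is picked, then the diagonal is downdated. The density weighting below is one simple way to bias selection toward high-probability regions, and is an assumption of this sketch rather than the paper's exact weighting.

```python
import numpy as np

def pivoted_cholesky_design(K, m):
    """Greedily select m points via partial pivoted Cholesky of the kernel
    matrix K: pick the largest remaining conditional variance, then downdate."""
    n = K.shape[0]
    d = np.diag(K).astype(float).copy()
    L = np.zeros((n, m))
    picks = []
    for j in range(m):
        i = int(np.argmax(d))
        picks.append(i)
        L[:, j] = (K[:, i] - L @ L[i, :]) / np.sqrt(d[i])
        d -= L[:, j] ** 2
        d[i] = -np.inf              # never pick the same point twice
    return picks

# Gaussian kernel on a 1-D grid, weighted by a standard normal density so
# that selection concentrates where the density is high
x = np.linspace(-3, 3, 61)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)
w = np.exp(-0.5 * x ** 2)            # unnormalized density weights
Kw = w[:, None] * K * w[None, :]
picks = pivoted_cholesky_design(Kw, 5)
```

The first pick lands at the density mode (x = 0), and subsequent picks spread out as the conditional variance near earlier picks is suppressed.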

Deep learning of parameterized equations with applications to uncertainty quantification

International Journal for Uncertainty Quantification

Qin, Tong; Chen, Zhen; Jakeman, John D.; Xiu, Dongbin

We propose a learning algorithm for discovering unknown parameterized dynamical systems by using observational data of the state variables. Our method builds upon and extends recent work on discovering unknown dynamical systems, in particular work using deep neural networks (DNNs). We propose a DNN structure, largely based upon the residual network (ResNet), to not only learn the unknown form of the governing equation but also to take into account the random effect embedded in the system, which is generated by the random parameters. Once the DNN model is constructed, it can produce system predictions over longer time horizons and for arbitrary parameter values, allowing us to conduct uncertainty analysis by evaluating solution statistics over the parameter space.

Non-destructive simulation of node defects in additively manufactured lattice structures

Additive Manufacturing

Lozanovski, Bill; Downing, David; Tino, Rance; Du Plessis, Anton; Tran, Phuong; Jakeman, John D.; Shidid, Darpan; Emmelmann, Claus; Qian, Ma; Choong, Peter; Brandt, Milan; Leary, Martin

Additive Manufacturing (AM), commonly referred to as 3D printing, offers the ability not only to fabricate geometrically complex lattice structures but also parts in which lattice topologies in-fill volumes bounded by complex surface geometries. However, current AM processes produce defects on the strut and node elements which make up the lattice structure. This creates an inherent difference between the as-designed and as-fabricated geometries, which negatively affects predictions (via numerical simulation) of the lattice's mechanical performance. Although experimental and numerical analysis of an AM lattice's bulk structure, unit cell and struts have been performed, there exists almost no research data on the mechanical response of the individual as-manufactured lattice node elements. This research proposes a methodology that, for the first time, allows non-destructive quantification of the mechanical response of node elements within an as-manufactured lattice structure. A custom-developed tool is used to extract and classify each individual node geometry from micro-computed tomography scans of an AM fabricated lattice. Voxel-based finite element meshes are generated for numerical simulation and the mechanical response distribution is compared to that of the idealised computer-aided design model. The method demonstrates compatibility with Uncertainty Quantification methods that provide opportunities for efficient prediction of a population of nodal responses from sampled data. Overall, the non-destructive and automated nature of the node extraction and response evaluation is promising for its application in qualification and certification of additively manufactured lattice structures.

MFNets: Multifidelity data-driven networks for Bayesian learning and prediction

International Journal for Uncertainty Quantification

Gorodetsky, Alex; Jakeman, John D.; Geraci, Gianluca; Eldred, Michael

This paper presents a multifidelity uncertainty quantification framework called MFNets. We seek to address three existing challenges that arise when experimental and simulation data from different sources are used to enhance statistical estimation and prediction with quantified uncertainty. Specifically, we demonstrate that MFNets can (1) fuse heterogeneous data sources arising from simulations with different parameterizations, e.g., simulation models with different uncertain parameters or data sets collected under different environmental conditions; (2) encode known relationships among data sources to reduce data requirements; and (3) improve the robustness of existing multi-fidelity approaches to corrupted data. MFNets construct a network of latent variables (LVs) to facilitate the fusion of data from an ensemble of sources of varying credibility and cost. These LVs are posited as explanatory variables that provide the source of correlation in the observed data. Furthermore, MFNets provide a way to encode prior physical knowledge to enable efficient estimation of statistics and/or construction of surrogates via conditional independence relations on the LVs. We highlight the utility of our framework with a number of theoretical results which assess the quality of the posterior mean as a frequentist estimator and compare it to standard sampling approaches that use single fidelity, multilevel, and control variate Monte Carlo estimators. We also use the proposed framework to derive the Monte Carlo-based control variate estimator entirely from the use of Bayes' rule and linear-Gaussian models, to our knowledge the first such derivation. Finally, we demonstrate the ability to work with different uncertain parameters across different models.

Modeling Water Quality in Watersheds: From Here to the Next Generation

Water Resources Research

Fu, B.; Horsburgh, J.S.; Jakeman, A.J.; Gualtieri, C.; Arnold, T.; Marshall, L.; Green, T.R.; Quinn, N.W.T.; Volk, M.; Hunt, R.J.; Vezzaro, L.; Croke, B.F.W.; Jakeman, John D.; Snow, V.; Rashleigh, B.

In this synthesis, we assess present research and anticipate future development needs in modeling water quality in watersheds. We first discuss areas of potential improvement in the representation of freshwater systems pertaining to water quality, including representation of environmental interfaces, in-stream water quality and process interactions, soil health and land management, and (peri-)urban areas. In addition, we provide insights into the contemporary challenges in the practices of watershed water quality modeling, including quality control of monitoring data, model parameterization and calibration, uncertainty management, scale mismatches, and provisioning of modeling tools. Finally, we make three recommendations to provide a path forward for improving watershed water quality modeling science, infrastructure, and practices. These include building stronger collaborations between experimentalists and modelers, bridging gaps between modelers and stakeholders, and cultivating and applying procedural knowledge to better govern and support water quality modeling processes within organizations.

A Survey of Constrained Gaussian Process: Approaches and Implementation Challenges

Journal of Machine Learning for Modeling and Computing

Swiler, Laura P.; Gulian, Mamikon; Frankel, A.; Safta, Cosmin; Jakeman, John D.

Gaussian process regression is a popular Bayesian framework for surrogate modeling of expensive data sources. As part of a larger effort in scientific machine learning, many recent works have incorporated physical constraints or other a priori information within Gaussian process regression to supplement limited data and regularize the behavior of the model. We provide an overview and survey of several classes of Gaussian process constraints, including positivity or bound constraints, monotonicity and convexity constraints, differential equation constraints provided by linear PDEs, and boundary condition constraints. We compare the strategies behind each approach as well as the differences in implementation, concluding with a discussion of the computational challenges introduced by constraints.

Incorporating physical constraints into Gaussian process surrogate models (LDRD Project Summary)

Swiler, Laura P.; Gulian, Mamikon; Frankel, A.; Jakeman, John D.; Safta, Cosmin

This report summarizes work done under the Laboratory Directed Research and Development (LDRD) project titled "Incorporating physical constraints into Gaussian process surrogate models." In this project, we explored a variety of strategies for constraint implementations. We considered bound constraints, monotonicity and related convexity constraints, Gaussian processes which are constrained to satisfy linear operator constraints which represent physical laws expressed as partial differential equations, and intrinsic boundary condition constraints. We wrote three papers and are currently finishing two others. We developed initial software implementations for some approaches.

Optimal experimental design for prediction based on push-forward probability measures

Journal of Computational Physics

Wildey, Timothy; Butler, T.; Jakeman, John D.

Incorporating experimental data is essential for increasing the credibility of simulation-aided decision making and design. This paper presents a method which uses a computational model to guide the optimal acquisition of experimental data to produce data-informed predictions of quantities of interest (QoI). Many strategies for optimal experimental design (OED) select data that maximize some utility that measures the reduction in uncertainty of uncertain model parameters, for example the expected information gain between prior and posterior distributions of these parameters. In this paper, we seek to maximize the expected information gained from the push-forward of an initial (prior) density to the push-forward of the updated (posterior) density through the parameter-to-prediction map. The formulation presented is based upon the solution of a specific class of stochastic inverse problems which seeks a probability density that is consistent with the model and the data in the sense that the push-forward of this density through the parameter-to-observable map matches a target density on the observable data. While this stochastic inverse problem forms the mathematical basis for our approach, we develop a one-step algorithm, focused on push-forward probability measures, that leverages inference-for-prediction to bypass constructing the solution to the stochastic inverse problem. A number of numerical results are presented to demonstrate the utility of this optimal experimental design for prediction and facilitate comparison of our approach with traditional OED.

Learning Hidden Structure in Multi-Fidelity Information Sources for Efficient Uncertainty Quantification (LDRD 218317)

Jakeman, John D.; Eldred, Michael; Geraci, Gianluca; Smith, Thomas M.; Gorodetsky, Alex A.

This report summarizes the work done under the Laboratory Directed Research and Development (LDRD) project entitled "Learning Hidden Structure in Multi-Fidelity Information Sources for Efficient Uncertainty Quantification". In this project we investigated multi-fidelity strategies for fusing data from information sources of varying cost and accuracy. Most existing strategies exploit hierarchical relationships between models, for example those that occur when different models are generated by refining a numerical discretization parameter. In this work we focused on encoding the relationships between information sources using directed acyclic graphs. The multi-fidelity networks can have general structure and represent a significantly greater variety of modeling relationships than the recursive networks used in the current literature. Numerical results show that a non-hierarchical multi-fidelity Monte Carlo strategy can reduce the cost of estimating uncertainty in predictions of a model of plasma expanding in a vacuum by almost two orders of magnitude.

Arctic Tipping Points Triggering Global Change (LDRD Final Report)

Peterson, Kara J.; Powell, Amy J.; Tezaur, Irina K.; Roesler, Erika L.; Nichol, Jeffrey; Peterson, Matthew G.; Davis, Warren L.; Jakeman, John D.; Stracuzzi, David J.; Bull, Diana L.

The Arctic is warming, and feedbacks in the coupled Earth system may be driving the Arctic toward tipping events that could have critical downstream impacts for the rest of the globe. In this project we have focused on analyzing sea ice variability and loss in the coupled Earth system. Summer sea ice loss is happening rapidly, and although the loss may be smooth and reversible, it has significant consequences for other Arctic systems as well as geopolitical and economic implications. Accurate seasonal predictions of sea ice minimum extent and long-term estimates of timing for a seasonally ice-free Arctic depend on a better understanding of the factors influencing sea ice dynamics and variation in this strongly coupled system. Under this project we have investigated the most influential factors in accurate predictions of September Arctic sea ice extent using machine learning models trained separately on observational data and on simulation data from five E3SM historical ensembles. Monthly averaged data from June, July, and August for a selection of ice, ocean, and atmosphere variables were used to train a random forest regression model. Gini importance measures were computed for each input feature with the testing data. We found that sea ice volume is most important earlier in the season (June), while sea ice extent becomes a more important predictor closer to September. Results from this study provide insight into how feature importance changes with forecast length and illustrate differences between observational data and simulated Earth system data. We have additionally performed a global sensitivity analysis (GSA) using a fully coupled ultra-low-resolution configuration of E3SM. To our knowledge, this is the first global sensitivity analysis involving the fully coupled E3SM Earth system model. We have found that parameter variations have a significant impact on the Arctic climate state and that atmospheric parameters related to cloud parameterizations are the most significant. We also find significant interactions between parameters from different components of E3SM. The results of this study provide invaluable insight into the relative importance of various parameters from the sea ice, atmosphere and ocean components of the E3SM (including cross-component parameter interactions) on various Arctic-focused quantities of interest (QOIs).

A generalized approximate control variate framework for multifidelity uncertainty quantification

Journal of Computational Physics

Gorodetsky, Alex A.; Geraci, Gianluca; Eldred, Michael; Jakeman, John D.

We describe and analyze a variance reduction approach for Monte Carlo (MC) sampling that accelerates the estimation of statistics of computationally expensive simulation models using an ensemble of models with lower cost. These lower-cost models, which are typically lower fidelity with unknown statistics, are used to reduce the variance in statistical estimators relative to an MC estimator with equivalent cost. We derive the conditions under which our proposed approximate control variate framework recovers existing multifidelity variance reduction schemes as special cases. We demonstrate that existing recursive/nested strategies are suboptimal because they use the additional low-fidelity models only to efficiently estimate the unknown mean of the first low-fidelity model. As a result, they cannot achieve variance reduction beyond that of a control variate estimator that uses a single low-fidelity model with known mean. However, there often exists about an order-of-magnitude gap between the maximum achievable variance reduction using all low-fidelity models and that achieved by a single low-fidelity model with known mean. We show that our proposed approach can exploit this gap to achieve greater variance reduction by using non-recursive sampling schemes. The proposed strategy reduces the total cost of accurately estimating statistics, especially in cases where only low-fidelity simulation models are accessible for additional evaluations. Several analytic examples and an example with a hyperbolic PDE describing elastic wave propagation in heterogeneous media are used to illustrate the main features of the methodology.
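
The baseline the paper generalizes, a single control variate with known low-fidelity mean, can be sketched directly; the exponential "high-fidelity" model and linear "low-fidelity" model below are toy stand-ins chosen only because their correlation and means are known analytically.

```python
import numpy as np

def control_variate_mean(f_samps, g_samps, g_mean):
    """Classic control variate estimator with a known low-fidelity mean:
    E[f] ~ mean(f) + alpha * (g_mean - mean(g)), with alpha = Cov(f,g)/Var(g)
    chosen to minimize the estimator variance."""
    cov = np.cov(f_samps, g_samps)
    alpha = cov[0, 1] / cov[1, 1]
    return f_samps.mean() + alpha * (g_mean - g_samps.mean())

rng = np.random.default_rng(0)
reps, n = 500, 100
mc, cv = np.empty(reps), np.empty(reps)
for r in range(reps):
    x = rng.standard_normal(n)
    f, g = np.exp(x), x             # correlated high- and low-fidelity outputs
    mc[r] = f.mean()                # plain Monte Carlo estimate of E[exp(x)]
    cv[r] = control_variate_mean(f, g, g_mean=0.0)
```

The repeated estimates show the variance reduction factor of roughly 1 - rho^2; the approximate control variate framework addresses the realistic case where g_mean is itself unknown and must be estimated from extra low-fidelity samples.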

More Details

Dakota, A Multilevel Parallel Object-Oriented Framework for Design Optimization, Parameter Estimation, Uncertainty Quantification, and Sensitivity Analysis: Version 6.12 User's Manual

Adams, Brian M.; Bohnhoff, William J.; Dalbey, Keith; Ebeida, Mohamed; Eddy, John P.; Eldred, Michael; Hooper, Russell; Hough, Patricia D.; Hu, Kenneth; Jakeman, John D.; Khalil, Mohammad; Maupin, Kathryn A.; Monschke, Jason A.; Ridgway, Elliott M.; Rushdi, Ahmad; Seidl, D.T.; Stephens, John A.; Swiler, Laura P.; Winokur, Justin

The Dakota toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
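The "flexible and extensible interface between simulation codes and iterative analysis methods" takes the form of a keyword-structured input deck. The fragment below is a hypothetical minimal sampling study (the driver script name and all parameter values are ours, purely for illustration); consult the manual for the authoritative keyword specification:

```
# Hypothetical minimal Dakota input deck: LHS sampling UQ study
# driving a user-supplied simulation script.
environment
  tabular_data
    tabular_data_file = 'samples.dat'

method
  sampling
    sample_type lhs
    samples = 200
    seed = 52983

variables
  uniform_uncertain = 2
    descriptors   'x1' 'x2'
    lower_bounds  0.0  0.0
    upper_bounds  1.0  1.0

interface
  fork
    analysis_drivers = 'simulation_driver.sh'

responses
  response_functions = 1
  descriptors = 'qoi'
  no_gradients
  no_hessians
```

Swapping the `method` block (e.g., for an optimization or reliability method) while leaving the `interface` untouched is what lets the same simulation code be reused across Dakota's analysis capabilities.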

More Details

Dakota, A Multilevel Parallel Object-Oriented Framework for Design Optimization, Parameter Estimation, Uncertainty Quantification, and Sensitivity Analysis: Version 6.12 Theory Manual

Dalbey, Keith; Eldred, Michael; Geraci, Gianluca; Jakeman, John D.; Maupin, Kathryn A.; Monschke, Jason A.; Seidl, D.T.; Swiler, Laura P.; Foulk, James W.; Menhorn, Friedrich; Zeng, Xiaoshu

The Dakota toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.

More Details

Weighted greedy-optimal design of computer experiments for kernel-based and Gaussian process model emulation and calibration

Harbrecht, Helmut; Jakeman, John D.; Zaspel, Peter

This article is concerned with the approximation of high-dimensional functions by kernel-based methods. Motivated by uncertainty quantification, which often necessitates the construction of approximations that are accurate with respect to a probability density function of random variables, we aim at minimizing the approximation error with respect to a weighted $L^p$-norm. We present a greedy procedure for designing computer experiments based upon a weighted modification of the pivoted Cholesky factorization. The method successively generates nested samples with the goal of minimizing error in regions of high probability. Numerical experiments validate that this new importance sampling strategy is superior to other sampling approaches, especially when used with non-product probability density functions. We also show how to use the proposed algorithm to efficiently generate surrogates for inferring unknown model parameters from data.
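A weighted pivoted Cholesky design can be sketched compactly: greedily pick the candidate whose weighted residual kernel variance (the diagonal of the Schur complement) is largest, so that samples concentrate in regions of high probability. The kernel, lengthscale, and weight function below are illustrative choices, not the paper's:

```python
import numpy as np

# Weighted pivoted Cholesky: at each step select the candidate point
# maximizing weight * residual variance, then update the residual
# diagonal via the standard Cholesky column elimination.

def weighted_pivoted_cholesky(X, weights, kernel, m):
    n = X.shape[0]
    d = np.array([kernel(x, x) for x in X])   # residual diagonal of K
    L = np.zeros((n, m))
    pivots = []
    for j in range(m):
        i = int(np.argmax(weights * d))       # weighted greedy pivot
        pivots.append(i)
        col = np.array([kernel(x, X[i]) for x in X]) - L[:, :j] @ L[i, :j]
        L[:, j] = col / np.sqrt(d[i])
        d = np.clip(d - L[:, j] ** 2, 0.0, None)  # Schur-complement update
    return pivots, L

# Candidates on [-3, 3]; squared-exponential kernel; standard-normal weight
X = np.linspace(-3, 3, 201).reshape(-1, 1)
w = np.exp(-0.5 * X[:, 0] ** 2)
k = lambda a, b: np.exp(-np.sum((a - b) ** 2) / 0.5)
pivots, L = weighted_pivoted_cholesky(X, w, k, 10)
print(X[pivots, 0])  # design points cluster where the density is large
```

Because the pivot rule multiplies the residual variance by the density, the first point lands at the mode and later points trade off coverage against probability mass, which is the nesting behavior the abstract describes.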

More Details

Adaptive multi-index collocation for uncertainty quantification and sensitivity analysis

Jakeman, John D.; Eldred, Michael S.; Geraci, Gianluca; Gorodetsky, Alex A.

In this paper, we present an adaptive algorithm to construct response surface approximations of high-fidelity models using a hierarchy of lower fidelity models. Our algorithm is based on multi-index stochastic collocation and automatically balances physical discretization error and response surface error to construct an approximation of model outputs. This surrogate can be used for uncertainty quantification (UQ) and sensitivity analysis (SA) at a fraction of the cost of a purely high-fidelity approach. We demonstrate the effectiveness of our algorithm on a canonical test problem from the UQ literature and a complex multi-physics model that simulates the performance of an integrated nozzle for an unmanned aerospace vehicle. We find that when the input-output response is sufficiently smooth, our algorithm produces approximations that can be up to several orders of magnitude more accurate than single-fidelity approximations for a fixed computational budget.
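The multifidelity principle behind this kind of collocation can be reduced to two fidelities in 1D: fit an accurate surrogate to the cheap model with many samples, fit the smooth low-order discrepancy with a handful of expensive samples, and sum the two. The models and polynomial degrees below are illustrative stand-ins (the paper's algorithm chooses indices adaptively):

```python
import numpy as np

# Two-fidelity surrogate sketch: surrogate(f_hi) ≈ surrogate(f_lo, many
# cheap samples) + surrogate(f_hi - f_lo, few expensive samples).

f_hi = lambda x: np.sin(6 * x)                      # "expensive" model
f_lo = lambda x: np.sin(6 * x) - 0.3 * x**2 + 0.1   # cheap, smoothly biased

# Many cheap samples -> accurate low-fidelity surrogate
x_lo = np.linspace(0, 1, 100)
p_lo = np.polynomial.Polynomial.fit(x_lo, f_lo(x_lo), 12)

# Few expensive samples -> low-order surrogate of the smooth discrepancy
x_hi = np.linspace(0, 1, 5)
p_d = np.polynomial.Polynomial.fit(x_hi, f_hi(x_hi) - f_lo(x_hi), 2)

# Single-fidelity baseline built from the same high-fidelity budget
p_single = np.polynomial.Polynomial.fit(x_hi, f_hi(x_hi), 4)

xt = np.linspace(0, 1, 1000)
err_multi = np.max(np.abs(f_hi(xt) - (p_lo(xt) + p_d(xt))))
err_single = np.max(np.abs(f_hi(xt) - p_single(xt)))
print(err_multi, err_single)  # multifidelity error is far smaller
```

With the same five high-fidelity evaluations, the combined surrogate inherits the accuracy of the densely sampled low-fidelity fit because only the smooth, low-order discrepancy must be learned from expensive data, which is the smoothness condition the abstract highlights.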

More Details