Publications


Multi-fidelity information fusion and resource allocation

Jakeman, John D.; Eldred, Michael; Geraci, Gianluca; Seidl, D.T.; Smith, Thomas M.; Gorodetsky, Alex A.; Pham, Trung; Narayan, Akil; Zeng, Xiaoshu; Ghanem, Roger

This project created and demonstrated a framework for the efficient and accurate prediction of complex systems with only a limited amount of highly trusted data. These next-generation computational multi-fidelity tools fuse multiple information sources of varying cost and accuracy to reduce the computational and experimental resources needed for designing and assessing complex multi-physics/scale/component systems. These tools have already been used to substantially improve the computational efficiency of simulation-aided modeling activities, from assessing thermal battery performance to predicting material deformation. This report summarizes the work carried out during a two-year LDRD project. Specifically, we present our technical accomplishments; project outputs such as publications, presentations, and professional leadership activities; and the project's legacy.
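As an illustration of the fusion idea at the heart of this framework, below is a minimal sketch of a two-fidelity control-variate Monte Carlo estimator in Python; the model functions, sample sizes, and input distribution are illustrative assumptions, not the project's actual models or methods.

    import numpy as np

    # Illustrative stand-ins for a trusted high-fidelity model and a cheap,
    # correlated low-fidelity approximation (assumptions for this sketch).
    def f_hi(x):
        return np.sin(np.pi * x) + 0.1 * x**2

    def f_lo(x):
        return np.sin(np.pi * x)

    rng = np.random.default_rng(0)
    n_hi, n_lo = 50, 5000                # few expensive runs, many cheap ones

    x_hi = rng.uniform(-1, 1, n_hi)      # shared samples evaluate both models
    y_hi, y_lo = f_hi(x_hi), f_lo(x_hi)

    # Control-variate weight from the sample covariance on the shared points.
    alpha = np.cov(y_hi, y_lo)[0, 1] / np.var(y_lo, ddof=1)

    x_lo = rng.uniform(-1, 1, n_lo)      # extra cheap samples refine the LF mean
    mu_lo = f_lo(x_lo).mean()

    # Fused estimate: high-fidelity mean corrected by the better-resolved
    # low-fidelity mean.
    mu_cv = y_hi.mean() + alpha * (mu_lo - y_lo.mean())
    print(f"control-variate estimate of E[f_hi]: {mu_cv:.4f}")

The cheap model absorbs most of the sampling variance, so far fewer trusted high-fidelity evaluations are needed to reach a given accuracy.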


PyApprox: Enabling efficient model analysis

Jakeman, John D.

PyApprox is a Python-based one-stop shop for probabilistic analysis of scientific numerical models. Easy-to-use and extendable tools are provided for constructing surrogates, sensitivity analysis, Bayesian inference, experimental design, and forward uncertainty quantification. The algorithms implemented represent the most popular methods for model analysis developed over the past two decades, including recent advances in multi-fidelity approaches that use multiple model discretizations and/or simplified physics to significantly reduce the computational cost of various types of analyses. Simple interfaces are provided for the most commonly used algorithms to limit a user's need to tune the various hyper-parameters of each algorithm. However, more advanced workflows that require customization of hyper-parameters are also supported. An extensive set of benchmarks from the literature is also provided to facilitate easy comparison of different algorithms for a wide range of model analyses. This paper introduces PyApprox and its various features, and presents results demonstrating its utility on a benchmark problem modeling the advection of a tracer in groundwater.
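To make the described workflow concrete, here is a minimal sketch, written in plain NumPy rather than PyApprox's own API, of the surrogate-then-forward-UQ pattern the library automates; the toy model and degree-2 polynomial basis are assumptions for illustration, not the paper's groundwater benchmark.

    import numpy as np

    # Toy stand-in for an expensive simulator with two uncertain inputs.
    def model(x):
        return np.exp(-x[:, 0]) * np.cos(2 * x[:, 1])

    rng = np.random.default_rng(1)
    x_train = rng.uniform(-1, 1, (100, 2))   # small, "expensive" training set
    y_train = model(x_train)

    # Total-degree-2 polynomial surrogate fit by least squares, the simplest
    # analogue of the surrogate construction a tool like PyApprox automates.
    def basis(x):
        return np.column_stack([np.ones(len(x)), x[:, 0], x[:, 1],
                                x[:, 0]**2, x[:, 0] * x[:, 1], x[:, 1]**2])

    coef, *_ = np.linalg.lstsq(basis(x_train), y_train, rcond=None)

    # Forward UQ: push a large input sample through the cheap surrogate.
    x_mc = rng.uniform(-1, 1, (200_000, 2))
    y_mc = basis(x_mc) @ coef
    print(f"mean={y_mc.mean():.4f}  var={y_mc.var():.4f}  "
          f"p95={np.quantile(y_mc, 0.95):.4f}")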


Global Sensitivity Analysis Using the Ultra-Low Resolution Energy Exascale Earth System Model

Journal of Advances in Modeling Earth Systems

Tezaur, Irina K.; Peterson, Kara J.; Powell, Amy J.; Jakeman, John D.; Roesler, Erika L.

For decades, Arctic temperatures have increased twice as fast as average global temperatures. As a first step toward quantifying parametric uncertainty in Arctic climate, we performed a variance-based global sensitivity analysis (GSA) using a fully coupled, ultra-low-resolution (ULR) configuration of version 1 of the U.S. Department of Energy's Energy Exascale Earth System Model (E3SMv1). Specifically, we quantified the sensitivity of six quantities of interest (QOIs), which characterize changes in Arctic climate over a 75-year period, to uncertainties in nine model parameters spanning the sea ice, atmosphere, and ocean components of E3SMv1. Sensitivity indices for each QOI were computed with a Gaussian process emulator using 139 random realizations of the random parameters and fixed preindustrial forcing. Uncertainties in the atmospheric parameters in the Cloud Layers Unified by Binormals (CLUBB) scheme were found to have the most impact on sea ice status and the broader Arctic climate. Our results demonstrate the importance of conducting sensitivity analyses with fully coupled climate models. The ULR configuration makes such studies computationally feasible today due to its low computational cost. When advances in computational power and modeling algorithms enable the tractable use of higher-resolution models, our results will provide a baseline that can quantify the impact of model resolution on the accuracy of sensitivity indices. Moreover, the confidence intervals provided by our study, which we used to quantify the impact of the number of model evaluations on the accuracy of sensitivity estimates, have the potential to inform the computational resources needed for future sensitivity studies.
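Below is a minimal sketch of the emulator-based GSA recipe described above, using scikit-learn's Gaussian process as a stand-in emulator; the toy QOI, two parameters, and uniform ranges are illustrative assumptions (the study itself used nine E3SMv1 parameters and 139 fully coupled simulations).

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    # Toy stand-in for a climate QOI as a function of two uncertain parameters.
    def qoi(theta):
        return np.exp(-theta[:, 0]) + 0.5 * np.sin(3 * theta[:, 1])

    rng = np.random.default_rng(2)
    theta_train = rng.uniform(0, 1, (139, 2))     # mirrors the 139 realizations
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3))
    gp.fit(theta_train, qoi(theta_train))

    # Saltelli-style first-order Sobol' indices, evaluated on the cheap
    # emulator instead of the expensive model.
    n = 20_000
    a, b = rng.uniform(0, 1, (n, 2)), rng.uniform(0, 1, (n, 2))
    f_a, f_b = gp.predict(a), gp.predict(b)
    var = np.var(np.concatenate([f_a, f_b]))
    for i in range(2):
        ab = a.copy()
        ab[:, i] = b[:, i]                        # resample parameter i only
        s_i = np.mean(f_b * (gp.predict(ab) - f_a)) / var
        print(f"first-order index, parameter {i}: {s_i:.3f}")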


Adaptive experimental design for multi-fidelity surrogate modeling of multi-disciplinary systems

International Journal for Numerical Methods in Engineering

Jakeman, John D.; Friedman, Sam; Eldred, Michael; Tamellini, Lorenzo; Gorodetsky, Alex A.; Allaire, Doug

We present an adaptive algorithm for constructing surrogate models of multi-disciplinary systems composed of a set of coupled components. To this end, we introduce "coupling" variables with a priori unknown distributions that allow surrogates of each component to be built independently. Once built, the component surrogates are combined to form an integrated surrogate that can be used to predict system-level quantities of interest at a fraction of the cost of the original model. The error in the integrated surrogate is greedily minimized using an experimental design procedure that allocates the amount of training data used to construct each component surrogate based on the contribution of that surrogate to the error of the integrated surrogate. The multi-fidelity procedure presented is a generalization of multi-index stochastic collocation that can leverage ensembles of models of varying cost and accuracy, for one or more components, to reduce the computational cost of constructing the integrated surrogate. Extensive numerical results demonstrate that, for a fixed computational budget, our algorithm produces surrogates that are orders of magnitude more accurate than methods that treat the integrated system as a black box.
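A minimal sketch of the greedy allocation idea, assuming a two-component feed-forward system and using the true models to score error contributions (the paper's algorithm estimates these contributions without such extra truth evaluations):

    import numpy as np

    # Stand-in coupled components: x -> y1 (coupling variable) -> system QOI.
    f1 = lambda x: np.sin(np.pi * x)
    f2 = lambda y1: np.exp(-y1**2)

    rng = np.random.default_rng(3)
    x_val = rng.uniform(-1, 1, 2000)     # validation inputs for error estimates
    cost = {"f1": 1.0, "f2": 4.0}        # assumed per-sample training costs
    train = {"f1": 4, "f2": 4}           # initial training-set sizes

    def fit(f, n, deg=5):
        """Polynomial component surrogate from n samples on [-1, 1]."""
        pts = np.linspace(-1, 1, n)
        return np.polynomial.Polynomial.fit(pts, f(pts), min(deg, n - 1))

    for _ in range(10):                  # greedy refinement loop
        s1, s2 = fit(f1, train["f1"]), fit(f2, train["f2"])
        truth = f2(f1(x_val))
        # Error contribution of each component surrogate, measured by swapping
        # it in while keeping the other component exact.
        e1 = np.abs(f2(s1(x_val)) - truth).mean()
        e2 = np.abs(s2(f1(x_val)) - truth).mean()
        # Refine the component with the largest error per unit training cost.
        pick = "f1" if e1 / cost["f1"] >= e2 / cost["f2"] else "f2"
        train[pick] += 2

    print("final allocation:", train)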


Surrogate modeling for efficiently, accurately and conservatively estimating measures of risk

Reliability Engineering and System Safety

Jakeman, John D.; Kouri, Drew P.; Huerta, Jose G.

We present a surrogate modeling framework for conservatively estimating measures of risk from limited realizations of an expensive physical experiment or computational simulation. Risk measures combine objective probabilities with the subjective values of a decision maker to quantify anticipated outcomes. Given a set of samples, we construct a surrogate model that produces estimates of risk measures that are always greater than their empirical approximations obtained from the training data. These surrogate models limit over-confidence in reliability and safety assessments and produce estimates of risk measures that converge much faster to the true value than purely sample-based estimates. We first detail the construction of conservative surrogate models that can be tailored to a stakeholder's risk preferences and then present an approach, based on stochastic orders, for constructing surrogate models that are conservative with respect to families of risk measures. Our surrogate models include biases that permit them to conservatively estimate the target risk measures. We provide theoretical results that show that these biases decay at the same rate as the L2 error in the surrogate model. Numerical demonstrations confirm that risk-adapted surrogate models do indeed overestimate the target risk measures while converging at the expected rate.
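One crude way to achieve this kind of conservativeness, sketched below for a one-dimensional toy problem: shift a least-squares surrogate up by its largest under-prediction on the training data, so that any monotone risk measure, such as conditional value-at-risk (CVaR), computed at the training samples is overestimated. The paper's bias construction is more refined; this only illustrates the goal.

    import numpy as np

    def cvar(samples, beta=0.9):
        """Empirical conditional value-at-risk: mean of the worst (1-beta) tail."""
        tail = np.sort(samples)[int(np.ceil(beta * len(samples))):]
        return tail.mean()

    # Toy expensive model and a small training set (illustrative assumptions).
    model = lambda x: x**2 + 0.3 * np.sin(5 * x)
    rng = np.random.default_rng(4)
    x_train = rng.uniform(-1, 1, 30)
    y_train = model(x_train)

    # Plain least-squares polynomial surrogate.
    poly = np.polynomial.Polynomial.fit(x_train, y_train, 3)

    # Conservative bias: shift up by the largest under-prediction so the
    # surrogate dominates the data at every training point.
    bias = np.maximum(y_train - poly(x_train), 0).max()
    conservative = lambda x: poly(x) + bias

    print(f"empirical CVaR of data:          {cvar(y_train):.4f}")
    print(f"CVaR of conservative surrogate:  {cvar(conservative(x_train)):.4f}")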


Improving Multi-Model Trajectory Simulation Estimators using Model Selection and Tuning

AIAA Science and Technology Forum and Exposition, AIAA SciTech Forum 2022

Bomarito, Geoffrey F.; Geraci, Gianluca; Warner, James E.; Leser, Patrick E.; Leser, William P.; Eldred, Michael; Jakeman, John D.; Gorodetsky, Alex A.

Multi-model Monte Carlo methods have been shown to be an efficient and accurate alternative to standard Monte Carlo (MC) for the model-based propagation of uncertainty in entry, descent, and landing (EDL) applications. These multi-model MC methods fuse predictions from low-fidelity models with the high-fidelity EDL model of interest to produce unbiased statistics at a fraction of the computational cost. The accuracy and efficiency of multi-model MC methods depend not only on the magnitude of the correlations between the low-fidelity models and the high-fidelity model, but also on the correlations among the low-fidelity models and on their relative computational costs. Because of this layer of complexity, the question of how to optimally select the set of low-fidelity models has remained open. In this work, methods for optimal model construction and tuning are investigated as a means to increase the speed and precision of trajectory simulation for EDL. Specifically, the focus is on the inclusion of low-fidelity model tuning within the sample-allocation optimization that accompanies multi-model MC methods. Results indicate that low-fidelity model tuning can significantly improve the efficiency and precision of trajectory simulations and give multi-model MC methods an even greater edge over standard MC.
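For context, the sample-allocation optimization referred to above builds on the classical multifidelity Monte Carlo (MFMC) allocation, which balances each model's cost against its correlation with the high-fidelity model. The sketch below computes that allocation for made-up costs and correlations, not actual EDL data.

    import numpy as np

    # Assumed per-evaluation costs and correlations with the high-fidelity
    # model, ordered high fidelity first (illustrative numbers only).
    w   = np.array([1.0, 1e-2, 1e-4])    # evaluation costs
    rho = np.array([1.0, 0.95, 0.80])    # correlation with the HF model
    budget = 100.0                       # total budget in HF-evaluation units

    # Optimal sample ratios from MFMC theory: each ratio weighs a model's
    # marginal squared correlation against its relative cost.
    rho_next = np.append(rho[1:], 0.0)
    r = np.sqrt(w[0] * (rho**2 - rho_next**2) / (w * (1 - rho[1]**2)))

    # Number of evaluations of each model that exhausts the budget.
    n_hf = budget / np.dot(w, r)
    n = np.floor(n_hf * r).astype(int)
    print("samples per model:", n)       # many cheap runs, few HF runs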
