Publications

Democratizing uncertainty quantification

Journal of Computational Physics

Seelinger, Linus; Reinarz, Anne; Lykkegaard, Mikkel B.; Alghamdi, Amal M.A.; Aristoff, David; Bangerth, Wolfgang; Benezech, Jean; Diez, Matteo; Frey, Kurt; Jakeman, John D.; Jorgensen, Jakob S.; Kim, Ki-Tae; Martinelli, Massimiliano; Parno, Matthew; Pellegrini, Riccardo; Petra, Noemi; Riis, Nicolai A.B.; Rosenfeld, Katherine; Serani, Andrea; Tamellini, Lorenzo; Villa, Umberto; Dodwell, Tim J.; Scheichl, Robert

Uncertainty Quantification (UQ) is vital to safety-critical model-based analyses, but the widespread adoption of sophisticated UQ methods is limited by technical complexity. In this paper, we introduce UM-Bridge (the UQ and Modeling Bridge), a high-level abstraction and software protocol that facilitates universal interoperability of UQ software with simulation codes. It breaks down the technical complexity of advanced UQ applications and enables separation of concerns between experts. UM-Bridge democratizes UQ by allowing effective interdisciplinary collaboration, accelerating the development of advanced UQ methods, and making it easy to perform UQ analyses from prototype to High Performance Computing (HPC) scale. In addition, we present a library of ready-to-run UQ benchmark problems, all easily accessible through UM-Bridge. These benchmarks support UQ methodology research, enabling reproducible performance comparisons. We demonstrate UM-Bridge with several scientific applications, harnessing HPC resources even using UQ codes not designed with HPC support.
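As a rough illustration of the interoperability the protocol provides (a minimal sketch, not code from the paper), the snippet below evaluates a model exposed by an UM-Bridge server from a Python UQ script via the umbridge client package; the server address and the model name "forward" are placeholders.

import umbridge

# Connect to a (hypothetical) model server speaking the UM-Bridge protocol.
model = umbridge.HTTPModel("http://localhost:4242", "forward")

print(model.get_input_sizes())      # parameter dimensions the model expects
print(model([[0.1, 0.2, 0.3]]))     # parameters in, model outputs back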


Assessing convergence in global sensitivity analysis: a review of methods for assessing and monitoring convergence

Socio-Environmental Systems Modelling

Sun, Xifu; Jakeman, Anthony J.; Croke, Barry F.W.; Roberts, Stephen G.; Jakeman, John D.

In global sensitivity analysis (GSA) of a model, a proper convergence analysis of metrics is essential for ensuring a level of confidence or trustworthiness in sensitivity results obtained, yet is somewhat deficient in practice. The level of confidence in sensitivity measures, particularly in relation to their influence and support for decisions from scientific, social and policy perspectives, is heavily reliant on the convergence of GSA. We review the literature and summarize the available methods for monitoring and assessing convergence of sensitivity measures based on application purposes. The aim is to expose the various choices for convergence assessment and encourage further testing of available methods to clarify their level of robustness. Furthermore, the review identifies a pressing need for comparative studies on convergence assessment methods to establish a clear hierarchy of effectiveness and encourages the adoption of systematic approaches for enhanced robustness in sensitivity analysis.
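As a generic illustration (not taken from the review), one common way to monitor convergence is to track a bootstrap confidence interval on a sensitivity index as the sample size grows; the toy model, estimator, and sample sizes below are arbitrary stand-ins.

import numpy as np

rng = np.random.default_rng(0)
model = lambda x: np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2   # toy test model

def first_order_contributions(nsamples, index, dim=2):
    # Saltelli-style pick-freeze samples for a first-order Sobol index
    A = rng.uniform(-np.pi, np.pi, (nsamples, dim))
    B = rng.uniform(-np.pi, np.pi, (nsamples, dim))
    AB = A.copy(); AB[:, index] = B[:, index]
    fA, fB, fAB = model(A), model(B), model(AB)
    return fB * (fAB - fA) / np.var(np.concatenate([fA, fB]))

for n in [500, 2000, 8000]:
    contrib = first_order_contributions(n, index=1)
    boot = [np.mean(rng.choice(contrib, size=n, replace=True)) for _ in range(200)]
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"n={n}: S_2 ~ {contrib.mean():.3f}, 95% CI width {hi - lo:.3f}")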


Progressive reduced order modeling: from single-phase flow to coupled multiphysics processes

58th US Rock Mechanics / Geomechanics Symposium 2024, ARMA 2024

Kadeethum, Teeratorn; Chang, Kyung W.; Jakeman, John D.; Yoon, Hongkyu

This study introduces the Progressive Improved Neural Operator (p-INO) framework, aimed at advancing machine-learning-based reduced-order models within geomechanics for underground resource optimization and carbon sequestration applications. The p-INO method transcends traditional transfer-learning limitations through progressive learning, enhancing the capability to transfer knowledge from many sources. Through numerical experiments, the performance of p-INO is benchmarked against standard Improved Neural Operators (INO) in scenarios varying by data availability (different numbers of training samples). The research utilizes simulation data reflecting scenarios such as single-phase flow, two-phase flow, and two-phase flow with mechanics, inspired by the Illinois Basin Decatur Project. Results reveal that p-INO significantly surpasses conventional INO models in accuracy, particularly in data-constrained environments. Moreover, adding more a priori information (more trained models used by p-INO) can further enhance the process. These experiments demonstrate p-INO's robustness in leveraging sparse datasets for precise predictions across complex subsurface physics scenarios. The findings underscore the potential of p-INO to revolutionize predictive modeling in geomechanics, presenting a substantial improvement in computational efficiency and accuracy for large-scale subsurface simulations.


PyApprox: A software package for sensitivity analysis, Bayesian inference, optimal experimental design, and multi-fidelity uncertainty quantification and surrogate modeling

Environmental Modelling and Software

Jakeman, John D.

PyApprox is a Python-based one-stop-shop for probabilistic analysis of numerical models such as those used in the earth, environmental and engineering sciences. Easy-to-use and extendable tools are provided for constructing surrogates, sensitivity analysis, Bayesian inference, experimental design, and forward uncertainty quantification. The algorithms implemented represent a wide range of methods for model analysis developed over the past two decades, including recent advances in multi-fidelity approaches that use multiple model discretizations and/or simplified physics to significantly reduce the computational cost of various types of analyses. An extensive set of benchmarks from the literature is also provided to facilitate the easy comparison of new or existing algorithms for a wide range of model analyses. This paper introduces PyApprox and its various features, and presents results demonstrating the utility of PyApprox on a benchmark problem modeling the advection of a tracer in groundwater.
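The sketch below is a bare-bones, library-free analogue of one workflow the package supports (it does not use PyApprox's own API): fit a polynomial surrogate to a handful of model runs, then propagate input uncertainty through the cheap surrogate; the model and sample counts are stand-ins.

import numpy as np

rng = np.random.default_rng(0)
expensive_model = lambda x: np.exp(-x[:, 0]) * np.sin(np.pi * x[:, 1])   # stand-in model

# fit a quadratic surrogate to 30 "expensive" model evaluations
def features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

X_train = rng.uniform(0.0, 1.0, (30, 2))
coef, *_ = np.linalg.lstsq(features(X_train), expensive_model(X_train), rcond=None)

# forward uncertainty propagation using the surrogate instead of the model
samples = rng.uniform(0.0, 1.0, (100000, 2))
pred = features(samples) @ coef
print("mean ~", pred.mean(), " std ~", pred.std())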


Multifidelity uncertainty quantification with models based on dissimilar parameters

Computer Methods in Applied Mechanics and Engineering

Zeng, Xiaoshu; Geraci, Gianluca; Eldred, Michael; Jakeman, John D.; Gorodetsky, Alex A.; Ghanem, Roger

Multifidelity uncertainty quantification (MF UQ) sampling approaches have been shown to significantly reduce the variance of statistical estimators while preserving the bias of the highest-fidelity model, provided that the low-fidelity models are well correlated. However, maintaining a high level of correlation can be challenging, especially when models depend on different input uncertain parameters, which drastically reduces the correlation. Existing MF UQ approaches do not adequately address this issue. In this work, we propose a new sampling strategy that exploits a shared space to improve the correlation among models with dissimilar parameterization. We achieve this by transforming the original coordinates onto an auxiliary manifold using the adaptive basis (AB) method (Tipireddy and Ghanem, 2014). The AB method has two main benefits: (1) it provides an effective tool to identify the low-dimensional manifold on which each model can be represented, and (2) it enables easy transformation of polynomial chaos representations from high- to low-dimensional spaces. This latter feature is used to identify a shared manifold among models without requiring additional evaluations. We present two algorithmic flavors of the new estimator to cover different analysis scenarios, including those with legacy and non-legacy high-fidelity (HF) data. We provide numerical results for analytical examples, a direct field acoustic test, and a finite element model of a nuclear fuel assembly. For all examples, we compare the proposed strategy against both single-fidelity and MF estimators based on the original model parameterization.
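A generic two-model control-variate sketch (not the estimator proposed in the paper) makes the role of correlation concrete: the variance reduction achieved by the multifidelity estimator below degrades as the low-fidelity model becomes less correlated with the high-fidelity one, which is exactly the effect that dissimilar parameterizations cause.

import numpy as np

rng = np.random.default_rng(0)
hf = lambda z: np.sin(z) + 0.1 * z**2        # stand-in high-fidelity model
lf = lambda z: np.sin(z)                     # cheap, correlated low-fidelity model

z_paired = rng.normal(size=100)              # few expensive HF evaluations
z_extra = rng.normal(size=10000)             # many additional cheap LF evaluations

y_hf, y_lf = hf(z_paired), lf(z_paired)
alpha = np.cov(y_hf, y_lf)[0, 1] / np.var(y_lf)     # control-variate weight

mf_mean = y_hf.mean() + alpha * (lf(z_extra).mean() - y_lf.mean())
print("HF-only estimate:", y_hf.mean(), "  multifidelity estimate:", mf_mean)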


A Decision-Relevant Factor-Fixing Framework: Application to Uncertainty Analysis of a High-Dimensional Water Quality Model

Water Resources Research

Wang, Qian; Guillaume, Joseph H.A.; Jakeman, John D.; Bennett, Frederick R.; Croke, Barry F.W.; Fu, Baihua; Yang, Tao; Jakeman, Anthony J.

Factor Fixing (FF) is a common method for reducing the number of model parameters to lower computational cost. FF typically starts by distinguishing the insensitive parameters from the sensitive ones and then pursues uncertainty quantification (UQ) on the resulting reduced-order model, fixing each insensitive parameter at a nominal value. There is a need, however, to expand this common approach to consider the effects of decision choices in the FF-UQ procedure on metrics of interest. Therefore, to guide the use of FF and increase confidence in the resulting dimension-reduced model, we propose a new adaptive framework consisting of four principles: (a) re-parameterize the model first to reduce obvious non-identifiable parameter combinations, (b) focus on decision relevance, especially with respect to errors in quantities of interest (QoI), (c) conduct adaptive evaluation and robustness assessment of errors in the QoI across FF choices as sample size increases, and (d) reconsider whether fixing is warranted. The framework is demonstrated on a spatially-distributed water quality model. The error in estimates of QoI caused by FF can be estimated using a Polynomial Chaos Expansion (PCE) surrogate model. Built with 70 model runs, the surrogate is computationally inexpensive to evaluate and can provide global sensitivity indices for free. For the selected catchment, just two factors may provide an acceptably accurate estimate of model uncertainty in the average annual load of Total Suspended Solids (TSS), suggesting that reducing the uncertainty in these two parameters is a priority for future work before undertaking further formal uncertainty quantification.
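As a simplified sketch of the decision-relevance idea (not the paper's actual workflow), a cheap surrogate can be reused to estimate how much a QoI statistic changes when a candidate parameter is fixed, before committing to the reduced model; the three-parameter surrogate below is a stand-in.

import numpy as np

rng = np.random.default_rng(0)
surrogate = lambda x: x[:, 0] + 0.05 * x[:, 1] + x[:, 0] * x[:, 2]   # stand-in surrogate

full = rng.uniform(0.0, 1.0, (200000, 3))
fixed = full.copy()
fixed[:, 1] = 0.5                       # fix the candidate parameter at a nominal value

qoi_full = surrogate(full).var()        # QoI statistic: output variance
qoi_fixed = surrogate(fixed).var()
print("relative QoI error from fixing x2:", abs(qoi_full - qoi_fixed) / qoi_full)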


Epistemic Uncertainty-Aware Barlow Twins Reduced Order Modeling for Nonlinear Contact Problems

IEEE Access

Kadeethum, Teeratorn; Jakeman, John D.; Choi, Youngsoo; Bouklas, Nikolaos; Yoon, Hongkyu

This study presents a method for constructing machine learning-based reduced order models (ROMs) that accurately simulate nonlinear contact problems while quantifying epistemic uncertainty. These purely non-intrusive ROMs significantly lower computational costs compared to traditional full order models (FOMs). The technique utilizes adversarial training combined with an ensemble of Barlow twins reduced order models (BT-ROMs) to maximize the information content of the nonlinear reduced manifolds. These lower-dimensional manifolds are equipped with Gaussian error estimates, allowing epistemic uncertainty in the ROM predictions to be quantified. The effectiveness of these ROMs, referred to as UQ-BT-ROMs, is demonstrated in the context of contact between a rigid indenter and a hyperelastic substrate under finite deformations. The ensemble of BT-ROMs improves accuracy and computational efficiency compared to existing alternatives. The relative error between the UQ-BT-ROM and FOM solutions ranges from approximately 3% to 8% across all benchmarks. Remarkably, this high level of accuracy is achieved at a significantly reduced computational cost compared to FOMs. For instance, the online phase of the UQ-BT-ROM takes only 0.001 seconds, while a single FOM evaluation requires 63 seconds. Furthermore, the error estimate produced by the UQ-BT-ROMs reasonably captures the errors in the ROMs, with accuracy increasing as training data increases. The UQ-BT-ROMs thus provide a cost-effective solution with significantly reduced computational times while maintaining a high level of accuracy.
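A minimal generic sketch of the ensemble idea (unrelated to the actual UQ-BT-ROM implementation): train several regressors on resampled data and use their spread as a pointwise epistemic error estimate.

import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, 200)
y = np.tanh(3 * X) + 0.01 * rng.normal(size=200)     # stand-in training data

test = np.linspace(-1, 1, 5)
preds = []
for _ in range(10):                                  # bootstrap ensemble of cubic fits
    idx = rng.choice(len(X), size=len(X), replace=True)
    coef = np.polyfit(X[idx], y[idx], deg=3)
    preds.append(np.polyval(coef, test))
preds = np.array(preds)

print("ensemble mean     :", preds.mean(axis=0))
print("epistemic estimate:", preds.std(axis=0))      # larger where members disagree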


Improving Bayesian networks multifidelity surrogate construction with basis adaptation

AIAA SciTech Forum and Exposition, 2023

Zeng, Xiaoshu; Geraci, Gianluca; Gorodetsky, Alex A.; Jakeman, John D.; Eldred, Michael; Ghanem, Roger

Surrogate construction is an essential component for all non-deterministic analyses in science and engineering. The efficient construction of easier and cheaper-to-run alternatives to a computationally expensive code paves the way for outer-loop workflows for forward and inverse uncertainty quantification and optimization. Unfortunately, the accurate construction of a surrogate still remains a task that often requires a prohibitive number of computations, making the approach unattainable for large-scale and high-fidelity applications. Multifidelity approaches offer the possibility to lower the computational expense of the high-fidelity code by fusing data from additional sources. In this context, we have demonstrated that multifidelity Bayesian Networks (MFNets) can efficiently fuse information derived from models with an underlying complex dependency structure. In this contribution, we expand on our previous work by adopting a basis adaptation procedure for the selection of the linear model representing each data source. Our numerical results demonstrate that this procedure is computationally advantageous because it can maximize the use of limited data to learn and exploit the important structures shared among models. Two examples are considered to demonstrate the benefits of the proposed approach: an analytical problem and a nuclear fuel finite element assembly. From these two applications, a lower dependency of MFNets on the model graph has also been observed.


Multi-fidelity information fusion and resource allocation

Jakeman, John D.; Eldred, Michael; Geraci, Gianluca; Seidl, D.T.; Smith, Thomas M.; Gorodetsky, Alex A.; Pham, Trung; Narayan, Akil; Zeng, Xiaoshu; Ghanem, Roger

This project created and demonstrated a framework for the efficient and accurate prediction of complex systems with only a limited amount of highly trusted data. These next-generation computational multi-fidelity tools fuse multiple information sources of varying cost and accuracy to reduce the computational and experimental resources needed for designing and assessing complex multi-physics/scale/component systems. These tools have already been used to substantially improve the computational efficiency of simulation-aided modeling activities, from assessing thermal battery performance to predicting material deformation. This report summarizes the work carried out during a two-year LDRD project. Specifically, we present our technical accomplishments; project outputs such as publications, presentations and professional leadership activities; and the project's legacy.


PyApprox: Enabling efficient model analysis

Jakeman, John D.

PyApprox is a Python-based one-stop-shop for probabilistic analysis of scientific numerical models. Easy-to-use and extendable tools are provided for constructing surrogates, sensitivity analysis, Bayesian inference, experimental design, and forward uncertainty quantification. The algorithms implemented represent the most popular methods for model analysis developed over the past two decades, including recent advances in multi-fidelity approaches that use multiple model discretizations and/or simplified physics to significantly reduce the computational cost of various types of analyses. Simple interfaces are provided for the most commonly used algorithms to limit a user's need to tune the various hyper-parameters of each algorithm. However, more advanced workflows that require customization of hyper-parameters are also supported. An extensive set of benchmarks from the literature is also provided to facilitate the easy comparison of different algorithms for a wide range of model analyses. This paper introduces PyApprox and its various features, and presents results demonstrating the utility of PyApprox on a benchmark problem modeling the advection of a tracer in groundwater.


Global Sensitivity Analysis Using the Ultra-Low Resolution Energy Exascale Earth System Model

Journal of Advances in Modeling Earth Systems

Tezaur, Irina K.; Peterson, Kara J.; Powell, Amy J.; Jakeman, John D.; Roesler, Erika L.

For decades, Arctic temperatures have increased twice as fast as average global temperatures. As a first step toward quantifying parametric uncertainty in Arctic climate, we performed a variance-based global sensitivity analysis (GSA) using a fully coupled, ultra-low resolution (ULR) configuration of version 1 of the U.S. Department of Energy's Energy Exascale Earth System Model (E3SMv1). Specifically, we quantified the sensitivity of six quantities of interest (QOIs), which characterize changes in Arctic climate over a 75 year period, to uncertainties in nine model parameters spanning the sea ice, atmosphere, and ocean components of E3SMv1. Sensitivity indices for each QOI were computed with a Gaussian process emulator using 139 random realizations of the random parameters and fixed preindustrial forcing. Uncertainties in the atmospheric parameters in the Cloud Layers Unified by Binormals (CLUBB) scheme were found to have the most impact on sea ice status and the larger Arctic climate. Our results demonstrate the importance of conducting sensitivity analyses with fully coupled climate models. The ULR configuration makes such studies computationally feasible today due to its low computational cost. When advances in computational power and modeling algorithms enable the tractable use of higher-resolution models, our results will provide a baseline that can quantify the impact of model resolution on the accuracy of sensitivity indices. Moreover, the confidence intervals provided by our study, which we used to quantify the impact of the number of model evaluations on the accuracy of sensitivity estimates, have the potential to inform the computational resources needed for future sensitivity studies.
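As a schematic illustration of the emulator-based approach (not the study's configuration; the toy function, three parameters, and sample counts below are placeholders for the nine-parameter E3SMv1 setup), a Gaussian process can be fit to a small ensemble of runs and then interrogated for main-effect sensitivities.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
model = lambda x: 4 * x[:, 0] + np.sin(2 * np.pi * x[:, 1]) + 0.1 * x[:, 2]   # stand-in

X = rng.uniform(0, 1, (100, 3))
gp = GaussianProcessRegressor(normalize_y=True).fit(X, model(X))   # emulator of the ensemble

base = rng.uniform(0, 1, (2000, 3))
total_var = gp.predict(base).var()
for i in range(3):
    # variance of the conditional mean E[Y | x_i], approximated on a 1D grid
    cond_means = []
    for xi in np.linspace(0, 1, 20):
        pts = base.copy()
        pts[:, i] = xi
        cond_means.append(gp.predict(pts).mean())
    print(f"main-effect sensitivity of x{i+1}: {np.var(cond_means) / total_var:.2f}")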


Adaptive experimental design for multi-fidelity surrogate modeling of multi-disciplinary systems

International Journal for Numerical Methods in Engineering

Jakeman, John D.; Friedman, Sam; Eldred, Michael; Tamellini, Lorenzo; Gorodetsky, Alex A.; Allaire, Doug

We present an adaptive algorithm for constructing surrogate models of multi-disciplinary systems composed of a set of coupled components. With this goal we introduce “coupling” variables with a priori unknown distributions that allow surrogates of each component to be built independently. Once built, the surrogates of the components are combined to form an integrated-surrogate that can be used to predict system-level quantities of interest at a fraction of the cost of the original model. The error in the integrated-surrogate is greedily minimized using an experimental design procedure that allocates the amount of training data, used to construct each component-surrogate, based on the contribution of those surrogates to the error of the integrated-surrogate. The multi-fidelity procedure presented is a generalization of multi-index stochastic collocation that can leverage ensembles of models of varying cost and accuracy, for one or more components, to reduce the computational cost of constructing the integrated-surrogate. Extensive numerical results demonstrate that, for a fixed computational budget, our algorithm is able to produce surrogates that are orders of magnitude more accurate than methods that treat the integrated system as a black-box.
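A bare-bones sketch of the composition idea (not the adaptive algorithm itself): build a surrogate of each component independently, treating the coupling variable as an extra input over an assumed range, then chain the component surrogates into a system surrogate.

import numpy as np

rng = np.random.default_rng(0)
component1 = lambda x: np.exp(-x)                 # its output is the coupling variable
component2 = lambda x, c: np.sin(x) + c**2        # consumes the coupling variable c

# surrogate of component 1 over its own input
x1 = rng.uniform(0, 1, 50)
s1 = np.polynomial.polynomial.Polynomial.fit(x1, component1(x1), deg=4)

# surrogate of component 2 over (x, c), with c sampled from an assumed range [0, 1]
xc = rng.uniform(0, 1, (200, 2))
basis = lambda x, c: np.column_stack([np.ones(np.size(x)), x, c, c**2])
coef, *_ = np.linalg.lstsq(basis(xc[:, 0], xc[:, 1]), component2(xc[:, 0], xc[:, 1]), rcond=None)

def system_surrogate(x):
    c = s1(x)                                      # coupling value predicted by surrogate 1
    return (basis(np.atleast_1d(x), np.atleast_1d(c)) @ coef)[0]

print(system_surrogate(0.3), "vs coupled model:", component2(0.3, component1(0.3)))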


Surrogate modeling for efficiently, accurately and conservatively estimating measures of risk

Reliability Engineering and System Safety

Jakeman, John D.; Kouri, Drew P.; Huerta, Jose G.

We present a surrogate modeling framework for conservatively estimating measures of risk from limited realizations of an expensive physical experiment or computational simulation. Risk measures combine objective probabilities with the subjective values of a decision maker to quantify anticipated outcomes. Given a set of samples, we construct a surrogate model that produces estimates of risk measures that are always greater than their empirical approximations obtained from the training data. These surrogate models limit over-confidence in reliability and safety assessments and produce estimates of risk measures that converge much faster to the true value than purely sample-based estimates. We first detail the construction of conservative surrogate models that can be tailored to a stakeholder's risk preferences and then present an approach, based on stochastic orders, for constructing surrogate models that are conservative with respect to families of risk measures. Our surrogate models include biases that permit them to conservatively estimate the target risk measures. We provide theoretical results that show that these biases decay at the same rate as the L2 error in the surrogate model. Numerical demonstrations confirm that risk-adapted surrogate models do indeed overestimate the target risk measures while converging at the expected rate.
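A small worked illustration of the target quantity (not the paper's surrogate construction): the empirical conditional value-at-risk (CVaR) of a response, and how adding a non-negative bias to a surrogate's predictions keeps its risk estimate at or above the empirical one, i.e. conservative.

import numpy as np

rng = np.random.default_rng(0)
samples = rng.lognormal(mean=0.0, sigma=0.5, size=1000)        # stand-in response data

def cvar(values, alpha=0.9):
    # average of the worst (1 - alpha) fraction of outcomes
    tail = np.sort(values)[int(alpha * len(values)):]
    return tail.mean()

surrogate_pred = samples + rng.normal(0, 0.05, size=samples.size)   # imperfect surrogate
bias = max(0.0, cvar(samples) - cvar(surrogate_pred))                # shift enforcing conservatism

print("empirical CVaR       :", cvar(samples))
print("biased-surrogate CVaR:", cvar(surrogate_pred + bias))         # never below empirical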


Improving Multi-Model Trajectory Simulation Estimators using Model Selection and Tuning

AIAA Science and Technology Forum and Exposition, AIAA SciTech Forum 2022

Bomarito, Geoffrey F.; Geraci, Gianluca; Warner, James E.; Leser, Patrick E.; Leser, William P.; Eldred, Michael; Jakeman, John D.; Gorodetsky, Alex A.

Multi-model Monte Carlo methods have been shown to be an efficient and accurate alternative to standard Monte Carlo (MC) in the model-based propagation of uncertainty in entry, descent, and landing (EDL) applications. These multi-model MC methods fuse predictions from low-fidelity models with the high-fidelity EDL model of interest to produce unbiased statistics at a fraction of the computational cost. The accuracy and efficiency of the multi-model MC methods depend not only upon the magnitude of the correlations of the low-fidelity models with the high-fidelity model, but also upon the correlations amongst the low-fidelity models and their relative computational costs. Because of this layer of complexity, the question of how to optimally select the set of low-fidelity models has remained open. In this work, methods for optimal model construction and tuning are investigated as a means to increase the speed and precision of trajectory simulation for EDL. Specifically, the focus is on the inclusion of low-fidelity model tuning within the sample allocation optimization that accompanies multi-model MC methods. Results indicate that low-fidelity model tuning can significantly improve the efficiency and precision of trajectory simulations and provide an increased edge to multi-model MC methods when compared to standard MC.
