Truly predictive numerical simulations can be obtained only by performing uncertainty quantification. However, many realistic engineering applications require extremely complex and computationally expensive high-fidelity numerical simulations for an accurate characterization of their performance. Very often, the combination of complex physical models and extreme operating conditions leads to hundreds of uncertain parameters that need to be propagated through high-fidelity codes. Under these circumstances, a single-fidelity uncertainty quantification approach, i.e. a workflow that relies exclusively on high-fidelity simulations, is unfeasible due to its prohibitive overall computational cost. To overcome this difficulty, multifidelity strategies have emerged and gained popularity in recent years. Their core idea is to combine simulations with varying levels of fidelity and accuracy in order to obtain estimators or surrogates with the same accuracy as their single-fidelity counterparts at a much lower computational cost. This goal is usually accomplished by defining a priori a sequence of discretization levels or physical modeling assumptions that can be used to decrease the complexity, and thus the computational cost, of a numerical model realization. Less attention has been devoted to low-fidelity models that can be built directly from a small number of available high-fidelity simulations; in this work we focus on reduced-order models (ROMs). Our main goal is to investigate whether combining multifidelity uncertainty quantification with ROMs can yield an efficient framework for propagating uncertainties through expensive numerical codes. We concentrate on sampling-based multifidelity approaches, such as the multifidelity control variate, and we consider several scenarios for a numerical test problem, namely the Kuramoto-Sivashinsky equation, for which the efficiency of the multifidelity-ROM estimator is compared to that of the standard (single-fidelity) Monte Carlo approach.
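For reference, a common two-model control variate formulation of this idea (a general sketch with notation introduced here, not necessarily the exact estimator used in this work) corrects the standard Monte Carlo estimate of the high-fidelity mean with a correlated low-fidelity quantity, e.g. a ROM output:

\[
\hat{Q}^{\mathrm{MF}} \;=\; \frac{1}{N}\sum_{i=1}^{N} Q^{\mathrm{HF}}(\xi_i)
\;+\; \alpha\left(\frac{1}{M}\sum_{j=1}^{M} Q^{\mathrm{LF}}(\xi_j)
\;-\; \frac{1}{N}\sum_{i=1}^{N} Q^{\mathrm{LF}}(\xi_i)\right),
\qquad
\alpha^{\star} \;=\; \rho\,\frac{\sigma_{\mathrm{HF}}}{\sigma_{\mathrm{LF}}},
\]

where the N high-fidelity samples are shared with the low-fidelity model, the M >> N low-fidelity samples (which include the shared N) are comparatively cheap, and \(\rho\) is the correlation between the two models. With the optimal weight \(\alpha^{\star}\), the estimator variance reduces to \((\sigma_{\mathrm{HF}}^{2}/N)\,[1-(1-N/M)\,\rho^{2}]\), so the gain over standard Monte Carlo grows with the correlation that the low-fidelity model achieves with the high-fidelity one.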
Uncertainty quantification is recognized as a fundamental task for obtaining predictive numerical simulations. However, many realistic engineering applications require complex and computationally expensive high-fidelity numerical simulations for an accurate characterization of the system response. Moreover, complex physical models and extreme operating conditions can easily lead to hundreds of uncertain parameters that need to be propagated through high-fidelity codes. Under these circumstances, a single-fidelity approach, i.e. a workflow that performs the uncertainty quantification task using only high-fidelity simulations, is unfeasible due to the prohibitive overall computational cost. In recent years, multifidelity strategies have been introduced to overcome this issue. The core idea of this family of methods is to combine simulations with varying levels of fidelity and accuracy in order to obtain multifidelity estimators or surrogates with the same accuracy as their single-fidelity counterparts at a much lower computational cost. This goal is usually accomplished by defining a priori a sequence of discretization levels or physical modeling assumptions that can be used to decrease the complexity of a numerical realization and thus its computational cost. However, less attention has been dedicated to low-fidelity models that can be built directly from the small number of high-fidelity simulations available. In this work we focus on reduced-order models, which can be regarded as a particular class of data-driven approaches. Our main goal is to explore the combination of multifidelity uncertainty quantification and reduced-order models to obtain an efficient framework for propagating uncertainties through expensive numerical codes.
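One standard, data-driven way to construct such a low-fidelity model from a handful of high-fidelity runs is proper orthogonal decomposition (POD); the following minimal sketch (an illustrative assumption, since the specific ROM construction is not detailed here) extracts a reduced basis from a snapshot matrix:

import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Build a POD basis from high-fidelity snapshots (illustrative, hypothetical helper).

    snapshots : (n_dof, n_samples) array, one high-fidelity solution per column.
    energy    : fraction of the snapshot energy (squared singular values) to retain.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cumulative = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cumulative, energy)) + 1   # smallest rank reaching the energy target
    return U[:, :r]                                    # reduced basis of dimension r

A ROM then evolves only the r generalized coordinates, for instance by Galerkin projection of the full-order operators onto this basis, so that each low-fidelity evaluation costs a small fraction of a high-fidelity run.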
We begin by presenting an overview of the general philosophy guiding the novel DARMA developments, followed by a brief review of the project's background. Finally, we present the FY19 design requirements. As the exascale era approaches, DARMA is uniquely positioned at the forefront of asynchronous many-task (AMT) research and development (R&D) to explore emerging programming model paradigms for next-generation HPC applications at Sandia, across the NNSA labs, and beyond. The DARMA project explores how to fundamentally shift the expression (PM) and execution (EM) of massively concurrent HPC scientific algorithms to be more asynchronous, resilient to executional aberrations in heterogeneous and unpredictable environments, and conscious of data dependencies, thereby enabling an intelligent, dynamic, and self-aware runtime to guide execution.