Recent Advancements in Multifidelity Uncertainty Quantification
Computer Methods in Applied Mechanics and Engineering
Polynomial chaos expansions (PCE) are well-suited to quantifying uncertainty in models parameterized by independent random variables. The assumption of independence leads to simple strategies for building multivariate orthonormal bases and for sampling to evaluate PCE coefficients. In contrast, the application of PCE to models of dependent variables is much more challenging. Three approaches can be used to construct PCE of models of dependent variables. The first approach uses mapping methods, in which measure transformations such as the Nataf and Rosenblatt transformations map dependent random variables to independent ones; however, we show that this can significantly degrade performance since the Jacobian of the map must be approximated. A second strategy is the class of dominating support methods. In these approaches a PCE is built using independent random variables whose distributional support dominates the support of the true dependent joint density; we provide evidence that this approach produces approximations with suboptimal accuracy. A third approach, the novel method proposed here, uses Gram–Schmidt orthogonalization (GSO) to numerically compute orthonormal polynomials for the dependent random variables. This approach has been used successfully when solving differential equations with the intrusive stochastic Galerkin method, and in this paper we use GSO to build PCE with a non-intrusive stochastic collocation method. The stochastic collocation method treats the model as a black box and builds approximations of the input–output map from a set of samples. Building PCE from samples can introduce ill-conditioning that does not plague stochastic Galerkin methods. To mitigate this ill-conditioning we generate weighted Leja sequences, which are nested sample sets, to build accurate polynomial interpolants. We show that our proposed approach, GSO with weighted Leja sequences, produces PCE that are orders of magnitude more accurate than PCE constructed using mapping or dominating support methods.
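As a rough illustration of the third approach, the sketch below applies modified Gram–Schmidt to a total-degree monomial basis, with inner products estimated by Monte Carlo over samples from a dependent joint density. The two-dimensional correlated Gaussian inputs, degree, and sample size are illustrative assumptions, not the paper's numerical examples, and the sketch omits the weighted Leja construction used for collocation.

```python
import numpy as np

def gram_schmidt_basis(samples, degree):
    """Orthonormalize a 2D total-degree monomial basis w.r.t. an empirical measure.

    samples : (N, 2) array drawn from the dependent joint density.
    Returns the multi-indices and a matrix C whose k-th row holds the
    monomial coefficients of the k-th orthonormal polynomial.
    """
    x, y = samples[:, 0], samples[:, 1]
    powers = [(i, t - i) for t in range(degree + 1) for i in range(t + 1)]
    V = np.column_stack([x**i * y**j for i, j in powers])   # monomials at samples, (N, P)
    N, P = V.shape
    C = np.eye(P)                                            # coefficients in the monomial basis
    Q = V.copy()                                             # evaluations of the orthonormalized basis
    for k in range(P):
        for m in range(k):                                   # modified Gram-Schmidt sweep
            proj = Q[:, m] @ Q[:, k] / N                     # Monte Carlo inner product
            Q[:, k] -= proj * Q[:, m]
            C[k] -= proj * C[m]
        norm = np.sqrt(Q[:, k] @ Q[:, k] / N)
        Q[:, k] /= norm
        C[k] /= norm
    return powers, C

# Example: correlated Gaussian inputs (a dependent joint density).
rng = np.random.default_rng(0)
cov = np.array([[1.0, 0.7], [0.7, 1.0]])
samples = rng.multivariate_normal([0.0, 0.0], cov, size=20000)
powers, C = gram_schmidt_basis(samples, degree=3)
```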
AIAA Scitech 2019 Forum
In the context of the DARPA-funded project SEQUOIA, we are interested in the design under uncertainty of a jet engine nozzle subject to the performance requirements of a reconnaissance mission for a small unmanned military aircraft. This design task involves complex and expensive aero-thermo-structural computational analyses, where it is of paramount importance to also include the effect of uncertain variables to obtain reliable predictions of the device’s performance. In this work we focus on the forward propagation analysis, which is a key part of the design-under-uncertainty workflow. This task cannot be tackled directly by single-fidelity approaches due to the prohibitive computational cost associated with each realization. We report here a summary of our latest advancements regarding several multilevel and multifidelity strategies designed to alleviate these challenges. The overall goal of these techniques is to reduce the computational cost of analyzing a high-fidelity model by resorting to less accurate, but less computationally demanding, lower-fidelity models. The features of these multifidelity UQ approaches are first illustrated and demonstrated on several model problems and then applied to the aero-thermo-structural analysis of the jet engine nozzle.
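The basic multifidelity idea referenced above can be sketched with a two-model control-variate Monte Carlo estimator of a mean. The models f_hi and f_lo, the sample allocation, and the estimated control-variate coefficient below are illustrative assumptions and do not correspond to the SEQUOIA nozzle analyses.

```python
import numpy as np

# Hypothetical high- and low-fidelity models of the same quantity of interest.
def f_hi(x):  # expensive "truth" (stand-in)
    return np.sin(np.pi * x) + 0.1 * x**2

def f_lo(x):  # cheap, correlated approximation
    return np.sin(np.pi * x)

rng = np.random.default_rng(1)
N_hi, N_lo = 50, 5000                        # few expensive runs, many cheap runs
x_hi = rng.uniform(-1, 1, N_hi)
x_lo = rng.uniform(-1, 1, N_lo)

y_hi, y_lo_paired = f_hi(x_hi), f_lo(x_hi)   # paired evaluations on shared inputs
y_lo = f_lo(x_lo)

# Control-variate coefficient estimated from the paired samples.
alpha = np.cov(y_hi, y_lo_paired)[0, 1] / np.var(y_lo_paired, ddof=1)

# Multifidelity estimate of E[f_hi]: correct the small high-fidelity sample mean
# using the difference between the cheap model's large- and small-sample means.
mf_mean = y_hi.mean() + alpha * (y_lo.mean() - y_lo_paired.mean())
print(mf_mean, y_hi.mean())
```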
Journal of Computational Physics
Predictive analysis of complex computational models, such as uncertainty quantification (UQ), must often rely on using an existing database of simulation runs. In this paper we consider the task of performing low-multilinear-rank regression on such a database. Specifically, we develop and analyze an efficient gradient computation that enables gradient-based optimization procedures, including stochastic gradient descent and quasi-Newton methods, for learning the parameters of a functional tensor-train (FT). We compare our algorithms with 22 other nonparametric and parametric regression methods on 10 real-world data sets and show that for many physical systems, exploiting low-rank structure facilitates efficient construction of surrogate models. We use a number of synthetic functions to build insight into the behavior of our algorithms, including the rank adaptation and group-sparsity regularization procedures that we developed to reduce overfitting. Finally, we conclude the paper by building a surrogate of a physical model of a propulsion plant on a naval vessel.
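The flavor of gradient-based low-rank functional regression can be sketched with a rank-1 (fully separable) model whose univariate factors are Legendre expansions fitted by full-batch gradient descent. The basis, rank, learning rate, and toy data set are assumptions for illustration and do not reproduce the functional tensor-train, rank adaptation, or regularization machinery of the paper.

```python
import numpy as np
from numpy.polynomial import legendre

def basis(x, degree):
    """Legendre basis values P_0..P_degree at points x, shape (len(x), degree+1)."""
    return legendre.legvander(x, degree)

def fit_rank1_ft(X, y, degree=3, lr=0.02, iters=5000, seed=0):
    """Fit f(x) ~ prod_d c_d . phi(x_d) by full-batch gradient descent on squared loss."""
    rng = np.random.default_rng(seed)
    N, D = X.shape
    Phi = np.stack([basis(X[:, d], degree) for d in range(D)])  # (D, N, degree+1)
    C = rng.normal(scale=0.3, size=(D, degree + 1))             # one coefficient vector per input
    for _ in range(iters):
        G = np.einsum('dnk,dk->dn', Phi, C)                     # univariate factors g_d(x_d), (D, N)
        f = G.prod(axis=0)                                      # rank-1 prediction, (N,)
        r = f - y
        for d in range(D):
            other = np.delete(G, d, axis=0).prod(axis=0)        # product of the remaining factors
            grad = 2.0 / N * Phi[d].T @ (r * other)             # d(loss)/d(C[d])
            C[d] -= lr * grad
    return C

# Toy separable data set.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 2))
y = (1.0 + X[:, 0]) * np.cos(2.0 * X[:, 1])
C = fit_rank1_ft(X, y)
```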
SIAM Journal on Scientific Computing
Here, we analyze the convergence of probability density functions obtained using approximate models, for both forward and inverse problems. We consider the standard forward uncertainty quantification problem where an assumed probability density on parameters is propagated through the approximate model to produce a probability density, often called a push-forward probability density, on a set of quantities of interest (QoI). The inverse problem considered in this paper seeks to update an initial probability density assumed on model input parameters such that the subsequent push-forward of this updated density through the parameter-to-QoI map matches a given probability density on the QoI. We prove that the densities obtained from solving the forward and inverse problems, using approximate models, converge to the true densities as the approximate models converge to the true models. Numerical results are presented to demonstrate convergence rates of densities for sparse grid approximations of parameter-to-QoI maps and standard spatial and temporal discretizations of PDEs and ODEs.
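The density-based update described above can be sketched with samples and a kernel density estimate of the push-forward density: reweighting samples from the initial density by the ratio of the observed density to the push-forward density yields an updated density whose push-forward matches the observed one. The parameter-to-QoI map, the initial and observed densities, and the sample sizes below are illustrative assumptions.

```python
import numpy as np
from scipy import stats

# Hypothetical parameter-to-QoI map (an approximate model would take its place).
def Q(lam):
    return lam[:, 0]**2 + 0.5 * lam[:, 1]

rng = np.random.default_rng(0)

# Samples from the initial density on the parameters and the push-forward through Q.
lam = rng.uniform(-1, 1, size=(50000, 2))
q = Q(lam)
pf = stats.gaussian_kde(q)                      # KDE estimate of the push-forward density

# Observed (target) density on the QoI.
obs = stats.norm(loc=0.3, scale=0.1)

# Density-based update: reweight initial samples by pi_obs(Q) / pi_pf(Q), so the
# push-forward of the updated density matches the observed density.
w = obs.pdf(q) / pf(q)
w /= w.sum()

# Check the push-forward of the updated density by resampling.
idx = rng.choice(len(q), size=10000, p=w)
print(q[idx].mean(), q[idx].std())              # should be near (0.3, 0.1)
```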
We present a preliminary investigation of the use of Multi-Layer Perceptrons (MLPs) and Recurrent Neural Networks (RNNs) as surrogates of parameter-to-prediction maps of computationally expensive dynamical models. In particular, we target the approximation of Quantities of Interest (QoIs) derived from the solution of Partial Differential Equations (PDEs) at different time instants. In order to limit the scope of our study while targeting a relevant application, we focus on the problem of computing variations in ice sheet mass (our QoI), which is a proxy for global mean sea-level changes. We present a number of neural network formulations and compare their performance with that of Polynomial Chaos Expansions (PCE) constructed on the same data.
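A minimal example of fitting an MLP surrogate of a parameter-to-QoI map with scikit-learn is sketched below; the toy QoI, network size, and training setup are assumptions for illustration and are unrelated to the ice sheet model or the RNN formulations studied here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Toy stand-in for an expensive parameter-to-QoI map (e.g. a QoI at one time instant).
def qoi(theta):
    return np.sin(theta[:, 0]) * np.exp(-theta[:, 1]**2) + 0.5 * theta[:, 2]

rng = np.random.default_rng(0)
Theta = rng.uniform(-1, 1, size=(2000, 3))       # sampled input parameters
y = qoi(Theta)

X_train, X_test, y_train, y_test = train_test_split(Theta, y, random_state=0)

# Small MLP surrogate of the parameter-to-QoI map.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), activation='tanh',
                         max_iter=5000, random_state=0)
surrogate.fit(X_train, y_train)
print("test R^2:", surrogate.score(X_test, y_test))
```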
Computer Methods in Applied Mechanics and Engineering
The search for multivariate quadrature rules of minimal size with a specified polynomial accuracy has been the topic of many years of research. Finding such a rule allows accurate integration of moments, which play a central role in many aspects of scientific computing with complex models. The contribution of this paper is twofold. First, we present a novel mathematical analysis of the polynomial quadrature problem that yields a lower bound on the minimal possible number of nodes in a polynomial rule with specified accuracy. We give concrete but simple multivariate examples where a minimal quadrature rule achieving this lower bound can be designed, along with situations in which the bound cannot be achieved. Our second contribution is an algorithm that efficiently generates multivariate quadrature rules with positive weights on non-tensorial domains. Our tests show the success of this procedure in up to 20 dimensions. We test our method on applications to dimension reduction and chemical kinetics problems, including comparisons against popular alternatives such as sparse grids, Monte Carlo and quasi-Monte Carlo sequences, and Stroud rules. The quadrature rules computed in this paper outperform these alternatives in almost all scenarios.
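One simple way to obtain a positive-weight multivariate rule, sketched below, is to select weights over a large pool of candidate nodes by nonnegative least squares so that the rule reproduces the moments of a total-degree polynomial basis; NNLS tends to return a sparse solution, so only a small subset of candidates receive nonzero weight. The candidate pool, basis, and degree are illustrative assumptions, and this is not the algorithm proposed in the paper.

```python
import numpy as np
from numpy.polynomial import legendre
from scipy.optimize import nnls

rng = np.random.default_rng(0)
degree, dim = 4, 2

# Total-degree multi-indices and a large pool of candidate nodes on [-1, 1]^2.
idx = [(i, t - i) for t in range(degree + 1) for i in range(t + 1)]
cand = rng.uniform(-1, 1, size=(2000, dim))

# Tensor-product Legendre basis at the candidates and exact moments of the
# (normalized) uniform measure: only the constant basis function has moment 1.
Vx = legendre.legvander(cand[:, 0], degree)
Vy = legendre.legvander(cand[:, 1], degree)
V = np.column_stack([Vx[:, i] * Vy[:, j] for i, j in idx])
moments = np.array([1.0 if (i, j) == (0, 0) else 0.0 for i, j in idx])

# Nonnegative weights reproducing the moments.
w, residual = nnls(V.T, moments)
nodes = cand[w > 0]
print(len(nodes), "nodes, residual", residual)
```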