Publications

Results 101–150 of 210

Polynomial chaos expansions for dependent random variables

Computer Methods in Applied Mechanics and Engineering

Jakeman, John D.; Franzelin, Fabian; Narayan, Akil; Eldred, Michael; Pflüger, Dirk

Polynomial chaos expansions (PCE) are well-suited to quantifying uncertainty in models parameterized by independent random variables. The assumption of independence leads to simple strategies for building multivariate orthonormal bases and for sampling to evaluate PCE coefficients. In contrast, the application of PCE to models of dependent variables is much more challenging. Three approaches can be used to construct PCE of models of dependent variables. The first approach uses mapping methods, in which measure transformations such as the Nataf and Rosenblatt transformations map dependent random variables to independent ones; however, we show that this can significantly degrade performance since the Jacobian of the map must be approximated. A second strategy is the class of dominating support methods. In these approaches a PCE is built using independent random variables whose distributional support dominates the support of the true dependent joint density; we provide evidence that this approach produces approximations with suboptimal accuracy. A third approach, the novel method proposed here, uses Gram–Schmidt orthogonalization (GSO) to numerically compute orthonormal polynomials for the dependent random variables. This approach has been used successfully when solving differential equations with the intrusive stochastic Galerkin method; in this paper we use GSO to build PCE with a non-intrusive stochastic collocation method. The stochastic collocation method treats the model as a black box and builds approximations of the input–output map from a set of samples. Building PCE from samples can introduce ill-conditioning that does not plague stochastic Galerkin methods. To mitigate this ill-conditioning we generate weighted Leja sequences, which are nested sample sets, to build accurate polynomial interpolants. We show that our proposed approach, GSO with weighted Leja sequences, produces PCE that are orders of magnitude more accurate than PCE constructed using mapping or dominating support methods.
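
As a rough illustration of the Gram–Schmidt step described above, the minimal Python sketch below (not the authors' implementation; it fixes two variables and a total-degree monomial basis for brevity) orthonormalizes a sampled Vandermonde matrix with a QR factorization, which is numerically equivalent to modified Gram–Schmidt under the empirical inner product defined by the dependent samples.

import numpy as np

def gso_orthonormal_basis(samples, max_degree):
    # samples: (nsamples, 2) array drawn from the dependent joint density.
    indices = [(i, j) for i in range(max_degree + 1)
               for j in range(max_degree + 1 - i)]
    V = np.column_stack([samples[:, 0]**i * samples[:, 1]**j
                         for i, j in indices])
    # QR of the scaled Vandermonde: columns of V @ inv(R) are orthonormal
    # under the empirical inner product (1/n) * sum over samples.
    _, R = np.linalg.qr(V / np.sqrt(len(samples)))
    Rinv = np.linalg.inv(R)

    def evaluate(x):
        # Evaluate the orthonormalized basis at new points x: (npts, 2).
        Vx = np.column_stack([x[:, 0]**i * x[:, 1]**j for i, j in indices])
        return Vx @ Rinv

    return evaluate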

Recent advancements in multilevel-multifidelity techniques for forward UQ in the DARPA SEQUOIA project

AIAA Scitech 2019 Forum

Geraci, Gianluca; Eldred, Michael; Gorodetsky, Alex A.; Jakeman, John D.

In the context of the DARPA-funded project SEQUOIA we are interested in the design under uncertainty of a jet engine nozzle subject to the performance requirements of a reconnaissance mission for a small unmanned military aircraft. This design task involves complex and expensive aero-thermo-structural computational analyses, where it is of paramount importance to also include the effect of the uncertain variables to obtain reliable predictions of the device's performance. In this work we focus on the forward propagation analysis, which is a key part of the design under uncertainty workflow. This task cannot be tackled directly by means of single-fidelity approaches due to the prohibitive computational cost associated with each realization. We report here a summary of our latest advances in several multilevel and multifidelity strategies designed to alleviate these challenges. The overall goal of these techniques is to reduce the computational cost of analyzing a high-fidelity model by resorting to less accurate, but less computationally demanding, lower-fidelity models. The features of these multifidelity UQ approaches are illustrated and demonstrated first on several model problems and afterward on the aero-thermo-structural analysis of the jet engine nozzle.
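
For readers unfamiliar with the multifidelity idea, the sketch below shows one of its simplest instances, a two-model control-variate Monte Carlo mean estimator; f_hi, f_lo, and draw_inputs are hypothetical callables, and this illustrates the general strategy rather than the project's estimators.

import numpy as np

def cv_mean_estimate(f_hi, f_lo, draw_inputs, n_hi, n_extra, rng):
    # Paired evaluations of both models on shared high-fidelity inputs.
    x = draw_inputs(n_hi, rng)
    y_hi = np.array([f_hi(xi) for xi in x])
    y_lo = np.array([f_lo(xi) for xi in x])
    # Many extra cheap low-fidelity evaluations to pin down its mean.
    x2 = draw_inputs(n_extra, rng)
    mu_lo = np.mean([f_lo(xi) for xi in x2])
    # Optimal control-variate weight from the paired sample covariance.
    c = np.cov(y_hi, y_lo)
    alpha = c[0, 1] / c[1, 1]
    return y_hi.mean() + alpha * (mu_lo - y_lo.mean())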

Gradient-based optimization for regression in the functional tensor-train format

Journal of Computational Physics

Gorodetsky, Alex A.; Jakeman, John D.

Predictive analysis of complex computational models, such as uncertainty quantification (UQ), must often rely on an existing database of simulation runs. In this paper we consider the task of performing low-multilinear-rank regression on such a database. Specifically, we develop and analyze an efficient gradient computation that enables gradient-based optimization procedures, including stochastic gradient descent and quasi-Newton methods, for learning the parameters of a functional tensor-train (FT). We compare our algorithms with 22 other nonparametric and parametric regression methods on 10 real-world data sets and show that for many physical systems, exploiting low-rank structure facilitates efficient construction of surrogate models. We use a number of synthetic functions to build insight into the behavior of our algorithms, including the rank adaptation and group-sparsity regularization procedures that we developed to reduce overfitting. Finally, we conclude the paper by building a surrogate of a physical model of a propulsion plant on a naval vessel.
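
A toy sketch of the idea (not the paper's FT code): fit a rank-r separated model f(x1, x2) = sum_k g_k(x1) * h_k(x2), with polynomial factors g_k and h_k, by plain gradient descent on the mean squared error over a database of samples.

import numpy as np

def fit_rank_r(X, y, rank=2, degree=3, lr=1e-2, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    P = degree + 1
    # Univariate Vandermonde matrices for each coordinate.
    V1 = np.vander(X[:, 0], P, increasing=True)   # shape (n, P)
    V2 = np.vander(X[:, 1], P, increasing=True)
    A = rng.normal(scale=0.1, size=(rank, P))     # coefficients of g_k
    B = rng.normal(scale=0.1, size=(rank, P))     # coefficients of h_k
    n = len(y)
    for _ in range(iters):
        G, H = V1 @ A.T, V2 @ B.T                 # (n, rank) factor values
        r = (G * H).sum(axis=1) - y               # residuals
        # Gradients of 0.5/n * ||r||^2 with respect to A and B.
        gA = ((r[:, None] * H).T @ V1) / n
        gB = ((r[:, None] * G).T @ V2) / n
        A -= lr * gA
        B -= lr * gB
    return A, B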

Convergence of Probability Densities Using Approximate Models for Forward and Inverse Problems in Uncertainty Quantification

SIAM Journal on Scientific Computing

Butler, Troy; Jakeman, John D.; Wildey, Timothy

We analyze the convergence of probability density functions computed using approximate models for both forward and inverse problems. We consider the standard forward uncertainty quantification problem where an assumed probability density on parameters is propagated through the approximate model to produce a probability density, often called a push-forward probability density, on a set of quantities of interest (QoI). The inverse problem considered in this paper seeks to update an initial probability density assumed on model input parameters such that the subsequent push-forward of this updated density through the parameter-to-QoI map matches a given probability density on the QoI. We prove that the densities obtained from solving the forward and inverse problems, using approximate models, converge to the true densities as the approximate models converge to the true models. Numerical results are presented to demonstrate convergence rates of densities for sparse grid approximations of parameter-to-QoI maps and standard spatial and temporal discretizations of PDEs and ODEs.
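
The forward problem above can be illustrated in a few lines: push input samples through two hypothetical parameter-to-QoI maps (standing in for a fine and an approximate model), estimate both push-forward densities with a kernel density estimator, and measure their L1 distance. The maps and ranges below are invented for illustration.

import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical fine and approximate parameter-to-QoI maps.
def qoi_fine(lam):
    return np.sin(lam[:, 0]) + 0.5 * lam[:, 1] ** 2

def qoi_approx(lam):
    return np.sin(lam[:, 0]) + 0.5 * lam[:, 1] ** 2 + 0.05 * lam[:, 0]

rng = np.random.default_rng(0)
lam = rng.uniform(-1, 1, size=(5000, 2))   # samples of the input density
pf_fine = gaussian_kde(qoi_fine(lam))      # push-forward densities via KDE
pf_approx = gaussian_kde(qoi_approx(lam))
q = np.linspace(-1.5, 1.5, 400)
l1_error = np.trapz(np.abs(pf_fine(q) - pf_approx(q)), q)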

Neural Networks as Surrogates of Nonlinear High-Dimensional Parameter-to-Prediction Maps

Jakeman, John D.; Perego, Mauro; Severa, William M.

We present a preliminary investigation of the use of Multi-Layer Perceptrons (MLP) and Recurrent Neural Networks (RNNs) as surrogates of parameter-to-prediction maps of computational expensive dynamical models. In particular, we target the approximation of Quantities of Interest (QoIs) derived from the solution of a Partial Differential Equations (PDEs) at different time instants. In order to limit the scope of our study while targeting a relevant application, we focus on the problem of computing variations in the ice sheets mass (our QoI), which is a proxy for global mean sea-level changes. We present a number of neural network formulations and compare their performance with that of Polynomial Chaos Expansions (PCE) constructed on the same data.
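
As a hedged sketch of the surrogate-fitting step, using scikit-learn's MLPRegressor on synthetic stand-in data (not the models or data of the paper):

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 4))             # hypothetical model parameters
y = np.sin(X @ np.array([1.0, 0.5, -0.3, 0.2]))   # stand-in for an expensive QoI

# Fit an MLP surrogate on 400 samples and test on the held-out 100.
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
mlp.fit(X[:400], y[:400])
rel_err = (np.linalg.norm(mlp.predict(X[400:]) - y[400:])
           / np.linalg.norm(y[400:]))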

Generation and application of multivariate polynomial quadrature rules

Computer Methods in Applied Mechanics and Engineering

Jakeman, John D.; Narayan, Akil

The search for multivariate quadrature rules of minimal size with a specified polynomial accuracy has been the topic of many years of research. Finding such a rule allows accurate integration of moments, which play a central role in many aspects of scientific computing with complex models. The contribution of this paper is twofold. First, we provide novel mathematical analysis of the polynomial quadrature problem that provides a lower bound on the minimal possible number of nodes in a polynomial rule with specified accuracy. We give concrete but simple multivariate examples where a minimal quadrature rule can be designed that achieves this lower bound, along with situations that showcase when it is not possible to achieve this lower bound. Our second contribution is the formulation of an algorithm that is able to efficiently generate multivariate quadrature rules with positive weights on non-tensorial domains. Our tests show success of this procedure in up to 20 dimensions. We test our method on applications to dimension reduction and chemical kinetics problems, including comparisons against popular alternatives such as sparse grids, Monte Carlo and quasi-Monte Carlo sequences, and Stroud rules. The quadrature rules computed in this paper outperform these alternatives in almost all scenarios.
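
One simple positive-weight construction in the spirit of, though not identical to, the paper's algorithm is to match monomial moments over a large candidate node set with nonnegative least squares, which tends to return a sparse rule:

import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
cand = rng.uniform(-1, 1, size=(2000, 2))                   # candidate nodes on [-1,1]^2
indices = [(i, j) for i in range(4) for j in range(4 - i)]  # total degree 3
V = np.column_stack([cand[:, 0]**i * cand[:, 1]**j for i, j in indices])

# Exact moments of the monomials under the uniform probability
# measure on [-1,1]^2 (density 1/4).
def mom1d(k):
    return 0.0 if k % 2 else 2.0 / (k + 1)

m = np.array([mom1d(i) * mom1d(j) / 4.0 for i, j in indices])
w, _ = nnls(V.T, m)                  # nonnegative weights matching moments
nodes, weights = cand[w > 0], w[w > 0]   # NNLS solutions are typically sparse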

An overview of methods to identify and manage uncertainty for modelling problems in the water-environment-agriculture cross-sector

Mathematics for Industry

Jakeman, Anthony J.; Jakeman, John D.

Uncertainty pervades the representation of systems in the water–environment–agriculture cross-sector. Successful methods to address uncertainties have largely focused on standard mathematical formulations of biophysical processes in a single sector, such as partial or ordinary differential equations. More attention to integrated models of such systems is warranted. Model components representing the different sectors of an integrated model can have less standard formulations that differ from one another, as well as different levels of epistemic knowledge and data informativeness. Thus, uncertainty is not only pervasive but also crosses boundaries and propagates between system components. Uncertainty assessment (UA) cries out for more eclectic treatment in these circumstances, some of it being more qualitative and empirical. In this paper, we discuss the various sources of uncertainty in such a cross-sectoral setting and ways to assess and manage them. We outline a fast-growing set of methodologies, particularly in the computational mathematics literature on uncertainty quantification (UQ), that seem highly pertinent for uncertainty assessment. There appears to be considerable scope for advancing UA by integrating relevant UQ techniques into cross-sectoral problem applications. Of course, this will entail considerable collaboration between domain specialists, who often take first ownership of the problem, and computational methods experts.

Optimal experimental design using a consistent Bayesian approach

ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part B: Mechanical Engineering

Walsh, Scott N.; Wildey, Timothy; Jakeman, John D.

We consider the utilization of a computational model to guide the optimal acquisition of experimental data to inform the stochastic description of model input parameters. Our formulation is based on the recently developed consistent Bayesian approach for solving stochastic inverse problems, which seeks a posterior probability density that is consistent with the model and the data in the sense that the push-forward of the posterior (through the computational model) matches the observed density on the observations almost everywhere. Given a set of potential observations, our optimal experimental design (OED) seeks the observation, or set of observations, that maximizes the expected information gain from the prior probability density on the model parameters. We discuss the characterization of the space of observed densities and a computationally efficient approach for rescaling observed densities to satisfy the fundamental assumptions of the consistent Bayesian approach. Numerical results are presented to compare our approach with existing OED methodologies using the classical/statistical Bayesian approach and to demonstrate our OED on a set of representative partial differential equations (PDE)-based models.
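
A hedged sketch of estimating the expected information gain for one candidate observation: with the density ratio r = observed/push-forward, the updated density is the initial density times r(Q(lam)), so KL(updated, initial) = E_initial[r log r], which a Monte Carlo average estimates directly. This is an illustration consistent with the formulation above, not the paper's code.

import numpy as np
from scipy.stats import gaussian_kde

def expected_information_gain(q_samples, observed_pdf):
    # q_samples: QoI values Q(lam_i) for lam_i drawn from the initial density.
    pf = gaussian_kde(q_samples)                # push-forward of initial density
    r = observed_pdf(q_samples) / pf(q_samples) # density ratio at each sample
    return np.mean(r * np.log(np.maximum(r, 1e-300)))

# Usage idea: rank candidate observations by their estimated gain, e.g.
#   eig = expected_information_gain(q_samples, scipy.stats.norm(0.3, 0.1).pdf)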

Combining push-forward measures and Bayes' rule to construct consistent solutions to stochastic inverse problems

SIAM Journal on Scientific Computing

Wildey, Timothy; Butler, T.; Jakeman, John D.

We formulate, and present a numerical method for solving, an inverse problem for inferring parameters of a deterministic model from stochastic observational data on quantities of interest. The solution, given as a probability measure, is derived using a Bayesian updating approach for measurable maps that finds a posterior probability measure that when propagated through the deterministic model produces a push-forward measure that exactly matches the observed probability measure on the data. Our approach for finding such posterior measures, which we call consistent Bayesian inference or push-forward based inference, is simple and only requires the computation of the push-forward probability measure induced by the combination of a prior probability measure and the deterministic model. We establish existence and uniqueness of observation-consistent posteriors and present both stability and error analyses. We also discuss the relationships between consistent Bayesian inference, classical/statistical Bayesian inference, and a recently developed measure-theoretic approach for inference. Finally, analytical and numerical results are presented to highlight certain properties of the consistent Bayesian approach and the differences between this approach and the two aforementioned alternatives for inference.
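
The push-forward based update admits a very short sampling illustration: accept prior samples with probability proportional to the ratio of the observed density to the push-forward density at the corresponding QoI values. A sketch with hypothetical names, not the authors' implementation:

import numpy as np
from scipy.stats import gaussian_kde

def consistent_posterior_samples(lam_prior, q_prior, observed_pdf, rng):
    # lam_prior: prior parameter samples; q_prior: their QoI values Q(lam).
    pf = gaussian_kde(q_prior)                  # push-forward of the prior
    r = observed_pdf(q_prior) / pf(q_prior)     # density ratio per sample
    # Rejection sampling: accept with probability r / max(r).
    accept = rng.uniform(size=len(r)) < r / r.max()
    return lam_prior[accept]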

Multilevel-multifidelity approaches for forward UQ in the DARPA SEQUOIA project

AIAA Non-Deterministic Approaches Conference, 2018

Eldred, Michael; Geraci, Gianluca; Gorodetsky, Alex; Jakeman, John D.

Within the SEQUOIA project, funded by the DARPA EQUiPS program, we pursue algorithmic approaches that enable comprehensive design under uncertainty, through inclusion of aleatory/parametric and epistemic/model-form uncertainties within scalable forward/inverse UQ approaches. These statistical methods are embedded within design processes that manage computational expense through active subspace, multilevel-multifidelity, and reduced-order modeling approximations. To demonstrate these methods, we focus on the design of devices that involve multi-physics interactions in advanced aerospace vehicles. A particular problem of interest is the shape design of nozzles for advanced vehicles such as the Northrop Grumman UCAS X-47B, involving coupled aero-structural-thermal simulations for nozzle performance. In this paper, we explore a combination of multilevel and multifidelity forward and inverse UQ algorithms to reduce the overall computational cost of the analysis by leveraging hierarchies of model form (i.e., multifidelity hierarchies) and solution discretization (i.e., multilevel hierarchies) in order to exploit trade-offs between solution accuracy and cost. In particular, we seek the most cost-effective fusion of information across complex multi-dimensional modeling hierarchies. Results to date indicate the utility of multiple approaches, including methods that optimally allocate resources when estimator variance varies smoothly across levels, methods that allocate sufficient sampling density based on sparsity estimates, and methods that employ greedy multilevel refinement.
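
One of the allocation ingredients mentioned above, for the case where estimator variance varies smoothly across levels, is the standard multilevel Monte Carlo sample allocation of Giles. A sketch (not the project's code): given per-level variances V_l and costs C_l, the variance-minimizing sample counts for a target variance eps^2 are N_l = ceil(eps^-2 * sqrt(V_l / C_l) * sum_k sqrt(V_k * C_k)).

import numpy as np

def mlmc_allocation(V, C, eps):
    # V, C: per-level variances and costs of the level-difference estimators.
    V, C = np.asarray(V, float), np.asarray(C, float)
    total = np.sum(np.sqrt(V * C))
    return np.ceil(np.sqrt(V / C) * total / eps**2).astype(int)

# Example: mlmc_allocation([1e-2, 1e-3, 1e-4], [1.0, 4.0, 16.0], eps=1e-2)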

Compressed sensing with sparse corruptions: Fault-tolerant sparse collocation approximations

SIAM-ASA Journal on Uncertainty Quantification

Adcock, Ben; Bao, Anyi; Jakeman, John D.; Narayan, Akil

The recovery of approximately sparse or compressible coefficients in a polynomial chaos expansion is a common goal in many modern parametric uncertainty quantification (UQ) problems. However, relatively little effort in UQ has been directed toward theoretical and computational strategies for addressing the sparse corruptions problem, where a small number of measurements are highly corrupted. Such a situation has become pertinent today since modern computational frameworks are sufficiently complex, with many interdependent components, that they may introduce hardware and software failures, some of which can be difficult to detect and result in a highly polluted simulation result. In this paper we present a novel compressive sampling-based theoretical analysis for a regularized ℓ1-minimization algorithm that aims to recover sparse expansion coefficients in the presence of measurement corruptions. Our recovery results are uniform (the theoretical guarantees hold for all compressible signals and compressible corruption vectors) and prescribe algorithmic regularization parameters in terms of a user-defined a priori estimate on the ratio of measurements that are believed to be corrupted. We also propose an iteratively reweighted optimization algorithm that automatically refines the value of the regularization parameter and empirically produces superior results. Our numerical results test our framework on several medium- to high-dimensional examples of solutions to parameterized differential equations and demonstrate the effectiveness of our approach.
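
A minimal sketch of the corruption-robust recovery problem described above, using cvxpy: split the misfit into a sparse corruption vector s and solve min ||c||_1 + lam*||s||_1 subject to V c + s = y, reweighting lam between solves. The reweighting rule here is illustrative, not the paper's prescription.

import cvxpy as cp
import numpy as np

def robust_l1(V, y, lam0=1.0, n_reweight=3):
    n = V.shape[1]
    lam = lam0
    for _ in range(n_reweight):
        c, s = cp.Variable(n), cp.Variable(len(y))
        # Sparse coefficients c plus sparse corruptions s explain the data.
        prob = cp.Problem(cp.Minimize(cp.norm1(c) + lam * cp.norm1(s)),
                          [V @ c + s == y])
        prob.solve()
        # Heuristically refine lam from the estimated corruption ratio.
        frac = np.mean(np.abs(s.value) > 1e-8)
        lam = 1.0 / np.sqrt(max(frac, 1e-3))
    return c.value, s.value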

Time and Frequency Domain Methods for Basis Selection in Random Linear Dynamical Systems

International Journal for Uncertainty Quantification

Jakeman, John D.; Pulch, Roland

Polynomial chaos methods have been extensively used to analyze systems in uncertainty quantification. Furthermore, several approaches exist to determine a low-dimensional (or sparse) approximation for some quantity of interest in a model, where just a few orthogonal basis polynomials are required. In this work, we consider linear dynamical systems consisting of ordinary differential equations with random variables. The aim of this paper is to further explore methods for producing low-dimensional approximations of the quantity of interest. We investigate two numerical techniques to compute a low-dimensional representation, both of which fit the approximation to a set of samples in the time domain. On the one hand, a frequency domain analysis of a stochastic Galerkin system yields the selection of the basis polynomials; this results in a linear least squares problem. On the other hand, a sparse minimization yields the choice of the basis polynomials using information from the time domain only. An orthogonal matching pursuit produces an approximate solution of the minimization problem. Finally, we compare the two approaches using a test example from a mechanical application.
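
The time-domain route can be sketched with scikit-learn's OrthogonalMatchingPursuit: run OMP on samples of the QoI against a basis matrix and keep the few selected polynomials. The data below are synthetic stand-ins, not the paper's mechanical test example.

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
# Phi: hypothetical matrix of basis polynomials evaluated at parameter
# samples; y: QoI samples at one time instant with a planted sparse model.
Phi = rng.normal(size=(200, 60))
coef_true = np.zeros(60)
coef_true[[3, 7, 19]] = [1.0, -0.5, 0.25]
y = Phi @ coef_true + 0.01 * rng.normal(size=200)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5).fit(Phi, y)
selected = np.flatnonzero(omp.coef_)   # indices of the chosen basis polynomials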
