Multi-model Monte Carlo methods have been shown to be an efficient and accurate alternative to standard Monte Carlo (MC) for the model-based propagation of uncertainty in entry, descent, and landing (EDL) applications. These multi-model MC methods fuse predictions from low-fidelity models with the high-fidelity EDL model of interest to produce unbiased statistics at a fraction of the computational cost. The accuracy and efficiency of multi-model MC methods depend not only on how strongly each low-fidelity model correlates with the high-fidelity model, but also on the correlations among the low-fidelity models and their relative computational costs. Because of this layer of complexity, the question of how to optimally select the set of low-fidelity models has remained open. In this work, methods for optimal model construction and tuning are investigated as a means to increase the speed and precision of trajectory simulation for EDL. Specifically, the focus is on the inclusion of low-fidelity model tuning within the sample allocation optimization that accompanies multi-model MC methods. Results indicate that low-fidelity model tuning can significantly improve the efficiency and precision of trajectory simulations and widen the advantage of multi-model MC methods over standard MC.
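The control-variate mechanism underlying such multi-model MC estimators can be illustrated with a minimal sketch. The two analytic models, sample sizes, and input distribution below are hypothetical placeholders rather than the EDL models from the study; the sketch only shows how a correlated low-fidelity model reduces estimator variance while preserving unbiasedness.

```python
# Minimal sketch of a two-model control-variate Monte Carlo estimator.
# The model functions and sample sizes are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

def f_hi(z):                       # stand-in "high-fidelity" model
    return np.sin(np.pi * z) + 0.1 * z**3

def f_lo(z):                       # cheap, correlated "low-fidelity" model
    return np.sin(np.pi * z)

N, M = 100, 10_000                 # high- and low-fidelity sample counts
z_hi = rng.standard_normal(N)      # shared samples evaluated by both models
z_lo = rng.standard_normal(M)      # extra samples evaluated only by f_lo

y_hi = f_hi(z_hi)
y_lo_shared = f_lo(z_hi)
y_lo_extra = f_lo(z_lo)

# Optimal control-variate weight from the sample covariance on shared points.
alpha = np.cov(y_hi, y_lo_shared)[0, 1] / np.var(y_lo_shared, ddof=1)

# Control-variate estimator: unbiased, and lower variance than plain MC when
# the low-fidelity model is well correlated with the high-fidelity model.
mu_mc = y_hi.mean()
mu_cv = mu_mc + alpha * (y_lo_extra.mean() - y_lo_shared.mean())
print(f"standard MC: {mu_mc:.4f}, control-variate MC: {mu_cv:.4f}")
```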
This paper describes an efficient reverse-mode differentiation algorithm for contraction operations over arbitrary and unconventional tensor network topologies. The approach leverages the tensor contraction tree of Evenbly and Pfeifer (2014), which provides an instruction set for the contraction sequence of a network. We show that this tree can be efficiently leveraged to differentiate a full tensor network contraction using a recursive scheme that exploits (1) the bilinear property of contraction and (2) the property that trees have a single path from root to leaves. While differentiation of tensor-tensor contraction is already possible in most automatic differentiation packages, we show that exploiting these two additional properties in the specific context of contraction sequences can improve efficiency. Following a description of the algorithm and a computational complexity analysis, we investigate its utility for gradient-based supervised learning for low-rank function recovery and for fitting real-world unstructured datasets. We demonstrate improved performance over alternating least-squares optimization approaches and the capability to handle heterogeneous and arbitrary tensor network formats. Compared to alternating minimization algorithms, we find that the gradient-based approach requires a smaller oversampling ratio (number of samples relative to the number of model parameters) for recovery. This increased efficiency extends to fitting unstructured data of varying dimensionality and when employing a variety of tensor network formats. In particular, we show improved learning using the hierarchical Tucker format over the tensor-train format in high-dimensional settings on a number of benchmark problems.
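As a minimal illustration of the bilinear property exploited by the recursive scheme, the adjoint of a single tensor-tensor contraction is itself a contraction of the upstream gradient with the other operand. The shapes and the sum-of-entries loss below are illustrative and not taken from the paper's benchmarks.

```python
# Minimal sketch of the reverse-mode rule for a single tensor-tensor
# contraction: because contraction is bilinear, the adjoint of one operand
# is the contraction of the upstream gradient with the other operand.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 5))
B = rng.standard_normal((5, 6))

C = np.einsum('ij,jk->ik', A, B)          # forward contraction
G = np.ones_like(C)                       # upstream gradient of L = sum(C)

# Reverse-mode adjoints obtained by contracting G with the "other" tensor.
dA = np.einsum('ik,jk->ij', G, B)         # dL/dA = G B^T
dB = np.einsum('ij,ik->jk', A, G)         # dL/dB = A^T G

# Finite-difference check of one entry of dA.
eps = 1e-6
A_pert = A.copy(); A_pert[0, 0] += eps
fd = (np.einsum('ij,jk->ik', A_pert, B).sum() - C.sum()) / eps
print(np.isclose(dA[0, 0], fd, atol=1e-4))  # True
```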
Wang, Qian; Guillaume, Joseph; Jakeman, John D.; Yang, Tao; Iwanaga, Takuya; Croke, Barry; Jakeman, Tony
Despite the widespread use of factor fixing in environmental modeling, its effect on model predictions has received little attention and is commonly presumed to be negligible. We propose a proof-of-concept adaptive method for systematically investigating the impact of factor fixing. The method uses Global Sensitivity Analysis to identify groups of sensitive parameters and then quantifies which groups can be safely fixed at nominal values without exceeding a maximum acceptable error, demonstrated using the 21-dimensional Sobol’ G-function. Three error measures are considered for the quantities of interest, namely Relative Mean Absolute Error, Pearson Product-Moment Correlation, and Relative Variance. Results demonstrate that factor fixing can cause unexpectedly large errors in model results even when preliminary analysis suggests otherwise, and that the default value at which factors are fixed affects how many of them can be fixed safely. To improve the applicability and methodological development of factor fixing, we discuss a new research agenda encompassing five opportunities that warrant further attention.
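A minimal sketch of this kind of factor-fixing experiment is given below, using the Sobol' G-function and a relative mean absolute error measure. The coefficients, nominal value, and the choice of which factors to fix are illustrative assumptions and do not reproduce the paper's adaptive grouping procedure.

```python
# Minimal sketch of quantifying the impact of factor fixing on the Sobol'
# G-function with a relative mean absolute error (RMAE) measure.
import numpy as np

rng = np.random.default_rng(2)
d = 21
a = np.arange(d, dtype=float)              # large a_i -> factor i unimportant

def g_function(x, a):
    return np.prod((np.abs(4 * x - 2) + a) / (1 + a), axis=1)

n = 100_000
x = rng.uniform(size=(n, d))
y_full = g_function(x, a)

# "Fix" the least influential half of the factors at a nominal value of 0.5
# (for the G-function, importance decreases as a_i grows).
fixed = np.argsort(a)[d // 2:]
x_fixed = x.copy()
x_fixed[:, fixed] = 0.5
y_fixed = g_function(x_fixed, a)

rmae = np.mean(np.abs(y_full - y_fixed)) / np.mean(np.abs(y_full))
print(f"RMAE from fixing {len(fixed)} factors: {rmae:.3f}")
```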
Constructing accurate statistical models of critical system responses typically requires an enormous amount of data from physical experiments or numerical simulations. Unfortunately, data generation is often expensive and time consuming. To streamline the data generation process, optimal experimental design determines the 'best' allocation of experiments with respect to a criterion that measures the ability to estimate some important aspect of an assumed statistical model. While optimal design has a vast literature, few researchers have developed design paradigms targeting tail statistics, such as quantiles. In this project, we tailored and extended traditional design paradigms to target distribution tails. Our approach included (i) the development of new optimality criteria to shape the distribution of prediction variances, (ii) the development of novel risk-adapted surrogate models that provably overestimate certain statistics including the probability of exceeding a threshold, and (iii) the asymptotic analysis of regression approaches that target tail statistics such as superquantile regression. To accompany our theoretical contributions, we released implementations of our methods for surrogate modeling and design of experiments in two complementary open source software packages, the ROL/OED Toolkit and PyApprox.
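For concreteness, the sketch below evaluates one such tail statistic, the superquantile (conditional value-at-risk), via the standard Rockafellar-Uryasev formula. The sample distribution is a hypothetical stand-in, and the code is not part of the ROL/OED Toolkit or PyApprox.

```python
# Minimal sketch of the superquantile (CVaR), a tail statistic of the kind
# targeted by the design paradigms described above.
import numpy as np

def superquantile(samples, beta):
    """Empirical superquantile at level beta via the Rockafellar-Uryasev
    formula: q_beta + E[(Y - q_beta)_+] / (1 - beta), i.e. roughly the mean
    of the worst (1 - beta) fraction of outcomes."""
    q = np.quantile(samples, beta)
    return q + np.mean(np.maximum(samples - q, 0.0)) / (1.0 - beta)

rng = np.random.default_rng(3)
y = rng.lognormal(mean=0.0, sigma=0.5, size=10_000)     # stand-in response
print(f"quantile      q_0.95  = {np.quantile(y, 0.95):.3f}")
print(f"superquantile sq_0.95 = {superquantile(y, 0.95):.3f}")  # >= quantile
```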
We present an approach for constructing a surrogate from ensembles of information sources of varying cost and accuracy. The multifidelity surrogate encodes connections between information sources as a directed acyclic graph and is trained via gradient-based minimization of a nonlinear least-squares objective. While the vast majority of state-of-the-art approaches assume hierarchical connections between information sources, our approach works with flexibly structured information sources that may not admit a strict hierarchy. The formulation has two advantages: (1) increased data efficiency due to parsimonious multifidelity networks that can be tailored to the application; and (2) no constraints on the training data, so we can combine noisy, non-nested evaluations of the information sources. Finally, numerical examples ranging from synthetic problems to physics-based computational mechanics simulations indicate that the error of our approach can be orders of magnitude smaller, particularly in the low-data regime, than that of single-fidelity and hierarchical multifidelity approaches.
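A minimal two-node instance of this idea, fusing a cheap, plentiful information source with a scarce, expensive one through a scaled surrogate plus a discrepancy term and training both jointly by nonlinear least squares, is sketched below. The models, basis, and data are illustrative assumptions; the paper's framework handles general directed acyclic graphs of sources.

```python
# Minimal sketch of a two-node multifidelity network trained by nonlinear
# least squares: a low-fidelity surrogate feeds a scaled-plus-discrepancy
# high-fidelity node.  All models and data here are illustrative.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)

f_lo = lambda x: np.sin(2 * np.pi * x)                  # cheap source
f_hi = lambda x: np.sin(2 * np.pi * x) + 0.3 * x**2     # expensive source

x_lo = rng.uniform(size=200)
y_lo = f_lo(x_lo) + 0.01 * rng.standard_normal(200)     # plentiful, noisy
x_hi = rng.uniform(size=15)
y_hi = f_hi(x_hi) + 0.01 * rng.standard_normal(15)      # scarce, noisy

def basis(x, degree=8):                                 # monomial features
    return np.vander(x, degree + 1, increasing=True)

n = basis(x_lo).shape[1]

def residuals(theta):
    c_lo, rho, c_d = theta[:n], theta[n], theta[n + 1:]
    r_lo = basis(x_lo) @ c_lo - y_lo                    # low-fidelity node
    # High-fidelity node: scaled low-fidelity surrogate plus discrepancy.
    r_hi = rho * (basis(x_hi) @ c_lo) + basis(x_hi) @ c_d - y_hi
    return np.concatenate([r_lo, r_hi])

sol = least_squares(residuals, x0=np.zeros(2 * n + 1))

# Evaluate the trained high-fidelity node on a test grid.
x_test = np.linspace(0, 1, 101)
c_lo, rho, c_d = sol.x[:n], sol.x[n], sol.x[n + 1:]
y_pred = rho * (basis(x_test) @ c_lo) + basis(x_test) @ c_d
print("max abs error on test grid:", np.max(np.abs(y_pred - f_hi(x_test))))
```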
We present a surrogate modeling framework for conservatively estimating measures of risk from limited realizations of an expensive physical experiment or computational simulation. We adopt a probabilistic description of risk that assigns probabilities to the consequences associated with an event and use risk measures, which combine objective evidence with the subjective values of decision makers, to quantify anticipated outcomes. Given a set of samples, we construct a surrogate model whose estimates of risk measures are always greater than the empirical estimates obtained from the training data. These surrogate models not only limit over-confidence in reliability and safety assessments but also produce estimates of risk measures that converge much faster to the true values than purely sample-based estimates. We first detail the construction of conservative surrogate models that can be tailored to the specific risk preferences of the stakeholder and then present an approach, based upon stochastic orders, for constructing surrogate models that are conservative with respect to families of risk measures. The surrogate models introduce a bias that allows them to conservatively estimate the target risk measures. We provide theoretical results showing that this bias decays at the same rate as the L2 error of the surrogate model. Our numerical examples confirm that the risk-aware surrogate models do indeed over-estimate the target risk measures while converging at the expected rate.
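The following sketch conveys the conservative-surrogate idea in its crudest form: a standard least-squares fit is biased upward until it dominates the training observations pointwise, so any monotone risk measure evaluated on the surrogate over-estimates its empirical counterpart. The constant-shift construction and the quantile risk measure are illustrative stand-ins for the paper's tailored constructions.

```python
# Minimal sketch of a conservatively biased surrogate: shift an ordinary fit
# upward so its risk-measure estimates never fall below the empirical ones
# on the training data.  This constant shift is a crude illustrative device.
import numpy as np

rng = np.random.default_rng(5)
f = lambda x: np.exp(x) * np.sin(3 * x)          # stand-in expensive model

x = rng.uniform(-1, 1, size=30)
y = f(x)

coeffs = np.polyfit(x, y, deg=3)                 # ordinary surrogate
y_fit = np.polyval(coeffs, x)

# Shift the surrogate up by its largest under-prediction on the training
# data, so the biased surrogate dominates the observations pointwise.
shift = np.max(y - y_fit)
surrogate = lambda z: np.polyval(coeffs, z) + shift

# A monotone risk measure (here a high quantile) of the biased surrogate on
# the training inputs now over-estimates the empirical value.
print(np.quantile(surrogate(x), 0.9) >= np.quantile(y, 0.9))   # True
```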
We present an adaptive algorithm for constructing surrogate models of integrated systems composed of a set of coupled components. To this end, we introduce ‘coupling’ variables with a priori unknown distributions that allow approximations of each component to be built independently. Once built, the component surrogates are combined and used to predict system-level quantities of interest (QoI) at a fraction of the cost of interrogating the full system model. We use a greedy experimental design procedure, based upon a modification of Multi-Index Stochastic Collocation (MISC), to minimize the error of the combined surrogate. This is achieved by refining each component surrogate in accordance with its relative contribution to the error in the approximation of the system-level QoI. Our adaptation of MISC is a multi-fidelity procedure that can leverage ensembles of models of varying cost and accuracy, for one or more components, to produce estimates of system-level QoI. Several numerical examples demonstrate the efficacy of the proposed approach on systems involving feed-forward and feedback coupling. For a fixed computational budget, the proposed algorithm produces approximations that are orders of magnitude more accurate than those that treat the integrated system as a black box.
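The role of the coupling variables can be illustrated with a minimal feed-forward example: each component surrogate is built independently over its own inputs plus an assumed range for the coupling variable, and the surrogates are then composed to predict the system-level QoI. The components, surrogate forms, and coupling range below are illustrative and do not reproduce the adaptive MISC procedure.

```python
# Minimal sketch of coupling variables for a two-component feed-forward
# system: build each component surrogate independently, then compose them.
import numpy as np

rng = np.random.default_rng(6)

comp1 = lambda z: np.cos(np.pi * z)              # output feeds component 2
comp2 = lambda z, c: z**2 + np.exp(0.5 * c)      # c is the coupling variable

# Component 1 surrogate: polynomial fit over its own input.
z1 = rng.uniform(-1, 1, 50)
p1 = np.polyfit(z1, comp1(z1), deg=6)

# Component 2 surrogate: fit over its input and an assumed a priori range
# [-1, 1] for the coupling variable (a linear-in-features fit, purely for
# illustration).
z2 = rng.uniform(-1, 1, (200, 2))                # columns: (z, coupling c)
A = np.column_stack([z2[:, 0]**2, np.exp(0.5 * z2[:, 1]), np.ones(200)])
w, *_ = np.linalg.lstsq(A, comp2(z2[:, 0], z2[:, 1]), rcond=None)

def system_surrogate(z):
    c = np.polyval(p1, z)                        # surrogate of component 1
    feats = np.column_stack([z**2, np.exp(0.5 * c), np.ones(len(z))])
    return feats @ w                             # surrogate of component 2

z_test = rng.uniform(-1, 1, 5)
print(np.round(system_surrogate(z_test), 3))             # composed surrogate
print(np.round(comp2(z_test, comp1(z_test)), 3))         # true system response
```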
We present a numerical framework for recovering unknown nonautonomous dynamical systems with time-dependent inputs. To circumvent the difficulty posed by the nonautonomous nature of the system, our method decomposes the solution into piecewise integrations of the system over a discrete set of time instances. The time-dependent inputs are then locally parameterized, using a suitable model such as polynomial regression, over the pieces determined by the time instances. This transforms the original system into a piecewise parametric system that is locally time invariant. We then design a deep neural network structure to learn the local models. Once the network model is constructed, it can be applied iteratively over time to produce global system predictions. We provide theoretical analysis of our algorithm and present a number of numerical examples demonstrating the effectiveness of the method.
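A minimal sketch of the local-parameterization step is given below: over each time step the input is represented by a few local parameters (here its endpoint values), and a regression model learns the one-step flow map. A linear least-squares fit stands in for the paper's deep neural network, and the scalar ODE is an illustrative assumption.

```python
# Minimal sketch of local input parameterization for learning a
# nonautonomous system: learn (x_n, local input params) -> x_{n+1} and
# roll the learned one-step map out in time.
import numpy as np

dt = 0.1

def true_step(x0, u0, u1, nsub=20):
    """Integrate dx/dt = -x + u(t) over one step, with u linear in time."""
    x, h = x0, dt / nsub
    for k in range(nsub):
        s = (k + 0.5) / nsub
        u_mid = (1 - s) * u0 + s * u1            # midpoint input value
        x = x + h * (-x + u_mid)                 # explicit Euler substeps
    return x

rng = np.random.default_rng(7)
n = 2000
X = np.column_stack([rng.uniform(-2, 2, n),      # x_n
                     rng.uniform(-1, 1, n),      # u(t_n)
                     rng.uniform(-1, 1, n)])     # u(t_n + dt)
Y = np.array([true_step(*row) for row in X])

# Fit the local flow map (linear here, because the ODE is linear; a deep
# network would be used for nonlinear systems).
A = np.column_stack([X, np.ones(n)])
w, *_ = np.linalg.lstsq(A, Y, rcond=None)

# Roll the learned one-step model out in time for a new input signal.
u = lambda t: np.sin(t)
x, t = 1.0, 0.0
for _ in range(50):
    x = np.array([x, u(t), u(t + dt), 1.0]) @ w
    t += dt

# Reference: exact solution of dx/dt = -x + sin(t), x(0) = 1, at t = 5.
ref = 1.5 * np.exp(-5) + 0.5 * (np.sin(5) - np.cos(5))
print(f"predicted x(5.0) = {x:.4f}, reference = {ref:.4f}")
```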
Sensitivity analysis (SA) is en route to becoming an integral part of mathematical modeling. The tremendous potential benefits of SA are, however, yet to be fully realized, both for advancing mechanistic and data-driven modeling of human and natural systems, and in support of decision making. In this perspective paper, a multidisciplinary group of researchers and practitioners revisit the current status of SA, and outline research challenges in regard to both theoretical frameworks and their applications to solve real-world problems. Six areas are discussed that warrant further attention, including (1) structuring and standardizing SA as a discipline, (2) realizing the untapped potential of SA for systems modeling, (3) addressing the computational burden of SA, (4) progressing SA in the context of machine learning, (5) clarifying the relationship and role of SA to uncertainty quantification, and (6) evolving the use of SA in support of decision making. An outlook for the future of SA is provided that underlines how SA must underpin a wide variety of activities to better serve science and society.
Gaussian processes and other kernel-based methods are used extensively to construct approximations of multivariate data sets. The accuracy of these approximations depends on the data used. This paper presents a computationally efficient algorithm that greedily selects training samples to minimize the weighted Lp error of kernel-based approximations for a given sample budget. The method successively generates nested samples, with the goal of minimizing the error in high-probability regions of user-specified densities. The algorithm is extremely simple and can be implemented using existing pivoted Cholesky factorization routines. Training samples are generated in batches, which allows the training data to be evaluated (labeled) in parallel. For smooth kernels, the algorithm performs comparably with the greedy integrated-variance design but has significantly lower complexity. Numerical experiments demonstrate the efficacy of the approach for bounded, unbounded, multi-modal, and non-tensor-product densities. We also show how to use the proposed algorithm to efficiently generate surrogates for inferring unknown model parameters from data using Bayesian inference.
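A minimal sketch of weighted greedy pivoted-Cholesky selection is given below: at each step the candidate point with the largest weighted conditional variance is selected, and the kernel diagonal is downdated. The kernel, weight function, and candidate set are illustrative assumptions, and the code does not reproduce the paper's batched, nested construction.

```python
# Minimal sketch of greedy training-sample selection via weighted pivoted
# Cholesky on a kernel matrix of candidate points.
import numpy as np

rng = np.random.default_rng(8)
cand = rng.standard_normal(500)                       # candidate inputs
w = np.exp(-0.5 * cand**2)                            # weights ~ target density

def rbf_kernel(x, y, ell=0.5):
    return np.exp(-0.5 * (x[:, None] - y[None, :])**2 / ell**2)

K = rbf_kernel(cand, cand)

def greedy_pivoted_cholesky(K, w, nsamples):
    n = K.shape[0]
    d = np.diag(K).copy()                             # conditional variances
    L = np.zeros((n, nsamples))
    chosen = []
    for k in range(nsamples):
        j = int(np.argmax(w * d))                     # weighted greedy pivot
        chosen.append(j)
        L[:, k] = (K[:, j] - L[:, :k] @ L[j, :k]) / np.sqrt(d[j])
        d = np.maximum(d - L[:, k]**2, 0.0)           # Schur-complement update
        d[j] = 0.0                                    # never re-select a point
    return chosen

picked = greedy_pivoted_cholesky(K, w, nsamples=20)
print("selected training inputs:", np.round(np.sort(cand[picked]), 2))
```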