Enabling efficient uncertainty quantification for seismic modeling via projection-based model reduction
Abstract not provided.
Journal of Computational Physics
This work proposes a windowed least-squares (WLS) approach for model reduction of dynamical systems. The proposed approach sequentially minimizes the time-continuous full-order-model residual within a low-dimensional space–time trial subspace over time windows. The approach generalizes existing model reduction approaches, as particular instances of the methodology recover Galerkin, least-squares Petrov–Galerkin (LSPG), and space–time LSPG projection. In addition, the approach addresses key deficiencies in existing model reduction techniques, e.g., the dependence of LSPG and space–time LSPG projection on the time discretization and the exponential growth in time exhibited by a posteriori error bounds for both Galerkin and LSPG projection. We consider two types of space–time trial subspaces within the proposed approach: one that reduces only the spatial dimension of the full-order model, and one that reduces both the spatial and temporal dimensions of the full-order model. For each type of trial subspace, we consider two different solution techniques: direct (i.e., discretize then optimize) and indirect (i.e., optimize then discretize). Numerical experiments conducted using trial subspaces characterized by spatial dimension reduction demonstrate that the WLS approach can yield more accurate solutions with lower space–time residuals than Galerkin and LSPG projection.
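The direct (discretize then optimize) variant with a spatial trial subspace admits a compact numerical sketch: over each window, the residuals of every time step are stacked into a single least-squares problem for the generalized coordinates. In the Python sketch below, the linear system, backward-Euler discretization, basis Phi, and window layout are illustrative assumptions rather than the paper's implementation.

import numpy as np

# Sketch of a direct (discretize-then-optimize) WLS step with a spatial trial
# subspace, for a linear ODE dx/dt = A x discretized by backward Euler.
def wls_window(A, Phi, x_prev, dt, steps):
    """Minimize the stacked backward-Euler residuals over one window, with every
    state in the window constrained to the trial subspace: x_s ~ Phi @ q_s."""
    n, k = Phi.shape
    M = np.eye(n) - dt * A               # step residual: M @ x_s - x_{s-1} = 0
    rows, rhs = [], []
    for s in range(steps):
        row = np.zeros((n, k * steps))
        row[:, s * k:(s + 1) * k] = M @ Phi
        if s == 0:
            rhs.append(x_prev)           # state entering the window
        else:
            row[:, (s - 1) * k:s * k] = -Phi
            rhs.append(np.zeros(n))
        rows.append(row)
    q, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
    return Phi @ q.reshape(steps, k)[-1]  # reconstructed state at the end of the window

# Sweeping over successive windows, feeding each window's final state into the next,
# realizes the sequential minimization; a one-step window recovers LSPG for this scheme.
A = np.array([[0.0, 1.0], [-1.0, -0.1]])
Phi, _ = np.linalg.qr(np.random.default_rng(0).standard_normal((2, 1)))
x = np.array([1.0, 0.0])
for _ in range(10):
    x = wls_window(A, Phi, x, dt=0.01, steps=5)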
Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences
This work proposes an extension of neural ordinary differential equations (NODEs) that introduces an additional set of ODE input parameters, allowing a single NODE to learn multiple dynamics, each specified by an input-parameter instance. The extension is inspired by parameterized ODEs, which are widely studied in computational science and engineering contexts where characteristics of the governing equations vary with the input parameters. We apply the proposed parameterized NODEs (PNODEs) to learn the latent dynamics of complex dynamical processes arising in computational physics, an essential capability for enabling rapid numerical simulations in time-critical physics applications. To this end, we propose an encoder-decoder-type framework that models the latent dynamics as PNODEs. We demonstrate the effectiveness of PNODEs on benchmark problems from computational physics.
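As a purely illustrative picture of the extension, the Python sketch below conditions a small latent velocity network on an ODE input parameter mu, so that a single set of weights represents a family of latent dynamics. The network sizes, random weights, and RK4 roll-out are assumptions made for the sketch; a practical implementation would train the network end-to-end with an automatic-differentiation library.

import numpy as np

# Sketch of a parameterized NODE (PNODE): the learned velocity field takes the latent
# state z and an ODE input parameter mu, so one network can represent multiple dynamics.
rng = np.random.default_rng(0)
latent_dim, param_dim, hidden = 4, 2, 32
W1 = 0.1 * rng.standard_normal((hidden, latent_dim + param_dim))
b1 = np.zeros(hidden)
W2 = 0.1 * rng.standard_normal((latent_dim, hidden))
b2 = np.zeros(latent_dim)

def velocity(z, mu):
    """f_theta(z, mu): a small MLP conditioned on the parameter instance mu."""
    h = np.tanh(W1 @ np.concatenate([z, mu]) + b1)
    return W2 @ h + b2

def integrate(z0, mu, dt, n_steps):
    """Classical RK4 roll-out of the latent dynamics for one parameter instance."""
    z, traj = z0.copy(), [z0.copy()]
    for _ in range(n_steps):
        k1 = velocity(z, mu)
        k2 = velocity(z + 0.5 * dt * k1, mu)
        k3 = velocity(z + 0.5 * dt * k2, mu)
        k4 = velocity(z + dt * k3, mu)
        z = z + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(z.copy())
    return np.stack(traj)

# Two parameter instances share the same weights but yield different latent trajectories.
z0 = rng.standard_normal(latent_dim)
traj_a = integrate(z0, np.array([0.5, 1.0]), dt=0.01, n_steps=100)
traj_b = integrate(z0, np.array([2.0, 0.1]), dt=0.01, n_steps=100)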
Projection-based reduced-order models (ROMs) comprise a promising set of data-driven approaches for accelerating high-fidelity numerical simulations. Standard projection-based ROM approaches, however, suffer from several drawbacks when applied to the complex nonlinear dynamical systems commonly encountered in science and engineering, including a lack of stability, accuracy, and sharp a posteriori error estimators. This work addresses these limitations by leveraging multiscale modeling, least-squares principles, and machine learning to develop novel reduced-order modeling approaches, along with data-driven a posteriori error estimators, for dynamical systems. Theoretical and numerical results demonstrate that the two ROM approaches developed in this work, namely the windowed least-squares method and the Adjoint Petrov–Galerkin method, yield substantial improvements over state-of-the-art approaches. Additionally, numerical results demonstrate the capability of the a posteriori error models developed in this work.
Computer Methods in Applied Mechanics and Engineering
This work proposes a machine-learning framework for modeling the error incurred by approximate solutions to parameterized dynamical systems. In particular, we extend the machine-learning error models (MLEM) framework proposed by Freno and Carlberg (2019) to dynamical systems. The proposed Time-Series Machine-Learning Error Modeling (T-MLEM) method constructs a regression model that maps features, which comprise error indicators derived from standard a posteriori error-quantification techniques, to a random variable for the approximate-solution error at each time instance. The proposed framework considers a wide range of candidate features, regression methods, and additive noise models. We primarily consider recursive regression techniques developed for time-series modeling, including both classical time-series models (e.g., autoregressive models) and recurrent neural networks (RNNs), but also analyze standard non-recursive regression techniques (e.g., feed-forward neural networks) for comparative purposes. Numerical experiments conducted on multiple benchmark problems illustrate that the long short-term memory (LSTM) neural network, a type of RNN, outperforms the other methods and yields substantial improvements in error prediction over traditional approaches.
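A minimal sketch of the recursive-regression setup, assuming PyTorch and placeholder data, is given below: an LSTM maps a time series of error-indicator features to a predicted error at each time instance. The feature count, layer sizes, and synthetic training data are hypothetical, and the additive noise model is omitted so the network predicts only the mean error.

import torch
import torch.nn as nn

# Sketch of an LSTM-based error model: error-indicator features in, predicted error out.
class LSTMErrorModel(nn.Module):
    def __init__(self, n_features, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)     # predicts the mean error per time instance

    def forward(self, features):                   # features: (batch, time, n_features)
        hidden_states, _ = self.lstm(features)
        return self.head(hidden_states).squeeze(-1)  # (batch, time) predicted errors

# Fit on (feature, error) time series gathered from training parameter instances;
# the tensors below are random placeholders standing in for such data.
model = LSTMErrorModel(n_features=3)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
features = torch.randn(8, 50, 3)
true_error = torch.randn(8, 50)
for _ in range(200):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(features), true_error)
    loss.backward()
    optimizer.step()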
Computer Methods in Applied Mechanics and Engineering
We formulate a new projection-based reduced-order modeling technique for non-linear dynamical systems. The proposed technique, which we refer to as the Adjoint Petrov–Galerkin (APG) method, is derived by decomposing the generalized coordinates of a dynamical system into a resolved coarse-scale set and an unresolved fine-scale set. A Markovian finite memory assumption within the Mori–Zwanzig formalism is then used to develop a reduced-order representation of the coarse scales. This procedure leads to a closed reduced-order model that displays commonalities with the adjoint stabilization method used in finite elements. The formulation is shown to be equivalent to a Petrov–Galerkin method with a non-linear, time-varying test basis, thus sharing some similarities with the Least-Squares Petrov–Galerkin method. Theoretical analysis examining a priori error bounds and computational cost is presented. Numerical experiments on the compressible Navier–Stokes equations demonstrate that the proposed method can lead to improvements in numerical accuracy, robustness, and computational efficiency over the Galerkin method on problems of practical interest. Improvements in numerical accuracy and computational efficiency over the Least-Squares Petrov–Galerkin method are observed in most cases.
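To make the Petrov–Galerkin structure concrete, the Python sketch below evaluates a ROM right-hand side with a state-dependent test basis formed from the trial basis plus a fine-scale correction weighted by a memory parameter tau. This particular test basis is an illustrative reading of such a closure, stated here as an assumption rather than the paper's exact formula.

import numpy as np

# Sketch of a Petrov-Galerkin ROM right-hand side with a non-linear, time-varying test
# basis; the specific correction term below is an assumed form, not the paper's formula.
def apg_like_rhs(q, f, jac, Phi, tau):
    """Evaluate dq/dt = Psi(q)^T f(Phi q) with a state-dependent test basis Psi."""
    x = Phi @ q
    fine_scale = np.eye(Phi.shape[0]) - Phi @ Phi.T   # projector onto unresolved scales
    Psi = Phi + tau * fine_scale @ jac(x) @ Phi        # hypothetical time-varying test basis
    return Psi.T @ f(x)   # Psi^T Phi = I when Phi has orthonormal columns, so no mass matrix

# Linear example dx/dt = A x (constant Jacobian); tau = 0 recovers Galerkin projection.
A = np.array([[-1.0, 2.0], [0.0, -3.0]])
Phi, _ = np.linalg.qr(np.array([[1.0], [1.0]]))
dqdt = apg_like_rhs(np.array([0.5]), lambda x: A @ x, lambda x: A, Phi, tau=0.05)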