Publications

Results 26–35 of 35

Multiscale modeling, high-order methods, and data-driven modeling

Parish, Eric J.

Projection-based reduced-order models (ROMs) comprise a promising set of data-driven approaches for accelerating high-fidelity numerical simulations. Standard projection-based ROM approaches, however, suffer from several drawbacks when applied to the complex nonlinear dynamical systems commonly encountered in science and engineering, including a lack of stability, accuracy, and sharp a posteriori error estimators. This work addresses these limitations by leveraging multiscale modeling, least-squares principles, and machine learning to develop novel reduced-order modeling approaches, along with data-driven a posteriori error estimators, for dynamical systems. Theoretical and numerical results demonstrate that the two ROM approaches developed in this work, namely the windowed least-squares method and the Adjoint Petrov–Galerkin method, yield substantial improvements over state-of-the-art approaches. Additionally, numerical results demonstrate the capability of the a posteriori error models developed in this work.
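The projection-based ROM idea underlying this work can be illustrated with a minimal sketch: build a trial basis from solution snapshots via a singular value decomposition (POD) and project the dynamics onto that basis. The system, operator, and basis rank below are all hypothetical toy choices, not the methods developed in the abstract above.

```python
import numpy as np

# Hypothetical full-order linear system dx/dt = A x (toy stand-in for a
# high-fidelity model; the abstract's methods target nonlinear systems).
rng = np.random.default_rng(0)
n = 50                                              # full-order dimension
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))  # stable-ish operator
x0 = rng.standard_normal(n)

# Collect snapshots with forward Euler, then build a POD trial basis via SVD.
dt, n_steps = 1e-3, 200
x = x0.copy()
snapshots = [x0.copy()]
for _ in range(n_steps):
    x = x + dt * (A @ x)
    snapshots.append(x.copy())
S = np.column_stack(snapshots)
U, _, _ = np.linalg.svd(S, full_matrices=False)
V = U[:, :5]                                        # rank-5 trial basis

# Galerkin ROM: evolve reduced coordinates q with dq/dt = (V^T A V) q.
Ar = V.T @ A @ V
q = V.T @ x0
for _ in range(n_steps):
    q = q + dt * (Ar @ q)

x_rom = V @ q                                       # reconstructed state
err = np.linalg.norm(x - x_rom) / np.linalg.norm(x)
print(err)
```

For this linear, decaying toy problem the rank-5 basis captures the trajectory well; the drawbacks the abstract addresses (stability, accuracy, error estimation) arise in the nonlinear setting.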


The Adjoint Petrov–Galerkin method for non-linear model reduction

Computer Methods in Applied Mechanics and Engineering

Parish, Eric J.; Wentland, Christopher R.; Duraisamy, Karthik

We formulate a new projection-based reduced-order modeling technique for non-linear dynamical systems. The proposed technique, which we refer to as the Adjoint Petrov–Galerkin (APG) method, is derived by decomposing the generalized coordinates of a dynamical system into a resolved coarse-scale set and an unresolved fine-scale set. A Markovian finite memory assumption within the Mori–Zwanzig formalism is then used to develop a reduced-order representation of the coarse scales. This procedure leads to a closed reduced-order model that displays commonalities with the adjoint stabilization method used in finite elements. The formulation is shown to be equivalent to a Petrov–Galerkin method with a non-linear, time-varying test basis, thus sharing some similarities with the Least-Squares Petrov–Galerkin method. Theoretical analysis examining a priori error bounds and computational cost is presented. Numerical experiments on the compressible Navier–Stokes equations demonstrate that the proposed method can lead to improvements in numerical accuracy, robustness, and computational efficiency over the Galerkin method on problems of practical interest. Improvements in numerical accuracy and computational efficiency over the Least-Squares Petrov–Galerkin method are observed in most cases.
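The distinction between Galerkin and Petrov–Galerkin reduction can be made concrete with a small sketch: Galerkin tests the dynamics against the trial basis itself, while Petrov–Galerkin uses a distinct test basis. The residual-informed test basis below (W = V + τAV) is only an illustrative, adjoint-stabilization-flavored choice, not the APG construction derived in the paper.

```python
import numpy as np

# Toy linear system dx/dt = A x with a random orthonormal trial basis V.
rng = np.random.default_rng(1)
n, k = 30, 4
A = -np.eye(n) + 0.05 * rng.standard_normal((n, n))
V, _ = np.linalg.qr(rng.standard_normal((n, k)))    # trial basis

# Galerkin: test basis equals trial basis (W = V).
Ar_galerkin = V.T @ A @ V

# Petrov-Galerkin: a distinct test basis W. Here, as a hypothetical
# illustration, W = V + tau * A @ V with a free parameter tau, echoing the
# stabilized test functions mentioned in the abstract.
tau = 0.01
W = V + tau * (A @ V)
Ar_pg = np.linalg.solve(W.T @ V, W.T @ A @ V)       # reduced operator

print(Ar_pg.shape)
```

In the APG method the test basis is additionally nonlinear and time-varying; this sketch only shows the structural difference from Galerkin projection.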


Time-series machine-learning error models for approximate solutions to parameterized dynamical systems

Computer Methods in Applied Mechanics and Engineering

Parish, Eric J.; Carlberg, Kevin T.

This work proposes a machine-learning framework for modeling the error incurred by approximate solutions to parameterized dynamical systems. In particular, we extend the machine-learning error models (MLEM) framework proposed by Freno and Carlberg (2019) to dynamical systems. The proposed Time-Series Machine-Learning Error Modeling (T-MLEM) method constructs a regression model that maps features – which comprise error indicators derived from standard a posteriori error-quantification techniques – to a random variable for the approximate-solution error at each time instance. The proposed framework considers a wide range of candidate features, regression methods, and additive noise models. We consider primarily recursive regression techniques developed for time-series modeling, including both classical time-series models (e.g., autoregressive models) and recurrent neural networks (RNNs), but also analyze standard non-recursive regression techniques (e.g., feed-forward neural networks) for comparative purposes. Numerical experiments conducted on multiple benchmark problems illustrate that the long short-term memory (LSTM) neural network, which is a type of RNN, outperforms other methods and yields substantial improvements in error predictions over traditional approaches.
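The core regression idea can be sketched in a few lines: learn a map from an a posteriori error indicator to the true approximate-solution error. The synthetic data and the linear least-squares fit below are toy stand-ins for the paper's time-series features and recursive regressors such as LSTMs.

```python
import numpy as np

# Hypothetical training data: a residual-norm error indicator (feature) and
# the corresponding true error, here generated synthetically with noise.
rng = np.random.default_rng(2)
n_samples = 200
residual_norm = rng.uniform(0.0, 1.0, n_samples)
true_error = 2.0 * residual_norm + 0.05 * rng.standard_normal(n_samples)

# Non-recursive baseline: fit error ~ a * residual_norm + b by least squares.
X = np.column_stack([residual_norm, np.ones(n_samples)])
coef, *_ = np.linalg.lstsq(X, true_error, rcond=None)
a, b = coef
predicted_error = X @ coef
print(a, b)
```

The paper's recursive models additionally condition on past features and predictions, which this one-shot linear fit deliberately omits.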
